5 years of Test Driven Development, Visualized

Here’s a 15 minute video covering nearly 5 years of active Mir development in C++.

We (the original Mir team) started the project by reading “Growing Object-Oriented Software, Guided by Tests” by Steve Freeman and Nat Pryce (book site). We followed the philosophy of test-driven development closely after that, really throughout the whole project.

This video was generated by the program ‘gource’. Every file or directory is a node, and little avatars, one for every contributor, zoom around the files and zap them when a change is made. If you watch closely, some contributors will fixate on a few files for a bit (bug hunting, maybe?) and sometimes the avatars zap a lot of files in close succession, during feature additions and refactoring.

A few things to point out about TDD in the visualization. At the end, the 3 biggest branches are headers, source code, and tests. The relative size of those components remained roughly the same throughout the whole project, which shows that we were adding a test and then fleshing out the source code (and supporting headers) from there. When an avatar zaps a lot of files at once, there’s always a corresponding zapping of the tests tree and the source tree. Write the test first! Very cool to see how things come together with TDD on a large, sustained, and high-quality code base.


Posted in Coding | Leave a comment

Mir 0.24 Release

Mir 0.24 was just released this week!

We’ve reworked a few things internally and fixed a fair number of bugs. Notably, our buffer swapping system and our input keymapping system were reworked (Alt-Gr should now work for international keyboards). There were also some improvements made to the server API to make window management better.

I’m most excited about the internal buffer swapping mechanism changes, as it’s what I’ve been working to release for a while now. The internal changes get us ready for Vulkan [1], improve our multimedia support [2] and our WiDi support, and reduce latency in nested server scenarios [3].

Mir Image

This is prep work for releasing some new client API functions (perhaps in 0.25, depending on how the trade winds are blowing… they’re currently gated in non-public project directories here). More on that once the headers are released.

[1] Vulkan is a new rendering API from Khronos designed to give finer-grained GPU control and more parallel operation between the CPU and the GPU.

[2] Especially multimedia decoding and encoding, which need more arbitrary buffer control.

[3] “Unity8” runs in a nested server configuration for multiuser support (among other reasons). unity-system-compositor controls the framebuffer, unity8 sessions connect to unity-system-compositor, and clients connect to the appropriate unity8 session. More fine-grained buffer submissions allow us to forward buffers more creatively, making sure the clients have zero-copy more often.

Posted in Uncategorized | Leave a comment

ATTiny85 PWM from Timer/Counter1


I’ve been tinkering with the ATTiny chip a bit lately, and I wanted to hook up one of my stepper motors to it. This chip has 2 timers, and a few pins that can output PWM signals.

I had OC1B on PB4/pin3 free, and the Timer/Counter1 module looked a bit better than the Timer/Counter0 module for the relatively-long pulse needed to control the stepper motor.

The first thing to figure out was what I wanted the PWM signal to look like. Stepper motors care more about the pulse length than the frequency. My motor accepted 700-1500us as the control range, so I decided to go with a 4000us period (250Hz).

The way the Timer/Counter1 module works is that it increases its count from 0 to a certain register value (OCR1C), and then resets to zero. In PWM mode, the OC1B pin is cleared when the counter hits a certain value (OCR1B), and set when the counter is 0. So, by controlling OCR1C you control the period, and by controlling OCR1B you control the pulse width. I decided to use the full width of the counting (8 bits), so OCR1C would be set to 0xFF.

Next I had to figure out how quickly the counter would be incrementing. This is determined by the system clock rate and the prescaler on the timer. I needed the 16MHz PLL clock on the chip (CKSEL=0x0001), so that part was settled. I selected 256 as the prescaler value so that:

16MHz / 256 (prescaler) / 256 (OCR1C) ≈ 244Hz, or a period of ~4ms.

Time to write some code!

//fuses: L: 0xE1 H: 0xDD E: 0xff
#define F_CPU 16500000
#include <avr/io.h>

int main(void)
{
    //Set Pin3/PB4 to output
    DDRB = 1 << DDB4;

    //approximately a 700us pulse (45 counts * 16us per count)
    OCR1B = 0x2D;
    //count the full 8 bits, for a ~4ms period
    OCR1C = 0xFF;

    TCCR1 = 1 << CTC1 | //clear on match with OCR1C
            9 << CS10;  //set prescaling to clk/256
    GTCCR = 1 << PWM1B | //enable PWM mode on OC1B
            2 << COM1B0; //clear OC1B when we hit OCR1B

    //the PWM runs from hardware; nothing left to do
    for (;;)
        ;
}

Now that that was written, I hooked up a simple circuit, and hooked my logic analyzer to a resistor on the output pin to verify the output:


The logic analyzer showed:


Success! A 4ms period with a 700us pulse width. I could now drive my stepper motor, and by changing OCR1B, I could designate which position the motor was in.


Posted in Hardware | Leave a comment

Mir and Vulkan Demo

This week the Mir team got a Vulkan demo working on Mir! (youtube link to demo)

I’ve been working on replumbing mir’s internals a bit to give more fine-grained control over buffers, and my tech lead Cemil has been working on hooking that API into the Vulkan/Mir WSI.

The tl;dr on Vulkan is that it’s a recently finalized hardware-accelerated graphics API from Khronos (who also provide the OpenGL APIs). It doesn’t supplant OpenGL, but can give better performance (especially in multithreaded environments) and better debugging in exchange for more explicit control of the GPU.

Some links:
Khronos Vulkan page

Wikipedia Vulkan entry

short video from Intel at SIGGRAPH with a quick explanation

longer video from NVIDIA at SIGGRAPH on Vulkan


If you’re wondering when this will appear in a repository near you, probably right after the Ubuntu Y series opens up (we’re in a feature freeze for xenial/16.04 LTS at the moment).

Posted in Coding, mir, Multimedia, Ubuntu | Leave a comment

New Mir Release (0.18)

Mir Image

If a new Mir release was on your Christmas wishlist (like it was on mine), Mir 0.18 has been released! I’ve been working on this the last few days, and it’s out the door now. Full text of changelog. Special thanks to the mir team members who helped with testing, and the devs in #ubuntu-ci-eng for helping move the release along.


  • Internal preparation work needed for Vulkan, hardware decoded multimedia optimizations, and latency improvements for nested servers.
  • Started work on plugin renderers. This will better prepare mir for IoT, where we might not have a Vulkan/GLES stack on the device, and might have to use the CPU.
  • Fixes for graphics corruption affecting Xmir (blocky black bars).
  • Various fixes for multimonitor scenarios, as well as better support for scaling buffers to suit the monitor they’re on.
  • Use libinput by default. We had been leaning on an old version of the Android input stack; this is completely removed in favor of libinput.
  • Quite a long list of bug corrections. Some of these were never ‘in the wild’ but appeared during the course of 0.18 development.

What’s next?

It’s always tricky to pin down exactly what will make it into the next release, but I can at least comment on the stuff we’re working on, in addition to the normal rounds of bugfixing and test improvements:

  • various Internet-o-Things and convergence topics (eg, snappy, figuring out different rendering options on smaller devices).
  • buffer swapping rework to accommodate different render technologies (Vulkan!), accommodations for multimedia, and improved latency for nested servers.
  • more flexible screenshotting support
  • further refinements to our window management API
  • refinements to our platform autodetection

How can I help?

Writing new Shells

A fun way to help would be to write new shells! One of mir’s goals is to make this as easy as possible, so writing a new shell always helps us make sure we’re hitting that goal.

If you’re interested in the mir C++ shell API, then you can look at some of our demos, available in the ‘mir-demos’ package. (source here, documentation here)

Even easier than that might be writing a shell using QML like unity8 is doing via the qtmir plugin. An example of how to do that is here (instructions on running here).

Tinkering with technology

If you’re more of the nuts and bolts type, you can try porting a device, adding a new rendering platform to mir (OpenVG or pixman might be an interesting, beneficial challenge), or figuring out other features to take advantage of.

Standard stuff

Pretty much all open source projects recommend bug fixing or triaging, helping on irc (#ubuntu-mir on freenode) or documentation auditing as other good ways to start helping.

Posted in Coding, mir, Ubuntu | 2 Comments

Small Run Fab Services

For quite a while I’ve been just using protoboards, or trying toner transfer to make PCBs, with limited success.

A botched toner transfer attempt

A hackaday article (Why are you still making PCBs?) turned me on to low-cost, prototyping PCB runs. Cutting my own boards via toner transfer had lots of drawbacks:

  • I’d botch my transfer (as seen above), and have to clean the board and start over again. Chemicals are no fun either.
  • Drilling is tedious.
  • I never really got to the point where I’d say it was easy to do a one-sided board.
  • I would always route one-sided boards, as I never got good enough to want to try a 2 layer board.
  • There was no solder mask layer, so you’d get oxidation, and have to be very careful while soldering.
  • Adding silkscreen was just not worth the effort.

I seemed to remember trying to find small run services like this a while ago, but coming up short. I might be coming late to the party of small-run PCB fabs, but I was excited to find that services like OSHpark are out there. They’ll cut you three 2-layer PCBs with all the fixins’ for $5/square inch! This is a much nicer board, probably at a lower cost than I could manage myself.

Here’s the same board design (rerouted for 2layer) as the botched one above:

The same board design as above, uploaded into OSHpark

You can upload an Eagle BRD file directly, or submit the normal gerber files. Once uploaded, you can easily share the project on OSHpark. (this project’s download). You have to wait 12 days for the boards, but if I’m being honest with myself, that’s a quicker turnaround time than my basement fab could manage! I’m sure I’ll be cutting my own boards far less in the future.

Posted in Hardware | Leave a comment

Bjarne on C++11

header image

I saw this keynote quite a while ago, and I still refer to it sometimes, even though it’s almost 3 years old now. It’s a good whirlwind tour of the advances in C++11.

Posted in Coding | Leave a comment

More Usable Code By Avoiding Two Step Objects

header image

Two step initialization is harmful to the objects that you write because it obfuscates the dependencies of the object, and makes the object harder to use.

Harder to use

Consider a header and some usage code:

struct Monkey
{
    void set_banana(std::shared_ptr<Banana> const& banana);
    void munch_banana();
    std::shared_ptr<Banana> banana;
};

int main(int argc, char** argv)
{
    Monkey jim;
}

Now jim.munch_banana(); could be a valid line to call, but the reader of the interface isn’t really assured that it is if the writer wrote the object with two step initialization. If the implementation is:

Monkey::Monkey() :
    banana{nullptr}
{
}

void Monkey::set_banana(std::shared_ptr<Banana> const& b)
{
    banana = b;
}

void Monkey::munch_banana()
{
    banana->munch(); //banana may still be null here!
}

Then calling jim.munch_banana(); would segfault! A more careful coder might have written:

void Monkey::munch_banana()
{
    if (banana)
        banana->munch();
}
This still is a problem though, as calling munch_banana() is silently doing nothing, and the caller can’t know that. If you tried to fix it by writing:

void Monkey::munch_banana()
{
    if (!banana)
        throw std::logic_error("monkey doesn't have a banana");
    banana->munch();
}

We’re at least to the point where we haven’t segfaulted, and we’ve notified the caller that something has gone wrong… but we’re still at the point where we’ve thrown an exception that the user has to recover from.

Obfuscated Dependencies

With the two-step object, you need more lines of code to initialize it, and you leave the object “vulnerable”.

auto monkey = std::make_unique<Monkey>();
monkey->set_banana(banana);

If you notice, between lines 1 and 2, monkey isn’t really a constructed object. It’s in an indeterminate state! If monkey has to be passed around to an object that has a Banana to share, that’s a recipe for a problem. Other objects don’t have a good way to know if this is a Monkey object, or if it’s a meta-Monkey object that can’t be used yet.

Can we do better?

Yes! By thinking about our object’s dependencies, we can avoid the situation altogether.
The truth is, Monkey really does depend on Banana.
If the class expresses this in its constructor, ala:

struct Monkey
{
    Monkey(std::shared_ptr<Banana> const& banana);
    void set_banana(std::shared_ptr<Banana> const& banana);
    void munch_banana();
    std::shared_ptr<Banana> banana;
};

We make it clear when constructing that the Monkey needs a Banana. The coder interested in calling Monkey::munch_banana() is guaranteed that it’ll work. The code implementing Monkey::munch_banana() becomes the original, and simple:

void Monkey::munch_banana()
{
    banana->munch();
}

Furthermore, if we update the banana later via Monkey::set_banana(), we’re still in the clear. The only way the coder is going to run into problems is if they explicitly set a nullptr as the argument, which is a pretty easy error to avoid: you have to actively do something silly, instead of doing something reasonable and getting a silly error.

Getting the dependencies of the object right sorts out a lot of interface problems and makes the object easier to use.

Posted in Coding | Leave a comment

Bad Metaphysics Costs Maintainability

header image

I find myself doing a lot of metaphysical thinking in my day to day work as a coder. Objects that are cohesive and are valid metaphysical analogues to common experiences make it much easier to read, understand, and fix existing code.

Taking an example:

struct ServiceQueue
{
    void place_customer_at_back();
    void service_front_customer();
};

This class maps well to a physical problem we encounter frequently in our day-to-day lives: customer service at a bank teller window, perhaps, or ordering a hamburger at a fast food chain. Moving to the realm of computers, ServiceQueue maps to many computing problems pretty well too. A packet arrives over the network, and we want to service the packet in a FIFO manner.

This class is cohesive and easily understood by someone new to the code because it maps to a well-understood concept: that of a queue. By making use of the shared metaphysical concept of physical queueing, we introduce the new coder to the abstract queue that’s implemented in software. The new coder can understand and verify the interface and implementation quickly.

Now let’s spice the interface up with a metaphysical error. Say we have the above class, and we encounter an error condition that happens when the network becomes disconnected. One might be tempted to add a function like:

struct ServiceQueue
{
    void place_customer_at_back();
    void service_front_customer();
    void handle_network_disconnected();
};

The problem in adding this function is twofold: it makes the code harder to understand, and it makes writing a new object more difficult.

Rapid Understandability

The first issue is that the analogue to a real-life concept is diluted, and with enough dilution, will eventually be lost. This makes it more difficult to rapidly understand the role of an object implementing the interface.

I bet the coder that added “handle_network_disconnected” saw that a lot of the implementation where the error could be handled was “conveniently” in the class implementing the interface, and punched “handle_network_disconnected” in. But did you catch the metaphysical error? ServiceQueue is no longer named properly; it’s become a different object. It’s a ServiceQueueThatCanBeDisconnected, and the analogy to the physical queue is weakened. It takes a bit more explaining to tell a new coder what sort of interface ServiceQueue is. That additional explanation makes it much more difficult to understand the object and the problem being solved. Consequently, it’s harder to maintain and takes longer to debug, because of the added cost of understanding.

Alternative Implementations

With each error like this, it becomes a bit harder to write an implementation for the interface that solves a similar problem. ServiceQueue with “handle_network_disconnected” fits the network-packet problem, but it’s been made more difficult to use this interface with the myriad other problems (like the bank teller problem).

Now, in the practical world of software, we’re used to seeing this all the time. We can mentally handle one metaphysical error per interface quite easily. The actual problem comes in much worse scenarios, where there are multiple holes punched through the interface. Eventually, it can get to the point where the object really has no physical manifestation and the interface gets renamed to something ambiguous, like “ServiceManager”. At this point, the object has sluggish understandability and is irreplaceable. We’ve found ourselves with some difficult-to-maintain software!

It might take a bit of refactoring to get things right, but in the end, it’s worth it, both practically and metaphysically.

This post originally appeared on kdubois.net, and is (c) Kevin DuBois 2015

Posted in Coding | Leave a comment

A few years of Mir TDD

asm header

We started the Mir project a few years ago guided by the principles in the book Growing Object-Oriented Software, Guided by Tests. I recommend a read, especially if you’ve never been exposed to test-driven development.

Compared to other projects that I’ve worked on, I find that as a greenfield TDD project Mir has really benefitted from the TDD process in terms of ease of development and reliability. Just a few quick thoughts:

  • I’ve found the mir code to be ready to ship as soon as code lands. There’s very little going back and figuring out how the new feature has caused regressions in other parts of the code.
  • There’s much less debugging in the initial rounds of development, as you’ve already planned and written out tests for what you want the code to do.
  • It takes a bit more faith when starting a new line of work that you’ll be able to get the code completed. Test-driven development encourages exploratory spikes (which tend to have exploratory interfaces), followed by revisiting and methodically introducing refactorings and new interfaces that are clearer than the ropey interfaces seen in the ‘spike’ branches. That is, the interfaces that land tend to be second-attempt interfaces, selected from a fuller understanding of the problem, and tend to be more coherent.
  • You end up with more modular, object-oriented code, because generally you’re writing a minimum of two implementations of any interface you’re working on (the production code, and the mock/stub)
  • The reviews tend to be less about whether things work, and more about the sensibility of the interfaces.
Posted in Coding, mir | Leave a comment