Bad Metaphysics Costs Maintainability

I find myself doing a lot of metaphysical thinking in my day-to-day work as a coder. Objects that are cohesive and that map cleanly onto common experience make existing code much easier to read, understand, and fix.

Take an example:

struct ServiceQueue
{
    void place_customer_at_back();
    void service_front_customer();
};

This class maps well to a physical problem we encounter frequently in our day-to-day lives: customer service at a bank teller window, perhaps, or ordering a hamburger at a fast-food chain. Moving to the realm of computers, ServiceQueue maps equally well to many computing problems. A packet arrives over the network, and we want to service packets in a FIFO manner.

This class is cohesive and easily understood by someone new to the code because it maps well to a well-understood concept: that of a queue. By making use of the shared metaphysical concept of physical queueing, we introduce the new coder to the abstract queue implemented in software, and they can understand and verify the interface and implementation quickly.
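To make the example concrete, here is a minimal sketch of one way such an interface might be implemented. The Customer type and the std::queue backing are my own illustrative assumptions, not anything the interface prescribes:

```cpp
#include <cstddef>
#include <queue>
#include <string>
#include <utility>

// Hypothetical unit of work -- a person at the teller window,
// or a packet arriving over the network.
struct Customer
{
    std::string name;
};

// A minimal sketch of the ServiceQueue interface, backed by std::queue.
struct ServiceQueue
{
    void place_customer_at_back(Customer c)
    {
        customers.push(std::move(c));
    }

    // Services (here: simply removes) the front customer, FIFO order.
    void service_front_customer()
    {
        if (!customers.empty())
            customers.pop();
    }

    std::size_t size() const { return customers.size(); }

private:
    std::queue<Customer> customers;
};
```

Because the interface mirrors the physical metaphor, a reader can verify this implementation against their own experience of standing in line.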

Now let's spice the interface up with a metaphysical error. Say that we have the above class, and we encounter an error condition when the network becomes disconnected. One might be tempted to add a function like:

struct ServiceQueue
{
    void place_customer_at_back();
    void service_front_customer();
    void handle_network_disconnected();
};

The problem with adding this function is twofold: it makes the code harder to understand, and it makes writing a new implementation more difficult.

Rapid Understandability

The first issue is that the analogue to a real-life concept is diluted, and with enough dilution, it will eventually be lost. This makes it more difficult to rapidly understand the role of an object implementing the interface.

I bet the coder who added “handle_network_disconnected” saw that a lot of the implementation where the error could be handled from was “conveniently” in the class implementing the interface, and punched “handle_network_disconnected” in. But did you catch the metaphysical error? ServiceQueue is no longer named properly; it's become a different object. It's a ServiceQueueThatCanBeDisconnected, and the analogy to the physical queue is weakened. It now takes a bit more explaining before a new coder understands what sort of interface ServiceQueue is, and that added cost of understanding makes the object harder to maintain and slower to debug.

Alternative Implementations

With each error like this, it becomes a bit harder to write an implementation of the interface that solves a similar problem. ServiceQueue with “handle_network_disconnected” fits the network-packet problem, but it's been made more difficult to use this interface with the myriad other problems (like the bank teller problem).
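One way to avoid the dilution, sketched here with hypothetical names of my own invention, is to keep ServiceQueue a pure queue and put the network concern behind its own small interface. The network-packet implementation opts into both; the bank-teller implementation only needs ServiceQueue, and the metaphor survives:

```cpp
// The queue keeps its clean physical analogue...
struct ServiceQueue
{
    virtual ~ServiceQueue() = default;
    virtual void place_customer_at_back() = 0;
    virtual void service_front_customer() = 0;
};

// ...and the network concern lives behind its own interface.
struct NetworkObserver
{
    virtual ~NetworkObserver() = default;
    virtual void handle_network_disconnected() = 0;
};

// Only the implementation that actually faces the network implements both.
struct PacketQueue : ServiceQueue, NetworkObserver
{
    void place_customer_at_back() override { ++queued; }
    void service_front_customer() override { if (queued > 0) --queued; }

    // On disconnect, drop the pending work (one possible policy).
    void handle_network_disconnected() override { queued = 0; }

    int queued = 0;
};
```

Callers who only care about queueing take a ServiceQueue&; only the network-aware wiring ever sees NetworkObserver.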

Now, in the practical world of software, we're used to seeing this all the time. We can mentally handle one metaphysical error per interface quite easily. The real trouble comes in much worse scenarios, where multiple holes have been punched through the interface. Eventually, it can get to the point where the object has no physical manifestation left, and the interface gets renamed to something ambiguous, like “ServiceManager”. At that point, the object is sluggish to understand and impossible to replace. We've found ourselves with some difficult-to-maintain software!

It might take a bit of refactoring to get things right, but in the end it's worth it, both practically and metaphysically.

This post originally appeared on, and is (c) Kevin DuBois 2015

Posted in Coding | Comments Off

A few years of Mir TDD

We started the Mir project a few years ago, guided by the principles in the book Growing Object-Oriented Software, Guided by Tests. I recommend a read, especially if you've never been exposed to test-driven development.

Compared to other projects that I've worked on, I find that as a greenfield TDD project, Mir has really benefited from the TDD process in terms of ease of development and reliability. Just a few quick thoughts:

  • I’ve found the Mir code to be ready to ship as soon as it lands. There’s very little going back and figuring out how a new feature has caused regressions in other parts of the code.
  • There’s much less debugging in the initial rounds of development, as you’ve already planned and written out tests for what you want the code to do.
  • It takes a bit more faith, when you’re starting a new line of work, that you’ll be able to get the code completed. Test-driven development forces more exploratory spikes (which tend to have exploratory interfaces), followed by revisiting the work to methodically introduce refactorings and interfaces that are clearer than the ropey ones seen in the ‘spike’ branches. That is, the interfaces that land tend to be second-attempt interfaces, selected from a fuller understanding of the problem, and they tend to be more coherent.
  • You end up with more modular, object-oriented code, because you’re generally writing a minimum of two implementations of any interface you work on (the production code, and the mock/stub).
  • The reviews tend to be less about whether things work, and more about the sensibility of the interfaces.
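As a rough illustration of the “minimum of two implementations” point, here is a hand-rolled stub (Mir itself uses a mocking framework; the Renderer name and shape are hypothetical, chosen just for this sketch):

```cpp
#include <cassert>

// An interface designed test-first.
struct Renderer
{
    virtual ~Renderer() = default;
    virtual void render_frame() = 0;
};

// Implementation #1: a stub written for the test, which just records calls.
struct StubRenderer : Renderer
{
    void render_frame() override { ++frames_rendered; }
    int frames_rendered = 0;
};

// Code under test is written against the interface, not a concrete class.
void run_compositor_pass(Renderer& renderer, int frame_count)
{
    for (int i = 0; i < frame_count; ++i)
        renderer.render_frame();
}

// The test drives the design: implementation #2 (the production renderer)
// is later forced to fit the same interface the stub already satisfies.
void test_compositor_renders_each_frame()
{
    StubRenderer stub;
    run_compositor_pass(stub, 3);
    assert(stub.frames_rendered == 3);
}
```

Writing the stub first is what pushes the production code toward the modular, interface-driven shape described above.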
Posted in Coding, mir | Comments Off

Mir Android-platform Multimonitor

My latest work on the Mir Android platform includes multimonitor support! It should work with SlimPort/MHL; Mir happily sits at an abstraction level above the details of MHL/SlimPort. This should be available in the next release (probably Mir 0.13), or you can grab lp:mir now to start tinkering.

Posted in Coding, mir, Ubuntu | 2 Comments

SVG Hardware Drawer Labels

I recently made a set of SVG labels in Inkscape for my hardware small-parts bin, the common Akro-Mils 10164 small parts organizer. It's sized to print the labels at the correct size on an 11″x8.5″ sheet of paper (results may vary, so make sure to resize for whatever drawer and printer you have).


The Labels in action

I thought I’d share them here in SVG format, which should make it pretty easy for you to download and customize. (E.g., you could change the resistor color codes to your set of resistors, change the values, etc.) If you do sink a lot of effort into adapting the file, please share back (open source!) via the comments, and I’ll update the file so others can use it.

Drawer Labels

SVG file (copyright (c) 2015 Kevin DuBois, Licensed under CC BY-NC-SA)

Posted in Hardware, Open Source | Comments Off

Saleae Logic 8 Review

Over the break, I got to play a bit with the Saleae Logic 8 logic analyzer. It's the mid-range model from Saleae, and it works with Ubuntu. I wrote about the predecessor to the Logic 8 a while back, before Linux support was around. I finally got to do a bit of tinkering with the new device under Ubuntu Vivid.

Logic 8

The device itself came packaged only in the provided carrying case. Inside the zippered case were the Logic 8 itself, two 4×2 headers with 6-inch leads, 16 logic probes, a micro-USB cable, a postcard directing you to the support site, and a poster of Buzz Aldrin in the Apollo cockpit.
The Logic 8 is made of machined, anodized aluminum and is only about 2×2 inches. It's sturdy-feeling, and the only ports are the micro-USB connector for the computer and the 16 logic probe pins (8x ground+signal). There's a blue LED on the top.


Package Contents

The test leads seem pretty good. I’m used to the J-hook type leads, and these have two pincers that come out. I’ve been able to get the leads into more places than I would have with a J-hook type logic probe.

Bonus Inspiration

Another really interesting feature is that this logic analyzer can do analog sampling on each of its test leads. The device can sample faster if you're only using one analog channel: one channel can sample at 10M samples/second, while running all 8 samples at 2.5M samples/second. According to the literature, frequencies above the Nyquist frequency of the sample rate get filtered out before hitting the onboard ADC. If you're anything like me, most of your electronics tinkering doesn't require looking at signals above this rate, and I could see using the oscilloscope less and using the Logic 8 for some analog signal work too.
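To put numbers on that: the usable analog bandwidth is the Nyquist frequency, i.e. half the sample rate. A quick back-of-the-envelope using the rates above:

```cpp
// Nyquist frequency: the highest signal frequency a given sample rate
// can represent without aliasing is half that rate.
constexpr double nyquist_hz(double samples_per_second)
{
    return samples_per_second / 2.0;
}

// One analog channel at 10 MS/s  -> 5 MHz of usable bandwidth.
// All 8 channels at 2.5 MS/s each -> 1.25 MHz per channel.
```

So hobby-scale signals (audio, slow sensors, PWM) fit comfortably, which matches the "use the oscilloscope less" impression above.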

Underside of Logic 8

The Software:
The Logic 8 software is available (freeware, closed source) on the website and will simulate inputs if there's no device connected, so you can get a pretty good feel for how the actual device will work. It was largely hassle-free, although I did have to unpack it in /opt because it wasn't packaged. Overall, it was pretty intuitive to configure the sampling, set up triggers, and test my circuit. The look and feel of the software is much better than a lot of other electronics tools I've used.

Trying it out:
I was working on a pretty simple circuit that takes a sensor input and outputs to a single 7-segment display. It's composed of a BCD-to-7-segment decoder chip and an ATtiny13 (easy enough to program with the Ubuntu packages ‘avrdude’ and ‘gcc-avr’).


Circuit Under Test (ATtiny13, a light sensor, and a BCD to 7 segment decoder)

It's not electrically isolated from the circuit, but that's to be expected at this price point. Just make sure that you don't have any ground loops between your computer and the circuit under test. I don't typically build circuits that really need an earth ground, so I don't see that being much of an issue.

For my first run, I connected it to the GPIO pins on the AVR and varied the voltage from 0-2.5V on the ADC pin.

ADC to 4bit digital

Yay, my circuit (and AVR program) was working.

I am pleased with the Logic 8, and am even more excited to have a hassle free way to measure logic and analog signals in Ubuntu!

Posted in Hardware, Reviews | Comments Off

Mir Device Showcase!

Here’s a video of Mir powering a few different GPUs:

  • Nexus 10 (ARM Mali T-604 GPU)
  • Nexus 4 (Qualcomm Adreno 320 GPU)
  • Nexus 7 (2012) (Nvidia Tegra 3)
  • Galaxy Nexus (PowerVR)

This is a pretty big milestone, as we're now in a position where Mir works well with four big Android GPU vendors.


The only disclaimer on the video is that some of the code hasn't trickled down to the images yet, and tablet support is still a work in progress. Onwards and upwards!

Posted in Hardware, mir, Ubuntu | 2 Comments

Friendly Mir Links

Just a friendly reminder, but Mir is open! Here are some useful links.


We’ve put effort into sharing as much as possible and lowering the barrier to entry for the project. We want you to understand how your pixels will be painted under Mir.

Mir documentation:
This is all generated right from the trunk code (lp:mir's doc/ folder).
We also generate API documentation on the same site:


The code is all available on launchpad: lp:mir
The reviews are all done out in the open: active reviews
Our continuous integration is on Jenkins, like the rest of the Ubuntu projects:
Lastly, there are no secret branches or anything like that anymore. We're operating fully in the open! :)

Coordination and Planning

We do all of our coordination surrounding the code on freenode's #ubuntu-mir channel. Since this is an Ubuntu channel, it's logged. Here's an example: #ubuntu-mir log. You'll see in the logs that we really do our updates, coordination, and planning all on this channel.

We have our blueprints out in the open too. You can see our upcoming plans and the upcoming work items that are slated.

We’ve got a mailing list on launchpad too! Join up and stay abreast of all the latest email chains.

Posted in Coding, mir, Open Source, Ubuntu | 2 Comments

Mir and Android GPU’s

With Ubuntu Touch (and Mir/Unity Next), we're foraying into a whole new world of Android drivers. Given the community's bad memories about graphics in the past, let's clear up what's going on, and how we'll steer clear of the murky waters of new driver support and get rock-solid Ubuntu on mobile platforms.

Android Driver Components and their Openness

First, let's talk about openness. Driver ecosystems tend to be complex, and Android is no exception. To get a driver to work on Android, the GPU vendors provide:

  1. a kernel module
    The kernel module must be GPL-compatible, so this part of the driver is always open. It controls the GPU hardware, and its main responsibility is to manage the incoming command buffers and the outgoing color buffers.
  2. libhardware implementations from the Android HAL
    These libraries are the glue that takes care of some basic operations the userspace system has to do, like compositing a bunch of buffers, posting to the framebuffer, or getting a color buffer for the graphics driver to use. These libraries (called gralloc, hwc, and fb, among others) are sometimes open, and sometimes closed.
  3. an OpenGLES and EGL userspace library
    These are the parts that program the instructions for the GPU, and they are the ‘meat and potatoes’ of what the vendors provide. Unfortunately, this code is closed source, as many people already know. Just because these libraries are closed doesn't mean we have no idea of what's going on in them, though. They rely on the open source parts and have been picked apart pretty well by various reverse-engineering projects (like freedreno).

All the closed parts of the driver system are used via headers under the Apache or Khronos licenses. These headers are APIs that change slowly, and do so in a (relatively) non-chaotic manner controlled by Google or the Khronos Group. These APIs are very distinct from the DRM/gbm/etc. that we see on ‘the free stack’.

The drivers are not 100% open, but they're not 100% closed either. Without the closed-source binaries, you can't use the core GLES functionality that you want, but enough parts of the system are open that you can infer what big parts of the system are doing. You can also have an open source ecosystem like Mir or Android built around them, because we interface using open headers.

As far as openness goes, it's a grey area; it's acceptable to call them blob drivers though :)


We have a lot of bad memories of things not working. I remember fighting all the time with my compiz drivers back in the days of AIGLX and the like. Luckily, in working on Mir and phones, we've remembered all this pain and have a reasonable way to jump onto the new driver platform without any wailing or gnashing of teeth.

The biggest advantage we have with the mobile drivers is that they are based around a fixed industry API that has proven itself on hundreds of millions of devices. We're not reinventing the wheel here! We're not heading out on our own to invent our own HAL layer, and we don't have to brow-beat GPU vendors into supporting a new API.

With this, we pick up a lot of the goodness that comes with out-of-the-box Android drivers, like great power management, performance, and stability. Mir can use Android drivers as they come from the driver vendor, and we're using them in a well-known way.

Drivers and hardware support are the foundation of a well performing, amazing computing experience. With Mir and Ubuntu Next, we’re not building our house upon sand, we’re building it upon rock!

A Sneak Peek

Here's a sneak peek of Mir (from lp:mir). This is a demo of just Mir and (essentially) the Qt client that the Ubuntu Touch interface uses; this is not using Unity Next. The device is a Nexus 4 with an Adreno 320 GPU.

Posted in Coding, mir, Open Source, Ubuntu | 7 Comments

Mir and Android FAQ

There have been some murmurs and uncertainty about Mir and Ubuntu Touch support, so here's a quick FAQ.

Does mir support android drivers?

Yes! We put great care into our platform abstraction so that when you run on mesa desktop drivers, you use our mesa/gbm platform, but when you run mir inside of an Ubuntu Touch phone/tablet, you use the android platform to get full OpenGLES acceleration.

What sort of acceleration do you provide with android drivers?

Full acceleration! More specifically, this means that the entire path, from client render to framebuffer post, is OpenGLES accelerated, and the color buffers are never copied. This gives Mir clients and Unity Next the performance they need to succeed.

Android uses java. Does this mean mir uses java?

Heavens, no. Mir has no Java inside. We are proudly a C++11 project (and we actually use the great new STL additions that come with C++11, like lambdas, smart pointers, and the like).
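For a taste of what that style looks like — an illustrative fragment of my own, not actual Mir code (the Surface type and count_visible function are hypothetical):

```cpp
#include <algorithm>
#include <memory>
#include <vector>

struct Surface
{
    bool visible = true;
};

int count_visible(std::vector<std::shared_ptr<Surface>> const& surfaces)
{
    // The C++11 flavor in one breath: smart pointers own the surfaces,
    // and a lambda expresses the predicate inline.
    return static_cast<int>(std::count_if(
        surfaces.begin(), surfaces.end(),
        [](std::shared_ptr<Surface> const& s) { return s->visible; }));
}
```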

Do I need android tools (eg, the android SDK or NDK) to develop mir on Ubuntu Touch?

Nope! All of our dependencies and build tools come from Debian packages within Ubuntu. We use the GCC ARM toolchains (and cross toolchains) available in the Ubuntu repositories. The adb tool (from package android-tools-adb) is useful for development, but not necessary.

What devices do you run on?

We are focusing on the Nexus line at first, but we should be able to enable all devices, regardless of GPU vendor (we're aiming for ICS drivers and newer). We keep a list of the devices we are focusing on here.

How does Mir support android drivers?

We use the binaries available directly from the Android GPU vendors, as they are. Android driver developers have invested [b,m]illions of dollars to make sure that Android drivers run well on Android, and Mir does not throw this effort away by trying to reinvent the wheel. Android drivers use 1) the Android kernel, 2) the bionic libc, and 3) the userspace driver libraries. We use this exact combination in Mir so that it can be just as solid as the Android display system.

Does this mean mir uses bionic libc?

In short, no. The mesa/GBM platform goes nowhere near bionic libc. On the Android platform, we carefully and cleverly tinker with the linker so that the Android drivers use bionic, but the Mir code and libraries use the normal GNU libc. Mir code uses the GNU libc we all know and love, and we let the Android drivers use the libc they know and love (bionic).

Did you consider using some of the android components (eg surfaceflinger) instead of writing Mir?

Yes we did. We found that:

  1. Surfaceflinger is very tied to the Android system and would take a large amount of porting work to run inside Ubuntu Touch.
  2. Surfaceflinger is currently focused on simplistic z-order-based compositing; we needed something that can support the full Unity experience you expect on a desktop. The complex “launchers” you use in Android are not part of surfaceflinger.
  3. Finally, adapting surfaceflinger to use mesa/gbm drivers is a ton of work (and probably not possible). We love the free-stack drivers and need to support them for the desktop.

Do Mir clients care what platform (Android or mesa/GBM) they are running on?

Nope! A mir client will be able to run on a mesa/gbm platform or an android platform. We took great care to make sure that the clients are agnostic to the underlying OpenGLES driver model. There is no recompilation and no platform detection needed.

How can I find out more?

The easiest way is to pop onto #ubuntu-mir (freenode) and ask me (kdub) a question. Mir is entirely open source, so reading through the documentation and code is also an option.

Posted in Coding, mir, Open Source, Ubuntu | 7 Comments

Why we Make

This is a great talk about the fundamentally human aspect of making. Our race's advantage in the universe isn't the sharpest claws or the thickest skin; it's the ability to make, bend, and discover. This video is a cool little anecdote from Adam Savage of Mythbusters about how he embraced his inner maker:

People make things in all different sorts of ways, from a 4-year-old making arts and crafts to a professional engineer building the Space Station (or a world-class operating system :) ). It's fundamental to being human, so get out there and create!

Posted in Hardware, Random | Comments Off