Whirlwind Tour of the Graphics Stack

Graphics drivers have often been a sticking point in the Linux ecosystem. Getting them to work the way you want can be (and historically has been) tricky.

Ever wonder why?

The simple answer is that the software stack needed to support graphics cards is complicated. Very complicated. There are many layers, and many features the graphics stack has to provide to run the way you want it to.

The simple idea is that a graphics card is a specialized co-processor. It's designed for, and really good at, the massively parallel mathematical computations needed to render 3D images.

A GPU is a distinct processor, so it needs to be programmed by the CPU in order to compute anything. The CPU has to manage all the memory the GPU works with, and it has to tell the GPU exactly what to do, right down to assembling the byte-level instructions for it. And it needs to do all of this on the fly, at least 60 times per second! Quite impressive that it works.

In Linux, the chunk of code that builds the commands for the GPU is usually libGL.so. It allocates a bunch of memory and fills it with commands for the GPU to process. It is, essentially, a real-time assembler that takes OpenGL calls as input. It takes a lot of dynamic, clever coding for this chunk of code to produce the assembly-level instructions for the GPU. Once the commands for the GPU are assembled, and all the graphics buffers have been allocated, managed, and collected, libGL.so is done with its work (for this one frame). Again, this is not a trivial matter. A modern desktop GPU has hundreds of registers to program for each frame. Compare that to the 32 registers an ARM CPU has!
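To make that a little more concrete, here's a minimal sketch of what this real-time assembly can look like. Fair warning: the opcodes, register offsets, and names below are completely made up for illustration, since every GPU family defines its own command format.

```c
/* Hypothetical sketch of a userspace driver emitting GPU commands.
 * CMD_SET_REGISTER, CMD_DRAW, and the register offsets are invented. */
#include <stdint.h>
#include <stddef.h>

#define CMD_SET_REGISTER 0x1  /* hypothetical opcode */
#define CMD_DRAW         0x2  /* hypothetical opcode */

struct cmd_buffer {
    uint32_t words[4096];
    size_t   len;
};

void emit(struct cmd_buffer *cb, uint32_t word)
{
    if (cb->len < 4096)
        cb->words[cb->len++] = word;
}

/* Translate a high-level "draw N vertices" request into register writes
 * plus a draw packet -- the real-time assembly described above. */
void emit_draw(struct cmd_buffer *cb, uint32_t vertex_buf_gpu_addr,
               uint32_t vertex_count)
{
    emit(cb, CMD_SET_REGISTER);
    emit(cb, 0x2000 /* hypothetical VERTEX_BASE register */);
    emit(cb, vertex_buf_gpu_addr);

    emit(cb, CMD_DRAW);
    emit(cb, vertex_count);
}
```

The real thing is vastly hairier (state tracking, relocations, flushing), but the core job is the same: turn API calls into a stream of command words the hardware understands.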

At this point, everything the GPU needs to do its work is in memory: commands are assembled, and buffers are allocated. However, all this information was created in userspace! It still has to get through to the kernel (and later, the hardware) somehow. Here's where the kernel driver kicks in. libGL.so submits all the information to the kernel driver, and the kernel driver takes care of handing it to the hardware to work on. It has to keep track of when and what it submitted, and notice when the GPU finishes its work. The kernel driver is an important intermediary between the program that wants to use the GPU and the actual hardware.
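Here's a rough sketch of that handoff. The ioctl number and struct below are hypothetical; real kernel drivers (like those in Linux's DRM subsystem) each define their own submission interface, but the shape is similar: hand over a finished command buffer, get back something to wait on.

```c
/* Hedged sketch: userspace handing a command buffer to the kernel driver.
 * DRM_IOCTL_MYGPU_SUBMIT and struct mygpu_submit are hypothetical. */
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

struct mygpu_submit {
    uint64_t cmd_buf_ptr;   /* userspace address of the command words */
    uint32_t cmd_buf_len;   /* number of 32-bit words */
    uint32_t out_fence;     /* filled in by the kernel: ID to wait on */
};

#define DRM_IOCTL_MYGPU_SUBMIT _IOWR('d', 0x40, struct mygpu_submit)

int submit_frame(int drm_fd, uint32_t *cmds, uint32_t len, uint32_t *fence)
{
    struct mygpu_submit req = {
        .cmd_buf_ptr = (uint64_t)(uintptr_t)cmds,
        .cmd_buf_len = len,
    };

    /* The kernel driver validates the buffer, schedules it on the GPU,
     * and returns a fence so we know when the hardware is done. */
    if (ioctl(drm_fd, DRM_IOCTL_MYGPU_SUBMIT, &req) != 0) {
        perror("submit");
        return -1;
    }
    *fence = req.out_fence;
    return 0;
}
```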

When the GPU is done, the kernel driver is told, and eventually this information gets propagated back to the application making the OpenGL calls. Now we're ready to do this all over again for the next frame!
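What does "propagated back" look like? Continuing the hypothetical interface from the previous sketch, it could be as simple as a blocking wait on the fence the submit returned:

```c
/* Hedged sketch, continuing the hypothetical interface above: block until
 * the GPU has finished the work associated with a fence. */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

struct mygpu_wait {
    uint32_t fence;       /* fence ID returned by the submit ioctl */
    uint64_t timeout_ns;  /* give up after this long */
};

#define DRM_IOCTL_MYGPU_WAIT _IOW('d', 0x41, struct mygpu_wait)

int wait_frame(int drm_fd, uint32_t fence)
{
    struct mygpu_wait req = { .fence = fence, .timeout_ns = 1000000000ULL };
    return ioctl(drm_fd, DRM_IOCTL_MYGPU_WAIT, &req); /* 0 once the GPU is done */
}
```

An application rarely calls anything like this directly; it surfaces as things like glFinish() or a buffer swap that blocks.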

If you’ve followed my explanation so far, that’s great! 🙂 There are a lot of other layers of complication, like…

  1. We have to deal with virtual memory! The CPU’s virtual addresses are not necessarily the same as the GPU’s virtual addresses. You have to program various MMUs to make sure the CPU and the GPU are referencing the same physical address. You have to ensure all the MMUs are coherent with each other as well…
  2. What about many programs using the OpenGL stack at once? It’s possible that two programs want to use the hardware concurrently. This adds all sorts of locking and threading.
  3. Say you’re rendering a 1080p (1920×1080) scene, using 32-bit color at 60 frames per second. This is:
    1920 × 1080 × 32 × 60 ≈ 3.98 × 10^9 bits of information per second!
    You have to move about 500 megabytes per second onto the framebuffer (screen); the arithmetic is checked in the snippet after this list. By the time you’ve finished watching Hot Tub Time Machine, you’ve consumed more bits of data than the digitization of the entire Library of Congress. Moving this much data takes special techniques and special hardware paths.
  4. GPUs need initialization and special care to get going….
  5. The graphics stack on the CPU side isn’t simple either… Ever hear of X11?
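Since we’re throwing numbers around, here’s the arithmetic from item 3 as a few lines of C you can actually run:

```c
/* Quick sanity check of the framebuffer bandwidth math in item 3. */
#include <stdio.h>

int main(void)
{
    const long long width = 1920, height = 1080;
    const long long bits_per_pixel = 32, frames_per_sec = 60;

    long long bits_per_sec = width * height * bits_per_pixel * frames_per_sec;

    printf("%lld bits/s (~%.1f MB/s)\n",
           bits_per_sec, bits_per_sec / 8.0 / 1e6);
    /* prints: 3981312000 bits/s (~497.7 MB/s) */
    return 0;
}
```

And that ~500 MB/s is just the final scanout to the screen; it doesn’t count texture uploads, intermediate buffers, or overdraw along the way.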

It’s really amazing what goes on every second in a computer. It’s also why I like working in graphics!
