twobit auto builder part 2

In my last post on automating my project builds with buildbot I covered the relevant buildbot configs. The issue that I left unresolved was the triggering of builds. Buildbot has tons of infrastructure to do this, and its simple polling classes would be more than sufficient for handling my OE projects, which typically have 3-5 git repos. But with OpenXT there are enough git repos to make polling with buildbot impractical. This post covers the last bit necessary to get builds triggering with a client-side hook model.


Now that we’ve worked out the steps necessary to build the software we need to work out how to trigger builds when the source code is updated. Buildbot has extensive infrastructure for triggering builds, ranging from classes that poll various SCM systems in a “pull” approach (pulled by buildbot) to scripts that plug into existing SCM repos as hook scripts in a “push” approach (pushed by the SCM).

Both approaches have their limitations. The easiest way to trigger events is to use the built-in buildbot polling classes. I did use this method for a while with my OE projects but with OpenXT it’s a lost cause. This is because the xenclient-oe.git repo clones the other OpenXT repos at the head of their respective master branches. This means that unlike other OE layers the software being built may change without a subsequent change being made to the meta layer. To use the built-in buildbot pollers you’d have to configure one for each OXT git repo, and there are about 60 of them. That’s a lot of git pollers and would require a really long and redundant master config. The alternative is to set up a push mechanism like the one buildbot provides in the form of a git hook script.
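To make the scale problem concrete, here’s roughly what the pull approach looks like in a buildbot master.cfg: one GitPoller per repo. The repo names and URL pattern below are illustrative stand-ins, not the actual OpenXT repo list.

```python
# Hypothetical master.cfg fragment: one GitPoller per OpenXT repo.
from buildbot.changes.gitpoller import GitPoller

# In reality this list has ~60 entries.
OXT_REPOS = ['openxt', 'xenclient-oe', 'manager']

c['change_source'] = [
    GitPoller(repourl='git://' % repo,
              branches=['master'],
              pollInterval=300,
              workdir='gitpoller-%s' % repo)
    for repo in OXT_REPOS
]
```

Multiply that list out to ~60 entries (each poller keeping its own working directory and polling on its own schedule) and you get exactly the long, redundant master config described above.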

This sounds easy enough, but given the distributed nature of OpenXT it’s a bit harder than I expected, though not in a technical sense. Invoking a hook script from a git server is easy enough: just drop the script into the git repo, generally as a ‘post-receive’ hook, and every time someone pushes code to it the script will fire. If you control the git server where everyone pushes code this is easy. Github provides a similar mechanism but that would still make for a tight coupling of the build master to the OpenXT Github org, and herein lies the problem.

Tightly coupling the build master to the OpenXT Github infrastructure is the last thing I want to do. If there were just one builder for the project this approach would work, but we’d have to come up with a way to accommodate others wanting to trigger their build masters as well. This could produce a system where there’s an “official” builder and everyone else is left hanging. Knowingly building something that leaves us in this situation would be a very bad thing: it would solve a technical problem but produce a “people problem”, and those are exponentially harder. The ideal solution is one that provides equal access and tooling so that anyone can stand up a build master.

Client-side hooks

The only alternative I could come up with here is to mirror every OpenXT repo (all 60 of them) on my build system and then invent a client side hook / build triggering mechanism. I had to “invent” the client side hooks because Git has almost no triggers for events that happen on the client side (including mirrors). My solution for this is in the twobit-git-poller.

It’s really nothing special. Just a simple daemon that takes a config specifying a pile of git repos to mirror. I took a cue from Dickon and his work here by using a small bit of the Github API to walk through all of the OpenXT repos so that my config file doesn’t become huge.

The thing that makes this unique is some magic I do to fake a hook mechanism as much like post-receive as possible. This allows us to use buildbot’s stock git hook script unmodified. I also expanded the config file to allow specifying an arbitrary hook script so others can expand on this idea, but whatever script is supplied is expected to take input identical to that produced by a git post-receive hook.

This is implemented with a naive algorithm: every time the git poller daemon runs we just iterate over the available branches, grab the HEAD of each before and after the fetch, and then dump this data into the hook script. I don’t do anything to detect whether or not there was a change; I just dump data into the hook script. This allows us to use the script unmodified, and buildbot is smart enough to ignore a hook event where the start and end hashes are the same. There are some details around passing authentication parameters and connection strings but those are just details and they’re in the code for the interested reader.
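The before/after loop can be sketched like this. This is a minimal sketch with function and variable names of my own choosing; the actual twobit-git-poller differs in its details.

```python
import subprocess

ZERO_SHA = '0' * 40  # git's convention for "branch did not exist before"

def branch_heads(repo_dir):
    """Map each branch ref in a local mirror to its current SHA."""
    out = subprocess.check_output(
        ['git', 'for-each-ref', '--format=%(objectname) %(refname)',
         'refs/heads'], cwd=repo_dir).decode()
    heads = {}
    for line in out.splitlines():
        sha, ref = line.split()
        heads[ref] = sha
    return heads

def hook_lines(old, new):
    """One post-receive style line per branch: '<old-sha> <new-sha> <ref>'.
    Every branch is reported, changed or not; the hook's consumer
    (buildbot) drops events where the two hashes match."""
    return ''.join('%s %s %s\n' % (old.get(ref, ZERO_SHA), sha, ref)
                   for ref, sha in sorted(new.items()))

def poll_once(repo_dir, hook_cmd):
    """Fetch the mirror and feed before/after branch state to the hook."""
    before = branch_heads(repo_dir)
    subprocess.check_call(['git', 'fetch', '--quiet', 'origin'],
                          cwd=repo_dir)
    after = branch_heads(repo_dir)
    subprocess.run(hook_cmd, input=hook_lines(before, after).encode(),
                   check=True)
```

The `old sha, new sha, refname` line format is the same one git feeds to a server-side post-receive hook on stdin, which is what lets the stock hook script run unmodified.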

With this client-side hooking mechanism we no longer need the poller objects in buildbot. We can just hook up a PBChangeSource that listens for data from the twobit-git-poller or any other change source. AFAIK this mechanism can live side by side with standard git hooks, so if you’ve got an existing buildbot that’s triggered by git repos your team pushes to, using this poller shouldn’t interfere. If it does let me know about it so we can sort it out … or you could send me a pull request :)
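The master.cfg side of that arrangement is small. A sketch, with placeholder credentials (use your own):

```python
# A PBChangeSource accepts change notifications pushed to the master
# over buildbot's PB protocol, e.g. by the hook script the git poller
# invokes. The user/password values here are placeholders.
from buildbot.changes.pb import PBChangeSource

c['change_source'] = [PBChangeSource(user='change', passwd='changepw')]
```

This single change source replaces the whole pile of pollers: it doesn’t care how many repos feed it, only that each hook invocation delivers well-formed change data.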

Wrap Up

When I started this work I expected the git poller to be something that others might pick up and use, and that maybe it would be adopted as part of the OpenXT build stuff. Now that I think about it though, I expect the whole twobit-git-poller to be mostly disposable in the long run. We’ve already made huge progress in aligning the project with upstream OpenEmbedded and I expect that we’ll have a proper OE meta layer sooner rather than later. If this comes to pass there won’t be enough git repos to warrant such a heavyweight approach. The simple polling mechanisms in buildbot should be sufficient eventually.

Still, I plan to maintain this work for however long it’s useful. I’ll also be maintaining the relevant buildbot config, which I hope to generalize a bit. It may even become as general as the Yocto autobuilder but time will tell.

twobit auto builder part 1

I’ve been a bit out of commission for the past month: I started a new job and moved a long way to be close to it. Before the move I had built up a bit of a backlog of work to write about and things have finally settled down enough for me to start working it off. This first post is about the work I’ve done to learn a bit about buildbot and how to use it to build some stuff I’ve been working on.

I’ve always been a bit of a “build junkie” and I’m not ashamed to admit that I really like to write Makefiles by hand. Learning buildbot was only partially motivated by masochism though. The other half was motivated by my need to build a few different projects including the meta-measured OE layer, the meta-selinux OE layer and OpenXT.

I’ve put this work up on github and it still needs some work before it’s portable to anything beyond my build setup. I hope to make this reproducible by others but before I do that I want to:

  1. introduce these repos
  2. cover some of the buildbot internals but only the bits required to make the build work

In doing this work I definitely fell down the rabbit hole a bit and started reading a pile of buildbot code but a lot of it was above and beyond what’s necessary to get this stuff building. I’m often guilty of wanting to understand everything about a piece of code before I start using it but that’s not always the right approach. I’ll do you all a solid and leave this stuff out to save you the hassle. This first post will cover the repo containing the buildbot config I’m using. My next post will cover some support code I’m using to trigger builds. I’ll focus on the bits necessary to build OpenXT initially and will include a quick write-up of my other side projects soon after.


There are two repos that make up this work. The first is the actual buildbot config for both the build master and slave(s). It’s up on github here: twobit-buildbot. The second is a Twisted Python daemon that supports the build master and slave. I’m calling this second repo twobit-git-poller. This post will discuss the former; my next post will cover the git poller.

Build Master

The buildbot config for both the master and the slave(s) I’m using is available on Github here: twobit-buildbot. Buildbot configs are a bit hard to call “configs” with any seriousness since they’re proper Python code. Still, it’s “config-like” in that there’s really no business logic here. It’s just a pile of classes that you fill with data and pass to the buildbot guts for execution.

If I go line by line through the config then this will quickly digress into a buildbot tutorial. There are plenty of those out there and if you’re interested in getting a deeper understanding then you should start with the buildbot docs. The steps that buildbot goes through to build a project are all wrapped up in a container called a BuildFactory. In my repo the BuildFactory for OpenXT is defined here. This class wraps an array of BuildStep objects that define a series of steps required to build the software.

The BuildStep objects are processed in the order they’re passed to the BuildFactory. I’ll run through these in order just to give you a feeling for the build progression:

  1. The first thing in building OXT is to clone the build scripts. Buildbot knows where on the build slave to check this stuff out so all we do is provide some metadata to tell the buildbot git fetcher how we want the checkout to happen. OpenXT doesn’t yet support incremental builds so for now each new build requires a clean checkout.
  2. Next we execute a shell command on the slave to set up the build config. To do this we use the ShellCommand class which is exactly what you’d expect: it executes shell commands. But with buildbot it’s always a question of where: on the master or on the slave? AFAIK for most buildbot stuff you should assume that the action will happen on the slave unless otherwise noted. I’ve rigged the build master configs such that the script name isn’t used explicitly. Instead it’s a shell variable that the slave is expected to set. I’ll describe this a bit more in the next section on the build slave.
  3. The next build steps are a series of shell commands that execute the individual steps of the OXT build. These are hard coded as the minimum necessary to create an installer ISO. These are all invocations of the build script. We could just as well execute the script once and have it get the build steps from the .config file, but it’s nice to execute them one by one so that the buildbot UI shows them as distinct steps. This also makes it easier to go through the output logs when things go wrong.
  4. Once the main build steps are done we need to get the build output to a place where people can download it. So the next step runs a MasterShellCommand to create a directory on the build master. In the case of my build master this is on an NFS mount that gets served up by my webserver. That should tell you how we serve up the build output once it’s on the build master.
  5. Now that we’ve got a place to stash the installer iso we need to move it from the build slave to the build master. OXT has some script functions in the build script to rsync build artifacts to various places. Buildbot has a build step to handle this for us so I’ve chosen to use that instead. Hopefully we’ll retire the bulk of the custom OXT scripts and use the tools available like the FileUpload class in buildbot.
  6. Unfortunately I ran into a nasty buildbot limitation after getting the build upload to work. The version of buildbot packaged for Debian Wheezy runs the buildbot daemon with a very paranoid umask (022). This was resolved 3 years ago but Wheezy doesn’t have the fix. I opted to hack around this with an extra build step instead of requiring people to install buildbot from source or apply a patch to the Debian packages. This hack is ugly but it just runs a script on the build master that fixes up the permissions on the newly uploaded file and the directory housing it.
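Strung together, the six steps above look roughly like this in the factory definition. The repo URL, step names, and paths are illustrative stand-ins, not the exact values from twobit-buildbot.

```python
# Sketch of a BuildFactory for the OpenXT build described above.
from buildbot.process.factory import BuildFactory
from buildbot.steps.source.git import Git
from buildbot.steps.shell import ShellCommand
from buildbot.steps.master import MasterShellCommand
from buildbot.steps.transfer import FileUpload

factory = BuildFactory()
# 1: clean clone of the build scripts (no incremental builds yet)
factory.addStep(Git(repourl='git://',
                    mode='full', method='clobber'))
# 2: slave-local setup script, located via an env var the slave sets
factory.addStep(ShellCommand(name='setup', command='$OXT_SETUP'))
# 3: one ShellCommand per build step so each gets its own row and log
#    in the buildbot UI (step names here are hypothetical)
for step in ['dom0', 'uivm', 'ndvm', 'installer']:
    factory.addStep(ShellCommand(name=step,
                                 command=['./', '-s', step]))
# 4: make room for the output on the master
factory.addStep(MasterShellCommand(
    command=['mkdir', '-p', '/srv/builds/openxt']))
# 5: pull the installer ISO from the slave up to the master
factory.addStep(FileUpload(slavesrc='build/installer.iso',
                           masterdest='/srv/builds/openxt/installer.iso'))
# 6: work around the Wheezy buildbot umask by fixing permissions
factory.addStep(MasterShellCommand(
    command=['chmod', '-R', 'go+rX', '/srv/builds/openxt']))
```

The per-step loop in step 3 is the design choice called out above: running the build script once would work, but separate steps give you separate logs when something breaks.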

The BuildFactory describes how to build something but we still need something to do the building. This is the job of the build slave. The BuilderConfig is an object that associates a build slave with a BuildFactory. When the build master determines that a build needs to be run the buildbot code will go through the available BuilderConfig objects, find the right one, and then walk the build slave through the right series of BuildSteps.
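Wiring a factory to a slave is nearly a one-liner in the master config. The builder and slave names here are made up:

```python
# Associate a build slave with a BuildFactory via a BuilderConfig.
from buildbot.config import BuilderConfig
from buildbot.process.factory import BuildFactory

factory = BuildFactory()  # populated with the OpenXT steps described above

c['builders'] = [
    BuilderConfig(name='openxt-builder',
                  slavenames=['oxt-slave'],
                  factory=factory),
]
```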

Build Slave

Naturally we assume that the build slave executing these steps is capable of building the project which in this case is OpenXT. That is to say you’ve followed the steps on the OpenXT wiki to set up a build machine and that you’ve tested that it works. Further, I’ve only tested this with the buildbot package from Debian Wheezy. The build master can be a simple Wheezy system, no need to jump through the hoops to make it an OXT build system since it won’t be building any code.

The config for a build slave is much simpler than the master’s. It only needs to be able to connect to the master to receive commands. The only really interesting bit of the build slave config is in a few of the environment variables I set.

Each of the OE images my slave builds requires a sort of generic “setup” step where the slave sets some values in a config file. This is simple stuff like the location of a shared OE download directory or something specific to OpenXT like the location of the release signing keys. In each case there’s a script in a bin directory. For OpenXT that’s here.

The path to these scripts doesn’t need to be known by the build master. Instead the build master expects the slave to have a specific environment variable pointing to this script. The build master then tells the slave to execute whatever this variable points to. The build slave only has to set this variable properly in its config like this.
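The trick that makes this work is passing the command to ShellCommand as a single string rather than an argv list: buildbot then runs it through the slave’s shell, which is where the variable gets expanded. A sketch (the variable name OXT_SETUP is my invention, not necessarily what twobit-buildbot uses):

```python
from buildbot.steps.shell import ShellCommand

# Passed as a string, the command is run by /bin/sh on the *slave*, so
# $OXT_SETUP is resolved from the slave's environment. The master never
# needs to know the script's actual path.
setup_step = ShellCommand(name='setup',
                          command='$OXT_SETUP',
                          haltOnFailure=True)
```

The slave’s end of the bargain is just exporting OXT_SETUP to point at its local setup script before the slave daemon starts.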

Wrap Up

That’s just a quick rundown of the buildbot bits I’ve strung together to build my pet projects and OpenXT. There’s one glaring gap in this: how buildbot knows when it needs to run builds. This is usually a pretty simple topic but with OpenXT there were a few things I had to work around. I’ll fill this in next time.

Till then I’ll be keeping the configs for my build master and slave on github and as up to date as possible. Feel free to reproduce this work in your own environment if you’re interested. I may or may not keep the build master exposed publicly in the future. For now it’s just intended to be an example and a convenient tool.

OpenXT: Contingencies Abandoned

This post is long overdue. I’ve been experimenting with using OE as a means to build measured systems for a while now. Back before OpenXT became a reality I was hedging bets and working on some overlapping tech in parallel. Now that OpenXT is available as OSS I’m going to abandon some of this work and shift focus to OpenXT. I do however feel like there should be some record of the work that was done and some explanation as to why I did it and how it relates to OpenXT.

Here goes …

Building systems with security properties

All of this nonsense began with some experimentation in using OE as a means to build measured systems. For some reason I think that a sound integrity measurement architecture is necessary for the construction of software systems with meaningful security properties. All of the necessary component parts were available as open source but there were few examples showing how they could be integrated into a functional whole. Those that did were generally research prototypes and weren’t maintained actively (need references). My work on meta-measured was the first step in my attempt to make the construction of an example measured system public and easily buildable.

Building a measured system with the Xen hypervisor as a primary component was another part of this work. I wasn’t using virtualization for the sake of virtualization though. Xen was a means to an end: its architecture allows for system partitioning in part through the Isolated Driver Domain model like the example I describe here. The NDVM is just the “low hanging fruit” here but it serves as a good example of how OE can be used to build very small Linux VMs that can serve as example device domains. There are ways to build smaller IDDs but IMHO a Linux image < 100MB is probably the point of diminishing returns currently. Hopefully in the future this will no longer be the case and we'll have IDDs based on unikernels or even smaller things.

Small, single purpose systems are important in that they allow us to extend the integrity measurement architecture into more fine-grained system components. Ideally these IDDs can be restarted so that the integrity state of the system can be refreshed on a periodic basis. By disaggregating the Xen dom0 we increase the granularity of our measurements from 1 (dom0) to 1 + the number of disaggregated components. By restarting and remeasuring these components we provide better "freshness" properties for systems that remain on-line for long periods of time.

This of course is all built on the initial root of trust established by hardware and the software components in meta-measured. Disaggregation on the scale of previously published academic work is the end goal though with the function of dom0 reduced to domain construction.

The final piece of this work is to use the available mandatory access control mechanisms to restrict the interactions between disaggregated components. We get this by using the XSM and the reference policy from Xen. Further, there will always be cases where it’s either impossible or impractical to decompose some functions into separate VMs. In these cases the use of the SELinux MAC policy within Linux guests is necessary.

The Plan

So my plan went something like this: Construct OE layers for individual components. Provide reference images for independent test. One of these layers will be a “distro” where the other components can be integrated to become the final product. This ended up taking the form of the following meta layers:

  • meta-measured: boot time measurements using the D-RTM method and TPM utilities
  • meta-virtualization: recipes to build the Xen hypervisor and XSM policy
  • meta-selinux: recipes to build SELinux toolstack and MAC policy
  • meta-integral: distro layer to build platform and service VM images

Some of these meta layers provide a lot more functionality than the description given but I only list the bits that are relevant here.

Where I left off

I had made significant progress on this front but never really finished and didn’t write about the work as a whole. It’s been up on Github in a layer called ‘meta-integral‘ (I know, all the good names were taken) for a while now, and the last time I built it (~5 months ago) it produced a Xen dom0 and an NDVM that boots and runs guests. The hardest work was still ahead: I hadn’t yet integrated SELinux into dom0 and the NDVM, XSM was buildable but, again, not yet integrated, and the bulk of disaggregating dom0 hadn’t even begun.

This work was a contingency though. When I began working on this there had been no progress made or even discussion of getting OpenXT released as OSS. This side project was an outlet for work that I believe needs to be done in the open so that the few of us who think this is important could some day collaborate in a meaningful way. Now that OpenXT is a real thing I believe that this collaboration should happen there.

Enter OpenXT: aka the Future

Now that OpenXT is a reality the need for a distro layer to tie all of this together has largely gone away. The need for ‘meta-integral’ is no more and I’ll probably pull it down off of Github in the near future. The components and OE meta layers that I’ve enumerated above are all still necessary though. As far as my designs are concerned OpenXT will take over only as the distro and this means eliminating a lot of duplication.

In a world where OpenXT hadn’t made it out as OSS I would have had the luxury of building the distro from scratch and keeping it very much in line with the upstream components. But that’s not how it happened (a good thing) so things are a bit different. The bulk of the work that needs to be done for the project to gain momentum now is disentangling these components so that they can be developed in parallel with limited dependencies.

Specifically we duplicate recipes that are upstream in meta-virtualization, meta-selinux and meta-measured. To be fair, OpenXT actually had a lot of these recipes first but there was never any focus on upstreaming them. Eventually someone else duplicated this work in the open source and now we must pay off this technical debt and bring ourselves in-line with the upstream that has formed despite us.

What’s next?

So my work on meta-integral is over before it really started. No tragedy there but I am a bit sad that it never really got off the ground. OpenXT is the future of this work however so goal number one is getting that off the ground.

More to come on that front soon …

Apache VirtualHost config gone after Wheezy to Jessie upgrade

Here’s a fun one that had me running in circles for a while today:

I’ve been running deluge and the deluge-webui on Debian Wheezy for a while now. Pretty solid. I needed to download a torrent using a magnet URI today and deluge-webui on Wheezy won’t do it. This feature was added to the webui in 1.3.4 though so the version in Jessie should work.

I did the typical dist-upgrade song and dance per the usual, but after the upgrade Apache was all hosed up. It was just showing the default example page. All of the access logs that would normally go to my configured virtual host were landing in /var/log/apache2/other_vhosts_access.log, which is all wrong. I started out thinking it was the hostname of the system that got messed up but that was a dead end.

I started making progress when I found the command

apache2ctl -S

This dumps out a bunch of data about your configuration and it basically said that my VirtualHost configuration was empty:

VirtualHost configuration:

Yeah it was basically an empty string. This seemed wrong but I wasn’t sure what to expect really. After banging around a bit longer and getting nowhere I finally decided to just disable and re-enable my site configuration. This was desperation: my site config was already linked into /etc/apache2/sites-enabled so it must have been enabled … right?

a2dissite mysite

But disabling it failed! It gave me some sort of “no such file” error. Whaaaaaa? So I ran the command through strace and it turns out that the new apache2 package on Jessie expects the site config file to have the suffix .conf. Changing the name of my site config fragment fixed this and I was then able to enable the config as expected.

That was unbelievably annoying. Hopefully this will save someone else a few minutes.

First OpenXT build

UPDATE; 2014-11-26 I’ve disabled comments on this post. All discussion about building OpenXT should be on the mailing list:!forum/openxt
UPDATE; 2014-07-25 This page has been superseded by the documentation available on the OpenXT github wiki:
UPDATE: 2014-06-20 added section about using bash instead of dash
UPDATE: 2014-06-19 fix git clone URI
UPDATE: 2014-06-18 add genisoimage package to list of required packages.
UPDATE: 2014-06-18 remove the section on mangling the manifest file now that I’ve upstreamed a patch.
UPDATE: 2014-06-18 to reflect default mirror being available now.
UPDATE: 2014-06-18 to clarify location of STEPS variable and the setupoe step.

With the transition of XT to OpenXT I’m realizing that the mark of most successful open source projects is good tools and great documentation. Right now we’re a bit short on both. The tools to build the core of OpenXT aren’t our own, they’re maintained by the upstream OpenEmbedded and Yocto communities. The documentation to build OpenXT on the other hand is our responsibility. This is a quick recap of my first build of the code that’s up on github with step-by-step instructions so that you can follow along at home.

The Existing Docs

The closest thing to build docs that we have on github is a README left over from a previous attempt to open source this code. That work was done under the name “XenClient Initiative”, or XCI for short. This project was before my time but my understanding is that it wasn’t particularly successful.

I guess you can call OpenXT our second go at the open source thing. The instructions in this file are way out of date. They need to be replaced and hopefully this write-up will be the first step in fixing this.

Build Machine

There’s a lot of technical debt that’s built up over the years in the OpenXT code base. The first and most obvious bit of technical debt is in our build system. We require that the build be done on a 32 bit Debian Squeeze system. The 64 bit architecture may work but it’s untested AFAIK.

We require Squeeze for a number of reasons the most obvious of which is our dependency on the GHC 6.12 compiler. Wheezy ships with version 7 and our toolstack hasn’t been updated to work with the new compiler yet. To the Haskell hackers out there: your help would be much appreciated. No doubt there are other dependencies and issues that would need to be worked around in an upgrade but we know this one to be a specific and prominent issue.

The requirement for a 32 bit build host likely just reflects that only 32 bit build hosts have been tested. Anyone out there who tries a 64 bit build or a build on Wheezy, please report your results so we can get documentation together for the tested and supported build hosts.

Required Packages

The initial list of packages required to build OpenXT can be obtained from the OE wiki. The requirements are:

sed wget cvs subversion git-core coreutils unzip
texi2html texinfo docbook-utils gawk python-pysqlite2
diffstat help2man make gcc build-essential g++
desktop-file-utils chrpath

The list is pretty short as far as build requirements go because OE builds nearly all of the required tools as part of the build. This is a big part of what makes OE so great.

Additionally we require a few extra packages:

ghc guilt iasl quilt bin86 bcc libsdl1.2-dev liburi-perl genisoimage

Packages like guilt and quilt are used in our bitbake patch queue class in the expected way. ghc is the Haskell compiler which is required to … build the Haskell compiler (much like OE requires gcc to build gcc). genisoimage is used by our build scripts to put the final installer ISO together.

The remaining dependencies: iasl, bin86, bcc, libsdl1.2-dev, and liburi-perl are another instance of technical debt. These packages should be built in OE as dependencies of other packages. Instead our recipes take a short cut and require they be installed on the build host. This seems like a smart shortcut but it’s a shortcut that ends in cross-compile misery. This may be what causes issues between 32 and 64 bit build hosts.

A good example of how to fix these issues already exists. If you’ve been following upstream Yocto development, the Xen recipe contributed there gets this right by depending on the dev86-native and iasl-native packages. OpenXT would benefit from pulling in the meta-virtualization layer and using this recipe (thanks Chris!).

Bash vs Bourne

Bitbake recipes contain a lot of shell functions and fragments. These are executed by the host system’s /bin/sh. Unfortunately lots of build metadata (including the OpenXT build metadata) is rife with ‘bashisms’. Because of this, Linux distros that don’t link /bin/sh to /bin/bash will cause builds to fail.

The way to resolve this is laid out in the Ubuntu section of the “OE and Your Distro” docs, as Debian (and thus Ubuntu) uses the dash shell instead of bash by default. Swapping dash for bash is pretty easy thankfully:

sudo dpkg-reconfigure dash

Executing the command above will result in a dialog screen asking you whether or not you want to use dash as your default shell. Select ‘No’ and your system will use bash instead.

I’ve gone ahead and filed a ticket to get ‘bashisms’ out of the first place I ran into them in OpenXT: If you’ve got some time to kill it would be helpful if someone could track down more of our dirty laundry like this, or better yet, send in a pull request to sort some of this out.

Clone and Configure

If you’re following along you should now have a 32 bit Debian Squeeze build host with some additional packages installed. The next step is to clone the base OpenXT git repo:

git clone git://

This will give you a directory named openxt that contains our build scripts. Change into this directory and we’ll take a look at the important files.

Firstly the file you’ll most often customize in here is the .config but you don’t have one yet. Copy the example-config file to .config so we can modify it for our environment:

cp example-config .config

The .config file is read by the script that … does the build. There are a handful of variables in .config that are interesting for our first build. For now we’ll ignore the rest.


We’ll start with STEPS. This one isn’t defined in the example-config but it’s initialized in the build script itself. Since the script imports all variables from the config we can add it to the config manually to get the desired effect.

This variable defines the build steps that will be carried out when is run with no options. The default steps don’t all work yet so we’ll set this to the minimum that we can get away with for now:


I’ve left out the necessary setupoe step because I typically run this one manually and check that my variables get populated in the OE local.conf successfully. You don’t need to do it this way but it may help you get a better understanding of how the configuration works. After I go through the necessary variables we’ll go back to setupoe.


Due to some of the software used in OpenXT being a bit old the upstream mirrors of a few source tarballs are no longer available. Further we require a number of Intel binary ACM modules for TXT to function properly. Intel hides these zips behind a lawyer wall requiring users to accept license terms before they can be downloaded. That’s great CYA from their legal department but it prevents any automated build from downloading them in the build so we have to mirror them ourselves (which is permitted by their license).

When I did my first build the other day the URL from the example configuration file wouldn’t resolve for me. So I set up my own mirror:


This should be fixed so check the default mirror first. If it doesn’t work feel free to clone my mirror in your local environment but do me a favor and go easy on the bandwidth please.

UPDATE: The default mirror is fixed. You should use the default value ( and not my mirror … well really you should set up your own mirror because there’s no way to tell how long the default mirror will stay up and it’s nice to keep redundant traffic to a minimum.

Signing Certs

The topic of signing releases is a huge one so for now we’ll just stick to the minimum work required to get our build signed once it’s done. There are several relevant config variables for this:


We require that each builder create their own keys for development and release builds. In this example I’m using self signed certs so it’s as simple as possible. Use the following commands to create your keys and your self signed certs:

openssl genrsa -out cakey.pem 2048
openssl req -new -x509 -key cakey.pem -out cacert.pem -days 1095

You’ll need two key/cert pairs, one for the automated signing of the build (a ‘dev’ key) and the certificate for a production signing key. All protection of the production signing key is the responsibility of whoever is signing the release. I’ll cover this in another post at another time. For now just make the keys and the certs and put the variables in the .config file.


For those of you who have used OE in a previous life you know how huge an OE build directory can get. Part of this is caused by the OE download cache, the directory where OE caches downloaded source code. In the OE local.conf file this is specified by the DL_DIR variable. We use OE_BUILD_CACHE_DL instead.

Personally, my build system has a RAID SSD setup to keep my builds as fast as possible, but I can’t afford enough SSDs to put my DL_DIR on that storage. Typically I’ll use larger but slower storage (an NFS RAID array) for my download mirror and share it between projects. Often I’ll just link that slower storage mount directly into my build tree to keep everything in one place. Do whatever works best for you and remember this is completely optional: you can leave this out and the build will just use a directory in your build tree, but that will make your build much larger:
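As a sketch of that linking trick, with placeholder paths (in real life CACHE would point at the big NFS/RAID mount):

```shell
# shared download cache on the slow volume, linked into the build tree
CACHE=./oe-dl-cache
BUILD=./openxt-build
mkdir -p "$CACHE" "$BUILD"
ln -sfn "$(readlink -f "$CACHE")" "$BUILD/downloads"
# and point the .config at it:
# OE_BUILD_CACHE_DL="/path/to/oe-dl-cache"
```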


Even with the download cache on a separate volume an OpenXT build takes up a lot of disk space. This minimal build of just the core service VMs and no in-guest tools weighs in at 74G. My download cache is shared between all of my OE projects so I can’t say exactly how large the cache for a fresh OpenXT build will be. My combined cache is ~20G but I’d expect the files for OpenXT are a small portion of that.

Start Your Engines

That’s about all it should take. Next I run the script, explicitly executing the setupoe step:

./ -s setupoe

This step clones all of the git repos you need to build OpenXT (mostly OE meta-layers) and then processes your .config file to generate the OE local.conf. You can take a look at this file to see the variables and make sure they’re set to the right thing. This can be very helpful when debugging or if you want to manually change them for some reason.

After OE has been setup you can just run the script and then go drink coffee for the next 6 hours:

./ | tee build.log

If all goes well at the end of this you’ll have an ISO with the OpenXT installer on it ready to go here:


There’s no magic here. The comma-separated list of steps we put into the STEPS variable can all be invoked one at a time, the same way we ran setupoe. So if you want to build just the dom0 initramfs you can do so like this:

./ -s initramfs | tee initramfs.log

Piping the build output to a file is always useful when things break (and they will break). My recommendation would be to build a few steps individually before you go and do a full build. I’ll probably do another post about the individual build steps and how all that works next.

Now go forth, build, install, and report bugs.

XT is Open

The last few years of my professional life have been, well, interesting … as in the apocryphal Chinese curse. Going into the details of the business end of this is beyond the scope of what belongs on this website. The bits related to the technology of XenClient XT, however, have been written about here in the past (see my tags for XenClient and my work on measured launch). The technology is always the most interesting and important stuff anyway, and this time it’s no different.

This post marks an important milestone: the core technology that makes up XenClient XT is now open source as OpenXT. In my opinion this moment is long overdue. There have been several attempts to move this project to an open source development model in the past. Most of these efforts were before my time so I can’t claim any fame there. Those working on this project before me who realized that this technology would be best served by an open development model were indeed ahead of their time, and well ahead of the “decision makers” who held them back.

My only hope now is that we aren’t too late. That there is still some value in this code and that the world hasn’t passed us by. That the small community of people who care about the intersection of virtualization and security will rally and contribute to the code base to help us pay off the technical debt that has built up over the years and push forward new features and architectural advancements.

The new OpenXT tag will track work on this project. I filed the first bug against our bitbake metadata just now so hopefully it’s the first of many filed and fixed in the future. Happy hacking!

building HVM Xen guests

On my Xen systems I’ve run pretty much 99% of my Linux guests paravirtualized (PV). Mostly this was because I’m lazy. Setting up a PV guest is super simple. No need for partitions, boot loaders or any of that complicated stuff. Setting up a PV Linux guest is generally as simple as setting up a chroot. You don’t even need to install a kernel.

There’s been a lot of work over the past 5+ years to add stuff to processors and Xen to make the PV extensions to Linux unnecessary. After checking out a presentation by Stefano Stabellini a few weeks back I decided I’m long overdue for some HVM learning. Since performance of HVM guests is now better than PV for most cases it’s well worth the effort.

This post will serve as my documentation for setting up HVM Linux guests. My goal was to get an HVM Linux installed using typical Linux tools and methods like LVM and chroots. I explicitly was trying to avoid using RDP or anything that isn’t a command-line utility. I wasn’t completely successful at this but hopefully I’ll figure it out in the next few days and post an update.

Disks and Partitions

Like every good Linux user LVMs are my friend. I’d love a more flexible disk backend (something that could be sparsely populated) but blktap2 is pretty much unmaintained these days. I’ll stop before I fall down that rabbit hole but long story short, I’m using LVMs to back my guests.

There’s a million ways to partition a disk. Generally my VMs are single-purpose and simple so a simple partitioning scheme is all I need. I haven’t bothered with extended partitions as I only need 3. The layout I’m using is best described by the output of sfdisk:

# partition table of /dev/mapper/myvg-hvmdisk
unit: sectors
/dev/mapper/myvg-hvmdisk1 : start=     2048, size=  2097152, Id=83
/dev/mapper/myvg-hvmdisk2 : start=  2099200, size=  2097152, Id=82
/dev/mapper/myvg-hvmdisk3 : start=  4196352, size= 16775168, Id=83
/dev/mapper/myvg-hvmdisk4 : start=        0, size=        0, Id= 0
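The offsets in that dump line up end-to-end; a quick sanity check of the sector math (sfdisk works in 512-byte sectors):

```python
# each partition starts where the previous one ends; sizes are in
# 512-byte sectors, so 2097152 sectors = 1 GiB
boot_start, boot_size = 2048, 2097152
swap_start = boot_start + boot_size        # 2099200, matches the dump
swap_size = 2097152
root_start = swap_start + swap_size        # 4196352, matches the dump
root_size = 16775168                       # an 8191 MiB rootfs
assert swap_start == 2099200 and root_start == 4196352
```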

That’s 3 partitions: the first for /boot, the second for swap, and the third for the rootfs. Pretty simple. Once the partition table is written to the LVM volume we need to get the kernel to read the new partition table and create devices for these partitions. This can be done with either the partprobe command or kpartx. I went with kpartx:

$ kpartx -a /dev/mapper/myvg-hvmdisk

After this you’ll have the necessary device nodes for all of your partitions. If you use kpartx as I have these device files will have a digit appended to them like the output of sfdisk above. If you use partprobe they’ll have the letter ‘p’ and a digit for the partition number. Other than that I don’t know that there’s a difference between the two methods.

Then get the kernel to refresh the links in /dev/disk/by-uuid (we’ll use these later):

$ udevadm trigger

Now we can set up the filesystems we need:

$ mkfs.ext2 /dev/mapper/myvg-hvmdisk1
$ mkswap /dev/mapper/myvg-hvmdisk2
$ mkfs.ext4 /dev/mapper/myvg-hvmdisk3

Install Linux

Installing Linux on these partitions is just like setting up any other chroot. The first step is mounting everything; the following script fragment does the job:

# mount VM disks (partitions in new LV)
mkdir -p /media/hdd0
mount /dev/mapper/myvg-hvmdisk3 /media/hdd0
mkdir -p /media/hdd0/boot
mount /dev/mapper/myvg-hvmdisk1 /media/hdd0/boot
# bind dev/proc/sys/tmpfs file systems from the host
for dir in proc sys dev run run/lock dev/pts; do
    mkdir -p "/media/hdd0/${dir}"
    mount --bind "/${dir}" "/media/hdd0/${dir}"
done

Now that all of the mounts are in place we can debootstrap an install into the chroot:

$ sudo debootstrap wheezy /media/hdd0/

We can then chroot to the mountpoint for our new VMs rootfs and put on the finishing touches:

$ chroot /media/hdd0


Unlike a PV guest, you’ll need a bootloader to get your HVM up and running. A first step in getting the bootloader installed is figuring out which disk will be mounted and where. This requires setting up your fstab file.

At this point we start to run into some awkward differences between our chroot and what our guest VM will look like once it’s booted. Our chroot reflects the device layout of the host on which we’re building the VM. This means that the device names for these disks will be different once the VM boots. On our host they’re all under the LVM /dev/mapper/myvg-hvmdisk and once the VM boots they’ll be something like /dev/xvda.

The easiest way to deal with this is to set our fstab up using UUIDs. This would look something like this:

# / was on /dev/xvda3 during installation
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /               ext4    errors=remount-ro 0       1
# /boot was on /dev/xvda1 during installation
UUID=yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy /boot           ext2    defaults        0       2
# swap was on /dev/xvda2 during installation
UUID=zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz none            swap    sw              0       0

By using UUIDs we can make our fstab accurate even in our chroot.

After this we need to set up the /etc/mtab file needed by lots of Linux utilities. I found that when installing Grub2 I needed this file in place and accurate.

Some data I’ve found on the web says to just copy or link the mtab file from the host into the chroot but this is wrong. If a utility consults this file to find the device file that’s mounted as the rootfs it will find the device holding the rootfs for the host, not the device that contains the rootfs for our chroot.

The way I made this file was to copy it off of the host where I’m building the guest VM and then modify it for the guest. Again I’m using UUIDs to identify the disks / partitions for the rootfs and /boot to keep from having data specific to the host platform leak into the guest. My final /etc/mtab looks like this:

rootfs / rootfs rw 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,relatime,size=10240k,nr_inodes=253371,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=203892k,mode=755 0 0
/dev/disk/by-uuid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx / ext4 rw,relatime,errors=remount-ro,user_xattr,barrier=1,data=ordered 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /run/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=617480k 0 0
/dev/disk/by-uuid/yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy /boot ext2 rw,relatime,errors=continue,user_xattr,acl 0 0

Finally we need to install both a kernel and the grub2 bootloader:

$ apt-get install linux-image-amd64 grub2

Installing Grub2 is a pain. All of the additional disks kicking around in my host confused the hell out of the grub installer scripts. I was given the option to install grub on a number of these disks and none was the one I wanted to install it on.

In the end I had to select the option to not install grub on any disk and fall back to installing it by hand:

$ grub-install --force --no-floppy --boot-directory=/boot /dev/disk/by-uuid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

And then generate the grub config file:


If all goes well the grub boot loader should now be installed on your disk and you should have a grub config file in your chroot /boot directory.

Final Fixups

Finally you’ll need to log into the VM. If you’re confident it will boot without you having to do any debugging then you can just configure the ssh server to start up and throw a public key in the root homedir. If you’re like me something will go wrong and you’ll need some boot logs to help you debug. I like enabling the serial emulation provided by qemu for this purpose. It’ll also allow you to login over serial which is convenient.

This is pretty standard stuff. No paravirtual console through the xen console driver. The qemu emulated serial console will show up at ttyS0 like any physical serial hardware. You can enable serial interaction with grub by adding the following fragment to /etc/default/grub:

GRUB_SERIAL_COMMAND="serial --speed=38400 --unit=0 --word=8 --parity=no --stop=1"

To get your kernel to log to the serial console as well set the GRUB_CMDLINE_LINUX variable thusly:

GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,38400n8"

Finally to get init to start a getty with a login prompt on the console add the following to your /etc/inittab:

T0:23:respawn:/sbin/getty -L ttyS0 38400 vt100

Stefano Stabellini has done another good write-up on the details of using both the PV and the emulated serial console here. Give it a read for the gory details.

Once this is all done you need to exit the chroot, unmount all of those bind mounts and then unmount your boot and rootfs from the chroot directory. Once we have a VM config file created this VM should be bootable.

VM config

Then we need a configuration file for our VM. This is what my generic HVM template looks like. I’ve disabled all the graphical stuff (sdl=0, stdvga=0, and vnc=0), enabled the emulated serial console (serial='pty'), and set xen_platform_pci=1 so that my VM can use PV drivers.

The other stuff is standard for HVM guests; things like memory, name, and uuid should be customized for your specific installation. The uuid and the MAC address for your virtual NIC should be unique. There are websites out there that will generate these values. Xen has its own prefix for MAC addresses so use a generator to make a proper one.
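Xen’s assigned MAC prefix (OUI) is 00:16:3e; if you’d rather not trust a random website, a throwaway generator is trivial:

```python
import random

def xen_mac():
    """Generate a random MAC address under Xen's 00:16:3e OUI."""
    tail = [random.randint(0, 255) for _ in range(3)]
    return '00:16:3e:' + ':'.join('{0:02x}'.format(b) for b in tail)

print(xen_mac())
```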

builder = "hvm"
memory = "2048"
name = "myvm"
uuid = "uuuuuuuu-uuuu-uuuu-uuuu-uuuuuuuuuuuu"
vcpus = 1
cpus = '0-7'
# no graphical console, emulated serial, PV drivers enabled
sdl = 0
stdvga = 0
vnc = 0
serial = 'pty'
xen_platform_pci = 1
# disk and vif values below are illustrative; fill in your own LV and MAC
disk = [ 'phy:/dev/mapper/myvg-hvmdisk,xvda,w' ]
vif = [ 'mac=00:16:3e:xx:xx:xx,bridge=xenbr0' ]


Booting this VM is just like booting any PV guest:

xl create -c /etc/xen/vms/myvm.cfg

I’ve included the -c option to attach to the VM’s serial console, and ideally we’d be able to see grub and the kernel dump a bunch of data as the system boots.


I’ve tested these instructions twice now on a Debian Wheezy system with Xen 4.3.1 installed from source. Both times Grub installs successfully but fails to boot. After enabling VNC for the VM and connecting with a viewer it’s apparent that the VM hangs when SeaBIOS tries to kick off grub.

As a work-around, both times I’ve booted the VM from a Debian rescue ISO, set up a chroot much like in these instructions (the disk is now /dev/xvda though) and re-installed Grub. This does the trick and rebooting the VM from the disk then works. So something in my instructions with respect to installing Grub may be wrong, but I think that’s unlikely as they’re confirmed by numerous other “install grub in a chroot” instructions on the web.

The source of the problem is speculation at this point. Part of me wants to dump the first 2M of my disk both after installing it using these instructions and then again after fixing it with the rescue CD. Now that I think about it the version of Grub installed in my chroot is probably a different version than the one on the rescue CD so that could have something to do with it.

Really though, I’ll probably just install syslinux and see if that works first. My experiences with Grub have generally been bad any time I try to do something out of the ordinary. It’s incredibly complicated and generally I just want something simple like syslinux to kick off a very simple VM.

I’ll post an update once I’ve got to the bottom of this mystery. Stay tuned.

tboot 1.8.0 and UEFI

Version 1.8.0 of tboot was released a while back. This is a pretty big deal as the EFI support has been a long time coming. Anyone wanting to use tboot on a modern piece of hardware using EFI has been out of luck till now.

For the past week or so I’ve been slowly figuring out how to build an OE image with grub-efi, building the new version of tboot and then debugging an upgrade in meta-measured. My idea of a good time for sure.

As always the debugging was the hardest part; building the software was easy. For the most part tboot EFI “just worked” … after I figured out all the problems with kernel versions and grub configuration. The hard parts were:

  • realizing the Linux kernel image had to be the latest 3.14 version
  • debugging new kernel version
  • configuring grub
  • which modules needed to be built into grub

If you want the details you can see the full history on the meta-measured github. The highlights are pretty simple:

multiboot2 in oe-core grub-efi

The grub-efi recipe in oe-core is a bit rigid. I’ve pushed a patch upstream that allows another layer (like meta-measured) to modify which grub modules are built into the grub EFI executable. It’s a tiny change but it makes all of the difference:

This lets us add modules to the grub EFI executable. I also had to cobble together a working grub multiboot2 configuration.
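With that patch in place, a consuming layer can extend the module list from a bbappend. A sketch — GRUB_BUILDIN is my recollection of the variable name in the oe-core recipe, so check your branch before copying this:

```
# grub-efi_%.bbappend in the consuming layer (e.g. meta-measured)
GRUB_BUILDIN_append = " multiboot2"
```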

linux-yocto v3.14

Pairing this with the older 3.10 Yocto Linux kernel image will allow you to get through grub and tboot but the kernel will panic very early in the boot process. The newer 3.14 doesn’t suffer from this limitation.

The measured reference image in meta-measured used aufs to keep from having to mount the rootfs read/write. This is to keep the rootfs hash from changing across boots. I wrote the whole thing up a while back. Anyways, aufs doesn’t work in 3.14, so I took the extra few minutes to migrate the image to use the read-only-rootfs IMAGE_FEATURE. This is a good thing regardless; aufs was being used as a shortcut and I hadn’t had the drive to fix this till it broke. Problem solved.

rough edges

I still haven’t figured out all of the details in grub and its configuration. The current configuration in meta-measured is sufficient to boot but something gets screwed up in setting up VGA output for tboot and the early kernel output. Currently grub displays an error message indicating that tboot won’t get a console and no VGA output will be shown till the kernel loads the DRM driver. Output is still available on the serial console so if you’ve got a reasonable test setup you can get all the data you need for debugging.

No lies, I’m a bit afraid of grub, guess I’ll have to get over it. The measured-image-bootimg has a menuentry for tboot and a normal linux boot. Booting the kernel using the linux and initrd grub commands provide normal VGA output but the multiboot2 config required by tboot does not. I take this to mean that grub is capable of doing all of the necessary VGA stuff but that it can’t pass this data through to tboot via multiboot2. More to come on this soon hopefully.

Till then, if you build this stuff and have feedback leave it here.

OE image package part 2

I spent the weekend in bed with a cold … and with a laptop hacking on ways to get OE rootfs images packaged and installed in other OE rootfs images. My solution isn’t great but it’s functional and it doesn’t require that every image be built in series. But before I get too far let’s go over what I set out to achieve and get a rough set of requirements:

  • I’m building a rootfs that itself contains other rootfs’. The inner rootfs’ will be VMs. The outer rootfs will be the host with the hypervisor (Xen dom0).
  • Build speeds are important so I’d like to share as much of the build infrastructure between VMs as possible.
  • Running builds in parallel is a good thing. Running all builds as serial operations is a non-starter. Bonus points for being able to distribute them across multiple hosts.
  • Having to implement a pile of shell script outside of bitbake to make this work means you’re doing it wrong. The script that automates this build should be doing little more than calling bitbake.

First things first: my solution isn’t perfect. It does work pretty well though and achieves much of the above. Below is a quick visual of what I intend for the end product to support:

rootfs image relationships

On the left is the simple case I’m working to support currently. The boxes represent the root file systems (rootfs) that bitbake is churning out. The lines pointing from one rootfs to another represent one rootfs being packaged in another. dom0 here would be a live image and it would boot the NDVM automatically. Naturally the NDVM rootfs must be contained within dom0 for this to work. The right hand side is an eventual goal.

To support what most people think of as a ‘distro’ you need an installer to lay things down on a physical disk, and if users expect to be able to run arbitrary workloads / VMs then they’ll want the whole disk available. In this scenario the installer image rootfs will have the image packages for the VMs installed in it (including dom0!). The installer then does its thing laying dom0 down in a partition, but it can also drop the supporting VM images into another partition. After installation, dom0 is booted from physical media and it will be able to boot these supporting VMs.

Notice the two level hierarchical relationship between the rootfs images in the diagram. The rootfs’ on the lower part of the diagram are completely independent and thus can be built in parallel. This will make them easily distributed across multiple build systems. Now on to some of the methods I tried to realize this goal and eventually one that worked!

Changing DISTRO in a build tree

The first thing I played around with was rewriting my local.conf between image builds to change the DISTRO. I use different DISTRO configs to make the package customizations that differentiate my images. This would allow me to add a simple IMAGE_POSTPROCESS_COMMAND to copy service VM rootfs images into the outer image (again, dom0 or an installer).

I had hoped I’d be able to pull this off and just have bitbake magically pick up the differences so I could build multiple images in the same build tree. This would make my image builds serial but possibly very efficient. This caused several failures in my tests however so I decided it would be best to keep separate builds for my individual images. I should probably find the right mailing lists to help track down the root cause of this but I expect this is well outside of the ‘supported’ bitbake / OE use cases.

Copying data across build trees

As a fall-back I came up with a hack in which I copy the needed build artifacts (rootfs & kernel image) to a common location as a post processing step in the image recipe. I’ve encapsulated this in a bbclass in anticipation of using the same pattern for other VM images. I’ve called this class integral-image-export.bbclass:

inherit core-image

do_export() {
    manifest_install() {
        if [ ! -z "$1" ]; then
            install -m 0644 "$1" "$4"
            printf "%s *%s\n" "$(sha256sum --binary $1 | awk '{ print $1 }')" "$2" >> $3
        fi
    }
    # only do export if export dir is defined
    if [ ! -z "${INTEGRAL_EXPORT_DIR}" ]; then
        ROOT="${INTEGRAL_EXPORT_DIR}/${PN}-$(date --utc +%Y-%m-%dT%H:%M:%S.%NZ)"
        mkdir -p ${ROOT}
        manifest_install "${KERN_PATH}" "${KERN_FILE}" "${MANIFEST}" "${ROOT}"
        manifest_install "${ROOTFS}" "${FS_FILE}" "${MANIFEST}" "${ROOT}"
    fi
}
addtask export before do_build after do_rootfs

It lives here. So by having my NDVM image inherit this class, and properly defining INTEGRAL_EXPORT_DIR in my build’s local.conf, the NDVM image recipe will copy these build artifacts out of the build tree.
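In local.conf that’s a single assignment; the path is whatever shared storage you use (the example later in this post assumes /mnt/integral):

```
INTEGRAL_EXPORT_DIR = "/mnt/integral"
```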

Notice that the destination directory has an overly precise time stamp as part of its name. This is an attempt to create unique identifiers for images without having to track incrementing build numbers. Also worth noting is the manifest_install function. Basically this generates a file in the same format as the sha*sum utilities with the intent of those programs being able to verify the manifest.
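The manifest format is easy to reproduce elsewhere; here is a minimal Python sketch of writing and checking such a file (the function names are mine, not part of the build):

```python
import hashlib
import os

def manifest_line(path, name):
    """Produce a line in `sha256sum --binary` format: "<hash> *<name>"."""
    with open(path, 'rb') as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return '{0} *{1}'.format(digest, name)

def verify_manifest(manifest_path, root):
    """Re-hash each listed file and compare, like `sha256sum -c`."""
    with open(manifest_path) as f:
        for line in f:
            digest, name = line.rstrip('\n').split(' *')
            with open(os.path.join(root, name), 'rb') as data:
                if hashlib.sha256(data.read()).hexdigest() != digest:
                    return False
    return True
```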

Eventually I think it would be useful for a manifest to contain data about the meta layers that went into building the image and the hashes of the git commit checked out at the time of the build. This later bit will be useful if a build ever has to be recreated. Not something that’s necessary yet however.

Consuming exported images

After exporting these build artifacts we have to cope with other images that want to consume them. My main complaint about using a script outside of my build tree to place images within one another is that I’d have to re-size existing file systems. Bitbake already builds file systems so resizing them from an external script seemed very ugly. Further, changes to the image types built by bitbake (ext3/iso/hddimg etc) would have to be coordinated with said external script. Very ugly indeed.

The most logical solution was to create a new recipe as a way to package the existing build artifacts into a package that can be consumed by an image. By ‘package’ I mean your typical ipk or rpm. This allows bitbake to continue to do all of the heavy lifting in image building for us. Assuming the relationships between images shown above, it allows the outer image to include the image package using the standard IMAGE_INSTALL mechanism. That feels borderline elegant compared to rewriting the generated file systems IMHO.

So from the last section we have builds that are pumping out build artifacts and for the case of our example we’ll say they land in /mnt/integral/image-$stamp where $stamp is a unique time stamp. On the other hand we need to create a recipe that consumes the artifacts (I’ll call it an ‘image package recipe’ from here out) in these directories. Typically in a bitbake recipe you’ll provide a URI to your source code in the SRC_URI variable and define the files that go into the image using FILES_${PN}. These are generally defined statically in the recipe. Our case is weird in that we want the image package recipe to grab the latest image exported by some other build. So we must dynamically generate these variables.

I’ve never seen these variables generated dynamically (aside from the use of the PN and PV variables in URIs), but it’s surprisingly easy: bitbake supports anonymous Python functions that run when the recipe is parsed. This happens before any tasks execute, so setting SRC_URI and PV in such a function works quite well. The method for determining the latest image our builds have exported is a simple directory listing and sorting operation:

python() {
    import glob, os, subprocess

    # check for valid export dir
    INTEGRAL_EXPORT_DIR = d.getVar ('INTEGRAL_EXPORT_DIR', True)
    if not INTEGRAL_EXPORT_DIR:
        bb.note ('INTEGRAL_EXPORT_DIR is empty')
        return 0
    if not os.path.isdir (INTEGRAL_EXPORT_DIR):
        bb.fatal ('INTEGRAL_EXPORT_DIR is set, but not a directory: {0}'.format (INTEGRAL_EXPORT_DIR))
        return 1

    PN = d.getVar ('PN', True)
    LIBDIR = d.getVar ('libdir', True)
    XENDIR = d.getVar ('XENDIR', True)
    VMNAME = d.getVar ('VMNAME', True)

    # find latest ndvm and verify hashes
    IMG_NAME = PN[:PN.rfind ('-')]
    DIR_LIST = glob.glob ('{0}/{1}*'.format (INTEGRAL_EXPORT_DIR, IMG_NAME))
    DIR_LIST.sort (reverse=True)
    DIR_SAVE = os.getcwd ()
    os.chdir (DIR_LIST [0])
    try:
        DEV_NULL = open ('/dev/null', 'w')
        subprocess.check_call ('sha256sum -c manifest', stdout=DEV_NULL, shell=True)
    except subprocess.CalledProcessError:
        return 1
    finally:
        DEV_NULL.close ()
        os.chdir (DIR_SAVE)

    # build up SRC_URI and FILES_${PN} from latest NDVM image
    d.appendVar ('SRC_URI', 'file://{0}/*'.format (DIR_LIST [0]))
    d.appendVar ('FILES_{0}'.format (PN), ' {0}/{1}/{2}/*'.format (LIBDIR, XENDIR, VMNAME))

    # set up ${S}
    WORKDIR = d.getVar ('WORKDIR', True)
    d.setVar ('S', '{0}/{1}'.format (WORKDIR, DIR_LIST [0]))
    return 0
}

If you’re interested in the full recipe for this image package you can find it here:

The ‘manifest’ described above is also verified and processed. Using the file format of the sha256sum utility is a cheap approximation of the OE SRC_URI[sha256sum] metadata. This is a pretty naive approach to finding the “right” image to package as it doesn’t give the outer image much say over which inner image to pull in: it just grabs the latest. Some mechanism for the consuming image to specify which image it consumes would be useful.
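One possible mechanism, sketched here, would let the consuming recipe pin a specific export stamp and fall back to the latest otherwise. The `stamp` argument is hypothetical — nothing in the recipes supports it today — and the directory names follow the `<image>-<timestamp>` convention from the export class:

```python
import glob
import os

def select_export(export_dir, img_name, stamp=None):
    """Pick an exported image dir: an exact stamp if pinned, else the newest.

    Relies on the ISO 8601 stamps in the directory names sorting
    lexicographically in time order.
    """
    if stamp is not None:
        path = os.path.join(export_dir, '{0}-{1}'.format(img_name, stamp))
        return path if os.path.isdir(path) else None
    dirs = sorted(glob.glob(os.path.join(export_dir, img_name + '-*')),
                  reverse=True)
    return dirs[0] if dirs else None
```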

So that’s about it. I’m pretty pleased with the outcome but time will tell how useful this approach is. Hopefully I’ll get a chance to see if it scales well in the future. Throw something in the comments if you get a chance to play around with this or have thoughts on the topic.

OE image package

Here’s a fun problem that I don’t yet have a solution to: I want to build a single image with OE. This image will be my dom0. I want to include other images in this image. That is to say I want to package service VMs as part of / in my dom0.

All of the research I’ve done up till now (all 30 minutes of it) points to this having never been done before. I could be using the wrong keywords, but the ones I tried turned up nothing on the respective OE and Yocto mailing lists. There seem to be a huge number of pitfalls here, including things like changing the DISTRO_FEATURES in effect for the images as well as selecting image-specific files for packages. On a few occasions I’ve used the distro name as a way to select specific configuration files like an fstab or interfaces.

What I want is to run bitbake once for the dom0 image and have it build all the other images and install them as packages in dom0. So I’d need to have recipes that actually package the images so they can be installed in another image. I think that will be the easy part.

The hard part will be making packages specific to each image with different files specific to the image. The only thing I can come up with for this is to play ugly tricks like building each VM image with a different MACHINE type but I’m not even sure if that will work. I guess all I can do for now is to experiment a bit and get on the mailing list to make sure I’m not duplicating work that’s already been done. This could get ugly.