tboot 1.8.0 and UEFI

Version 1.8.0 of tboot was released a while back. This is a pretty big deal as the EFI support has been a long time coming. Anyone wanting to use tboot on a modern piece of hardware using EFI has been out of luck till now.

For the past week or so I’ve been slowly figuring out how to build an OE image with grub-efi, building the new version of tboot and then debugging an upgrade in meta-measured. My idea of a good time for sure.

As always, the debugging was the hardest part; building the software was easy. For the most part tboot EFI “just worked” … after I figured out all the problems with the kernel version and grub configuration. The hard parts were:

  • realizing the Linux kernel image had to be the latest 3.14 version
  • debugging the new kernel version
  • configuring grub
  • figuring out which modules needed to be built into grub

If you want the details you can see the full history on the meta-measured github. The highlights are pretty simple:

multiboot2 in oe-core grub-efi

The grub-efi recipe in oe-core is a bit rigid. I’ve pushed a patch upstream that allows another layer (like meta-measured) to modify which grub modules are built into the grub EFI executable. It’s a tiny change but it makes all of the difference:

http://lists.openembedded.org/pipermail/openembedded-core/2014-April/091768.html

This lets us add modules to the grub EFI executable. I also had to cobble together a working grub multiboot2 configuration.

linux-yocto v3.14

Pairing tboot 1.8.0 with the older 3.10 linux-yocto kernel will get you through grub and tboot, but the kernel will panic very early in the boot process. The newer 3.14 kernel doesn’t suffer from this limitation.

The measured reference image in meta-measured used aufs to keep from having to mount the rootfs read/write. This is to keep the rootfs hash from changing across boots. I wrote the whole thing up a while back: http://twobit.us/blog/2013/01/meta-measured/. Anyways aufs doesn’t work in 3.14 so I took the extra few minutes to migrate the image to use the read-only-rootfs IMAGE_FEATURE. This is a good thing regardless, aufs was being used as a shortcut. I hadn’t had the drive to fix this till it broke. Problem solved.

rough edges

I still haven’t figured out all of the details in grub and its configuration. The current configuration in meta-measured is sufficient to boot but something gets screwed up in setting up VGA output for tboot and the early kernel output. Currently grub displays an error message indicating that tboot won’t get a console and no VGA output will be shown till the kernel loads the DRM driver. Output is still available on the serial console so if you’ve got a reasonable test setup you can get all the data you need for debugging.

No lies, I’m a bit afraid of grub; guess I’ll have to get over it. The measured-image-bootimg has a menuentry for tboot and a normal Linux boot. Booting the kernel using the linux and initrd grub commands provides normal VGA output but the multiboot2 config required by tboot does not. I take this to mean that grub is capable of doing all of the necessary VGA stuff but that it can’t pass this data through to tboot via multiboot2. More to come on this soon hopefully.

Till then, if you build this stuff and have feedback leave it here.

OE image package part 2

I spent the weekend in bed with a cold … and with a laptop hacking on ways to get OE rootfs images packaged and installed in other OE rootfs images. My solution isn’t great but it’s functional and it doesn’t require that every image be built in series. But before I get too far let’s go over what I set out to achieve and get a rough set of requirements:

  • I’m building a rootfs that itself contains other rootfs’. The inner rootfs’ will be VMs. The outer rootfs will be the host with the hypervisor (Xen dom0).
  • Build speeds are important so I’d like to share as much of the build infrastructure between VMs as possible.
  • Running builds in parallel is a good thing. Running all builds as serial operations is a non-starter. Bonus points for being able to distribute them across multiple hosts.
  • Having to implement a pile of shell script outside of bitbake to make this work means you’re doing it wrong. The script that automates this build should be doing little more than calling bitbake.

First things first: my solution isn’t perfect. It does work pretty well though and achieves much of the above. Below is a quick visual of what I intend for the end product to support:

rootfs image relationships

On the left is the simple case I’m working to support currently. The boxes represent the root file systems (rootfs) that bitbake is churning out. The lines pointing from one rootfs to another represent one rootfs being packaged in another. dom0 here would be a live image and it would boot the NDVM automatically. Naturally the NDVM rootfs must be contained within dom0 for this to work. The right hand side is an eventual goal.

To support what most people think of as a ‘distro’ you need an installer to lay things down on a physical disk, and if users expect to be able to run arbitrary workloads / VMs then they’ll want the whole disk available. In this scenario the installer image rootfs will have the image packages for the VMs installed in it (including dom0!). The installer then does its thing laying dom0 down in a partition, but it can also drop the supporting VM images into another partition. After installation, dom0 is booted from physical media and it will be able to boot these supporting VMs.

Notice the two level hierarchical relationship between the rootfs images in the diagram. The rootfs’ on the lower part of the diagram are completely independent and thus can be built in parallel, which makes them easy to distribute across multiple build systems. Now on to some of the methods I tried to realize this goal and, eventually, the one that worked!

Changing DISTRO in a build tree

The first thing I played around with was rewriting my local.conf between image builds to change the DISTRO. I use different DISTRO configs to make the package customizations that differentiate my images. This would allow me to add a simple IMAGE_POSTPROCESS_COMMAND to copy service VM rootfs images into the outer image (again, dom0 or an installer).

I had hoped I’d be able to pull this off and just have bitbake magically pick up the differences so I could build multiple images in the same build tree. This would make my image builds serial but possibly very efficient. This caused several failures in my tests however so I decided it would be best to keep separate builds for my individual images. I should probably find the right mailing lists to help track down the root cause of this but I expect this is well outside of the ‘supported’ bitbake / OE use cases.

Copying data across build trees

As a fall-back I came up with a hack in which I copy the needed build artifacts (rootfs & kernel image) to a common location as a post processing step in the image recipe. I’ve encapsulated this in a bbclass in anticipation of using the same pattern for other VM images. I’ve called this class integral-image-export.bbclass:

inherit core-image
 
do_export() {
    manifest_install() {
        if [ ! -z "$1" ]; then
            install -m 0644 "$1" "$4"
            printf "%s *%s\n" "$(sha256sum --binary $1 | awk '{ print $1 }')" "$2" >> $3
        fi
    }
 
    # only do export if export dir is defined
    if [ ! -z "${INTEGRAL_EXPORT_DIR}" ]; then
        ROOT="${INTEGRAL_EXPORT_DIR}/${PN}-$(date --utc +%Y-%m-%dT%H:%M:%S.%NZ)"
        FS_FILE="${IMAGE_BASENAME}-${MACHINE}.ext3"
        KERN_FILE="${KERNEL_IMAGETYPE}-${MACHINE}.bin"
        KERN_PATH="${DEPLOY_DIR_IMAGE}/${KERN_FILE}"
        MANIFEST="${ROOT}/manifest"
        mkdir -p ${ROOT}
        manifest_install "${KERN_PATH}" "${KERN_FILE}" "${MANIFEST}" "${ROOT}"
        manifest_install "${ROOTFS}" "${FS_FILE}" "${MANIFEST}" "${ROOT}"
    fi
}
 
addtask export before do_build after do_rootfs

It lives here https://github.com/flihp/meta-integral/blob/master/classes/integral-image-export.bbclass. So by having my NDVM image inherit this class, and properly defining the INTEGRAL_EXPORT_DIR in my build’s local.conf, the NDVM image recipe will copy these build artifacts out of the build tree.

Notice that the destination directory has an overly precise time stamp as part of its name. This is an attempt to create unique identifiers for images without having to track incrementing build numbers. Also worth noting is the manifest_install function: it generates a file in the same format the sha*sum utilities produce, so those programs can be used to verify the manifest.
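If it helps to see that format concretely, here’s a small Python sketch (purely illustrative, not part of the bbclass) that verifies a manifest the same way sha256sum -c would; the file layout it assumes is the one described above:

import hashlib
import os
import sys

def verify_manifest(manifest_path):
    # each manifest line looks like: <hex sha256> *<file name>
    base = os.path.dirname(manifest_path)
    with open(manifest_path) as manifest:
        for line in manifest:
            expected, name = line.rstrip('\n').split(' *', 1)
            with open(os.path.join(base, name), 'rb') as artifact:
                actual = hashlib.sha256(artifact.read()).hexdigest()
            if actual != expected:
                return False
    return True

if __name__ == '__main__':
    sys.exit(0 if verify_manifest(sys.argv[1]) else 1)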

Eventually I think it would be useful for a manifest to contain data about the meta layers that went into building the image and the hashes of the git commits checked out at the time of the build. This latter bit will be useful if a build ever has to be recreated. Not something that’s necessary yet however.

Consuming exported images

After exporting these build artifacts we have to cope with other images that want to consume them. My main complaint about using a build script outside of my build tree to place images within one another is that I’d have to re-size existing file systems. Bitbake already builds file systems so resizing them from an external script seemed very ugly. Further, changes to the images built by bitbake (ext3/iso/hddimg etc) would have to be coordinated with said external script. Very ugly indeed.

The most logical solution was to create a new recipe as a way to package the existing build artifacts into a package that can be consumed by an image. By ‘package’ I mean your typical ipk or rpm. This allows bitbake to continue to do all of the heavy lifting in image building for us. Assuming the relationships between images shown above, it allows the outer image to include the image package using the standard IMAGE_INSTALL mechanism. That feels borderline elegant compared to rewriting the generated file systems IMHO.

So from the last section we have builds that are pumping out build artifacts and, for the case of our example, we’ll say they land in /mnt/integral/image-$stamp where $stamp is a unique time stamp. Now we need to create a recipe that consumes the artifacts (I’ll call it an ‘image package recipe’ from here out) in these directories. Typically in a bitbake recipe you’ll provide a URI to your source code in the SRC_URI variable and define the files that go into the package using FILES_${PN}. These are generally defined statically in the recipe. Our case is weird in that we want the image package recipe to grab the latest image exported by some other build. So we must dynamically generate these variables.

I’ve never seen these variables generated dynamically (aside from using the PN and PV variables in URIs) but it’s surprisingly easy. bitbake supports anonymous python functions that get run when the recipe is parsed. This happens before any tasks are executed so setting SRC_URI and PV in this function works quite well. The method for determining the latest images that our build has exported is a simple directory listing and sorting operation:

python() {
    import glob, os, subprocess
 
    # check for valid export dir
    INTEGRAL_EXPORT_DIR = d.getVar ('INTEGRAL_EXPORT_DIR', True)
    if INTEGRAL_EXPORT_DIR is None:
        bb.note ('INTEGRAL_EXPORT_DIR is empty')
        return 0
    if not os.path.isdir (INTEGRAL_EXPORT_DIR):
        bb.fatal ('INTEGRAL_EXPORT_DIR is set, but not a directory: {0}'.format (INTEGRAL_EXPORT_DIR))
        return 1
 
    PN = d.getVar ('PN', True)
    LIBDIR = d.getVar ('libdir', True)
    XENDIR = d.getVar ('XENDIR', True)
    VMNAME = d.getVar ('VMNAME', True)
 
    # find latest ndvm and verify hashes
    IMG_NAME = PN[:PN.rfind ('-')]
    DIR_LIST = glob.glob ('{0}/{1}*'.format (INTEGRAL_EXPORT_DIR, IMG_NAME))
    DIR_LIST.sort (reverse=True)
    DIR_SAVE = os.getcwd ()
    os.chdir (DIR_LIST [0])
    try:
        DEV_NULL = open ('/dev/null', 'w')
        subprocess.check_call ('sha256sum -c manifest', stdout=DEV_NULL, shell=True)
    except subprocess.CalledProcessError:
        return 1
    finally:
        DEV_NULL.close ()
        os.chdir (DIR_SAVE)
 
    # build up SRC_URI and FILES_${PN} from latest NDVM image
    d.appendVar ('SRC_URI', 'file://{0}/*'. format (DIR_LIST [0]))
    d.appendVar ('FILES_{0}'.format (PN), ' {0}/{1}/{2}/*'.format (LIBDIR, XENDIR, VMNAME))
 
    # set up ${S}
    WORKDIR = d.getVar ('WORKDIR', True)
    d.setVar ('S', '{0}/{1}'.format (WORKDIR, DIR_LIST [0]))
 
    return 0
}

If you’re interested in the full recipe for this image package you can find it here: https://github.com/flihp/meta-integral/blob/master/recipes-integral/images/integral-image-ndvm-pkg.bb

The ‘manifest’ described above is also verified and processed. Using the file format of the sha256sum utility is a cheap approximation of the OE SRC_URI[sha256sum] metadata. This is a pretty naive approach to finding the “right” image to package as it doesn’t give the outer image much say over which inner image to pull in: It just grabs the latest. Some mechanism for the consuming image to specify which image it consumes would be useful.

So that’s about it. I’m pretty pleased with the outcome but time will tell how useful this approach is. Hopefully I’ll get a chance to see if it scales well in the future. Throw something in the comments if you get a chance to play around with this or have thoughts on the topic.

OE image package

Here’s a fun problem that I don’t yet have a solution to: I want to build a single image with OE. This image will be my dom0. I want to include other images in this image. That is to say I want to package service VMs as part of / in my dom0.

All of the research I’ve done up till now (all 30 minutes of it) points to this having never been done before. I could be using the wrong keywords but the ones I tried turned up nothing on the respective OE and Yocto mailing lists. There seem to be a huge number of pitfalls here including things like changing the DISTRO_FEATURES in effect for the images as well as selecting image specific files for packages. On a few occasions I’ve used the distro name as a way to select specific configuration files like an fstab or interfaces.

What I want is to run bitbake once for the dom0 image and have it build all the other images and install them as packages in dom0. So I’d need to have recipes that actually package the images so they can be installed in another image. I think that will be the easy part.

The hard part will be making packages specific to each image with different files specific to the image. The only thing I can come up with for this is to play ugly tricks like building each VM image with a different MACHINE type but I’m not even sure if that will work. I guess all I can do for now is to experiment a bit and get on the mailing list to make sure I’m not duplicating work that’s already been done. This could get ugly.

OpenEmbedded Xen Network Driver VM

I wrote about a similar topic what feels like ages ago and I guess it was (8 months is a long time in this business). Since then I’ve been throwing some spare time at this same problem and I’ve actually made measurable progress. There have been a number of serendipitous events that came together to make this possible, the most important of which is the massive update to the Xen recipe in meta-virtualization. With this it’s super easy to crank out a Xen pvops kernel, so combined with an image that has the right plumbing in place it’s not as hard as you might think to build an NDVM.

So armed with the new Xen stuff from meta-virtualization I set out to build a reference NDVM. This isn’t intended to replace the NDVM in a system like XenClient-XT which is far more sophisticated. It’s just intended for experimentation and I don’t intend to build anything more sophisticated than a dumb Ethernet bridge into it.

To host this I’ve started a layer I call ‘meta-integral’. I know, all the good names were taken. Anyways this is intended to be a sort of distro layer where I can experiment with Xen stuff. Currently I’ve got a distro config for dom0 and an NDVM. The dom0 work is still very much a work in progress but the NDVM (much simpler) will actually boot as a PV guest.

To build this just clone my git repo with the build scripts and it’ll do all of the hard work for you:

git clone https://github.com/flihp/oe-build-scripts.git
git checkout ndvm
./build.sh | tee build.log

This will crank out an image suitable to run on an Intel SandyBridge (SNB) system. I’ve only tested PV guests so you’ll have to set up a config like the following:

kernel = "/usr/lib/xen-common/bzImage-sugarbay.bin"
extra = "root=/dev/xvda console=hvc0"
iommu = "soft"
memory = "512"
name = "ndvm"
uuid = "a9ae8853-f1e9-41ca-9904-0e906efeb4af"
vcpus = "1"
 
disk = ['phy:/dev/loop0,xvda,w']
pci = ['0000:04:00.0']

Notice the kernel image and the rootfs image must be copied over to the Xen dom0 that you want to test the NDVM on. The kernel image is the one listed in the kernel line and can be found at tmp-eglibc/deploy/images/sugarbay/bzImage-sugarbay.bin relative to your build root. The rootfs image will be in the same directory and called something like integral-image-ndvm-sugarbay.ext3. Notice that the disk config is pointing at a loopback. You’ll have to set this up with losetup just like any other loopback device. The part that differentiates this from any other PV guest is that we’re passing a PCI network device through to it and it’ll offer up a bridge to other guest VMs. The definitive documentation on how to do this with Xen is here: http://wiki.xen.org/wiki/Xen_PCI_Passthrough

The bit that I had to wrangle to get the bridge set up properly with OE was the integration between a network interfaces file and the bridge. I’ve been spoiled by Debian and its seamless integration between the two. OE has no such niceties. In this situation I had to choose between hacking a script manually or finding the scripts that integrate interfaces configuration with the bridge and baking them into the bridge-utils package from meta-oe. I figured getting bridges integrated with interfaces would be useful to others so I went through the Debian source package, extracted the scripts and baked them into OE directly. Likely this should go ‘upstream’ but for now this specialization is just sitting in my meta-integral layer.

So after fixing up the bridge-utils package so it plays nice with the interfaces file, the interfaces file in our NDVM looks like so:

# /etc/network/interfaces -- configuration file for ifup(8), ifdown(8)
 
# The loopback interface
auto lo
iface lo inet loopback
 
# real interface
auto eth0
iface eth0 inet manual
 
# xen bridge
auto xenbr0
iface xenbr0 inet manual
        bridge_ports eth0
        bridge_stp off
        bridge_waitport 0
        bridge_fw 0

So that’s it. Boot up this NDVM and it’ll have a physical network device and a bridge ready for consumption by other guests. I’ve not yet gone through and tested adding additional guests to the bridge so I’m assuming there’s still a bit of work lurking there. I’ll give this last bit a go and hopefully have positive results to post sooner than later. I’ve also not tested this on XenClient-XT as the most recent stable release is getting a bit old and likely there are going to be incompatibilities between the netfront / netback stuff. This approach is likely a great starting point if you’re building a service VM you want to run on our next release of XT though, so feel free to fork and experiment.

UPDATE: Gave my NDVM a test just by giving the dom0 that was hosting it a vif. You can do this like so:

# xl network-attach Domain-0 backend=ndvm

The above assumes your NDVM has been named ‘ndvm’ in its VM config, naturally. Anyways this will pop up a vif in dom0 backed by the NDVM. Pretty slick IMHO. Now to wrap this whole thing up so dom0 and the NDVM can be built as a single image with OE … Sounds easy enough :)

Talk at Xen Developer Summit 2013

UPDATE: Here’s the link: http://www.youtube.com/watch?v=6Q8mlTBn-ZI. I still haven’t been able to bring myself to actually watch it but I’m sure it’s great :\

Just got back from the 2013 Xen Developer Summit where I gave a talk on a few interesting (to me at least) things. If you’re interested you can find my abstract here. My focus was naturally on SELinux / XSM stuff. Mostly my talk focused on the sVirt implementation in XenClient XT and another fun application of the architecture to our management stuff.

Had a good chat with a guy from Amazon afterward about all of the other evil stuff someone could do if they compromised QEMU. So while sVirt prevents the specific scenario presented, I’ve no doubt there are other hazards. He was specifically concerned over the Xen privcmd driver & the hypercalls it could make. Hard to disagree as QEMU with root permissions in dom0 can execute any hypercall it wants. The only way to address this (other than stubdoms) is to deprivilege QEMU to prevent it from making hypercalls. That would probably require some code changes in QEMU so it’s no small task.

I also touched briefly on the design for an inter-VM communication (IVC) mechanism that was floated to xen-devel this summer. In XT we have an IVC called ‘V4V’ that isn’t acceptable to upstream. When it came to our XSM policy however V4V had some favorable properties in that we created a new object in the hypervisor that was a ‘first-class’ object in the policy.

The proposal uses the same model as the front/back drivers so there would be no new object specific to the IVC. This means there wouldn’t be a way to differentiate the IVC from any other front/back driver. The purpose of the talk was to point this out and hopefully solicit some discussion. Got an even better conversation going on this point so hopefully I’ll have some fun stuff to report on this front soon.

Calculating the MLE hash

My work to calculate PCR[18] from the last post was missing one big piece. I took a short cut and parsed the MLE hash out of the SINIT to MLE data table. This was a stop gap.
The MLE wasn’t being measured directly. We were still extracting the measurement as taken by the SINIT which is a binary blob from Intel. We don’t have a choice in trusting this blob from Intel but we can verify the measurements it takes. With this in mind I’ve gone back and added a tool to the pcr-calc module to calculate the MLE hash directly from the MLE.

The MLE Hash

Calculating the MLE hash is a bit more complicated than just hashing the ELF binary that contains it. There’s already a utility that does this in the tboot project though it’s pretty limited as it only dumps out the hash in a hex string. My end goal is to integrate this work into a bitbake class so having a python class to emit a hash object containing the measurement of the MLE is a lot more convenient.

In the pcr-calc project I’ve added a few things to make this happen. First is a class called mleHeader that parses the MLE header. This is just more of the mundane data parsing that I’ve been doing since this whole thing started. Finding the MLE header is just a matter of searching for the magic MLE UUID: 5aac8290-6f47-a774-0f5c-55a2cb51b642. Having the header isn’t enough though. The MLE must be extracted from the ELF and this is particularly hard because I know nothing about the structure of ELF files.

To do the extracting I basically ported the mlehash utility from tboot to python. The MLE is actually stored in the ELF file program header. This requires parsing and extracting the PT_LOAD segments. Writing a generic ELF parser is way beyond the scope of what I’m qualified to do but thankfully Eli Bendersky already has a handle on this. Check out pyelftools on his github page. You can download the package for pyelftools through the python package system like so:

$ pip install pyelftools

I’ve not yet integrated a check for this package into the pcr-calc autotools stuff but I'll get around to it.

So in pcr-calc, the MLEUtil class does a few things. First it unzips the ELF file if necessary. Second, the ELFFile class from pyelftools is used to extract the PT_LOAD segments from the ELF. These are copied to a temporary file and the excess space is zero-filled. Once the ELF is extracted we locate the MLE header by searching for the UUID above. This header is represented and parsed by the mleHeader object.

The end goal is to calculate the SHA1 hash of the MLE. The fields in the header we need to do this are mle_start_off and mle_end_off. These are the offsets to the start and end of the MLE respectively. Both offsets are relative to the beginning of the extracted ELF. The hash is then simply calculated over the data in this range.
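If you just want the shape of the calculation without reading pcr-calc, here's a rough Python sketch using pyelftools. The MLE header field offsets and the UUID byte ordering are assumptions from my reading of tboot's headers, so treat it as illustrative rather than authoritative:

import hashlib

from elftools.elf.elffile import ELFFile

# raw bytes of the MLE UUID as written above (the byte order is an assumption)
MLE_UUID = bytes.fromhex('5aac82906f47a7740f5c55a2cb51b642')

def extract_pt_load(elf_path):
    # flatten the PT_LOAD segments into one zero-filled buffer, like MLEUtil does
    with open(elf_path, 'rb') as f:
        elf = ELFFile(f)
        loads = [s for s in elf.iter_segments() if s['p_type'] == 'PT_LOAD']
        base = min(s['p_paddr'] for s in loads)
        size = max(s['p_paddr'] + s['p_memsz'] for s in loads) - base
        image = bytearray(size)
        for seg in loads:
            off = seg['p_paddr'] - base
            data = seg.data()
            image[off:off + len(data)] = data
    return bytes(image)

def mle_hash(elf_path):
    image = extract_pt_load(elf_path)       # gunzip the ELF first if necessary
    hdr = image.find(MLE_UUID)
    if hdr < 0:
        raise ValueError('MLE header not found')
    # mle_start_off / mle_end_off: 32-bit fields at assumed offsets 32 and 36
    # into the header, relative to the beginning of the extracted ELF
    start = int.from_bytes(image[hdr + 32:hdr + 36], 'little')
    end = int.from_bytes(image[hdr + 36:hdr + 40], 'little')
    return hashlib.sha1(image[start:end]).hexdigest()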

Housekeeping

With the objects necessary to calculate the MLE hash done I went back and updated the pcr18 utility. Now instead of parsing the hash out of the TXT heap it now hashes the MLE directly. The mlehash program is constructed in a similar way but it is limited to calculating the MLE hash only.

Conclusion

A significant amount of the work in calculating the MLE hash was just code reading: first to understand how to extract and measure the MLE, second to understand how to use the pyelftools package. Using pyelftools means that pcr-calc has a new dependency but it's a lot better than implementing it myself. Working with pyelftools has been beneficial not only in that it saves me effort but it's also an excellent example to work from. pcr-calc is my first attempt at implementing anything in python and it shows. Having poked around in pyelftools a little bit I've realized that even though my code "works" it's pretty horrible. Future efforts to "clean up" pcr-calc will model significant portions of it after the code in pyelftools.

Having completed calculating the MLE hash we've taken a big step forward in our effort to construct future PCR values by measuring the individual components. It's the last step in removing dependence on the extracted heap. We can now calculate PCR[18] and PCR[19] without any knowledge of or access to the deployed platform hardware and that's pretty great. PCR[17] by contrast contains a whole bunch of stuff like the STM hash that's independent from the Linux OS being run. For now I'm happy to assume PCR[17] is static for a system and doesn't need to be calculated in the build system.

Eventually I'd like to extend pcr-calc to include mechanisms for ingesting an LCP and calculating PCR[17] but that's a long way off. Instead, my next steps will be to clean up the pcr-calc code and integrate it into the meta-measured OE layer. The end goal here is to produce a manifest that a 3rd party (an installer or a remote system) can use to either seal secrets to a future platform state or for appraising an attestation exchange. More on this front next.

Calculating PCR[18] and PCR[19]

In my first post on the subject, I indicated calculating PCR[18] and PCR[19] was significantly easier than PCR[17]. If you read my last post on calculating PCR[17] you’ll see why. Since then I’ve gone through and sorted out calculating the remaining two PCRs and it is in fact pretty easy (phew!).

PCR[18]

Unlike PCR[17], 18 and 19 are mostly just hashes of boot modules. Unfortunately the MLE Developer’s Guide is again a bit vague here, and maybe even a bit misleading. I’m starting to think that the guide on the Intel website is out of date.

The state of PCR[18] is defined in section 1.9.2. From the text:

PCR 18 will be extended with the SHA-1 hash of the MLE, as reported in the
SinitMleData.MleHash field.

This is true but it’s only the first extend operation. There can be a second and by default (default tboot behavior) there is. The second hash is of the first boot module after the MLE. This is the module tboot executes after the SINIT.

So what’s the MLE (Measured Launch Environment) anyways? The MLE is the thing that does the measured launch. Pretty much it’s the code that executes GETSEC[SENTER] and hands control over to the SINIT ACM, that’s tboot. After a successful measured launch this hash will be in the TXT heap in the SINIT to MLE data table. The field is called the MleHash.

Like I said in my first post: this isn’t about blindly trusting the hashes in the heap though. We want to be able to isolate the thing being measured, so if the MLE is tboot then we should be able to take the SHA1 hash of the tboot binary and get the same value … or not. There’s actually a structure within tboot that defines the MLE. The hash of this structure is what’s in the MleHash field of the SINIT to MLE data table.

There’s already a stand-alone program in the tboot code to calculate this hash for use in the LCP. I’ve not yet looked too deep into how this is calculated though. For now, if you’re interested, you can use this tool to calculate the MleHash and compare this to the data reported in your TXT heap or the txt-stat output. Mine looks like this:

  $ ./lcp_mlehash -c "logging=serial,vga,memory" ~/txt-test/modules/tboot.gz
  5b d5 12 72 1e 07 5e 31 4d 8d e5 2e 5f b9 10 04 d4 00 e7 27

A very important thing to note here is that we pass not only the module to this program but the arguments passed to it as well.

So the MLE Dev Guide indicates that PCR[18] will be extended with the MleHash we’ve just now calculated. But if you check the status of your PCR[18] after a measured launch you’ll realize that it can’t be calculated with just the MleHash. So the MLE Dev Guide isn’t really the best documentation for this stuff I guess. You’re much better off reading the README in the tboot sources.

In the README it’s clearly stated that PCR[18] is extended not only with the MleHash but also with the hash of the first boot module. The bit that’s missing of course is a description of how modules are hashed. The exact algorithm for doing this isn’t in the Dev Guide or the README though, it’s in the tboot source code in the file tboot/common/policy.c.

A module hash is simply the hash of the command line passed to the module concatenated with the module itself (remember the two parameters passed to lcp_mlehash above). There isn’t much to processing the command line but if you’re interested you can check out the code. I’ve added a utility called module-hash to the pcr-calc project to automate this. Using the notation we’ve been using up till now this could be represented as:

  module_hash = SHA1 (cmdline | module)

You can invoke the utility to perform this calculation like so:

  module-hash --cmdline tboot.cmdline --module tboot.gz

Combine this with the MleHash and the full PCR[18] calculation is as follows:

  PCR[18]_1 = sha1 (PCR[18]_0 | MleHash)
  PCR[18]_2 = sha1 (PCR[18]_1 | sha1 (cmdline | module))

So there are two extends required to calculate PCR[18]. And as a reminder: PCR[18]_0 is the PCR before any extend operations so it contains 20 bytes of 0s. In my example setup the module that’s hashed and extended into PCR[18] after the MLE is just the Linux kernel file vmlinuz.
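To make that concrete, here’s a minimal Python sketch of the whole PCR[18] chain. It’s illustrative only: the MleHash value is the one from my lcp_mlehash run above, the file names are placeholders, and it skips the light command line processing tboot does before hashing:

import hashlib

def extend(pcr, measurement):
    # TPM 1.2 PCR extend: SHA1 (old PCR value | new measurement)
    return hashlib.sha1(pcr + measurement).digest()

def module_hash(cmdline_path, module_path):
    # module_hash = SHA1 (cmdline | module); tboot's cmdline preprocessing is skipped
    with open(cmdline_path) as c, open(module_path, 'rb') as m:
        return hashlib.sha1(c.read().strip().encode() + m.read()).digest()

pcr18 = b'\x00' * 20                                          # PCR[18]_0
mle_hash = bytes.fromhex('5bd512721e075e314d8de52e5fb91004d400e727')
pcr18 = extend(pcr18, mle_hash)                               # first extend: MleHash
pcr18 = extend(pcr18, module_hash('vmlinuz.cmd', 'vmlinuz'))  # second extend: first module
print(pcr18.hex())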

The pcr-calc project has a pcr18 utility too that wraps the MleHash and the hashing of the module for convenience. It gets the MleHash from the heap still but the next logical step is to calculate it directly from the tboot binary and the commandline. Invoke pcr18 like so:

  $ pcr18 --module vmlinuz --cmdline vmlinuz.cmd txtheap.bin

The --cmdline argument is a file containing a single line of text which is parsed as the parameters to the module.

PCR[19]

PCR[19] is just as trivial to calculate and again the Dev guide is little help. There’s nothing mentioned of this PCR in the Dev Guide but the LCP dictates what gets hashed and stored in it.
The default tboot policy extends PCR[19] with the hashes of all remaining modules. That’s all of the boot modules that aren’t the SINIT ACM, the MLE or the first module.

This leaves the initrd on my system (yours may be different and may have more than one module). If the module is compressed it must be decompressed before it’s hashed. The calculation of PCR[19] can be described as follows:

  PCR[19]_1 = sha1 (PCR[19]_0 | sha1 (cmdline_1 | module_1))
  PCR[19]_n = sha1 (PCR[19]_n-1 | sha1 (cmdline_n | module_n))

Remember that module_0 is tboot and it’s measured as part of PCR[18], so we start here with cmdline_1 and module_1.
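Here’s the same chain as a Python sketch, again with placeholder file names and the same caveat about tboot’s command line processing; note each module is decompressed before it’s hashed:

import gzip
import hashlib

def read_module(path):
    # modules are hashed decompressed: try gzip first, fall back to the raw file
    try:
        with gzip.open(path, 'rb') as f:
            return f.read()
    except OSError:
        with open(path, 'rb') as f:
            return f.read()

def module_hash(cmdline_path, module_path):
    with open(cmdline_path) as c:
        cmdline = c.read().strip().encode()
    return hashlib.sha1(cmdline + read_module(module_path)).digest()

pcr19 = b'\x00' * 20                                     # PCR[19]_0
for cmd, mod in [('module1.cmd', 'module1'), ('module2.cmd', 'module2')]:
    pcr19 = hashlib.sha1(pcr19 + module_hash(cmd, mod)).digest()
print(pcr19.hex())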

The pcr-calc project has yet another program pcr19 that automates the calculation of PCR[19]. The calling convention is a bit awkward here and I’ll probably have to come up with something better:

  $ pcr19 module1.cmd,module1 module2.cmd,module2

The arguments are all positional. They’re ordered pairs of files with the first being a file containing the command line text and the second being the module itself. This means your file paths can’t have commas in them.

Conclusion

When I started out on this quest to pre-calculate the DRTM PCRs as populated by the Intel TXT hardware and tboot I knew this would be painful but I didn’t realize how much. It really was an exercise in masochism. Now that I’ve jumped through the hoops and done the digging necessary to understand the process, I’ve got a pretty good understanding of what’s required to move to the next phase of this project.

My previous work with OE and the meta-measured layer is the groundwork. It produces a system image that will do a TXT measured launch but the PCR values after boot are still a mystery. I had hoped to be able to calculate all of these measurements in the build, but access to the TXT heap from the target device isn’t realistic. From the work discussed above however, it looks like PCR[18] and PCR[19] can be calculated reliably.

The remaining work is to calculate the MleHash, though the existing tool in tboot is extremely close to what we need. Combined with the tools from pcr-calc this is likely sufficient. All of this will need to be combined and integrated into the meta-measured layer likely as part of an image class. Sounds like my next task is to clean this stuff up and revive meta-measured.

Calculating PCR[17]

When I left off my last discussion of tboot PCR calculations I gave a quick intro but little more. In this post I’ll go into details for calculating the first of them: PCR[17].

There have been a number of discussions with regard to calculating or verifying PCR values on the tboot-devel mailing list and they were extremely useful in writing this code and post. These all fell a bit short of what I wanted to accomplish in that all approaches extracted hashes from the output of the txt-stat program (the tboot log) and used those hashes to re-construct the PCR values. I wanted to construct all hashes manually, to measure and account for the actual things that TXT and tboot were measuring and storing into the PCRs and to do this independent of an actual measured launch. Basically this translates to isolating the things being measured, extracting them (if possible) and use them to reconstruct the PCR value on any system, like a build server or an external verifying party.

The process is pretty straightforward, though time consuming, and the specification is phrased in such a way as to force some guess and check. There’s even a bit of a trick in the end which requires that we go digging around in the tboot source code, which is always fun. I’ll also present a bit of code that will automate the calculation for you, so if you’re anxious and don’t want to read any more you can go straight to the code in the pcr-calc git repo: https://github.com/flihp/pcr-calc

DISCLAIMER: The code in the pcr-calc git repo is very much a work-in-progress and should be considered unstable at best so YMMV.

The spec that defines the DRTM specific PCRs is the “PC Client Implementation for BIOS”. These are PCRs 17 through 20. Their individual use however is hardware specific and on Intel hardware, the definitive source of data on what gets extended into which of these PCRs as part of establishing a DRTM is a document titled “Intel® Trusted Execution Technology (Intel® TXT) Software Development Guide: Measured Launched Environment Developer’s Guide”.

Quite a mouth full. Anyways section 1.9.1 covers PCR[17] but the details of what various bits are measured are spread out over the document. A default tboot configuration will cause 3 extends to this PCR so we’ll break this post up into 3 sections, each one describing the hashes that go into the 3 consecutive extend operations.

First extend: SINIT ACM

The first thing that’s extended into PCR[17] is the hash of the SINIT ACM. This is a binary blob that Intel ships which is used by tboot to establish the DRTM on a platform. The binary code in the ACM is chipset specific so there are a number of ACMs out there to choose from. tboot automates the process so if you’re unsure which ACM is the right one for your platform you can configure your bootloader to load every ACM and tboot will pick the right one. This will slow your boot process down considerably though, and selecting the proper one isn’t hard with a bit of reading, so don’t be lazy.

With the right ACM in hand you’d think it would be a simple matter of calculating the sha1 hash of the file and extending that into PCR[17]. That’s not the case though. There are two little details that must be sorted first.

Depending on the version of the ACM you’re using the hash algorithm may be sha1 or it may be sha256. ACMs version 7 or later will use sha256, while earlier versions will use sha1. The current version of the ACM format is 8 so most modern hardware will need a sha256 hash (not to mention that most OEM implementations of TXT 3 years or older never worked in the first place … snap!).

Further, there are some fields in the ACM that aren’t included in the hash. The logic behind this escapes me but appendix A.1.2 specifies that some fields are omitted. Quoting the spec: “Those parts of the module header not included are: the RSA Signature, the public key, and the scratch field.” That sounds like 3 fields from the ACM right? Wrong: there’s a 4th field omitted as well and that’s the RSA exponent. I guess they meant for the exponent to be included in the definition of “public key”? Thanks for being explicit.

Anyways omit the fields: RSAPubKey, RSAPubExp, RSASig and the Scratch space, got it. To omit these fields from the hash we’ve gotta parse the ACM. I’ll cover this code at the end.

Finally the 32 bits that make up the EDX register which hold the flags passed to the GETSEC[SENTER] instruction are appended to the hash of the ACM. We represent PCR[17] at the first extend operation thusly:

PCR[17]_1 = Extend(PCR[17]_0 | SHA256 (ACM | EDX))

where PCR[17]_0 is the state of PCR[17] at time = 0. PCRs are initialized to 20 bytes of 0s so PCR[17]_0 is 20 bytes of 0s.
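Since this Extend() notation gets used for every extend operation in this post, here’s a tiny Python sketch of what it boils down to: the SHA1 of the old PCR value concatenated with the new measurement, starting from 20 bytes of 0s. The ACM-specific details (stripping the key, exponent, signature and scratch fields and picking SHA1 vs SHA256 by ACM version) are left to the pcr-calc ACM parser, so the measurement below is just a placeholder:

import hashlib

def extend(pcr, measurement):
    # TPM 1.2 PCR extend: SHA1 (old PCR value | new measurement)
    return hashlib.sha1(pcr + measurement).digest()

pcr17 = b'\x00' * 20     # PCR[17]_0: 20 bytes of 0s
# acm_measurement would be the digest of the stripped ACM with EDX appended,
# per the formula above; producing it is what the rest of this section covers
# pcr17 = extend(pcr17, acm_measurement)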

Second Extend: Heap Data

The second extend to PCR[17] includes various bits of data from the TXT heap. Appendix C describes the TXT heap as a contiguous region of memory set aside by the BIOS for use by ‘system software’ (aka the BIOS) to pass data to the SINIT ACM and the MLE. PCR[17] is extended with the sha1 hash of either 6 or 7 concatenated pieces of data depending on the version of the ACM. The following fields are concatenated together and their sha1 hash is extended into PCR[17] for the second extend:

  1. BiosAcmId
  2. MsegValid
  3. StmHash
  4. PolicyControl
  5. LcpPolicyHash
  6. OsSinitCaps or 4 bytes of 0s as specified by the LCP (more on this next)

If the SINIT to MLE data table version is 8 or greater an additional 4 bytes are appended representing the processor S-CRTM status. These 4 bytes are in the ProcScrtmStatus field in the SINIT to MLE data table.

The second extend to PCR[17] could be represented as follows for SINIT to MLE data table versions < 8:

PCR[17]_2 = sha1 (PCR[17]_1 | sha1 ( SinitMleData.BiosAcm.ID | SinitMleData.MsegValid |
                                     SinitMleData.StmHash | SinitMleData.PolicyControl |
                                     SinitMleData.LcpPolicyHash | (OsSinitData.Capabilities, 0)))

For SINIT to MLE data table versions >= 8:

PCR[17]_2 = sha1 (PCR[17]_1 | sha1 ( SinitMleData.BiosAcm.ID | SinitMleData.MsegValid | 
                                     SinitMleData.StmHash | SinitMleData.PolicyControl |
                                     SinitMleData.LcpPolicyHash | (OsSinitData.Capabilities, 0) |
                                     SinitMleData.ProcessorSCRTMStatus))

where we use the notation (OsSinitData.Capabilities, 0) to represent a choice made between appending the value of OsSinitData.Capabilities or 4 bytes of 0s depending on the state of the LCP policy control field.
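Expressed as a Python sketch, with each field passed in as the raw byte string pulled out of the SINIT to MLE data table (their widths show up in the example output at the end of this post):

import hashlib

def extend(pcr, measurement):
    return hashlib.sha1(pcr + measurement).digest()

def second_extend(pcr17, bios_acm_id, mseg_valid, stm_hash, policy_control,
                  lcp_policy_hash, capabilities_or_zeros, proc_scrtm_status=b''):
    # concatenate the raw fields in the order given above and extend with the
    # SHA1 of the result; pass proc_scrtm_status only for data table versions
    # >= 8, and 4 bytes of 0s for capabilities when the LCP says so
    blob = (bios_acm_id + mseg_valid + stm_hash + policy_control +
            lcp_policy_hash + capabilities_or_zeros + proc_scrtm_status)
    return extend(pcr17, hashlib.sha1(blob).digest())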

The astute reader is likely wondering: “How do I get these values out of the TXT heap … and where do I even get the TXT heap from?” Both are very good questions. Getting at the TXT heap isn’t too difficult. You’re on a Linux system presumably with root access. The TXT heap is a region of memory like any other and you only need to know the offset where it resides and how to determine its size. Both the offset and the heap size are obtained from the TXT public registers which are mapped to well known memory addresses (read the spec if you’re really interested).
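For the curious, here’s roughly what that lookup amounts to. The register offsets (the TXT public register block at 0xFED30000, with HEAP.BASE at 0x300 and HEAP.SIZE at 0x308) are my reading of the spec rather than anything pulled from pcr-calc, so double check them; it needs root and a kernel that will let you mmap /dev/mem:

import mmap
import struct

TXT_PUB_BASE = 0xFED30000        # TXT public register space (per the dev guide)
HEAP_BASE_OFF = 0x300            # TXT.HEAP.BASE
HEAP_SIZE_OFF = 0x308            # TXT.HEAP.SIZE

def dump_txt_heap(out_path='txtheap.bin'):
    with open('/dev/mem', 'rb') as mem:
        regs = mmap.mmap(mem.fileno(), 0x1000, mmap.MAP_SHARED,
                         mmap.PROT_READ, offset=TXT_PUB_BASE)
        heap_base = struct.unpack_from('<Q', regs, HEAP_BASE_OFF)[0]
        heap_size = struct.unpack_from('<Q', regs, HEAP_SIZE_OFF)[0]
        regs.close()
        # mmap offsets must be page aligned, so round down and compensate
        delta = heap_base % mmap.PAGESIZE
        heap = mmap.mmap(mem.fileno(), heap_size + delta, mmap.MAP_SHARED,
                         mmap.PROT_READ, offset=heap_base - delta)
        with open(out_path, 'wb') as out:
            out.write(heap[delta:delta + heap_size])
        heap.close()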

In the git repo linked above I’ve written a simple utility to parse and output the TXT heap: txt-heapdump. You can run this utility to display the contents of the heap, in a human readable form, on the system you wish to calculate PCR[17] for:

$ txt-heapdump --mmap --pretty

You can also use it to obtain the heap as a binary file:

$ txt-heapdump --mmap > txtheap.bin

You can then parse the binary file to display the heap in a human readable form and it’ll look just like it did coming straight from /dev/mem:

$ txt-heapdump --pretty -i txtheap.bin

Once you have the heap as a file you can use the pcr-calc library to parse and extract various bits. Again, I’ll present the utility that does it all for you at the end. But first, the third and final extend …

Third Extend: Launch Control Policy (LCP)

You’d expect that all of the values that are hashed as part of PCR[17] are discussed in the spec under the section “PCR 17” … and you’d be wrong. A couple of sections deeper where the LCP is discussed, you’ll find a description of how the LCP policy is measured and it turns out that this measurement gets extended into PCR[17] as well! There are a number of rules laid out in this section for how the system behaves when there’s no ‘Supplier’ or ‘Owner’ LCP present. Specifically the spec states:

As a matter of integrity, the LCP_POLICY::PolicyControl field will always be extended into PCR 17. If an Owner policy exists, its PolicyControl field will be extended; otherwise the Supplier policy’s will be. If there are no policies, 32 bits of 0s will be extended.

I’ve not gone through this section with a very thorough eye so I’m not an authority here, but tboot seems to ignore these rules and instead loads a default policy when there isn’t one in the TPM NV RAM. Not saying this is good or bad, right or wrong, just pointing out that this is what tboot does and it was something that I had to figure out in order to calculate the value of PCR[17] independently on my test systems.

So my goal here is to calculate PCR values. If your system is like mine and neither you (the ‘Owner’) nor the ‘Supplier’ (your OEM) provided an LCP, how do we measure the default policy from tboot? The only thing I could come up with is to pull apart the tboot code, copy the hard-coded structures into a C program and then dump them to disk in binary. The hash of this file is the one we need to extend into PCR[17] along with the LCP PolicyControl value. I’ve added a class to the pcr-calc library to parse the necessary parts of the binary LCP to support this operation.

The program that dumps the binary LCP from tboot is lcp_def. I’ve kept this utility in the pcr-calc project to reproduce the LCP on demand. I considered only keeping around the LCP binary in a data file but in the event that the default tboot policy changes in the future I wanted to keep the program around to dump the binary structures. When executed this program just dumps the binary policy so you’ll have to redirect the output:

$ lcp_def > lcp.bin

Final PCR[17] Calculation

Now that we’ve figured out how to do all three independent extend operations and we’ve collected the heap and LCP blobs, we can calculate the final state of PCR[17]. I’ve automated this in the program pcr17 (very creative name I know). Assuming your heap is in txtheap.bin, your LCP is in lcp.bin and your SINIT ACM file is named sinit.acm, you should invoke the program as follows:

$ pcr17 -i txtheap.bin -l lcp.bin sinit.acm

Your output should look something like this:

$ ./bin/pcr17 -i ../txt-data/txtheap.bin -l ../txt-data/lcp_def.bin ../3rd_gen_i5_i7_SINIT_51.BIN 
first extend: SINIT ACM hash
  extending with: 0fcc099f81549da4836d492afb8ab2e303cecfa1
  PCR[17] before extend: 0000000000000000000000000000000000000000
  PCR[17] after extend: 8d3dd5c8e795dfac5dbfa9859310b2bcea36d347
second extend: TXT heap data
  append BiosAcmId:
    8000 0000 2010 1022 0000 b001 ffff ffff 
    ffff ffff 
  append MsegValid_Bytes:
    0000 0000 0000 0000 
  append StmHash:
    0000 0000 0000 0000 0000 0000 0000 0000 
    0000 0000 
  append PolicyControl_Bytes:
    0000 0000 
  append LcpPolicyHash:
    0000 0000 0000 0000 0000 0000 0000 0000 
    0000 0000 
  append Capabilities_Bytes: False
    Hashing 4 bytes of 0s in place of OsSinit.Capabilities
  append ProcScrtmStatus_Bytes:
    0000 0000 
  extending with: 7e0cdad3b8d9c344ab89657efdbfa638d1b25978
  PCR[17] before extend: 8d3dd5c8e795dfac5dbfa9859310b2bcea36d347
  PCR[17]: bfa4421b49f6ab899157ba6ee8fec3c5c5abf4ab
third extend: LCP
  lcp hash: ab41624e7d71f068d48e1c2f43e616bf40671c39
  polctrl: 1
  extending with: 9704353630674bfe21b86b64a7b0f99c297cf902
  PCR[17] before extend: bfa4421b49f6ab899157ba6ee8fec3c5c5abf4ab
  PCR[17] after extend: 57a5f1b245ac52614498a728efe7f741b4dc3ebf
 
PCR[17] final: 57a5f1b245ac52614498a728efe7f741b4dc3ebf

Currently the program will dump the hashes and PCR states after each extend along with the actual sha1 that should be in PCR[17] after a successful TXT measured launch. Take a look at the code if you’re interested in the details. This was as much an exercise for me in learning a bit of python as it was about the actual end result. If given the choice again I’d have implemented this in C just because it’s much easier to deal with binary values and memory ranges in C than in Python. Then again this may just be that I know C better than I know Python so YMMV.

Conclusion

As you’ve probably noticed I was only partially successful in my goal. All of the data from the TXT heap that are extended into PCR[17] are themselves hashes of things we can’t access. Most of these hashes are all 0s though, denoting that the BIOS implementer opted out of implementing that feature (you’ll have an STM on your system one of these days but don’t hold your breath). The only one that’s actually present is the BiosAcmId but I’d expect in the future for the other fields to be populated as well.

This is just another instance of binary blobs making their way into the TCB of our software systems. We’ve had to deal with these in various forms over the years: binary drivers, firmware and BIOS code. Intel and other chip manufacturers have been making their hardware extensible using firmware and microcode for a while now so it’s no surprise that these things have made their way into the TCB. The good news is that they’re being measured and even if we can’t get our hands on the code, or even the binary blob on account of it being embedded in some piece of hardware, we can still identify them by their hash. The implications for trust aren’t great but it’s a start.

Using OE to build an XT ‘Service VM’

UPDATE: I’ve deleted the build scripts git repo mentioned in this post and rolled all of my OE project build scripts into one repo. Find it here: git://github.com/flihp/oe-build-scripts.git

UPDATE #2: I’ve written a more up-to-date post on similar work here: http://twobit.us/blog/2013/11/openembedded-xen-network-driver-vm/. The data in this post should be considered out of date.

Over the past few weeks I’ve run into a few misconceptions about XenClient XT and OpenEmbedded. First is that XT is some sort of magical system that mere mortals can’t customize. Second is that building a special-purpose, super small Linux image on OpenEmbedded is an insurmountable task. This post is an attempt to dispel both of these misconceptions and maybe even motivate some fun work in the process.

Don’t get me wrong though, this isn’t a trivial task and I didn’t start and end this work in one night. There’s still a bunch of work to do here. I’ll lay that out at the end though. For now, I’ve put up the build scripts I threw together last night on GitHub. They’re super minimal and derived from another project. Get them here: https://github.com/flihp/transbridge-build-scripts

My goal here is to build a simple rootfs that XT can boot with a VM ‘type’ of ‘servicevm’. This is the type that the XT toolstack associates with the default ‘Network’ VM. Basically it will be a VM invisible to the user. Eventually I’d like for this example to be useful as a transparent network bridge suitable as an in-line filter or even as a ‘driver domain’. But let’s not get ahead of ourselves …

What image do I build?

The first thing you need to choose when building an image with OE is what MACHINE you’re building for. XT uses Xen for virtualization so whatever platform you’re running it on will dictate the MACHINE. Since XT only runs on Intel hardware it’s pretty safe to assume your system is compatible with generic i586. The basic qemux86 MACHINE that’s in the oe-core layer builds for this so for these purposes it’ll suffice. This is already set up in the local.conf in my build scripts.

To build the minimal core image that’s in the oe-core layer just run my build.sh script from the root of the repository. I like to tee the output to a log file for inspection in the event of a failure:

./build.sh | tee build.log

Now you should have a bunch of new stuff in ./tmp-eglibc/deploy/images/ which includes an ext3 rootfs. The file name should be something like core-image-minimal.ext3. Copy this over to your XT dom0 and get ready to build a VM.

Make a VHD

The next thing to do is copy the ext3 image over to a VHD. From within /storage/disks create a new VHD large enough to hold the image. I’ve experimented with both core-image-basic and core-image-minimal and a 100M VHD will be large enough … yes that’s a very small rootfs. core-image-minimal is around 9M:

cd /storage/disks
vhd-util create -n transbridge.vhd -s 100

Next have tap-ctl create a new device node for the VHD:

tap-ctl create -a vhd:/storage/disks/transbridge.vhd

This will output the path to the device node created (and yeah the weird command syntax bugs me too). You can alternatively list the current blktap devices and find yours there:

tap-ctl list
1276    0    0        vhd /storage/ndvm/ndvm.vhd
1281    1    0        vhd /storage/ndvm/ndvm-swap.vhd
...

I’ve got no idea what the first number is (maybe the PID of the blktap instance that’s backing the device?) but the 2nd and 3rd numbers are the major / minor for the device. The last column is the VHD file backing the device, so find your VHD there, note the major number, then find the device in /dev/xen/blktap-2/tapdevX … mine had a major number of ‘8’ so that’s what I’ll use in this example. Then just byte copy your ext3 on to this device:

dd if=/storage/disks/transbridge.ext3 of=/dev/xen/blktap-2/tapdev8

Then you can mount your VHD in dom0 to poke around:

mount /dev/xen/blktap-2/tapdev8 /media/hdd

Where’s my kernel?

Yeah so OE doesn’t put a kernel on the rootfs for qemu machines. That’s part of why the core-image-minimal image is so damn small. QEMU doesn’t actually boot like regular hardware; you pass it the kernel on the command line, so OE’s doing the right thing here. If you want the kernel from the OE build it’ll be in ./tmp-eglibc/deploy/images/ with the images … but it won’t boot on XT :(

This is a kernel configuration thing. I could have spent a few days creating a new meta layer and customizing the Yocto kernel to get a ‘Xen-ified’ image but that sounds like a lot of work. I’m happy for this to be quick and dirty for the time being so I just stole the kernel image from the XT ‘Network’ VM to see if I could get my VM booting.

You can do this too by first mounting the Network VM’s rootfs. Cool thing is you don’t need to power down the Network VM to mount its FS in dom0! The disk is exposed to the Network VM as a read-only device so you can mount it read only in dom0:

mount /dev/xen/blktap-2/tapdev0 /media/cf

Then just copy the kernel and modules over to your new rootfs and set up some symlinks to the kernel image so it’s easy to find:

cp /media/cf/boot/vmlinuz-2.6.32.12-0.7.1 /media/hdd/boot
cp -R /media/cf/lib/modules/2.6.32.12-0.7.1 /media/hdd/lib/modules
cd /media/hdd/boot
ln -s vmlinuz-2.6.32.12-0.7.1 vmlinuz
cd /media/hdd
ln -s ./boot/vmlinuz-2.6.32.12-0.7.1 vmlinuz

You may find that there isn’t enough space on the ext3 image you copied on to the VHD. Remember that the ext3 image is only as large as the disk image created by OE. Its size won’t be the same as the VHD you created unless you resize it to fill the full VHD. You can do so by first umount’ing the tapdev then running resize2fs on the tapdev:

umount /media/hdd
resize2fs /dev/xen/blktap-2/tapdev8

This will make the file system on the VHD expand to fill the full virtual disk. If you made your VHD large enough you’ll have enough space for the kernel and modules. Like I say above, 100M is a safe number but you can go smaller.

Finally you’ll want to be able to log into your VM. If you picked the minimal image it won’t have ssh or anything so you’ll need a getty listening on the xen console device. Add the following line to your inittab:

echo -e "\nX:2345:respawn:/sbin/getty 115200 xvc0" >> /media/hdd/etc/inittab

There’s also gonna be a default getty trying to attach to ttyS0 which isn’t present. When the VM is up it will cause some messages on the console:

respawning too fast: disabled for 5 minutes

You can disable this by removing the ‘S’ entry in inittab but really the proper solution is a new image with a proper inittab for an XT service VM … I’ll get there eventually.

Make it a VM

Up till now all we’ve got is a VHD file. Without a VM to run it nothing interesting is gonna happen though so now we make one. The XT toolstack isn’t documented to the point where someone can just read the man page. But it will tell you a lot about itself if you just run it without any parameters. Honestly I know very little about our toolstack so I’m always executing xec and grepping through the output.

After some experimentation here are the commands to create a new Linux VM from the provided template and modify it to be a para-virtualized service VM. In retrospect it may be better to use the ‘ndvm’ template but this is how I did it for better or for worse:

xec create-vm-with-template new-vm-linux

This command will output a path to the VM node in the XT configuration database. The name of the VM will also be something crazy. Get the name from the output of xec-vm and change it to something sensible like ‘minimal’:

xec-vm --name <name-some-wacky-uuid> set name minimal

Your VM will also get a virtual CD-ROM which we don’t want, so delete it and then add a disk for the VHD we configured:

xec-vm --name minimal --disk 0 delete
xec-vm --name minimal add-disk
xec-vm --name minimal --disk 0 set phys-path /storage/disks/minimal.vhd

Then set all of the VM properties per the instructions provided in the XT Developer Guide:

xec-vm --name minimal --disk 0 set virt-path xvda
xec-vm --name minimal set flask-label "system_u:system_r:nilfvm_t"
xec-vm --name minimal set stubdom false
xec-vm --name minimal set hvm false
xec-vm --name minimal set qemu-dm-path ""
xec-vm --name minimal set slot -1
xec-vm --name minimal set type servicevm
xec-vm --name minimal set kernel /tmp/minimal-vmlinuz
xec-vm --name minimal set kernel-extract /vmlinuz
xec-vm --name minimal set cmd-line "root=/dev/xvda xencons=xvc0 console=xvc0 rw"

Then all that’s left is booting your new minimal VM:

xec-vm --name minimal start

You can then connect to the dom0 end of the Xen serial device to log into your VM:

screen $(xenstore-read /local/domain/$(xec-vm --name minimal get domid)/console/tty)

Next steps

This is a pretty rough set of instructions but it will produce a bootable VM on XenClient XT from a very small OpenEmbedded core-image-minimal. There’s tons of places this can be cleaned up, starting with a kernel that’s specific to a Xen domU. A real OE DISTRO would be another welcome addition so various distro specific features could be added and removed more easily. If the lazywebs feel like contributing some OE skills to this effort leave me a comment.

What’s in a hash?

After the initial work on meta-measured it was very clear that configuring an MLE is great but alone it has little value. Sure tboot will measure things for you, it will even store these measurements in your TPM’s PCRs! But the “so what?” remains unanswered: there are hashes in your TPM, who cares?

Even after you’ve set up meta-measured, launched an MLE and dumped out the contents of /sys/class/misc/tpm0/device/pcrs, what have you accomplished? The whole point of meta-measured was to set up the machinery to make this easier and for the PCR values to remain unchanged across a reboot. I was surprised at how much work went into just this. But after this work, the hashes in these PCRs still had no meaning beyond being mysterious, albeit static, hashes.

I closed the meta-measured post stating my next goal was to take a stab at pre-computing some PCR values. Knowing the values that PCRs will have in your final running system allows for secrets to be protected by sealed storage at install time (which I’ve heard called ‘local attestation’ just to confuse things). Naturally the more system state involved in the sealing operation (assume this means ‘more PCRs’ for now) the better. So I had hoped to come back after a bit with the tools necessary for meta-measured to produce a manifest of as many of the tboot PCR values as possible.

Starting with PCR[17]

Naturally I started with what I knew would be the hardest PCR to calculate: the infamous PCR[17]. JP’s comment on my last post pointed out some of his heroic efforts to compute PCR[17] so that was a huge help. So first things first: respect to JP for the pointer. This task would have taken me twice as long were it not for his work and the work of others on tboot-devel.

So I set out to calculate PCR[17] but I think my approach was different from those I was able to find in the public domain. The criteria I came up with for my work was:

  1. Calculate PCR[17] for system A on system B.
  2. Do the measurements myself.

So ‘rule #1’ basically says: no reliance on having a console on the running system. This is one part technical purity, one part good design as the intent is to make these tools as flexible as possible and useful in a build system. ‘Rule #2’ is all technical purity. This isn’t an exercise in recreating the algorithm that produces the value that ends up in PCR[17].

This last bit is important. The whole point is to account for the actual things (software, configuration etc) that are measured as part of bringing up a TXT MLE. Once these are identified they need to be collected (maybe even extracted from the system) if possible, and then used to calculate the final hash stored in PCR[17]. So basically, no parsing and hashing the output from ‘txt-stat’, that’s cheating :) I explained this approach to a friend and was instantly accused of masochism. That’s a good sign and I guess there’s an element of that in the approach as well, if not everything I do.

As always, wrapping up one exploratory exercise in learning / brushing up on a language is always a good idea right? So I did as much of my work as possible on this in Python. Naturally I had to break this rule and use some C at the end but that’s a bit of a punchline so I don’t want to spoil that joke.

So if you’re only interested in the code I won’t bore you with any more talk about ‘goals’ and ‘design’. It’s all up on github. The python’s here: https://github.com/flihp/pcr-calc. The C is here: https://github.com/flihp/pcr-calc_c. There isn’t much in the way of documentation but I’ll get into that soon.

If you are interested in the words that accompany this work stay tuned. My next post will give a bit of a tour of the rabbit hole that is calculating PCR[17]. This will include discussion of each ‘thing’ that’s measured and what it all means. Like I said though: the end result is that precalculating PCR[17] for arbitrary platforms is a massive PITA and likely not very useful for my original purposes. After thinking on it a bit however I’m quite certain this info may be useful elsewhere but I’ll save that for discussion on follow-on work.