Building an Ubuntu Core appliance image

Creating an appliance image for a single purpose can be quite some effort: you need to take care of your application, build a safe root filesystem and modify the system so it behaves the way you or the application expect. With Ubuntu Core, this effort becomes a lot easier. Ubuntu Core is completely made out of snap packages; the kernel, rootfs and all applications are snap based and benefit from the advantages this package format brings.

Snap packages are transactional. They can automatically roll back on error after an upgrade, and this includes the kernel snap. If a breakage is noticed after upgrading to a new kernel version, the system detects this and automatically rolls back to the former known-working version. The same goes for every other snap in the system, including the root filesystem.

Snap packages are read-only binary filesystem images that support binary delta upgrades generated on the store server. This means an upgrade only downloads the actual binary delta between two snaps, reducing the download cost to a minimum. If your appliance is only attached via 3G, for example, this is a significant cost saver.

Snap packages communicate with each other and with the system hardware through predefined interfaces over which the image creator has full control; your applications only see the hardware and the data from other snap packages if you allow it.

Ubuntu Core images consist of three snap packages by default: a kernel snap driving your hardware, the core snap, which contains a minimal root filesystem, and the gadget snap, which ships a bootloader, the desired partitioning scheme, rules for interface connections and configuration defaults for application snaps included in the image.

If you pick hardware that is already supported by Ubuntu Core (generic x86_64, Raspberry Pi (armv7l) or the DragonBoard 410c (aarch64)), you will not only find the ready-made root filesystem core snap in the store but also ready-made kernel snaps for your hardware. All it takes to create an Ubuntu Core appliance image is an application snap (and, if you have a more complex setup, a fork of the existing gadget snap for your device with adjustments for interfaces and app defaults).

This setup reduces the development time drastically … pretty much down to a one-time operation for the gadget and a constant focus on your application. There is no need to care for any other image-specific bits; you get them for free from the Snap Store, always up to date and security-maintained for five years.

The following walk-through shows the creation of a moderately complex appliance image that does some automatic self-configuration but also allows providing additional setup (e.g. wireless LAN, system user) by plugging in a USB stick.

Picking the required snaps

For a Digital Signage demo where we want to control the attached displays through a web interface, the dashkiosk [1] dashboard managing tool looks like a good candidate. It allows managing, grouping and assigning attached remote displays through a simple drag and drop web UI. The clients find the server via the mDNS protocol, so our appliance image will ship the avahi snap along with the server application.

The attached “displays” are actually simple web browsers. To not waste resources we will have our server image double as a client at the same time and have it ship a web browser snap pointing at the local dashkiosk server.

To have the browser display something on the screen we will also need a graphics stack, so we will ship the mir-kiosk snap as well in our image.

This leaves us with the following list of snaps:

  • dashkiosk (source for this can be found at [2])
  • avahi
  • mir-kiosk
  • “a browser” (this could be chromium-mir-kiosk, but sadly this snap has no avahi support built in, so we will have to create a fork that adds this feature) [3]

We want to not only create a server image but also have auto-connecting clients, so we will create a second image that only contains the browser and display stack.

Now that we have identified which snap packages we want to pre-install in our two images, it is time to read up on how Ubuntu Core images are created. [4] has a walk-through for this; pay special attention to the “required-snaps” option of the model assertion we will create.

The client:

{
  "type": "model",
  "authority-id": "",
  "brand-id": "",
  "series": "16",
  "model": "dashkiosk-client",
  "architecture": "armhf",
  "gadget": "pi3",
  "kernel": "pi2-kernel",
  "required-snaps": [ "avahi", "mir-kiosk", "dashkiosk-client-browser" ],
  "timestamp": "2018-09-25T14:45:25+00:00"
}

The server:

{
  "type": "model",
  "authority-id": "",
  "brand-id": "",
  "series": "16",
  "model": "dashkiosk",
  "architecture": "armhf",
  "gadget": "pi3",
  "kernel": "pi2-kernel",
  "required-snaps": [ "avahi", "dashkiosk", "mir-kiosk", "dashkiosk-client-browser" ],
  "timestamp": "2018-09-25T14:45:25+00:00"
}

The “dashkiosk-client-browser” [3] in the above two model assertions is a fork of the original chromium-mir-kiosk with the original “chromium.launcher” replaced by a script that first does an avahi-resolve-host-name call to obtain the IP of the dashkiosk server via mDNS; beyond this it is largely unmodified.
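
The actual launcher lives in [3]; as a rough, hypothetical sketch (the mDNS name “dashkiosk.local”, the receiver URL and the helper name are assumptions here, and the real fork handles retries and fallbacks), the idea boils down to:

#!/bin/sh
# resolve the dashkiosk server via mDNS before starting the browser
# (hostname, URL and launcher path below are placeholders, not the exact values from [3])
SERVER_IP=$(avahi-resolve-host-name -4 dashkiosk.local | awk '{print $2}')
exec "$SNAP/bin/launch-browser" "http://${SERVER_IP}/receiver"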

When you now use “ubuntu-image” as described in [4], you already end up with a bootable image that has these snap packages pre-installed. They should automatically start up on boot, but since they are securely confined they will not yet be able to properly communicate with each other or with the hardware, because not all interfaces are automatically pre-connected.
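
As a quick reminder of the flow from [4] (the key name and file names below are placeholders): the JSON above gets signed with a key registered to your store account, and the resulting model assertion is handed to ubuntu-image:

$ cat dashkiosk.json | snap sign -k default > dashkiosk.model
$ ubuntu-image dashkiosk.model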

Interfaces and defaults

While you can have default connections of snap packages defined via a store declaration, many of these interfaces are not suitable, security-wise, to be auto-connected everywhere such a snap is installed. Luckily Ubuntu Core gives you the option to make these additional interface connections with a “connections:” entry in the gadget snap of your image [5].

To use these “connections:” entries in your gadget.yaml, all the snaps you want to wire up need to have a snap ID. This means you first need to upload them to the store; in the details page of the store UI for your snap package you can then see the ID hash that you need to use in the gadget.yaml entry.
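
If you prefer the command line over the store web UI, the same ID hash is also shown by “snap info”, e.g. for the avahi snap used below:

$ snap info avahi | grep snap-id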

A forked pi3 gadget example with a gadget.yaml that includes various default connections for dashkiosk, the browser and a few other bits can be found at [6].

To give our modified browser (snap ID: hBB9l3miabfAKr2Dmnzd5RgzmEbMQVbj) the permission to actually use the added “avahi-resolve-host-name” call, we add a plug/slot combination for the “avahi-observe” interface that the avahi snap (dVK2PZeOLKA7vf1WPCap9F8luxTk9Oll) provides to us:

connections:
[...]
- plug: hBB9l3miabfAKr2Dmnzd5RgzmEbMQVbj:avahi-observe
  slot: dVK2PZeOLKA7vf1WPCap9F8luxTk9Oll:avahi-observe
[...]

Such a set-up can be done for any interface combination of your shipped snap packages in the image.

If you scroll down further in the example gadget.yaml above you will also find a “defaults:” field where we define that rsyslog should not log to the SD card (to prevent wearing it out with massive logging) and set the default port of the dashkiosk server to port 80. If your application snap packages support configuration via “snap set/get” [7], you can set all desired defaults through this method.
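
These gadget defaults are applied on first boot as if someone had run “snap set”, so you can inspect or override them on a running system the same way. The key name below is an assumption (check [2] and [6] for the real option names):

$ snap get dashkiosk port
80
$ sudo snap set dashkiosk port=8080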

Considering additional image features

There are some adjustments we need to make to the image; for this we will create a “config snap” that we ship by default. This snap runs a few simple scripts during boot and sets the additional defaults we need by utilizing the available snap interfaces of the system ([8] and [9]).

Since the clients find their server via an mDNS lookup we need to make sure the correct avahi hostname is set on our server (see the “set-name” script in the above config snap trees). We use the “avahi-control” interface for this and connect it in [6] with a slot/plug combination.
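
The real set-name script is in [8]; heavily simplified, and assuming the config snap stages the avahi-utils tools and talks to the avahi daemon through the connected “avahi-control” interface, it boils down to something like:

#!/bin/sh
# publish a fixed mDNS name for the server so the clients can resolve it
# ("dashkiosk" is an assumed name, the real script lives in [8])
avahi-set-host-name dashkiosk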

By default Ubuntu Core always uses UTC as its timezone. The dashkiosk default setup shows a clock on the initial screen of all clients, so one thing we want the image to do is set a proper timezone on boot, so the correct local time shows right from the start. For this we do a simple web lookup against a public geoip service that returns the timezone for the IP we connect from; this is done by the “set-timezone” script in the above config snap tree. To allow our script to access the timezone configuration, we connect our configuration snap to the “timezone-control” interface of the system (note that connecting to system or “core” slots means you do not need to define the slot side in the gadget.yaml, only the plug side needs to be defined).
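
Stripped of all error handling, and with a placeholder URL for the geoip service (the real script in [8] picks a concrete one), the set-timezone logic looks roughly like this; it assumes curl is staged in the config snap:

#!/bin/sh
# ask a geoip service for the timezone of our public IP and apply it
# (geoip.example.com is a placeholder, see [8] for the actual service used)
TZ_NAME=$(curl -s https://geoip.example.com/timezone)
[ -n "$TZ_NAME" ] && timedatectl set-timezone "$TZ_NAME"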

While the above is already enough to have the images work properly on a wired network (Ubuntu Core always defaults to a DHCP-configured ethernet connection without any further configuration), I personally plan to also use this image on the Raspberry Pi3 with a wireless connection.

Network connections are configured using netplan [10] on Ubuntu Core. To configure a WLAN connection we can either manually configure each client board through a serial console (which offers an interactive setup through console-conf/subiquity) or we can write a little tool that monitors the USB port for plugged-in USB sticks and checks whether it finds a netplan configuration file that we define. For this we create the “netplan-import” script in [8] and [9]. This tool uses the “system-observe”, “mount-observe”, “udisks2”, “removable-media”, “network-setup-control” and “shutdown” interfaces (see again the plugs and slots in our gadget.yaml) … and since udisks2 is only provided by the udisks2 snap, we also need to make sure to add that snap to the “required-snaps” of our model assertion.
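
The full netplan-import implementation is in [8] and [9]; the core of what happens when a stick with a config shows up is roughly the following (the mount path and file names are assumptions):

#!/bin/sh
# if a plugged-in USB stick carries a netplan config, copy it into place
# and reboot so the new network setup gets applied on the next boot
if [ -f /media/usbstick/netplan.yaml ]; then
    cp /media/usbstick/netplan.yaml /etc/netplan/90-usb-import.yaml
    reboot
fi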

Now we have a completely self-configuring appliance image. To hook up all our clients to the same WLAN we can walk around with the same USB stick containing the netplan.yaml and configure the systems by simply plugging it in, which causes them to copy that config into place and reboot with the newly configured WLAN connection.

A netplan.yaml for a valid WLAN connection should look like:

network:
  version: 2
  wifis:
    wlan0:
      access-points:
        <YOUR ESSID>: {password: <YOUR WPA PASSWORD>}
      addresses: []
      dhcp4: true

(Replace “<YOUR ESSID>” and “<YOUR WPA PASSWORD>” with the correct values)

Building the final image

Now that we have all the bits together, let us build the final images. To have all the above snaps included, we add the remaining snaps to the “required-snaps” entries of the model assertions:

Server:

"required-snaps": [ "avahi", "udisks2", "dashkiosk-image-config", "dashkiosk", "mir-kiosk", "dashkiosk-client-browser" ],

Client:

"required-snaps": [ "avahi", "udisks2", "dashkiosk-client-image-config", "mir-kiosk", "dashkiosk-client-browser" ],

Make sure all interface connections are set up correctly, as in [6].
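
On a booted test image you can quickly verify that the gadget wired things up as intended, for example for the avahi plugs and slots:

$ snap interfaces | grep avahi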

Sign your model assertion as described in [4] and do a local build of your gadget snap (do not upload it to the store; it would get stuck in manual review there. If you actually want to use a gadget snap in a commercial project, contact Canonical about obtaining a brand store [11]).

When you build your image, use the --extra-snaps option (see [4]) to point to your locally built gadget package.
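
Assuming your local gadget build produced a file called pi3-gadget_16-1_armhf.snap (the name will be whatever snapcraft generated for you), the final build call looks like:

$ ubuntu-image --extra-snaps ./pi3-gadget_16-1_armhf.snap dashkiosk.model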

After the initial flashing of the image to your device/SD card, your kernel and rootfs will automatically receive updates without you doing anything. Any fixes you want to make to the application or configuration snap packages can just be uploaded to the store; your image will automatically refresh and pick them up in a timely manner.

Summary

If you followed the above step-by-step guide you now have a Digital Signage demo appliance image for a Raspberry Pi. Some helpful notes to close:

Do the first boot of the images with ethernet connected; otherwise you will have to wait extra long for systemd to time out trying to make an initial network connection (the boot will not fail but it will take significantly longer, and since the timezone above is set via a geoip lookup, your timezone will not be set either).

Be patient during the first boot if you are on the ARM architecture. Snapd installs all the pre-seeded snap packages on first boot, and this includes a SHA3 checksum verification of the snap files. Snapd is written in Go, and the SHA3 function of Go is extremely slow (1-2 min per pre-seeded snap) on armhf. All subsequent boots will be as fast as you would expect (around 30 seconds to having the graphics on screen).

If you want to log in via ssh, you can create a system-user assertion and put it on the same USB key your netplan.yaml lives on; it will create an ssh user for you so you can inspect the booted system [12]. A tool, provided as a snap package, that makes creating a system-user assertion easy can be found at [13].

Last but not least, you can find ready-made images that followed the above step by step guide under [14].

If there are any open questions, feel free to ask them in the “device” category on https://forum.snapcraft.io/

[1] https://github.com/vincentbernat/dashkiosk.git
[2] https://github.com/ogra1/dashkiosk-snap
[3] https://github.com/ogra1/dashkiosk-client-browser
[4] https://docs.ubuntu.com/core/en/guides/build-device/image-building
[5] https://forum.snapcraft.io/t/the-gadget-snap/696#gadget.yaml
[6] https://github.com/ogra1/pi-kiosk-gadget/blob/master/gadget.yaml
[7] https://forum.snapcraft.io/t/configuration-in-snaps/510
[8] https://github.com/ogra1/dashkiosk-image-config
[9] https://github.com/ogra1/dashkiosk-client-image-config
[10] https://netplan.io/
[11] https://docs.ubuntu.com/core/en/build-store/create
[12] https://docs.ubuntu.com/core/en/reference/assertions/system-user
[13] https://snapcraft.io/make-system-user
[14] http://people.canonical.com/~ogra/snappy/kiosk/

 


Patching u-boot for use in an Ubuntu Core gadget snap

This is the second post in the series about building u-boot based gadget snaps, following Building u-boot gadget snap packages from source.

If you have read the last post in this series, you have likely noticed that there is a uboot.patch file being applied to the board config before building the u-boot binaries. This post will take a closer look at this patch.

As you might know already, Ubuntu Core will perform a fully automatic roll-back of upgrades of the kernel or the core snap (rootfs), if it detects that a reboot after the upgrade has not fully succeeded. If an upgrade of the kernel or core snap gets applied, snapd sets a flag in the bootloader configuration called “snap_mode=” and additionally sets the “snap_try_core=” and/or “snap_try_kernel=” variables.

To set these flags and variables that the bootloader should be able to read at next boot, snapd will need write access to the bootloader configuration.
Now, u-boot is the most flexible of all bootloaders: the configuration can live in a uEnv.txt file, in a boot.scr or boot.ini script on a filesystem, in raw space on the boot media, on some flash storage dedicated to u-boot, or even in a combination of these (and I surely forgot other variations in that list). This setup can vary from board to board and there is no actual standard.

Since it would be a massive amount of work and code to support all possible variations of u-boot configuration management in snapd, the Ubuntu Core team had to decide on one default process and pick a standard here.

Ubuntu Core is designed with completely unattended installations in mind, being the truly rolling Ubuntu, it should be able to upgrade itself at any time over the network and should never corrupt any of its setup or configuration, not even when a power loss occurs in the middle of an update or while the bootloader config is updated. No matter if your device is an embedded industrial controller mounted to the ceiling of a multi level factory hall, a cell tower far out in the woods or some floating sensor device on the ocean, the risk of corrupting any of the bootloader config needs to be as minimal as possible.

Opening a file, pulling it into RAM, changing it, then writing it to a filesystem cache and flushing that in the last step is quite a time-consuming thing. The time window where the system is vulnerable to corruption due to power outage is quite big. Instead we want to atomically toggle a value, preferably directly on disk with no caches at all. This cuts the potential corruption time down to the actual physical write operation, but it also rules out most of the file-based options from the above list (uEnv.txt or boot.scr/.ini) and leaves us with the raw options.

That said, we cannot really enforce an additional partition for a raw environment; a board might have a certain boot process that requires a very specific setup of partitions shipping binary blobs from the vendor before even getting to the bootloader (see the dragonboard-410c: Qualcomm requires 8 partitions with different blobs to initialize the hardware before even getting to u-boot.bin). To not exclude such boards we need to find a more generic setup. The solution here is a compromise between filesystem based and raw … we create an img file with a fixed size (which allows the atomic writing we want) but put it on top of a vfat partition (our system-boot partition that also carries kernel, initrd and dtb) for maximum flexibility.

To make it easier for snapd and the user space side, we define a fixed size (the same size on all boards) for this img file. We also tell u-boot and the userspace tools to use redundancy for this file which allows the desired atomic writing.

Let’s move on to a real-world example, looking at a board I recently created a gadget snap for [1].

I have an old Freescale SabreLite (i.MX6) board lying around here; its native SATA controller and gigabit ethernet make it a wonderful target device for e.g. a NAS or a really fast Ubuntu Core based nextcloud box.

A little research shows it uses the nitrogen6x configuration from the u-boot source tree which is stored in include/configs/nitrogen6x.h

To find the currently used environment setup for this board we just grep for “CONFIG_ENV_IS_IN” in that file and will find the following block:

#if defined(CONFIG_SABRELITE)
#define CONFIG_ENV_IS_IN_MMC
#else
#define CONFIG_ENV_IS_IN_SPI_FLASH
#endif

So this board defines a raw space on the MMC to be used for the environment if we build for the SabreLite, but we want to use CONFIG_ENV_IS_IN_FAT with the right parameters to make use of an uboot.env file from the first vfat partition on the first SD card.

Let’s put this in the config:

 #if defined(CONFIG_SABRELITE)
 #define CONFIG_ENV_IS_IN_MMC
+#undef CONFIG_ENV_IS_IN_MMC
+#define CONFIG_ENV_IS_IN_FAT
 #else
 #define CONFIG_ENV_IS_IN_SPI_FLASH
 #endif

If we just set this we’ll run into build errors though, since the CONFIG_ENV_IS_IN_FAT also wants to know which interface, device and filename it should use:

 #if defined(CONFIG_SABRELITE)
 #define CONFIG_ENV_IS_IN_MMC
+#undef CONFIG_ENV_IS_IN_MMC
+#define CONFIG_ENV_IS_IN_FAT
+#define FAT_ENV_INTERFACE "mmc"
+#define FAT_ENV_DEVICE_AND_PART "1"
+#define FAT_ENV_FILE "uboot.env"
 #else
 #define CONFIG_ENV_IS_IN_SPI_FLASH
 #endif

So here we tell u-boot that it should use mmc device number 1 and read a file called uboot.env.

FAT_ENV_DEVICE_AND_PART can actually take a partition number, but if we do not set it, it will try to automatically use the very first partition found … (so “1” is equivalent to “1:1” in this case … on something like the dragonboard where the vfat is actually the 8th partition we use “1:8”).

While the above patch would already work with some uboot.env file, it would not yet work with the one we need for Ubuntu Core. Remember the atomic writing thing from above? This requires us to set the CONFIG_SYS_REDUNDAND_ENVIRONMENT option too (note that I did not typo this, the option is really called “REDUNDAND” for whatever reason).
Setting this option tells u-boot that there is a different header on the file and that write operations should be done atomically.

Ubuntu Core defaults to a fixed file size for uboot.env. We expect the file to be exactly 128k big, so let’s find the “CONFIG_ENV_SIZE” option in the config file and adjust it as well if it defines a different size:

/* Environment organization */
-#define CONFIG_ENV_SIZE (8 * 1024)
+#define CONFIG_ENV_SIZE (128 * 1024)

 #if defined(CONFIG_SABRELITE)
 #define CONFIG_ENV_IS_IN_MMC
+#undef CONFIG_ENV_IS_IN_MMC
+#define CONFIG_ENV_IS_IN_FAT
+#define FAT_ENV_INTERFACE "mmc"
+#define FAT_ENV_DEVICE_AND_PART "1"
+#define FAT_ENV_FILE "uboot.env"
+#define CONFIG_SYS_REDUNDAND_ENVIRONMENT
 #else
 #define CONFIG_ENV_IS_IN_SPI_FLASH
 #endif

Trying to build the above will actually end up with a build error complaining that fat writing is not enabled, so we will have to add that too …

One other bit Ubuntu Core expects is that we can load a proper initrd.img without having to mangle or modify it in the kernel snap (e.g. by turning it into a uInitrd), so we need to define the CONFIG_SUPPORT_RAW_INITRD option as well, since it is not set by default for this board.

Our final patch now looks like:

/* Environment organization */
-#define CONFIG_ENV_SIZE (8 * 1024)
+#define CONFIG_ENV_SIZE (128 * 1024)

 #if defined(CONFIG_SABRELITE)
 #define CONFIG_ENV_IS_IN_MMC
+#undef CONFIG_ENV_IS_IN_MMC
+#define CONFIG_ENV_IS_IN_FAT
+#define FAT_ENV_INTERFACE "mmc"
+#define FAT_ENV_DEVICE_AND_PART "1"
+#define FAT_ENV_FILE "uboot.env"
+#define CONFIG_SYS_REDUNDAND_ENVIRONMENT
 #else
 #define CONFIG_ENV_IS_IN_SPI_FLASH
 #endif

+#define CONFIG_FAT_WRITE
+#define CONFIG_SUPPORT_RAW_INITRD

With this we are now able to build a u-boot.bin that will handle the Ubuntu Core uboot.env file from the system-boot partition, read and write the environment from there and allow snapd to modify the same file from user space on a booted system when kernel or core snap updates occur.

The actual uboot.env file needs to be created from an input file using the “mkenvimage” tool with the “-r” (redundant) and “-s 131072” (128k size) options. In the branch at [1] you will find the call of this command in the snapcraft.yaml file, in the “install” script snippet. It uses the uboot.env.in text file that stores the default environment we use …
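
Concretely, the call in that install snippet boils down to:

mkenvimage -r -s 131072 -o uboot.env uboot.env.in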

The next post in this series will take a closer look at the contents of this uboot.env.in file, what we actually need in there to achieve proper rollback handling and how to obtain the default values for it.

If you have any questions about the process, feel free to ask here in the comments or open a thread on https://forum.snapcraft.io in the device category.

[1] https://github.com/ogra1/sabrelite-gadget

Dock a Snap…

I recently had to help set up an image build environment for UbuntuCore images for someone who only allows docker as infrastructure.

When you want to build an image from a blessed model assertion for e.g. the pi2, pi3 or dragonboard, you need to use the “snap known” command (see below for the full syntax) to download the Canonical-signed assertion. The snap command requires snapd to run inside your container. To build images we need to use ubuntu-image, which is also provided as a snap, so we not only want snapd running for the “snap” command, we also want the container to be able to execute the snaps we install. After quite a bit of back and forth and disabling quite a few security features inside the container setup, I came up with https://github.com/ogra1/snapd-docker, which is a simple build script for setting up a container that can execute snaps.

I hope people needing to use docker and wanting to use snaps inside containers find this helpful … pull requests for improvements of the script or documentation will be happily reviewed on github.

Here the README.md of the tree:

Create and run a docker container that is able to run snap packages

This script allows you to create docker containers that are able to run and
build snap packages.

WARNING NOTE: This will create a container with security options disabled. This is an unsupported setup; if you have multiple snap packages inside the same container they will be able to break out of their confinement and see each other’s data and processes. Use this setup to build or test single snap packages, but do not rely on security inside the container.

usage: build.sh [options]

  -c|--containername (default: snappy)
  -i|--imagename (default: snapd)

Examples

Creating a container with defaults (image: snapd, container name: snappy):

$ sudo apt install docker.io
$ ./build.sh

If you want to create further containers using the same image, use the --containername option on a subsequent run of the ./build.sh script.

$ ./build.sh -c second
$ sudo docker exec second snap list
Name Version Rev Developer Notes
core 16-2.26.4 2092 canonical -
$

Installing and running a snap package:

This will install the htop snap and will show the running processes inside the container after connecting the right snap interfaces.

$ sudo docker exec snappy snap install htop
htop 2.0.2 from 'maxiberta' installed
$ sudo docker exec snappy snap connect htop:process-control
$ sudo docker exec snappy snap connect htop:system-observe
$ sudo docker exec -ti snappy htop

Building snaps using the snapcraft snap package (using the default “snappy” name):

Install some required debs, install the snapcraft snap package to build snap packages, pull some remote branch and build a snap from it using the snapcraft command.

$ sudo docker exec snappy sh -c 'apt -y install git'
$ sudo docker exec snappy snap install snapcraft --edge --classic
$ sudo docker exec snappy sh -c 'git clone https://github.com/ogra1/beaglebone-gadget'
$ sudo docker exec snappy sh -c 'cd beaglebone-gadget; cp cross* snapcraft.yaml; TMPDIR=. snapcraft'
...
./scripts/config_whitelist.txt . 1>&2
Staging uboot
Priming uboot
Snapping 'bbb' |
Snapped bbb_16-0.1_armhf.snap
$

Building an UbuntuCore image for a RaspberryPi3:

Install some debs required to work around a bug in the ubuntu-image classic snap, install ubuntu-image, retrieve the model assertion for a pi3 image using the “snap known” command and build the image using ubuntu-image.

$ sudo docker exec snappy sh -c 'apt -y install libparted dosfstools' # work around bug 1694982
Reading package lists... Done
Building dependency tree
Reading state information... Done
...
Setting up libparted2:amd64 (3.2-17) ...
Setting up dosfstools (4.0-2ubuntu1) ...
Processing triggers for libc-bin (2.24-9ubuntu2) ...
$ sudo docker exec snappy snap install ubuntu-image --classic --edge
ubuntu-image (edge) 1.0+snap3 from 'canonical' installed
$ sudo docker exec snappy sh -c "snap known --remote model series=16 model=pi3 brand-id=canonical >pi3.model"
$ sudo docker exec snappy ubuntu-image pi3.model
Fetching core
Fetching pi2-kernel
Fetching pi3
$ sudo docker exec snappy sh -c 'ls *.img'
pi3.img

Building u-boot Gadget Snap packages from source

When we started doing gadget snap packages for UbuntuCore images, there was no snapcraft. Gadgets were assembled from locally built bootloader binaries by setting up a filesystem structure that reflects the snap content, using pre-created meta/snap.yaml and meta/gadget.yaml files and then calling mksquashfs.

When snapcraft started to support the gadget format we added a very simple snapcraft.yaml that simply used the dump plugin to copy the prebuilt binaries into place in the resulting snap.

While we provide uboot.patch files in the gadget source trees, nothing is really built from source at snap build time, and doing your own modifications means you need to reach out to someone who has the necessary knowledge of how the u-boot.img and the SPL were built. This was a long-standing wart in our setup, and there was a desire for a long time to make gadget creation a completely reproducible process based on upstream u-boot sources.

A typical build process would look like:

- git clone git://git.denx.de/u-boot.git
- switch to the right release branch
- apply the uboot.patch to the tree
- run make $config_of_your_board
- run make (… and if you cross build, set up the required environment first)

After this the resulting binaries used to be copied into the prebuilt/ dir. The snapcraft build was completely disconnected from this process.

Auto-building u-boot from source with snapcraft

Snapcraft is nowadays well able to define all of these steps in the snapcraft.yaml, actually build a useful binary for us and put it in the right place in the final snap. So let’s go step by step through creating a working “parts:” entry for the snapcraft.yaml that provides the above steps:

parts:
  uboot:
    plugin: make
    source: git://git.denx.de/u-boot.git
    source-branch: v2017.01
    artifacts: [MLO, u-boot.img]

We use the “make” plugin (which nicely provides the “artifacts:” option for us to cherry-pick the binaries from the u-boot build to be put into the snap), point to the upstream u-boot source and make it use the v2017.01 branch.

   prepare: |
     git apply ../../../uboot.patch
     make am335x_boneblack_config

With this “prepare:” scriptlet we tell the plugin to apply our uboot.patch to the checked out branch and to configure it for a beaglebone black before starting the build.

   install: |
     tools/mkenvimage -r -s 131072 -o $SNAPCRAFT_PART_INSTALL/uboot.env ../../../uboot.env.in
     cd $SNAPCRAFT_PART_INSTALL/; ln -s uboot.env uboot.conf

If you have worked with u-boot gadgets before you know how important the uboot.env file that carries our UbuntuCore bootloader setup is. It always needs to be the right size (-s 131072), redundant (-r) to allow atomic writing and we ship the input file as uboot.env.in in our source trees. In the “install:” scriptlet we take this input file and create a proper environment image file from it using the mkenvimage tool our build has just created before. The ubuntu-image and “snap prepare-image” commands will look for an “uboot.conf” file at image creation time, so we create a symlink pointing to our binary env file.

   build-packages:
     - libpython2.7-dev
     - build-essential
     - bc

Dependencies to build u-boot get defined in the “build-packages:” option of the part. Obviously we need a compiler (build-essential), some build scripts still use python2.7 headers (libpython2.7-dev) and when test building there is a complaint about bc missing that is not fatal (but disturbing enough to also install the bc package as a build dependency).

After adding a bit of the general meta data like name, version, summary and description as well as all the snap informational data like type, (target) architecture, confinement type and stability grade, the resulting snapcraft.yaml looks like:

name: bbb
version: 16-0.1
summary: Beagle Bone Black
description: |
 Bootloader files and partitioning data to create a
 bootable Ubuntu Core image for the Beaglebone Black.
type: gadget
architectures:
  - armhf
confinement: strict
grade: stable

parts:
  uboot:
    plugin: make
    source: git://git.denx.de/u-boot.git
    source-branch: v2017.01
    artifacts: [MLO, u-boot.img]
    prepare: |
      git apply ../../../uboot.patch
      make am335x_boneblack_config
    install: |
      tools/mkenvimage -r -s 131072 -o $SNAPCRAFT_PART_INSTALL/uboot.env ../../../uboot.env.in
      cd $SNAPCRAFT_PART_INSTALL/; ln -s uboot.env uboot.conf
    build-packages:
      - libpython2.7-dev
      - build-essential
      - bc

This snapcraft.yaml is enough to build a beaglebone gadget snap natively on an armhf host, so it will work if you run “snapcraft” in the checked-out source on a Raspberry Pi install or if you let Launchpad or build.snapcraft.io do the build for you … but … typically, while developing, you want to build on your workstation PC, not on some remote builder or a slow ARM board. With some modifications to the snapcraft.yaml we can luckily make that possible very easily; let’s make a copy of our snapcraft.yaml (I call it crossbuild-snapcraft.yaml in my trees) and add some changes to that.

Allow cross Building

First of all, we want a cross compiler on the host machine, so we will add the gcc-arm-linux-gnueabi package to the list of build dependencies.

   build-packages:
     - libpython2.7-dev
     - build-essential
     - bc
     - gcc-arm-linux-gnueabi

We also need to override the “make” call to carry info about our cross compiler in the CROSS_COMPILE environment variable. We can use a “build:” scriptlet for this.

   build: |
     CROSS_COMPILE=arm-linux-gnueabi- make

When cross building, the “artifacts:” line sadly no longer does what it should (I assume this is a bug); as a quick workaround we can extend the “install:” script snippet with a simple cp command.

   install: |
     cp MLO u-boot.img $SNAPCRAFT_PART_INSTALL/
     tools/mkenvimage -r -s 131072 -o $SNAPCRAFT_PART_INSTALL/uboot.env ../../../uboot.env.in
     cd $SNAPCRAFT_PART_INSTALL/; ln -s uboot.env uboot.conf

With all these changes in place our crossbuild-snapcraft.yaml now looks like:

name: bbb
version: 16-0.1
summary: Beagle Bone Black
description: |
 Bootloader files and partitioning data to create a
 bootable Ubuntu Core image for the Beaglebone Black.
type: gadget
architectures:
  - armhf
confinement: strict
grade: stable

parts:
  uboot:
    plugin: make
    source: git://git.denx.de/u-boot.git
    source-branch: v2017.01
    artifacts: [MLO, u-boot.img]
    prepare: |
      git apply ../../../uboot.patch
      make am335x_boneblack_config
    build: |
      CROSS_COMPILE=arm-linux-gnueabi- make
    install: |
      cp MLO u-boot.img $SNAPCRAFT_PART_INSTALL/
      tools/mkenvimage -r -s 131072 -o $SNAPCRAFT_PART_INSTALL/uboot.env ../../../uboot.env.in
      cd $SNAPCRAFT_PART_INSTALL/; ln -s uboot.env uboot.conf
    build-packages:
      - libpython2.7-dev
      - build-essential
      - bc
      - gcc-arm-linux-gnueabi

So with the original snapcraft.yaml we can now let our tree auto-build on build.snapcraft.io; when we check out the source locally and want to build on a PC, a simple “cp crossbuild-snapcraft.yaml snapcraft.yaml && snapcraft” will do a local cross build.

Creating the gadget.yaml

Just building the bootloader binaries is indeed not enough to create a bootable image: the binaries need to go in the right place, the bootloader needs to know where the devicetree file can be found, and a working image should also have a proper partition table. For this purpose we need to create a gadget.yaml file with the right information.

We create a gadget.yaml file in the source tree and tell the system that the devicetree file is called am335x-boneblack and that it gets shipped by the kernel snap.

device-tree: am335x-boneblack
device-tree-origin: kernel

Now we add a “volumes:” entry that tells the system about the bootloader type (grub or u-boot) and defines which type of partition table we want (either “gpt” for a GUID partition table or “mbr” for an msdos type one).

volumes:
  disk:
    bootloader: u-boot
    schema: mbr

(Note that in newer versions of the ubuntu-image tool the --output option to give your image a meaningful name has been deprecated; instead the name of the volume from the gadget snap is used now. To give your image a more meaningful name you might want to change “disk:” above to something else, like “beagleboneblack:”, to get a beagleboneblack.img file.)

The last bit we need to do is to give our volume a “structure:”, i.e. a partition table but also info where to write the raw bootloader bits (MLO and u-boot.img).

Looking at the elinux wiki [3] for how to create a bootable SD card for the beaglebone black, we find lines like:

dd if=MLO of=/dev/sdX count=1 seek=1 conv=notrunc bs=128k
dd if=u-boot.img of=/dev/sdX count=2 seek=1 conv=notrunc bs=384k

ubuntu-image will not just use dd to write the bootloader blobs into the right place, so we need to translate these lines into proper entries for the volume structure. Let’s take a closer look. The MLO line tells us dd will use 128k (131072 byte) blocks (bs=), write at an offset of one block from the start of the card (seek=1) and reserve one block for the MLO payload (count=1). And indeed, there is no filesystem in use; it will be written “bare”.
This gives us the first entry in the volume structure.

    structure:
      - name: mlo
        type: bare
        size: 131072
        offset: 131072
        content:
          - image: MLO

The u-boot.img dd command uses a block size of 384k (393216 bytes), one block offset from the start of the image and reserves two blocks as possible size for the u-boot.img binary and will also write the binary raw into place (type: bare).

      - name: u-boot
        type: bare
        size: 786432
        offset: 393216
        content:
          - image: u-boot.img

Currently every UbuntuCore u-boot image expects to find the bootloader configuration, kernel, initrd and devicetree file in a vfat partition (type: 0C) called system-boot. To have enough wiggle room we’ll make that partition 128M big, which leaves enough space for even gigantic kernel binaries or initrds. The ubuntu-image tool will put our uboot.env file into that partition from the start.

      - name: system-boot
        type: 0C
        filesystem: vfat
        filesystem-label: system-boot
        size: 128M

The final gadget.yaml file will now look like:

device-tree: am335x-boneblack
device-tree-origin: kernel
volumes:
  disk:
    bootloader: u-boot
    schema: mbr
    structure:
      - name: mlo
        type: bare
        size: 131072
        offset: 131072
        content:
          - image: MLO
      - name: u-boot
        type: bare
        size: 786432
        offset: 393216
        content:
          - image: u-boot.img
      - name: system-boot
        type: 0C
        filesystem: vfat
        filesystem-label: system-boot
        size: 128M

As you can see, building a gadget snap is fairly easy and only requires four files (snapcraft.yaml, gadget.yaml, uboot.patch and uboot.env.in) in a github tree that you can then have auto-built on build.snapcraft.io. In subsequent posts I will explain the patch and uboot.env.in files in more detail. I will also describe the setup of default interfaces a gadget can provide as well as how to set some system defaults from the gadget.yaml file. If you want to take a look at the full source tree used for the above example, go to [1].

Documentation of the gadget snap syntax can be found at [2]. The dd commands used as input for the gadget.yaml file can be found at [3], and documentation on how to build an image out of a gadget snap is at [4]. If you have any questions feel free to ask at [5] (I recommend using the “device” category).

[1] https://github.com/ogra1/beaglebone-gadget
[2] https://forum.snapcraft.io/t/the-gadget-snap/696
[3] http://elinux.org/Beagleboard:U-boot_partitioning_layout_2.0
[4] https://docs.ubuntu.com/core/en/guides/build-device/image-building
[5] https://forum.snapcraft.io/

Use UbuntuCore to create a WiFi AP with nextcloud support on a Pi3 in minutes.

UbuntuCore is the rolling release of Ubuntu.
It is self updating and completely built out of snap packages (including kernel, boot loader and root file system) which provides transactional updates, manual and automatic roll-back and a high level of system security to all parts of the system.

Once installed you have a secure zero maintenance OS that you can easily turn into a powerful appliance by simply adding a few application snaps to it.

The Raspberry Pi3 comes with WLAN and ethernet hardware on board, which makes it a great candidate to turn into a WiFi AccessPoint. But why stop here? With UbuntuCore we can go further and install a WebRTC solution (like spreedme) for making in-house video calls, a UPnP media server to serve our music and video collections, or an OpenHAB home automation device … or we can actually turn it into a personal cloud using the nextcloud snap.

The instructions below walk you through a basic install of UbuntuCore, setting up a WLAN AP, adding an external USB disk to hold data for nextcloud and installing the nextcloud snap.

You need a Raspberry Pi3 and an SD card.

Preparation:

Create an account at https://login.ubuntu.com/ and upload your public ssh key (~/.ssh/id_rsa.pub) in the “SSH Keys” section. This is where your UbuntuCore image will pull the ssh credentials from to provide you login access to the system (by default UbuntuCore does not create a local console login, only remote logins using this ssh key will be allowed).

Download the image from:
http://releases.ubuntu.com/ubuntu-core/16/ubuntu-core-16-pi3.img.xz

…or if you are brave and like to live on the edge you can use a daily build of the edge channel (…bugs included 😉) at:
http://people.canonical.com/~ogra/snappy/all-snaps/daily/current/ubuntu-core-16-pi3.img.xz

Write the image to SD:

Put your SD card into your PC’s SD card reader …
Make sure it did not get auto-mounted; in case it did, do not use the file manager to unmount it but unmount it from the command line (in the example below my USB card reader shows the SD as /dev/sdb to the system):

ogra@pc~$ mount | grep /dev/sdb # check if anything is mounted
...
ogra@pc~$ sudo umount /dev/sdb1 # unmount the partition
ogra@pc~$ sudo umount /dev/sdb2
ogra@pc~$ 

Use the following command to write the image to the card:

ogra@pc~$ xzcat /path/to/ubuntu-core-16-pi3.img.xz | sudo dd of=/dev/sdb bs=8M
ogra@pc~$ 

Plug the SD into your pi3, plug in an ethernet cable and either a serial cable or a monitor and keyboard, and power up the board. Eventually you will see a “Please press enter” message on the screen; hitting Enter will start the installer.

Going through the installer …

Configure eth0 as default interface (the WLAN driver is broken in the current pi3 installer. Simply ignore the wlan0 device at this point).

Give your login.ubuntu.com account info so the system can set up your ssh login.

The last screen of the installer will tell you the ssh credentials to use.

Ssh into the board, set a hostname and call sudo reboot (to work around the WLAN breakage):

ogra@pc:~$ ssh ogra@192.168.2.82

...
It's a brave new world here in Snappy Ubuntu Core! This machine
does not use apt-get or deb packages. Please see 'snap --help'
for app installation and transactional updates.

ogra@localhost:~$ sudo hostnamectl set-hostname pi3
ogra@localhost:~$ sudo reboot

Now that we have installed our basic system, we are ready to add some nice application snaps to turn it into a shiny WiFi AP with a personal cloud to use from our phone and desktop systems.

Install and set up your personal WiFi Accesspoint:

ogra@pi3:~$ snap install wifi-ap
ogra@pi3:~$ sudo wifi-ap.setup-wizard
Automatically selected only available wireless network interface wlan0 
Which SSID you want to use for the access point: UbuntuCore 
Do you want to protect your network with a WPA2 password instead of staying open for everyone? (y/n) y 
Please enter the WPA2 passphrase: 1234567890 
Insert the Access Point IP address: 192.168.1.1 
How many host do you want your DHCP pool to hold to? (1-253) 50 
Do you want to enable connection sharing? (y/n) y 
Which network interface you want to use for connection sharing? Available are sit0, eth0: eth0 
Do you want to enable the AP now? (y/n) y 
In order to get the AP correctly enabled you have to restart the backend service:
 $ systemctl restart snap.wifi-ap.backend 
2017/04/29 10:54:56 wifi.address=192.168.1.1 
2017/04/29 10:54:56 wifi.netmask=ffffff00 
2017/04/29 10:54:56 share.disabled=false 
2017/04/29 10:54:56 wifi.ssid=Snappy 
2017/04/29 10:54:56 wifi.security=wpa2 
2017/04/29 10:54:56 wifi.security-passphrase=1234567890 
2017/04/29 10:54:56 disabled=false 
2017/04/29 10:54:56 dhcp.range-start=192.168.1.2 
2017/04/29 10:54:56 dhcp.range-stop=192.168.1.51 
2017/04/29 10:54:56 share.network-interface=eth0 
Configuration applied succesfully 
ogra@pi3:~$

Set up a USB key as a permanently mounted disk:

Plug your USB Disk/Key into the Pi3 and immediately call the dmesg command afterwards so you can see the name of the device and its partitions … (in my case the device name is /dev/sda and there is a vfat partition on the device called /dev/sda1)

Now create /etc/systemd/system/media-usbdisk.mount with the following content:

[Unit] 
Description=Mount USB Disk

[Mount]
What=/dev/sda1 
Where=/media/usbdisk 
Options=defaults 

[Install] 
WantedBy=multi-user.target

And enable it:

ogra@pi3:~$ sudo systemctl daemon-reload 
ogra@pi3:~$ sudo systemctl enable media-usbdisk.mount 
ogra@pi3:~$ sudo systemctl start media-usbdisk.mount 
ogra@pi3:~$

Install the nextcloud snap:

ogra@pi3:~$ snap install nextcloud 
ogra@pi3:~$

Allow nextcloud to access devices in /media:

ogra@pi3:~$ snap connect nextcloud:removable-media 
ogra@pi3:~$

Wait a bit; nextcloud’s auto-setup takes a few minutes (make some tea or coffee) …

Turn on https:

ogra@pi3:~$ sudo nextcloud.enable-https self-signed 
Generating key and self-signed certificate... done 
Restarting apache... done 
ogra@pi3:~$

Now you can connect to your new WiFi AP SSID and point your browser to https://192.168.1.1/ afterwards.

Add an exception for the self-signed security cert (note that nextcloud.enable-https also supports Let’s Encrypt certs in case you own a domain; just call “sudo nextcloud.enable-https -h” to get all the info) and configure nextcloud via the web UI.
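
For example, if the box is reachable under a public domain, the Let’s Encrypt mode should (to the best of my knowledge, do verify against the -h output) be as simple as:

$ sudo nextcloud.enable-https lets-encrypt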

In the nextcloud UI, install the “External Storage Support” app from the apps section and create a new local storage pointing to the /media/usbdisk dir so your users can store their files on the external disk.

An alternate approach to Ubuntu Phone Web App containers

It has bothered me for a while that Web Apps on the Ubuntu Phone have their back button at the top left of the screen. It bothers me even more that the toolbar constantly collapses and expands during browsing … most of the time it does that for me when I just want to tap on a link, and the page content suddenly moves 50px up or down…

Since Dekko exists on the Ubuntu Phone I became a heavy user of it for reading my mails and I really fell in love with the new bottom menu that Dan Chapman integrated so nicely (based on the circle menu work from Nekhelesh Ramananthan)

So this weekend it struck me to simply combine a WebView with this menu work to get a shiny bottom navigation menu. I grabbed the recent Google+ app from Szymon Waliczek, the latest source of Dekko and some bits from the webbrowser-app tree and combined them into a new webapp-container-like framework.

You can find an experimental G+ click package (one that surely wins the contest for the ugliest icon) here.

I pushed the code to launchpad together with a README that describes how you can use it in your own WebApp, you can branch it with:

bzr branch lp:~ogra/junk/alternate-webapp-container

Meet node-snapper a helper to easily create .snap packages of your node.js projects

When I created the “Snappy Chatroom” package for WebRTC video chat on snappy, I used node.js to provide the necessary server bits. While building the snap I noticed how hard it is to actually put the necessary binaries and node modules in place, especially if you want your snap to be arch independent (JavaScript is arch independent, so our snap package should indeed be too).

The best way I found was to actually build node.js from source on the respective target arch and run “npm install” for the necessary modules, then tarring up the matching dirs and putting them into my snap package tree.

This is quite some effort !!!

I’m a lazy programmer and surely do not want to do that every time I update the package. Luckily there are already binaries of node for all architectures in the Ubuntu archive, and it is not too hard to make use of them to run npm install in a qemu-user-static chroot for all target arches and to automate the creation of the respective tarballs. As a little bonus I thought it would be nice to have it automatically generate the proper snap execution environment in the form of a service startup script (with a properly set LD_LIBRARY_PATH etc.) so you only need to point node to the .js file to execute.

This brought me to write node-snapper, a tool that does exactly the above. It makes it easy to just maintain the actual code I care about in a tree (the site itself and the packaging data for the snap). I leave caring for node itself and for the modules to the archive and the npm upstreams respectively, and just pull in their work as needed.

See https://launchpad.net/node-snapper for the upstream code.

To outline how node-snapper works I took some notes below how I roll the chatroom snap as an example.

Using node-snapper:

First we create a work dir for our snap package.

ogra@anubis:~$ mkdir package

To create the nodejs and npm module setup for our snap package we use node-snapper, let’s branch this so we can use it later.

ogra@anubis:~$ bzr branch lp:node-snapper

Now we move into the package dir and let node-snapper create the tarballs with the “express”, “webrtc.io”, “webrtc.io-client” and “ws” node modules since chatroom makes use of all of them.

ogra@anubis:~$ cd package
ogra@anubis:~/package$ sudo ../node-snapper/node-snapper express webrtc.io webrtc.io-client ws
...

This created three files.

ogra@anubis:~/package$ ls
amd64-dir.tgz  armhf-dir.tgz  start-service.sh

We unpack the tarballs and remove them.

ogra@anubis:~/package$ tar xf amd64-dir.tgz
ogra@anubis:~/package$ tar xf armhf-dir.tgz
ogra@anubis:~/package$ ls
amd64  amd64-dir.tgz  armhf  armhf-dir.tgz  start-service.sh
ogra@anubis:~/package$ rm *.tgz
ogra@anubis:~/package$ ls
amd64  armhf  start-service.sh

… and branch the chatroom site and packaging code branch.

ogra@anubis:~/package$ bzr branch lp:~ogra/+junk/chatroom
ogra@anubis:~/package$ mv chatroom/* .
ogra@anubis:~/package$ rm -rf chatroom/
...
ogra@anubis:~/package$ ls site/
add.png      cam_on.png    expand.png      fullscreen.png  mute.png   server.js  unmute.png
cam_off.png  collapse.png  index.html      script.js  style.css  webrtc.io.js
ogra@anubis:~/package$ ls meta/
icon.png  icon.svg  package.yaml  readme.md

The file we want node to execute on startup is the server.js file in the “site” dir in our snap package. We edit start-service.sh so that the MY_EXECUTABLE variable looks like:

MY_EXECUTABLE=site/server.js

This is it, we are ready to roll a .snap package out of this

ogra@anubis:~/package$ cd ..
ogra@anubis:~$ snappy build package
...
ogra@anubis:~$ ls
chatroom.ogra_0.1-5_multi.snap  package

As you can see, node-snapper makes supplying JavaScript/node.js code as a snap package a breeze. You only need to keep your site and package files in a git or bzr tree, and node-snapper will always provide you with the latest node.js setup and npm-installed modules as needed at package build time.

Indeed we now want to test our snap package. I have a RPi2 running snappy at 192.168.2.57 with enabled developer mode, so I can easily use snappy-remote to install the package.

ogra@anubis:~$ snappy-remote --url=ssh://192.168.2.57 install chatroom.ogra_0.1-5_multi.snap

The service should start automatically. Opening chromium, pointing it to http://192.168.2.57:6565 and approving access to microphone and camera will now give us a Video chat (pointing an android phone to it at the same time enables you to talk to yourself while watching you from different angles 😉 … note that the mute button is very helpful when doing this …)

I hope we will see some more node.js projects land in the snappy app store soon. A PPA with node-snapper to make it easier to install should be ready next week, and if I see there is demand I will also push it to universe in the Ubuntu archive.

I hope you found that little howto helpful 🙂