Patching u-boot for use in an Ubuntu Core gadget snap

This is the second post in the series about building u-boot based gadget snaps, following Building u-boot gadget snap packages from source.

If you have read the last post in this series, you have likely noticed that there is a uboot.patch file being applied to the board config before building the u-boot binaries. This post will take a closer look at this patch.

As you might know already, Ubuntu Core will perform a fully automatic roll-back of kernel or core snap (rootfs) upgrades if it detects that the reboot after the upgrade has not fully succeeded. When an upgrade of the kernel or core snap gets applied, snapd sets a flag called "snap_mode=" in the bootloader configuration and additionally sets the "snap_try_core=" and/or "snap_try_kernel=" variables.
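To make this concrete, after such an update the relevant bits of the u-boot environment look roughly like the following (the variable names are the ones snapd uses, the revision file names are made-up examples):

snap_mode=try
snap_try_core=core_1234.snap
snap_try_kernel=pi2-kernel_56.snap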

To set these flags and variables so that the bootloader can read them at the next boot, snapd needs write access to the bootloader configuration.
Now, u-boot is the most flexible of all bootloaders: the configuration can live in a uEnv.txt file, in a boot.scr or boot.ini script on a filesystem, in raw space on the boot media, on some flash storage dedicated to u-boot, or even in a combination of these (and I have surely forgotten other variations in that list). This setup can vary from board to board and there is no actual standard.

Since it would be a massive amount of work and code to support all possible variations of u-boot configuration management in snapd, the Ubuntu Core team had to decide on one default process and pick a standard here.

Ubuntu Core is designed with completely unattended installations in mind. Being the truly rolling Ubuntu, it should be able to upgrade itself at any time over the network and should never corrupt any of its setup or configuration, not even when a power loss occurs in the middle of an update or while the bootloader config is updated. Whether your device is an embedded industrial controller mounted to the ceiling of a multi-level factory hall, a cell tower far out in the woods or some floating sensor device on the ocean, the risk of corrupting any of the bootloader config needs to be as minimal as possible.

Opening a file, pulling it to RAM, changing it, then writing it to a filesystem cache and flushing that in the last step is quite a time-consuming thing. The time window where the system is vulnerable to corruption due to power outage is quite big. Instead we want to atomically toggle a value; preferably directly on disk with no caches at all. This cuts the potential corruption time down to the actual physical write operation, but also rules out most of the file based bits from the above list (uEnv.txt or boot.scr/.ini) and leaves us with the raw options.

That said, we cannot really enforce an additional partition for a raw environment; a board might have a certain boot process that requires a very specific setup of partitions shipping binary blobs from the vendor before even getting to the bootloader (see e.g. the dragonboard-410c: Qualcomm requires 8 partitions with different blobs to initialize the hardware before even getting to u-boot.bin). To not exclude such boards we need to find a more generic setup. The solution here is a compromise between filesystem based and raw … we create an img file of fixed size (which allows the atomic writing we want) but put it on top of a vfat partition (our system-boot partition that also carries kernel, initrd and dtb) for maximum flexibility.

To make it easier for snapd and the user space side, we define a fixed size (the same size on all boards) for this img file. We also tell u-boot and the userspace tools to use redundancy for this file which allows the desired atomic writing.

Let's move on to a real-world example, looking at a board I recently created a gadget snap for [1].

I have an old Freescale SabreLite (i.MX6) board lying around here; its native SATA controller and gigabit ethernet make it a wonderful target device for e.g. a NAS or a really fast Ubuntu Core based nextcloud box.

A little research shows it uses the nitrogen6x configuration from the u-boot source tree, which is stored in include/configs/nitrogen6x.h.

To find the currently used environment setup for this board we just grep for “CONFIG_ENV_IS_IN” in that file and will find the following block:

#if defined(CONFIG_SABRELITE)
#define CONFIG_ENV_IS_IN_MMC
#else
#define CONFIG_ENV_IS_IN_SPI_FLASH
#endif
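For reference, that grep (run from the top of the u-boot source tree) is simply:

$ grep -n "CONFIG_ENV_IS_IN" include/configs/nitrogen6x.h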

So this board defines a raw space on the MMC to be used for the environment when building for the SabreLite, but we want to use CONFIG_ENV_IS_IN_FAT with the right parameters to make use of a uboot.env file on the first vfat partition of the first SD card.

Let's tell the config about this:

 #if defined(CONFIG_SABRELITE)
 #define CONFIG_ENV_IS_IN_MMC
+#undef CONFIG_ENV_IS_IN_MMC
+#define CONFIG_ENV_IS_IN_FAT
 #else
 #define CONFIG_ENV_IS_IN_SPI_FLASH
 #endif

If we just set this we’ll run into build errors though, since the CONFIG_ENV_IS_IN_FAT also wants to know which interface, device and filename it should use:

 #if defined(CONFIG_SABRELITE)
 #define CONFIG_ENV_IS_IN_MMC
+#undef CONFIG_ENV_IS_IN_MMC
+#define CONFIG_ENV_IS_IN_FAT
+#define FAT_ENV_INTERFACE "mmc"
+#define FAT_ENV_DEVICE_AND_PART "1"
+#define FAT_ENV_FILE "uboot.env"
 #else
 #define CONFIG_ENV_IS_IN_SPI_FLASH
 #endif

So here we tell u-boot that it should use mmc device number 1 and read a file called uboot.env.

FAT_ENV_DEVICE_AND_PART can actually take a partition number too, but if we do not set one, u-boot will try to automatically use the very first partition it finds … (so "1" is equivalent to "1:1" in this case … on something like the dragonboard, where the vfat is actually the 8th partition, we use "1:8").

While the above patch would already work with some uboot.env file, it would not yet work with the one we need for Ubuntu Core. Remember the atomic writing requirement from above? This requires us to also set the CONFIG_SYS_REDUNDAND_ENVIRONMENT option (no, I did not typo this, the option is really called "REDUNDAND" for whatever reason).
Setting this option tells u-boot that the file carries a different header and that write operations should be done atomically.

Ubuntu Core defaults to a fixed file size for uboot.env. We expect the file to be exactly 128k, so let's find the "CONFIG_ENV_SIZE" option in the config file and adjust it if it defines a different size:

/* Environment organization */
-#define CONFIG_ENV_SIZE (8 * 1024)
+#define CONFIG_ENV_SIZE (128 * 1024)

 #if defined(CONFIG_SABRELITE)
 #define CONFIG_ENV_IS_IN_MMC
+#undef CONFIG_ENV_IS_IN_MMC
+#define CONFIG_ENV_IS_IN_FAT
+#define FAT_ENV_INTERFACE "mmc"
+#define FAT_ENV_DEVICE_AND_PART "1"
+#define FAT_ENV_FILE "uboot.env"
+#define CONFIG_SYS_REDUNDAND_ENVIRONMENT
 #else
 #define CONFIG_ENV_IS_IN_SPI_FLASH
 #endif

Trying to build the above will actually end up with a build error complaining that fat writing is not enabled, so we will have to add that too …

One other bit Ubuntu Core expects is that we can load a proper initrd.img without having to mangle or modify it in the kernel snap (e.g. by turning it into a uInitrd), so we need to define the CONFIG_SUPPORT_RAW_INITRD option as well, since it is not set by default for this board.

Our final patch now looks like:

/* Environment organization */
-#define CONFIG_ENV_SIZE (8 * 1024)
+#define CONFIG_ENV_SIZE (128 * 1024)

 #if defined(CONFIG_SABRELITE)
 #define CONFIG_ENV_IS_IN_MMC
+#undef CONFIG_ENV_IS_IN_MMC
+#define CONFIG_ENV_IS_IN_FAT
+#define FAT_ENV_INTERFACE "mmc"
+#define FAT_ENV_DEVICE_AND_PART "1"
+#define FAT_ENV_FILE "uboot.env"
+#define CONFIG_SYS_REDUNDAND_ENVIRONMENT
 #else
 #define CONFIG_ENV_IS_IN_SPI_FLASH
 #endif

+#define CONFIG_FAT_WRITE
+#define CONFIG_SUPPORT_RAW_INITRD

With this we are now able to build a u-boot.bin that will handle the Ubuntu Core uboot.env file from the system-boot partition, read and write the environment from there and allow snapd to modify the same file from user space on a booted system when kernel or core snap updates occur.

The actual uboot.env file needs to be created from an input file using the "mkenvimage" tool with the "-r" (redundant) and "-s 131072" (128k size) options. In the branch at [1] you will find the call of this command in the snapcraft.yaml file, in the "install" script snippet. It uses the uboot.env.in text file that stores the default environment we use …
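For reference, the call in that install snippet boils down to the following (mkenvimage is built as part of u-boot and lives in its tools/ directory):

$ tools/mkenvimage -r -s 131072 -o uboot.env uboot.env.in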

The next post in this series will take a closer look at the contents of this uboot.env.in file, what we actually need in there to achieve proper rollback handling and how to obtain the default values for it.

If you have any questions about the process, feel free to ask here in the comments or open a thread on https://forum.snapcraft.io in the device category.

[1] https://github.com/ogra1/sabrelite-gadget

Dock a Snap…

I recently had to help set up an image build environment for UbuntuCore images for someone who only allows docker as infrastructure.

When wanting to build an image from a blessed model assertion for e.g. the pi2, pi3 or dragonboard, you need to use the "snap known" command (see below for the full syntax) to download the canonical signed assertion. The snap command requires snapd to run inside your container. To build images we need to use ubuntu-image, which is also provided as a snap, so we not only want snapd to run for the "snap" command, we also want the container to be able to execute snaps we install. After quite a bit of back and forth and disabling quite a few security features inside the container setup, I came up with https://github.com/ogra1/snapd-docker which is a simple build script for setting up a container that can execute snaps.
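To give a rough idea of what "disabling quite a few security features" means in practice, the container has to run with relaxed confinement along these lines (purely illustrative; the actual, tested invocation is what the build.sh script in the repository sets up):

$ sudo docker run --name=snappy -d \
    --tmpfs /run --tmpfs /run/lock --tmpfs /tmp \
    --cap-add SYS_ADMIN --device=/dev/fuse \
    --security-opt apparmor=unconfined \
    --security-opt seccomp=unconfined \
    -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
    <image that runs systemd and snapd>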

I hope people needing to use docker and wanting to use snaps inside containers find this helpful … pull requests for improvements of the script or documentation will be happily reviewed on github.

Here is the README.md of the tree:

Create and run a docker container that is able to run snap packages

This script allows you to create docker containers that are able to run and
build snap packages.

WARNING NOTE: This will create a container with security options disabled. This is an unsupported setup; if you have multiple snap packages inside the same container they will be able to break out of their confinement and see each other's data and processes. Use this setup to build or test single snap packages, but do not rely on security inside the container.

usage: build.sh [options]

  -c|--containername (default: snappy)
  -i|--imagename (default: snapd)

Examples

Creating a container with defaults (image: snapd, container name: snappy):

$ sudo apt install docker.io
$ ./build.sh

If you want to create subsequent containers using the same image, use the --containername (-c) option with a subsequent run of the ./build.sh script:

$ ./build.sh -c second
$ sudo docker exec second snap list
Name Version Rev Developer Notes
core 16-2.26.4 2092 canonical -
$

Installing and running a snap package:

This will install the htop snap and will show the running processes inside the container after connecting the right snap interfaces.

$ sudo docker exec snappy snap install htop
htop 2.0.2 from 'maxiberta' installed
$ sudo docker exec snappy snap connect htop:process-control
$ sudo docker exec snappy snap connect htop:system-observe
$ sudo docker exec -ti snappy htop

Building snaps using the snapcraft snap package (using the default “snappy” name):

Install some required debs, install the snapcraft snap package to build snap packages, pull some remote branch and build a snap from it using the snapcraft command.

$ sudo docker exec snappy sh -c 'apt -y install git'
$ sudo docker exec snappy snap install snapcraft --edge --classic
$ sudo docker exec snappy sh -c 'git clone https://github.com/ogra1/beaglebone-gadget'
$ sudo docker exec snappy sh -c 'cd beaglebone-gadget; cp cross* snapcraft.yaml; TMPDIR=. snapcraft'
...
./scripts/config_whitelist.txt . 1>&2
Staging uboot
Priming uboot
Snapping 'bbb' |
Snapped bbb_16-0.1_armhf.snap
$

Building an UbuntuCore image for a RaspberryPi3:

Install some debs required to work around a bug in the ubuntu-image classic snap, install ubuntu-image, retrieve the model assertion for a pi3 image using the “snap known” command and build the image using ubuntu-image.

$ sudo docker exec snappy sh -c 'apt -y install libparted dosfstools' # work around bug 1694982
Reading package lists... Done
Building dependency tree
Reading state information... Done
...
Setting up libparted2:amd64 (3.2-17) ...
Setting up dosfstools (4.0-2ubuntu1) ...
Processing triggers for libc-bin (2.24-9ubuntu2) ...
$ sudo docker exec snappy snap install ubuntu-image --classic --edge
ubuntu-image (edge) 1.0+snap3 from 'canonical' installed
$ sudo docker exec snappy sh -c "snap known --remote model series=16 model=pi3 brand-id=canonical >pi3.model"
$ sudo docker exec snappy ubuntu-image pi3.model
Fetching core
Fetching pi2-kernel
Fetching pi3
$ sudo docker exec snappy sh -c 'ls *.img'
pi3.img

Building u-boot Gadget Snap packages from source

When we started doing gadget snap packages for UbuntuCore images, there was no snapcraft. Gadgets were assembled from locally built bootloader binaries by setting up a filesystem structure that reflects the snap content, using pre-created meta/snap.yaml and meta/gadget.yaml files and then calling mksquashfs.

When snapcraft started to support the gadget format we added a very simple snapcraft.yaml that just used the dump plugin to copy the prebuilt binaries into place in the resulting snap.

While we provide uboot.patch files in the gadget source trees, nothing is really built from source at snap build time, and doing your own modifications means you need to reach out to someone who has the necessary knowledge of how the u-boot.img and the SPL were built. This was a long-standing wart in our setup and there had been a desire for a long time to make gadget creation a completely reproducible process based on upstream u-boot sources.

A typical build process would look like this (a concrete command sketch follows the list):

– git clone git://git.denx.de/u-boot.git
– switch to the right release branch
– apply the uboot.patch to the tree
– run make $config_of_your_board
– run make (… and if you cross build, set the required environment up first)
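Put into concrete commands (using the beaglebone black target and the v2017.01 branch that the rest of this post uses, cross building from an amd64 host), this is roughly:

git clone git://git.denx.de/u-boot.git
cd u-boot
git checkout v2017.01
git apply /path/to/uboot.patch
make am335x_boneblack_config
CROSS_COMPILE=arm-linux-gnueabi- make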

After this the resulting binaries used to be copied into the prebuilt/ dir. The snapcraft build was completely disconnected from this process.

Auto-building u-boot from source with snapcraft

Snapcraft is well able to define all of these steps in the snapcraft.yaml nowadays, actually build a useful binary for us and put it in the right place in the final snap. So let's go step by step through creating a working "part:" entry for the snapcraft.yaml that provides the above steps:

parts:
  uboot:
    plugin: make
    source: git://git.denx.de/u-boot.git
    source-branch: v2017.01
    artifacts: [MLO, u-boot.img]

We use the “make” plugin (which nicely provides the “artifacts:” option for us to cherry-pick the binaries from the u-boot build to be put into the snap), point to the upstream u-boot source and make it use the v2017.01 branch.

   prepare: |
     git apply ../../../uboot.patch
     make am335x_boneblack_config

With this “prepare:” scriptlet we tell the plugin to apply our uboot.patch to the checked out branch and to configure it for a beaglebone black before starting the build.

   install: |
     tools/mkenvimage -r -s 131072 -o $SNAPCRAFT_PART_INSTALL/uboot.env ../../../uboot.env.in
     cd $SNAPCRAFT_PART_INSTALL/; ln -s uboot.env uboot.conf

If you have worked with u-boot gadgets before you know how important the uboot.env file that carries our UbuntuCore bootloader setup is. It always needs to be the right size (-s 131072), redundant (-r) to allow atomic writing and we ship the input file as uboot.env.in in our source trees. In the “install:” scriptlet we take this input file and create a proper environment image file from it using the mkenvimage tool our build has just created before. The ubuntu-image and “snap prepare-image” commands will look for an “uboot.conf” file at image creation time, so we create a symlink pointing to our binary env file.

   build-packages:
     - libpython2.7-dev
     - build-essential
     - bc

Dependencies to build u-boot get defined in the “build-packages:” option of the part. Obviously we need a compiler (build-essential), some build scripts still use python2.7 headers (libpython2.7-dev) and when test building there is a complaint about bc missing that is not fatal (but disturbing enough to also install the bc package as a build dependency).

After adding a bit of the general meta data like name, version, summary and description as well as all the snap informational data like type, (target) architecture, confinement type and stability grade, the resulting snapcraft.yaml looks like:

name: bbb
version: 16-0.1
summary: Beagle Bone Black
description: |
 Bootloader files and partitioning data to create a
 bootable Ubuntu Core image for the Beaglebone Black.
type: gadget
architectures:
  - armhf
confinement: strict
grade: stable

parts:
  uboot:
    plugin: make
    source: git://git.denx.de/u-boot.git
    source-branch: v2017.01
    artifacts: [MLO, u-boot.img]
    prepare: |
      git apply ../../../uboot.patch
      make am335x_boneblack_config
    install: |
      tools/mkenvimage -r -s 131072 -o $SNAPCRAFT_PART_INSTALL/uboot.env ../../../uboot.env.in
      cd $SNAPCRAFT_PART_INSTALL/; ln -s uboot.env uboot.conf
    build-packages:
      - libpython2.7-dev
      - build-essential
      - bc

This snapcraft.yaml is enough to build a beaglebone gadget snap natively on an armhf host, so it will work if you run "snapcraft" in the checked out source on a Raspberry Pi install, or if you let launchpad or build.snapcraft.io do the build for you … but … typically, while developing, you want to build on your workstation PC, not on some remote service or on a slow arm board. With some modifications to the snapcraft.yaml we can luckily make that possible very easily; let's make a copy of our snapcraft.yaml (I call it crossbuild-snapcraft.yaml in my trees) and add some changes to that.

Allow cross Building

First of all, we want a cross compiler on the host machine, so we will add the gcc-arm-linux-gnueabi package to the list of build dependencies.

   build-packages:
     - libpython2.7-dev
     - build-essential
     - bc
     - gcc-arm-linux-gnueabi

We also need to override the “make” call to carry info about our cross compiler in the CROSS_COMPILE environment variable. We can use a “build:” scriptlet for this.

   build: |
     CROSS_COMPILE=arm-linux-gnueabi- make

When cross building, the "artifacts:" line sadly does not do what it should anymore (I assume this is a bug); as a quick workaround we can enhance the "install:" script snippet with a simple cp command.

   install: |
     cp MLO u-boot.img $SNAPCRAFT_PART_INSTALL/
     tools/mkenvimage -r -s 131072 -o $SNAPCRAFT_PART_INSTALL/uboot.env ../../../uboot.env.in
     cd $SNAPCRAFT_PART_INSTALL/; ln -s uboot.env uboot.conf

With all these changes in place our crossbuild-snapcraft.yaml now looks like:

name: bbb
version: 16-0.1
summary: Beagle Bone Black
description: |
 Bootloader files and partitioning data to create a
 bootable Ubuntu Core image for the Beaglebone Black.
type: gadget
architectures:
  - armhf
confinement: strict
grade: stable

parts:
  uboot:
    plugin: make
    source: git://git.denx.de/u-boot.git
    source-branch: v2017.01
    artifacts: [MLO, u-boot.img]
    prepare: |
      git apply ../../../uboot.patch
      make am335x_boneblack_config
    build: |
      CROSS_COMPILE=arm-linux-gnueabi- make
    install: |
      cp MLO u-boot.img $SNAPCRAFT_PART_INSTALL/
      tools/mkenvimage -r -s 131072 -o $SNAPCRAFT_PART_INSTALL/uboot.env ../../../uboot.env.in
      cd $SNAPCRAFT_PART_INSTALL/; ln -s uboot.env uboot.conf
    build-packages:
      - libpython2.7-dev
      - build-essential
      - bc
      - gcc-arm-linux-gnueabi

So with the original snapcraft.yaml we can now let our tree auto-build on build.snapcraft.io; when we check out the source locally and want to build on a PC, a simple "cp crossbuild-snapcraft.yaml snapcraft.yaml && snapcraft" will do a local cross build.
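In practice that local workflow looks like this (using the tree from [1]):

$ git clone https://github.com/ogra1/beaglebone-gadget
$ cd beaglebone-gadget
$ cp crossbuild-snapcraft.yaml snapcraft.yaml
$ snapcraft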

Creating the gadget.yaml

Just building the bootloader binaries is indeed not enough to create a bootable image: the binaries need to go in the right place, the bootloader needs to know where the devicetree file can be found, and a working image should also have a proper partition table. For this purpose we will need to create a gadget.yaml file with the right information.

We create a gadget.yaml file in the source tree and tell the system that the devicetree file is called am335x-boneblack and that it gets shipped by the kernel snap.

device-tree: am335x-boneblack
device-tree-origin: kernel

Now we add a "volumes:" entry that tells the system about the bootloader type (grub or u-boot) and defines which type of partition table we want (either "gpt" for a GUID partition table or "mbr" for an msdos type one).

volumes:
  disk:
    bootloader: u-boot
    schema: mbr

(Note that in newer versions of the ubuntu-image tool the --output option to give your image a meaningful name has been deprecated; instead the name of the volume from the gadget snap is used now. To give your image a more meaningful name you might want to change "disk:" above to something else like "beagleboneblack:" to get a beagleboneblack.img file.)
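For example, to end up with a beagleboneblack.img the top of the volumes section would simply become:

volumes:
  beagleboneblack:
    bootloader: u-boot
    schema: mbr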

The last bit we need to do is to give our volume a "structure:", i.e. a partition table, but also info about where to write the raw bootloader bits (MLO and u-boot.img).

Looking at the elinux wiki [3] for how to create a bootable SD card for the beaglebone black we find lines like:

dd if=MLO of=/dev/sdX count=1 seek=1 conv=notrunc bs=128k
dd if=u-boot.img of=/dev/sdX count=2 seek=1 conv=notrunc bs=384k

For writing the bootloader blobs into the right place ubuntu-image will not just use dd, so we need to translate these lines into proper entries for the volume structure. Let's take a closer look. The MLO line tells us dd will use 128k (131072 byte) sized blocks (bs=), write at an offset of one block from the start of the card (seek=1) and reserve one block for the MLO payload in use (count=1). And indeed there is no filesystem in use, it will be written "bare".
This gives us the first entry in the volume structure.

    structure:
      - name: mlo
        type: bare
        size: 131072
        offset: 131072
        content:
          - image: MLO

The u-boot.img dd command uses a block size of 384k (393216 bytes), an offset of one block from the start of the image, reserves two blocks as the possible size for the u-boot.img binary, and also writes the binary raw into place (type: bare).

      - name: u-boot
        type: bare
        size: 786432
        offset: 393216
        content:
          - image: u-boot.img

Currently every UbuntuCore u-boot image expects to find the bootloader configuration, kernel, initrd and devicetree file in a vfat partition (type: 0C) called system-boot. To have enough wiggle room we'll make that partition 128M, which leaves enough space for even gigantic kernel binaries or initrds. The ubuntu-image tool will put our uboot.env file into that partition right from the start.

      - name: system-boot
        type: 0C
        filesystem: vfat
        filesystem-label: system-boot
        size: 128M

The final gadget.yaml file will now look like:

device-tree: am335x-boneblack
device-tree-origin: kernel
volumes:
  disk:
    bootloader: u-boot
    schema: mbr
    structure:
      - name: mlo
        type: bare
        size: 131072
        offset: 131072
        content:
          - image: MLO
      - name: u-boot
        type: bare
        size: 786432
        offset: 393216
        content:
          - image: u-boot.img
      - name: system-boot
        type: 0C
        filesystem: vfat
        filesystem-label: system-boot
        size: 128M

As you can see, building a gadget snap is fairly easy and only requires four files (snapcraft.yaml, gadget.yaml, uboot.patch and uboot.env.in) in a github tree that you can then have auto-built on build.snapcraft.io. In subsequent posts I will explain the patch and uboot.env.in files in more detail. I will also describe the setup of default interfaces a gadget can provide as well as how to set some system defaults from the gadget.yaml file. If you want to take a look at the full source tree used for the above example, go to [1].

Documentation of the gadget snap syntax can be found at [2]. The dd commands used as input for the gadget.yaml file can be found at [3] and documentation on how to build an image out of a gadget snap is at [4]. If you have any questions feel free to ask at [5] (I recommend using the "device" category).

[1] https://github.com/ogra1/beaglebone-gadget
[2] https://forum.snapcraft.io/t/the-gadget-snap/696
[3] http://elinux.org/Beagleboard:U-boot_partitioning_layout_2.0
[4] https://docs.ubuntu.com/core/en/guides/build-device/image-building
[5] https://forum.snapcraft.io/

Use UbuntuCore to create a WiFi AP with nextcloud support on a Pi3 in minutes.

UbuntuCore is the rolling release of Ubuntu.
It is self updating and completely built out of snap packages (including kernel, boot loader and root file system) which provides transactional updates, manual and automatic roll-back and a high level of system security to all parts of the system.

Once installed you have a secure zero maintenance OS that you can easily turn into a powerful appliance by simply adding a few application snaps to it.

The Raspberry Pi3 comes with WLAN and ethernet hardware on board, which makes it a great candidate to turn into a WiFi AccessPoint. But why stop here? With UbuntuCore we can go further and install a WebRTC solution (like spreedme) for making in-house video calls, a UPnP media server to serve our music and video collections, an OpenHAB home automation device … or we can actually turn it into a personal cloud using the nextcloud snap.

The instructions below walk you through a basic install of UbuntuCore, setting up a WLAN AP, adding an external USB disk to hold data for nextcloud and installing the nextcloud snap.

You need a Raspberry Pi3 and an SD card.

Preparation:

Create an account at https://login.ubuntu.com/ and upload your public ssh key (~/.ssh/id_rsa.pub) in the “SSH Keys” section. This is where your UbuntuCore image will pull the ssh credentials from to provide you login access to the system (by default UbuntuCore does not create a local console login, only remote logins using this ssh key will be allowed).

Download the image from:
http://releases.ubuntu.com/ubuntu-core/16/ubuntu-core-16-pi3.img.xz

…or if you are brave and like to live on the edge you can use a daily build of the edge channel (…bugs included 😉) at:
http://people.canonical.com/~ogra/snappy/all-snaps/daily/current/ubuntu-core-16-pi3.img.xz

Write the image to SD:

Put your SD card into your PC’s SD card reader …
Make sure it did not get auto-mounted. In case it did, do not use the file manager to unmount it but unmount it from the command line (in the example below my USB card reader shows the SD as /dev/sdb to the system):

ogra@pc~$ mount | grep /dev/sdb # check if anything is mounted
...
ogra@pc~$ sudo umount /dev/sdb1 # unmount the partition
ogra@pc~$ sudo umount /dev/sdb2
ogra@pc~$ 

Use the following command to write the image to the card:

ogra@pc~$ xzcat /path/to/ubuntu-core-16-pi3.img.xz | sudo dd of=/dev/sdb bs=8M
ogra@pc~$ 

Plug the SD into your pi3, plug in an ethernet cable and either a serial cable or a monitor and keyboard, and power up the board. Eventually you will see a "Please press enter" message on the screen; hitting Enter will start the installer.

Going through the installer …

Configure eth0 as the default interface (the WLAN driver is broken in the current pi3 installer; simply ignore the wlan0 device at this point).

Give your login.ubuntu.com account info so the system can set up your ssh login.

The last screen of the installer will tell you the ssh credentials to use.

Ssh into the board, set a hostname and call sudo reboot (to work around the WLAN breakage):

ogra@pc:~$ ssh ogra@192.168.2.82

...
It's a brave new world here in Snappy Ubuntu Core! This machine
does not use apt-get or deb packages. Please see 'snap --help'
for app installation and transactional updates.

ogra@localhost:~$ sudo hostnamectl set-hostname pi3
ogra@localhost:~$ sudo reboot

Now that we have installed our basic system, we are ready to add some nice application snaps to turn it into a shiny WiFi AP with a personal cloud to use from our phone and desktop systems.

Install and set up your personal WiFi Accesspoint:

ogra@pi3:~$ snap install wifi-ap
ogra@pi3:~$ sudo wifi-ap.setup-wizard
Automatically selected only available wireless network interface wlan0 
Which SSID you want to use for the access point: UbuntuCore 
Do you want to protect your network with a WPA2 password instead of staying open for everyone? (y/n) y 
Please enter the WPA2 passphrase: 1234567890 
Insert the Access Point IP address: 192.168.1.1 
How many host do you want your DHCP pool to hold to? (1-253) 50 
Do you want to enable connection sharing? (y/n) y 
Which network interface you want to use for connection sharing? Available are sit0, eth0: eth0 
Do you want to enable the AP now? (y/n) y 
In order to get the AP correctly enabled you have to restart the backend service:
 $ systemctl restart snap.wifi-ap.backend 
2017/04/29 10:54:56 wifi.address=192.168.1.1 
2017/04/29 10:54:56 wifi.netmask=ffffff00 
2017/04/29 10:54:56 share.disabled=false 
2017/04/29 10:54:56 wifi.ssid=Snappy 
2017/04/29 10:54:56 wifi.security=wpa2 
2017/04/29 10:54:56 wifi.security-passphrase=1234567890 
2017/04/29 10:54:56 disabled=false 
2017/04/29 10:54:56 dhcp.range-start=192.168.1.2 
2017/04/29 10:54:56 dhcp.range-stop=192.168.1.51 
2017/04/29 10:54:56 share.network-interface=eth0 
Configuration applied succesfully 
ogra@pi3:~$

Set up a USB key as a permanently mounted disk:

Plug your USB disk/key into the Pi3 and call the dmesg command immediately afterwards so you can see the name of the device and its partitions (in my case the device name is /dev/sda and there is a vfat partition on the device called /dev/sda1).
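One way to quickly spot the right device name in the kernel log (output will obviously differ on your system):

ogra@pi3:~$ dmesg | grep -i 'sd[a-z]'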

Now create /etc/systemd/system/media-usbdisk.mount with the following content:

[Unit] 
Description=Mount USB Disk

[Mount]
What=/dev/sda1 
Where=/media/usbdisk 
Options=defaults 

[Install] 
WantedBy=multi-user.target

And enable it:

ogra@pi3:~$ sudo systemctl daemon-reload 
ogra@pi3:~$ sudo systemctl enable media-usbdisk.mount 
ogra@pi3:~$ sudo systemctl start media-usbdisk.mount 
ogra@pi3:~$

Install the nextcloud snap:

ogra@pi3:~$ snap install nextcloud 
ogra@pi3:~$

Allow nextcloud to access devices in /media:

ogra@pi3:~$ snap connect nextcloud:removable-media 
ogra@pi3:~$

Wait a bit, nextcloud's auto-setup takes a few minutes (make some tea or coffee) …

Turn on https:

ogra@pi3:~$ sudo nextcloud.enable-https self-signed 
Generating key and self-signed certificate... done 
Restarting apache... done 
ogra@pi3:~$

Now you can connect to your new WiFi AP SSID and point your browser to https://192.168.1.1/ afterwards.

Add an exception for the self-signed security cert (note that nextcloud.enable-https also accepts Let's Encrypt certs in case you own one; just call "sudo nextcloud.enable-https -h" to get all the info) and configure nextcloud via the web UI.

In the nextcloud UI install the "External Storage Support" app from the app section and create a new local storage pointing to the /media/usbdisk dir so your users can store their files on the external disk.

An alternate approach to Ubuntu Phone Web App containers

It has bothered me for a while that Web Apps on the Ubuntu Phone have their back button at the top left of the screen. It bothers me even more that the toolbar constantly collapses and expands during browsing … most of the time it does that for me when I just want to tap on a link, and the page content suddenly moves 50px up or down…

Since Dekko exists on the Ubuntu Phone I have become a heavy user of it for reading my mails, and I really fell in love with the new bottom menu that Dan Chapman integrated so nicely (based on the circle menu work from Nekhelesh Ramananthan).

So this weekend it struck me to simply combine a WebView with this menu work to get a shiny bottom navigation menu. I grabbed the recent google plus app from Szymon Waliczek, the latest source of Dekko and some bits from the webbrowser-app tree, and combined them into a new webapp-container like framework.

You can find an experimental G+ click package (one that surely wins the contest for the ugliest icon) here.

I pushed the code to launchpad together with a README that describes how you can use it in your own WebApp, you can branch it with:

bzr branch lp:~ogra/junk/alternate-webapp-container

Meet node-snapper a helper to easily create .snap packages of your node.js projects

When I created the "Snappy Chatroom" package for WebRTC video chat on snappy I used node.js to provide the necessary server bits. While building the snap I noticed how hard it is to actually put the necessary binaries and node modules in place, especially if you want your snap to be arch independent (javascript is arch independent, so our snap package should be too).

The best way I found was to actually build node.js from source on the respective target arch and run “npm install” for the necessary modules, then tarring up the matching dirs and putting them into my snap package tree.

This is quite some effort!!!

I'm a lazy programmer and surely do not want to do that every time I update the package. Luckily there are already binaries of node for all architectures in the Ubuntu archive and it is not too hard to make use of them to run npm install in a qemu-user-static chroot for all target arches and to automate the creation of the respective tarballs. As a little bonus I thought it would be nice to have it automatically generate the proper snap execution environment in the form of a service startup script (with a properly set LD_LIBRARY_PATH etc.) so you only need to point node to the .js file to execute.

This brought me to write node-snapper, a tool that does exactly the above. It makes it easy to just maintain the actual code I care about in a tree (the site itself and the packaging data for the snap). I leave the care for node itself and for the modules to the archive and the npm upstreams respectively, and just pull in their work as needed.

See https://launchpad.net/node-snapper for the upstream code.

To outline how node-snapper works, I took some notes below on how I roll the chatroom snap as an example.

Using node-snapper:

First we create a work dir for our snap package.

ogra@anubis:~$ mkdir package

To create the nodejs and npm module setup for our snap package we use node-snapper; let's branch it so we can use it later.

ogra@anubis:~$ bzr branch lp:node-snapper

Now we move into the package dir and let node-snapper create the tarballs with the “express”, “webrtc.io”, “webrtc.io-client” and “ws” node modules since chatroom makes use of all of them.

ogra@anubis:~$ cd package
ogra@anubis:~/package$ sudo ../node-snapper/node-snapper express webrtc.io webrtc.io-client ws
...

This created three files.

ogra@anubis:~/package$ ls
amd64-dir.tgz  armhf-dir.tgz  start-service.sh

We unpack the tarballs and remove them.

ogra@anubis:~/package$ tar xf amd64-dir.tgz
ogra@anubis:~/package$ tar xf armhf-dir.tgz
ogra@anubis:~/package$ ls
amd64  amd64-dir.tgz  armhf  armhf-dir.tgz  start-service.sh
ogra@anubis:~/package$ rm *.tgz
ogra@anubis:~/package$ ls
amd64  armhf  start-service.sh

… and branch the chatroom site and packaging code.

ogra@anubis:~/package$ bzr branch lp:~ogra/+junk/chatroom
ogra@anubis:~/package$ mv chatroom/* .
ogra@anubis:~/package$ rm -rf chatroom/
...
ogra@anubis:~/package$ ls site/
add.png      cam_on.png    expand.png      fullscreen.png  mute.png   server.js  unmute.png
cam_off.png  collapse.png  index.html      script.js  style.css  webrtc.io.js
ogra@anubis:~/package$ ls meta/
icon.png  icon.svg  package.yaml  readme.md

The file we want node to execute on startup is the server.js file in the “site” dir in our snap package. We edit start-service.sh so that the MY_EXECUTABLE variable looks like:

MY_EXECUTABLE=site/server.js
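To give an idea of what the generated script does, here is an illustrative sketch of its basic shape (the real start-service.sh that node-snapper generates is what you should actually use; the paths and arch detection below are assumptions):

#!/bin/sh
# illustrative sketch only - node-snapper generates the real start-service.sh
# pick the node build matching the architecture we run on
case "$(uname -m)" in
  x86_64) ARCHDIR=amd64 ;;
  arm*)   ARCHDIR=armhf ;;
esac
# the one line you edit: the .js file node should execute
MY_EXECUTABLE=site/server.js
HERE="$(dirname "$0")"
export LD_LIBRARY_PATH="$HERE/$ARCHDIR/lib:$LD_LIBRARY_PATH"
exec "$HERE/$ARCHDIR/bin/node" "$HERE/$MY_EXECUTABLE"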

This is it, we are ready to roll a .snap package out of this:

ogra@anubis:~/package$ cd ..
ogra@anubis:~$ snappy build package
...
ogra@anubis:~$ ls
chatroom.ogra_0.1-5_multi.snap  package

As you can see, node-snapper makes supplying javascript/nodejs code as a snap package a breeze. You only need to keep your site and package files in a git or bzr tree and node-snapper will always provide you the latest nodejs setup and npm-installed modules as needed at package build time.

Indeed we now want to test our snap package. I have a RPi2 running snappy at 192.168.2.57 with enabled developer mode, so I can easily use snappy-remote to install the package.

ogra@anubis:~$ snappy-remote --url=ssh://192.168.2.57 install chatroom.ogra_0.1-5_multi.snap

The service should start automatically. Opening chromium, pointing it to http://192.168.2.57:6565 and approving access to microphone and camera will now give us a video chat (pointing an android phone to it at the same time enables you to talk to yourself while watching yourself from different angles 😉 … note that the mute button is very helpful when doing this …)

I hope we will see some more node.js projects land in the snappy app store soon. A PPA with node-snapper to make it more easily installable should be ready next week, and if I see there is demand I will also push it to universe in the Ubuntu archive.

I hope you found that little howto helpful 🙂

Porting Ubuntu Snappy to a yet unsupported armhf board

With the appearance of Snappy, Ubuntu steps into the world of embedded systems. Ubuntu Snappy is designed in a way that makes it safe to run in critical environments, from drones over medical equipment to robotics, home automation and machine control. The automatic rollback features will protect you from outages when an upgrade fails, application confinement prevents apps, servers and tools from doing any evil to your system, and the image based design makes upgrades happen in minutes instead of the potentially hours you are used to from package based upgrade systems.

By strictly separating device, rootfs and application packages, Snappy provides a true rolling release: you just upgrade each of the bits separately, independent of each other. Your home automation server software can just stay on the latest upstream version all the time, no matter what version or release the other bits of your system are on. There is no more "I'm running Ubuntu XX.04 or XX.10, where do I find a PPA with a backport of the latest LibreOffice"; "snappy install" and "snappy upgrade" will simply always get you the latest stable upstream version of your software, regardless of the base system.

Thanks to the separation of the device related bits porting to yet unsupported hardware is a breeze too, though since features like automated roll-back on upgrades as well as the security guarding of snap packages depend on capabilities of the bootloader and kernel, your port might operate slightly degraded until you are able to add these bits.

Let’s take a look what it takes to do such a port to a NinjaSphere developer board in detail.

The Snappy boot process and finding out about your Bootloader capabilities

This section requires some basic u-boot knowledge, you should also have read https://developer.ubuntu.com/en/snappy/porting/

By default the whole u-boot logic in a snappy system gets read and executed from a file called snappy-system.txt living in the /boot partition of your install. This file is put in place by the image build software we will use later. So first of all your Bootloader setup needs to be able to load files from disk and read their content into the bootloader environment. Most u-boot installs provide “fatload” and the “env import” commands for this.

It is also very likely that the commands in your snappy-system.txt are too new for your installed u-boot (or are simply not enabled in its build configuration), so we might need to override them with equivalent functions your bootloader actually supports (e.g. fatload vs load or bootm vs bootz).

To get started, we grab a default linux SD card image from the board vendor, write it to an SD card and wire up the board for serial console using an FTDI USB serial cable. We stop the boot process by hitting enter right after the first u-boot messages appear during boot, which should get us to the bootloader prompt where we simply type "help". This will show us all the commands the installed bootloader knows. Next we want to know what the bootloader does by default, so we call the "printenv" command which will show us all pre-set variables (copy and paste them from your terminal application to a txt file so you can look them up later more easily without having to boot your board each time you need to know anything).

Inspecting the "printenv" output of the NinjaSphere u-boot you will notice that it uses a file called uEnv-NS.txt to read its environment from. This is the file we will have to work with to put overrides and hardware specific bits in place. It is also the file from which we will load snappy-system.txt into our environment.

Now lets take a look at the snappy-system.txt file, an example can be found at:
http://people.canonical.com/~ogra/snappy/snappy-system.txt

It contains four variables we cannot change that tell snappy how to boot: snappy_cmdline, snappy_ab, snappy_stamp and snappy_mode. It also puts the logic for booting a snappy system into the snappy_boot variable.
Additionally there are the different load commands for kernel, initrd and devicetree files, and as you can see when comparing these with your u-boot "help" output, they use commands our installed u-boot does not know, so the first bits we will put into our uEnv-NS.txt file are adjusted versions of these commands. In the default instructions for building the NinjaSphere kernel you will notice that it uses a devicetree attached to a uImage and cannot boot raw vmlinuz and initrd.img files using the bootz command. It also does not use an initrd at all by default, but luckily the "printenv" output already contains a load address for a ramdisk, so we will make use of this. Based on these findings our first lines in uEnv-NS.txt look like the following:


loadfiles_ninja=run loadkernel_ninja; run loadinitrd_ninja
loadkernel_ninja=fatload mmc ${mmcdev} ${kloadaddr} ${snappy_ab}/${kernel_file_ninja}
loadinitrd_ninja=fatload mmc ${mmcdev} ${rdaddr} ${snappy_ab}/${initrd_file_ninja}
kernel_file_ninja=uImage
initrd_file_ninja=uInitrd

We will now simply be able to run “loadfiles_ninja” instead of “loadfiles” from our snappy_boot override command.

Snappy uses ext4 filesystems all over the place; looking at "printenv" we see that the NinjaSphere defaults to ext3 by setting the mmcrootfstype variable, so our next line in uEnv-NS.txt switches this to ext4:

mmcrootfstype=ext4

Now let's take a closer look at snappy_boot in snappy-system.txt, the command that contains all the magic.
The section "Bootloader requirements for Snappy (u-boot + system-AB)" on https://developer.ubuntu.com/en/snappy/porting/ describes the if-then logic used there in detail. Comparing the snappy_boot command from snappy-system.txt with the list of available commands shows that we need some adjustments though: the "load" command is not supported, so we need to use "fatload" instead. The original snappy_boot command also uses "fatwrite" to touch snappy-stamp.txt. While you can see from the "help" output that this command is supported by our preinstalled u-boot, there is a bug in older u-boot versions where using fatwrite results in a corrupted /boot partition if that partition is formatted as fat32 (which snappy uses). So our new snappy_boot command will need to have this part of the logic ripped out, which sadly breaks the auto-rollback function but imposes no other limitations on us ("snappy upgrade" will still work fine, as will a manual "snappy rollback").

After making all the changes our “snappy_boot_ninja” will look like the following in the uEnv-NS.txt file:


snappy_boot_ninja=if test "${snappy_mode}" = "try"; then if fatload mmc ${mmcdev} ${snappy_stamp} 0; then if test "${snappy_ab}" = "a"; then setenv snappy_ab "b"; else setenv snappy_ab "a"; fi; fi; fi; run loadfiles_ninja; setenv mmcroot /dev/disk/by-label/system-${snappy_ab} ${snappy_cmdline}; run mmcargs; bootm ${kloadaddr} ${rdaddr}

As the final step we now just need to set “uenvcmd” to import the variables from snappy-system.txt and then make it run our modified snappy_boot_ninja command:


uenvcmd=fatload mmc ${mmcdev} ${loadaddr} snappy-system.txt; env import -t $loadaddr $filesize; run snappy_boot_ninja

This is it! Our bootloader setup is now ready; the final uEnv-NS.txt that we will put into our /boot partition looks like this:


# hardware specific overrides for the ninjasphere developer board
#
loadfiles_ninja=run loadkernel_ninja; run loadinitrd_ninja
loadkernel_ninja=fatload mmc ${mmcdev} ${kloadaddr} ${snappy_ab}/${kernel_file_ninja}
loadinitrd_ninja=fatload mmc ${mmcdev} ${rdaddr} ${snappy_ab}/${initrd_file_ninja}


kernel_file_ninja=uImage
initrd_file_ninja=uInitrd
mmcrootfstype=ext4


snappy_boot_ninja=if test "${snappy_mode}" = "try"; then if fatload mmc ${mmcdev} ${snappy_stamp} 0; then if test "${snappy_ab}" = "a"; then setenv snappy_ab "b"; else setenv snappy_ab "a"; fi; fi; fi; run loadfiles_ninja; setenv mmcroot /dev/disk/by-label/system-${snappy_ab} ${snappy_cmdline}; run mmcargs; bootm ${kloadaddr} ${rdaddr}


uenvcmd=fatload mmc ${mmcdev} ${loadaddr} snappy-system.txt; env import -t $loadaddr $filesize; run snappy_boot_ninja

Building kernel and initrd files to boot Snappy on the NinjaSphere

Snappy makes heavy use of the apparmor security extension of the linux kernel to provide a safe execution environment for the snap packages of applications and services. So while we could now clone the NinjaSphere kernel source and apply the latest apparmor patches from Linus' mainline tree, the kind Paolo Pisati from the Ubuntu kernel team was luckily interested in getting the NinjaSphere running snappy and has done all this work for us already, so instead of cloning the BSP kernel from the NinjaSphere team on github, we can pull the already patched tree from:

http://kernel.ubuntu.com/git?p=ppisati/ubuntu-vivid.git;a=shortlog;h=refs/heads/snappy_ti_ninjasphere

First of all, let us install a cross toolchain. Assuming you use an Ubuntu or Debian install for your work you can just do this by:


sudo apt-get install gcc-arm-linux-gnueabihf

Now we clone the patched tree and move into the cloned directory:


git clone -b snappy_ti_ninjasphere git://kernel.ubuntu.com/ppisati/ubuntu-vivid.git
cd ubuntu-vivid

Build the uImage with the attached devicetree, build the modules and install them, all based on Paolo's adjusted snappy defconfig:


export CROSS_COMPILE=arm-linux-gnueabihf-; export ARCH=arm
make snappy_ninjasphere_defconfig
make -j8 uImage.var-som-am33-ninja
make -j8 modules
mkdir ../ninjasphere-modules
make modules_install INSTALL_MOD_PATH=../ninjasphere-modules
cp arch/arm/boot/uImage.var-som-am33-ninja ../uImage
cd -

So we now have a modules/ directory containing the binary modules and a uImage file to boot our snappy; what we are still missing is an initrd file to make our snappy boot. We can just use the initrd from an existing snappy device tarball, which we can find at cdimage.ubuntu.com.


mkdir tmp
cd tmp
wget http://cdimage.ubuntu.com/ubuntu-core/daily-preinstalled/current/vivid-preinstalled-core-armhf.device.tar.gz
tar xzvf vivid-preinstalled-core-armhf.device.tar.gz

Do you remember, our board requires a uInitrd … the above tarball only ships a raw initrd.img, so we need to convert it. In Ubuntu there is the u-boot-tools package that ships the mkimage tool to convert files for u-boot consumption; let's install this package and create a proper uInitrd:


sudo apt-get install u-boot-tools
mkimage -A arm -T ramdisk -C none -n "Snappy Initrd" -d system/boot/initrd.img-* ../uInitrd
cd ..
rm -rf tmp/

If you do not want to keep the modules from the -generic kernel in your initrd.img, you can easily unpack and re-pack the initrd.img file as described in "Initrd requirements for Snappy" on https://developer.ubuntu.com/en/snappy/porting/ and simply rm -rf lib/modules/* before re-packing, to get a clean and lean initrd.img before converting it to a uInitrd.
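A rough sketch of that unpack/re-pack dance, assuming a gzip compressed cpio initrd and that you are still inside the tmp/ dir extracted above (the porting guide linked above has the authoritative steps):

mkdir initrd-tmp
cd initrd-tmp
zcat ../system/boot/initrd.img-* | cpio -id
rm -rf lib/modules/*
find . | cpio -o -H newc | gzip -9 > ../initrd.img-repacked
cd ..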

Now that we have a bootloader configuration file, uImage, uInitrd and a dir with the matching binary modules, we can create our snappy device tarball.

Creating the Snappy device tarball

We are ready to create the device tarball filesystem structure and roll a proper snappy tarball from it; let's create a build/ dir in which we build this structure:


mkdir build
cd build

As described on https://developer.ubuntu.com/en/snappy/porting/ our uInitrd and uImage files need to go into the assets subdir:


mkdir assets
cp ../uImage assets/
cp ../uInitrd assets/

The modules we built above will have to live underneath the system/ dir inside the tarball:


mkdir system
cp -a ../modules/* system/

Our bootloader configuration goes into the boot/ dir. For proper operation snappy looks for a plain uEnv.txt file; since our actual bootloader config lives in uEnv-NS.txt we just create the other file as an empty doc (it would be great if we could use a symlink here, but remember, the /boot partition that will be created from this uses a vfat filesystem and vfat does not support symlinks, so we just touch an empty file instead).


mkdir boot
cp ../uEnv-NS.txt boot/
touch boot/uEnv.txt

Snappy will also expect a flashtool-assets dir, even though we do not use this for our port:


mkdir flashtool-assets

As the last step we now need to create the hardware.yaml file as described on https://developer.ubuntu.com/en/snappy/porting/:


echo "kernel: assets/uImage" >hardware.yaml
echo "initrd: assets/uInitrd" >>hardware.yaml
echo "dtbs: assets/dtbs" >>hardware.yaml
echo "partition-layout: system-AB" >>hardware.yaml
echo "bootloader: u-boot" >>hardware.yaml
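The resulting hardware.yaml then simply reads:

kernel: assets/uImage
initrd: assets/uInitrd
dtbs: assets/dtbs
partition-layout: system-AB
bootloader: u-boot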

This is it! Now we can tar up the contents of the build/ dir into a tar.xz file that we can use with ubuntu-device-flash to build a bootable snappy image.


tar cJvf ../device_part_ninjasphere.tar.xz *
cd ..

Since I personally like to re-build my tarballs regularly whenever anything changes or improves, I wrote a little tool I call snappy-device-builder which takes over some of the repetitive tasks you have to do when rolling the tarball. You can branch it with bzr from launchpad if you are interested (patches and improvements are indeed very welcome):


bzr branch lp:~ogra/+junk/snappy-device-builder

Building the actual SD card image

Install the latest ubuntu-device-flash from the snappy-dev beta PPA:


sudo add-apt-repository ppa:snappy-dev/beta
sudo apt-get update
sudo apt-get install ubuntu-device-flash

Now we build a 3GB image called mysnappy.img using ubuntu-device-flash and our newly created device_part_ninjasphere.tar.xz with the command below:


sudo ubuntu-device-flash core --size 3 -o mysnappy.img --channel ubuntu-core/devel-proposed --device generic_armhf --device-part device_part_ninjasphere.tar.xz --developer-mode

… and write the created mysnappy.img to an SD card that sits in the SD card reader at /dev/sdc:


sudo dd if=mysnappy.img of=/dev/sdc bs=4k

This is it, your NinjaSphere board should now boot you to a snappy login on the serial port. Log in as "ubuntu" with the password "ubuntu", and if your board is attached to the network I recommend doing a "sudo snappy install webdm"; then you can reach your snappy via http://webdm.local:4200/ in a browser and install/remove/configure snap packages on it.

If you have any problems with this guide, want to make suggestions or have questions, you can reach me as "ogra" via IRC in the #snappy channel on irc.freenode.net or just mail the snappy-devel@lists.ubuntu.com mailing list with your question.