Snap your Shell scripts !!!

A colleague recently talked me into buying one of these nifty HDMI to USB video capture dongles, which allows me to try out my ARM boards attached to my desktop without the need for a separate monitor. The video output just ends up in a window on your desktop … this is quite a feature for just 11€.

The device shows up on Linux as a new /dev/video* device when you plug it in, and it also registers in pulseaudio as an audio source. To display the captured output, a simple call to mplayer is sufficient, like:

mplayer -ao pulse tv:// -tv driver=v4l2:device=/dev/video2:width=1280:height=720

Now, you might have other video devices (e.g. a webcam) attached to your machine and it will not always be /dev/video2 … so we want a bit of auto-detection to determine the device …

Re-plugging the device while running dmesg -w shows the following:

usb 1-11: new high-speed USB device number 13 using xhci_hcd
usb 1-11: New USB device found, idVendor=534d, idProduct=2109, bcdDevice=21.00
usb 1-11: New USB device strings: Mfr=1, Product=2, SerialNumber=0
usb 1-11: Product: USB Video
usb 1-11: Manufacturer: MACROSILICON
uvcvideo: Found UVC 1.00 device USB Video (534d:2109)
hid-generic 0003:534D:2109.000C: hiddev2,hidraw7: USB HID v1.10 Device [MACROSILICON USB Video] on usb-0000:00:14.0-11/input4

So our vendor id for the device is 534d … this should help us find the correct device in Linux' sysfs … let's write a little helper:

VIDEODEV=$(for file in /sys/bus/usb/devices/*/idVendor; do 
  if grep -q 534d $file 2>/dev/null; then 
    ls $(echo $file| sed 's/idVendor/*/')/video4linux; 
  fi;
done | sort |head -1)

Running this snippet in a terminal and echoing $VIDEODEV now prints "video2", which is something we can work with … let's put both of the above together into one script:

#! /bin/sh

VIDEODEV=$(for file in /sys/bus/usb/devices/*/idVendor; do 
  if grep -q 534d $file 2>/dev/null; then 
    ls $(echo $file| sed 's/idVendor/*/')/video4linux; 
  fi;
done | sort |head -1)

mplayer -ao pulse tv:// -tv driver=v4l2:device=/dev/$VIDEODEV:width=1280:height=720

Making the script executable with chmod +x run.sh (I have called it run.sh) and executing it as ./run.sh now pops up a 720p window showing the screen of my attached Raspberry Pi.

Video works now, so let's take a look at how we can get the audio output too.
First we need to find the correct name for the pulseaudio source, again based on the vendor id:

AUDIODEV=$(pactl list sources | egrep 'Name:|device.vendor.id.' | grep -B1 534d | head -1 | sed 's/^.*Name: //')

Running the above and then echoing $AUDIODEV returns alsa_input.usb-MACROSILICON_USB_Video-02.analog-stereo, so this is the pulse source we want to capture and play back to the default audio output. This can easily be done with a pipe between two pacat commands, one for recording (-r) and one for playback (-p), like below:

pacat -r --device="$AUDIODEV" --latency-msec=1 | pacat -p --latency-msec=1 

Now playing a video with audio on my Pi while running the ./run.sh script in one terminal and the pacat pipe in a second one gives me both video and audio output …
To not have to use two terminals we should rather merge the pacat, auto-detection and mplayer commands into one script … since both pacat and mplayer are blocking, we need to fiddle a bit by putting pacat into the background (adding a & to the end of the command) and telling our shell to kill all subprocesses (even backgrounded ones) that were started by our script when we stop it, with the following trap setup:

pid=$$
terminate() {
  pkill -9 -P "$pid"
}
trap terminate 1 2 3 9 15 0

So let's merge everything into one script; it should then look like below:

#! /bin/sh

pid=$$
terminate() {
  pkill -9 -P "$pid"
}
trap terminate 1 2 3 9 15 0

VIDEODEV=$(for file in /sys/bus/usb/devices/*/idVendor; do 
  if grep -q 534d $file 2>/dev/null; then 
    ls $(echo $file| sed 's/idVendor/*/')/video4linux; 
  fi;
done | sort |head -1)

AUDIODEV=$(pactl list sources | egrep 'Name:|device.vendor.id.' | grep -B1 534d | head -1 | sed 's/^.*Name: //')

pacat -r --device="$AUDIODEV" --latency-msec=1 | pacat -p --latency-msec=1 &

mplayer -ao pulse tv:// -tv driver=v4l2:device=/dev/$VIDEODEV:width=1280:height=720

And this is it: executing the script now plays back video and audio from the dongle …

Collecting all the above info to create that shell script took me the better part of a Sunday afternoon, and I figured that everyone who buys such a device might hit the same pain. So why not package it up in a distro-agnostic way so that everyone on Linux can simply use my script and does not have to do all the hackery themselves … snaps are an easy way to do this, and they are really quick to package as well, so let's do it!

First of all we need the snapcraft tool to quickly and easily create a snap, and multipass as the build environment:

sudo snap install snapcraft --classic
sudo snap install multipass

Now let's create a workdir, copy our script in place and let snapcraft init create a boilerplate template file:

$ mkdir hdmi-usb-dongle
$ cd hdmi-usb-dongle
$ cp ../run.sh .
$ snapcraft init
Created snap/snapcraft.yaml.
Go to https://docs.snapcraft.io/the-snapcraft-format/8337 for more information about the snapcraft.yaml format.
$

We'll edit the name in snap/snapcraft.yaml, change core18 to core20 (since we really want to be on the latest base), adjust description and summary, and switch grade: to stable and confinement: to strict … Now that we have a proper skeleton, let's take a look at the parts: section, which tells snapcraft how to build the snap and what should be put into it … we just want to copy our script in place and make sure that mplayer and pacat are available to it … To copy a script we can use the dump plugin that snapcraft provides, and to make sure the two applications our script uses get included there is the stage-packages: property. The parts: definition should look like:

parts:
  copy-runscript: # give it any name you like here
    plugin: dump
    source: . # our run.sh lives in the top level of the source tree
    organize:
      run.sh: usr/bin/run # tell snapcraft to put run.sh into a PATH that snaps do know about
    stage-packages:
      - mplayer
      - pulseaudio-utils

Now we can just call snapcraft while inside the hdmi-usb-dongle dir:

$ snapcraft
Launching a VM.
Launched: snapcraft-my-dongle
[...]
Priming copy-runscript 
+ snapcraftctl prime
This part is missing libraries that cannot be satisfied with any available stage-packages known to snapcraft:
- libGLU.so.1
- libglut.so.3
These dependencies can be satisfied via additional parts or content sharing. Consider validating configured filesets if this dependency was built.
Snapping |                                                                                                                   
Snapped hdmi-usb-dongle_0.1_amd64.snap

OOPS! Seems we are missing some libraries and snapcraft tells us about this (they are apparently needed by mplayer) … let's find out where these libs live and add the correct packages to our stage-packages: entry … we'll install apt-file for this, which allows reverse searches in deb packages:

$ sudo apt install apt-file
[...]
$ sudo apt-file update
$ apt-file search libGLU.so.1                               
libglu1-mesa: /usr/lib/x86_64-linux-gnu/libGLU.so.1
libglu1-mesa: /usr/lib/x86_64-linux-gnu/libGLU.so.1.3.1
$ apt-file search libglut.so.3
freeglut3: /usr/lib/x86_64-linux-gnu/libglut.so.3
freeglut3: /usr/lib/x86_64-linux-gnu/libglut.so.3.9.0

There we go, let's add libglu1-mesa and freeglut3 to our stage-packages:

    stage-packages:
      - mplayer
      - pulseaudio-utils
      - libglu1-mesa
      - freeglut3

If we now just call snapcraft again, it will re-build the snap for us and the warning about the missing libraries will be gone …

So now we do have a snap containing all the bits we need: the run.sh script, mplayer and pacat (from the pulseaudio-utils package). We have also made sure that mplayer finds the libs it needs to run; now we just need to tell snapcraft how we want to execute our script. To do this we need to add an apps: section to our snapcraft.yaml:

apps:
  hdmi-usb-dongle:
    extensions: [gnome-3-38]
    command: usr/bin/run
    plugs:
      - audio-playback   # for the use of "pacat -p" and "pactl list sources"
      - audio-record     # for the use of "pacat -r"
      - camera           # to allow read access to /dev/videoX
      - hardware-observe # to allow scanning sysfs for the VendorId 

To save us from having to fiddle with any desktop integration there are the desktop extensions (you can see which extensions exist with the snapcraft list-extensions command); since we picked base: core20 at the beginning when editing the template file, we will use the gnome-3-38 extension with our snap. Our app should execute our script from the place we put it in with the organize: statement before, so our command: entry points to usr/bin/run, and to allow the different functions of our script we add a bunch of snap plugs that I have explained inline above. Now our snapcraft.yaml looks like below:

name: hdmi-usb-dongle
base: core20
version: '0.1'
summary: A script to use a HDMI to USB dongle
description: |
  This snap allows to easily use a HDMI to USB dongle on a desktop
grade: stable
confinement: strict

apps:
  hdmi-usb-dongle:
    extensions: [gnome-3-38]
    command: usr/bin/run
    plugs:
      - audio-playback
      - audio-record
      - camera
      - hardware-observe

parts:
  copy-runscript:
    plugin: dump
    source: .
    organize:
      run.sh: usr/bin/run
    stage-packages:
      - mplayer
      - pulseaudio-utils
      - libglu1-mesa
      - freeglut3

And this is it … running snapcraft again will now create a snap with an executable script inside. You can now install this snap (because it is a local snap you need the --dangerous option), connect the interface plugs and run the app (note that audio-playback automatically connects on desktops, so you do not explicitly need to connect it) …

$ sudo snap install --dangerous hdmi-usb-dongle_0.1_amd64.snap
$ sudo snap connect hdmi-usb-dongle:audio-record
$ sudo snap connect hdmi-usb-dongle:camera
$ sudo snap connect hdmi-usb-dongle:hardware-observe

When you now run hdmi-usb-dongle you should see something like below (if you have an HDMI cable connected to a running device you will indeed not see the test pattern):

This is great, everything runs fine, but if we run this on a desktop an "Unknown" icon shows up in the panel … it is also annoying having to start our app from a terminal all the time, so let's turn our snapped shell script into a desktop app by simply adding a .desktop file and an icon:

$ pwd
/home/ogra/hdmi-usb-dongle
$ mkdir snap/gui

We’ll create the desktop file inside the snap/gui folder that we just created, with the following content:

[Desktop Entry]
Type=Application
Name=HDMI to USB Dongle
Icon=${SNAP}/meta/gui/icon.png
Exec=hdmi-usb-dongle
Terminal=false
Categories=Utility;

Note that the Exec= line just uses our app name from the apps: section in our snapcraft.yaml.
Now find or create a .png icon; 256×256 is a good size (I tend to use flaticon.com to find something that fits; do not forget to attribute the author if you use downloaded icons, the description: field in your snapcraft.yaml is a good place for this) and copy that icon.png into snap/gui.

Re-build your snap once again, install it with the --dangerous option and you should now find it in your application overview or menu (if you do not use GNOME).
Your snapped shell script is done, congratulations !

You could now just upload it to snapcraft.io to allow others to use it … and here we are back at the reason for this blog post … as I wrote at the beginning, it took me a bit of time to figure out all the commands that went into the script … I'm crazy enough to think this might be useful for others, even though this USB dongle is pretty exotic hardware, so I expected it to perhaps find one or two other users and created https://snapcraft.io/hdmi-usb-dongle

For snap publishers the snapcraft.io page offers the neat feature of actually seeing your number of users. I created this snap about 6 weeks ago; let's see how many people actually installed it in this period:

Oh, it seems I was wrong: there are actually 95 (!) people out there that I could help by packaging my script as a snap!! While the majority of users will indeed be on Ubuntu, given that snaps are a default packaging tool there, even among these 95 users there is a good bunch of non-Ubuntu systems (what the heck is "boss 7"??):

So if you have any useful scripts lying on your disk, even for exotic tasks, why not share them with others? As you can see from my example, even scripts handling exotic hardware quickly find a lot of users around the world and across different distros when you offer them in the snap store … and do not forget, snap packages can be services, GUI apps as well as CLI apps; there are no limits to what you can package as a snap!

Building snap packages on Ubuntu Core

I actually wanted to move on with the node-red series of blog posts, but noticed that there is something more pressing to write down first …

People (on the snapcraft.io forum or IRC) often ask about "how would I build a package for Ubuntu Core" …

If your Ubuntu Core device is e.g. a Raspberry Pi, you won't easily be able to build for its armhf or arm64 target architecture on your PC, which makes development harder.

You can use the snapcraft.io auto-build service, which builds for all supported architectures automatically, or use Fabrica, but if you want to iterate fast over your code, waiting for the auto-builds is quite time consuming. Others I heard of simply have two SD cards in use, one running classic Ubuntu Server and the second one running Ubuntu Core, so you can switch them around to test your code on Core after building on Server … Not really ideal either, and if you do not have two Raspberry Pis this ends in a lot of reboots, eating your development time.

There is help !

There is an easy way to do your development on Ubuntu Core by simply using an LXD container directly on the device … you can make code changes and quickly build inside the container, pull the created snap package out of your build container and install it on the Ubuntu Core host without any reboots or waiting for remote build services. Just take a look at the following recipe of steps:

1) Grab an Ubuntu Core image from the stable channel, run through the setup wizard to set up user and network and ssh into the device:

$ grep Model /proc/cpuinfo 
Model       : Raspberry Pi 3 Model B Plus Rev 1.3
$ grep PRETTY /etc/os-release 
PRETTY_NAME="Ubuntu Core 18"
$

2) Install lxd on the device and set up a container targeting the release that your snapcraft.yaml defines in the base: entry (i.e. base: core -> 16.04, base: core18 -> 18.04, base: core20 -> 20.04):

$ snap install lxd
$ sudo lxd init --auto
$ sudo lxc launch ubuntu:18.04 bionic
Creating bionic
Starting bionic
$

3) Enter the container with the lxc shell command, install the snapcraft snap, clone your tree and edit/build your code:

$ sudo lxc shell bionic
root@bionic:~# snap install snapcraft --classic
...
root@bionic:~# git clone https://github.com/ogra1/htpdate-daemon-snap.git
root@bionic:~# cd htpdate-daemon-snap/
... make any edits you want here ...
root@bionic:~/htpdate-daemon-snap# snapcraft --destructive-mode
...
Snapped 'htpdate-daemon_1.2.2_armhf.snap'
root@bionic:~/htpdate-daemon-snap#

4) Exit the container, pull the snap file you built and install it with the --dangerous flag:

root@bionic:~/htpdate-daemon-snap# exit
logout
$ sudo lxc file pull bionic/root/htpdate-daemon-snap/htpdate-daemon_1.2.2_armhf.snap .
$ snap install --dangerous htpdate-daemon_1.2.2_armhf.snap
htpdate-daemon 1.2.2 installed
$

This is it … for each new iteration you can just enter the container again, make your edits, build, pull and install the snap.

(One additional note: if you want to avoid having to use sudo with all the lxc calls above, add your username to the end of the line reading lxd:x:999: in the /var/lib/extrausers/group file.)
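A minimal sketch of that edit, assuming the group line still ends with the last colon and that your login is ogra (use your own username; if the group already has members, append with a comma instead):

# append the user "ogra" to the lxd group in the extrausers database
sudo sed -i 's/^lxd:x:999:$/lxd:x:999:ogra/' /var/lib/extrausers/group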

 

Rebuilding the node-red snap in a device focused way with additional node-red modules

While there is a node-red snap in the snap store (to be found at https://snapcraft.io/node-red with the source at https://github.com/dceejay/nodered.snap), it does not really allow you to do a lot on e.g. a Raspberry Pi if you want to read sensor data that does not actually come in via the network …

The snap is missing all essential interfaces that could be used for any sensor access (gpio, i2c, Bluetooth, spi or serial-port) and it does not even come with basics like hardware-observe, system-observe or mount-observe to get any system info from the device it runs on.

While the missing interfaces are indeed a problem, there is also the fact that strict snap packages need to be self-contained and hardly have any ability to dynamically compile software … Now, if you know nodejs and npm (or yarn or gyp) you know that additional node modules often need to compile back-end code and libraries when you add them to your nodejs install. Technically it is actually possible to make "npm install" work, but it is hard to predict what a user may want to install, so you would also have to ship all possible build systems (gcc, perl, python, you name it) plus all possible development libraries any of the added modules could ever require …

That way you might technically end up with a full OS inside the snap package. Not really a desirable thing to do (beyond the fact that, even with the high compression snap packages use, this would end up in a snap that is gigabytes big).

So let's take a look at what's there already: in the upstream snapcraft.yaml we can find a line like the following:

npm install --prefix $SNAPCRAFT_PART_INSTALL/lib node-red node-red-node-ping node-red-node-random node-red-node-rbe node-red-node-serialport

This is actually great, so we can just append any modules we need to that line …

Now, as noted above, while there are many node-red modules that will simply work this way, many of the ones interesting for accessing sensor data will need additional libs that we have to include in the snap as well …

In Snapcraft you can easily add a dependency by simply adding a new part to the snapcraft.yaml, so let's do this with an example:

Let's add the node-red-node-pi-gpio module, and let's also break up the above long line into two and use a variable that we can append more modules to:

DEFAULT_MODULES="npm node-red node-red-node-ping node-red-node-random node-red-node-rbe \
                 node-red-node-serialport node-red-node-pi-gpio"
npm install --prefix $SNAPCRAFT_PART_INSTALL/lib $DEFAULT_MODULES

So this should get us the GPIO support for the Pi into node-red …

But! Reading the module documentation shows that this module is actually a front-end to the RPi.GPIO python module, so we need the snap to ship this too … luckily snapcraft has an easy-to-use python plugin that can pip install anything you need. We will add a new part above the node-red part:

parts:
...
  sensor-libs:
    plugin: python
    python-version: python2
    python-packages:
      - RPi.GPIO
  node-red:
    ...
    after: [ sensor-libs ]

Now Snapcraft will pull in the python RPi.GPIO module before it builds node-red (see the "after:" statement I added) and node-red will find the required RPi.GPIO lib when compiling the node-red-node-pi-gpio node module. This will get us all the bits and pieces to have GPIO support inside the node-red application …

Snap packages run confined; this means they can not see anything of the system that we do not allow them to see via an interface connection. Remember that I said above that the upstream snap is lacking some such interfaces? So let's add them to the "apps:" section of our snap (the pi-gpio node module wants to access /dev/gpiomem as well as the gpio device node itself, so we make sure both these plugs are available to the app):

apps:
  node-red:
    command: bin/startNR
    daemon: simple
    restart-condition: on-failure
    plugs:
      ...
      - gpio
      - gpio-memory-control

And this is it, we have added GPIO support to the node-red snap source. If we re-build the snap, install it on an Ubuntu Core device and do a:

snap connect node-red:gpio-memory-control
snap connect node-red:gpio pi:bcm-gpio-4

we will be able to use node-red flows using this GPIO (for other GPIOs you indeed need to connect to the pi:bcm-gpio-* of your choice … the mapping for Ubuntu Core follows https://pinout.xyz/).

I have been collecting a good bunch of possible modules in a forked snap that can be found at https://github.com/ogra1/nodered-snap, a binary of this is at https://snapcraft.io/node-red-rpi, and I plan a series of more node-red-centric posts over the next days telling you how to wire things up, with example flows and some deeper insight into how to make your node-red snap talk to all the Raspberry Pi interfaces, from i2c to Bluetooth.

Stay tuned !

Your own in-house snap factory

When working with customers on snaps and Ubuntu Core, one of the questions I get asked most in calls and in booth discussions at events is about building your code in-house.

Many companies simply do not allow their sources to leave the house … yet many of these customers have also used https://build.snapcraft.io before for their test projects …

Typically I point such customers to use lxd and snapcraft manually, or to just go with multipass … but then the question comes up: "how do I build for my ARM IoT device"?

There is no easy way to cross-build snaps so it usually boils down to some complex setup that has some ARM device in the back end doing the actual building and requires some more or less complex work to get it up and running.

This gave me an idea … and I started to write a bunch of pylxd scripts that you could easily install as a snap to do a build in lxd … pretty much like https://build.snapcraft.io does, just without a UI … This was during the annual company shutdown at the end of last year (Canonical shuts down for two weeks over Christmas each year).

In February I tried to actually create a UI, using web sockets talking to the build script to show the output … but I didn't really get anywhere with this; the result looked like below and my python looked like shell (it always does somehow!!). Even though it didn't look too bad, I wasn't really satisfied with the result; this needed someone who is better with UIs than I am:

[Screenshot from 2020-02-19 17:46]

I decided to ask my colleague James if he would be interested in a little spare-time web UI project and pointed him to my three little python scripts on github … the next evening I got a PR with what felt like 100 new files, lots of Go and React code … this went on for several weeks during the Covid-19 shutdown, and in the end only a few lines of my original python code survived … the app got shinier every day and James did not stop making awesome usability improvements with every commit …

So, while the thing was initially my prototype and I still maintain the code tree and the snap package, let me present to you James Jesudason's Fabrica!

[Screenshot from 2020-06-01 21:55]

Fabrica is your own in-house https://build.snapcraft.io; it allows you to build any branch of any cloneable git tree (yes, it supports GitLab, not only GitHub, and you can select different branches, not just master!)

[Screenshot from 2020-06-02 17:03]

It will run a native build for the host architecture inside an lxd container …

[Screenshots from 2020-06-02 17:04]

You might have noticed the URL in the browser screenshots above: there Fabrica runs on an 8GB Raspberry Pi 4 with Ubuntu Core 18 installed, where I put the /writable partition onto a very fast USB 3.1 SSD. For the fun of it I made Fabrica build its own git tree in parallel on my local Fabrica instance as well as on build.snapcraft.io:

[Screenshots from 2020-06-02 13:39]

There might be some additional stuff that build.snapcraft.io does here, and all this is indeed totally unscientific, but I still find builds that are 10 minutes faster quite significant 🙂

You can easily install Fabrica yourself; it is available for all snap architectures except i386, and all you need is an already installed lxd:

sudo snap install lxd
sudo lxd init # just hit enter for all the questions

Then install the Fabrica snap and connect its interfaces:

sudo snap install fabrica
sudo snap connect fabrica:mount-observe
sudo snap connect fabrica:system-observe
sudo snap connect fabrica:lxd lxd

Point your browser to either http://localhost:8000 or use an IP instead of localhost if you are accessing from another machine.

Git trees you add will be checked for new commits every 5 minutes and snap builds start automatically for each new commit of the branch you defined when adding the tree.

There is no auto-upload feature yet, so you need to download the snap through the web UI and manually upload it to the store. The whole thing uses only http, not https, and there is no user authentication mechanism, so please do not expose Fabrica on the internet; it is currently really just designed for use in a protected LAN.
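For the manual upload step, recent snapcraft versions offer a one-line command after a snapcraft login (older releases call it snapcraft push; the file name and channel below are just examples):

snapcraft upload --release=stable my-snap_0.1_armhf.snap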

Since it is still some work to set up an instance, and since the snap world focuses on pre-made appliances this cycle, here is a first (very experimental) attempt at creating a pre-configured Ubuntu Core 18 appliance for the Pi 4 that you can just dump on your Pi 4 to get started quickly (I will write a few more blog posts about building such an appliance image soon).

The source tree as well as the URL for filing issues are linked on the Fabrica snapcraft.io page. Any feedback is indeed very, very welcome to James and me.

Using the new 12MP Pi cam for video conferencing on your desktop

So you got that awesome new Raspberry Pi High-Res camera that they released last week, but you don't really know what to do with it?

Well, there is salvation: Ubuntu Core can turn it into a proper streaming device for you and help you use it for your Zoom meetings, giving you a really professional look!

What you need:

  • A Raspberry Pi 3 (2 will work too but you need ethernet or a wlan dongle)
  • An SD card
  • The new High Quality Camera
  • A C-Mount or CS-Mount lens
  • A tripod/stand (optional but really helpful)

Setting up the Pi:

Attach the camera to the CSI port of the Raspberry Pi.

Download the Ubuntu Core 18 image and write it to an SD card.

Boot the Pi with keyboard/monitor attached and run through the setup wizard to create a user and configure the WLAN.

Now ssh into the pi like it tells you to on the screen.

On the Pi you do:

$ snap set system pi-config.start-x=1
$ snap install picamera-streaming-demo
$ sudo reboot # to make the start-x setting above take effect

This is it for the Pi side; you can check that the setup worked by pointing your browser to (replace IP_OF_YOUR_PI with the actual IP address):

http://IP_OF_YOUR_PI:8000/stream.mjpg

You should see the picture of the camera in your browser…

The PC side:

To make your stream from your newly created Ubuntu Core network camera available to your video conferencing applications on the desktop we need to teach it to show up on your PC as a v4l /dev/video device.

Luckily the Linux kernel has the awesome v4l2loopback module that helps with this, let's install it on the PC:

$ sudo apt install v4l2loopback-dkms
... [ some compiling ] ...
$ ls /dev/video*
/dev/video0  /dev/video1  /dev/video2
$ sudo modprobe v4l2loopback
$ ls /dev/video*
/dev/video0  /dev/video1  /dev/video2  /dev/video3

Loading the module created a new /dev/video3 device that we will use …
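As a side note, the module also accepts parameters that make the loopback device easier to identify; assuming your v4l2loopback version supports them, something like the following gives you a fixed device number and a friendly label (if you use this, point the ffmpeg command below at /dev/video10 instead of /dev/video3, and the device will show up under the chosen label rather than the default "Dummy Camera Device" mentioned further down):

# reload the module and create /dev/video10 with a friendly label;
# exclusive_caps helps some applications recognize it as a capture device
sudo modprobe -r v4l2loopback
sudo modprobe v4l2loopback video_nr=10 card_label="Pi HQ Cam" exclusive_caps=1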

Now we need to capture the stream from the Pi and route it into this /dev/video3 device; to do this we will use ffmpeg (replace IP_OF_YOUR_PI with the actual IP address):

$ sudo snap install ffmpeg
$ sudo snap connect ffmpeg:camera
$ sudo ffmpeg -re -i http://IP_OF_YOUR_PI:8000/stream.mjpg -vcodec rawvideo -pix_fmt yuv420p -threads 0 -f v4l2 /dev/video3

And that's it …

If you do not yet have any video conferencing software, try out the zoom-client snap …

$ sudo snap install zoom-client

Set up an account and select the “Dummy Camera Device” in the Video Input Settings.

The whole thing will then look like this (hopefully with a less ugly face in it though 🙂 ):

[Screenshot of the Zoom call]

And this is it … the camera quality is classes better than anything you get with a similarly priced USB camera or a built-in laptop one, and you can replace the lens with a wider-angle one etc …

Attaching a CPU fan to a RPi running Ubuntu Core

When I purchased my Raspberry Pi4 I kind of expected it to operate under similar conditions as all the former Pi’s I owned …

So I created an Ubuntu Core image for it (you can find info about this at Support for Raspberry Pi 4 on the snapcraft forum)

Running lxd on this image off a USB 3.1 SSD to build snap packages (it is faster than the Ubuntu Launchpad builders that are used for build.snapcraft.io, so a pretty good device for local development), I quickly noticed the device throttles a lot once it gets a little warmer, so I decided I need a fan.

I ordered this particular set at Amazon, dug up a circuit to be able to run the fan at 5V without putting too much load on the GPIO managing the fan state … luckily my "old parts box" still had a spare BC547 transistor and a 1k resistor that I could use, so I created the following addon board:

[Image: circuit diagram (fancontrol.png)]

[Image: finished addon board, with a picture of how it gets attached (fan-hw.png)]

So now I had an addon board that can cool the CPU, but the fan indeed needs some controlling software. This is easily done via a small shell script that echoes 0 or 1 into /sys/class/gpio/gpio14/value … this script can be found on my github account as fancontrol.sh
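To illustrate the idea, here is a minimal sketch of such a control loop (this is not the actual fancontrol.sh from my github account; GPIO 14 and the 50°C threshold match the snap described here, everything else is kept deliberately simple):

#!/bin/sh
# minimal fan control sketch: switch GPIO 14 on above 50°C, off below
GPIO=14
THRESHOLD=50000   # the kernel reports the temperature in millidegrees

# make the GPIO available in sysfs and configure it as output
[ -d /sys/class/gpio/gpio$GPIO ] || echo $GPIO > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio$GPIO/direction

while true; do
  TEMP=$(cat /sys/class/thermal/thermal_zone0/temp)
  if [ "$TEMP" -gt "$THRESHOLD" ]; then
    echo 1 > /sys/class/gpio/gpio$GPIO/value   # fan on
  else
    echo 0 > /sys/class/gpio/gpio$GPIO/value   # fan off
  fi
  sleep 5
done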

Since we run Ubuntu Core we indeed want to run the whole thing as a snap package, so let's quickly create a snapcraft.yaml file for it:

name: pi-fancontrol
base: core18
version: '0.1'
summary: Control a raspberry pi fan attached to GPIO 14
description: |
  Control a fan attached to a GPIO via NPN transistor
  (defaults to GPIO 14 (pin 8))

grade: stable
confinement: strict
architectures:
  - build-on: armhf
    run-on: armhf
  - build-on: arm64
    run-on: arm64

apps:
  pi-fancontrol:
    command: fancontrol.sh
    daemon: simple
    plugs:
      - gpio
      - hardware-observe

parts:
  fancontrol:
    plugin: nil
    source: .
    override-build: |
      cp -av fancontrol.sh $SNAPCRAFT_PART_INSTALL/

The image is based on core18, so we add a base: core18 entry. It is very specific to the Raspberry Pi, so we also add an architectures: block that makes it only build and run on arm images. Now we need a very simple apps: entry that spawns the script as a daemon, allows it to access the info about temperature via the hardware-observe interface and also allows it to write to the gpio interface we connect the snap to, to echo the 0/1 values into the sysfs node for the GPIO. A simple fancontrol part just copies the script into the snap package, and off we go!

The whole code for the pi-fancontrol snap can be found on github and there is indeed a ready made snap for you to use in the snap store at https://snapcraft.io/pi-fancontrol

You can easily install it with:

snap install pi-fancontrol
snap connect pi-fancontrol:gpio pi4-devel:bcm-gpio-14
snap connect pi-fancontrol:hardware-observe

… and your fan should start to fire up every time your CPU temperature goes above 50 degrees….

Building an Ubuntu Core appliance image

Creating an appliance image for a single purpose can be quite some effort: you need to take care of your application, build a safe root filesystem and make modifications to the system so it behaves the way you or the application expect it to behave. With Ubuntu Core, this effort immediately becomes a lot easier. Ubuntu Core is completely made out of snap packages; the kernel, rootfs and all applications are snap based and benefit from the advantages this package format brings.

Snap packages are transactional. They can automatically roll back on error after an upgrade, including the kernel snap. If a breakage is noticed after upgrading to a new kernel version, the system will detect this and automatically roll back to the former known-working version. The same goes for every other snap in the system, including the root filesystem.

Snap packages are binary read-only filesystem images that support binary delta upgrades generated on the store server. This means an upgrade only downloads the actual binary delta between two snaps, thereby reducing the download cost to a minimum. If your appliance is only attached via 3G, for example, this is a significant cost saver.

Snap packages communicate with each other and with the system hardware through predefined interfaces that the image creator has full control over; your applications only see the hardware and the data from other snap packages if you allow it.

Ubuntu Core images consist of three snap packages by default: the kernel snap driving your hardware, the core snap which contains a minimal root filesystem, and the gadget snap which ships a bootloader, the desired partitioning scheme, rules for interface connections and configuration defaults for application snaps included in the image.

If you pick hardware that is supported by existing Ubuntu Core images (generic x86_64, Raspberry Pi (armv7l) or the Dragonboard 410c (aarch64)) you will not only find the ready-made root filesystem core snap in the store already but also ready-made kernel snaps for your hardware. All it takes to create an Ubuntu Core appliance image is an application snap (and, if you have a more complex setup, a fork of the existing gadget snap for your device with adjustments for interfaces and app defaults).

This setup reduces the development time drastically … pretty much down to a one-time operation for the gadget and a constant focus on your application. There is no need to care for any other image-specific bits; you get them for free from the Snap Store, always up to date and security-maintained for 5 years.

The following walk-through will show the creation of a mid-complex appliance image that does some automatic self-configuration but also allows providing additional setup (e.g. wireless LAN, system user) by plugging in a USB stick.

Picking the required snaps

For a Digital Signage demo where we want to control the attached displays through a web interface, the dashkiosk [1] dashboard management tool looks like a good candidate. It allows managing, grouping and assigning attached remote displays through a simple drag-and-drop web UI. The snap finds its clients via the mDNS protocol, so our appliance image will ship the avahi snap along with the server application.

The attached "displays" are actually simple web browsers. To not waste resources we will turn our server image into a client at the same time and have it ship a web browser snap pointing to the local dashkiosk server.

To have the browser display something on the screen we will also need a graphics stack, so we will ship the mir-kiosk snap as well in our image.

This leaves us with the following list of snaps:

  • dashkiosk (source for this can be found at [2])
  • avahi
  • mir-kiosk
  • “a browser” (this could be chromium-mir-kiosk, but sadly this snap has no support for avahi built in so we will have to create a fork that adds this feature) [3]

We want to not only create a server image but also have auto-connecting clients, so we will create a second image that only contains the browser and display stack.

Now that we have identified which snap packages we want to pre-install in our two images, it is time to read up on how Ubuntu Core images are created. [4] has a walk-through for this; take a special look at the "required-snaps" option of the model assertion we will create.

The client:

{
  "type": "model",
  "authority-id": "",
  "brand-id": "",
  "series": "16",
  "model": "dashkiosk-client",
  "architecture": "armhf",
  "gadget": "pi3",
  "kernel": "pi2-kernel",
  "required-snaps": [ "avahi", "mir-kiosk", "dashkiosk-client-browser" ],
  "timestamp": "2018-09-25T14:45:25+00:00"
}

The server:

{
  "type": "model",
  "authority-id": "",
  "brand-id": "",
  "series": "16",
  "model": "dashkiosk",
  "architecture": "armhf",
  "gadget": "pi3",
  "kernel": "pi2-kernel",
  "required-snaps": [ "avahi", "dashkiosk", "mir-kiosk", "dashkiosk-client-browser" ],
  "timestamp": "2018-09-25T14:45:25+00:00"
}

The "dashkiosk-client-browser" [3] in the above two model assertions is a fork of the original chromium-mir-kiosk with the original "chromium.launcher" replaced by a script that first does an avahi-resolve-host-name call to receive the IP of the dashkiosk server via mDNS; beyond this it is largely unchanged.
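Just to illustrate the idea, the core of such a launcher wrapper could look roughly like the sketch below (this is not the actual launcher from [3]; the mDNS hostname, the receiver URL and the chromium.launcher path are assumptions for illustration):

#!/bin/sh
# resolve the dashkiosk server via mDNS, then hand the URL to the browser
SERVER_IP=$(avahi-resolve-host-name -4 dashkiosk.local | awk '{ print $2 }')
exec "$SNAP/bin/chromium.launcher" "http://$SERVER_IP/receiver"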

When you now use “ubuntu-image” as described in [4] you will already end up with a bootable image that has these snap packages pre-installed. They should automatically start up on boot, but since they are securely confined they will not yet be able to properly communicate with each other or the hardware because not all interfaces are automatically pre-connected.

Interfaces and defaults

While you can have default connections of snap packages defined via a store declaration, many of these interfaces are not suitable, security-wise, to be auto-connected everywhere such a snap is installed. Luckily Ubuntu Core gives you the option to do these additional interface connections with a "connections:" entry in the gadget snap of your image [5].

To use these "connections:" entries in your gadget.yaml, all the snaps you want to use need to have a snap id. This means you first need to upload them to the store; in the details page of the store UI for your snap package you can then see the ID hash that you need to use in the gadget.yaml entry.
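Once a snap is published you can also query its id from the command line; snapd's snap info output carries a snap-id field (the value below is just a placeholder):

$ snap info dashkiosk | grep snap-id
snap-id:      <the hash to use in gadget.yaml>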

A forked pi3 gadget example with a gadget.yaml that includes various default connections for dashkiosk, the browser and a few other bits can be found at [6].

To give our modified browser (snap ID: hBB9l3miabfAKr2Dmnzd5RgzmEbMQVbj) the permission to actually use the added “avahi-resolve-host-name” call, we add a plug/slot combination for the “avahi-observe” interface that the avahi snap (dVK2PZeOLKA7vf1WPCap9F8luxTk9Oll) provides to us:

connections:
[...]
- plug: hBB9l3miabfAKr2Dmnzd5RgzmEbMQVbj:avahi-observe
  slot: dVK2PZeOLKA7vf1WPCap9F8luxTk9Oll:avahi-observe
[...]

Such a set-up can be done for any interface combination of your shipped snap packages in the image.

If you scroll down further in the example gadget.yaml above you also find a "defaults:" field where we define that rsyslog should not log to the SD card (to prevent wearing it out with massive logging) and set the default port for the dashkiosk server to port 80. If your application snap packages use configuration via "snap set/get" [7] you can set all desired defaults through this method.
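Structurally such a defaults: block looks roughly like the following (the snap id is a placeholder and the port key is just the example from above; which keys actually work depends on what the configure hook of the respective snap implements):

defaults:
  <snap-id-of-dashkiosk>:
    port: 80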

Considering additional image features

There are some adjustments we need to make to the image; for this we will create a "config snap" that we ship by default. This snap will run a few simple scripts during boot and set the additional defaults we need by utilizing the available snap interfaces of the system [8] and [9].

Since the clients find their server via an mDNS lookup we need to make sure the correct avahi hostname is set on our server (see the “set-name” script in the above config snap trees). We use the “avahi-control” interface for this and connect it in [6] with a slot/plug combination.

By default Ubuntu Core always uses UTC as its timezone. The dashkiosk default setup always shows a clock on the initial screen on all clients, so one thing we want the image to do is to set a proper timezone on boot, to show the correct local time on start. For this we do a simple web lookup against a public geoip service that returns the timezone for the IP we connect from; this is done via the "set-timezone" script in the above config snap tree. To allow our script to access the timezone configuration, we connect our configuration snap to the "timezone-control" interface of the system (note that connecting to system or "core" slots means you do not need to define the slot side, only the plug side needs to be defined).
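The core of such a set-timezone script can be imagined roughly like this (a sketch only: the geoip endpoint is an arbitrary public one picked for illustration, not necessarily the one the config snap in [8] uses):

#!/bin/sh
# look up the timezone for our public IP and apply it
TZ_NAME=$(curl -s https://ipapi.co/timezone)
if [ -n "$TZ_NAME" ]; then
  timedatectl set-timezone "$TZ_NAME"   # needs the timezone-control interface
fi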

While the above is already enough to have the images properly work on a wired network (Ubuntu Core always defaults to a DHCP configured ethernet without any further configuration), I personally plan to also be able to use this image on the Raspberry Pi3 with a wireless connection.

Network connections are configured using netplan [10] on Ubuntu Core. To configure a WLAN connection we can either manually configure each client board through a serial console (which offers the interactive setup through console-conf/subiquity) or we can write a little tool that monitors the USB port for plugged-in USB sticks and checks if it finds a netplan configuration file that we define. For this we create the "netplan-import" script in [8] and [9]; this tool uses the "system-observe", "mount-observe", "udisks2", "removable-media", "network-setup-control" and "shutdown" interfaces (see again the plugs and slots in our gadget.yaml for this) … and since udisks2 is only provided by the udisks2 snap, we also need to make sure to add this snap to the "required-snaps" of our model assertion.

Now we have a completely self-configuring appliance image. To hook up all our clients to the same WLAN we can walk around with the same USB stick containing the netplan.yaml and configure the systems by simply plugging it in (which causes them to copy that config in place and reboot with the newly configured WLAN connection).

A netplan.yaml for a valid WLAN connection should look like:

network:
  version: 2
  wifis:
    wlan0:
      access-points:
        <YOUR ESSID>: {password: <YOUR WPA PASSWORD>}
      addresses: []
      dhcp4: true

(Replace “<YOUR ESSID>” and “<YOUR WPA PASSWORD>” with the correct values)

Building the final image

Now that we have all the bits together, let us build the final images. To have all the above snaps included, we add the remaining snaps to the "required-snaps" entries of the model assertions:

Server:

"required-snaps": [ "avahi", "udisks2", "dashkiosk-image-config", "dashkiosk", "mir-kiosk", "dashkiosk-client-browser" ],

Client:

"required-snaps": [ "avahi", "udisks2", "dashkiosk-client-image-config", "mir-kiosk", "dashkiosk-client-browser" ],

Make sure to have all interface connections set right like in [6].

Sign your model assertion as described in [4] and do a local build of your gadget snap (do not upload it to the store, it will get stuck in manual review there; if you actually want to use a gadget snap in a commercial project, contact Canonical about obtaining a brand store [11]).

When you build your image, use the --extra-snaps option (see [4]) to point to your locally built gadget package.
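Depending on your ubuntu-image version the call looks roughly like this (both file names are placeholders for your signed model assertion and your locally built gadget snap; see [4] for the authoritative syntax):

$ ubuntu-image --extra-snaps ./pi3-gadget-local.snap dashkiosk.model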

After the initial flashing of the image to your device/SD card, your kernel and rootfs will automatically receive updates without you doing anything. Any fixes you want to make to the application or configuration snap packages can just be uploaded to the store; your image will automatically refresh and pick them up in a timely manner.

Summary

If you followed the above step-by-step guide you will now have a Digital Signage demo appliance image for a Raspberry Pi. Some helpful notes at the end:

Do the first boot of the images with ethernet connected, otherwise you will have to wait extra long for systemd to time out trying to establish an initial network connection (the boot will not fail but will take significantly longer, and since the timezone above is set via a geoip lookup, your timezone will also not be set).

Be patient on the first boot in case you are on the ARM architecture. Snapd installs all the pre-seeded snap packages on first boot; this includes a sha3 checksum verification of the snap files. Snapd is written in Go, and the sha3 function of Go is extremely slow (1-2 min per pre-seeded snap) on armhf. All subsequent boots will be as fast as you would expect (around 30 sec to having the graphics on screen).

If you want to log in via ssh you can create a system-user assertion that you put on the same USB key your netplan.yaml lives on; it will create an ssh user for you so you can inspect the booted system [12]. There is a tool, provided as a snap package, that makes creating a system-user assertion easy for you at [13].

Last but not least, you can find ready-made images that followed the above step by step guide under [14].

If there are any open questions, feel free to ask them in the “device” category on https://forum.snapcraft.io/

[1] https://github.com/vincentbernat/dashkiosk.git
[2] https://github.com/ogra1/dashkiosk-snap
[3] https://github.com/ogra1/dashkiosk-client-browser
[4] https://docs.ubuntu.com/core/en/guides/build-device/image-building
[5] https://forum.snapcraft.io/t/the-gadget-snap/696#gadget.yaml
[6] https://github.com/ogra1/pi-kiosk-gadget/blob/master/gadget.yaml
[7] https://forum.snapcraft.io/t/configuration-in-snaps/510
[8] https://github.com/ogra1/dashkiosk-image-config
[9] https://github.com/ogra1/dashkiosk-client-image-config
[10] https://netplan.io/
[11] https://docs.ubuntu.com/core/en/build-store/create
[12] https://docs.ubuntu.com/core/en/reference/assertions/system-user
[13] https://snapcraft.io/make-system-user
[14] http://people.canonical.com/~ogra/snappy/kiosk/

 

Patching u-boot for use in an Ubuntu Core gadget snap

This is the second post in the series about building u-boot based gadget snaps, following Building u-boot gadget snap packages from source.

If you have read the last post in this series, you have likely noticed that there is a uboot.patch file being applied to the board config before building the u-boot binaries. This post will take a closer look at this patch.

As you might know already, Ubuntu Core will perform a fully automatic roll-back of upgrades of the kernel or the core snap (rootfs), if it detects that a reboot after the upgrade has not fully succeeded. If an upgrade of the kernel or core snap gets applied, snapd sets a flag in the bootloader configuration called “snap_mode=” and additionally sets the “snap_try_core=” and/or “snap_try_kernel=” variables.

To set these flags and variables that the bootloader should be able to read at next boot, snapd will need write access to the bootloader configuration.
Now, u-boot is the most flexible of all bootloaders: the configuration can live in a uEnv.txt file, in a boot.scr or boot.ini script on a filesystem, in raw space on the boot media, on some flash storage dedicated to u-boot, or even a combination of these (and I surely forgot other variations in that list). This setup can vary from board to board and there is no actual standard.

Since it would be a massive amount of work and code to support all possible variations of u-boot configuration management in snapd, the Ubuntu Core team had to decide on one default process and pick a standard here.

Ubuntu Core is designed with completely unattended installations in mind, being the truly rolling Ubuntu, it should be able to upgrade itself at any time over the network and should never corrupt any of its setup or configuration, not even when a power loss occurs in the middle of an update or while the bootloader config is updated. No matter if your device is an embedded industrial controller mounted to the ceiling of a multi level factory hall, a cell tower far out in the woods or some floating sensor device on the ocean, the risk of corrupting any of the bootloader config needs to be as minimal as possible.

Opening a file, pulling it to RAM, changing it, then writing it to a filesystem cache and flushing that in the last step is quite a time-consuming thing. The time window where the system is vulnerable to corruption due to power outage is quite big. Instead we want to atomically toggle a value; preferably directly on disk with no caches at all. This cuts the potential corruption time down to the actual physical write operation, but also rules out most of the file based bits from the above list (uEnv.txt or boot.scr/.ini) and leaves us with the raw options.

That said, we can not really enforce an additional partition for a raw environment; a board might have a certain boot process that requires a very specific setup of partitions shipping binary blobs from the vendor before even getting to the bootloader (see for example the dragonboard-410c: Qualcomm requires 8 partitions with different blobs to initialize the hardware before even getting to u-boot.bin). To not exclude such boards we need to find a more generic setup. The solution here is a compromise between filesystem based and raw … we create an img file with a fixed size (which allows the atomic writing we want) but put it on top of a vfat partition (our system-boot partition that also carries kernel, initrd and dtb) for the biggest flexibility.

To make it easier for snapd and the user space side, we define a fixed size (the same size on all boards) for this img file. We also tell u-boot and the userspace tools to use redundancy for this file which allows the desired atomic writing.

Let's move on with a real-world example, looking at a board I recently created a gadget snap for [1].

I have an old Freescale SabreLite (i.MX6) board lying around here; its native SATA controller and gigabit ethernet make it a wonderful target device for e.g. a NAS or a really fast Ubuntu Core based Nextcloud box.

A little research shows it uses the nitrogen6x configuration from the u-boot source tree which is stored in include/configs/nitrogen6x.h

To find the currently used environment setup for this board we just grep for “CONFIG_ENV_IS_IN” in that file and will find the following block:

#if defined(CONFIG_SABRELITE)
#define CONFIG_ENV_IS_IN_MMC
#else
#define CONFIG_ENV_IS_IN_SPI_FLASH
#endif

So this board defines a raw space on the MMC to be used for the environment if we build for the SabreLite, but we want to use CONFIG_ENV_IS_IN_FAT with the right parameters to make use of an uboot.env file from the first vfat partition on the first SD card.

Let's state this in the config:

 #if defined(CONFIG_SABRELITE)
 #define CONFIG_ENV_IS_IN_MMC
+#undef CONFIG_ENV_IS_IN_MMC
+#define CONFIG_ENV_IS_IN_FAT
 #else
 #define CONFIG_ENV_IS_IN_SPI_FLASH
 #endif

If we just set this we’ll run into build errors though, since the CONFIG_ENV_IS_IN_FAT also wants to know which interface, device and filename it should use:

 #if defined(CONFIG_SABRELITE)
 #define CONFIG_ENV_IS_IN_MMC
+#undef CONFIG_ENV_IS_IN_MMC
+#define CONFIG_ENV_IS_IN_FAT
+#define FAT_ENV_INTERFACE "mmc"
+#define FAT_ENV_DEVICE_AND_PART "1"
+#define FAT_ENV_FILE "uboot.env"
 #else
 #define CONFIG_ENV_IS_IN_SPI_FLASH
 #endif

So here we tell u-boot that it should use mmc device number 1 and read a file called uboot.env.

FAT_ENV_DEVICE_AND_PART can actually take a partition number, but if we do not set it, it will try to automatically use the very first partition found … (so “1” is equivalent to “1:1” in this case … on something like the dragonboard where the vfat is actually the 8th partition we use “1:8”).

While the above patch would already work with some uboot.env file, it would not yet work with the one we need for Ubuntu Core. Remember the atomic writing thing from above? This requires us to set the CONFIG_SYS_REDUNDAND_ENVIRONMENT option too (note I did not typo this, the option is really called "REDUNDAND" for whatever reason).
Setting this option tells u-boot that there is a different header on the file and that write operations should be done atomically.

Ubuntu Core defaults to a fixed file size for uboot.env. We expect the file to be exactly 128k big, so let's find the "CONFIG_ENV_SIZE" option in the config file and adjust it too if it defines a different size:

/* Environment organization */
-#define CONFIG_ENV_SIZE (8 * 1024)
+#define CONFIG_ENV_SIZE (128 * 1024)

 #if defined(CONFIG_SABRELITE)
 #define CONFIG_ENV_IS_IN_MMC
+#undef CONFIG_ENV_IS_IN_MMC
+#define CONFIG_ENV_IS_IN_FAT
+#define FAT_ENV_INTERFACE "mmc"
+#define FAT_ENV_DEVICE_AND_PART "1"
+#define FAT_ENV_FILE "uboot.env"
+#define CONFIG_SYS_REDUNDAND_ENVIRONMENT
 #else
 #define CONFIG_ENV_IS_IN_SPI_FLASH
 #endif

Trying to build the above will actually end up with a build error complaining that fat writing is not enabled, so we will have to add that too …

One other bit that Ubuntu Core expects is that we can load a proper initrd.img without having to mangle or modify it in the kernel snap (by e.g. making it a uInitrd or whatnot), so we need to define the CONFIG_SUPPORT_RAW_INITRD option as well, since it is not set by default for this board.

Our final patch now looks like:

/* Environment organization */
-#define CONFIG_ENV_SIZE (8 * 1024)
+#define CONFIG_ENV_SIZE (128 * 1024)

 #if defined(CONFIG_SABRELITE)
 #define CONFIG_ENV_IS_IN_MMC
+#undef CONFIG_ENV_IS_IN_MMC
+#define CONFIG_ENV_IS_IN_FAT
+#define FAT_ENV_INTERFACE "mmc"
+#define FAT_ENV_DEVICE_AND_PART "1"
+#define FAT_ENV_FILE "uboot.env"
+#define CONFIG_SYS_REDUNDAND_ENVIRONMENT
 #else
 #define CONFIG_ENV_IS_IN_SPI_FLASH
 #endif

+#define CONFIG_FAT_WRITE
+#define CONFIG_SUPPORT_RAW_INITRD

With this we are now able to build a u-boot.bin that will handle the Ubuntu Core uboot.env file from the system-boot partition, read and write the environment from there and allow snapd to modify the same file from user space on a booted system when kernel or core snap updates occur.

The actual uboot.env file needs to be created using the "mkenvimage" tool with the "-r" (redundant) and "-s 131072" (128k size) options, from an input file. In the branch at [1] you will find the call to this command in the snapcraft.yaml file, in the "install" script snippet. It uses the uboot.env.in text file that stores the default environment we use …
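For reference, that call essentially boils down to the following (a sketch; the exact invocation and paths are the ones in the snapcraft.yaml of [1]):

# build a 128k redundant environment image from the default environment text file
mkenvimage -r -s 131072 -o uboot.env uboot.env.in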

The next post in this series will take a closer look at the contents of this uboot.env.in file, what we actually need in there to achieve proper rollback handling and how to obtain the default values for it.

If you have any questions about the process, feel free to ask here in the comments or open a thread on https://forum.snapcraft.io in the device category.

[1] https://github.com/ogra1/sabrelite-gadget

Dock a Snap…

I recently had to help set up an image build environment for UbuntuCore images for someone who only allows docker as infrastructure.

When you want to build an image from a blessed model assertion for e.g. the pi2, pi3 or dragonboard, you need to use the "snap known" command (see below for the full syntax) to download the Canonical-signed assertion. The snap command requires snapd to run inside your container. To build images we need to use ubuntu-image, which is also provided as a snap, so we not only want snapd to run for the "snap" command, but we also want the container to be able to execute snaps we install. After quite a bit of back and forth and disabling quite a few security features inside the container setup, I came up with https://github.com/ogra1/snapd-docker, which is a simple build script for setting up a container that can execute snaps.

I hope people needing to use docker and wanting to use snaps inside containers find this helpful … pull requests for improvements of the script or documentation will be happily reviewed on github.

Here the README.md of the tree:

Create and run a docker container that is able to run snap packages

This script allows you to create docker containers that are able to run and
build snap packages.

WARNING NOTE: This will create a container with security options disabled, this is an unsupported setup, if you have multiple snap packages inside the same container they will be able to break out of the confinement and see each others data and processes. Use this setup to build or test single snap packages but do not rely on security inside the container.

usage: build.sh [options]

  -c|--containername (default: snappy)
  -i|--imagename (default: snapd)

Examples

Creating a container with defaults (image: snapd, container name: snappy):

$ sudo apt install docker.io
$ ./build.sh

If you want to create other containers using the same image, use the --containername option with a subsequent run of the ./build.sh script.

$ ./build.sh -c second
$ sudo docker exec second snap list
Name Version Rev Developer Notes
core 16-2.26.4 2092 canonical -
$

Installing and running a snap package:

This will install the htop snap and will show the running processes inside the container after connecting the right snap interfaces.

$ sudo docker exec snappy snap install htop
htop 2.0.2 from 'maxiberta' installed
$ sudo docker exec snappy snap connect htop:process-control
$ sudo docker exec snappy snap connect htop:system-observe
$ sudo docker exec -ti snappy htop

Building snaps using the snapcraft snap package (using the default “snappy” name):

Install some required debs, install the snapcraft snap package to build snap packages, pull some remote branch and build a snap from it using the snapcraft command.

$ sudo docker exec snappy sh -c 'apt -y install git'
$ sudo docker exec snappy snap install snapcraft --edge --classic
$ sudo docker exec snappy sh -c 'git clone https://github.com/ogra1/beaglebone-gadget'
$ sudo docker exec snappy sh -c 'cd beaglebone-gadget; cp cross* snapcraft.yaml; TMPDIR=. snapcraft'
...
./scripts/config_whitelist.txt . 1>&2
Staging uboot
Priming uboot
Snapping 'bbb' |
Snapped bbb_16-0.1_armhf.snap
$

Building an UbuntuCore image for a RaspberryPi3:

Install some debs required to work around a bug in the ubuntu-image classic snap, install ubuntu-image, retrieve the model assertion for a pi3 image using the “snap known” command and build the image using ubuntu-image.

$ sudo docker exec snappy sh -c 'apt -y install libparted dosfstools' # work around bug 1694982
Reading package lists... Done
Building dependency tree
Reading state information... Done
...
Setting up libparted2:amd64 (3.2-17) ...
Setting up dosfstools (4.0-2ubuntu1) ...
Processing triggers for libc-bin (2.24-9ubuntu2) ...
$ sudo docker exec snappy snap install ubuntu-image --classic --edge
ubuntu-image (edge) 1.0+snap3 from 'canonical' installed
$ sudo docker exec snappy sh -c "snap known --remote model series=16 model=pi3 brand-id=canonical >pi3.model"
$ sudo docker exec snappy ubuntu-image pi3.model
Fetching core
Fetching pi2-kernel
Fetching pi3
$ sudo docker exec snappy sh -c 'ls *.img'
pi3.img

Building u-boot Gadget Snap packages from source

When we started doing gadget snap packages for UbuntuCore images, there was no snapcraft. Gadgets were assembled from locally built bootloader binaries by setting up a filesystem structure that reflects the snap content, using pre-created meta/snap.yaml and meta/gadget.yaml files and then calling mksquashfs.
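To illustrate, that manual assembly went roughly like this (a sketch with illustrative paths and file names, not the exact commands we used back then):

# collect the prebuilt bootloader binaries and the hand-written metadata
mkdir -p gadget/meta
cp prebuilt/MLO prebuilt/u-boot.img gadget/
cp snap.yaml gadget.yaml gadget/meta/
# squash the directory into a snap file
mksquashfs gadget bbb-gadget.snap -noappend -comp xz -no-xattrs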

When snapcraft started to support the gadget format we added a very simple snapcraft.yaml that just used the dump plugin to copy the prebuilt binaries into place in the resulting snap.

While we provide uboot.patch files in the gadget source trees, nothing is really built from source at snap build time, and doing your own modifications means you need to reach out to someone who knows how the u-boot.img and the SPL were built. This was a long-standing wart in our setup, and for a long time there was a desire to make gadget creation a completely reproducible process based on upstream u-boot sources.

A typical build process would look like:

– git clone git://git.denx.de/u-boot.git
– switch to the right release branch
– apply the uboot.patch to the tree
– run make $config_of_your_board
– run make (… and if you cross build, set the required environment up first)

After this the resulting binaries used to be copied into the prebuilt/ dir. The snapcraft build was completely disconnected from this process.
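Put together, the manual flow looked roughly like this (a sketch for the beaglebone black; it assumes uboot.patch sits next to the u-boot checkout and that the Debian/Ubuntu arm-linux-gnueabi cross toolchain is installed):

git clone git://git.denx.de/u-boot.git
cd u-boot
git checkout v2017.01                    # switch to the right release
git apply ../uboot.patch                 # apply our local changes
make am335x_boneblack_config             # configure for the board
CROSS_COMPILE=arm-linux-gnueabi- make    # cross build MLO and u-boot.img
cp MLO u-boot.img ../prebuilt/           # copy the results into the gadget tree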

Auto-building u-boot from source with snapcraft

Nowadays snapcraft is well able to define all of these steps in the snapcraft.yaml, actually build a useful binary for us and put it in the right place in the final snap. So let's go step by step through creating a working entry under “parts:” in the snapcraft.yaml that covers the steps above:

parts:
  uboot:
    plugin: make
    source: git://git.denx.de/u-boot.git
    source-branch: v2017.01
    artifacts: [MLO, u-boot.img]

We use the “make” plugin (which nicely provides the “artifacts:” option for us to cherry-pick the binaries from the u-boot build to be put into the snap), point to the upstream u-boot source and make it use the v2017.01 branch.

    prepare: |
      git apply ../../../uboot.patch
      make am335x_boneblack_config

With this “prepare:” scriptlet we tell the plugin to apply our uboot.patch to the checked out branch and to configure it for a beaglebone black before starting the build.

    install: |
      tools/mkenvimage -r -s 131072 -o $SNAPCRAFT_PART_INSTALL/uboot.env ../../../uboot.env.in
      cd $SNAPCRAFT_PART_INSTALL/; ln -s uboot.env uboot.conf

If you have worked with u-boot gadgets before, you know how important the uboot.env file that carries our UbuntuCore bootloader setup is. It always needs to be the right size (-s 131072) and redundant (-r) to allow atomic writes, and we ship the input file as uboot.env.in in our source trees. In the “install:” scriptlet we take this input file and create a proper environment image from it using the mkenvimage tool our build has just created. The ubuntu-image and “snap prepare-image” commands look for an “uboot.conf” file at image creation time, so we create an “uboot.conf” symlink pointing to our binary env file.

    build-packages:
      - libpython2.7-dev
      - build-essential
      - bc

Dependencies to build u-boot are defined in the “build-packages:” option of the part. Obviously we need a compiler (build-essential), some build scripts still use python2.7 headers (libpython2.7-dev), and test builds complain about a missing bc; the complaint is not fatal, but disturbing enough to also add the bc package as a build dependency.

After adding a bit of general metadata like name, version, summary and description, as well as the snap informational data like type, (target) architecture, confinement type and stability grade, the resulting snapcraft.yaml looks like:

name: bbb
version: 16-0.1
summary: Beagle Bone Black
description: |
 Bootloader files and partitioning data to create a
 bootable Ubuntu Core image for the Beaglebone Black.
type: gadget
architectures:
  - armhf
confinement: strict
grade: stable

parts:
  uboot:
    plugin: make
    source: git://git.denx.de/u-boot.git
    source-branch: v2017.01
    artifacts: [MLO, u-boot.img]
    prepare: |
      git apply ../../../uboot.patch
      make am335x_boneblack_config
    install: |
      tools/mkenvimage -r -s 131072 -o $SNAPCRAFT_PART_INSTALL/uboot.env ../../../uboot.env.in
      cd $SNAPCRAFT_PART_INSTALL/; ln -s uboot.env uboot.conf
    build-packages:
      - libpython2.7-dev
      - build-essential
      - bc

This snapcraft.yaml is enough to build a beaglebone gadget snap natively on an armhf host, so it will work if you run “snapcraft” in the checked-out source on a Raspberry Pi install or if you let launchpad or build.snapcraft.io do the build for you … but typically, while developing, you want to build on your workstation PC, not on some remote service or on a slow ARM board. With some modifications to the snapcraft.yaml we can luckily make that possible very easily, so let's make a copy of our snapcraft.yaml (I call it crossbuild-snapcraft.yaml in my trees) and apply some changes to that copy.
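Concretely, the starting point is just a copy of the existing file (the crossbuild-snapcraft.yaml name is only the convention I use in my trees):

cp snapcraft.yaml crossbuild-snapcraft.yaml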

Allow cross building

First of all, we want a cross compiler on the host machine, so we will add the gcc-arm-linux-gnueabi package to the list of build dependencies.

    build-packages:
      - libpython2.7-dev
      - build-essential
      - bc
      - gcc-arm-linux-gnueabi

We also need to override the “make” call to carry info about our cross compiler in the CROSS_COMPILE environment variable. We can use a “build:” scriptlet for this.

    build: |
      CROSS_COMPILE=arm-linux-gnueabi- make

When cross building, the “artifacts:” line sadly no longer does what it should (I assume this is a bug); as a quick workaround we can enhance the “install:” script snippet with a simple cp command.

    install: |
      cp MLO u-boot.img $SNAPCRAFT_PART_INSTALL/
      tools/mkenvimage -r -s 131072 -o $SNAPCRAFT_PART_INSTALL/uboot.env ../../../uboot.env.in
      cd $SNAPCRAFT_PART_INSTALL/; ln -s uboot.env uboot.conf

With all these changes in place our crossbuild-snapcraft.yaml now looks like:

name: bbb
version: 16-0.1
summary: Beagle Bone Black
description: |
 Bootloader files and partitioning data to create a
 bootable Ubuntu Core image for the Beaglebone Black.
type: gadget
architectures:
  - armhf
confinement: strict
grade: stable

parts:
  uboot:
    plugin: make
    source: git://git.denx.de/u-boot.git
    source-branch: v2017.01
    artifacts: [MLO, u-boot.img]
    prepare: |
      git apply ../../../uboot.patch
      make am335x_boneblack_config
    build: |
      CROSS_COMPILE=arm-linux-gnueabi- make
    install: |
      cp MLO u-boot.img $SNAPCRAFT_PART_INSTALL/
      tools/mkenvimage -r -s 131072 -o $SNAPCRAFT_PART_INSTALL/uboot.env ../../../uboot.env.in
      cd $SNAPCRAFT_PART_INSTALL/; ln -s uboot.env uboot.conf
    build-packages:
      - libpython2.7-dev
      - build-essential
      - bc
      - gcc-arm-linux-gnueabi

So with the original snapcraft.yaml we can now let our tree auto-build on build.snapcraft.io; when we check out the source locally and want to build on a PC, a simple “cp crossbuild-snapcraft.yaml snapcraft.yaml && snapcraft” will do a local cross build.

Creating the gadget.yaml

Just building the bootloader binaries is of course not enough to create a bootable image: the binaries need to go into the right place, the bootloader needs to know where to find the devicetree file, and a working image should also have a proper partition table. For all of this we need to create a gadget.yaml file with the right information.

We create a gadget.yaml file in the source tree and tell the system that the devicetree file is called am335x-boneblack and that it gets shipped by the kernel snap.

device-tree: am335x-boneblack
device-tree-origin: kernel

Now we add a “volumes:” entry that tells the system about the bootloader type (grub or u-boot) and defines which type of partition table we want (either “gpt” for a GUID partition table or “mbr” for an msdos type one).

volumes:
  disk:
    bootloader: u-boot
    schema: mbr

(Note that in newer versions of the ubuntu-image tool the --output option to give your image a meaningful name has been deprecated; instead, the name of the volume from the gadget snap is used now. To give your image a more meaningful name you might want to change “disk:” above to something like “beagleboneblack:” to get a beagleboneblack.img file.)
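In practice that means something like the following (a sketch; “bbb.model” is just a placeholder for whatever model assertion file you feed to ubuntu-image, and the exact option handling depends on your ubuntu-image version):

# older ubuntu-image releases: name the image file explicitly
ubuntu-image --output beagleboneblack.img bbb.model
# newer releases name the image after the gadget volume instead,
# so a volume called "beagleboneblack:" yields beagleboneblack.img
ubuntu-image bbb.model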

The last bit we need to do is give our volume a “structure:”, i.e. a partition table, but also information about where to write the raw bootloader bits (MLO and u-boot.img).

Looking at the elinux wiki [3] for how to create a bootable SD card for the beaglebone black, we find lines like:

dd if=MLO of=/dev/sdX count=1 seek=1 conv=notrunc bs=128k
dd if=u-boot.img of=/dev/sdX count=2 seek=1 conv=notrunc bs=384k

For writing the bootloader blobs into the right place ubuntu-image will not simply use dd, so we need to translate these lines into proper entries for the volume structure. Let's take a closer look. The MLO line tells us that dd uses 128k (131072 byte) blocks (bs=), starts one block after the start of the card (seek=1) and reserves one block for the MLO payload (count=1). There is no filesystem in use, so the blob is written “bare”.
This gives us the first entry in the volume structure.

    structure:
      - name: mlo
        type: bare
        size: 131072
        offset: 131072
        content:
          - image: MLO

The u-boot.img dd command uses a block size of 384k (393216 bytes), an offset of one block from the start of the image, and reserves two blocks as the maximum size for the u-boot.img binary; it also writes the binary raw into place (type: bare).

      - name: u-boot
        type: bare
        size: 786432
        offset: 393216
        content:
          - image: u-boot.img

Currently every UbuntuCore u-boot image expects to find the bootloader configuration, kernel, initrd and devicetree file in a vfat partition (type: 0C) called system-boot. To have enough wiggle room we’ll make that partition 128M, which leaves enough space even for gigantic kernel binaries or initrds. The ubuntu-image tool will put our uboot.env file into that partition from the start.

      - name: system-boot
        type: 0C
        filesystem: vfat
        filesystem-label: system-boot
        size: 128M

The final gadget.yaml file will now look like:

device-tree: am335x-boneblack
device-tree-origin: kernel
volumes:
  disk:
    bootloader: u-boot
    schema: mbr
    structure:
      - name: mlo
        type: bare
        size: 131072
        offset: 131072
        content:
          - image: MLO
      - name: u-boot
        type: bare
        size: 786432
        offset: 393216
        content:
          - image: u-boot.img
      - name: system-boot
        type: 0C
        filesystem: vfat
        filesystem-label: system-boot
        size: 128M

As you can see, building a gadget snap is fairly easy and only requires four files (snapcraft.yaml, gadget.yaml, uboot.patch and uboot.env.in) in a github tree that you can then have auto-built on build.snapcraft.io. In subsequent posts I will explain the patch and uboot.env.in files in more detail. I will also describe the setup of default interfaces a gadget can provide, as well as how to set some system defaults from the gadget.yaml file. If you want to take a look at the full source tree used for the above example, go to [1].
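For orientation, such a gadget tree essentially boils down to these files (a sketch of the layout described in this post, not a verbatim listing of [1]):

beaglebone-gadget/
  snapcraft.yaml             # the build recipe shown above
  crossbuild-snapcraft.yaml  # optional copy for local cross builds
  gadget.yaml                # partition layout and bootloader placement
  uboot.patch                # our changes on top of upstream u-boot
  uboot.env.in               # default bootloader environment (input to mkenvimage)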

Documentation of the gadget snap syntax can be found at [2]. The dd commands used as input for the gadget.yaml file can be found at [3], and documentation on how to build an image out of a gadget snap is at [4]. If you have any questions, feel free to ask at [5] (I recommend using the “device” category).

[1] https://github.com/ogra1/beaglebone-gadget
[2] https://forum.snapcraft.io/t/the-gadget-snap/696
[3] http://elinux.org/Beagleboard:U-boot_partitioning_layout_2.0
[4] https://docs.ubuntu.com/core/en/guides/build-device/image-building
[5] https://forum.snapcraft.io/