Use UbuntuCore to create a WiFi AP with nextcloud support on a Pi3 in minutes.

UbuntuCore is the rolling release of Ubuntu.
It is self-updating and completely built out of snap packages (including kernel, boot loader and root file system), which provides transactional updates, manual and automatic roll-back, and a high level of security to all parts of the system.

Once installed, you have a secure, zero-maintenance OS that you can easily turn into a powerful appliance simply by adding a few application snaps to it.

The Raspberry Pi3 comes with WLAN and ethernet hardware on board, which makes it a great candidate for a WiFi access point. But why stop there? With UbuntuCore we can go further and install a WebRTC solution (like spreedme) for making in-house video calls, a UPnP media server to serve our music and video collections, an OpenHAB home automation device … or we can actually turn it into a personal cloud using the nextcloud snap.

The instructions below walk you through a basic install of UbuntuCore, setting up a WLAN AP, adding an external USB disk to hold data for nextcloud, and installing the nextcloud snap.

You need a Raspberry Pi3 and an SD card.


Create an account at and upload your public ssh key (~/.ssh/) in the “SSH Keys” section. This is where your UbuntuCore image will pull the ssh credentials from to give you login access to the system (by default UbuntuCore does not create a local console login; only remote logins using this ssh key will be allowed).

Download the image from:

…or if you are brave and like to live on the edge you can use a daily build of the edge channel (…bugs included 😉) at:

Write the image to SD:

Put your SD card into your PC’s SD card reader …
Make sure it did not get auto-mounted. If it did, do not use the file manager to unmount it; unmount it on the command line instead (in the example below my USB card reader shows the SD as /dev/sdb to the system):

ogra@pc~$ mount | grep /dev/sdb # check if anything is mounted
ogra@pc~$ sudo umount /dev/sdb1 # unmount the partition
ogra@pc~$ sudo umount /dev/sdb2

Use the following command to write the image to the card:

ogra@pc~$ xzcat /path/to/ubuntu-core-16-pi3.img.xz | sudo dd of=/dev/sdb bs=8M

Plug the SD card into your Pi3, connect an ethernet cable and either a serial cable or a monitor and keyboard, and power up the board. Eventually you will see a “Please press enter” message on the screen; hitting Enter starts the installer.

Going through the installer …

Configure eth0 as the default interface (the WLAN driver is broken in the current Pi3 installer; simply ignore the wlan0 device at this point):

Give your account info so the system can set up your ssh login:

The last screen of the installer will tell you the ssh credentials to use:

SSH into the board, set a hostname and call sudo reboot (to work around the WLAN breakage):

ogra@pc:~$ ssh ogra@

It's a brave new world here in Snappy Ubuntu Core! This machine
does not use apt-get or deb packages. Please see 'snap --help'
for app installation and transactional updates.

ogra@localhost:~$ sudo hostnamectl set-hostname pi3
ogra@localhost:~$ sudo reboot

Now that we have installed our basic system, we are ready to add some nice application snaps to turn it into a shiny WiFi AP with a personal cloud to use from our phone and desktop systems.

Install and set up your personal WiFi access point:

ogra@pi3:~$ snap install wifi-ap
ogra@pi3:~$ sudo wifi-ap.setup-wizard
Automatically selected only available wireless network interface wlan0 
Which SSID you want to use for the access point: UbuntuCore 
Do you want to protect your network with a WPA2 password instead of staying open for everyone? (y/n) y 
Please enter the WPA2 passphrase: 1234567890 
Insert the Access Point IP address: 
How many host do you want your DHCP pool to hold to? (1-253) 50 
Do you want to enable connection sharing? (y/n) y 
Which network interface you want to use for connection sharing? Available are sit0, eth0: eth0 
Do you want to enable the AP now? (y/n) y 
In order to get the AP correctly enabled you have to restart the backend service:
 $ systemctl restart snap.wifi-ap.backend 
2017/04/29 10:54:56 wifi.address= 
2017/04/29 10:54:56 wifi.netmask=ffffff00 
2017/04/29 10:54:56 share.disabled=false 
2017/04/29 10:54:56 wifi.ssid=Snappy 
2017/04/29 10:54:56 
2017/04/29 10:54:56 
2017/04/29 10:54:56 disabled=false 
2017/04/29 10:54:56 dhcp.range-start= 
2017/04/29 10:54:56 dhcp.range-stop= 
2017/04/29 10:54:56 
Configuration applied succesfully 

Set up a USB key as a permanently mounted disk:

Plug your USB disk/key into the Pi3 and call the dmesg command immediately afterwards so you can see the name of the device and its partitions (in my case the device name is /dev/sda and there is a vfat partition on it called /dev/sda1).
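To spot the device quickly, the last few kernel log lines right after plugging it in are usually enough (the device and partition names below are examples from my setup; yours may differ):

```shell
# Show the most recent kernel messages; the new disk shows up at the end
dmesg | tail -n 20

# Filter for the detected disk and partition lines, e.g. "[sda]" or "sda1"
dmesg | grep -E '\[sd[a-z]\]|sd[a-z][0-9]' | tail -n 5
```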

Now create /etc/systemd/system/media-usbdisk.mount with the following content:

[Unit]
Description=Mount USB Disk

[Mount]
# adjust What= to the device/partition name you saw in the dmesg output
What=/dev/sda1
Where=/media/usbdisk
Type=vfat

[Install]
WantedBy=multi-user.target

And enable it:

ogra@pi3:~$ sudo systemctl daemon-reload 
ogra@pi3:~$ sudo systemctl enable media-usbdisk.mount 
ogra@pi3:~$ sudo systemctl start media-usbdisk.mount 

Install the nextcloud snap:

ogra@pi3:~$ snap install nextcloud 

Allow nextcloud to access devices in /media:

ogra@pi3:~$ snap connect nextcloud:removable-media 

Wait a bit, nextcloud’s auto setup takes a few minutes (make some tea or coffee) …

Turn on https:

ogra@pi3:~$ sudo nextcloud.enable-https self-signed 
Generating key and self-signed certificate... done 
Restarting apache... done 

Now you can connect to your new WiFi AP SSID and then point your browser at the Pi.

Add an exception for the self-signed security cert (note that nextcloud.enable-https also accepts Let’s Encrypt certs in case you own one; just call “sudo nextcloud.enable-https -h” for all the info) and configure nextcloud via the web UI.

In the nextcloud UI, install “External Storage Support” from the app section and create a new local storage pointing to the /media/usbdisk dir so your users can store their files on the external disk.

An alternate approach to Ubuntu Phone Web App containers

It has bothered me for a while that Web Apps on the Ubuntu Phone have their back button at the top left of the screen. It bothers me even more that the toolbar constantly collapses and expands during browsing … most of the time it does that for me just when I want to tap on a link, and the page content suddenly moves 50px up or down…

Since Dekko exists on the Ubuntu Phone I have become a heavy user of it for reading my mail, and I really fell in love with the new bottom menu that Dan Chapman integrated so nicely (based on the circle menu work from Nekhelesh Ramananthan).

So this weekend it struck me that I could simply combine a WebView with this menu work to get a shiny bottom navigation menu. I grabbed the recent Google+ app from Szymon Waliczek, the latest source of Dekko and some bits from the webbrowser-app tree, and combined them into a new webapp-container-like framework.

You can find an experimental G+ click package (one that surely wins the contest for the ugliest icon) here.

I pushed the code to launchpad together with a README that describes how you can use it in your own WebApp; you can branch it with:

bzr branch lp:~ogra/junk/alternate-webapp-container

Meet node-snapper, a helper to easily create .snap packages of your node.js projects

When I created the “Snappy Chatroom” package for WebRTC video chat on snappy I used node.js to provide the necessary server bits. During building the snap I noticed how hard it is to actually put the necessary binaries and node modules in place, especially if you want your snap to be arch independent (javascript is arch independent, so indeed our snap package should be too).

The best way I found was to actually build node.js from source on the respective target arch, run “npm install” for the necessary modules, then tar up the matching dirs and put them into my snap package tree.

This is quite some effort!

I’m a lazy programmer and surely do not want to do that every time I update the package. Luckily there are already binaries of node for all architectures in the Ubuntu archive, and it is not too hard to make use of them: run npm install in a qemu-user-static chroot for all target arches and automate the creation of the respective tarballs. As a little bonus I thought it would be nice to have it automatically generate the proper snap execution environment in the form of a service startup script (with properly set LD_LIBRARY_PATH etc.) so you only need to point node at the .js file to execute.
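The idea can be sketched roughly like this for one target arch; the chroot path, suite and module names below are illustrative and the real tool is more thorough, but this is the gist of what gets automated:

```shell
# Rough sketch of the node-snapper idea for armhf (all names/paths illustrative;
# this is NOT the tool's actual interface)

# 1. create an armhf chroot, runnable on x86 via qemu-user-static
sudo qemu-debootstrap --arch=armhf vivid armhf-chroot http://ports.ubuntu.com/ubuntu-ports

# 2. install the archive's node binaries and run npm install inside the chroot
sudo chroot armhf-chroot sh -c 'apt-get install -y nodejs npm && cd /root && npm install express ws'

# 3. tar up the resulting node_modules for inclusion in the snap package tree
sudo tar czf armhf-dir.tgz -C armhf-chroot/root node_modules
```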

This brought me to write node-snapper, a tool that does exactly the above. It makes it easy to maintain just the code I actually care about in a tree (the site itself and the packaging data for the snap). I leave caring for node itself and for the modules to the archive and the npm upstreams respectively, and just pull in their work as needed.

See for the upstream code.

To outline how node-snapper works, below are some notes on how I roll the chatroom snap as an example.

Using node-snapper:

First we create a work dir for our snap package.

ogra@anubis:~$ mkdir package

To create the nodejs and npm module setup for our snap package we use node-snapper; let’s branch it so we can use it later.

ogra@anubis:~$ bzr branch lp:node-snapper

Now we move into the package dir and let node-snapper create the tarballs with the “express”, “”, “” and “ws” node modules since chatroom makes use of all of them.

ogra@anubis:~$ cd package
ogra@anubis:~/package$ sudo ../node-snapper/node-snapper express ws

This created the following files.

ogra@anubis:~/package$ ls
amd64-dir.tgz  armhf-dir.tgz

We unpack the tarballs and remove them.

ogra@anubis:~/package$ tar xf amd64-dir.tgz
ogra@anubis:~/package$ tar xf armhf-dir.tgz
ogra@anubis:~/package$ ls
amd64  amd64-dir.tgz  armhf  armhf-dir.tgz
ogra@anubis:~/package$ rm *.tgz
ogra@anubis:~/package$ ls
amd64  armhf

… and branch the chatroom site and packaging code.

ogra@anubis:~/package$ bzr branch lp:~ogra/+junk/chatroom
ogra@anubis:~/package$ mv chatroom/* .
ogra@anubis:~/package$ rm -rf chatroom/
ogra@anubis:~/package$ ls site/
add.png      cam_on.png    expand.png      fullscreen.png  mute.png   server.js  unmute.png
cam_off.png  collapse.png  index.html      script.js  style.css
ogra@anubis:~/package$ ls meta/
icon.png  icon.svg  package.yaml

The file we want node to execute on startup is the server.js file in the “site” dir of our snap package. We edit the startup script node-snapper generated so that the MY_EXECUTABLE variable looks like:


This is it, we are ready to roll a .snap package out of this:

ogra@anubis:~/package$ cd ..
ogra@anubis:~$ snappy build package
ogra@anubis:~$ ls
chatroom.ogra_0.1-5_multi.snap  package

As you can see, node-snapper makes shipping javascript nodejs code as a snap package a breeze. You only need to keep your site and package files in a git or bzr tree, and node-snapper will always provide you the latest nodejs setup and npm-installed modules as needed at package build time.

Indeed we now want to test our snap package. I have an RPi2 running snappy with developer mode enabled, so I can easily use snappy-remote to install the package.

ogra@anubis:~$ snappy-remote --url=ssh:// install chatroom.ogra_0.1-5_multi.snap

The service should start automatically. Opening chromium, pointing it at the device and approving access to microphone and camera will now give us a video chat (pointing an android phone at it at the same time lets you talk to yourself while watching yourself from different angles 😉 … note that the mute button is very helpful when doing this …)

I hope we will see some more node.js projects land in the snappy app store soon. A PPA with node-snapper to make it easier to install should be ready next week, and if I see there is demand I will also push it to universe in the Ubuntu archive.

I hope you found that little howto helpful 🙂

Porting Ubuntu Snappy to a yet unsupported armhf board

With the appearance of Snappy, Ubuntu steps into the world of embedded systems. Ubuntu Snappy is designed in a way that makes it safe to run in critical environments, from drones over medical equipment to robotics, home automation and machine control. The automatic rollback feature will protect you from outages when an upgrade fails, application confinement prevents apps, servers and tools from doing any evil to your system, and the image-based design makes upgrades happen in minutes instead of the potentially hours you are used to from package-based upgrade systems.

By strictly separating device, rootfs and application packages, Snappy provides a true rolling release: you just upgrade each of the bits separately, independent of each other. Your home automation server software can stay on the latest upstream version all the time, no matter what version or release the other bits of your system are on. There is no more “I’m running Ubuntu XX.04 or XX.10, where do I find a PPA with a backport of the latest LibreOffice”; “snappy install” and “snappy upgrade” will simply always get you the latest stable upstream version of your software, regardless of the base system.

Thanks to this separation of the device-related bits, porting to yet unsupported hardware is a breeze too. However, since features like automated roll-back on upgrades as well as the security guarding of snap packages depend on capabilities of the bootloader and kernel, your port might operate slightly degraded until you are able to add these bits.

Let’s take a look at what it takes to do such a port to a NinjaSphere developer board in detail.

The Snappy boot process and finding out about your Bootloader capabilities

This section requires some basic u-boot knowledge, you should also have read

By default, the whole u-boot logic in a snappy system gets read and executed from a file called snappy-system.txt living in the /boot partition of your install. This file is put in place by the image build software we will use later. So first of all, your bootloader setup needs to be able to load files from disk and read their content into the bootloader environment. Most u-boot installs provide the “fatload” and “env import” commands for this.

It is also very likely that the commands in your snappy-system.txt are too new for your installed u-boot (or are simply not enabled in its build configuration), so we might need to override them with equivalent functions your bootloader actually supports (i.e. fatload vs load or bootm vs bootz).

To get started, we grab a default linux SD card image from the board vendor, write it to an SD card and wire up the board for serial console using an FTDI USB serial cable. We stop the boot process by hitting enter right after the first u-boot messages appear during boot, which should get us to the bootloader prompt, where we simply type “help”. This will show us all the commands the installed bootloader knows. Next we want to know what the bootloader does by default, so we call the “printenv” command, which will show us all pre-set variables (copy and paste them from your terminal application into a txt file so you can look them up later without having to boot your board each time you need to know anything).

Inspecting the “printenv” output of the NinjaSphere u-boot you will notice that it uses a file called uEnv-NS.txt to read its environment from. This is the file we will have to work with to put overrides and hardware-specific bits in place. It is also the file from which we will load snappy-system.txt into our environment.

Now let’s take a look at the snappy-system.txt file; an example can be found at:

It contains four variables we cannot change that tell snappy how to boot: snappy_cmdline, snappy_ab, snappy_stamp and snappy_mode. It also puts the logic for booting a snappy system into the snappy_boot variable.
Additionally there are the different load commands for kernel, initrd and devicetree files, and as you can see when comparing these with your u-boot “help” output, they use commands our installed u-boot does not know, so the first bits we put into our uEnv-NS.txt file are adjusted versions of these commands. In the default instructions for building the NinjaSphere kernel you will notice that it uses a devicetree attached to an uImage and cannot boot raw vmlinuz and initrd.img files using the bootz command. It also does not use an initrd at all by default, but luckily the “printenv” output at least has a load address set for a ramdisk already, so we will make use of this. Based on these findings our first lines in uEnv-NS.txt look like the following:

loadfiles_ninja=run loadkernel_ninja; run loadinitrd_ninja
loadkernel_ninja=fatload mmc ${mmcdev} ${kloadaddr} ${snappy_ab}/${kernel_file_ninja}
loadinitrd_ninja=fatload mmc ${mmcdev} ${rdaddr} ${snappy_ab}/${initrd_file_ninja}

We will now simply be able to run “loadfiles_ninja” instead of “loadfiles” from our snappy_boot override command.

Snappy uses ext4 filesystems all over the place; looking at “printenv” we see the NinjaSphere defaults to ext3 by setting the mmcrootfstype variable, so our next line in uEnv-NS.txt switches this to ext4:

mmcrootfstype=ext4
Now let’s take a closer look at snappy_boot in snappy-system.txt, the command that contains all the magic.
The section “Bootloader requirements for Snappy (u-boot + system-AB)” on describes the if-then logic used there in detail. Comparing the snappy_boot command from snappy-system.txt with the list of available commands shows that we need some adjustments: the “load” command is not supported, so we need to use “fatload” instead. The original snappy_boot command also uses “fatwrite” to touch snappy-stamp.txt. While you can see from the “help” output that this command is supported by our preinstalled u-boot, there is a bug in older u-boot versions where using fatwrite results in a corrupted /boot partition if that partition is formatted as fat32 (which snappy uses). So our new snappy_boot command needs this part of the logic ripped out (which sadly breaks the auto-rollback function but has no other limitations for us: “snappy upgrade” will still work fine, as will a manual “snappy rollback”).

After making all the changes our “snappy_boot_ninja” will look like the following in the uEnv-NS.txt file:

snappy_boot_ninja=if test "${snappy_mode}" = "try"; then if fatload mmc ${mmcdev} ${snappy_stamp} 0; then if test "${snappy_ab}" = "a"; then setenv snappy_ab "b"; else setenv snappy_ab "a"; fi; fi; fi; run loadfiles_ninja; setenv mmcroot /dev/disk/by-label/system-${snappy_ab} ${snappy_cmdline}; run mmcargs; bootm ${kloadaddr} ${rdaddr}
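To make the one-liner easier to follow, here is the A/B selection part of that logic rewritten as plain sh. This is illustrative only; the real thing runs inside u-boot, where the stamp check is a fatload of snappy-stamp.txt:

```shell
#!/bin/sh
# Plain-sh rendering of the A/B flip in snappy_boot_ninja (illustrative only)
snappy_mode=try     # set by snappy when an upgrade wants to be tried
snappy_ab=a         # the system partition the last boot used
stamp_present=true  # stands in for "fatload mmc ... snappy-stamp.txt" succeeding

if [ "$snappy_mode" = "try" ] && [ "$stamp_present" = "true" ]; then
    # the previous "try" boot never completed: fall back to the other partition
    if [ "$snappy_ab" = "a" ]; then snappy_ab=b; else snappy_ab=a; fi
fi

# ... kernel/initrd then get loaded from ${snappy_ab}/ and the rootfs is picked via:
echo "root=/dev/disk/by-label/system-$snappy_ab"
```

With the values above (a “try” boot that left a stamp behind), this prints root=/dev/disk/by-label/system-b, i.e. it boots the other partition.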

As the final step we now just need to set “uenvcmd” to import the variables from snappy-system.txt and then make it run our modified snappy_boot_ninja command:

uenvcmd=fatload mmc ${mmcdev} ${loadaddr} snappy-system.txt; env import -t $loadaddr $filesize; run snappy_boot_ninja

This is it ! Our bootloader setup is now ready, the final uEnv-NS.txt that we will put into our /boot partition now looks like below:

# hardware specific overrides for the ninjasphere developer board
loadfiles_ninja=run loadkernel_ninja; run loadinitrd_ninja
loadkernel_ninja=fatload mmc ${mmcdev} ${kloadaddr} ${snappy_ab}/${kernel_file_ninja}
loadinitrd_ninja=fatload mmc ${mmcdev} ${rdaddr} ${snappy_ab}/${initrd_file_ninja}

mmcrootfstype=ext4
snappy_boot_ninja=if test "${snappy_mode}" = "try"; then if fatload mmc ${mmcdev} ${snappy_stamp} 0; then if test "${snappy_ab}" = "a"; then setenv snappy_ab "b"; else setenv snappy_ab "a"; fi; fi; fi; run loadfiles_ninja; setenv mmcroot /dev/disk/by-label/system-${snappy_ab} ${snappy_cmdline}; run mmcargs; bootm ${kloadaddr} ${rdaddr}

uenvcmd=fatload mmc ${mmcdev} ${loadaddr} snappy-system.txt; env import -t $loadaddr $filesize; run snappy_boot_ninja

Building kernel and initrd files to boot Snappy on the NinjaSphere

Snappy makes heavy use of the apparmor security extension of the linux kernel to provide a safe execution environment for the snap packages of applications and services. So while we could now clone the NinjaSphere kernel source and apply the latest apparmor patches from Linus’ mainline tree, the kind Paolo Pisati from the Ubuntu kernel team was luckily interested in getting the NinjaSphere running snappy and did all this work for us already. So instead of cloning the BSP kernel from the NinjaSphere team on github, we can pull the already patched tree from:;a=shortlog;h=refs/heads/snappy_ti_ninjasphere

First of all, let us install a cross toolchain. Assuming you use an Ubuntu or Debian install for your work, you can do this with:

sudo apt-get install gcc-arm-linux-gnueabihf

Now we clone the patched tree and move into the cloned directory:

git clone -b snappy_ti_ninjasphere git://
cd ubuntu-vivid

Build the uImage with attached devicetree, build the modules and install them, all based on Paolo’s adjusted snappy defconfig:

export CROSS_COMPILE=arm-linux-gnueabihf-; export ARCH=arm
make snappy_ninjasphere_defconfig
make -j8 uImage.var-som-am33-ninja
make -j8 modules
mkdir ../ninjasphere-modules
make modules_install INSTALL_MOD_PATH=../ninjasphere-modules
cp arch/arm/boot/uImage.var-som-am33-ninja ../uImage
cd -

So we now have a modules/ directory containing the binary modules and a uImage file to boot our snappy; what we are still missing is an initrd file. We can just use the initrd from an existing snappy device tarball, which we can find at

mkdir tmp
cd tmp
tar xzvf vivid-preinstalled-core-armhf.device.tar.gz

Do you remember, our board requires an uInitrd … the above tarball only ships a raw initrd.img, so we need to convert it. In Ubuntu, the u-boot-tools package ships the mkimage tool to convert files for u-boot consumption; let’s install this package and create a proper uInitrd:

sudo apt-get install u-boot-tools
mkimage -A arm -T ramdisk -C none -n "Snappy Initrd" -d system/boot/initrd.img-* ../uInitrd
cd ..
rm -rf tmp/

If you do not want to keep the modules from the -generic kernel in your initrd.img, you can easily unpack and re-pack the initrd.img file as described in “Initrd requirements for Snappy” on and simply rm -rf lib/modules/* before re-packing, to get a clean and lean initrd.img before converting to uInitrd.
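Assuming the initrd is the usual gzip-compressed cpio archive, that unpack/strip/re-pack cycle looks roughly like this (paths are illustrative; check the linked requirements page for the authoritative steps):

```shell
# Slim down the initrd by dropping the -generic kernel modules
# (illustrative paths; assumes a gzip-compressed cpio initrd, the usual Ubuntu format)
mkdir initrd-work && cd initrd-work
zcat ../system/boot/initrd.img-* | cpio -id      # unpack the initrd contents
rm -rf lib/modules/*                             # remove the -generic modules
find . | cpio -o -H newc | gzip > ../initrd.img  # re-pack as newc cpio + gzip
cd .. && rm -rf initrd-work
```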

Now that we have a bootloader configuration file, uImage, uInitrd and a dir with the matching binary modules, we can create our snappy device tarball.

Creating the Snappy device tarball

We are ready to create the device tarball filesystem structure and roll a proper snappy tarball from it; let’s create a build/ dir in which we build this structure:

mkdir build
cd build

As described on our uInitrd and uImage files need to go into the assets subdir:

mkdir assets
cp ../uImage assets/
cp ../uInitrd assets/

The modules we built above will have to live underneath the system/ dir inside the tarball:

mkdir system
cp -a ../modules/* system/

Our bootloader configuration goes into the boot/ dir. For proper operation snappy looks for a plain uEnv.txt file; since our actual bootloader config lives in uEnv-NS.txt we just create the other file as an empty doc (it would be great if we could use a symlink here, but remember, the /boot partition that will be created from this uses a vfat filesystem, and vfat does not support symlinks, so we just touch an empty file instead).

mkdir boot
cp ../uEnv-NS.txt boot/
touch boot/uEnv.txt

Snappy will also expect a flashtool-assets dir, even though we do not use it for our port:

mkdir flashtool-assets

As a last step we now need to create the hardware.yaml file as described on

echo "kernel: assets/uImage" >hardware.yaml
echo "initrd: assets/uInitrd" >>hardware.yaml
echo "dtbs: assets/dtbs" >>hardware.yaml
echo "partition-layout: system-AB" >>hardware.yaml
echo "bootloader: u-boot" >>hardware.yaml

This is it! Now we can tar up the contents of the build/ dir into a tar.xz file that we can use with ubuntu-device-flash to build a bootable snappy image.

tar cJvf ../device_part_ninjasphere.tar.xz *
cd ..

Since I personally like to re-build my tarballs regularly whenever anything changes or improves, I wrote a little tool I call snappy-device-builder which takes over some of the repetitive tasks you have to do when rolling the tarball. You can branch it with bzr from launchpad if you are interested (patches and improvements are indeed very welcome):

bzr branch lp:~ogra/+junk/snappy-device-builder

Building the actual SD card image

Install the latest ubuntu-device-flash from the snappy-dev beta PPA:

sudo add-apt-repository ppa:snappy-dev/beta
sudo apt-get update
sudo apt-get install ubuntu-device-flash

Now we build a 3GB image called mysnappy.img using ubuntu-device-flash and our newly created device_part_ninjasphere.tar.xz with the command below:

sudo ubuntu-device-flash core --size 3 -o mysnappy.img --channel ubuntu-core/devel-proposed --device generic_armhf --device-part device_part_ninjasphere.tar.xz --developer-mode

… and write the created mysnappy.img to an SD card that sits in the SD card reader at /dev/sdc:

sudo dd if=mysnappy.img of=/dev/sdc bs=4k

This is it, your NinjaSphere board should now boot you to a snappy login on the serial port. Log in as “ubuntu” with the password “ubuntu”, and if your board is attached to the network I recommend doing a “sudo snappy install webdm”; then you can reach your snappy via http://webdm.local:4200/ in a browser and install/remove/configure snap packages on it.

If you have any problems with this guide, want to make suggestions or have questions, you can reach me as “ogra” via IRC in the #snappy channel on or just mail the mailing list with your question.

The recent Ubuntu Community Council marketing drivel about Mint … or how to put your foot into it with a run-up

Three days ago the Ubuntu Community Council released this document about the downstream distro agreement Canonical provides.

This is one of the worst documents I have ever seen released by the Council; it explains exactly nothing but tells you “Canonical is doing it right, trust them, we do too”. There is not a single piece of the technical background behind all this explained anywhere (while there is a lot to explain, and if people read it they might even grasp why this agreement exists), nor why it has nothing to do with licensing the ownership of anything to anyone, claiming debian copyrights, violating the GPL or any other such rubbish that gets claimed all around the net now. In this specific case I feel the Council has not done their homework … and why … oh why … did you guys even have to put the word License into your headline at all? While the word might have some meaning to lawyers in this context, it must have been clear to you that this word would blow everything out of proportion … the emphasis here needs to be that it is an agreement between both sides to protect Mint users from technical breakage of their systems …

So let’s see if a proper technical explanation can perhaps calm the tempers a bit …

We’ll have to do some time travelling for this … come back with me to the year 2004, when a small developer team of fewer than 20 people grabbed the giant debian archive, made some changes to it, recompiled the whole thing and released it under the name Ubuntu.

Back then people read everywhere that this Ubuntu thing was based on debian, and indeed when they found some third-party packages built for debian (and not Ubuntu) they could sometimes even install them without breaking their system. Some people went further and actually added debian repositories to their sources.list, and were surprised to see their systems go up in flames.

Ubuntu packages aren’t at all the same as debian packages. Ubuntu might have made changes to some UI library where a binary symbol that offers the “draw rectangle” function actually draws an oval on your screen. Binary packages that were recompiled against this lib in the Ubuntu archive know about this change; binary packages from the debian archive that were compiled against the debian version of the library do not. Once they try to use the “draw rectangle” function, something unpredictable happens and the app falls over.

Today it is a known fact to most that you should not simply add a debian repo to your package sources; the info has spread across the net after 10 years, and the generally established support answer if someone asks about adding plain debian binaries is “yes, you can, and if you are lucky it even works, but if it breaks you get to keep both pieces”. The fact of binary incompatibility caused lots of discussions and some anger on the debian side back then. “Why can’t you just do your work in the debian archive?” was a question I remember hearing often from DDs. Ubuntu wanted to do stuff faster than debian and move ahead with some pieces. Debian is a rock solid tanker; it is gigantically big like a tanker, but it is also as slow as one. Imagine that without Ubuntu taking this step, things like developing upstart would not have been possible; imagine also that without Ubuntu moving forward with upstart and changing the init system there would most likely be no systemd today. Sometimes you need to move out of boundaries to get the whole thing to the next level.

Now let’s fast forward to the present. Ubuntu has grown over ten years and there are many “in-archive” and also many “out-of-archive” downstream distros. It is facing something similar to what debian had to face back then: downstream distros start to make changes out of the archive and provide their own binary packages for some bits. There is no problem at all with distros like Kubuntu, Xubuntu or Lubuntu here; their changes happen inside the Ubuntu archive, so packages in the archive will be linked against i.e. Kubuntu library changes. When it comes to the “out-of-archive” downstream distros there can indeed be the exact same problem that Ubuntu faced with debian back in 2004. These “out-of-archive” distros want to innovate the code, user experience or system design in a way that is either not appropriate for them to contribute back, or not appropriate for Ubuntu to take said contribution (because it breaks some existing functionality in Ubuntu or whatever). They feel they need to do their changes outside of the archive, like Ubuntu felt back then with debian. Sadly these downstreams often enough do not have the resources to actually rebuild the whole archive, so they start providing a mishmash of their re-done binary packages with the binaries from the Ubuntu archive, which will leave you as a user in the same situation the early Ubuntu users were in.

For Mint you can find the list of changed packages on this page … you will see that for example libgtk is in the list; in case Mint decides to switch the “draw rectangle” function to actually draw triangles without bumping the ABI, all Gtk apps that you use directly from the Ubuntu archive will possibly fall flat on their face. Users will be upset and blame either the bad Mint or the bad Ubuntu quality for their breakage. Now imagine what would happen if Mint decided to make any innovation to libc (like Valve’s SteamOS does very heavily with regard to debian’s libc), the library everything is linked against either directly or indirectly. Most likely the majority of original Ubuntu packages would just break, and users would be limited to only the packages listed in that linked document.

Let’s do a short detour towards trademarks now … Canonical owns the Ubuntu trademark, and this means two things … one is that Canonical can make money from it (yay, that luckily pays my salary so I can do paid work on my favorite OS all day!!!) … but that part is not involved in this agreement at all; nobody is asking Mint to pay anything. The agreement also does not mean that Canonical claims any ownership of any code from debian, violates the GPL or steals credit for any of the code in the Ubuntu archive.

Remember, I said owning the trademark means two things … the second thing is that Ubuntu means a certain quality (whether that is good or bad is up to the beholder here, feel free to make your own judgement). If I install Ubuntu I can expect a certain level of quality and that the basic desktop largely works (yes, yes, I know there are enough bugs to make a lot of jokes about that statement). It also means that distros claiming to be “based on Ubuntu” inherit that archive quality of the binary packages. You can rely on the fact that the Kubuntu installer is using the same back ends and libs the Ubuntu installer uses, for example. Bugs on the low level are shared and fixed for both. Now let's look back at the list of Mint packages and we will see that they provide their own “Ubiquity” package. I don't know what changes are in there and I don't know whether it makes the Mint installer completely incompatible with what Ubuntu, Xubuntu, Edubuntu or Lubuntu use, but it will likely introduce different bugs and features into the binary resulting from that source. So this second part of owning the trademark is about protecting the brand by guaranteeing a certain quality under this brand (which in turn indeed helps the first part).

While the agreement with Mint kind of targets the second part here, it protects the reputation of Mint more than it protects the Ubuntu reputation. It makes sure that Mint users will not have false expectations and that their systems will not suffer from technical breakage caused by Mint claiming it is 100% Ubuntu binary compatible … And while this whole agreement might technically be treated as a license in front of a court (where it most likely will never go), IMHO the bigger news here is that there is an agreement between two distros to protect Mint users from the same issues Ubuntu users faced back in 2004 with regards to debian. Whether this makes Canonical evil is up to you to judge; I personally think it does not, and that it is a good thing for both sides in the end.

Please note that all the above is my personal opinion and should in no way be quoted as a Canonical statement. I write all this solely as an Ubuntu developer, independently of my employer (I would have written the same if I worked at Google and did Ubuntu development in my spare time). In case you want to re-share it on your news site, please get this distinction straight!

… lots of Canonical in my mouth …

Getting online this morning was an interesting experience: seemingly some news sites picked up a two-week-old post of mine to a mailing list thread and turned it into something that generates revenue for them …

… interestingly, even though the original post was linked in all of these articles, people seem to be more interested in the reporters' interpretation than in reading the actual thread, putting potential words from Canonical into my mouth that I never said.

I must say I find that pretty offensive to me as an individual … yes, I do work for Canonical (still happily, for nearly 9 years now, and I love what I do and plan to go on doing so …) but please allow me to have my own mind and opinions. Not everything an Ubuntu developer says is coordinated by Canonical, even if this statement might trash your conspiracy theories … (oh, and not every Ubuntu developer works for Canonical … unlike some people might want you to think, the Ubuntu dev community is healthy and happily chugging along, with the new Ubuntu Touch community vibrantly growing)

What I wrote was my own personal opinion (that was actually the reason to use the word “personally” in my sentence about home banking, but I am not a native English speaker, so I might have misunderstood its meaning all these years).

What I also did was to point to code evidence that shows that Linux Mint suppresses security updates for certain software in its default setup … while it might be true that this is configurable and that updates are only disabled for certain packages out of the box, it is still an evident fact and can be seen in the code; there is nothing to argue or discuss about (and as I have now learned, it seems to be part of the Mint philosophy, since security updates seem to have caused them instabilities in the past).

Indeed I couldn't keep my feet still and made the mistake of actually reading the comments on the different articles …

“He fears losing his job and needs to stir up stuff” … dude … after such a long time in an open-source company, and getting headhunter offers regularly, you don't have to worry about your job … what I'm actually worried about is the undeserved badmouthing of Ubuntu based on FUD. Ubuntu is more than Canonical or its decisions; please don't discredit the work of the many, many contributors out there … if you want to attack Canonical, do it, but pretty please take into account that Ubuntu is more than Canonical … Oh, and the stirring-up part … I can tell you it isn't any pleasure to be in focus like that for a side statement you made weeks ago …

“He wants to badmouth Linux Mint because they steal users from Ubuntu” … I seriously don't care whether users use Ubuntu, Xubuntu, Mint or elementaryOS; in fact it makes me proud to know they are based on work I participated in (note that I maintained one of the first derivative distros of Ubuntu, Edubuntu, for about two years, nearly on my own). Derivatives (and the work they feed back into the Ubuntu archive) are a big part of the Ubuntu ecosystem, so why would any sane developer badmouth them?

A big thanks to OMGUbuntu for fixing their headline … which initially suggested that I “advised” users not to use Mint because it is vulnerable. I never did; I just stated that I personally would not use it for online banking, since I know they don't install all available security updates by default …

Seriously, I HATE raisins; I would never eat a cheesecake that contains raisins … did I ask anyone not to eat raisins, or did I propose to stop producing them, in the former sentence? No, obviously I didn't … and I don't want the raisin farmers to go jobless just because I don't like their product … why people read something like this into my words in the mail that was quoted is really beyond me …

So let's see if we can get something constructive out of all this. Obviously, had I been a debian developer posting to some debian ML, nobody would have picked it up … but since this trivial statement has drawn so much attention, we can probably both benefit from it …

Clement Lefebvre’s statement

To me PERSONALLY, suppressing any available security updates is a no-go, and while Clem points out that it is configurable in Mint, I don't believe my mom or my sister would get along with that; they would just use the default. Which would leave them obviously vulnerable with some packages (whether the vulnerabilities are exploited or not, there are open security holes in your system after all) … obviously the practice of suppressing these updates stems from bad experience with using the Ubuntu-provided security updates …

Hey Clem! … so how about we take a look at this and improve the situation for you? Obviously something in Ubuntu doesn't work like you need it to; Canonical has put a lot of time and money into improving the QA for about two years now. I think it would be really helpful to sit down and see if we can improve it well enough for both of us to benefit (Ubuntu from your feedback, and you from improvements we can make to the package quality) … whether you still want to suppress updating certain packages or not even after we fix the issue for you is indeed your choice, but please, dear press, allow me to also still not use Mint for online banking then 😉

Quotes from the comments section in the above page:

… “Maybe it is time to re-evaluate whether security updates should be held back by default. Ubuntu have made steps to avoid regressions such as Phased Updates.” …
Clem: … “I’d be happy to have that discussion and look at the pros and cons post Mint16 release. It’s not a reaction to a particular incident though, it’s a difference in policy. We actually built the tools that would allow us not to make it trivial for people to apply changes blindly. There’s pros and cons to it, and that’s why it’s configurable.” …

So hey, as much out of bounds as these press posts were for such a non-issue, they apparently caused some discussions and will possibly improve the situation for all of us in the end …

Oh, and btw … many people missed the actually interesting part in the mail thread … having Mate in debian and Ubuntu will definitely reduce the maintenance work for Mint, since they can just pull it in from the respective archives (and it might bring Ubuntu another new derivative distro).

Important changes in Ubuntu Engineering

With the recent decisions of Canonical to do more upstream development like UnityNext, Mir and UbuntuTouch, the Ubuntu Engineering team took some fundamental decisions to help fund this development.

After a long, heated, team-internal discussion it was decided that we no longer want to be a cost factor for Canonical and that Ubuntu development will be split out into its own company under the umbrella of the Ubuntu Foundation. The new company will be called “Ubuntu Engineering Ltd.” and will be seeking a listing on worldwide stock markets within the next 5 years.

To make developing Ubuntu a more profitable thing for you as a developer and volunteer, we worked out a new upload concept which is just being put into place on Launchpad right now.

* Each upload of a source package will be charged to your Launchpad account at €0.05 (we picked the Euro here simply because it is the more stable currency).
* For each successful build on one of the Ubuntu architectures (currently i386, amd64, armhf and powerpc) you will get refunded €0.01 per successfully built binary. An FTBFS (build failure) on any of these architectures will cost €0.02.
* We are still in discussion with the debian project about the charges for package syncs from debian and are confident we will come to a conclusion with the DPL very soon (since syncs cost so much more, we will likely be charging slightly more than for a source package upload that goes directly into Ubuntu; charging costs will likely be handled inside debian directly).
* The refunds will immediately be turned into stock options at €1.00 per 1% of an Ubuntu Engineering stock or (at your choice) into 0.5% of an album download from the UbuntuOne Music store.

You may also have heard about the recent decisions of the Ubuntu Technical board to provide a kind of rolling opportunity to follow the development release without being forced to upgrade your sources.list file.

We decided (in reminiscence of the release cycle this decision happened in) to stay with the R naming scheme for this and call the newly created rolling developer release the “Rolling Rouble”. The infrastructure for this will be in place within the next 10 days; if you are an Ubuntu developer and plan to participate, please update your sources.list by the end of April 10th to point to “rolling-rouble”.

If you are a bug triager you won't be left out! In agreement with the Ubuntu bug control team, a similar, yet to be fully defined, set of features will also be put in place for bug triage. The same goes for blueprints, where the refunding concept is already a bit more progressed (an accepted blueprint will cost €5.00; each fulfilled work item will be refunded with €0.01 in stock options or (at your choice) 0.01% of an album download from the UbuntuOne Music store).

We believe this is an excellent business model for you as a developer, bug triager and blueprint creator, and look forward to a bright future of Ubuntu development and full pockets for developers employed at Ubuntu Engineering Ltd.

In case you have any unanswered questions, we will do our best to leave it like this if you mail our mailing list at: “Ubuntu Engineering Ltd.”

On behalf of the Ubuntu Engineering team
Sincerely yours, Oliver Grawert