An extra postscript.

Testing submissions on the compiler course has worked well with docker over the last year. The latest version builds a report summary that is stored on the web server, and then rendered into a viewable report for students. It has made many platform-dependent bugs reproducible, and therefore fixable, for students, which has raised the quality of submissions. It was successful enough that I ported it over to the Linux course, where it has improved the grading process.
Then it all broke.
The testing process was performed on one of two machines, depending on where I was working at the time:
- A Mac laptop, using boot2docker inside virtualbox.
- A linux desktop, using a local install of docker.
Running Docker under Ubuntu.

The benefit of docker (over manipulating raw VM images) is the convenience that the command-line tools give for handling containers and images. The performance benefits are not so important for this application. But both of these attributes arise because Docker builds images on union filesystems, stacking a writable layer on top of read-only layers.
Switching to Ubuntu caused an unforeseen problem in the testing environment - all the core dumps disappeared. Investigating this revealed that the ulimit -c unlimited in the testing script was not sufficient to generate cores. The kernel checks /proc/sys/kernel/core_pattern to decide where to write the core image.
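The diagnosis is easy to reproduce from a shell; this is a sketch, using only the standard procfs path:

```shell
# raise the core-size limit for this shell (as the testing script already did)
ulimit -c unlimited
# ask the kernel where core dumps will be written; a leading '|' means the
# core is piped into a handler process instead of being written to a file
cat /proc/sys/kernel/core_pattern
```

On a stock Ubuntu host the second command prints the apport pipeline shown below, which is why ulimit alone is not enough.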
In a docker container this is simply a read-only view of the host's value! When /proc only served as an informative (reflective) interface to the kernel status this was not a problem. But now that /proc is also used as a configuration interface, details of the host leak into the container. In particular, Ubuntu sets this to:
|/usr/share/apport/apport %p %s %c %P
So that cores are piped into a reporting tool - which is not installed in the docker container, and is not the desired behaviour anyway.
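One way out (not the route taken here, since the setup moves into boot2docker below) is to reset the pattern on the Ubuntu host. The value is shared with every container, because there is only one kernel, so one change fixes them all. A sketch, run on the host as root:

```shell
# on the Ubuntu host: write plain core files in the crashing process's
# working directory instead of piping them to apport
echo 'core' | sudo tee /proc/sys/kernel/core_pattern
```

Note that this also disables Ubuntu's own crash reporting, and does not survive a reboot.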
Conclusion: using docker in its default mode on linux as a form of configuration management for a testing environment is fatally flawed.
Wrapping the linux docker inside boot2docker.

The official way to install docker does not seem to include a virtualised linux option. The VM approach is used on Windows and OS X, but the installer for those platforms (Docker Toolbox) is not available on linux. So this needs to be done manually:
curl -L https://github.com/docker/machine/releases/download/v0.8.2/docker-machine-`uname -s`-`uname -m` -o docker-machine
chmod 755 docker-machine
sudo mv docker-machine /usr/local/bin/
sudo chown root:root /usr/local/bin/docker-machine
docker-machine create --driver virtualbox default
Yes, I shit you not. It really is that ugly to get it onto an Ubuntu system. Life now takes a turn for the more "interesting":
Error creating machine: Error in driver during machine creation: This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory
It seems that VT-X is disabled by default on the HP EliteDesks. Enabling it allows the boot2docker image to run successfully (https://github.com/docker/machine/issues/1983), and then docker-machine env default prints the environment variables needed to connect.
Note: all the old scripts use sudo docker. This still works - but it connects the client to the local daemon rather than the boot2docker VM. Running docker as the user (with the docker-machine environment set) targets the right machine, where everything works. This is confusing to use.
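The two daemons can be disambiguated explicitly: docker-machine env prints export lines that point the client at the VM. A sketch of a typical session:

```shell
# point this shell's docker client at the boot2docker VM
eval "$(docker-machine env default)"
# confirm which daemon we are now talking to
docker info
```

Shells without that eval (and anything run under sudo, which drops the environment) keep talking to the local daemon.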
Standard install for the debian_localdev image used to test submissions:
docker run -it --name localdev debian /bin/bash
> apt-get update
> apt-get install gcc g++ clang gdb make flex bison graphviz vim
> ^d
docker commit localdev debian_localdev
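For reproducibility, the same image can be described in a Dockerfile instead of committing an interactive session. A sketch - the package list is copied from the session above, with -y added because a build is non-interactive:

```dockerfile
FROM debian
RUN apt-get update && \
    apt-get install -y gcc g++ clang gdb make flex bison graphviz vim
```

Built with docker build -t debian_localdev . this produces an equivalent image that can be rebuilt from scratch after a base-image update.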
After this we still get configuration leakage from the docker host - but boot2docker is quite minimal, so the leakage should be tolerable.
Need to remember to update the docker scripts before retesting all the submissions.