GUI fun inside Docker or Kubernetes

The year is 2019, so everyone and their grandparents are running their sophisticated microservice solutions on Kubernetes. Like many others, we run our Kubernetes clusters on Amazon EC2. Striving to keep our dev and prod environments as similar as possible, we run a single-node Kubernetes cluster inside a Docker-for-Mac VM on our OSX-based dev stations. All this has its pros and cons, which is a completely separate discussion. We recently hit a minor nuisance: for a troubleshooting session we needed to run a GUI tool from inside the isolated Kubernetes network. If you want to find a few possible solutions to this (probably) common problem, please read on.

A Feature

On one particularly bad day, we needed a piece of code that takes screenshots of some dynamic HTML pages in our system and stores them in Amazon S3. Due to our JS-heavy codebase, we chose the JS library Puppeteer to aid in this. It seems mature and is actively developed by the Google Chrome team. Most importantly for us, it allows us to programmatically control a Chrome browser started in headless mode, so we don’t need any window managers or X servers.
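Our actual Puppeteer code isn’t important here, but to give a feel for what “headless screenshots” means, the same capability is exposed by the Chrome binary itself. A minimal sketch, with a placeholder URL and output path:

<code>
# Headless Chrome can render a page and take a screenshot without any
# X server or window manager -- Puppeteer drives this same mode
# programmatically. URL and output path are placeholders.
google-chrome --headless --disable-gpu --window-size=1280,800 \
    --screenshot=/tmp/page.png https://example.com/some-dynamic-page
</code>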

All our excitement at how quickly this feature was implemented was stifled when it turned out all screenshots were coming out blank. The logical first troubleshooting step, loading the same pages from a local browser, got us nowhere, as the pages loaded just fine. We searched Puppeteer’s logs but couldn’t find anything. We did our best to replicate the issue with curl requests but failed there as well. It looked, smelled and behaved like a configuration problem, but we needed better tooling to pinpoint it.
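For the curious, the in-cluster replication attempts looked roughly like this; the pod and service names are made up for illustration:

<code>
# Hypothetical names -- replay the failing request from inside the pod
# running Puppeteer, to rule out network differences with a local
# browser (assumes curl is available in the image).
kubectl exec -it screenshot-worker -- \
    curl -v http://reports-service/pages/some-dynamic-page
</code>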

Brainstorming time

We needed to make an HTTP request from a GUI browser inside the Kubernetes network, ideally from the exact container in which the Puppeteer code was running. The easiest options that came to mind were:

SSH Local Port Forwarding

If we have an SSH server inside the container, we can use it as a jump box, do some clever port forwarding and open the needed page in our local browser (a sketch of the forwarding command follows the list below). This approach is fairly easy but has some important limitations:

  • It needs an SSH server inside the container. That means we either need to rebuild the container or tinker around in it;
  • It needs the SSH port to be forwarded from outside the Kubernetes network to the container. In some cases that might mean a whole redeployment of the stack, which in some organizations is not an advisable move on production;
  • It works only for this particular case of loading a web page. If we needed to run another GUI tool inside the container, we would need another solution;
  • In some cases (including ours), the locally forwarded port may already be used by another application or blocked by firewall rules.
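Assuming those hurdles were cleared, the forwarding itself would be a one-liner. A hedged sketch, where the host, ports and user are placeholders:

<code>
# Tunnel local port 8080 to port 80 inside the container, via a
# hypothetical SSH server reachable on port 2222 of the cluster node.
ssh -L 8080:localhost:80 -p 2222 root@k8s-node.example.com
# A local browser pointed at http://localhost:8080 now loads the page
# from inside the container's network namespace.
</code>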

X forwarding via SSH

We can share our local window manager through SSH, a colleague shouted ecstatically! This one is also easy to implement but again has limitations that outweigh the benefits in our case:

  • It requires an X server, and OSX doesn’t include one by default. We can install one separately, but it does not play nicely with the rest of the OSX ecosystem. If we were using Linux dev stations, this might have been our go-to option.
  • Just like the SSH Port Forwarding solution, it requires an SSH server inside the container and the proper port to be exposed.

Depending on your environment, you should seriously consider this option. For certain cases its benefits are undeniable. For example, with a decent network connection, you can move all your development to discardable cloud, VM or container instances: start an instance, forward X over SSH, and all UI output arrives directly on your laptop.
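For reference, the forwarding itself is again a single SSH flag. In this sketch, an X server (e.g. XQuartz on OSX) is assumed to be running locally and the host details are placeholders:

<code>
# -X forwards the remote X11 connection to the local X server, so any
# GUI tool started in this session renders on the laptop's screen.
ssh -X -p 2222 root@k8s-node.example.com
</code>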

Open a VNC connection to the container

We can install a window manager inside the container, add a VNC server on top, connect to it and run Chrome in full GUI mode. This sounded good, but:

  • It requires a window manager. In some cases that might be OK, but installing a ton of big and messy packages inside a running container is rarely a good idea: on Ubuntu 16.04, XFCE4 depends on about 475 packages and GNOME on about 1500 (a rough sketch of such a retrofit follows this list);
  • We need a VNC server and the VNC port to be forwarded, which again means a possible redeployment of the stack.
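To make the “ton of packages” concern concrete, here is roughly what retrofitting a running Ubuntu-based container would involve; the container name, package set and display number are illustrative:

<code>
# Inside a hypothetical running container
# (entered via: docker exec -it app-container bash)
apt-get update && apt-get install -y xfce4 x11vnc xvfb  # pulls in hundreds of packages
Xvfb :1 -screen 0 1280x800x24 &      # virtual framebuffer as display :1
DISPLAY=:1 startxfce4 &              # window manager on that display
x11vnc -display :1 -forever -nopw &  # VNC server on port 5900
</code>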

VNC server in a new container with shared X Server

The VNC connection seemed almost perfect, but we knew there was a way to make it better. We have some experience with how service meshes work – e.g. Istio and Linkerd. The basic premise is to start a container right next to your application container and share some resources between them; in the Istio and Linkerd cases, they share the pod’s network stack. We wanted something very similar: starting a separate container with an X server and VNC in it, and sharing that X server with the container running Chrome. We chose centos-xfce-vnc as it comes pre-bundled with noVNC, which lets us access the container directly from our browsers without a VNC client.

Implementation

As this was a development environment and we needed all of this just for testing purposes, we could get away with hacking together a few docker commands and checking what a browser loads. If proper integration into a multi-node Kubernetes cluster is required, the basic idea stays the same, but many more infrastructure-specific parameters need to be taken into account. That might be the topic of another blog post – consider this one just a proof of concept.
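To hint at what that would look like, here is a hedged, untested sketch of the same sidecar idea as a Kubernetes pod spec, with an emptyDir volume playing the role of the shared X server socket directory (authentication details are left out; the pod name and mounts are assumptions):

<code>
# Hypothetical pod spec -- not a tested manifest. A VNC sidecar shares
# its X server socket directory with the Chrome container via emptyDir.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gui-debug
spec:
  volumes:
    - name: x11
      emptyDir: {}
  containers:
    - name: vnc
      image: consol/centos-xfce-vnc
      ports:
        - containerPort: 6901
      volumeMounts:
        - name: x11
          mountPath: /tmp/.X11-unix
    - name: chrome
      image: justinribeiro/chrome-headless
      env:
        - name: DISPLAY
          value: ":1"
      volumeMounts:
        - name: x11
          mountPath: /tmp/.X11-unix
EOF
</code>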

As per the docs of centos-xfce-vnc, we ran <code>docker run --rm -p 6901:6901 consol/centos-xfce-vnc</code>, managed to connect over VNC and play around a bit in the XFCE interface.

Now we just needed to share the X server with the container running Chrome. Luckily, X servers conveniently expose Unix domain sockets over which all X-related communication happens, all contained inside the directory <code>/tmp/.X11-unix</code>. That is a regular directory that we can easily share between containers using Docker volumes, so we just added a volume argument to our previous docker command:

<code>docker run -v '/tmp/.X11-unix:/tmp/.X11-unix:rw' --rm -p 6901:6901 consol/centos-xfce-vnc</code>

Tinkering around with the exposed X server, we found we were using authenticated X sessions, so we had to share the .Xauthority file with the Chrome container as well. Be aware that, because of the way Docker volumes work, you have to create the file before you start the VNC container. You can do it by executing something like <code>docker run -ti --rm -v /:/host/ ubuntu:16.04 touch /host/var/lib/.Xauthority</code>.

The last thing you need to do is set the proper value of the <code>DISPLAY</code> environment variable inside the Chrome container. In most cases, it should be <code>DISPLAY=:1</code>.
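A quick way to sanity-check the wiring before involving Chrome at all is to point a trivial X client at the shared display; this sketch assumes nothing beyond a stock Ubuntu image:

<code>
# If the socket, .Xauthority and DISPLAY are all wired up correctly,
# xdpyinfo prints the X server's capabilities instead of erroring out.
docker run --rm -v '/tmp/.X11-unix:/tmp/.X11-unix:rw' \
    -v '/var/lib/.Xauthority:/root/.Xauthority:rw' -e DISPLAY=:1 \
    ubuntu:16.04 bash -c 'apt-get update -qq && apt-get install -y -qq x11-utils && xdpyinfo'
</code>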

So the whole X server sharing setup between the containers can be summarised in the following three commands:

<code>
docker run -ti --rm -v /:/host/ ubuntu:16.04 touch /host/var/lib/.Xauthority

docker run -v '/tmp/.X11-unix:/tmp/.X11-unix:rw' -v "/var/lib/.Xauthority:/headless/.Xauthority:rw" --rm -p 6901:6901 consol/centos-xfce-vnc

docker run -v '/tmp/.X11-unix:/tmp/.X11-unix:rw' -e DISPLAY=:1 --rm --cap-add=SYS_ADMIN justinribeiro/chrome-headless google-chrome
</code>

After that, you can go to http://localhost:6901/?password=vncpassword and play around with your Chrome inside the Docker container.

Standard disclaimer: regard this as just a dumbed-down proof of concept. Depending on your specific needs, it should be adapted and converted into a full-fledged Kubernetes deployment before it goes anywhere near your real infrastructure.

If there’s one thing you can take away from this article, let it be that with enough tinkering you can adapt your Kubernetes deployment to whatever case hits you, without having to dramatically change your underlying codebase.