[Unsupported] Developing inside Docker with precompiled TPL binaries

For development purposes, you may want to use the publicly available docker images instead of compiling them yourself. While this is possible, and this page will help you through this journey, please note that this is not officially supported by the GEOS team, which reserves the right to modify its workflow or delete elements on which you may have built your own workflow.

There are multiple options to use the exposed docker images.

  • Many IDEs now provide remote development modes (e.g. CLion, VS Code, Eclipse Che and surely others). Depending on your choice, please read their documentation carefully so you can add their own requirements on top of the TPL images that are already available.

  • Another option is to develop directly inside the container (i.e. not remotely). Install your favorite development environment inside the image (be mindful of X display issues), connect to the running container and start hacking!

  • It is also possible to develop directly in the cloud using GitHub codespaces. This product will let you buy a machine in the cloud with an environment already configured to build and run GEOS. The submodules are automatically cloned (except for the integratedTests, which you may need to init yourself if you really need them, see .devcontainer/postCreateCommand.sh). You do not need to run the scripts/config-build.py scripts since cmake and vscode are already configured. Finally, run cmake through the vscode interface and start hacking!

You must first install docker on your machine. Note that there now exists a rootless install that may help you in case you are not granted extended permissions on your environment. Also be aware that nvidia provides its own nvidia-docker that grants access to GPUs.
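As a quick sanity check (a sketch assuming a POSIX shell; the message is of course up to you), you can verify that the docker CLI is available before going further:

```shell
# Check that the docker CLI is on the PATH (works for both the classic
# and the rootless install) and print its version if so.
if command -v docker >/dev/null 2>&1; then
  docker --version
else
  echo "docker not found: install it first (rootless or not)"
fi
```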

Once you’ve installed docker, you must select from our docker registry the target environment you want to develop into.

  • You can select the distribution you are comfortable with, or you may want to mimic (to some extent) a production environment.

  • Our containers are built to be relatively CPU-agnostic (though still x86_64), so you should be fine.

  • Our GPU containers are built for a dedicated compute capability that may not match yours. Please dive into our configuration files and refer to the official nvidia page to see what matches your needs.

  • There may be risks of kernel inconsistency between the container and the host, but if you have a relatively modern system (and/or do not use tools that interact directly with the kernel, like perf) it should be fine.

  • You may have noticed that our docker containers are tagged like 224-965. Please refer to the Continuous Integration process for further information.
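As a hedged sketch (the geosx organization and the ubuntu20.04-gcc9 flavor are examples; substitute the flavor and CI tag you actually selected), fetching your chosen image looks like:

```shell
# Hypothetical example: compose the image reference from the flavor and CI tag.
ORG=geosx                # organization hosting the public images
IMG=ubuntu20.04-gcc9     # the flavor you selected
VERSION=224-965          # the CI tag discussed above
REF=${ORG}/${IMG}:${VERSION}
echo "selected image: ${REF}"
# docker pull "${REF}"   # uncomment to actually download the image
```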

Now that you’ve selected your target environment, you must be aware that just running a TPL docker image is not enough to let you develop. You’ll have to add extra tools.

The following example is for our ubuntu flavors. You’ll notice the arguments IMG, VERSION, ORG. While surely overkill for most cases, if you develop in GEOS on a regular basis you’ll appreciate being able to switch containers easily. For example, simply create the image remote-dev-ubuntu20.04-gcc9:224-965 by running

export VERSION=224-965
export IMG=ubuntu20.04-gcc9
export REMOTE_DEV_IMG=remote-dev-${IMG}
docker build --build-arg ORG=geosx --build-arg IMG=${IMG} --build-arg VERSION=${VERSION} -t ${REMOTE_DEV_IMG}:${VERSION} -f /path/to/Dockerfile .

And the Dockerfile is the following (comments are embedded)

# Define your base image through build arguments
ARG IMG
ARG VERSION
ARG ORG
FROM ${ORG}/${IMG}:${VERSION}

# Uninstall some packages, install others.
# I use these for CLion, but VS Code would have different requirements.
# Use yum's equivalent commands for centos/red-hat images.
# Feel free to adapt.
RUN apt-get update
RUN apt-get remove --purge -y texlive graphviz
RUN apt-get install --no-install-recommends -y openssh-server gdb rsync gdbserver ninja-build

# You may need to define your time zone. This is a way to do it. Please adapt to your own needs.
RUN ln -fs /usr/share/zoneinfo/America/Los_Angeles /etc/localtime && \
    dpkg-reconfigure -f noninteractive tzdata

# You will need cmake to build GEOSX.
ARG CMAKE_VERSION=3.23.3
RUN apt-get install -y --no-install-recommends curl ca-certificates && \
    curl -fsSL https://cmake.org/files/v${CMAKE_VERSION%.[0-9]*}/cmake-${CMAKE_VERSION}-linux-x86_64.tar.gz | tar --directory=/usr/local --strip-components=1 -xzf - && \
    apt-get purge --auto-remove -y curl ca-certificates
RUN apt-get autoremove -y

# You'll most likely need ssh/sshd too (e.g. CLion and VS Code allow remote dev through ssh).
# This is the part where I configure sshd.

# The default user is root. If you plan your docker instance to be a disposable environment,
# with no sensitive information that a split between root and a normal user could protect,
# then this is a choice that can make sense. Make your own decision.
RUN echo "PermitRootLogin prohibit-password" >> /etc/ssh/sshd_config
RUN echo "PermitUserEnvironment yes" >> /etc/ssh/sshd_config
RUN mkdir -p -m 700 /root/.ssh
# Put your own public key here!
RUN echo "ssh-rsa AAAAB... your public ssh key here ...EinP5Q== [email protected]" > /root/.ssh/authorized_keys

# Some important variables are provided through the environment.
# You need to explicitly tell sshd to forward them.
# Using these variables instead of hard-coded paths lets you adapt to different installation locations in different containers.
# Feel free to adapt to your own convenience.
RUN touch /root/.ssh/environment &&\
    echo "CC=${CC}" >> /root/.ssh/environment &&\
    echo "CXX=${CXX}" >> /root/.ssh/environment &&\
    echo "MPICC=${MPICC}" >> /root/.ssh/environment &&\
    echo "MPICXX=${MPICXX}" >> /root/.ssh/environment &&\
    echo "MPIEXEC=${MPIEXEC}" >> /root/.ssh/environment &&\
    echo "OMPI_CC=${CC}" >> /root/.ssh/environment &&\
    echo "OMPI_CXX=${CXX}" >> /root/.ssh/environment &&\
    echo "GEOSX_TPL_DIR=${GEOSX_TPL_DIR}" >> /root/.ssh/environment
# If you decide to work as root in your container, you may consider adding
# `OMPI_ALLOW_RUN_AS_ROOT=1` and `OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1`
# to your environment. This will spare you from adding the `--allow-run-as-root` option
# every time you run mpi. Of course, weigh the benefits and risks and make your own decision.

# The default ssh port 22 is exposed. For _development_ purposes,
# it can be useful to expose other ports for remote tools.
EXPOSE 22 11111 64010-64020
# sshd's option -D prevents it from detaching and becoming a daemon.
# Otherwise, sshd would not block the process and `docker run` would quit.
RUN mkdir -p /run/sshd
ENTRYPOINT ["/usr/sbin/sshd", "-D"]
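One detail worth noting: the cmake download URL derives its directory (e.g. v3.23) from the full version using the shell suffix-stripping expansion ${CMAKE_VERSION%.[0-9]*}. In plain shell, this behaves as follows:

```shell
# The "%" expansion removes the shortest suffix matching the pattern,
# here ".<digit>..." i.e. the patch component of the version.
CMAKE_VERSION=3.23.3
MAJOR_MINOR=${CMAKE_VERSION%.[0-9]*}
echo "${MAJOR_MINOR}"   # → 3.23
```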

Now that you’ve created the image, you must instantiate it as a container. I like to do

docker run --cap-add=SYS_PTRACE -d --name ${REMOTE_DEV_IMG}-${VERSION} -p 64000:22 -p 11111:11111 -p 64010-64020:64010-64020 ${REMOTE_DEV_IMG}:${VERSION}

which creates the container remote-dev-ubuntu20.04-gcc9-224-965, a running instance of the image remote-dev-ubuntu20.04-gcc9:224-965.

  • Note that you’ll have to access your remote development instance through port 64000 (forwarded by docker to the container’s standard ssh port 22).

  • The additional port 11111 and ports 64010 to 64020 will be open if you need them (remote paraview connection, multiple instances of gdbserver, …).

  • Please think about how you will retrieve your code afterwards: you may want to bind mount volumes and store your code there (-v/--volume= options of docker run).

  • Change docker to nvidia-docker and add the --gpus=... option for GPUs.
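For instance, a bind mount could be sketched as follows (the host path ${HOME}/geos and the mount point /root/geos are assumptions; adapt them to your checkout). The command is built into a variable and echoed so you can review it before running it:

```shell
# Hypothetical sketch: keep your source tree on the host so it survives the container.
REMOTE_DEV_IMG=remote-dev-ubuntu20.04-gcc9
VERSION=224-965
SRC_DIR=${HOME}/geos   # hypothetical host checkout; adapt to your setup
RUN_CMD="docker run --cap-add=SYS_PTRACE -d --name ${REMOTE_DEV_IMG}-${VERSION} -p 64000:22 -v ${SRC_DIR}:/root/geos ${REMOTE_DEV_IMG}:${VERSION}"
echo "${RUN_CMD}"
# eval "${RUN_CMD}"   # uncomment to actually start the container
```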

You can stop and restart your container with

docker stop ${REMOTE_DEV_IMG}-${VERSION}
docker start ${REMOTE_DEV_IMG}-${VERSION}

Now hack.