When developing any app to work in Docker, there are a number of pitfalls a newbie developer can fall into that may not be apparent.  In this post, we map out three of these pitfalls so that you can avoid them in your app.

Localhost is not local to your localhost

If your app's configuration requires an IP address or hostname to reach a database or any other external dependency, you must be aware of this pitfall.  Your docker host's localhost is not reachable from inside a docker container the same way it is from the host server.  Within a docker container, localhost refers to the container itself, not the host the container is running on.

So if you’re developing an app and want to test access to a database running on the same host, you cannot use “localhost” or “127.0.0.1” in the app’s configuration.  Instead you will need to use the actual IP address of your host server, or, if using docker 18.03+ on Windows/macOS, the special name host.docker.internal.  Of course this is only for development or testing purposes; in production you should reach external dependencies via FQDNs or static IPs.
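As a sketch, suppose your app reads its database location from an environment variable (DB_HOST and DB_PORT here are hypothetical names, as is the image name; substitute whatever your app actually uses):

```shell
# On docker 18.03+ for Windows/macOS, host.docker.internal resolves
# to the host machine from inside the container.
docker run -e DB_HOST=host.docker.internal -e DB_PORT=5432 yourimagename

# On Linux, where host.docker.internal may not be available,
# fall back to the host's actual IP address:
docker run -e DB_HOST=192.168.1.10 -e DB_PORT=5432 yourimagename
```

Either way, the point is the same: the value passed into the container must be an address that is meaningful from the container's network, not the host's.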

Docker-compose can help you and hurt you

Docker-compose is a tool for packaging docker dependencies together and managing how they are started and configured.  Imagine your app has a bunch of requirements: an SQL database, a kv-store, etc.  If the end user is an enterprise client, they may have their own requirements for how these services are run, and bundling them in with docker-compose may just not be possible.  So how do you package an app that can be used by both enterprises and individuals?

The solution is to use docker-compose sparingly: package the requirements with it, but make sure your app’s docker image can also run without compose when the end user manages the requirements themselves.  In other words, ensure your app can run as a single docker image when all the requirements are provided by the end user, and ship a docker-compose file for those end users who do not want to manage the requirements themselves or who just want to try the app out.
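A minimal compose file along these lines might look like the following.  Everything here is a placeholder, not taken from the post: the image name, the choice of Postgres as the SQL database, and the DB_HOST variable the app is assumed to read.

```yaml
version: "3"
services:
  app:
    image: yourimagename
    ports:
      - "8080:8080"
    environment:
      DB_HOST: db        # the app finds its database via this variable
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
```

An end user who manages their own database simply skips this file and runs the app image directly with docker run, pointing DB_HOST at their own server.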

Exporting docker images isn’t an obvious process

There are many ways to release a docker image to the world, but the simplest is not the most obvious.  Docker provides a way to generate a named image that can be used either directly or via docker-compose.  But this is a two-step process.

Firstly, you need to build the image, giving it a name and pointing docker at the build context:

docker image build -t yourimagename -f /path/to/app/Dockerfile /path/to/build/location

Without the -t flag the image is only addressable by its ID, and the save step below will not find it by name.

Second, you need to roll the image into a file for distribution:

docker save yourimagename | gzip > yourimagename.tar.gz

The end user can then load the image on their docker host quite easily using:

docker load -i yourimagename.tar.gz

At this point the image will be available to be started directly or via a docker-compose file.
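Putting the steps together, the whole round trip looks like this (image name and paths are placeholders):

```shell
# On your machine: build a named image, then roll it into a tarball
docker image build -t yourimagename -f /path/to/app/Dockerfile /path/to/build/location
docker save yourimagename | gzip > yourimagename.tar.gz

# ...ship yourimagename.tar.gz to the end user...

# On the end user's machine: load the image and run it
docker load -i yourimagename.tar.gz
docker run -d --name yourapp yourimagename
```

Pushing to a registry is the more common distribution route, but the tarball approach works anywhere the end user has a docker host, with no registry access required.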