The Docker Life Cycle
Docker provides OS-level virtualization that lets developers deliver software in containers.
We’ve discussed Docker in past blogs, which can be found here. Let’s go over a brief recap. Docker essentially allows you to run applications in their own isolated containers, much like lightweight virtual machines but without the overhead of a full guest OS. This allows you to develop, run, and deploy programs uniformly regardless of the operating system they run on. Whether you're using Linux, Windows, or macOS, leveraging Docker allows your team(s) to work without the constraints and headaches of managing or deploying to different operating systems.
The Docker Life Cycle
Containers have a dedicated place throughout your software development life cycle (SDLC), though some rules should be established for specific use cases.
Docker in Dev
Within an established team, containers can be an immensely useful tool to ensure platform stability. A mature code base should have some form of `CONTRIBUTING.md` document to help new developers get a local environment up and running. This document might assume pre-built OS platforms:
> All of our developers are issued Microsoft Windows laptops with MySQL Community Server pre-installed.
Or it could require alternate instructions for Linux-based systems:
> Install the latest package of mysql-server with apt, yum, or dnf. (Mac users should brew install mysql.)
Even better, there could be instructions on setting up containerized services for development support, regardless of operating system:
> Run the latest `mysql` and `redis` images as Docker containers.
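In practice, that instruction amounts to a couple of `docker run` commands (a minimal sketch; the container names, host ports, and root password are illustrative):

```shell
# run MySQL and Redis as detached background containers
$ docker run -d --name dev-mysql -e MYSQL_ROOT_PASSWORD=secret -p 3306:3306 mysql
$ docker run -d --name dev-redis -p 6379:6379 redis
```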
Better still, include a compose file in your code base defining the services that make up your app:
```yaml
# docker-compose.yml
services:
  db:
    image: mysql
  cache:
    image: redis
```
With this file in place, a developer has only to run `docker compose up` to ensure all services needed by the app are locally installed. Each service defined in the compose file is run as its own container, isolated from any other service which might also be running locally (though port and volume conflicts could easily occur if you’re not explicit in the services definition). Docker handles the necessary steps to ensure service discovery is enabled within the container network, such that service names resolve as hostnames in DNS. If a Dockerfile is already part of your app, you can even include the build instruction in your compose file to complete your app environment.
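For example, the completed stack might look like this (a minimal sketch; the `web` service name and the port mapping are illustrative assumptions):

```yaml
# docker-compose.yml
services:
  web:
    build: .          # build the app image from the project's Dockerfile
    ports:
      - "8000:8000"
    depends_on:
      - db
      - cache
  db:
    image: mysql
  cache:
    image: redis
```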
Docker in Test: CI
Though containerized development environments offer a lot of benefits, where Docker truly shines is within a CI/CD pipeline. Rather than relying on shell scripts to cobble together build environments and maintaining separate databases for each deployment tier, rely on containers to provide these same services on ephemeral resources.
Your specific pipeline strategy will obviously depend on the hosted solution you employ, but the basic steps remain the same regardless of tooling. As an example, let’s consider a CI/CD pipeline for a Python project hosted in GitLab:
```yaml
# .gitlab-ci.yml
default:
  image: python
  services:
    - mysql
    - redis

test:
  before_script:
    - pip install pytest
  script:
    - pytest
```
Here we have declared that our base image is the latest Docker image of Python, with networked service containers for both MySQL and Redis (similar to the dev environment’s compose file). Our CI/CD pipeline consists of a single job, `test`, which first ensures pytest is installed on the default image, then runs the test suite. If any line in the `script` block exits with a non-zero code, the pipeline fails.
Similar to other build tools, such as Jenkins, CI/CD pipelines can be configured to push build artifacts and containers to registries for use throughout the SDLC. The pipeline could ensure the “latest” tag is always updated when code is merged to the main branch; this container would then be pulled when running `docker compose up` in dev.
```yaml
build:
  only:
    - main
  script:
    # authenticate against the project registry before pushing
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker build -t $CI_REGISTRY/app:latest .
    - docker push $CI_REGISTRY/app:latest
```
This additional `build` step would, only for the main branch, build and push our Docker image to the project’s container registry.
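Back in dev, the compose file could then reference that registry image rather than a local build (a minimal sketch; `registry.example.com` is a placeholder for your project’s registry host):

```yaml
# docker-compose.yml (dev pulls the image CI pushed)
services:
  web:
    image: registry.example.com/app:latest
    pull_policy: always   # fetch the most recently pushed "latest" on each up
```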
Docker in Prod
Just as in dev, a compose file will allow the project to run in a production environment, though security should be much more of a concern. Production will also require some consideration of resource scalability, but this is precisely the intent of containerization: reproducible, deterministic application and environment configuration.
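The Compose specification addresses the scalability side directly; per-service resource caps can be declared in the file itself (a minimal sketch, assuming a recent Docker Compose; the limits shown are illustrative):

```yaml
services:
  web:
    deploy:
      resources:
        limits:
          cpus: "0.50"    # cap the service at half a CPU core
          memory: 512M    # cap memory at 512 MiB
```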
A development compose file will typically use non-standard ports, volume bind mounts for live editing of code, and extra services such as build and debug tools, all of which will need to be modified for a production environment. This is easily accomplished by extending the local compose file with a second, production-specific compose file meant to override and supplement the first.
Consider a very basic `docker-compose.yml` file:
```yaml
# docker-compose.yml
services:
  web:
    image: my_python_app
    depends_on:
      - db
      - cache
  db:
    image: mysql
  cache:
    image: redis
```
To extend this stack in a production environment, consider the following `docker-compose.prod.yml` file:
```yaml
# docker-compose.prod.yml
services:
  web:
    ports:
      - "80:80"
    environment:
      APP_ENVIRONMENT: prod
  cache:
    environment:
      TTL: "500"
```
The CD pipeline would then need to be updated to use:

```shell
$ docker compose -f docker-compose.yml -f docker-compose.prod.yml up
```
This same strategy could be used with different combinations of compose files for CI, staging, review, and release environments. Infrastructure as Code (IaC) principles then become part of the code base, further strengthening the application with deterministic, environment-specific builds.
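For example (a minimal sketch; the `docker-compose.ci.yml` and `docker-compose.staging.yml` file names are hypothetical overrides following the same pattern):

```shell
# CI: run the stack once and stop when any container exits
$ docker compose -f docker-compose.yml -f docker-compose.ci.yml up --abort-on-container-exit

# Staging: run the stack detached with staging-specific overrides
$ docker compose -f docker-compose.yml -f docker-compose.staging.yml up -d
```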