Docker Swarm is an advanced solution that provides a high-availability cluster for deploying services. It is a great option for teams already comfortable working with the Docker container framework.

On the Healthcare Blocks platform, Docker Swarm includes the following features:

  • A small cluster consisting of a build node, two app nodes, and a load balancer
  • Git-based deployments
  • Automatic provisioning and renewal of LetsEncrypt SSL certificates
  • Private Docker Registry (Harbor) for storing images and scanning for vulnerabilities
  • Traefik reverse proxy

How It Works
When you deploy (via git push) an application containing a Dockerfile definition, the Dockerfile is used to build an application image. The image is then launched as one or more containers using Docker Swarm Mode; each container represents an isolated set of processes. The services running in the containers are not directly accessible - instead, an HTTP reverse proxy listens for incoming requests (on ports 80 and 443) and routes traffic accordingly using host names (e.g. www.mydomain.com, admin.mydomain.com).

Preparing Your First App
In your local development environment, install Docker and create a Dockerfile appropriate for your application stack. Your Dockerfile should include a CMD (or ENTRYPOINT) instruction that starts your application's server as a foreground process. This same Dockerfile will be used in production to build and run your application. More information can be found here.
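
For illustration, a minimal Dockerfile for a Node.js app might look like this (a sketch only - the base image, port, and start command are assumptions you should adapt to your own stack):

# Hypothetical Node.js example - adjust for your framework
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
EXPOSE 3000
# CMD starts the server as the container's foreground process
CMD ["node", "server.js"]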

Ensure your app builds locally: 

docker build -t app .

...and runs successfully:

docker run -it app

You should see output that represents a running application. If you encounter an error and your container exits, you will need to debug and repeat the above steps until everything runs successfully. 
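
If you want to verify that the app responds over HTTP during this local test, publish its port (3000 here is an assumption - match your app's listening port):

docker run -it -p 3000:3000 app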

You should be storing your source code in a service like GitHub or Bitbucket, but your Healthcare Blocks build server uses a local repository to receive pushes and trigger builds. If you haven't done so already, create a local git repository for your application.

git init
git add .
git commit -m "Initial commit"

Now define a git remote to point to your Healthcare Blocks build server. The values below will be provided to you by a support technician in your initial provisioning email.

git remote add hcb ssh://<username>@<build server address>/data/apps/<customer name>/deploy.git

Configure App Settings on the Server
The following steps describe how to configure production-specific settings, including domain name, for your app.

First, establish an SSH connection to your server:

ssh <username>@<build server address>.healthcareblocks.com

To view and modify environment variables that will be injected into your app during runtime, edit /data/apps/<customer name>/.env. Note: if your application source code already includes an .env file, it will be overwritten by the build server's version during the build step.
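
The file contains one KEY=value pair per line. For example (the variable names and values below are purely illustrative):

NODE_ENV=production
DATABASE_URL=postgres://user:password@db.example.com:5432/mydb
SECRET_KEY_BASE=replace-me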

The Docker service settings for your app are located in /data/apps/<customer_name>/docker-compose.yml. In most cases, you will not need to modify this file, but power users are welcome to do so; if you need assistance or guidance, feel free to create a support ticket. Note: if your git repo already includes a development version of docker-compose.yml, it will be ignored during deployment, so there is no risk of conflicts.

The "traefik.port" setting refers to the port being used by your application, which depends on your application framework and any defaults you might have included in Dockerfile. Many modern Web frameworks use port 3000 or 5000. Note that this port is not exposed to the outside world. Please change this value to match your app framework's config, if necessary:

- "traefik.port=3000

Default URL and Domain Names
By default, your application is accessible at app.<load balancer address>.healthcareblocks.com, as defined in the config file:

- "traefik.frontend.rule=Host:app.<load balancer address>.healthcareblocks.com"

You can append your own subdomain/domain:

- "traefik.frontend.rule=Host:app.<load balancer address>.healthcareblocks.com,www.mydomain.com"

...but don't set your domain name until after you've updated your DNS settings and they've propagated. Running "dig www.mydomain.com" should display your load balancer address. Most customers set their final domain name after they've confirmed their app is fully functional.

Your DNS settings (managed at an external DNS provider) should have a CNAME record for your subdomain (or "www") pointing to app.<load balancer address>.healthcareblocks.com.
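
In zone-file notation, the record would look roughly like this (the TTL is illustrative):

www.mydomain.com.    3600    IN    CNAME    app.<load balancer address>.healthcareblocks.com.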

SSL Certificates
The reverse proxy automatically provisions LetsEncrypt.org certificates when a new app or service is deployed that has a "traefik.frontend.rule=Host:" rule listed in docker-compose.yml. Thus, your application will be available via HTTPS automatically, and HTTP will always redirect to HTTPS, per security best practices. The SSL certificate is automatically renewed; no action is required at your end.

Your environment includes an SSL certificate for app.<load balancer address>.healthcareblocks.com. So after you deploy your app the first time, accessing your app via HTTPS at that address is a good sanity check.

Be sure your DNS settings have been already updated to point your domain to your Healthcare Blocks load balancer address!

Deploying Your App
Before you deploy your app for the first time, you will need to SSH to your build server and authenticate to the private Docker registry. You will receive your credentials from your support technician.

docker login --username <username> https://<build server>

Upon successful login, a Docker credential token is cached under your home directory on the server. Running the docker logout https://<build server> command will clear this token. If you log out, remember to log in again before your next deployment. Every team member responsible for deployments should have a separate username and should run the login command.

Once you've made any relevant configuration changes on the server and logged in to your registry, you are ready to deploy your app.

In your local development environment, run this command:

git push hcb master

You will see deployment output, as well as any potential problems that might have prevented a successful build. A copy of the same output is stored under /var/log/deployments, which is useful for reference if you have multiple team members responsible for deployments.

"Too many authentication failures" error
Be sure the SSH key you provided us is loaded into your local SSH agent:

ssh-add ~/path/to/private-key

Can't deploy; "Everything up-to-date" message during git push

If your deployment errored out on the server and you need to re-push, you will need to trigger a change in your local git repo. One option is to create an empty commit:

git commit --allow-empty -m "Trigger deployment"

...and then run the git push command again. Alternatively, you can maintain a version file in the root of your project, incrementing and committing it every time you need to deploy.
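
As a sketch of the version-file approach (the VERSION file name is just a convention):

echo $(date +%s) > VERSION
git add VERSION
git commit -m "Bump version to trigger deployment"
git push hcb master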

Checking Application Runtime Status
To view all Docker-driven services in your cluster:

docker service ls

Command reference page

To list the tasks of one or more services and their current locations:

docker service ps <service name>

Command reference page

Viewing Your Application Log

docker service logs <service name>

Tailing your log:

docker service logs --tail 10 -f <service name>

Command reference page

Scaling Your Application
By default, your application is deployed to a single app node, but it's a good idea to scale it to two nodes to maintain high availability and to distribute load evenly as traffic increases.

docker service scale <service name>=<number of instances>

# example
docker service scale app=2

To check the progress of your scaling action, run:

docker service ps <service name>

Accessing Your Docker Registry
Your build server includes Harbor, an enterprise-grade Docker registry that includes a Web UI. You can access it via https://<build server address> using the credentials supplied by your support technician.

Managing Harbor Users
Registry users are used by Docker Swarm to authenticate requests for pushes and pulls of Docker images. They are also used for Harbor UI access. A default user should already exist for your organization. You can change its password, if desired; just be sure that any time you change the password, you also re-authenticate via the docker login command as described above. If you have multiple team members deploying code, it is highly recommended that you have a separate SSH user and Harbor user for each member. Adopting this best practice will help you pass security-related audits. To create new users, go to Administration, Users. Setting a user as an admin gives them the ability to manage Harbor projects and other users.

Managing Harbor Projects
Projects are used to store Docker images and to control which Harbor users have access to them. A default project should already exist for your organization. In most cases, you will not need to create additional projects unless you plan to deploy multiple applications to your cluster. If you create additional Harbor users, go to Projects, <Project Name>, Members, click the +User button, type the username of the team member, and choose a role ("Developer" is usually sufficient).

Vulnerability Scans
Your default project is configured to automatically scan recently built Docker images for vulnerabilities in system components and libraries. You can view the results of the scans by clicking individual Docker images listed under the project name. In some cases, a detected CVE might have a low threat risk and can be safely ignored. For higher-risk issues, you should investigate whether newer versions are available of the Docker base image, libraries, and other components referenced in your image. In some cases, a patch script can be embedded in your image. If you need guidance, feel free to create a help desk ticket.

Official Harbor User Guide

How to Deploy a Background Process
Many Web apps implement a background worker pattern to handle queued jobs and other tasks that don't make sense to process during an HTTP request cycle. You can extend your app's docker-compose.yml service configuration by adding another section that is responsible for deploying a separate worker process. Under the default "app" configuration, add another section that looks like this:

  worker:
    image: "<build server>/<customer name>/app:latest"
    command: <worker startup command>
    deploy:
      labels:
        - "traefik.enable=false"
      placement:
        constraints:
          - node.labels.type != builder
    networks:
      - <docker network name>
    env_file:
      - /data/apps/<customer name>/.env
    volumes:
      - /etc/ssl:/etc/certs

The "worker" name can be set to any desired value. The image should be set to the same value as your main image, since the assumption is that the background process shares the same code base and configuration. If this is not the case, refer to "How to Deploy Multiple Apps" below. The "command" should be set to an executable instruction that starts up the worker as a foreground process. For example, in Ruby on Rails apps, this could be set to "rake jobs:work" or "bundle exec sidekiq" depending on the choice of implementation. Under labels, disable Traefik, since we don't want to expose this service to the outside world.

After you've updated the docker-compose.yml file, you can either git push your app or run the following command:

docker stack deploy -c /data/apps/<customer name>/docker-compose.yml --with-registry-auth <customer name>

If you'd like the background process to run on both app nodes, you can scale it as described under Scaling Your Application. Alternatively, you can include a declaration in docker-compose.yml by adding "mode: global" under the "deploy:" key. "Global" instructs Docker Swarm to run the service on every available node.
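
A minimal sketch of where the key goes, using the worker section from above:

  worker:
    deploy:
      mode: global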

How to Deploy Multiple Services (Advanced Topic)
a. If your app has a dependency on backing services such as Memcached, Redis, or other components that aren't necessarily part of your code base, just follow the "How to Deploy a Background Process" guidance above but set the "image" value to a publicly known project from Docker Hub. Usually the Docker image for a backing service exposes its service port (e.g. 6379 for Redis), but you can also specify it in docker-compose.yml as needed using the expose setting. Your primary application can access backing services using the names specified in docker-compose.yml - no need to hardcode IP addresses.
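
For example, a Redis backing service section might look like this (the image tag and network name are assumptions - mirror the values used elsewhere in your file):

  redis:
    image: "redis:6-alpine"
    deploy:
      labels:
        - "traefik.enable=false"
      placement:
        constraints:
          - node.labels.type != builder
    networks:
      - <docker network name>

Your application can then reach it at host name "redis" on the service's standard port (6379).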

b. If you have multiple apps in a single git repo but use one common Dockerfile, then all you have to do is update the docker-compose.yml by duplicating the "app" section, renaming "app" to a new name associated with your second app, and specifying a "command" to start up the process.

c. If, on the other hand, you've separated your apps into subdirectories in your repo, each with their own Dockerfile, then you'll need to edit the /data/apps/<customer name>/build file, duplicating the docker build, tag, push, and rmi instructions. The docker build command should simply reference the relative path in your git repo in which the Dockerfile is located, example:

docker build --no-cache -t $APP_NAME:$short_hash some_subdirectory/

You'll then need to set a different variable in place of APP_NAME for each section you duplicate. Finally, update docker-compose.yml to include a section for each service, along with a unique "image" name.

d. If your apps are located in separate git repositories, you can duplicate /data/apps/<customer name> to a new directory, editing build and docker-compose.yml with the appropriate values, and then set up a separate git remote (see above).

If you need assistance/guidance for any of these steps, please create a help desk ticket.

Scheduled Jobs
The build server includes a special agent that can be used to schedule time-based jobs. This is the equivalent of "cron jobs" in traditional Linux environments. Run the following command to create a new Docker service:

docker service create -d --name some_scheduled_job \
--label swarm.cronjob.enable="true" \
--label swarm.cronjob.schedule="0 0 * * * *" \
--label swarm.cronjob.skip-running="true" \
--network platform \
--constraint 'node.labels.type == builder' \
--restart-condition none \
<docker image name> <command to execute>

The "schedule" setting uses cron notation, see this page for reference. The "skip-running" setting prevents long running jobs from stacking up; you can set this to "false" if necessary. Notice that the "constraint" flag is set to the "builder" node, meaning that this job will always execute on the build server, which means it won't compete for resources on your app nodes. Removing this constraint (or setting it to "!=") will have the opposite effect. The Docker image name can be set to a public Docker Hub image or your recently built app image. To identify the name, look at the contents of the /data/apps/<customer name>/build file. In most cases, it will resemble this format: <build server>/<customer name>/<app name>. By not including a :<tag> at the end, the latest version will always be used for this service. After this service is deployed, it will run initially and then the next run will occur based on the schedule setting.

To view the log of the service:

docker service logs some_scheduled_job

To view the status and summary of all scheduled jobs:

docker service logs platform_swarm_cronjob

Traefik Advanced Configurations
The Traefik reverse proxy uses Docker service labels and a key-value store to manage the real-time behavior of your deployed services and ingress traffic. You can review the list of supported labels on this page. These can be appended to your docker-compose.yml. For convenience, here are direct links to Traefik documentation pages on specific topics that might be of interest to you.
  • Custom Error Pages
  • Health Checks
  • Maximum Connections
  • Request Timeouts
  • Buffering (File Uploads)
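
For example, to cap concurrent connections to a service, you could append labels like these to your service's labels (label names follow the Traefik 1.x conventions used elsewhere in this guide - verify them against the Traefik version in your environment):

- "traefik.backend.maxconn.amount=20"
- "traefik.backend.maxconn.extractorfunc=client.ip"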

Setting Resource Limits
You can limit the amount of CPU and memory used by your deployed services by updating the docker-compose.yml. See this page for details.
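
As a sketch (the values are illustrative):

  app:
    deploy:
      resources:
        limits:
          cpus: "0.50"
          memory: 512M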

Expanding/Upsizing Your Cluster
For larger deployments, you might need larger app nodes or additional nodes. Please create a help desk ticket with your requested changes. We'll help you plan the best option.

Persistent Storage
Most customers prefer to use Amazon S3 for storing uploaded files, images, PDFs, etc. because of its lower cost compared to server storage. See this topic for details. In addition, Healthcare Blocks can provide network storage that is attached to each app node, with fast, low-latency syncing. Either way, please create a help desk ticket specifying your desired storage solution.

FAQ: When is persistent storage useful?
Containers have an ephemeral filesystem, meaning that any files created dynamically at runtime will not be persisted between service restarts. Therefore, you will need a persistent storage solution if your application generates dynamic content that needs to be stored on disk. Also, backing services like Redis typically persist their data to the filesystem. Please create a help desk ticket for assistance.
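
For example, if network storage were mounted on each app node at a host path, you could map it into a Redis service in docker-compose.yml (the host path here is hypothetical):

  redis:
    volumes:
      - /mnt/network-storage/redis:/data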

FAQ: Where is the Healthcare Blocks SSL CA file?
A copy of this file is automatically injected into your running container and is available at /etc/certs/hcb_ca.pem. You will need to reference this file when connecting to a managed Healthcare Blocks MySQL database service.

FAQ: How do I use Docker secrets in Swarm mode?
For background information, see this page and Docker Compose secrets reference.
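
As a minimal sketch (the secret name db_password is hypothetical), first create the secret on a Swarm manager node:

echo "s3cret-value" | docker secret create db_password -

...then reference it in docker-compose.yml; the value becomes available inside the container at /run/secrets/db_password:

  app:
    secrets:
      - db_password

secrets:
  db_password:
    external: true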

FAQ: Do I have to use the git deployment workflow?
Certainly not. The git interface is a convenience that wraps the various Docker commands scripted in the "build" file. But you can adapt the process to your preferred workflow. Some practical examples:

  • You can scp files to your build server and execute a custom build script that runs the docker build, tag, push, and stack deploy commands.
  • You can build locally, tag and push to the build server's registry, and run the docker stack commands there.
  • You can build Docker images in a continuous integration/deployment (CI/CD) service, tag and push them to the build server's registry, and run the docker stack deploy command in the CI/CD service. This will require deploying your CI/CD's SSH key to your build server and establishing an SSH tunnel to the Docker daemon socket. Contact us for more details.