Lightweight, stand-alone, and secure, containers provide a reliable runtime environment that will work consistently from host to host. Looking to get started building and deploying your own containerized apps? In this article, we’ll explore how to create a Docker image so that you can build and deploy your own containerized apps.
What Is Docker?
Docker is a platform that allows you to build, test, and deploy containerized applications. It provides operating-system-level virtualization by packaging applications and their dependencies into lightweight, portable software bundles called containers. Let’s take a closer look at some of the features Docker has to offer.
What Is a Docker Image?
Docker images are read-only templates Docker can use to create and run containers. The image contains the installations, application code, and dependencies needed to configure a container environment.
How Docker Images Fit into Today’s Workflows
Docker images aren’t just a way to package a simple app — they’re the backbone of how modern teams build, test, and scale software:
- Microservices: Each service (like payments, authentication, or search) runs from its own image. That makes services consistent, portable, and easy to scale independently.
- CI/CD pipelines: Images are the artifacts that move through the pipeline. Code changes trigger an automated build → image → test → deploy cycle, ensuring the same version that passed testing is the one running in production.
- DevOps workflows: Teams treat Dockerfiles as code and rely on images as immutable units of deployment. This makes automation, rollbacks, monitoring, and integration with orchestration platforms (like Kubernetes) much simpler.
In short, Docker images act as the glue between developers, operations, and production environments — ensuring software runs the same way everywhere, from a laptop to the cloud.
What Is a Dockerfile?
A Dockerfile is a text document containing the instructions needed to build a container image. It captures all the commands you would otherwise type into your terminal to assemble the image, so the build process can be automated. One advantage of storing a Dockerfile rather than the image itself is that an automated build can always produce an image with the latest versions of your dependencies.
When writing Dockerfiles, it’s important to follow best practices for efficiency and security. For example, use multi-stage builds to keep final images lightweight, only copy in what’s necessary, and avoid embedding secrets or credentials. You can also optimize by pinning dependencies, setting non-root users, and minimizing the number of layers in your image. These steps help reduce attack surface, speed up builds, and make images more portable in production.
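As a rough sketch, a Dockerfile applying several of these practices (a multi-stage build, a pinned slim base image, and a non-root user) might look like the following. The file names and the ‘appuser’ account are placeholders for your own project:

# Build stage: install dependencies into an isolated prefix
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

# Final stage: copy only what the app needs at runtime
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY app.py .
# Run as a non-root user to reduce the attack surface
RUN useradd --create-home appuser
USER appuser
CMD ["python", "app.py"]

Because the final stage starts from a fresh base image, build tools and caches from the first stage never reach production.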
What Is Docker Hub?
Docker Hub is a repository of container images from the Docker team and the larger community of software developers, vendors, and open source projects. You can push and pull container images, automate builds, and integrate with other code repositories like GitHub and Bitbucket. Docker Hub is the place to go to retrieve Docker images you can use as a base to start your own projects.
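For example, pulling a base image from Docker Hub and publishing one of your own looks like this (‘myusername’ is a placeholder for your own Docker Hub account):

```shell
# Pull an official base image from Docker Hub
docker pull ubuntu:latest

# Log in, tag a local image under your account, and push it
docker login
docker tag myapp:latest myusername/myapp:latest
docker push myusername/myapp:latest
```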
Creating a Docker Image with a Dockerfile
While you could run each of these commands by hand to assemble an image, it’s much easier in practice to automate the process for future runs by transcribing them into step-by-step instructions within a Dockerfile. Here’s how:
Install Docker
The first step is to get Docker set up on your machine. Navigate to the Docker documentation and install Docker Engine for your preferred operating system. For the purposes of this tutorial, we’ll be using Docker Desktop on Windows.
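Once installed, you can confirm that the Docker CLI and daemon are working from the terminal:

```shell
# Print the installed Docker version
docker --version

# Show detailed information about the installation (requires the daemon to be running)
docker info
```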
Create a Dockerfile
Creating a Dockerfile is as simple as creating a text file in your text editor with all the commands you would call in the command line to assemble an image. You can name this file whatever you want, but we’ll be using the name “Dockerfile” for simplicity. You can use the syntax from the Docker documentation to specify these build instructions.
Create a Dockerfile in the ‘/app’ directory of your project folder. In order for this tutorial to work, we’ll also create a simple Flask app in an ‘app.py’ file within the same directory:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def my_app():
    return 'This is a Flask App'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
Example of a Dockerfile
# Creating a Dockerfile for Python 3
# Use an existing base image from Docker Hub
FROM ubuntu:latest

# Set the working directory inside the container
WORKDIR /app

# Copy the application files from the host to the container
COPY . .

# Install any required dependencies
RUN apt-get update && apt-get install -y python3 python3-pip

# Install Flask using pip
RUN pip3 install Flask

# Expose a port on the container
EXPOSE 5000

# Specify the command to run when the container starts
# (app.py already binds to 0.0.0.0 in its app.run() call)
CMD ["python3", "app.py"]
In this example, we start with an Ubuntu base image pulled from Docker Hub. We set the working directory to ‘/app’ inside the container. Then, we copy the application files from the host machine to the container’s ‘/app’ directory.
Next, we use the ‘RUN’ instruction to update the package manager and install Python 3 and Flask inside the container. This ensures that the necessary dependencies are installed.
Finally, we use the ‘CMD’ instruction to specify the command that should be executed when the container starts. In this case, it runs the ‘app.py’ Python script using the Python 3 interpreter.
This Dockerfile can be used to build a Docker image, which is a template for creating containers. When the image is built and a container is created from it, the container will have the specified dependencies and will run the specified command when started.
Persisting Data with Volumes
In real-world applications, you’ll often need to persist data beyond the lifecycle of a container—for example, database files. You can attach a host directory as a volume like this:
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=example -v /my/local/data:/var/lib/postgresql/data postgres:latest
This ensures your database data remains intact even if the container stops or is removed.
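Named volumes, which Docker manages for you, are generally preferred over bind-mounting host paths. As a quick sketch, you can create and inspect one from the CLI (‘mydata’ is a placeholder name):

```shell
# Create a named volume managed by Docker
docker volume create mydata

# List all volumes, then inspect where this volume's data lives on the host
docker volume ls
docker volume inspect mydata
```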
Using Multi-Stage Builds
Multi-stage builds are a best practice for creating smaller, more efficient images. For instance, you can compile your code in one stage and copy only the binaries to the final stage:
# Build stage: compile the application
FROM golang:1.22 AS builder
WORKDIR /app
COPY . .
# Build a statically linked binary so it runs on the minimal Alpine base
RUN CGO_ENABLED=0 go build -o myapp

# Final stage: copy only the compiled binary
FROM alpine:latest
COPY --from=builder /app/myapp .
CMD ["./myapp"]
This approach dramatically reduces image size and improves security.
Building a Docker Image
With your Dockerfile in hand, you can build the Docker image using the ‘docker build’ command, providing a name and tag for the image with the ‘-t’ flag (e.g., ‘myapp:latest’).
In the terminal type:
docker build -t myapp:latest .
Don’t forget the ‘.’ at the end. This specifies the build context: in this case, the directory containing the Dockerfile.
Congratulations, you now have a Docker image!
You can verify that an image has been created by clicking the Images tab in Docker Desktop:
Each image can be identified by a name, a tag, and an image ID.
Note that you may also type “docker image ls” or “docker images” (with no arguments) into the terminal to list all images.
Creating a Container from a Docker Image
Now that you have a Docker image, it’s time to create and run a container off of that image.
Type the following command into the terminal:
docker run -p 5000:5000 --name mycontainer myapp:latest
The ‘--name’ flag tells Docker to create and run a container named ‘mycontainer’ based on the image ‘myapp:latest’. In our example, you now have an Ubuntu environment with Python 3 that runs the ‘app.py’ file specified in the Dockerfile. You can view your newly created active container in Docker Desktop.
If you navigate to http://localhost:5000 in the browser, you’ll see your app print the text “This is a Flask App.”
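You can also test the endpoint from the terminal with curl:

```shell
# Request the root route of the running container;
# the app defined earlier responds with: This is a Flask App
curl http://localhost:5000
```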
Managing Multi-Container Apps with Docker Compose
Most modern apps aren’t just one container—they might include a web server, a database, and a cache service. Docker Compose allows you to define and run these together with a simple YAML file:
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  db:
    image: postgres:latest
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db_data:/var/lib/postgresql/data
volumes:
  db_data:
Running docker-compose up will spin up both services, networked automatically. This makes local development of microservices much easier.
Creating a Docker Image from a Running Container
While you can save changes from a running container with docker commit, this approach isn’t recommended for production use since it’s not reproducible or version-controlled. Instead, always codify your changes in a Dockerfile or CI/CD pipeline to ensure consistency across environments.
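For completeness, the commit workflow looks like this, reusing the container and image names from earlier as placeholders:

```shell
# Snapshot the current state of a running container as a new image
docker commit mycontainer myapp:patched

# The new image appears alongside the original
docker image ls myapp
```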
Scaling Your Container Usage
As applications grow beyond a single container, orchestration becomes essential. Tools like Docker Compose make it easy to manage small multi-service apps on a developer’s machine, while Kubernetes provides the automation, scaling, and resilience needed in production. Understanding this bridge—from local Docker builds to orchestrated deployments—helps teams prepare for real-world, enterprise-level workloads.
Working with Data and Persistent Storage
Containers are stateless by design—when a container stops, its internal filesystem disappears. For applications like databases or file services, this is a critical concern. Docker supports volumes and bind mounts to persist and manage data. For example:
docker run -d -v mydata:/var/lib/mysql mysql:latest
This command ensures your MySQL database files are stored outside the container, so they survive restarts.
In production, persistent storage is often integrated with enterprise storage platforms to ensure durability, backups, and data availability across container clusters.
Troubleshooting and Debugging Containers
It’s common to hit errors when first building or running images. A few best practices can save time:
- Check logs: Run docker logs <container_name> to view container output and error messages.
- Inspect running containers: Use docker ps to see which containers are active and their ports.
- Debug interactively: Use docker exec -it <container_name> /bin/bash to get inside the container for hands-on debugging.
- Rebuild incrementally: Make small changes to your Dockerfile and rebuild, rather than editing containers directly.
These steps mirror how containers are managed in production, where quick root-cause analysis is key to keeping services reliable.
Conclusion
In this article, we walked through building and running a simple container from a Docker image. Once you’re comfortable with these basics, the next step is to explore multi-container workflows, CI/CD automation, and orchestration platforms like Kubernetes for scaling in production.
You’ll also want to adopt best practices for debugging (using tools like docker logs and docker exec), security (scanning images for vulnerabilities, running as non-root users), and storage (integrating containers with persistent volumes and enterprise data services).
Containers are foundational to modern DevOps and cloud-native architectures. By going beyond single-container use cases, you’ll be better prepared to design, test, and deploy resilient, production-ready applications.