Multi-Container ML Application using Docker Compose

Container Orchestration & Multi-Service MLOps Deployment

Implemented a multi-container machine learning application where a Flask-based UI service communicates with a Dockerized ML inference service using Docker Compose and service networking. This project demonstrates end-to-end MLOps deployment with container orchestration.

Project Summary

Comprehensive Project Overview

Project Category

Container Orchestration & Multi-Service Application Deployment (Docker Compose)

Industry/Domain

Platform Engineering / Cloud Infrastructure

Domain Focus

MLOps - Model Serving & Multi-Container Application Systems

Key Technologies & Concepts

Core Technologies Used

Docker Compose Keywords

Docker Compose (docker-compose.yml), Multi-Container Application Architecture, Service-Oriented Container Design, ML Inference Service Container, UI / API Gateway Container, Inter-Container Networking, Docker Bridge Network, Service Name-Based Discovery, REST API Service Communication, End-to-End Model Invocation, Stateless Service Containers, Port Mapping & Exposure, Environment Variable Configuration, depends_on (Service Startup Ordering)

Problem & Objective

What problem did this project solve?

Problems Solved

  • Running the ML inference service and client/UI separately caused manual coordination
  • Networking complexity and inconsistent startup made end-to-end invocation unreliable
  • Lack of standardized deployment process for multi-service ML applications

Primary Objectives

  • Orchestrate multiple containerized services using Docker Compose
  • Enable reliable service-to-service communication between ML and UI containers
  • Facilitate end-to-end invocation of machine learning models
  • Create unified application setup for development and deployment

Solution & Architecture

Architectural Overview

Solution Overview

The solution uses Docker Compose to orchestrate two containerized services: a machine learning inference service and a UI/API gateway service. The architecture enables seamless communication between containers through Docker's internal networking.

Docker Compose provides an internal network where each service is reachable by its service name instead of hard-coded IP addresses, allowing reliable container-to-container communication even when containers are restarted or recreated.

This approach decouples the ML inference logic from the UI presentation layer, enabling independent scaling, maintenance, and deployment of each component while maintaining reliable communication.
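The orchestration described above can be sketched as a minimal docker-compose.yml. The service names (ml-model, ui-app) and ports match the configuration listed later in this page; the build directories and the ML_SERVICE_URL environment variable are illustrative assumptions about the project layout.

```yaml
version: "3.8"

services:
  ml-model:
    build: ./ml-service          # directory containing the ML service Dockerfile (assumed layout)
    ports:
      - "9696:9696"              # expose the inference API on the host

  ui-app:
    build: ./ui-service          # directory containing the UI Dockerfile (assumed layout)
    ports:
      - "5000:5000"
    environment:
      - ML_SERVICE_URL=http://ml-model:9696   # service-name-based discovery on the Compose network
    depends_on:
      - ml-model                 # start the ML service before the UI
```

Because both services join the same default bridge network, the UI container reaches the ML container by its service name rather than an IP address, which survives container restarts and recreation.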

Docker Compose Multi-Container Architecture (diagram flow): Source Code & Dockerfiles → Docker Images → Docker Compose Orchestration → Service Containers → End-to-End Model Invocation

Key Components

  • Machine Learning Inference Service: Flask + Gunicorn container with trained model
  • UI / API Gateway Service: Flask container for client interface
  • Dockerfiles: Separate blueprints for each service
  • Docker Compose: Orchestration through docker-compose.yml
  • Docker Engine: Runtime environment
  • Inter-Container Network: Docker bridge network for service communication
  • Docker Hub: Container registry for image distribution

Scalability & Reliability

  • Services designed as stateless containers enabling horizontal scaling under orchestrators
  • Docker Compose ensures reliable service startup and networking
  • Consistent end-to-end invocation during development and testing
  • Lightweight base images and stateless service design optimize container startup

AI/ML & DevOps Implementation

MLOps Focus and Implementation Details

AI/ML Focus

An AI/ML-focused project emphasizing multi-container orchestration of a machine learning inference service and a client application using Docker Compose. The focus is on service integration and deployment readiness rather than model training.

Models & Pipeline

  • Deployed pre-trained machine learning inference service as standalone container
  • Implemented automated service-to-service invocation between UI/API gateway and ML service
  • Breast cancer prediction model using XGBoost algorithm

Containerization & Orchestration Tools

Docker Engine / Docker Desktop, Docker CLI, Docker Compose (multi-container orchestration), Docker Hub (container registry), Flask (API & UI service), Gunicorn (production WSGI server), Postman (API testing & validation)

Skills & Technologies Used

Technical Proficiency Demonstrated

Primary Skills

  • Docker & Containerization - Intermediate: Building, managing, and optimizing Docker containers
  • Docker Compose (Multi-Container Orchestration) - Intermediate: Orchestrating multiple services with dependencies
  • Machine Learning Model Serving - Intermediate: Deploying and serving ML models in production
  • Service-to-Service Networking - Intermediate: Configuring container networking and communication

Secondary Tools / Frameworks

  • Flask (API & UI Service Framework)
  • Gunicorn (Production WSGI Server)
  • Postman (API Testing & Validation)
  • Docker Compose CLI
  • Linux Command Line

Programming Languages

  • Python: Primary language for ML service and UI/API gateway
  • YAML: Docker Compose configuration (docker-compose.yml)
  • Dockerfile: Image build instructions for each service

Cloud & DevOps Tools

Docker Engine, Docker CLI, Docker Compose (docker-compose.yml), Docker Hub (container registry)

Docker Compose Components

Service Architecture Details

ML Model Inference Service

Service Name: ml-model
Port: 9696
Base Image: python:3.10-slim
Framework: Flask + Gunicorn
ML Model: XGBoost (breast cancer classifier)
API Endpoint: /predict (POST)
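The inference service above can be sketched as a small Flask app. This is a minimal, runnable illustration, not the project's exact code: the real service loads a pickled XGBoost model, which is stubbed here (the predict_one helper and model.pkl filename are assumptions) so the sketch stands on its own.

```python
# predict.py -- sketch of the ML inference service (names are illustrative).
from flask import Flask, request, jsonify

app = Flask("ml-model")

def predict_one(features):
    # Stand-in for the real XGBoost model; the actual service would do roughly:
    #   import pickle
    #   with open("model.pkl", "rb") as f:
    #       model = pickle.load(f)
    #   return float(model.predict_proba([vector])[0, 1])
    return 1.0 if sum(features.values()) > 0 else 0.0

@app.route("/predict", methods=["POST"])
def predict():
    # Accept a JSON dict of features, return a JSON prediction
    features = request.get_json()
    return jsonify({"prediction": predict_one(features)})

if __name__ == "__main__":
    # In the container, Gunicorn serves this app instead of the dev server:
    #   gunicorn --bind 0.0.0.0:9696 predict:app
    app.run(host="0.0.0.0", port=9696)
```

Keeping the service stateless (model loaded at startup, no per-request state) is what allows the horizontal scaling mentioned earlier.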

UI / API Gateway Service

Service Name: ui-app
Port: 5000
Base Image: python:3.10-slim
Framework: Flask
Communication: HTTP requests to ml-model
API Endpoint: /predict (proxy)

Service Discovery: the UI container calls the ML service at http://ml-model:9696/predict, leveraging Docker's internal DNS resolution of Compose service names
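A sketch of how the gateway might forward requests, assuming the requests library and an ML_SERVICE_URL environment variable (both illustrative choices, not confirmed project details):

```python
# app.py -- sketch of the UI / API gateway proxying to the ML service.
import os

import requests
from flask import Flask, jsonify, request

app = Flask("ui-app")

# The default URL uses the Compose service name, resolved by Docker's internal DNS.
ML_SERVICE_URL = os.getenv("ML_SERVICE_URL", "http://ml-model:9696")

@app.route("/predict", methods=["POST"])
def proxy_predict():
    # Relay the client's JSON payload to the ML container and pass back its answer.
    resp = requests.post(f"{ML_SERVICE_URL}/predict", json=request.get_json(), timeout=5)
    return jsonify(resp.json()), resp.status_code

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Reading the ML service address from the environment keeps the gateway image unchanged across environments; only the Compose file decides where the ML service lives.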

Architecture to Dockerfile Mapping

  • Base Runtime (FROM python:3.10-slim): Lightweight Python runtime for ML inference
  • Working Directory (WORKDIR /app): Sets the isolated execution directory inside the container
  • Copy Application & Model (COPY . /app): Copies inference code, the model file (.pkl), and configs
  • Dependency Installation (RUN pip install -r requirements.txt): Installs ML and API dependencies
  • Network Exposure (EXPOSE 9696): Exposes the inference API port
  • Application Startup (CMD ["python", "predict.py"]): Launches the Flask-based ML inference service
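Put together, the mapping above corresponds to a Dockerfile along these lines (a sketch; the exact requirements.txt contents are assumed):

```dockerfile
# Dockerfile for the ML inference service, mirroring the mapping above
FROM python:3.10-slim

WORKDIR /app

# Copy inference code, the serialized model (.pkl), and configs into the image
COPY . /app

# Install ML and API dependencies (Flask, Gunicorn, XGBoost, ...)
RUN pip install --no-cache-dir -r requirements.txt

EXPOSE 9696

# Development-style startup as in the table; for production the same image
# would typically run: CMD ["gunicorn", "--bind", "0.0.0.0:9696", "predict:app"]
CMD ["python", "predict.py"]
```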

Docker Compose Commands

Essential Commands for Orchestration

  • docker-compose up --build: Builds images (if necessary) and starts all services defined in docker-compose.yml
  • docker-compose ps: Lists the status of containers managed by Docker Compose
  • docker-compose logs: Displays the combined logs of all services for debugging
  • docker-compose down: Stops and removes all containers and their networks
  • docker-compose build: Builds or rebuilds service images without starting them
  • docker-compose up <service>: Starts or restarts a specific service
  • docker-compose run: Runs a one-off task in a service container without launching the entire stack
  • docker-compose exec: Runs a command in a running container
  • docker-compose restart: Restarts all services defined in docker-compose.yml
  • docker-compose down -v: Stops and removes containers, networks, and volumes

Challenges & Outcomes

Technical Challenges and Resolutions

Key Technical Challenges

  • Coordinating startup and communication between multiple containerized services
  • Configuring reliable inter-container networking for end-to-end ML inference
  • Managing service dependencies and startup ordering
  • Ensuring consistent API communication between containers

Resolution Strategies

  • Used Docker Compose to orchestrate services with defined dependencies and shared network
  • Leveraged service-name-based communication for reliable container-to-container API invocation
  • Implemented depends_on for proper service startup sequencing
  • Used Docker's internal DNS resolution for service discovery
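One caveat worth noting: plain depends_on only orders container startup; it does not wait for the ML API to be ready. A healthcheck-gated variant from the Compose specification closes that gap. This sketch assumes the ML service exposes a lightweight /health route and that curl is available in the image (neither is confirmed by the project):

```yaml
services:
  ml-model:
    build: ./ml-service
    healthcheck:
      # Assumes a simple GET /health endpoint on the ML service
      test: ["CMD-SHELL", "curl -f http://localhost:9696/health || exit 1"]
      interval: 5s
      timeout: 3s
      retries: 5

  ui-app:
    build: ./ui-service
    depends_on:
      ml-model:
        condition: service_healthy   # wait for a passing healthcheck, not just container start
```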

Note: Terminal screenshots intentionally include intermediate build errors and corrections to demonstrate real-world debugging, problem-solving, and iterative development during containerization.

Assets & References

Code, diagrams, study material

GitHub Repository

Source code repository containing the multi-container ML application with Docker Compose configuration.


Study Material Resources

Study Material - Docker Compose MLOps

  • Docker Compose Generic Code Guide: Complete guide to Docker Compose configuration and multi-service orchestration
  • Official Docker Documentation: Comprehensive Docker commands and Docker Compose creation guide
  • ML Model Serving with Docker: Best practices for containerizing and serving machine learning models
  • Docker Compose Specific Configuration: Advanced Docker Compose configurations for MLOps (restricted access)
  • Container Networking Guide: Complete guide to Docker networking and inter-container communication
  • Production MLOps Architecture: Enterprise architecture patterns for scalable ML deployments with containers
  • Flask & Gunicorn Deployment: Production deployment guide for Flask applications with Gunicorn in containers
  • Multi-Container Debugging Guide: Troubleshooting and debugging techniques for Docker Compose applications