DeployFlow
A Modern Self-Hosted Platform-as-a-Service (PaaS) for Containerized Application Deployment and Management
Introduction
DeployFlow is a comprehensive, self-hosted Platform-as-a-Service (PaaS) solution designed to simplify the deployment, monitoring, and management of containerized applications. This project demonstrates the complete lifecycle of cloud application deployment using modern technologies, including Docker, Node.js, Next.js, MongoDB, and Redis.
The project implements a multi-tier architecture consisting of:
● Frontend Dashboard: Built with Next.js 15, TypeScript, and Tailwind CSS for real-time application monitoring.
● Backend API: RESTful API built with Express.js for application management and deployment.
● Container Orchestration: Docker integration for scalable deployments.
● Database Layer: MongoDB for persistent storage and Redis for caching/sessions.
● Git Integration: Custom Git server for repository management and automated deployments.
● DNS & Reverse Proxy: Custom DNS server and Nginx for routing and load balancing.
This comprehensive platform showcases cloud computing concepts including containerization, orchestration, microservices architecture, real-time communication, and automated CI/CD pipelines.
Objectives of Part 1 (DA1)
Primary Objectives:
1. Container Setup and Configuration
○ Set up and configure a MongoDB container for database persistence.
○ Configure a Redis container for caching and session management.
○ Implement a custom Git server container for repository management.
○ Configure a DNS server container for internal service discovery.
2. Backend API Development
○ Build a RESTful API using Express.js and Node.js.
○ Implement JWT-based authentication and authorization.
○ Create database models and ODM integration with MongoDB.
○ Set up Docker socket integration for container management.
3. Network Architecture
○ Create a Docker bridge network (paas-network) for inter-container communication.
○ Configure port mappings for all services.
○ Implement volume mounting for data persistence.
4. Basic Deployment Workflow
○ Establish a Git-based deployment pipeline.
○ Implement webhook receivers for GitHub/GitLab.
○ Create basic application build and deployment scripts.
Objectives of Part 2 (DA2)
Primary Objectives:
1. Frontend Dashboard Development
○ Build a modern, responsive UI with Next.js 15 and TypeScript.
○ Implement real-time WebSocket communication using Socket.IO.
○ Create interactive components for application management.
○ Develop an authentication UI with Google OAuth integration.
2. Real-time Monitoring System
○ Implement live streaming of deployment logs.
○ Create a real-time metrics dashboard (CPU, memory, network usage).
○ Build an activity logging and audit trail system.
○ Develop a notification system for deployment status.
3. Advanced API Features
○ Implement application CRUD operations.
○ Create deployment management endpoints.
○ Build an environment variable management system.
○ Develop user management and role-based access control.
4. Nginx Reverse Proxy Setup
○ Configure Nginx as a reverse proxy for the frontend and backend.
○ Set up SSL/TLS certificates.
○ Implement load balancing for deployed applications.
○ Configure virtual hosts for multi-app routing.
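The role-based access control objective can be sketched as a permission map plus an Express-style middleware guard. The role names, permission strings, and the requirePermission helper below are assumptions for illustration, not the project's actual schema.

```javascript
// Minimal RBAC sketch: map each role to the actions it may perform.
// Role and action names are illustrative assumptions.
const PERMISSIONS = {
  admin: ['app:create', 'app:delete', 'app:deploy', 'user:manage'],
  developer: ['app:create', 'app:deploy'],
  viewer: [],
};

// Pure check: does this user's role allow the action?
function can(user, action) {
  return (PERMISSIONS[user.role] || []).includes(action);
}

// Express-style middleware guard (the req/res shape is assumed):
// rejects the request with 403 unless the role grants the action.
function requirePermission(action) {
  return (req, res, next) => {
    if (can(req.user, action)) return next();
    res.status(403).json({ error: 'Forbidden' });
  };
}
```

Keeping `can` pure makes the policy easy to unit-test independently of Express.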
Objectives of Part 3 (DA3)
Primary Objectives:
1. Complete System Integration
○ Integrate all microservices into a unified platform.
○ Implement an end-to-end deployment pipeline.
○ Configure production-ready docker-compose orchestration.
○ Set up health checks and auto-restart policies.
2. Advanced Container Orchestration
○ Implement dynamic container creation for user applications.
○ Configure container resource limits and quotas.
○ Set up container networking and service discovery.
○ Implement container lifecycle management.
3. Production Deployment
○ Configure production environment variables.
○ Implement security best practices (JWT secrets, encryption).
○ Set up database authentication and authorization.
○ Configure Redis password protection.
4. Testing and Validation
○ Test the complete deployment workflow.
○ Validate real-time communication features.
○ Test authentication and authorization flows.
○ Verify database persistence and data integrity.
5. Documentation and Presentation
○ Create comprehensive system documentation (this document).
○ Prepare architecture diagrams.
○ Document deployment procedures.
○ Create user guides for platform usage.
Name of the Containers Involved and Download Links
This project uses a combination of custom-built images and official third-party images.
Core Application Containers

| Container Service | Base Image | Purpose | Base Image Download Link |
| --- | --- | --- | --- |
| paas-api | node:18-alpine | Backend API (Express.js) | https://hub.docker.com/_/node |
| dashboard | nginx:alpine | Frontend UI (React/Next.js) | https://hub.docker.com/_/nginx |
| git-server | alpine:latest | Git SSH Server | https://hub.docker.com/_/alpine |
| dns-server | alpine:latest | Internal DNS (dnsmasq) | https://hub.docker.com/_/alpine |
Third-Party Containers

| Container Service | Image Used | Purpose | Download Link |
| --- | --- | --- | --- |
| mongo | mongo:6 | Database | https://hub.docker.com/_/mongo |
| redis | redis:7-alpine | Cache & Session Store | https://hub.docker.com/_/redis |
| nginx-proxy | nginx:latest | Reverse Proxy | https://hub.docker.com/_/nginx |
Name of the Other Software Involved Along with the Purpose

| Software/Tool | Purpose/Role in the Project |
| --- | --- |
| Node.js v18+ | JavaScript runtime for the backend (Express.js) and the frontend (Next.js build). |
| Docker Desktop | Used to build, run, and manage Docker containers locally. |
| Docker Compose | Used to define and orchestrate all multi-container services. |
| Git | Version control for application code and to trigger deployments. |
| npm / pnpm | Package managers for installing Node.js dependencies. |
| Express.js | Backend framework for building the RESTful API and WebSocket server. |
| Mongoose | Object Data Modeling (ODM) library for MongoDB. |
| Socket.IO | Library for enabling real-time, bidirectional communication (live logs, metrics). |
| Passport.js | Authentication middleware for Node.js (used for JWT and Google OAuth). |
| Next.js 15 / React 18 | Framework/library for building the interactive frontend dashboard. |
| TypeScript | Used for type safety in both backend and frontend code. |
| Tailwind CSS | Utility-first CSS framework for styling the dashboard. |
| Nginx | Used as a reverse proxy and to serve the static frontend files. |
| dnsmasq | Lightweight DNS server used inside the dns-server container. |
| OpenSSH | Used inside the git-server container to provide secure SSH access. |
| Docker Socket (/var/run/docker.sock) | Mounted into the paas-api container to allow it to manage other containers. |
Overall Architecture of All Three DAs
Architecture Description
The DeployFlow platform follows a modern microservices architecture with clear separation of concerns, orchestrated by Docker Compose. All services communicate over a custom bridge network (paas-network).
Part 1 (DA1) - The Foundation: This phase established the core infrastructure. The mongo and redis containers provide the data and caching layers. The paas-api (backend) container was created to house all business logic, connect to the database, and interface with the Docker socket. The git-server (for receiving code) and dns-server (for service discovery) containers were also built and deployed.
Part 2 (DA2) - The Interface: This phase built the user-facing components. The dashboard container (a Next.js app served by Nginx) was created to provide the web UI; it communicates with the paas-api for data. A real-time link was established using Socket.IO for live logs. The nginx-proxy container was introduced as the main entry point, routing user traffic to either the dashboard (for the UI) or the paas-api (for API requests).
Part 3 (DA3) - The Integration: This phase unified all components into a single, automated system. The docker-compose.yml file defines the complete stack, service dependencies, and persistent volumes. The final workflow is:
1. A user pushes code via Git to the git-server.
2. A post-receive hook in the git-server triggers an endpoint on the paas-api.
3. The paas-api clones the repository, builds a new Docker image from the user's code, and starts a new container for that application.
4. The paas-api updates the dns-server and nginx-proxy to route a subdomain to the new container.
5. The paas-api sends real-time status and log updates via WebSocket to the user's dashboard.
This architecture demonstrates key cloud concepts: containerization for isolation, orchestration for management, microservices for modularity, and an automated CI/CD pipeline for rapid deployment.
Procedure - Part 1 (DA1)
Step 1: Environment Setup
1. Installed Docker Desktop, Node.js, and Git.
2. Created the main project directory, DeployFlow.
Step 2: Configure Environment Variables
1. Created a .env.example file to list all required variables.
2. Copied it to .env and filled in the secrets for Google OAuth, MongoDB, and JWT.
Step 3: Build Custom Docker Images
1. Created a Dockerfile for the git-server (using alpine and adding openssh, git, and docker-cli).
2. Created a Dockerfile for the dns-server (using alpine and adding dnsmasq).
3. Created a Dockerfile for the paas-api (using node:18-alpine and installing dependencies).
4. Ran docker build -t <image_name> . in each directory; the same was done for the dns-server, paas-api, and dashboard images.
Step 4: Create Docker Network
1. Created a custom bridge network for inter-container communication:
docker network create paas-network (Figure 11)
Step 5: Start Database & Infrastructure Services
1. Created the docker-compose.yml file.
2. Added services for mongo, redis, dns-server, and git-server.
3. Specified volumes for data persistence (e.g., mongo-data:/data/db, redis-data:/data).
4. Ran docker-compose up -d mongo redis dns-server git-server.
Step 6: Initialize Database
1. Connected to the running MongoDB container:
docker exec -it mongo mongosh (Figure 14)
2. Created the database and collections:
use paas_dashboard
db.createCollection("users")
db.createCollection("applications")
db.createCollection("deployments")
exit
(Figure 15)
Procedure - Part 2 (DA2)
Step 1: Start Backend API
1. Added the paas-api service to docker-compose.yml, ensuring it is on the paas-network.
2. Mounted the Docker socket: - /var/run/docker.sock:/var/run/docker.sock.
3. Ran docker-compose up -d paas-api.
4. Checked the logs: docker logs -f paas-api.
5. Tested the /health endpoint with curl http://localhost:5000/health.
Step 2: Start Frontend Dashboard
1. Created the Dockerfile for the dashboard (a multi-stage build with node:18-alpine and nginx:alpine).
2. Added the dashboard service to docker-compose.yml.
3. Ran docker-compose up -d dashboard.
4. Opened http://localhost:3000 in the browser to see the login page.
Step 3: Configure Nginx Reverse Proxy
1. Created a custom nginx-config/nginx.conf file.
2. This config sets up upstream blocks for the frontend and backend.
3. It proxies /api and /socket.io to the backend, and / to the frontend.
4. Added the nginx-proxy service to docker-compose.yml, mapping port 80:80.
5. Ran docker-compose up -d nginx-proxy.
6. Accessed http://localhost/ (port 80) and verified that the dashboard loaded.
Step 4: Test Authentication
1. Navigated to http://localhost/register.
2. Registered a new user.
3. Was redirected to the login page.
4. Logged in with the new credentials and was taken to the main dashboard.
Step 5: Test Application Creation
1. On the dashboard, clicked "Create New Application".
2. Filled in the application name (e.g., "Sample-app").
3. Submitted the form.
4. Verified that the new application appeared in the dashboard list.
Step 6: Test Real-time Features
1. Opened the browser's developer console (F12) and went to the "Network" tab.
2. Filtered by "WS" (WebSocket) and confirmed a successful WebSocket connection.
3. Clicked "Deploy" (even though it would not fully work yet) and saw "Deployment Started" logs appear in real time in the UI.
Procedure - Part 3 (DA3)
Step 1: Complete System Integration
1. Stopped all services: docker-compose down.
2. Ensured the docker-compose.yml file was complete with all 7 services (mongo, redis, dns-server, git-server, paas-api, dashboard, nginx-proxy).
3. Ensured all services were on the paas-network and had correct depends_on settings.
4. Started the entire stack: docker-compose up -d. (Figure 22)
Step 2: Configure Git Server Post-Receive Hook
1. Modified the git-server's post-receive script.
2. The script now reads the repo name and user ID, then sends a POST request to the backend: curl -X POST http://paas-api:5000/api/v1/apps/deploy/git/...
3. This hook is what connects the git push to the backend's deployment logic.
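A git post-receive hook receives one "oldrev newrev ref" line per updated ref on standard input. The sketch below shows how the deploy request could be derived from that line; the payload field names and the path segment after /api/v1/apps/deploy/git/ are assumptions, since the exact URL is elided above.

```javascript
// Parse one git post-receive stdin line ("<oldrev> <newrev> <ref>")
// and build the request the hook would POST to the paas-api deploy
// endpoint. Field names and the URL suffix are illustrative assumptions.
function buildDeployRequest(hookLine, repoName, userId) {
  const [oldrev, newrev, ref] = hookLine.trim().split(/\s+/);
  const branch = ref.replace('refs/heads/', '');
  if (branch !== 'main') return null; // only pushes to main trigger a deploy
  return {
    url: `http://paas-api:5000/api/v1/apps/deploy/git/${userId}/${repoName}`,
    body: { repo: repoName, user: userId, commit: newrev, branch },
  };
}
```

Filtering on the branch inside the hook keeps feature-branch pushes from triggering production deployments.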
Step 3: Test Complete End-to-End Deployment Pipeline
1. Created a new sample Node.js app locally.
2. Added the git-server as a remote:
git remote add deployflow ssh://git@localhost:2222/my-test-app.git
3. Pushed the code: git push deployflow main.
4. [Insert Screenshot: Command line output of the successful 'git push']
5. Watched the dashboard UI in the browser.
6. Saw the logs appear in real time: "Deployment started...", "Cloning repository...", "Building image...", "Container started...".
Step 4: Verify Deployed Application
1. Checked the paas-api logs to see the Docker build process.
2. Ran docker ps on the host to see the new container (destify_...) running.
3. Navigated to the application's URL (http://destify.paas.local) provided on the dashboard.
Step 5: Test Persistence, Recovery and Monitoring
1. Stopped and removed the entire stack: docker-compose down.
2. Verified that all application containers were gone.
3. Restarted the stack: docker-compose up -d.
4. Logged back into the dashboard.
5. Verified that all data was intact: the user account, the created application, and its deployment history were all still there. (The app container itself was gone, as expected, but its configuration was persisted in the mongo volume.)
What Modification is Done in the Containers
1. MongoDB Container (mongo:6)
● Base Image: mongo:6 (Official)
● Modifications: No Dockerfile was used; modifications were applied via docker-compose.yml:
○ Environment: Set MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD for authentication.
○ Volumes: Mounted mongo-data:/data/db to persist database files.
○ Networking: Placed on the paas-network to be accessible by the API.
2. Redis Container (redis:7-alpine)
● Base Image: redis:7-alpine (Official)
● Modifications:
○ Volumes: Mounted redis-data:/data to persist session data.
○ Networking: Placed on the paas-network.
3. Nginx Proxy (nginx:latest)
● Base Image: nginx:latest (Official)
● Modifications:
○ Volumes: Mounted a custom nginx-config/nginx.conf file to /etc/nginx/nginx.conf. This file contains the reverse proxy logic, upstream definitions, and location blocks to route traffic to the dashboard and paas-api services.
4. Backend API (paas-api)
● Base Image: node:18-alpine
● Dockerfile Modifications:
```dockerfile
# Stage 1: Builder
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
COPY tsconfig.json ./
RUN npm ci && npm cache clean --force
COPY src/ ./src/
RUN npm run build

# Stage 2: Production
FROM node:18-alpine AS production
WORKDIR /app

# Create non-root user for security
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001

COPY package*.json ./
RUN npm ci --only=production && npm cache clean --force

# Copy built app from builder
COPY --from=builder /app/dist ./dist

# Create and own log/temp directories
RUN mkdir -p logs temp && chown -R nodejs:nodejs logs temp
RUN chown -R nodejs:nodejs /app

# Switch to non-root user
USER nodejs
EXPOSE 5000

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD node -e "require('http').get('http://localhost:5000/health', (res) => { process.exit(res.statusCode === 200 ? 0 : 1) })"

CMD ["node", "dist/server.js"]
```
● Summary of Modifications:
○ Used a multi-stage build for optimization.
○ Ran npm ci --only=production to keep the image slim.
○ Created and switched to a non-root user (nodejs) for security.
○ Added a HEALTHCHECK instruction.
5. Frontend Dashboard (dashboard)
● Base Image: node:18-alpine (for building), nginx:alpine (for serving)
● Dockerfile Modifications:
```dockerfile
# Stage 1: Builder
FROM node:18-alpine AS builder
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: Production
FROM nginx:alpine AS production

# Copy built files from builder
COPY --from=builder /app/dist /usr/share/nginx/html

# Create custom Nginx configuration for a React SPA
RUN echo 'server {' > /etc/nginx/conf.d/default.conf && \
    echo '  listen 3000;' >> /etc/nginx/conf.d/default.conf && \
    echo '  server_name localhost;' >> /etc/nginx/conf.d/default.conf && \
    echo '  location / {' >> /etc/nginx/conf.d/default.conf && \
    echo '    root /usr/share/nginx/html;' >> /etc/nginx/conf.d/default.conf && \
    echo '    index index.html index.htm;' >> /etc/nginx/conf.d/default.conf && \
    echo '    try_files $uri $uri/ /index.html;' >> /etc/nginx/conf.d/default.conf && \
    echo '  }' >> /etc/nginx/conf.d/default.conf && \
    echo '}' >> /etc/nginx/conf.d/default.conf

EXPOSE 3000
CMD ["nginx", "-g", "daemon off;"]
```
● Summary of Modifications:
○ Used a multi-stage build; the final image is a tiny nginx server, not a heavy node image.
○ The nginx.conf is dynamically created inside the Dockerfile to serve the React application, ensuring try_files correctly routes all requests to index.html (for client-side routing).
6. Git Server (git-server)
● Base Image: alpine:latest
● Dockerfile Modifications:
○ Installed openssh, git, bash, curl, and docker-cli.
○ Created a git user and configured sshd for key-based authentication only.
○ Copied a post-receive script to /usr/local/bin to be triggered on git push.
○ This container is modified in docker-compose.yml to mount the Docker socket (/var/run/docker.sock) so the post-receive script can trigger docker operations (via a curl call to the API).
7. DNS Server (dns-server) (Custom)
● Base Image: alpine:latest
● Dockerfile Modifications:
○ Installed dnsmasq.
○ Copied a custom dnsmasq.conf file to /etc/dnsmasq.conf.
○ This config file is set up to resolve *.paas.local to the nginx-proxy container's IP (this mapping is managed by the paas-api).
GitHub Link / Docker Hub Link
GitHub Repository
The complete source code, including all Dockerfiles, Docker Compose files, and application code, is available at:
● Link: https://github.com/dg-giridharen/deployflow
Docker Hub Links
The custom-built images for this project can be published to Docker Hub:
● Dashboard: docker pull dggiridharen/deployflow-dashboard:latest
● Backend API: docker pull dggiridharen/deployflow-api:latest
● Git Server: docker pull dggiridharen/deployflow-git-server:latest
● DNS Server: docker pull dggiridharen/deployflow-dns-server:latest
Outcomes of this DA:
Technical Achievements
1. Fully Functional PaaS: A complete Platform-as-a-Service was built, capable of deploying applications from a simple git push.
2. Real-time Monitoring: The dashboard provides live, real-time deployment logs and container metrics (CPU, memory) using WebSockets.
3. Automated CI/CD Pipeline: An end-to-end, Git-based deployment workflow was created. Pushing code automatically triggers a build and deployment.
4. Scalable Microservices Architecture: The system is composed of 7 independent microservices, all containerized and orchestrated, demonstrating a scalable and resilient design.
5. Security Implementation: Best practices were implemented, including non-root containers, database authentication, and secure API endpoints with JWT.
6. Data Persistence: The platform correctly uses Docker volumes to ensure that all user data, application configurations, and deployment history persist even if containers are stopped or restarted.
Learning Outcomes
1. Container Mastery: Gained deep proficiency in Docker, including writing optimized multi-stage Dockerfiles, managing container lifecycles, and network/volume configuration.
2. Microservices Orchestration: Learned to design, deploy, and manage a complex multi-container application using Docker Compose, handling service dependencies and communication.
3. Full-Stack Development: Acquired hands-on experience in connecting a modern frontend (React/Next.js) to a robust backend (Node.js/Express) with a database (MongoDB).
4. DevOps Practices: Implemented core DevOps principles, including Infrastructure as Code (docker-compose.yml), CI/CD (Git hooks), and system monitoring (live logs).
5. Cloud-Native Concepts: Applied fundamental cloud concepts like containerization, orchestration, service discovery (DNS), and API gateways (Nginx) in a practical project.
Conclusion
The DeployFlow project successfully demonstrates a comprehensive understanding of cloud computing and containerization. Over the course of these three DAs, a simple set of services was built into a fully functional Platform-as-a-Service (PaaS).
● DA1 established the foundation with the core infrastructure (Mongo, Redis, Git, DNS) and a basic API.
● DA2 built the user-facing layer with a modern React dashboard and a reverse proxy, enabling user interaction and real-time monitoring.
● DA3 integrated all components into a cohesive system, culminating in a fully automated git-push-to-deployment pipeline.
This project provided invaluable hands-on experience with Docker, microservices, real-time communication, and DevOps methodologies. The final platform not only meets all assignment requirements but also serves as a strong foundation for understanding and building enterprise-level, cloud-native applications.
References
1. Original Container Sources:
○ MongoDB: https://hub.docker.com/_/mongo (MongoDB, Inc.)
○ Redis: https://hub.docker.com/_/redis (Redis Labs)
○ Nginx: https://hub.docker.com/_/nginx (Nginx, Inc.)
○ Node.js: https://hub.docker.com/_/node (Node.js Foundation)
○ Alpine: https://hub.docker.com/_/alpine (Alpine Linux Team)
2. Educational Resources:
○ IIT Bombay Spoken Tutorial for Docker: https://spoken-tutorial.org/tutorial-search/?search_foss=Docker&search_language=English
○ Docker Official Documentation: https://docs.docker.com/
○ Next.js Documentation: https://nextjs.org/docs
○ Express.js Documentation: https://expressjs.com/
Acknowledgement
I would like to express my sincere gratitude to my professor, Dr. T. Subbulakshmi, for her invaluable guidance, clear instructions, and constant encouragement throughout this project. I am thankful to VIT Chennai, School of Computer Science and Engineering (SCOPE), for providing the curriculum and opportunity to work on this comprehensive cloud computing assignment for the Fall 2025 semester. I also want to acknowledge the foundational knowledge gained from the IIT Bombay Docker Tutorial, which was an excellent resource. Finally, I am grateful to my friends, peers, and family for their support and collaborative discussions, which were essential in overcoming challenges and completing this project.
Giridharen Goguladhevan