Designed to Be Simple in Production
One of the things I have learned across many projects is that deployment complexity kills maintenance. If putting a new version live requires someone to remember a sequence of steps, connect to multiple servers, or have the build toolchain installed in production — things will eventually go wrong.
The RAD-System Docker setup is built around one principle: production servers should be as dumb as possible. They run containers. They do not build code. The source is on GitHub.
1. The Containerised Stack
The entire system is orchestrated by a single `docker-compose.yml`. Every service runs in isolation on a dedicated internal network, `rad_network` — the database is never exposed to the public internet. Only the ports the application needs are opened.
Data persistence is handled via explicit volume mounts:
- `./Volumes/postgres` — database data survives container restarts and rebuilds
- `./Config/backend/.env` — environment variables are injected at runtime, never baked into the image
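A minimal `docker-compose.yml` sketch of this layout might look like the following — the service names, image tags, and port numbers are illustrative, not taken from the project:

```yaml
services:
  backend:
    image: node:20-alpine             # static runtime image; code arrives via the volume
    ports:
      - "3000:3000"                   # the only port opened to the outside
    env_file:
      - ./Config/backend/.env         # runtime configuration, never baked into the image
    volumes:
      - ./Volumes/backend/app:/app
    networks:
      - rad_network

  db:
    image: postgres:16
    volumes:
      - ./Volumes/postgres:/var/lib/postgresql/data
    networks:
      - rad_network                   # reachable by the backend, but no ports published

networks:
  rad_network: {}
```

The key property is that the database service publishes no ports at all: it is reachable only by other containers on `rad_network`.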
2. PostgreSQL: Auto-Initialisation and Custom Configuration
On first start, the PostgreSQL container executes any SQL or shell scripts found in ./Config/postgres/init/, mounted to /docker-entrypoint-initdb.d/. This creates users, databases, and installs extensions automatically — no manual intervention after the first docker compose up.
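As a hypothetical illustration, an init script dropped into `./Config/postgres/init/` might look like this — the file name, user, database, and extension are all invented for the example:

```sql
-- 01-init.sql — illustrative only, not taken from the project.
-- Executed once, on first container start, via /docker-entrypoint-initdb.d/.
CREATE USER rad_app WITH PASSWORD 'change-me';
CREATE DATABASE rad OWNER rad_app;
\connect rad
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
```

Because the entrypoint only runs these scripts when the data directory is empty, they are effectively idempotent across restarts: an existing `./Volumes/postgres` volume skips initialisation entirely.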
Custom `postgresql.conf` and `pg_hba.conf` files override the defaults to match the system's specific performance and security requirements.
3. Hot Reload in Development, Runtime Config in Production
During development, the backend source is mounted directly into the container via ./Volumes/backend/app. Code changes on the host trigger hot-reload inside the container — the development loop is fast without any special tooling.
The frontend is served by Nginx. The `app-config.json` file is mounted at runtime, not compiled into the image. Changing the API URL in production means editing one JSON file and restarting the frontend container — no rebuild.
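To illustrate the runtime-config idea, the mounted file might look something like this — the name `app-config.json` comes from the text, but the fields inside are assumptions for the example:

```json
{
  "apiBaseUrl": "https://api.example.com",
  "environment": "production"
}
```

Under this sketch, changing `apiBaseUrl` and running `docker compose restart frontend` would be the entire reconfiguration procedure; the image itself never changes.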
4. Artifact-Based Deployment: The Strategy That Eliminated My Build Server
This is the part of the system I am most satisfied with operationally. Traditional CI/CD pipelines rebuild Docker images for every release. That requires Node.js, npm, and build tools either on the CI server or inside the image. It is slow, it is complex, and it ties your deployment to your build infrastructure.
RAD-System uses a different strategy: the Docker images are static runtimes. The application code is an artifact — a versioned tarball — injected via volumes.
How a release works:
- Build locally — `build-be.sh` and `build-fe.sh` compile the code and produce a versioned tarball (e.g., `rad-backend-1.0.2.tgz`)
- Upload the artifact to the production server
- Deploy:

```shell
docker compose stop backend
tar -xzf rad-backend-1.0.2.tgz -C ./Volumes/backend/app
docker compose up -d backend
```
What this means in practice:
- No build tools in production — the server runs containers, nothing else
- Instant rollback — extract the previous tarball, restart the container
- Clean images — Docker images change only when the runtime (Node version, Nginx config) changes, not on every code change
- Fast deploys — unpacking a tarball is seconds, not minutes
I have been using this pattern for years. It works reliably, it is easy to automate with a simple shell script if needed, and it is easy to understand when something goes wrong.
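The three deploy commands above can be wrapped in a small script. The sketch below is a hypothetical automation, not the project's actual tooling: the function name, the `DRY_RUN` switch, and the hard-coded `backend` service name are all assumptions made for the example.

```shell
#!/usr/bin/env sh
# deploy-backend.sh — hypothetical sketch of automating the release steps.
# With DRY_RUN=1 the commands are printed instead of executed, which makes
# the script safe to inspect before pointing it at a real server.
set -eu

run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

deploy_backend() {
  tarball="$1"
  run docker compose stop backend                       # stop the running service
  run tar -xzf "$tarball" -C ./Volumes/backend/app      # unpack the artifact into the mounted volume
  run docker compose up -d backend                      # restart on the new code
}
```

Rollback is the same function called with the previous tarball — which is exactly why keeping a few old artifacts on the server is worthwhile.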