Deploying an AI agent step by step — from local development to a running production service. No abstractions, no “it depends,” just the actual commands and decisions.
Step 1: Get a Server
You need a Linux server with at least 2GB RAM (4GB recommended), a public IP address, and SSH access. A $10-20/month VPS from any major provider works fine. I use Ubuntu 22.04 LTS because it’s stable and well-documented.
After provisioning: update the system (apt update && apt upgrade), create a non-root user, set up SSH key authentication, and disable password login. This takes 10 minutes and is the bare minimum for security.
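As a concrete sketch (the username, and the exact sshd_config edits, are assumptions; adapt to your provider):

```shell
# Run as root on the fresh server. "agent" is an example username.
apt update && apt upgrade -y
adduser agent && usermod -aG sudo agent

# From your LOCAL machine: copy your public key to the new user.
# ssh-copy-id agent@your-server-ip

# Back on the server: disable password and root login over SSH.
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
systemctl restart ssh
```

Verify you can still log in as the new user with your key before closing the root session.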
Step 2: Install Dependencies
Node.js (LTS version), npm, and Git. That’s the minimum for OpenClaw. If you’re using a database, install that too. If you’re using Docker, install Docker.
I install Node via nvm (Node Version Manager) because it makes version management easy. When OpenClaw requires a new Node version, nvm install and nvm use handle it without affecting anything else on the server.
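Roughly (the installer URL pins a specific nvm release; check the nvm repo for the current one):

```shell
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.nvm/nvm.sh
nvm install --lts   # install the current Node LTS
nvm use --lts
node --version && npm --version   # sanity check
```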
Step 3: Clone and Configure
Clone your repository (or install OpenClaw from npm). Create the configuration file with your API keys, model preferences, and integration settings.
The configuration file is the most important artifact in your deployment. Everything the agent does depends on it. I keep a template config in the repo with placeholder values and a separate .env file (not in the repo) with the actual secrets.
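For example (the variable names are illustrative; use whatever your agent actually reads):

```
# .env.example -- lives in the repo with placeholder values.
# The real .env holds the secrets and is listed in .gitignore.
API_KEY=replace-me
MODEL=replace-me
WEBHOOK_SECRET=replace-me
```

And make sure the real file never gets committed: `echo ".env" >> .gitignore`.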
Step 4: Test Locally on the Server
Before setting up process management or reverse proxies, run the agent manually: node index.js (or whatever the start command is). Send it a test message. Verify it responds correctly. This catches environment-specific issues (missing dependencies, wrong paths, inaccessible APIs) before you add layers of complexity.
If it doesn’t work manually, it won’t work behind PM2 or Docker either. Debug here, not later.
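A minimal smoke test, assuming the agent listens on localhost:3000 and exposes a health endpoint (both are assumptions; use whatever your agent actually provides):

```shell
node index.js &        # or your actual start command
AGENT_PID=$!
sleep 3
curl -fsS http://localhost:3000/health && echo "agent is up"
kill $AGENT_PID
```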
Step 5: Process Management
Install PM2 (npm install -g pm2) and set up the agent as a managed process: pm2 start ecosystem.config.js.
PM2 gives you: automatic restart on crash, log management with rotation, startup script generation (so the agent starts on boot), and monitoring commands to check status.
Run pm2 startup to generate the system startup script. Run pm2 save to save the current process list. Now the agent survives reboots.
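A minimal ecosystem.config.js sketch (the app name and entry point are assumptions; adjust them to your project):

```js
// ecosystem.config.js -- one managed app, restarted automatically on crash.
module.exports = {
  apps: [{
    name: "openclaw-agent",
    script: "index.js",
    max_memory_restart: "1G",        // restart if memory climbs past 1 GB
    env: { NODE_ENV: "production" },
  }],
};
```

With this in place, `pm2 start ecosystem.config.js` starts the agent under the name openclaw-agent, and `pm2 restart openclaw-agent` restarts it by name.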
Step 6: Reverse Proxy (If Needed)
If the agent needs to receive webhooks from external services, set up a reverse proxy (Caddy or nginx) that handles SSL termination and routes traffic to the agent.
Caddy is simpler: a 4-line Caddyfile gets you HTTPS with automatic certificate renewal. Nginx is more configurable but leaves certificates to you (Certbot automates issuance and renewal).
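For example, a Caddyfile along these lines (the domain and port are placeholders):

```
agent.example.com {
    reverse_proxy localhost:3000
}
```

Caddy obtains and renews the certificate for agent.example.com on its own, then forwards traffic to the agent's local port.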
If the agent only makes outbound connections (messaging platforms, API calls) and doesn’t receive inbound traffic, skip this step.
Step 7: Monitoring
At minimum: set up log monitoring and a basic health check.
Logs: PM2 handles log rotation. Set up a way to view logs remotely (I use pm2 logs via SSH or a simple log viewer).
Health check: A cron job that checks whether the process is running and pings you if it’s not. About five lines of bash, run every 5 minutes.
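A minimal sketch, assuming the agent's process can be found by the pattern "openclaw" (match it to your actual start command), with the alert hook left as a stub:

```shell
#!/usr/bin/env bash
# check_agent.sh -- minimal health check, meant to run from cron every 5 minutes.

check() {
  # Print OK if any process command line matches the pattern, DOWN otherwise.
  if pgrep -f "$1" > /dev/null 2>&1; then
    echo "OK"
  else
    echo "DOWN"
  fi
}

if [ "$(check "openclaw")" = "DOWN" ]; then
  # Hypothetical alert hook: swap in mail, a webhook, whatever pings you.
  echo "ALERT: agent process not found on $(hostname)" >&2
fi
```

Schedule it with a crontab entry such as `*/5 * * * * /home/agent/check_agent.sh` (path is an example).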
For more sophisticated monitoring (metrics, dashboards, alerts), see my Grafana article. But the basic health check covers 90% of what you need.
Step 8: Backup
Set up daily backups of your configuration and data files. See my backup strategy article for the full approach. At minimum: copy your config and data to a second location daily.
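As a sketch (the paths and the backup host are assumptions):

```
# crontab entry: daily 03:00 copy of config and data to a second machine.
0 3 * * * rsync -a ~/openclaw/.env ~/openclaw/data/ backup@backup-host:/backups/openclaw/
```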
Step 9: Documentation
Write down: the server IP, how to SSH in, where the config lives, how to restart the service, how to view logs, and how to roll back. Future-you will thank present-you.
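Something as short as this is enough (all values below are placeholders):

```
RUNBOOK
Server:   203.0.113.10 (Ubuntu 22.04)
SSH:      ssh agent@203.0.113.10
Config:   ~/openclaw/.env
Restart:  pm2 restart openclaw-agent
Logs:     pm2 logs openclaw-agent --lines 200
Rollback: git -C ~/openclaw checkout <previous-tag> && pm2 restart openclaw-agent
```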
The Whole Process Takes About 2 Hours
Steps 1-4: 45 minutes. Steps 5-6: 30 minutes. Steps 7-9: 45 minutes.
After these 2 hours, you have a production-ready AI agent running on a reliable server with process management, monitoring, backups, and documentation. Not over-engineered. Not under-engineered. Just right for a starting point that you can improve incrementally.
🕒 Originally published: December 31, 2025