Running WordPress on-premise is great for control and performance, but if your home lab or self-hosted hardware goes down, your website shouldn’t go offline with it. In this guide, I’ll show you how I built a fully automated failover setup where traffic seamlessly moves from an on-prem server to AWS — and back — with no downtime.
The best part?
It uses tools you’re probably already familiar with:
- Coolify (on-prem & AWS)
- Route53 DNS failover
- A custom health check endpoint
- A lightweight PHP status script
- Zero manual intervention once deployed
Here’s how it works.
🚀 Architecture Overview
For this example, we’ll use a fake domain:
examplefailover.com
And two WordPress servers:
| Environment | Location | IP Address |
|---|---|---|
| Primary | On-Prem | 10.10.10.10 |
| Failover | AWS EC2 | 203.0.113.25 |
DNS is hosted in Route53, and SSL certificates are issued using the Route53 DNS-01 method.
Here’s the traffic flow:
- Route53 checks the on-prem server’s health using a dedicated endpoint.
- If healthy → traffic goes to on-prem.
- If unhealthy → traffic automatically fails over to AWS.
- When restored → traffic automatically returns to on-prem.
You get real-world high availability without running load balancers or Kubernetes.
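Because the failover is purely a DNS decision, you can always see which environment is "live" at any moment by resolving the domain yourself. A quick check, using the example IPs above:

dig +short examplefailover.com
# 10.10.10.10  -> Route53 is serving the on-prem primary
# 203.0.113.25 -> the health check has failed and traffic is on the AWS secondary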
🔧 Step 1: Configure SSL on Both Coolify Instances
Both Coolify deployments (on-prem & AWS) need valid SSL certificates for:
examplefailover.com
www.examplefailover.com
Inside each Coolify instance:
- Open Settings → Domains & SSL → ACME DNS Providers
- Add your Route53 IAM credentials
- Add domains to your WordPress app:
examplefailover.com
www.examplefailover.com
- Click Enable SSL
Using DNS-01 validation means both servers can generate certificates no matter which one DNS currently points at.
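The IAM credentials you give Coolify only need enough Route53 access to answer DNS-01 challenges. The exact actions depend on the ACME client behind Coolify's proxy, but a policy along these lines is a reasonable starting point (the policy name and hosted zone ID below are placeholders, not real resources):

cat > route53-dns01-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["route53:ListHostedZones", "route53:ListHostedZonesByName", "route53:GetChange"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["route53:ChangeResourceRecordSets", "route53:ListResourceRecordSets"],
      "Resource": "arn:aws:route53:::hostedzone/Z0123456789EXAMPLE"
    }
  ]
}
EOF
# Create the policy, then attach it to the IAM user whose access keys you paste into Coolify.
aws iam create-policy --policy-name coolify-dns01 --policy-document file://route53-dns01-policy.json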
🔧 Step 2: Create a Reliable Health Check Endpoint
Using your homepage for health checks is risky. A plugin error, maintenance mode during an update, or a slow page load can trip the check even though the server itself is fine, and because both environments run identical WordPress deployments, an application-level bug would usually break the failover copy too.
The fix is to build a dedicated health check endpoint that bypasses WordPress entirely:
https://examplefailover.com/healthcheck/index.php
Create the directory inside WordPress’s volume:
mkdir -p /var/lib/docker/volumes/<YOUR_VOLUME_NAME>/_data/healthcheck
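If you're not sure what the volume is called, Docker can list it for you (assuming the volume name contains "wordpress"):

docker volume ls | grep -i wordpress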
Add a smart PHP health script:
/healthcheck/index.php
<?php
// Lightweight health check that never touches WordPress or the database.
header('Content-Type: application/json');

// SERVER_TYPE is set per environment in Coolify (see below).
$server_type = getenv('SERVER_TYPE') ?: 'unknown';

$response = [
    "status"   => "healthy",
    "server"   => $server_type,
    "hostname" => gethostname(),
    "ip"       => $_SERVER['SERVER_ADDR'] ?? 'unknown',
    "time"     => date('Y-m-d H:i:s'),
];

echo json_encode($response);
Tag each environment in Coolify:
On-Prem:
SERVER_TYPE=on-prem
AWS:
SERVER_TYPE=aws-failover
Now loading the endpoint returns clear JSON:
{
  "status": "healthy",
  "server": "on-prem",
  "hostname": "coolify-primary",
  "ip": "10.10.10.10",
  "time": "2025-12-07 14:33:12"
}
This helps you debug and verify which server is responding.
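You can also test each backend directly, bypassing DNS, by pinning the domain to a specific IP with curl. Because both servers hold valid DNS-01 certificates for the domain, TLS verification succeeds either way:

# Hit the on-prem server directly
curl -s --resolve examplefailover.com:443:10.10.10.10 https://examplefailover.com/healthcheck/index.php
# Hit the AWS failover server directly
curl -s --resolve examplefailover.com:443:203.0.113.25 https://examplefailover.com/healthcheck/index.php

The "server" field in each response confirms you reached the environment you expected.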
🔧 Step 3: Create a Route53 Health Check
Go to:
Route53 → Health checks → Create health check
Use:
- Specify endpoint by: IP address → 10.10.10.10
- Host name: examplefailover.com
- Protocol: HTTPS
- Path: /healthcheck/index.php
- Port: 443
- Request interval: 30 seconds
- Failure threshold: 3
- Optional string matching: healthy
Point the check at the on-prem IP rather than at the domain name. If the health check resolved examplefailover.com itself, it would start testing whichever server DNS currently points to, report the AWS server as healthy after a failover, and flip traffic straight back to the broken primary.
If the endpoint fails, Route53 marks the server as UNHEALTHY.
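If you'd rather script this than click through the console, the same health check can be created with the AWS CLI. A sketch, assuming AWS CLI v2 and the example IP and domain above:

aws route53 create-health-check \
  --caller-reference "examplefailover-onprem-$(date +%s)" \
  --health-check-config '{
    "Type": "HTTPS_STR_MATCH",
    "IPAddress": "10.10.10.10",
    "FullyQualifiedDomainName": "examplefailover.com",
    "ResourcePath": "/healthcheck/index.php",
    "Port": 443,
    "RequestInterval": 30,
    "FailureThreshold": 3,
    "SearchString": "healthy",
    "EnableSNI": true
  }'

Note the HealthCheckId in the response; you'll attach it to the primary DNS record in the next step.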
🔧 Step 4: Set Up DNS Failover in Route53
You will create two A-records for each domain, a primary and a secondary.
Root domain (examplefailover.com)
Primary (on-prem)
- Type: A
- Value: 10.10.10.10
- Routing policy: Failover → Primary
- Health check: Use the one created above
Secondary (AWS)
- Type: A
- Value: 203.0.113.25
- Routing policy: Failover → Secondary
- Health check: None
Repeat the same for www.examplefailover.com.
This ensures both the root domain and the www subdomain fail over correctly.
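The same records can be created from the CLI. In this sketch, ZONE_ID and HC_ID are placeholders for your hosted zone ID and the health check ID from Step 3; the 60-second TTL is a deliberate choice, since a low TTL shortens how long clients keep serving a cached IP after a failover:

ZONE_ID="Z0123456789EXAMPLE"                    # placeholder hosted zone ID
HC_ID="11111111-2222-3333-4444-555555555555"    # placeholder health check ID
aws route53 change-resource-record-sets --hosted-zone-id "$ZONE_ID" --change-batch '{
  "Changes": [
    { "Action": "UPSERT", "ResourceRecordSet": {
        "Name": "examplefailover.com", "Type": "A",
        "SetIdentifier": "onprem-primary", "Failover": "PRIMARY",
        "TTL": 60, "ResourceRecords": [{ "Value": "10.10.10.10" }],
        "HealthCheckId": "'"$HC_ID"'" } },
    { "Action": "UPSERT", "ResourceRecordSet": {
        "Name": "examplefailover.com", "Type": "A",
        "SetIdentifier": "aws-secondary", "Failover": "SECONDARY",
        "TTL": 60, "ResourceRecords": [{ "Value": "203.0.113.25" }] } }
  ]
}'

Run the same change batch again with www.examplefailover.com to create the www record pair.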
✔️ Failover Behavior Explained
Normal Operation
Healthcheck OK → Route53 routes traffic to on-prem (10.10.10.10)
On-Prem Fails
Healthcheck FAIL → Route53 routes traffic to AWS (203.0.113.25)
On-Prem Recovers
Healthcheck returns OK → Route53 routes traffic back to on-prem
In practice the switch takes a few minutes at most: three failed checks at 30-second intervals, plus whatever DNS TTL clients still have cached. Most visitors never notice a thing.
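A simple way to watch the switch happen during a test is a small polling loop: stop the on-prem WordPress container, then run something like this from another machine:

while true; do
  echo "--- $(date '+%H:%M:%S')"
  dig +short examplefailover.com
  curl -s https://examplefailover.com/healthcheck/index.php | grep -o '"server":"[^"]*"'
  sleep 15
done

With the 30-second interval, a failure threshold of 3, and a low TTL, you should see the answer flip to 203.0.113.25 and "server":"aws-failover" within a few minutes, and flip back once the primary returns.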
💡 Why This Setup Works So Well
- Zero cloud load balancers required
- No need for highly available networking gear
- Coolify deploys identical apps in both environments
- DNS-01 SSL validation avoids certificate conflicts
- Dedicated health endpoint avoids WordPress false positives
- Route53’s global health check network ensures accuracy
- Failover is fast and automatic
This approach gives you “cloud-level high availability” with simple, inexpensive tools.
🎉 Conclusion
Pairing Coolify with Route53 failover lets you build a robust, self-healing WordPress environment without complex infrastructure. Whether you’re self-hosting for fun or running a real production site, combining:
- on-prem hardware
- AWS failover
- automated SSL
- a dedicated health check
- and smart DNS logic
allows your site to stay online under almost any circumstance.