Run a Node Behind a Proxy
Running a publicly accessible node exposes your infrastructure to the open internet. The proxy configurations below are starting points, not complete security solutions. Do this at your own risk. You are responsible for securing and maintaining your own infrastructure.
If you plan to run a Stacks node with publicly accessible RPC endpoints, it is strongly recommended that, at a minimum, you place the node behind a reverse proxy with rate limiting. Without rate limiting, a public node can be overwhelmed by excessive requests, leading to degraded performance or denial of service.
This guide provides minimal, production-tested configurations for two popular reverse proxies. Choose one — you do not need both:
Nginx — simpler configuration, widely used, good baseline rate limiting.
HAProxy — more advanced abuse detection via stick tables, HTTP proxying with automatic IP blocking.
Ports overview
A Stacks node deployment typically exposes the following services:
| Service | Port | Protocol | Behind proxy? |
| --- | --- | --- | --- |
| Stacks RPC | 20443 | HTTP | Yes |
| Stacks P2P | 20444 | TCP | No |
| Stacks API | 3999 | HTTP | Yes, if running |
| Bitcoin RPC | 8332 | HTTP | Yes, if exposed |
| Bitcoin P2P | 8333 | TCP | No |
The P2P ports (20444, 8333) use custom binary protocols for peer-to-peer communication, not HTTP. You can leave them open directly to the network. The proxy configurations below focus on the RPC/API ports which serve HTTP traffic and are the primary target for abuse.
Optional: P2P ports can also benefit from rate-limiting. While unlikely, a denial-of-service attack could flood the P2P port so the node only communicates with malicious peers. Adding connection-rate limits on P2P ports won't hurt and provides an extra layer of protection.
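As one illustration of such a connection-rate limit, the following `iptables` rules cap new inbound connections to the Stacks P2P port at 10 per minute per source IP (the rule names and thresholds are illustrative, not part of any official configuration):

```shell
# Record each new connection attempt to the Stacks P2P port
sudo iptables -A INPUT -p tcp --dport 20444 -m state --state NEW \
  -m recent --set --name stacks_p2p

# Drop a source IP's new connections if it opened more than 10 in the last 60s
sudo iptables -A INPUT -p tcp --dport 20444 -m state --state NEW \
  -m recent --update --seconds 60 --hitcount 10 --name stacks_p2p -j DROP
```

Established P2P connections are unaffected; only the rate of new connections is limited.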
Configure the Stacks node
Before setting up the proxy, configure your Stacks node so its RPC endpoint is not directly reachable from the public internet (e.g. in the stacks-node configuration, `rpc_bind = "127.0.0.1:30443"`). The proxy will be the only public-facing service.
Since the proxy needs to listen on the standard public ports (e.g. 20443), the node itself must bind to different ports to avoid conflicts. The examples below use offset ports (30443, 33999) for the node's RPC and API, while the proxy owns the public-facing ports (20443, 3999). P2P stays on its standard port and is not proxied.
Bare metal
In your node's configuration file (e.g. Stacks.toml), bind the RPC to a localhost address on an offset port:
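A minimal sketch of the relevant settings in `Stacks.toml` (the `rpc_bind` and `p2p_bind` keys live under the node's `[node]` section; adapt to your existing file):

```toml
[node]
# RPC bound to localhost on an offset port -- only the proxy can reach it
rpc_bind = "127.0.0.1:30443"
# P2P stays on the standard public port and is not proxied
p2p_bind = "0.0.0.0:20444"
```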
The proxy will listen on port 20443 and forward RPC traffic to the offset port. P2P binds directly on the standard port 20444 and does not go through the proxy.
Docker (stacks-blockchain-docker)
When running with stacks-blockchain-docker, the node's ports are controlled by the Docker Compose configuration. By default, ports are exposed on all interfaces (0.0.0.0). To restrict the RPC and API to localhost (so only the proxy can reach them), edit compose-files/common.yaml and change the port mappings. P2P is published directly on the standard port:
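A sketch of the edited port mappings in `compose-files/common.yaml` (the service names below are illustrative and may differ in your checkout; keep the container-side ports as they are):

```yaml
services:
  stacks-blockchain:
    ports:
      # RPC: localhost-only, offset host port (proxy forwards 20443 -> 30443)
      - "127.0.0.1:30443:20443"
      # P2P: published directly on the standard public port
      - "20444:20444"
  stacks-blockchain-api:
    ports:
      # API: localhost-only, offset host port (proxy forwards 3999 -> 33999)
      - "127.0.0.1:33999:3999"
```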
The format is host_ip:host_port:container_port. The node inside the container keeps its default ports — only the host side changes. Offset host ports (30443, 33999) are necessary because the proxy already occupies the standard ports (20443, 3999) on the host. Binding to 127.0.0.1 ensures the container ports are only reachable from the host (where the proxy runs), not from the public internet. P2P is published directly on the standard port 20444.
Inter-container communication (e.g. the API receiving events from the blockchain node) uses Docker's internal network and service names, not published host ports. These port mapping changes do not affect container-to-container traffic.
Nginx
Nginx can serve as a reverse proxy with rate limiting using the limit_req module. The configuration below rate-limits the Stacks RPC and Stacks API endpoints.
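A minimal configuration along these lines (e.g. saved as `/etc/nginx/sites-available/stacks`; zone names and sizes are illustrative):

```nginx
# Per-IP rate-limit zones; 10 MB of shared memory tracks roughly 160k IPs
limit_req_zone $binary_remote_addr zone=stacks_rpc:10m rate=5r/s;
limit_req_zone $binary_remote_addr zone=stacks_api:10m rate=10r/s;

server {
    listen 20443;
    location / {
        limit_req zone=stacks_rpc burst=20 nodelay;
        proxy_pass http://127.0.0.1:30443;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 3999;
    location / {
        limit_req zone=stacks_api burst=40 nodelay;
        proxy_pass http://127.0.0.1:33999;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```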
Rate limit parameters explained:
- `rate=5r/s` — allows a sustained average of 5 requests per second per client IP. Requests beyond this rate are delayed or rejected.
- `burst=20` — permits up to 20 requests to queue above the base rate before Nginx starts rejecting. This absorbs short traffic spikes without immediately dropping legitimate requests.
- `nodelay` — queued burst requests are forwarded immediately rather than being spaced out over time. Without `nodelay`, excess requests would be throttled to match the base rate.
The Stacks API zone uses a higher rate (10r/s) and larger burst (40) because API endpoints typically see more traffic than the node RPC.
Enable the site and restart Nginx:
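Assuming the site file was saved as `/etc/nginx/sites-available/stacks` (adjust the path to match your layout):

```shell
sudo ln -s /etc/nginx/sites-available/stacks /etc/nginx/sites-enabled/stacks
sudo nginx -t                  # validate the configuration before reloading
sudo systemctl restart nginx
```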
Verify
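A quick check from the host, using the node's `/v2/info` RPC endpoint through the proxy:

```shell
# Should return node info through the proxy on the public port
curl -s http://localhost:20443/v2/info

# Rapid-fire requests beyond the burst should start being rejected
# (Nginx returns 503 for limit_req rejections by default)
for i in $(seq 1 50); do
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:20443/v2/info
done
```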
HAProxy
HAProxy provides fine-grained connection tracking and abuse detection via stick tables. The configuration below proxies Stacks RPC and API traffic over HTTP, automatically rejecting clients that exceed request rate thresholds.
Adjust maxconn, rate thresholds (ge 25), stick-table sizes, and expiry times to suit your traffic patterns. The values below are conservative defaults.
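A sketch of such a configuration (e.g. `/etc/haproxy/haproxy.cfg`; frontend/backend names, the API threshold of 50, and timeouts are illustrative):

```haproxy
global
    maxconn 4096

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

# Shared stick table: per-IP request rate over 10s, plus an abuse flag (gpc0)
backend abuse_table
    stick-table type ip size 100k expire 30m store http_req_rate(10s),gpc0

frontend stacks_rpc
    bind *:20443
    # Track each client IP in the abuse table
    http-request track-sc0 src table abuse_table
    # Flag clients exceeding 25 requests per 10 seconds
    http-request sc-inc-gpc0(0) if { sc_http_req_rate(0) ge 25 }
    # Deny flagged clients with HTTP 429 until the table entry expires
    http-request deny deny_status 429 if { sc_get_gpc0(0) gt 0 }
    default_backend stacks_rpc_node

backend stacks_rpc_node
    server node 127.0.0.1:30443

frontend stacks_api
    bind *:3999
    http-request track-sc0 src table abuse_table
    http-request sc-inc-gpc0(0) if { sc_http_req_rate(0) ge 50 }
    http-request deny deny_status 429 if { sc_get_gpc0(0) gt 0 }
    default_backend stacks_api_node

backend stacks_api_node
    server api 127.0.0.1:33999
```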
Verify
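Validate and test the configuration (the stats socket path is an assumption; it depends on how your distribution configures HAProxy's runtime API):

```shell
# Check the configuration file for errors, then restart
haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl restart haproxy

# Requests should reach the node through the proxy
curl -s http://localhost:20443/v2/info

# Inspect the stick table (requires the runtime stats socket to be enabled)
echo "show table abuse_table" | sudo socat stdio /run/haproxy/admin.sock
```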
How the abuse table works: HAProxy tracks each client IP's HTTP request rate. When a client exceeds the threshold (e.g. 25 HTTP requests in 10 seconds), its gpc0 counter is incremented and all subsequent requests from that IP are denied with HTTP 429. The stick-table entry expires after 30 minutes, lifting the block automatically.
Firewall considerations
Additionally, a host-level firewall adds defense in depth: only the proxy's listening ports and the P2P ports should be reachable from the public internet, while the node's RPC stays accessible only via the proxy (localhost). How you configure this depends on your environment — cloud providers, bare-metal hosts, and container setups all handle firewalling differently.
Refer to your provider's or operating system's firewall documentation for specifics:
AWS — Security Groups
GCP — VPC Firewall Rules
Azure — Network Security Groups
Digital Ocean — Cloud Firewalls
Docker — Docker manipulates `iptables` directly and can bypass host firewall rules. See the Docker packet filtering docs for how to enforce restrictions.
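For example, on a Linux host using `ufw`, the policy above might look like the following (rules are illustrative; adapt the ports to your deployment and keep your SSH access open before enabling):

```shell
sudo ufw default deny incoming
sudo ufw allow 20443/tcp   # Stacks RPC (served by the proxy)
sudo ufw allow 3999/tcp    # Stacks API (served by the proxy)
sudo ufw allow 20444/tcp   # Stacks P2P (direct)
sudo ufw allow 8333/tcp    # Bitcoin P2P (direct)
sudo ufw allow ssh
sudo ufw enable
```

The node's offset RPC ports (30443, 33999) are bound to localhost and need no firewall rule.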