Pomerium is designed to be run in two modes: All-In-One or Split Service. These modes are not mutually exclusive, meaning you can run one or multiple instances of Pomerium in all-in-one mode, and spin up additional instances for specific components as needed.
Each instance of Pomerium runs in all-in-one mode unless configured to run as a specific component via the
services key. See All-In-One vs Split Service mode for more details.
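For illustration, a per-instance configuration selecting a single component might look like the following sketch (this assumes the services key accepts individual component names, with all-in-one behavior as the default):

```yaml
# config.yaml for an instance dedicated to proxying.
# Other instances would set authorize, authenticate, or databroker instead.
services: proxy
```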
It may be desirable to run in "all-in-one" mode in smaller deployments or while testing. This reduces the resource footprint and simplifies DNS configuration. All-in-one instances may also be scaled for better performance. All URLs point at the same Pomerium service instance.
In larger footprints, it is recommended to run Pomerium as a collection of discrete service clusters. This limits blast radius in the event of vulnerabilities and allows for per-service scaling and monitoring.
Please also see Architecture for information on component interactions.
In split service mode, you have the opportunity to scale the components of Pomerium independently.
All of Pomerium's components are designed to be stateless, and may all be scaled horizontally or vertically. In general, horizontal scaling is recommended. Vertical scaling will lead to diminished returns after ~8 vCPUs.
The Databroker service, which is responsible for session and identity related data, must be configured for external persistence to be fully stateless.
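As a sketch, external persistence for the Databroker might be configured along these lines (this assumes a PostgreSQL backend; the connection string is a placeholder, not a working credential):

```yaml
# Back the Databroker with external storage so replicas share state.
# Hostname, database name, and credentials below are placeholders.
databroker_storage_type: postgres
databroker_storage_connection_string: postgresql://pomerium:password@db.internal:5432/pomerium
```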
Pomerium's individual components can be divided into two categories: the data plane and the control plane. Regardless of which mode you run Pomerium in, we strongly recommend multiple instances of each service for fault tolerance.
The Proxy service, as the name implies, is responsible for proxying all user traffic, in addition to performing authorization checks against the Authorize service. The Proxy is directly in the path of user traffic.
The Proxy service's resources should scale with request count, and average request size may also need to be accounted for. The heavier your user traffic, the more resources the Proxy service should have provisioned.
The Authorize service is responsible for policy checks during requests. It is in the hot path for user requests but does not directly handle user traffic.
Authorize will need resources scaled in conjunction with request count. Request size and type should be of a constant complexity. In most environments, Authorize and Proxy will scale linearly with request volume (user traffic).
Note that the compute cost of each request is about two times (2x) greater for the Authorize service compared to Proxy; if Proxy utilizes 5% of CPU resources, Authorize would likely use 10%.
The Authenticate service handles session cookie setup, session storage, and authentication with your Identity Provider.
Authenticate requires significantly fewer resources than other components because it only needs to establish new sessions occasionally. This happens when users first sign in, and when their authentication expires (determined by your IdP).
Add resources to the Authenticate service if you have a high session/user churn rate. The requests should be constant time and complexity, but may vary by Identity Provider implementation. Resources for the Authenticate service should scale roughly with your total user count.
Regardless of the low resource utilization, we recommend running no less than 2 instances for resiliency and fault tolerance.
The Databroker service is responsible for background identity data retrieval and storage. It is in the hot path for user authentication. However, it does not directly handle user traffic and is not in-path for authorization decisions.
The Databroker service does not require significant resources, as it provides streaming updates of state changes to the other services. There will be utilization spikes when Authorize services are restarted and perform an initial synchronization.
Databroker resource requirements scale with the number of replicated services in the data plane. That is to say, additional instances of the Proxy and Authorize services will increase demand on Databroker. Additionally, the size of the user directory contributes to the resource requirements for data storage.
In many deployments, 2 replicas of Databroker are enough to provide resilient service.
In a production configuration, Databroker CPU/IO utilization also translates to IO load on the underlying storage system. Ensure it is scaled accordingly!
In any production deployment, running multiple replicas of each Pomerium service is strongly recommended. Each service has slightly different concerns about utilizing the replicas for high availability and scaling, enumerated below.
You should deploy Layer 4 load balancing between end users and Pomerium Proxy services to provide high availability and horizontal scaling. Do not use L7 load balancers, since the Proxy service handles redirects, sticky sessions, etc.
Note that deployments on Kubernetes can utilize The Pomerium Ingress Controller to simplify configuration.
The suggested practice is to use the Pomerium Proxy service to load-balance Authenticate. Alternately, you could use an independent Layer 4 or Layer 7 load balancer, but this increases complexity.
Authorize and Databroker
You do not need to provide a load balancer in front of the Authorize and Databroker services. Both use gRPC, which has special requirements if you choose to use an external load balancer. gRPC can perform client-based load balancing, which is the best architecture for most configurations.
By default, Pomerium gRPC clients will automatically connect to all IPs returned by a DNS query for the name of an upstream service. They will then regularly re-query DNS for changes to the Authorize or Databroker service cluster. Health checks and failover are automatic.
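As an illustration, pointing the Proxy at DNS names for the internal services is enough to enable this client-side load balancing (the hostnames and port below are placeholders; each name should resolve to all instances of the corresponding service):

```yaml
# Pomerium's gRPC clients connect to every IP returned for these names
# and re-resolve periodically, so no external load balancer is needed.
authorize_service_url: https://authorize.pomerium.internal:5443
databroker_service_url: https://databroker.pomerium.internal:5443
```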
As mentioned in Scaling, Pomerium components themselves are stateless and support horizontal scale-out for both availability and performance reasons.
A given service type does not require communication with its peer instances to provide high availability. E.g., a Proxy service instance does not communicate with other Proxy instances.
Regardless of the service mode, it is recommended you run at least 2 instances of Pomerium with as much physical and logical separation as possible. For example, in Cloud environments, you should deploy instances of each service to at least 2 different zones. On-prem environments should deploy >=2 instances to independent hardware.
Ensure that you have enough spare capacity to handle the scope of your failure domains.
Multiple replicas of the Databroker or all-in-one service are only supported with external storage configured.
The following setup demonstrates a minimal configuration for split-service mode, using Docker Compose on a local host.
This guide intentionally omits provisioning certificates for internal Pomerium interaction for simplicity.
If you haven't already, install mkcert following these GitHub instructions.
Create a trusted root CA and confirm the presence and names of your local CA files:
$ mkcert -install
The local CA is already installed in the system trust store! 👍
The local CA is already installed in the Firefox and/or Chrome/Chromium trust store! 👍
$ ls "$(mkcert -CAROOT)"
The output of
mkcert -install may vary depending on your operating system.
Generate a wildcard certificate and key for Pomerium to use:

$ mkcert '*.localhost.pomerium.io'

You should get two files: _wildcard.localhost.pomerium.io.pem and _wildcard.localhost.pomerium.io-key.pem. localhost.pomerium.io is a special domain that always resolves to 127.0.0.1.
Copy the certificate authority cert:
cp "$(mkcert -CAROOT)"/rootCA.pem .
Create a minimal Pomerium configuration file, filling in your identity provider parameters:
idp_provider: ***FILL IN***
idp_client_id: ***FILL IN***
idp_client_secret: ***FILL IN***
- from: https://httpbin.localhost.pomerium.io
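For reference, a fuller config.yaml might look like the following sketch. The shared and cookie secrets, certificate paths, upstream address, and the example.com domain in the policy are placeholders, not working values; generate the secrets yourself:

```yaml
# Minimal split-service config sketch; ***FILL IN*** values and
# placeholder hostnames are assumptions, not working credentials.
authenticate_service_url: https://authenticate.localhost.pomerium.io

idp_provider: ***FILL IN***
idp_client_id: ***FILL IN***
idp_client_secret: ***FILL IN***

# Generated, e.g., with: head -c32 /dev/urandom | base64
shared_secret: ***FILL IN***
cookie_secret: ***FILL IN***

certificate_file: /pomerium/_wildcard.localhost.pomerium.io.pem
certificate_key_file: /pomerium/_wildcard.localhost.pomerium.io-key.pem

routes:
  - from: https://httpbin.localhost.pomerium.io
    to: http://httpbin:80
    policy:
      - allow:
          or:
            - domain:
                is: example.com
```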
Create docker compose configuration
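A docker-compose.yml for this setup might be sketched as follows. The image tag, container names, and the httpbin upstream are assumptions for a local demo; each Pomerium container runs a single component by setting the services key via the SERVICES environment variable, and all containers share the same config file and certificates:

```yaml
# docker-compose.yml sketch for split-service mode on a local host.
x-pomerium: &pomerium
  image: pomerium/pomerium:latest
  volumes:
    - ./config.yaml:/pomerium/config.yaml:ro
    - ./_wildcard.localhost.pomerium.io.pem:/pomerium/_wildcard.localhost.pomerium.io.pem:ro
    - ./_wildcard.localhost.pomerium.io-key.pem:/pomerium/_wildcard.localhost.pomerium.io-key.pem:ro
    - ./rootCA.pem:/pomerium/rootCA.pem:ro

services:
  proxy:
    <<: *pomerium
    environment:
      - SERVICES=proxy
    ports:
      - "443:443"
  authenticate:
    <<: *pomerium
    environment:
      - SERVICES=authenticate
  authorize:
    <<: *pomerium
    environment:
      - SERVICES=authorize
  databroker:
    <<: *pomerium
    environment:
      - SERVICES=databroker
  httpbin:
    image: kennethreitz/httpbin
```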
Now bring up your deployment and visit the test route at https://httpbin.localhost.pomerium.io:
docker compose up