Install
Pomerium offers several ways to install the Enterprise Console to suit your organization's needs. Watch the video below for a quick primer on deploying Pomerium Core and Enterprise, or view the sections below for specific installation instructions.
Install Pomerium Enterprise
- Docker
- OS Packages
- Kubernetes with Kustomize
- Kubernetes with Terraform
The Pomerium Enterprise Docker image is available at a private Cloudsmith Docker registry.
To access the Pomerium Enterprise Docker image:
- In your terminal, run the following command:
docker login docker.cloudsmith.io
- Enter your username and password:
% docker login docker.cloudsmith.io
Username: <username>
Password: <password>
- Pull a specific tagged release of the Pomerium Enterprise image:
docker pull docker.cloudsmith.io/pomerium/enterprise/pomerium-console:${vX.X.X}
See the Enterprise Quickstart for instructions to run and deploy the Enterprise Console with Docker Compose.
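Before following the Quickstart, it may help to see the rough shape of a Compose service for the console. The sketch below is illustrative only: it assumes the environment-variable names are the uppercase forms of the configuration keys described later on this page, and that 8701 is the console's HTTP port; consult the Quickstart and Environment Variables reference for the authoritative values.

```yaml
services:
  pomerium-console:
    # use the tagged release pulled above
    image: docker.cloudsmith.io/pomerium/enterprise/pomerium-console:${vX.X.X}
    environment:
      # assumed uppercase forms of the config/secret keys shown later
      AUDIENCE: console.domain.com
      ADMINISTRATORS: me@domain.com
      DATABASE_URL: postgres://user:password@db/pomerium-enterprise
      SHARED_SECRET: 'base64-encoded key shared with Pomerium Core'
      LICENSE_KEY: 'your license key'
    ports:
      - 8701:8701 # assumed default console HTTP port
```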
You can find the latest rpm and deb packages on Cloudsmith or download them from the GitHub releases page.
| Supported Operating Systems | Supported Architectures |
|---|---|
| linux | amd64 |
| darwin | arm64 |
DEB installation
To automatically configure the repository for Debian and Ubuntu distributions:
- Replace [access-key] in the command below and run it:
curl -1sLf \
'https://dl.cloudsmith.io/[access-key]/pomerium/enterprise/setup.deb.sh' \
| sudo -E bash
To manually configure the repository, import the apt-key and create a new .list file in /etc/apt/sources.list.d:
curl -1sLf 'https://dl.cloudsmith.io/[access-key]/pomerium/enterprise/gpg.B1D0324399CB9BC3.key' | apt-key add -
echo "deb https://dl.cloudsmith.io/[access-key]/pomerium/enterprise/deb/debian buster main" | sudo tee /etc/apt/sources.list.d/pomerium-console.list
- Update apt and install Pomerium Enterprise:
sudo apt update; sudo apt install pomerium-console
After you've installed the package, enable and start the system service:
sudo systemctl enable --now pomerium-console
RPM installation
To automatically configure the repository for RHEL based distributions:
- Replace [access-key] in the command below and run it:
curl -1sLf \
'https://dl.cloudsmith.io/[access-key]/pomerium/enterprise/setup.rpm.sh' \
| sudo -E bash
To manually configure the repository, run:
yum install yum-utils pygpgme
rpm --import 'https://dl.cloudsmith.io/[access-key]/pomerium/enterprise/gpg.B1D0324399CB9BC3.key'
curl -1sLf 'https://dl.cloudsmith.io/[access-key]/pomerium/enterprise/config.rpm.txt?distro=el&codename=8' > /tmp/pomerium-enterprise.repo
yum-config-manager --add-repo '/tmp/pomerium-enterprise.repo'
yum -q makecache -y --disablerepo='*' --enablerepo='pomerium-enterprise'
- Update yum and install Pomerium Enterprise:
yum -y install pomerium-console
After you've installed the package, enable and start the system service:
sudo systemctl enable --now pomerium-console
These steps cover installing Pomerium Enterprise into your existing Kubernetes cluster. It's designed to work with an existing cluster running Pomerium, as described in Pomerium Kustomize. Follow that document before continuing here.
These steps assume that Pomerium is installed in the pomerium namespace and that Enterprise will be installed in the pomerium-enterprise namespace.
Prepare Core
The following command exposes the Pomerium Core Databroker gRPC interface:
kubectl apply -k github.com/pomerium/documentation/k8s/core\?ref=0-31-0
Deploy Enterprise Console
kubectl apply -k github.com/pomerium/documentation/k8s/console\?ref=0-31-0
The Enterprise Console needs to be configured before it becomes fully operational.
Create Cloudsmith Docker Registry Secret
kubectl create secret docker-registry pomerium-enterprise-docker \
--namespace pomerium-enterprise \
--docker-server=docker.cloudsmith.io \
--docker-username=pomerium/enterprise \
--docker-password="your password provided by Pomerium Sales"
Configure Enterprise Console
Create a config directory, and fill in the configuration parameters for the following template files:
resources:
- config.yaml
- ingress-console.yaml
- secret.yaml
namespace: pomerium-enterprise
See Environment Variables for a description of the Config and Secret keys.
apiVersion: v1
kind: ConfigMap
metadata:
name: enterprise
data:
# should match authenticate service URL from Pomerium Settings CRD
authenticate_service_url: https://authenticate.domain.com/
# audience should correspond to the name in the ingress you created for the console
# without the protocol part
audience: console.domain.com
  # administrators is a comma-separated list of emails that will be granted admin privileges
# only use it for bootstrapping, and grant explicit permissions via the UI to the Global namespace
administrators: me@domain.com
# databroker service URL allows Console to communicate to Pomerium Core
databroker_service_url: https://pomerium-databroker.pomerium.svc.cluster.local
# external Prometheus service URL, to enable metrics.
# see https://www.pomerium.com#metrics
# prometheus_url: ""
apiVersion: v1
kind: Secret
metadata:
name: enterprise
type: Opaque
stringData:
database_url: postgres://user:password@host/database
database_encryption_key: ''
license_key: ''
  # shared_secret must match the base64-encoded key from the Pomerium Core secret - i.e.
# kubectl get secret bootstrap -n pomerium -o jsonpath="{.data.shared_secret}"
shared_secret: ''
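Both database_encryption_key and shared_secret are base64-encoded keys. A fresh encryption key can be generated like so (the 32-byte size is an assumption consistent with Pomerium's other base64 secrets; shared_secret must instead be copied from the Core secret as shown in the comment above):

```shell
# Generate a random 256-bit key and base64-encode it on a single line
head -c32 /dev/urandom | base64 | tr -d '\n'
```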
Create an Ingress for the Console:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: pomerium-console
annotations:
cert-manager.io/cluster-issuer: your-cluster-issuer
external-dns.alpha.kubernetes.io/hostname: 'console.domain.com'
# console requires user identity headers
ingress.pomerium.io/pass_identity_headers: 'true'
# console has internal access control. alternatively, use PPL
ingress.pomerium.io/allow_any_authenticated_user: 'true'
    # since v0.21.0, the console uses TLS by default
ingress.pomerium.io/secure_upstream: 'true'
spec:
ingressClassName: pomerium
tls:
- secretName: console-domain-com
hosts:
- console.domain.com
rules:
- host: 'console.domain.com'
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: pomerium-console
port:
name: app
kubectl apply -k ./config
You can deploy Pomerium Enterprise to Kubernetes using Terraform modules from the pomerium/install repository. This approach uses two modules:
- Ingress Controller — deploys Pomerium Core as a Kubernetes ingress controller
- Enterprise Console — deploys the Pomerium Enterprise Console, connecting it to the ingress controller
Keep installation and configuration in separate Terraform runs. The Pomerium Terraform provider must connect to a running Enterprise Console, so if Pomerium is misconfigured or an essential component (such as the console database) is down, all dependent configuration resources will fail and the entire terraform plan / terraform apply will be blocked.
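One way to honor this separation (a layout suggestion, not a Pomerium requirement) is to keep two root modules with independent state, so a broken console never blocks the install run:

```shell
# Hypothetical layout: one root module installs Pomerium, another configures it.
# Each directory holds its own Terraform state.
mkdir -p pomerium/install pomerium/config
# terraform -chdir=pomerium/install apply  # modules from the pomerium/install repo
# terraform -chdir=pomerium/config apply   # Pomerium provider configuration resources
```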
Prerequisites
- A Kubernetes cluster with a configured Terraform Kubernetes provider
- Terraform >= 1.0
- A PostgreSQL database for the Enterprise Console
- A Pomerium Enterprise license key and Cloudsmith registry credentials
Required Providers
terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 2.0"
}
# Required by the ingress-controller module
tls = {
source = "hashicorp/tls"
version = "~> 4.0"
}
random = {
source = "hashicorp/random"
version = "~> 3.0"
}
kubectl = {
source = "gavinbunney/kubectl"
version = ">= 1.7.0"
}
}
}
Deploy the Ingress Controller
The ingress controller module is an all-in-one deployment of Pomerium Core and the ingress controller. It creates the following Kubernetes resources: Namespace, CRDs, RBAC (ClusterRole, ClusterRoleBinding, ServiceAccounts), IngressClass, Deployment, Services, and Secrets.
module "pomerium_ingress_controller" {
source = "git::https://github.com/pomerium/install.git//ingress-controller/terraform?ref=main"
namespace_name = "pomerium"
enable_databroker = true
config = {
# Reference TLS certificates available in the namespace
certificates = ["pomerium/wildcard-tls"]
# See https://www.pomerium.com/docs/k8s/reference for all options
}
}
See the ingress-controller module variables for a full list of configuration options, including identity provider settings, resource limits, and proxy service type.
Deploy the Enterprise Console
The enterprise console module deploys the Pomerium Enterprise Console and connects it to the ingress controller using shared secrets.
module "pomerium_enterprise_console" {
source = "git::https://github.com/pomerium/install.git//enterprise/terraform/kubernetes?ref=main"
# Secrets from the ingress controller module
shared_secret_b64 = module.pomerium_ingress_controller.shared_secret_b64
signing_key_b64 = module.pomerium_ingress_controller.signing_key_b64
# Namespace where Pomerium Core is deployed
core_namespace_name = "pomerium"
# Enterprise image registry credentials
image_registry_password = var.cloudsmith_password
# Enterprise license
license_key = var.pomerium_license_key
# PostgreSQL database connection string
database_url = "postgres://console:${var.db_password}@db-host:5432/pomerium-enterprise?sslmode=require"
# Console web UI and API ingress configuration
console_ingress = {
dns = "console.example.com"
annotations = {}
}
console_api_ingress = {
dns = "console-api.example.com"
annotations = {}
}
# Initial administrator emails for console bootstrap
administrators = ["admin@example.com"]
# Sidecar containers (e.g. cloud-sql-proxy for GCP Cloud SQL)
sidecars = []
depends_on = [module.pomerium_ingress_controller]
}
See the enterprise console module variables for a full list of configuration options, including resource limits, clustered databroker settings, and observability options.
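The module examples above reference several input variables. A minimal definition might look like the following (the names match those used above; marking them sensitive keeps the values out of plan output):

```hcl
variable "cloudsmith_password" {
  description = "Cloudsmith registry password provided by Pomerium Sales"
  type        = string
  sensitive   = true
}

variable "pomerium_license_key" {
  description = "Pomerium Enterprise license key"
  type        = string
  sensitive   = true
}

variable "db_password" {
  description = "PostgreSQL password for the console database user"
  type        = string
  sensitive   = true
}
```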
DNS Records
After deployment, point your console DNS records to the ingress controller's proxy service load balancer IP:
data "kubernetes_service" "pomerium_proxy" {
metadata {
name = "pomerium-proxy"
namespace = "pomerium"
}
depends_on = [module.pomerium_ingress_controller]
}
# Create DNS records for the console and API endpoints pointing to the
# load balancer IP:
# - console.example.com -> data.kubernetes_service.pomerium_proxy.status[0].load_balancer[0].ingress[0].ip
# - console-api.example.com -> data.kubernetes_service.pomerium_proxy.status[0].load_balancer[0].ingress[0].ip
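If your DNS zone is also managed by Terraform, the records can be created in the same run. A sketch using AWS Route 53 (the provider choice and the var.dns_zone_id variable are assumptions for illustration; adapt to your DNS provider):

```hcl
resource "aws_route53_record" "console" {
  for_each = toset(["console.example.com", "console-api.example.com"])

  zone_id = var.dns_zone_id # hypothetical variable holding your hosted zone ID
  name    = each.value
  type    = "A"
  ttl     = 300
  records = [data.kubernetes_service.pomerium_proxy.status[0].load_balancer[0].ingress[0].ip]
}
```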