Lightup Self-hosted (existing Kubernetes cluster)

Details on the Lightup Self-hosted deployment model.

You can deploy Lightup Self-hosted to an existing self-managed or AWS EKS-managed Kubernetes cluster, using the following procedure.

Contact Lightup Support to begin

To get started, contact [email protected].

You will receive a Lightup token by email.

Before deployment, you'll need to provide Lightup with additional information:

  • The email address of the initial admin user, if it differs from the address making the request. Provide this before deployment.
  • The URL or IP address you'll use to access the cluster. You can supply this later, but you won't be able to access the cluster until Lightup has it.

There are also a number of prerequisite steps.


  1. Outbound connectivity— Your instance must always have access to the following internet services for Lightup to function properly. If you cannot reach any of these services, modify your firewall rules.

     Service: Replicated
     Lightup application software is packaged and licensed using Replicated. The application bundle (Kubernetes binaries, Docker containers, license file) is pulled from Replicated during the installation sequence and subsequent upgrades. Domains to whitelist:
       - * (enables upstream Docker images; the on-prem Docker client uses a license ID to authenticate; this domain is owned by Replicated, Inc., headquartered in Los Angeles, CA)
       - (source of Replicated images for releases)
       - * (source of Replicated images for releases)
       - * (source of Replicated images for releases)
       - (source of Kubernetes cluster installation scripts and artifacts; an application identifier is sent in a URL path, and bash scripts and binary executables are served from this domain, owned by Replicated, Inc., headquartered in Los Angeles, CA)
       - (source of tar.gz packages, which are downloaded from Amazon S3; the IP ranges to whitelist can be scraped dynamically from the AWS IP Address Ranges documentation)

     Service: Datadog
     Lightup uses Datadog for container logging, metric monitoring, and Kubernetes pod health monitoring. Domain to whitelist:
       - * (enables Lightup monitoring)

     Service: Lightup AWS Services
     Lightup leverages a dedicated single-tenant service for install and upgrade requirements. Domain to whitelist:
       - * (enables Lightup system updates and calls)
  2. A supported Kubernetes version— We currently support versions 1.24 through 1.27.

  3. AWS EKS— If you’re using AWS EKS, make sure that the following add-ons or equivalent have been installed:

  4. Sufficient cluster resources— You need a minimum of 4 vCPUs / 16 GB memory per node, and a minimum of 8 vCPUs / 64 GB memory in total across all nodes.
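The version and capacity prerequisites above can be spot-checked with kubectl before you proceed; a minimal sketch, assuming your kubectl context already points at the target cluster:

```shell
# Server version should report 1.24 through 1.27
kubectl version

# Allocatable CPU and memory per node: look for at least 4 vCPUs / 16 GB
# on each node, and 8 vCPUs / 64 GB in total across all nodes
kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory'
```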


Step 1: Set up Postgres Server 14

  1. Install Postgres Server 14 with the following resources:
    • 4 CPUs
    • 8 GB memory
    • 200 GB storage
    • 2000 IOPS
  2. Set up daily backups.
  3. When your Postgres instance is ready, log in and complete the following steps:
    a. Make note of the Postgres host, TCP port, username and password - you'll need them during the next installation step (when you bootstrap the Lightup data plane).
    b. Create three databases: adb, sdb, and udb.
    c. In each database, set max_connections >= 500, then run the following queries to confirm that the pg_stat_statements and uuid-ossp extensions are enabled:
SELECT * FROM pg_stat_statements LIMIT 1;
SELECT uuid_generate_v4();
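The database setup in step 3 can be scripted with psql. A minimal sketch, assuming the uuid-ossp and pg_stat_statements extensions (inferred from the verification queries above) and a self-managed server — on managed Postgres such as Amazon RDS, max_connections is set through a parameter group instead:

```shell
export PGHOST=<postgres-host> PGPORT=<port> PGUSER=<username> PGPASSWORD=<password>

# Create the three Lightup databases and enable the extensions
# that the verification queries depend on
for db in adb sdb udb; do
  createdb "$db"
  psql -d "$db" -c 'CREATE EXTENSION IF NOT EXISTS "uuid-ossp";'
  # pg_stat_statements also requires shared_preload_libraries = 'pg_stat_statements'
  # in postgresql.conf, followed by a server restart
  psql -d "$db" -c 'CREATE EXTENSION IF NOT EXISTS pg_stat_statements;'
done

# Raise the connection limit (takes effect after a restart)
psql -d adb -c 'ALTER SYSTEM SET max_connections = 500;'
```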

Step 2: Bootstrap the Lightup data plane

  1. Connect to a VM / laptop / server that has access to the Kubernetes cluster and can run bash.
  2. Make sure kubectl, curl, gpg and tar are installed.
  3. Make sure that the kubectl context points to the cluster where you're deploying Lightup. For help, see the Kubernetes documentation topic Configure Access to Multiple Clusters.
  4. Run the following command, using the tla and token values from Lightup:
    curl -H 'Cache-Control: no-cache' -L <installer URL provided by Lightup> | \
    LIGHTUP_TLA=<tla> LIGHTUP_TOKEN=<token> bash -s deploy
  5. Follow the prompts to complete Postgres configuration, entering the values you noted when you set up Postgres.
  6. Wait for about 10 minutes, then proceed to Step 3: Access the Lightup UI.
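Before moving on, you can confirm that the bootstrap succeeded by checking pod status — a sketch, assuming the data plane was installed into the lightup namespace used elsewhere in this guide:

```shell
# All pods should reach Running or Completed state
kubectl get pods -n lightup

# If anything is stuck, recent events usually explain why
kubectl get events -n lightup --sort-by=.lastTimestamp | tail -n 20
```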

Step 3: Access the Lightup UI

Lightup exposes a NodePort on port 30443 that you can use to access the Lightup UI.

If the nodes in your EKS cluster have public IP addresses, you can access the UI at https://<ip-address-or-host-name-of-your-VM>:30443. If the nodes don't have public IP addresses, or if you want to configure a URL, you can use an AWS Network Load Balancer or an AWS Application Load Balancer to access the UI. Lightup only works with supported domains, so you'll need to provide Lightup Support with your IP address, hostname, or URL.

The following example shows how you'd access the Lightup UI using a Network Load Balancer.

  1. Create a certificate for your UI domain with AWS Certificate Manager (ACM).

  2. Verify the certificate, using the option to create validation records in Route 53.

  3. Copy the certificate ARN— you'll need this for the following step.

  4. Update the following manifest for the Kubernetes service that will generate the NLB, and save it as a file named lightup-nlb-service.yaml:

     apiVersion: v1
     kind: Service
     metadata:
       name: lightup-backend
       labels:
         app: backend
       annotations:
         # NLB with TLS terminated at the load balancer using your ACM certificate;
         # replace <certificate-ARN> with the ARN you copied in step 3
         service.beta.kubernetes.io/aws-load-balancer-type: nlb
         service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <certificate-ARN>
         service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
     spec:
       externalTrafficPolicy: Local
       ports:
       - name: backend
         port: 443
         protocol: TCP
         targetPort: 8000
       selector:
         app: backend
         release: backend
       type: LoadBalancer
  5. Deploy the NLB using kubectl and the YAML file you saved:
    kubectl apply -n lightup -f lightup-nlb-service.yaml

  6. Copy the generated NLB's URL.

  7. Create a new Route 53 A record for your UI domain that routes to the NLB's DNS name.
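For steps 6 and 7, you can read the generated NLB's DNS name directly from the service once AWS finishes provisioning it — a sketch, using the lightup-backend service defined in the manifest above:

```shell
# Prints the load balancer's DNS name; point your Route 53 record at this
kubectl get service lightup-backend -n lightup \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```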

Set up a NAT gateway (optional)

If the Lightup application needs to use a single, consistent IP address to access your datasources, you can configure a NAT gateway for your k8s cluster on any of the following platforms: