Installing GroundX On-Prem on OpenShift

GroundX On-Prem is a free, open source retrieval augmented generation (RAG) tool in which all necessary components run within a single containerized deployment. This allows RAG workflows to be employed within hardened and secure environments. This guide describes how to install GroundX On-Prem on OpenShift, Red Hat’s enterprise Kubernetes platform.

The guide you are reading now is a modification of the GroundX On-Prem installation guide for AWS EKS.


⚠️ Warning ⚠️

The resources created by following this guide may incur costs. Experience with OpenShift is recommended.


Prerequisites

This guide assumes the following:

  1. You have an existing OpenShift cluster running, whether on premises or with a cloud provider
  2. You have the Node Feature Discovery (NFD) Operator and the NVIDIA GPU Operator installed on the cluster
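
To sanity-check the GPU prerequisite, you can confirm that your GPU nodes advertise an allocatable nvidia.com/gpu resource once the GPU Operator has finished provisioning them:

# List each node with its allocatable NVIDIA GPU count;
# GPU nodes should show a nonzero value
oc get nodes -o custom-columns='NAME:.metadata.name,GPUS:.status.allocatable.nvidia\.com/gpu'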

1) Defining the Infrastructure

Naturally, a good first step is to clone the GroundX On-Prem repo. If you haven’t yet, run

git clone https://github.com/eyelevelai/groundx-on-prem.git

Then run

cd groundx-on-prem/
cp operator/env.tfvars.example-openshift operator/env.tfvars

env.tfvars is the configuration file Terraform will use when defining the resources, and its contents can be modified to update this configuration as necessary. By copying operator/env.tfvars.example-openshift to operator/env.tfvars, you start from the default configuration of GroundX On-Prem for OpenShift.

2) Update admin information in env.tfvars

For security reasons, we strongly encourage you to modify the following within operator/env.tfvars (see the example after this list):

  • admin.api_key: Set this to a random UUID. You can generate one by running bin/uuid. This will be the API key associated with the admin account and will be used for inter-service communications.
  • admin.username: Set this to a random UUID. You can generate one by running bin/uuid. This will be the user ID associated with the admin account and will be used for inter-service communications.
  • admin.email: Set this to the email address you want associated with the admin account.
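
For example, from the repo root you can generate the two UUIDs and then paste them into operator/env.tfvars:

# Generate one UUID for admin.api_key and another for admin.username
bin/uuid
bin/uuid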

3) Update Persistent Volume Class Definition in env.tfvars

Many of the pods require a persistent volume (PV). If you have not already done so, you will need to define a storage class in your OpenShift cluster that the GroundX pods can use.

You will need to modify cluster.pv within operator/env.tfvars, the file we modified in the previous section. By default, the storage class is defined as:

name = "eyelevel-pv"
type = "gp2"
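
The default gp2 type matches the default EBS-backed storage class on AWS, so it will likely not exist on a non-AWS OpenShift cluster. You can list the storage classes available on your cluster and set type to one that exists (how cluster.pv.type maps onto a storage class is an assumption here; confirm against the repo’s Terraform):

# List the storage classes your cluster offers
oc get storageclass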

4) Label Nodes

GroundX pods can be deployed on as few as one node group or across up to five node groups, optimized for the needs of the five different classes of pods. By default, the pods are assigned to five different node groups in cluster.nodes within env.tfvars:

nodes = {
  cpu_memory  = "eyelevel-cpu-memory"
  cpu_only    = "eyelevel-cpu-only"
  gpu_layout  = "eyelevel-gpu-layout"
  gpu_ranker  = "eyelevel-gpu-ranker"
  gpu_summary = "eyelevel-gpu-summary"
}

The GPU node groups and resource needs in a default installation are:

  • gpu_layout
    • 0.5 vCPU
    • 2 GB RAM
    • 8 GB of GPU memory
    • ~10 GB hard disk space
  • gpu_ranker
    • 1.5 vCPU
    • 4 GB RAM
    • 8 GB of GPU memory
    • ~10 GB hard disk space
  • gpu_summary
    • 1 vCPU
    • 4 GB RAM
    • 40 GB of GPU memory
    • ~40 GB hard disk space

The CPU node groups and resource needs in a default installation are:

  • cpu_memory
    • 3.2 vCPU
    • 2.5 GB RAM
    • ~10 GB hard disk space
  • cpu_only
    • 3.5 vCPU
    • 7 GB RAM
    • ~120 GB hard disk space

The purpose of the five node groups is to give you flexibility in where pods are deployed. If you have one node group with sufficient resources for all five of the GroundX node group types, you can change the labels in cluster.nodes within env.tfvars:

nodes = {
  cpu_memory  = "eyelevel-node"
  cpu_only    = "eyelevel-node"
  gpu_layout  = "eyelevel-node"
  gpu_ranker  = "eyelevel-node"
  gpu_summary = "eyelevel-node"
}

If you have one GPU node group and one CPU node group, each with sufficient resources for the corresponding GroundX node group types, you can change the labels in cluster.nodes within env.tfvars:

nodes = {
  cpu_memory  = "eyelevel-cpu"
  cpu_only    = "eyelevel-cpu"
  gpu_layout  = "eyelevel-gpu"
  gpu_ranker  = "eyelevel-gpu"
  gpu_summary = "eyelevel-gpu"
}
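
Whichever layout you choose, the names in cluster.nodes must match labels that actually exist on your nodes. As a sketch, assuming the GroundX charts select nodes via a label key named node (an assumption; confirm the expected label key in the repo’s documentation), the single-node-group layout above could be applied like this:

# Label every node that should run GroundX pods; the value must match
# the name used in cluster.nodes (here, the single-group example)
oc label node <node-name> node=eyelevel-node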

5) Deploying GroundX

Now that the necessary resources have been defined, we can deploy GroundX by running:

operator/setup

This will deploy GroundX On-Prem onto the OpenShift cluster using the configuration defined in the previous steps.
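
Deployment can take a while as images are pulled and pods start. You can watch progress in the eyelevel namespace:

# Watch the GroundX pods until they reach the Running state
kubectl -n eyelevel get pods --watch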

6) Setting Up a Client to Talk to GroundX On-Prem

Once GroundX On-Prem is deployed, run kubectl -n eyelevel get route to view the API endpoint that the GroundX SDK can connect to. The API endpoint is the host associated with the GroundX route.

For instance, the host might resemble the following:

HOST/PORT
groundx-service-eyelevel.apps.ocp.psdc.lan
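
Before configuring the SDK, you can do a quick reachability check against the route. The request below carries no API key, so the exact status code is not guaranteed; any HTTP response indicates the route resolves and the service is reachable:

# Print the HTTP status code returned by the API endpoint
curl -s -o /dev/null -w "%{http_code}\n" http://groundx-service-eyelevel.apps.ocp.psdc.lan/api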

The GroundX SDK can communicate with your On-Prem instance of GroundX by setting base_url to point to your API endpoint and by providing an api_key, which should match the admin.api_key defined in Step 2.

Here’s example code for connecting the GroundX SDK to an On-Prem instance in Python:

from groundx import GroundX

# The host returned by `kubectl -n eyelevel get route` in the previous step
external_ip = "groundx-service-eyelevel.apps.ocp.psdc.lan"
# Must match the admin.api_key value defined in Step 2
api_key = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

client = GroundX(api_key=api_key, base_url=f"http://{external_ip}/api")

A GroundX client that points to an On-Prem instance behaves the same way as a client that points to the hosted version of GroundX. See the API Documentation for more tutorials and for documentation on specific endpoints and their usage.