
Key Components

To install the complete control plane on your own infrastructure, you need to install the following components:
  • Truefoundry Control Plane + Gateway (Shipped as a single helm chart called truefoundry)
  • PostgreSQL Database (Managed or Self-Hosted with PostgreSQL >= 13)
  • Blob Storage (S3, GCS, Azure Container or any other S3 compatible storage)

Compute Requirements

TrueFoundry ships as a helm chart (https://github.com/truefoundry/infra-charts/tree/main/charts/truefoundry) with configurable options to deploy both the Deployment and AI Gateway features, or just one of them, according to your needs. The compute requirements change based on the set of features enabled and the scale of users and requests. Here are a few scenarios that you can choose from based on your needs.
The small tier is recommended for development purposes: all components are deployed on Kubernetes with a single replica each, so the setup is not highly available. It lets you test the different features of TrueFoundry, but we do not recommend it for production.
| Component | CPU | Memory | Storage | Min Nodes | Remarks |
| --- | --- | --- | --- | --- | --- |
| Helm chart (AI Gateway Control Plane components) | 6 vCPU | 12GB | 60GB Persistent Volumes (block storage) on Kubernetes | 2 | Pods should be spread over min 2 nodes. Cost: ~$220 pm (EC2 and EC2-Other) |
| Helm chart (AI Gateway component only) | 1 vCPU | 512Mi | - | 1 | Pods should be spread over min 1 node. Cost: ~$35 pm (EC2 and EC2-Other) |
| Postgres (deployed on Kubernetes) | 0.5 vCPU | 0.5GB | 5GB Persistent Volumes (block storage) on Kubernetes | - | Cost: ~$15 pm (RDS compute and storage) |
| Blob Storage (S3 compatible) | - | - | 20GB | - | Cost: ~$3 pm (S3 storage) |

Prerequisites for Installation

  1. Kubernetes Cluster: K8s cluster 1.27+.
  2. Support for dynamic provisioning of storage for PVCs (e.g., AWS EBS, Azure Disk) and an ingress controller (e.g., NGINX Ingress Controller) or Istio service mesh for exposing the control plane dashboard and AI Gateway at an endpoint.
  3. Domain to map the ingress of the Control Plane dashboard and AI Gateway, along with a certificate for the domain.
    This domain will be referred to as the Control Plane URL in our documentation.
  4. Egress Access from TrueFoundry:
  5. Tenant Name, License Key, and image pull secret - these will be given by the TrueFoundry team. Make sure your organization is registered (https://truefoundry.com/register) on TrueFoundry.
    One Tenant Name and License Key must only be used to set up one Control Plane. Switching later to a new tenant name and license key would lead to complete data loss in the existing control plane.
  6. PostgreSQL database. We usually recommend a managed PostgreSQL database (e.g., AWS RDS, Google Cloud SQL, or Azure Database for PostgreSQL) for production environments.
    • PostgreSQL version >= 13
    • IOPS: default (suitable for dev/testing).
    • For PostgreSQL 17+: disable forced SSL. On AWS, set the force_ssl parameter to 0 in the parameter group; on Azure, set the require_secure_transport parameter to false in the parameter group.
    • For instance requirements, refer to the Compute Requirements section.
      If you do not have a managed database and are just testing, set devMode to true in the values file to spin up a local PostgreSQL database.
  7. Blob Storage to store the AI Gateway request logs (either S3, GCS, Azure Blob Storage, or any other S3 compatible storage). You can find the instructions in the guide below.

Installation Instructions

1

Create S3 Bucket

Create an S3 bucket with the following config:
  • Make sure the bucket has a lifecycle rule that aborts incomplete multipart uploads after 7 days.
  • Make sure CORS is applied on the bucket with the below configuration:
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "POST", "PUT"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3000
  }
]
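Both bucket settings can also be applied from the command line. The sketch below writes the two configurations to files and shows the corresponding AWS CLI calls; the bucket name is a placeholder, and the `aws` calls are commented out since they require credentials:

```shell
# Placeholder bucket name; replace with your own
BUCKET=my-truefoundry-bucket

# CORS rules required by the control plane (same as the JSON config above)
cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedHeaders": ["*"],
      "AllowedMethods": ["GET", "POST", "PUT"],
      "AllowedOrigins": ["*"],
      "ExposeHeaders": ["ETag"],
      "MaxAgeSeconds": 3000
    }
  ]
}
EOF

# Lifecycle rule: abort incomplete multipart uploads after 7 days
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "abort-incomplete-multipart",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }
  ]
}
EOF

# Apply both (requires AWS credentials with permission on the bucket):
# aws s3api put-bucket-cors --bucket "$BUCKET" --cors-configuration file://cors.json
# aws s3api put-bucket-lifecycle-configuration --bucket "$BUCKET" \
#   --lifecycle-configuration file://lifecycle.json
```

The console UI achieves the same result; the CLI form is convenient if you script your bucket setup.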
2

Setup Control Plane Platform IAM Role

Control Plane IAM Role needs to have permission to access the S3 bucket created in the previous step.
  • Create a new IAM role for Control Plane with a suitable name like tfy-control-plane-platform-deps
  • Add the following trust policy to the Control Plane IAM Role:
{
  "Version": "2012-10-17",
  "Statement": [
      {
          "Effect": "Allow",
          "Principal": {
              "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/oidc.eks.<AWS_REGION>.amazonaws.com/id/<OIDC_ID>"
          },
          "Action": "sts:AssumeRoleWithWebIdentity",
          "Condition": {
              "StringEquals": {
                  "oidc.eks.<AWS_REGION>.amazonaws.com/id/<OIDC_ID>:sub": [
                      "system:serviceaccount:truefoundry:truefoundry"
                  ]
              }
          }
      }
  ]
}
Replace <ACCOUNT_ID>, <AWS_REGION>, and <OIDC_ID> with the values from your EKS cluster; you can find the OIDC ID in the EKS cluster details. We also assume here that the service account is truefoundry and the namespace is truefoundry; change these as per your needs.
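To find the OIDC ID, you can read the issuer URL off the cluster: its trailing path segment is the ID. A sketch follows; the cluster name and the example issuer are hypothetical, and the live lookup is commented out since it needs AWS credentials:

```shell
# Live lookup (cluster name and region are placeholders):
# aws eks describe-cluster --name my-eks-cluster --region us-west-2 \
#   --query "cluster.identity.oidc.issuer" --output text

# Given an issuer URL, the OIDC ID is everything after the last slash:
ISSUER="https://oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
OIDC_ID="${ISSUER##*/}"
echo "$OIDC_ID"   # EXAMPLED539D4633E53DE1B71EXAMPLE
```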
Create an IAM policy to allow access to the S3 bucket with the following config:
{
  "Statement": [
    {
      "Sid": "S3",
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": [
        "arn:aws:s3:::<YOUR_S3_BUCKET_NAME>",
        "arn:aws:s3:::<YOUR_S3_BUCKET_NAME>/*"
      ]
    }
  ],
  "Version": "2012-10-17"
}
Attach the IAM policy to the Control Plane Platform IAM Role. You can also attach the IAM policy needed to access AWS Bedrock models.
If you are integrating with AWS Bedrock models from a different AWS account, check the FAQ section.
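If you prefer to script this step, the trust policy above can be templated with environment variables and validated before creating the role. All values below are placeholders, and the `aws iam` call is commented out since it requires credentials:

```shell
# Placeholders; replace with your account, region, and OIDC ID
ACCOUNT_ID=123456789012
AWS_REGION=us-west-2
OIDC_ID=EXAMPLED539D4633E53DE1B71EXAMPLE

cat > trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${ACCOUNT_ID}:oidc-provider/oidc.eks.${AWS_REGION}.amazonaws.com/id/${OIDC_ID}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.${AWS_REGION}.amazonaws.com/id/${OIDC_ID}:sub": [
            "system:serviceaccount:truefoundry:truefoundry"
          ]
        }
      }
    }
  ]
}
EOF

# Sanity-check the rendered document, then create the role with it:
python3 -m json.tool trust-policy.json > /dev/null && echo "trust-policy.json is valid JSON"
# aws iam create-role --role-name tfy-control-plane-platform-deps \
#   --assume-role-policy-document file://trust-policy.json
```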
3

Create Postgres RDS Database

Create a PostgreSQL RDS instance of size db.t3.medium with a storage size of 30GB.
Important configuration notes:
  • For PostgreSQL 17+: disable forced SSL by setting the force_ssl parameter to 0 in the parameter group.
  • Security group: ensure your RDS security group has inbound rules allowing traffic from the EKS node security groups.
If you want to set up PostgreSQL on Kubernetes for testing purposes instead of using RDS, skip this step and set devMode to true in the values file below.
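The same instance can be created with the AWS CLI. The sketch below only prints the creation command rather than running it; the identifier, username, and password are placeholders to replace, and the PostgreSQL 17 parameter-group commands are commented out:

```shell
# Placeholders; replace before running for real
DB_ID=truefoundry-db
DB_PASSWORD='change-me'

# Sketch of the RDS creation call; drop the echo to execute it with real credentials
echo aws rds create-db-instance \
  --db-instance-identifier "$DB_ID" \
  --engine postgres \
  --db-instance-class db.t3.medium \
  --allocated-storage 30 \
  --master-username truefoundry \
  --master-user-password "$DB_PASSWORD"

# For PostgreSQL 17+, create a parameter group with force_ssl=0 and attach it to the instance:
# aws rds create-db-parameter-group --db-parameter-group-name tfy-pg17 \
#   --db-parameter-group-family postgres17 --description "TrueFoundry PG17"
# aws rds modify-db-parameter-group --db-parameter-group-name tfy-pg17 \
#   --parameters "ParameterName=force_ssl,ParameterValue=0,ApplyMethod=immediate"
```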
4

Create Kubernetes Secrets

We will create two secrets in this step:
  1. Store the License Key and DB Credentials
  2. Store the Image Pull Secret
We need to create a Kubernetes secret containing the license key and DB credentials.
If you are using PostgreSQL on Kubernetes in dev mode, the values will be as follows:
  • DB_HOST: <HELM_RELEASE_NAME>-postgresql.<NAMESPACE>.svc.cluster.local (e.g., truefoundry-postgresql.truefoundry.svc.cluster.local)
  • DB_NAME: truefoundry
  • DB_USERNAME: postgres (to use a custom username, also update postgresql.auth.username)
  • DB_PASSWORD: randompassword (you can change this to any value)
truefoundry-creds.yaml
apiVersion: v1
kind: Secret
metadata:
  name: truefoundry-creds
type: Opaque
stringData:
  TFY_API_KEY: <TFY_API_KEY> # Provided by TrueFoundry team
  DB_HOST: <DB_HOST>
  DB_NAME: <DB_NAME>
  DB_USERNAME: <DB_USERNAME>
  DB_PASSWORD: <DB_PASSWORD>
Apply the secret to the Kubernetes cluster (Assuming you are installing the control plane in the truefoundry namespace)
kubectl apply -f truefoundry-creds.yaml -n truefoundry
We need to create an image pull secret to enable pulling the TrueFoundry images from the private registry.
truefoundry-image-pull-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: truefoundry-image-pull-secret
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <IMAGE_PULL_SECRET> # Provided by TrueFoundry team
Apply the secret to the Kubernetes cluster (Assuming you are installing the control plane in the truefoundry namespace)
kubectl apply -f truefoundry-image-pull-secret.yaml -n truefoundry
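For reference, the <IMAGE_PULL_SECRET> value is the base64 encoding of a Docker config file. The sketch below builds one for hypothetical credentials; alternatively, kubectl can generate the whole secret for you (shown commented at the end):

```shell
# Hypothetical credentials, for illustration only
REGISTRY=tfy.jfrog.io
USERNAME=user
PASSWORD=pass

# .dockerconfigjson is base64 of {"auths":{"<registry>":{"auth":"base64(user:pass)"}}}
AUTH=$(printf '%s:%s' "$USERNAME" "$PASSWORD" | base64)
printf '{"auths":{"%s":{"auth":"%s"}}}' "$REGISTRY" "$AUTH" | base64 | tr -d '\n'
echo

# Equivalent without hand-building the JSON:
# kubectl create secret docker-registry truefoundry-image-pull-secret \
#   --docker-server=tfy.jfrog.io --docker-username=<USERNAME> \
#   --docker-password=<PASSWORD> -n truefoundry
```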
5

Create HelmChart Values file

Create a values file as given below and replace the following values:
  • Control Plane URL: URL that you will map to the control plane dashboard (e.g., https://truefoundry.example.com)
  • Tenant Name: Tenant name provided by TrueFoundry team
  • AWS S3 Bucket Name: Name of the S3 bucket you created in the previous step (e.g., my-truefoundry-bucket)
  • AWS Region: Region of the S3 bucket you created in the previous step (e.g., us-west-2)
  • Control Plane IAM Role ARN: ARN of the IAM role you created in the previous step (e.g., arn:aws:iam::123456789012:role/tfy-control-plane-platform-deps)
truefoundry-values.yaml
global:
  # Domain to map the platform to
  controlPlaneURL: https://example.com

  # Ask TrueFoundry team to provide these
  tenantName: <TENANT_NAME>

  # Choose the resource tier as per your needs
  resourceTier: medium # or small or large

  # This is the reference to the secrets we created in the previous step
  existingTruefoundryCredsSecret: "truefoundry-creds"
  imagePullSecrets:
    - name: "truefoundry-image-pull-secret"
  ## Add if you have restricted public registry access
  # image:
  #   pullSecretNames:
  #   - "truefoundry-image-pull-secret"

  config:
    defaultCloudProvider: "aws"
    storageConfiguration:
      awsS3BucketName: "<AWS_S3_BUCKET_NAME>"
      awsRegion: "<AWS_REGION>"

  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: <CONTROL_PLANE_IAM_ROLE_ARN>

  ingress:
    hosts:
      - example.com
    enabled: true
    annotations: {}
    ingressClassName: nginx # Replace with your ingress class name

# In case, you want to spin up PostgreSQL on kubernetes, enable this
# Please add creds and host details in the secret `truefoundry-creds` in the previous step
devMode:
  enabled: false
tags:
  llmGateway: true
  llmGatewayRequestLogging: true

# Disable few dependencies for only LLM Gateway setup
tfyBuild:
  enabled: false
sfyManifestService:
  enabled: false
tfyController:
  enabled: false
tfy-buildkitd-service:
  enabled: false
tfy-configs:
  enabled: false
6

Install Helm chart

helm upgrade --install truefoundry oci://tfy.jfrog.io/tfy-helm/truefoundry -n truefoundry --create-namespace -f truefoundry-values.yaml

FAQ

You can add multiple gateway planes to the control plane by following the steps below:
1

Create Kubernetes Secret for License Key and DB Credentials

We will create two secrets in this step:
  1. Store the License Key
  2. Store the Image Pull Secret
We need to create a Kubernetes secret containing the license key.
The same license key is used for all gateway planes as for the control plane.
truefoundry-creds.yaml
apiVersion: v1
kind: Secret
metadata:
  name: truefoundry-creds
type: Opaque
stringData:
  TFY_API_KEY: <TFY_API_KEY>
Apply the secret to the Kubernetes cluster (Assuming you are installing the control plane in the truefoundry namespace)
kubectl apply -f truefoundry-creds.yaml -n truefoundry
We need to create an image pull secret to enable pulling the TrueFoundry images from the private registry.
The same image pull secret is used for all gateway planes as for the control plane. Use your own credentials if you are pulling TrueFoundry images from your registry.
truefoundry-image-pull-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: truefoundry-image-pull-secret
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <IMAGE_PULL_SECRET> # Provided by TrueFoundry team
Apply the secret to the Kubernetes cluster (Assuming you are installing the control plane in the truefoundry namespace)
kubectl apply -f truefoundry-image-pull-secret.yaml -n truefoundry
2

Create Helm chart Values file for gateway plane

Create a values file as given below and replace the following values:
  • CONTROL_PLANE_URL: URL that you will map to the control plane dashboard.
  • TENANT_NAME: Tenant name provided by TrueFoundry team.
  • GATEWAY_ENDPOINT_HOST: The domain where you will expose the gateway endpoint (e.g., gateway.example.com)
truefoundry-gateway-values.yaml
global:
  # This is the reference to the secrets we created in the previous step
  imagePullSecrets:
    - name: "truefoundry-image-pull-secret"

  # Choose the resource tier as per your needs
  resourceTier: medium # or small or large
  controlPlaneURL: <CONTROL_PLANE_URL> # eg. https://example-company.truefoundry.cloud
  tenantName: <TENANT_NAME>

ingress:
  enabled: true
  annotations: {}
  ingressClassName: nginx
  tls: []
  hosts:
    - <GATEWAY_ENDPOINT_HOST>

# Optional: Istio configuration (if using Istio instead of standard ingress)
# istio:
#   virtualservice:
#     hosts:
#       - <GATEWAY_ENDPOINT_HOST>
#     enabled: true
#     retries:
#       enabled: true
#       retryOn: gateway-error
#     gateways:
#       - istio-system/tfy-wildcard
#     annotations: {}
3

Install Helm chart for gateway plane

helm upgrade --install tfy-llm-gateway oci://tfy.jfrog.io/tfy-helm/tfy-llm-gateway -n truefoundry --create-namespace -f truefoundry-gateway-values.yaml
Yes. You can configure your Artifactory to mirror our registry.
Credentials for accessing the TrueFoundry private registry are required and will be provided during onboarding.
1. Registry Configuration
  • URL: https://tfy.jfrog.io/
2. Update Helm values
global:
  image:
    registry: <YOUR_REGISTRY> # Replace with your registry
postgresql:
  image:
    registry: <YOUR_REGISTRY> # Replace with your registry, use this if `devMode` is enabled
Yes. We provide a script that uses the truefoundry Helm Chart to identify and copy required images to your private registry.
Credentials for accessing the TrueFoundry private registry are required and will be provided during onboarding.
1. Install required dependencies
  • Skopeo
    • Used to perform the image copy operation.
  • Helm
    • Used to get the list of images from the TrueFoundry Helm Chart.
2. Add TrueFoundry Helm Chart repository
helm repo add truefoundry https://truefoundry.github.io/infra-charts
helm repo update
3. Authenticate to the TrueFoundry source registry
skopeo login -u <USERNAME> -p <PASSWORD> https://tfy.jfrog.io/
Replace <USERNAME> with the TrueFoundry registry username.
Replace <PASSWORD> with the TrueFoundry registry password.
4. Authenticate to your destination registry
skopeo login -u <USERNAME> -p <PASSWORD> <YOUR_REGISTRY>
Replace <USERNAME> with your registry username.
Replace <PASSWORD> with your registry password.
Replace <YOUR_REGISTRY> with the URL of your registry.
Skopeo will use authentication details for a registry that was previously authenticated with docker login. Alternatively, you can use the --dest-user and --dest-password flags to provide the username and password for the destination registry.
5. Run Clone Image Script
export TRUEFOUNDRY_HELM_CHART_VERSION=<TRUEFOUNDRY_HELM_CHART_VERSION>
export TRUEFOUNDRY_HELM_VALUES_FILE=<TRUEFOUNDRY_HELM_VALUES_FILE>
export DEST_REGISTRY=<YOUR_DESTINATION_REGISTRY>

# Dry-run example
curl -s https://raw.githubusercontent.com/truefoundry/infra-charts/main/scripts/clone_images_to_your_registry.sh | bash -s -- --helm-chart truefoundry --helm-version $TRUEFOUNDRY_HELM_CHART_VERSION --helm-values $TRUEFOUNDRY_HELM_VALUES_FILE --dest-registry $DEST_REGISTRY --dry-run

# Live example
curl -s https://raw.githubusercontent.com/truefoundry/infra-charts/main/scripts/clone_images_to_your_registry.sh | bash -s -- --helm-chart truefoundry --helm-version $TRUEFOUNDRY_HELM_CHART_VERSION --helm-values $TRUEFOUNDRY_HELM_VALUES_FILE --dest-registry $DEST_REGISTRY
Replace <TRUEFOUNDRY_HELM_CHART_VERSION> with the version of the TrueFoundry helm chart you want to use; you can find the latest version in the changelog.
Replace <TRUEFOUNDRY_HELM_VALUES_FILE> with the path to the values file you created in the Installation Instructions.
Replace <DEST_REGISTRY> with the URL of your registry.
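For a sense of what the script does per discovered image, the underlying operation is a skopeo copy between the two registries. The sketch below only prints the command for a single, hypothetical image path and tag:

```shell
# Hypothetical image path and tag, for illustration only
SRC=tfy.jfrog.io/tfy-private-images/example-service:1.0.0
DEST=registry.example.com/tfy-private-images/example-service:1.0.0

# Print the copy command; drop the echo to run it after skopeo login on both registries
echo skopeo copy "docker://$SRC" "docker://$DEST"
```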
6. Update the Helm values file to use your registry
global:
  image:
    registry: <YOUR_REGISTRY> # Replace with your registry
postgresql:
  image:
    registry: <YOUR_REGISTRY> # Replace with your registry, use this if `devMode` is enabled
An air-gapped environment is isolated from the internet. Since the control plane and gateway plane ship as a single helm chart (truefoundry), you only need to make the container images available in your private registry and update the helm values to point to it.
  1. Copy images to your private registry — set up a registry mirror or copy images directly using the steps described in the FAQs above
  2. Update helm values to point to your private registry (see the helm value overrides in the same FAQs above)
  3. Continue with the standard installation on this page
You can integrate with AWS bedrock models from a different AWS account by following the steps below:
  1. Add the following IAM policy to the control plane IAM role so that it can assume the IAM role of the AWS account that has the Bedrock models:
{
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Effect": "Allow",
      "Resource": "*"
    }
  ],
  "Version": "2012-10-17"
}
  2. In the IAM role in the destination AWS account (which has Bedrock access), add the following trust policy to allow the control plane IAM role to assume it:
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "<CONTROL_PLANE_IAM_ROLE_ARN>"
      },

      "Action": "sts:AssumeRole"
    }
  ],
  "Version": "2012-10-17"
}
  3. Now you can use the IAM role of the destination AWS account while integrating AWS Bedrock models in the TrueFoundry AI Gateway.
No, we only need block storage for installing and running TrueFoundry. It should be provisioned via a CSI driver, and only ReadWriteOnce access is required.
We log access information in standard output with the following format:
  1. logfmt
  2. json
These can be switched with the help of an environment variable to the AI Gateway installation. (Default: logfmt)

Log format

Standard log format structure:
time="%START_TIME%" level=%LEVEL% ip=%IP_ADDRESS% tenant=%TENANT_NAME% user=%SUBJECT_TYPE%:%SUBJECT_SLUG% model=%MODEL_ID% method=%METHOD% path=%PATH% status=%STATUS_CODE% time_taken=%DURATION%ms trace_id=%TRACE_ID%
| Log operator | Details |
| --- | --- |
| START_TIME | ISO timestamp for request start, e.g. 2025-08-12 13:34:50 |
| LEVEL | info \| warn \| error |
| IP_ADDRESS | IP address of the caller, e.g. ::ffff:10.99.55.142 |
| TENANT_NAME | Name of the tenant, e.g. truefoundry |
| SUBJECT_TYPE | user \| virtualaccount |
| SUBJECT_SLUG | Email or virtual account name, e.g. tfy-user@truefoundry.com or demo-virtualaccount |
| MODEL_ID | Model ID, e.g. openai-default/gpt-5 |
| METHOD | GET \| POST \| PUT |
| PATH | Path of the request, e.g. /api/inference/openai/chat/completions |
| STATUS_CODE | 200 \| 400 \| 401 \| 403 \| 429 \| 500 |
| DURATION | Duration of the request in ms, e.g. 12 |
| TRACE_ID | Trace ID of the request |
Examples:
time="2025-08-12 13:34:50" level=info ip=::ffff:10.99.55.142 tenant=truefoundry user=virtualaccount:demo-virtualaccount model=openai-default/gpt-5 method=POST path=/api/inference/openai/chat/completions status=200 time_taken=53ms trace_id=587b2a946c13f62f9160674a8c983ce3
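Since the access log is logfmt, individual fields can be pulled out with standard shell tools, for example to alert on non-200 responses. Extracting the status code from the example line above:

```shell
LINE='time="2025-08-12 13:34:50" level=info ip=::ffff:10.99.55.142 tenant=truefoundry user=virtualaccount:demo-virtualaccount model=openai-default/gpt-5 method=POST path=/api/inference/openai/chat/completions status=200 time_taken=53ms trace_id=587b2a946c13f62f9160674a8c983ce3'

# Pull the status key=value pair out of the logfmt line and keep the value
STATUS=$(printf '%s\n' "$LINE" | grep -o 'status=[0-9]*' | cut -d= -f2)
echo "$STATUS"   # 200
```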
By default, the control plane uses the TrueFoundry Auth Server for user authentication. However, you can configure it to use your own external identity provider instead. We support both OIDC and SAML-compliant identity providers. Read more
If your LLM requests are timing out after a certain duration, the first thing to check is the traces in the TrueFoundry dashboard. Look at the request duration: if you see requests consistently timing out at exactly 60 seconds, the issue is almost certainly the load balancer, not the TrueFoundry AI Gateway. The TrueFoundry gateway does not impose any request timeout.
This commonly happens when an Application Load Balancer (ALB) is placed in front of the gateway to expose it. The default Connection idle timeout on AWS ALBs is 60 seconds, which is too short for long-running LLM inference requests (especially streaming responses or large prompts).
Solution: increase the idle timeout on your AWS ALB to a higher value (e.g., 300 seconds or more). You can find this setting in the AWS Console under EC2 → Load Balancers → select your ALB → Attributes tab → Connection idle timeout. You can also update it via the AWS CLI:
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn <YOUR_ALB_ARN> \
  --attributes Key=idle_timeout.timeout_seconds,Value=300
If you are using an ingress controller (e.g., NGINX Ingress) in addition to the ALB, also verify that the ingress controller’s proxy timeout settings are configured appropriately.
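For NGINX Ingress specifically, the proxy timeouts can be raised with annotations on the gateway's Ingress resource. The values below are illustrative; align them with (or above) your longest expected request duration:

```yaml
metadata:
  annotations:
    # Illustrative timeouts in seconds; tune for your workload
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
```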
Yes. TrueFoundry supports exporting metrics to Victoria Metrics as an alternative to Prometheus. To enable this, add the following to your truefoundry-values.yaml file and upgrade the Helm release:
This only installs the VMServiceScrape and related custom resources for scraping TrueFoundry metrics. It does not deploy Victoria Metrics itself — you are responsible for installing and managing your own Victoria Metrics instance.
truefoundry-values.yaml
victoriaMetricsMonitoring:
  enabled: true
Then upgrade the Helm release to apply the changes:
helm upgrade --install truefoundry oci://tfy.jfrog.io/tfy-helm/truefoundry -n truefoundry --create-namespace -f truefoundry-values.yaml
If your TrueFoundry deployment needs to trust custom Certificate Authorities (e.g., for internal services, private registries, or corporate proxies), you can configure custom CA certificates in the Helm chart. There are two methods to provide custom CA certificates:

Method 1: Pass customCA as a multiline string

You can directly provide the CA certificate content as a multiline string in your values.yaml:
truefoundry-values.yaml
global:
  customCA:
    enabled: true
    certificate: |
      -----BEGIN CERTIFICATE-----
      MIIDXTCCAkWgAwIBAgIJAKZ7VqHEqvmKMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV
      BAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBX
      ... (rest of your certificate) ...
      -----END CERTIFICATE-----
This method is suitable when you have one or a few CA certificates to add.
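Before pasting a certificate into the values file, it is worth checking that it parses as valid PEM. The sketch below generates a throwaway self-signed CA purely to demonstrate the check; in practice you would run the second command against your real CA file:

```shell
# Generate a throwaway self-signed CA (illustration only; use your real CA file in practice)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out custom-ca.crt -subj "/CN=example-internal-ca"

# Verify the PEM parses, and inspect its subject and expiry
openssl x509 -in custom-ca.crt -noout -subject -enddate
```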

Method 2: Pass entire ca-certificates.crt as a ConfigMap

For environments with multiple custom CAs or when you want to maintain a standard ca-certificates.crt file, you can create a ConfigMap containing all your trusted certificates.
1

Prepare your CA certificate file

Add your custom CA certificate(s) to your system’s CA bundle. On a Linux system with the certificate file saved as custom-ca.crt:
# Copy the certificate to the CA directory
sudo cp custom-ca.crt /usr/local/share/ca-certificates/

# Update the CA certificates bundle
sudo update-ca-certificates
This will generate or update /etc/ssl/certs/ca-certificates.crt with your custom CA included.
2

Create a ConfigMap from the ca-certificates.crt file

Create a Kubernetes ConfigMap containing the complete CA bundle:
kubectl create configmap custom-ca-certificates \
  --from-file=ca-certificates.crt=/etc/ssl/certs/ca-certificates.crt \
  -n truefoundry
Alternatively, if you want to create it from a YAML file:
custom-ca-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-ca-certificates
  namespace: truefoundry
data:
  ca-certificates.crt: |
    -----BEGIN CERTIFICATE-----
    ... (your complete ca-certificates.crt content) ...
    -----END CERTIFICATE-----
Apply the ConfigMap:
kubectl apply -f custom-ca-configmap.yaml
3

Reference the ConfigMap in your Helm values

Update your truefoundry-values.yaml to reference the ConfigMap:
truefoundry-values.yaml
global:
  customCA:
    enabled: true
    configMapName: custom-ca-certificates
    configMapKey: ca-certificates.crt
4

Upgrade the Helm installation

Apply the changes by upgrading your Helm release:
helm upgrade --install truefoundry oci://tfy.jfrog.io/tfy-helm/truefoundry \
  -n truefoundry --create-namespace -f truefoundry-values.yaml
The custom CA certificates will be mounted into all TrueFoundry pods and added to the system’s trust store. This ensures that all outgoing HTTPS connections from TrueFoundry services will trust your custom CAs.
After adding custom CA certificates, verify that your TrueFoundry pods have restarted and are running correctly. You may need to restart existing pods for the changes to take effect.
TrueFoundry ships with a built-in monitoring stack that includes Grafana dashboards for the control plane. To enable it, add the following to your truefoundry-values.yaml:
truefoundry-values.yaml
truefoundryMonitoring:
  enabled: true
  grafana:
    grafana.ini:
      auth.jwt:
        jwk_set_url: >-
          https://<your-truefoundry-control-plane-url>/api/svc/v1/keys/<tenant-name>/jwks
Then upgrade the Helm release to apply the changes:
helm upgrade --install truefoundry oci://tfy.jfrog.io/tfy-helm/truefoundry \
  -n truefoundry --create-namespace \
  -f truefoundry-values.yaml
Once enabled, platform admins can access the Grafana dashboard at:
https://<your-truefoundry-control-plane-url>/admin/grafana/
  • Replace <your-truefoundry-control-plane-url> with your actual control plane domain (e.g., app.example.com) and <tenant-name> with your TrueFoundry tenant name provided during onboarding.
  • Only users with the admin role can access this endpoint.
  • Make sure to include the trailing / at the end of the URL.
  • If you already have Prometheus or VictoriaLogs in your cluster, you can point the monitoring stack to them using externalServices instead of installing new instances.
For the full configuration reference, see the Control Plane Monitoring guide.