Access Policies Overview
| Policy | Description |
|---|---|
| ELBControllerPolicy | Role assumed by load balancer controller to provision ELB when a service of type LoadBalancer is created |
| KarpenterPolicy and SQSPolicy | Role assumed by Karpenter to dynamically provision nodes and handle spot node termination |
| EFSPolicy | Role assumed by EFS CSI to provision and attach EFS volumes |
| EBSPolicy | Role assumed by EBS CSI to provision and attach EBS volumes |
| RolePolicy with policies for ECR, S3, SSM, and EKS, using the trust relationship | Role assumed by TrueFoundry to allow access to ECR, S3, and SSM services. If you are using TrueFoundry's control plane, the role will be assumed by arn:aws:iam::416964291864:role/tfy-ctl-euwe1-production-truefoundry-deps; otherwise it will be assumed by your control plane's IAM role |
| ClusterRole with policies: AmazonEKSClusterPolicy, AmazonEKSVPCResourceControllerPolicy, EncryptionPolicy | Role that provides Kubernetes permissions to manage the cluster lifecycle, networking, and encryption |
| NodeRole with policies: AmazonEC2ContainerRegistryReadOnlyPolicy, AmazonEKS_CNI_Policy, AmazonEKSWorkerNodePolicy, AmazonSSMManagedInstanceCorePolicy | Role assumed by EKS nodes to work with AWS resources for ECR access, IP assignment, and cluster registration |
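As an illustration of the trust relationship mentioned for the RolePolicy above, a minimal trust policy for the TrueFoundry-hosted control plane could look roughly like the following sketch (the principal ARN is the one from the table; when self-hosting, substitute your own control plane's IAM role ARN):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::416964291864:role/tfy-ctl-euwe1-production-truefoundry-deps"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```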
Setting up Infrastructure
Requirements:
The requirements to set up the control plane in each of the scenarios are as follows:
- Billing and STS must be enabled for the AWS account.
- Please make sure you have enough quota for GPU/Inferentia instances on the account, depending on your use case. You can check and increase quotas at AWS EC2 service quotas.
- Please make sure you have created a certificate for your domain in AWS Certificate Manager (ACM) and have the ARN of the certificate ready. This is required to set up TLS for the load balancer.
- Postgres database with the following requirements:
  - Version: >= 13
  - Instance Types: `db.t3.medium` or `db.t4g.medium`
  - Storage: 20GB of type `gp3` with autoscale enabled to 30GB
  - Encryption: Enabled
  - For PostgreSQL 17+: Set the `force_ssl` parameter to `0` (off) in the parameter group if you need to allow non-SSL connections (the default is `1`)
  - Security Group: Ensure the RDS security group allows inbound traffic from the EKS node security groups
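If you are provisioning the database yourself, the requirements above could be expressed in Terraform roughly as in this sketch (resource names, credentials, and the security group reference are placeholders, not part of the official TrueFoundry module):

```hcl
resource "aws_db_instance" "truefoundry" {
  identifier            = "truefoundry-db"  # placeholder name
  engine                = "postgres"
  engine_version        = "13"              # any version >= 13 works
  instance_class        = "db.t4g.medium"   # or db.t3.medium
  allocated_storage     = 20                # GB
  max_allocated_storage = 30                # storage autoscaling cap
  storage_type          = "gp3"
  storage_encrypted     = true              # encryption: enabled

  db_name  = "truefoundry"                  # placeholder
  username = "truefoundry"                  # placeholder
  password = var.db_password                # supply securely, e.g. via a variable

  # Must allow inbound traffic from the EKS node security groups.
  vpc_security_group_ids = [aws_security_group.rds.id]
}
```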
- S3 bucket to store the intermediate code while building the docker image.
- Egress Access for TrueFoundry Auth: Egress access to https://auth.truefoundry.com and https://analytics.truefoundry.com is needed to verify the users logging into the TrueFoundry platform for licensing purposes.
- DNS: Domain for control plane and service endpoints. One endpoint to point to the control plane service (e.g., platform.example.com) and the other to point to the compute plane service (e.g., tfy.example.com/service1). The control-plane URL must be reachable from the compute-plane. The developers will need to access the TrueFoundry UI at the provided domain.
- We will need a certificate ARN (for the domain provided above) to attach to the loadbalancer so as to terminate TLS traffic at the load balancer. This will allow the services we deploy on the cluster to be accessed via HTTPS. We recommend using AWS Certificate Manager to add TLS to the load balancer. You can read the instructions in Step 2 below on how to create the certificate in AWS Certificate Manager.
- You need to have enough permissions on the AWS account to create the resources needed for the compute plane. Check this for more details. We usually recommend admin permission on the AWS account, but if you need the exact set of fine-grained permissions, you can check the list of permissions below:
- New VPC and New EKS Cluster
- Existing VPC and New EKS Cluster
- Existing EKS Cluster
- The new VPC should have a CIDR range of /20 or larger, at least 2 availability zones, and private subnets with a CIDR of `/24` or larger. This is to ensure capacity for ~250 instances and 4096 pods.
- If you want to use a smaller network range for your EKS cluster, TrueFoundry supports EKS custom networking as well.
- A NAT gateway will be provisioned to provide internet access to the private subnets.
- We should have egress access to `public.ecr.aws`, `quay.io`, `ghcr.io`, `tfy.jfrog.io`, `docker.io/natsio`, `nvcr.io`, and `registry.k8s.io` so that we can download the Docker images for Argo CD, NATS, the GPU operator, Argo Rollouts, Argo Workflows, Istio, KEDA, etc.
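The subnet sizing guidance above can be sanity-checked with quick arithmetic - the address count of a CIDR block is 2^(32 - prefix length), so a `/20` VPC yields 4096 addresses and each `/24` subnet yields 256:

```shell
# Number of addresses in a CIDR block is 2^(32 - prefix_length).
vpc_ips=$(( 2 ** (32 - 20) ))      # /20 VPC
subnet_ips=$(( 2 ** (32 - 24) ))   # /24 private subnet
echo "VPC /20 addresses: ${vpc_ips}"        # room for ~250 nodes and 4096 pod IPs
echo "Subnet /24 addresses: ${subnet_ips}"
```

A few addresses per subnet are reserved by AWS, so the usable counts are slightly lower.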
Setting up control plane
TrueFoundry control plane infrastructure is provisioned using Terraform. You can download the Terraform code for your exact account by filling in your account details and downloading a script that can be executed on your local machine. To perform the steps below, you need to register an account on TrueFoundry and log in to the platform.
Choose to create a new cluster or attach an existing cluster
Go to Clusters. Append `&controlPlaneSetupEnabled=true` to the end of your URL; this will enable the control plane installation for you. Click on Create New Cluster or Attach Existing Cluster depending on your use case. Read the requirements and, if everything is satisfied, click on Continue.
Get Domain and Certificate ARN
Choose a domain for your workloads, e.g. `*.services.example.com` - we will be creating a DNS record for this later in Step 6. We recommend using AWS Certificate Manager (ACM) to create the certificate since it's easier to manage and renew certificates automatically. To generate a certificate ARN, please follow the steps below. If you are not using AWS Certificate Manager, you can skip this step and continue to the next one.
Create the certificate in AWS Certificate Manager
- Navigate to AWS Certificate Manager in the AWS console
- Request a public certificate
- Specify your domain (e.g., `*.services.example.com`)
- Choose DNS validation (recommended)
- Add the CNAME records provided by ACM to your DNS provider. Follow the official AWS guide for DNS validation. For detailed steps on adding CNAME records, see AWS documentation on DNS validation
- Wait for the certificate status to change to “Issued” (this may take 30 minutes or longer)
- Copy the certificate ARN for the next step (the format will be like `arn:aws:acm:region:account:certificate/certificate-id`)
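If you prefer the CLI, the console steps above map roughly onto the following AWS CLI calls (a sketch assuming the AWS CLI is configured with sufficient permissions; the domain and ARN are placeholders):

```shell
# Request a wildcard certificate with DNS validation; prints the certificate ARN.
aws acm request-certificate \
  --domain-name "*.services.example.com" \
  --validation-method DNS \
  --query CertificateArn --output text

# After adding the CNAME records to your DNS provider, poll until the
# status becomes ISSUED.
aws acm describe-certificate \
  --certificate-arn "arn:aws:acm:region:account:certificate/certificate-id" \
  --query Certificate.Status --output text
```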
Fill up the form to generate the terraform code
Submit when done.
- Create New Cluster
- Attach Existing Cluster
- `Cluster Name` - A name for your cluster.
- `Region` - The region where you want to create the cluster.
- `Network Configuration` - Choose between `New VPC` or `Existing VPC` depending on your use case.
- `Authentication` - This is how you are authenticated to AWS on your local machine. It's used to configure Terraform to authenticate with AWS.
- `S3 Bucket for Terraform State` - Terraform state will be stored in this bucket. It can be a preexisting bucket or a new bucket name; a new bucket will automatically be created by our script.
- `Control Plane Configuration` - Control plane URL and the database details. You can choose between `PostgreSQL on Kubernetes`, `Managed PostgreSQL (RDS)`, or `Existing PostgreSQL configuration` depending on your use case.
- `Load Balancer Configuration` - This configures the load balancer for your cluster. You can choose between a `Public` or `Private` load balancer; it defaults to `Public`. You can also add certificate ARNs and domain names for the load balancer, but these are optional.

Copy the curl command and execute it on your local machine
The platform will generate a curl command to download and execute the script. The script will take care of installing the prerequisites, downloading the Terraform code, and running it on your local machine to create the cluster. This will take around 40-50 minutes to complete.
Create DNS Record
The installation creates a service of type `LoadBalancer` in the `istio-system` namespace; its external address is the value to use in the DNS record below.

| Record Type | Record Name | Record Value |
|---|---|---|
| CNAME | CONTROL_PLANE_DOMAIN | LOADBALANCER_IP_ADDRESS |
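The load balancer address for the record above can be fetched with `kubectl` once the cluster is up (a sketch; it assumes your kubeconfig points at the new cluster):

```shell
# The EXTERNAL-IP / hostname column of the istio ingress gateway service
# is the value to use for the DNS record.
kubectl get svc -n istio-system
```

Note that on AWS the load balancer is typically exposed as a hostname rather than an IP, which is why a CNAME record is used.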
Attach the compute plane to the control plane
Go to Clusters. Click on Attach Existing Cluster and fill in the details of the control plane cluster. The key fields to fill in here are:
- `Cluster Name` - The name of the cluster.
- `Cluster Addons` - Unselect all the addons, as we installed them while bringing up the control plane.
- `Network Configuration` - Networking configuration of the control plane cluster.
- `Authentication` - This is how you are authenticated to AWS on your local machine. It's used to configure Terraform to authenticate with AWS.
- `S3 Bucket for Terraform State` - Terraform state will be stored in this bucket. It can be a preexisting bucket or a new bucket name. You can use the same bucket that we used for the control plane; just change the bucket key used for the Terraform state file.
- `Platform Features` - This decides which features (BlobStorage, ClusterIntegration, ParameterStore, DockerRegistry, and SecretsManager) will be enabled for your cluster. To read more on how these integrations are used in the platform, please refer to the platform features page.

Copy the curl command and execute it on your local machine
The platform will generate a curl command to download and execute the script. The script will take care of installing the prerequisites, downloading the Terraform code, and running it on your local machine to attach the cluster. This will take around 40-50 minutes to complete.
Verify the cluster is showing as connected in the platform
Start deploying workloads to your cluster
FAQ
Can I use cert-manager to add TLS to the load balancer and not use AWS Certificate Manager?
Can I use my own certificate and key files to add TLS to the load balancer?