Installation on Amazon Web Services via EKS
Overview
This guide covers the installation of CAST Imaging on Amazon Web Services (AWS) Elastic Kubernetes Service (EKS) using Helm charts.
Requirements
- Access to the Docker Hub registry - CAST Imaging Docker images are available as listed in the Docker images section below
- A clone of the latest release branch from the Git repository containing the Helm chart scripts: git clone https://github.com/CAST-Extend/com.castsoftware.castimaging-v3.kubernetessetup (to clone an older release, add the “-b x.x.x” flag with the desired release number).
- A valid CAST Imaging License
- Optional setup choices:
- Deploy the Kubernetes Dashboard (https://github.com/kubernetes/dashboard) to troubleshoot containers and manage the cluster resources.
- Setup Elastic File System (EFS) for a multi-analysis-node deployment (Amazon Elastic Block Store (EBS) is used by default)
- Use an external PostgreSQL instance (a PostgreSQL instance is provided as a Docker image and will be used by default)
 
Docker images
CAST Imaging is provided in a set of Docker images as follows:
Installation process
Before starting the installation, ensure that your Kubernetes cluster is running, that all the CAST Imaging Docker images are available in the registry, and that helm and kubectl are installed on your system.
Step 1 - EKS environment setup
- Create your EKS environment, see EKS - Cluster Setup
- Retrieve cluster credentials:
aws eks update-kubeconfig --region xx-xxxx-x --name my-cluster
- Install kubectl, see https://kubernetes.io/docs/tasks/tools/
- Install helm:
  - Binary download: https://github.com/helm/helm/releases
  - Documentation: https://helm.sh/docs/intro/quickstart
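Once installed, you can confirm that both tools work and that the cluster is reachable (assuming your kubeconfig was updated with the aws eks update-kubeconfig command above):
# Verify tool installation and cluster connectivity
helm version
kubectl get nodes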
 
Step 2 - Prepare and run the CAST Imaging installation
- Review and adjust the parameter values in the values.yaml file (located at the root of the cloned Git repository branch), in the sections delimited with # marks.
- Ensure you set the K8SProvider: option to EKS (see the snippet after this list)
- Run helm-install.bat|sh (depending on your base OS) located at the root of the cloned Git repository branch
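For example, the relevant entry in values.yaml:
K8SProvider: EKS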
Step 3 - Configure network settings
You will need to set up a reverse proxy, such as an Ingress Service or a web server (e.g., NGINX), to access the imaging-services “gateway” from outside (with a DNS record/FQDN such as dev.imaginghost.com). The DNS record/FQDN should also have an appropriate SSL certificate.
Another option is to use a CloudFront distribution.
If you want to use an Ingress
Set CreateIngress: true in values.yaml:
# Ingress & LoadBalancer creation (for console-gateway, extendproxy, mcp-server):
CreateIngress: true
Install the Ingress driver on the cluster:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"="nlb"
Create TLS Secret(s) using the certificate files associated with the DNS name you are planning to use (e.g. dev.imaginghost.com):
kubectl create secret tls tls-secret-cast --cert=mycertificatefolder\fullchain.pem --key=mycertificatefolder\privkey.pem -n castimaging-v3
# (fullchain.pem <=> tls.crt ; privkey.pem <=> tls.key)
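You can verify that the secret was created:
kubectl get secret tls-secret-cast -n castimaging-v3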
Optional - for certificates that cannot be verified (e.g., a self-signed certificate or one issued by an internal CA), the certificate must be stored in the CAST auth-service:
- set UseCustomTrustStore: true in values.yaml
- Insert the encoded certificate:
  - directly inside the auth.caCertificate variable in values.yaml
  - or using helm upgrade ... --set-file auth.caCertificate=ca.crt ... to override the variable value with the ca.crt file content
 
For example, in values.yaml:
UseCustomTrustStore: true
auth:
  caCertificate: |
    -----BEGIN CERTIFICATE-----
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    -----END CERTIFICATE-----
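To apply the --set-file variant from the command line, a sketch (the release name and chart path here are placeholders; the provided helm-upgrade.bat|sh script remains the standard way to apply changes):
helm upgrade castimaging . -n castimaging-v3 --set-file auth.caCertificate=ca.crt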
Final steps:
- Update the FrontEndHost: variable in the values.yaml file, e.g. with https://dev.imaginghost.com (see the example after this list)
- Apply the Helm chart changes by running helm-upgrade.bat|sh (depending on your base OS) located at the root of the cloned Git repository branch.
- Create a DNS record pointing at the reverse proxy ADDRESS. For an Ingress, this ADDRESS can be displayed using this command:
kubectl get ingress -n castimaging-v3
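For example, the values.yaml entry (assuming the dev.imaginghost.com DNS record used above):
FrontEndHost: https://dev.imaginghost.com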
If you want to use a CloudFront distribution
First get the imaging-services “gateway” service external host name by running kubectl get service -n castimaging-v3 console-gateway-service, which will return something similar to: a8ec2379b09fexxxxxxxxx-532570000.us-east-2.elb.amazonaws.com.
Create a new CloudFront entry in the AWS Console by clicking Create distribution:
- In the Origin block:
  - Set the Origin domain value to the imaging-services “gateway” service external host name, e.g. a8ec2379b09fexxxxxxxxx-532570000.us-east-2.elb.amazonaws.com
  - Set Protocol to HTTP only
  - Set HTTP port to 8090
  - Set Name to: castimaging-v3
- In the Web Application Firewall (WAF) block, select Do not enable security protections
- In the Settings block:
  - Set IPv6 to Off
  - Set Description (optional): free text description
  - Leave all other settings at their default
- Click the Create distribution button in the bottom right corner to complete the creation process
- Click the Behaviors tab in the newly created Distribution:
  - Select Default (*) and click Edit
  - In Viewer protocol policy, select HTTPS only
  - Save the changes
- In the General tab for the newly created Distribution, ensure that you copy the Distribution domain name value
- Now open the values.yaml file (located at the root of the cloned Git repository branch) and update the FrontEndHost variable with the Distribution domain name you copied previously, e.g. https://xxxxxxxxxxx.cloudfront.net
- Apply the Helm chart changes by running helm-upgrade.bat|sh (depending on your base OS) located at the root of the cloned Git repository branch. CAST Imaging will be available at https://xxxxxxxxxxx.cloudfront.net.
Step 4 - Install Extend Local Server (optional)
If you need to install Extend Local Server as an intermediary placed between CAST Imaging and CAST’s publicly available “Extend” ecosystem (https://extend.castsoftware.com), follow the instructions below. This step is optional; if it is not completed, CAST Imaging will access https://extend.castsoftware.com directly to obtain required resources.
- Retrieve the Extend Local Server external IP address by running kubectl get service -n castimaging-v3 extendproxy
- In values.yaml (located at the root of the cloned Git repository branch), set ExtendProxy.enable to true and update the ExtendProxy.exthostname variable with the external IP address:
ExtendProxy:
    enable: true
    exthostname: myextendhost.com
- Run helm-upgrade.bat|sh (depending on your base OS) located at the root of the cloned Git repository branch.
- Review the log of the extendproxy pod to find the Extend Local Server administration URL and API key (these are required for managing Extend Local Server and configuring CAST Imaging to use it - you can find out more about this in Extend Local Server). You can open the log file from the Kubernetes Dashboard (if you have chosen to install it). Alternatively, you can get the extendproxy pod name by running kubectl get pods -n castimaging-v3, then run kubectl logs -n castimaging-v3 castextend-xxxxxxxx to display the log.
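If you prefer a single command line, a small sketch (assuming a single extendproxy pod whose name starts with castextend):
# Fetch the extendproxy pod name and display its log
POD=$(kubectl get pods -n castimaging-v3 -o name | grep castextend)
kubectl logs -n castimaging-v3 $POD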
Step 5 - Initial start up configuration
When the install is complete, browse to the front-end URL configured in Step 3 (e.g. the CloudFront URL) and log in using the default local admin/admin credentials. You will be prompted to configure:
- your licensing strategy. Choose either a Named Application strategy (where each application you onboard requires a dedicated license key entered when you perform the onboarding), or a Contributing Developers strategy (a global license key based on the number of users).
- CAST Extend settings / Proxy settings (if you chose to install Extend Local Server (see Step 4 above) then you now need to input the URL and API key so that CAST Imaging uses it).

As a final check, browse to the URL below and ensure that you have at least one CAST Imaging Node Service, the CAST Dashboards and the CAST Imaging Viewer components listed:
https://xxxxxxxxxxx.cloudfront.net/admin/services

Step 6 - Configure authentication
Out-of-the-box, CAST Imaging is configured to use Local Authentication via a simple username/password system. Default login credentials are provided (admin/admin) with the global ADMIN profile so that the installation can be set up initially.
CAST recommends configuring CAST Imaging to use your enterprise authentication system such as LDAP or SAML Single Sign-on instead before you start to onboard applications. See Authentication for more information.
How to start and stop CAST Imaging
Use the following script files (located at the root of the cloned Git repository branch) to stop and start CAST Imaging:
- Util-ScaleDownAll.bat|sh
- Util-ScaleUpAll.bat|sh
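For example, on Linux (use the .bat variants on Windows):
# Stop all CAST Imaging pods
./Util-ScaleDownAll.sh
# Start them again later
./Util-ScaleUpAll.sh
# Check pod status
kubectl get pods -n castimaging-v3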
Optional setup choices
Install Kubernetes Dashboard
Please refer to the Kubernetes Dashboard documentation at https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/ .
Setup Elastic File System (EFS) for multiple analysis nodes
All pods use Amazon Elastic Block Store (EBS) by default. For the console-analysis-node StatefulSet, it is possible to configure EFS (Amazon Elastic File System, based on the efs.csi.aws.com driver) to enable file sharing between analysis nodes when multiple analysis nodes are required.
Prior to running the initial CAST Imaging installation (detailed above), follow these steps:
- Create a new EFS file system entry in the AWS Console by clicking Create file system
- Assign a Name
- Click Create file system in the bottom right corner
- In the newly created EFS, click the Access points tab and then Create access point:

- Enter the following configuration settings:
  - Details:
    - Name: castimaging-shared-datadir
    - Root directory path: /castimaging-shared-datadir
  - Root directory creation permissions:
    - Owner user ID: 10001
    - Owner group ID: 10001
    - Access point permissions: 0777
- Click Create access point in the bottom right corner
- Copy the newly created File System ID and Access point ID
- In the values.yaml located at the root of the cloned Git repository branch, update the EFSsystemID and EFSaccessPointID variables with the values copied previously and then set AnalysisNodeFS.enable to true (see the snippet after this list)
- Update the Security Group of the EFS (check its Network tab) to allow access (inbound rule on NFS port 2049) from the Security Group of the Node Instances/AutoScalingGroup
- Proceed with the CAST Imaging installation described above
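For reference, a values.yaml sketch of these settings (the file system and access point IDs are placeholders, and the exact nesting of the EFSsystemID and EFSaccessPointID variables is assumed - check your chart version):
EFSsystemID: fs-xxxxxxxxxxxxxxxxx
EFSaccessPointID: fsap-xxxxxxxxxxxxxxxxx
AnalysisNodeFS:
    enable: true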
Use an external PostgreSQL instance
If you do not want to use the PostgreSQL instance preconfigured in this Helm chart, you can disable it and configure an Amazon RDS for PostgreSQL database instead.
- Set up your Amazon RDS for PostgreSQL database (PostgreSQL 15 - 8 GB RAM minimum recommended, e.g. db.m5d.large)
- RDS must be configured with “Self managed” credentials
- master username: postgres
- set a password for the postgres user
- create a custom-pg15 Parameter Group in order to customize this parameter: rds.force_ssl = 0
- Once the RDS instance is created, apply the custom-pg15 Parameter Group to it and reboot it.
- Connect to RDS with the “postgres” superuser and execute this script to create the necessary CAST custom users/database:
CREATE USER operator WITH PASSWORD 'CastAIP';
GRANT rds_superuser TO operator;
CREATE USER guest WITH PASSWORD 'WelcomeToAIP';
GRANT ALL PRIVILEGES ON DATABASE postgres TO operator;
CREATE USER keycloak WITH PASSWORD 'keycloak';
CREATE DATABASE keycloak;
GRANT ALL PRIVILEGES ON DATABASE keycloak TO keycloak;
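For example, using psql (the endpoint is a placeholder, and create_cast_users.sql is a hypothetical file containing the statements above):
psql -h mydb-instance.xxxxxxxxxx.us-east-2.rds.amazonaws.com -p 5432 -U postgres -f create_cast_users.sql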
- In the values.yaml located at the root of the cloned Git repository branch (see the snippet after this list):
  - Set CastStorageService.enable to false (to disable the PostgreSQL instance server preconfigured by CAST)
  - Set CustomPostgres.enable to true
  - Set CustomPostgres.host and CustomPostgres.port to match your custom instance host name and port number
- Proceed with the CAST Imaging installation described above
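A values.yaml sketch of these settings (host and port values are placeholders; the nesting is assumed from the variable names above):
CastStorageService:
    enable: false
CustomPostgres:
    enable: true
    host: mydb-instance.xxxxxxxxxx.us-east-2.rds.amazonaws.com
    port: 5432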