Installation on Microsoft Azure via AKS
Overview
This guide covers the installation of CAST Imaging on Microsoft Azure Kubernetes Service (AKS) using Helm charts.
Requirements
- CAST Imaging Docker images downloaded and available in the registry - these are listed in the table below
- A clone of the appropriate branch of the Git repository (https://github.com/CAST-Extend/com.castsoftware.castimaging-v3.kubernetessetup), i.e. matching the version of CAST Imaging you want to deploy, containing the Helm chart scripts - for example, to clone the 3.2.3-funcrel release branch, use:
git clone -b 3.2.3 https://github.com/CAST-Extend/com.castsoftware.castimaging-v3.kubernetessetup
- A valid CAST Imaging license
- Optional setup choices:
  - Deploy the Kubernetes Dashboard (https://github.com/kubernetes/dashboard) to troubleshoot containers and manage the cluster resources
  - Set up Azure Files for a multi analysis-node deployment (Azure Disks - block storage - are used by default)
  - Use an external PostgreSQL instance (a PostgreSQL instance is provided as a Docker image and is used by default)
Docker images
CAST Imaging is provided in a set of Docker images as follows:
CAST Imaging component | Image name | URL
---|---|---
imaging-services | Gateway | https://hub.docker.com/r/castimaging/gateway
imaging-services | Control Panel | https://hub.docker.com/r/castimaging/admin-center
imaging-services | SSO Service | https://hub.docker.com/r/castimaging/sso-service
imaging-services | Auth Service | https://hub.docker.com/r/castimaging/auth-service
imaging-services | Console | https://hub.docker.com/r/castimaging/console
dashboards | Dashboards | https://hub.docker.com/r/castimaging/dashboards
analysis-node | Analysis Node | https://hub.docker.com/r/castimaging/analysis-node
imaging-viewer | ETL | https://hub.docker.com/r/castimaging/etl-service
imaging-viewer | AI Service | https://hub.docker.com/r/castimaging/ai-service
imaging-viewer | Viewer Server | https://hub.docker.com/r/castimaging/viewer
imaging-viewer | Neo4j | https://hub.docker.com/r/castimaging/neo4j
extend-local-server | Extend Proxy | https://hub.docker.com/r/castimaging/extend-proxy
utilities | Init Container | https://hub.docker.com/r/castimaging/init-util
Installation process
Before starting the installation, ensure that your Kubernetes cluster is running, that all the CAST Imaging Docker images are available in the registry, and that helm and kubectl are installed on your system.
Step 1 - AKS environment setup
We do not provide instructions for creating an Azure AKS cluster. Please refer to the Azure online documentation.
CAST Imaging also requires:
- Azure CLI to retrieve the cluster credentials (login with az login):
az aks get-credentials --resource-group my-resource-group --name my-cluster
- kubectl - see https://kubernetes.io/docs/tasks/tools/
- helm - see https://helm.sh/docs/intro/quickstart/ . The binary download is provided here: https://github.com/helm/helm/releases
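As a quick sanity check before proceeding, you can verify that the required command-line tools are on your PATH. This is an illustrative sketch (the tool names come from the requirements above); it only checks presence, not versions or login state:

```shell
# Check that the CLIs required for the installation are available on PATH.
for tool in az kubectl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```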
Step 2 - Prepare and run the CAST Imaging installation
- Review and adjust the parameter values in the values.yaml file (located at the root of the cloned Git repository branch) between the sections separated with # marks
- Ensure you set the K8SProvider: option to AKS
- When using a custom CA or self-signed SSL certificate, copy the contents into the relevant section of the file console-authenticationservice-configmap.yaml located at the root of the cloned Git repository branch, then set the UseCustomTrustStore: option to true in the values.yaml file
- Run helm-install.bat|sh (depending on your base OS) located at the root of the cloned Git repository branch
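For reference, the two settings mentioned above would look roughly like this in values.yaml. This is a sketch: the surrounding keys and exact placement depend on the chart version you cloned.

```yaml
# Target Kubernetes provider for this deployment
K8SProvider: AKS

# Set to true only if you pasted a custom CA / self-signed certificate
# into console-authenticationservice-configmap.yaml
UseCustomTrustStore: true
```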
Step 3 - Configure network settings
- Prepare a CDN-like service (Ingress Service, Application Gateway, or a web server such as NGINX) as a reverse proxy to host the imaging-services "gateway" service (with a DNS record/FQDN such as castimagingv3.com). The DNS record/FQDN should also have an appropriate SSL certificate.
- If you plan to use an Azure Application Gateway, instructions can be found in the file Azure-ApplicationGateway-for-CastImaging.pdf (located at the root of the cloned Git repository branch)
- Update the FrontEndHost: variable in the values.yaml file, e.g. with https://dev.imaginghost.com
- Apply the helm chart changes by running helm-upgrade.bat|sh (depending on your base OS) located at the root of the cloned Git repository branch
- Ensure you configure a redirection from the DNS record to the external IP
Step 4 - Install Extend Local Server (optional)
If you need to install Extend Local Server as an intermediary between CAST Imaging and CAST's publicly available "Extend" ecosystem (https://extend.castsoftware.com), follow the instructions below.
- Retrieve the Extend Local Server external IP address by running:
kubectl get service -n castimaging-v3 extendproxy
- In values.yaml (located at the root of the cloned Git repository branch), set ExtendProxy.enable to true and update the ExtendProxy.exthostname variable with the external IP address:
ExtendProxy:
  enable: true
  exthostname: EXTERNAL-IP
- Run helm-upgrade.bat|sh (depending on your base OS) located at the root of the cloned Git repository branch
- Review the log of the extendproxy pod to find the Extend Local Server administration URL and API key (these are required for managing Extend Local Server and configuring CAST Imaging to use it - you can find out more about this in Extend Local Server). You can open the log from the Kubernetes Dashboard (if you have chosen to install it). Alternatively, get the extendproxy pod name by running kubectl get pods -n castimaging-v3, then run kubectl logs -n castimaging-v3 castextend-xxxxxxxx to display the log.
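To pull just the external IP out of the kubectl get service output, you can filter the table with awk. The sample output below is illustrative (the real columns come from your cluster); the extraction itself is a plain text-processing sketch:

```shell
# Illustrative output of: kubectl get service -n castimaging-v3 extendproxy
sample='NAME          TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)
extendproxy   LoadBalancer   10.0.112.10   20.50.100.200   443:30443/TCP'

# EXTERNAL-IP is the fourth column of the data row (row 2)
printf '%s\n' "$sample" | awk 'NR==2 {print $4}'
# prints 20.50.100.200
```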
Step 5 - Initial start up configuration
When the install is complete, browse to the public/external URL and log in using the default local admin/admin credentials. You will be prompted to configure:
- Your licensing strategy: choose either a Named Application strategy (where each application you onboard requires a dedicated license key entered when you perform the onboarding) or a Contributing Developers strategy (a global license key based on the number of users)
- CAST Extend settings / proxy settings (if you chose to install Extend Local Server (see Step 4 above), you now need to input the URL and API key so that CAST Imaging uses it)
As a final check, browse to the URL below and ensure that you have at least one CAST Imaging Node Service, the CAST Dashboards and the CAST Imaging Viewer components listed:
https://<public or external URL>/admin/services
Step 6 - Configure authentication
Out of the box, CAST Imaging is configured to use Local Authentication via a simple username/password system. Default login credentials are provided (admin/admin) with the global ADMIN profile so that the installation can be set up initially.
CAST recommends configuring CAST Imaging to use your enterprise authentication system such as LDAP or SAML Single Sign-on instead before you start to onboard applications. See Authentication for more information.
How to start and stop CAST Imaging
Use the following script files (located at the root of the cloned Git repository branch) to stop and start CAST Imaging:
Util-ScaleDownAll.bat|sh
Util-ScaleUpAll.bat|sh
Optional setup choices
Install Kubernetes Dashboard
To install the Kubernetes Dashboard, run the command below. For more information, please refer to the Kubernetes Dashboard documentation at https://github.com/kubernetes/dashboard . Note that internet access is required to retrieve the Helm repository from https://kubernetes.github.io/dashboard .
- Add the helm repo to your local helm repository
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
- Run the helm upgrade
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
- For a Helm-based installation where kong is installed by the Helm chart, run:
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
- Access the dashboard via: https://localhost:8443
- Run the following command to generate the access token required for admin login, then log in to the dashboard and select the castimaging-v3 namespace from the dropdown menu to manage the CAST Imaging deployment:
kubectl -n kubernetes-dashboard create token admin-user
Setup Azure Files for multiple analysis-node(s)
All pods use Azure Disks (block storage) by default. For the console-analysis-node StatefulSet, it is possible to configure Azure Files (based on the file.csi.azure.com driver) to enable file sharing between analysis nodes when multiple analysis nodes are required.
Prior to running the initial CAST Imaging installation (detailed above), follow these steps:
- Set AnalysisNodeFS.enable to true in the values.yaml file located at the root of the cloned Git repository branch
- Proceed with the CAST Imaging installation described above
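The setting described above would look roughly like this in values.yaml (a sketch; exact key nesting depends on the chart version you cloned):

```yaml
# Switch the console-analysis-node StatefulSet storage to Azure Files
# (file.csi.azure.com) so multiple analysis nodes can share files
AnalysisNodeFS:
  enable: true
```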
Use an external PostgreSQL instance
If you do not want to use the PostgreSQL instance preconfigured in this helm chart, you can disable it and configure an Azure Database for PostgreSQL instead.
- Setup your Azure Database for PostgreSQL (PostgreSQL 15 - 8GB RAM minimum recommended, e.g. B2ms)
- Connect to the database with a superuser and execute this script to create the necessary CAST custom users/database:
CREATE USER operator WITH SUPERUSER PASSWORD 'CastAIP';
GRANT azure_pg_admin TO operator;
CREATE USER guest WITH PASSWORD 'WelcomeToAIP';
GRANT ALL PRIVILEGES ON DATABASE postgres TO operator;
CREATE USER keycloak WITH PASSWORD 'keycloak';
CREATE DATABASE keycloak;
GRANT ALL PRIVILEGES ON DATABASE keycloak TO keycloak;
- In the values.yaml file located at the root of the cloned Git repository branch:
  - Set CastStorageService.enable to false (to disable the PostgreSQL instance server preconfigured by CAST)
  - Set CustomPostgres.enable to true
  - Set CustomPostgres.host and CustomPostgres.port to match your custom instance host name and port number
- Proceed with the CAST Imaging installation described above
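Taken together, the values.yaml changes for an external PostgreSQL instance would look roughly like this. This is a sketch: the host value is a placeholder for your own Azure Database for PostgreSQL server.

```yaml
# Disable the PostgreSQL instance preconfigured by CAST
CastStorageService:
  enable: false

# Point CAST Imaging at the external Azure Database for PostgreSQL
CustomPostgres:
  enable: true
  host: my-server.postgres.database.azure.com   # placeholder host name
  port: 5432
```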