Utility Setup
===============================================================

In order to deploy the Utility, you will need to complete the following steps:

- Upload the Utility Daml code to your validator node.
- Optionally deploy the Utility UI.

Upload the Utility Daml code
---------------------------------------------------------------

The Utility Daml code is shipped as a set of DAR packages, which need to be
uploaded to the node.

Setup
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For security purposes, the gRPC ports are not exposed by default in your node.
In order to upload the DAR files, you will first need to temporarily port
forward the ``participant`` service to your local machine (specifically, the
Admin API port).

To connect to services running in Kubernetes, use k8s port forwarding to map an
internal port of a running pod to a local port. The Canton Admin API, which
handles gRPC communication, is usually exposed on port 5002 of the
``participant`` service.

In one terminal session where ``kubectl`` is connected to the running cluster,
port forward as follows, substituting the participant pod name and namespace::

    kubectl port-forward <participant-pod-name> -n <namespace> 5002:5002

The ports specified in the command above are ``<local port>:<pod port>``.

To check that this is working, run a gRPC command against the forwarded port on
your local machine (in this example, list all DARs on the participant)::

    grpcurl -plaintext localhost:5002 com.digitalasset.canton.admin.participant.v30.PackageService.ListDars

Create a separate party for the Utility (Optional)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

We recommend creating a party for the Utility workflows separate from the
Validator Operator party created during the `Validator setup `_. The Utility
works with the Validator Operator party, but that party is also used to pay for
traffic on behalf of all other parties hosted on the validator and receives
liveness rewards. Separating these concerns facilitates easier accounting and
management.

To create a new party and associate it with a user:

1. Create a user in the IAM system configured for the validator.
2. Log into the Wallet UI with this new user.
3. Click **Onboard yourself**.

The validator will allocate a fresh party ID to the logged-in user, which can
be used with the Utility. Alternatively, you can create a party directly via
the gRPC or JSON API, as shown below.
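For example, a new party can be allocated with ``grpcurl`` against the Ledger
API's party management service. The sketch below is illustrative: it assumes
the participant's Ledger API (not the Admin API) has also been port forwarded,
here on port 5001, and the party hint ``utility-operator`` is an arbitrary
example::

    # Assumes the Ledger API port (often 5001) is forwarded, analogous to the
    # Admin API forwarding above; adjust the port and token to your setup.
    grpcurl -plaintext \
      -H "Authorization: Bearer ${jwt_token}" \
      -d '{"party_id_hint": "utility-operator"}' \
      localhost:5001 \
      com.daml.ledger.api.v2.admin.PartyManagementService/AllocateParty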
Upload the DARs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The bundle files containing the DAR packages for the Utility are available in
`JFrog `_. Download version ``0.3.5`` of the bundle.

They can be uploaded to a participant node using the corresponding `gRPC
Admin-API endpoint `_. Here is an example ``upload-dar.sh`` script for
uploading DARs; note that the ``UploadDar`` request shape varies between Canton
versions, so treat the payload below as a sketch to validate against your
participant's Admin API::

    #!/bin/bash

    DAR_DIRECTORY="dars"
    jwt_token=""
    canton_admin_api_url="${PARTICIPANT_HOST}:${CANTON_ADMIN_GRPC_PORT}"

    canton_admin_api_grpc_base_service="com.digitalasset.canton.admin.participant.v30"
    canton_admin_api_grpc_package_service=${canton_admin_api_grpc_base_service}".PackageService"

    # Compact a JSON value read from an argument or stdin.
    json() { declare input=${1:-$(</dev/stdin)}; jq -c . <<< "${input}"; }

    # Upload every DAR in the directory. The payload field names below are an
    # assumption; check the PackageService proto of your Canton version.
    for dar in "${DAR_DIRECTORY}"/*.dar; do
      echo "Uploading ${dar}..."
      json "{\"dars\": [{\"bytes\": \"$(base64 -w0 "${dar}")\"}]}" |
        grpcurl -plaintext -H "Authorization: Bearer ${jwt_token}" -d @ \
          "${canton_admin_api_url}" "${canton_admin_api_grpc_package_service}/UploadDar"
    done

Deploy the Utility UI
---------------------------------------------------------------

Create a new application in your IAM, following the validator documentation for
`Auth0 `_ or an `External OIDC Provider `_. Specifically, create a new
application, similar to the wallet/CNS ones, named 'Utility UI'. Once this has
been created, update your ``AUTH_CLIENT_ID`` environment variable to be the
Client ID of that new application.

Determine the Utility operator party
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In order to deploy the UI, you will need to specify the operator party.
Depending on the environment, the Utility operator party is:

+-------------+----------------------------------------------------------------------------------------------------------+
| Environment | Utility Operator Party                                                                                   |
+=============+==========================================================================================================+
| DevNet      | auth0_007c65f857f1c3d599cb6df73775::12205bba10e5119890e1c2c1dca3a63ef996920ca6bf70075cca3a6d70c4aeeb9da9 |
+-------------+----------------------------------------------------------------------------------------------------------+
| TestNet     | auth0_007c66019993301e3ed49d0e36e9::12206268795b181eafd1432facbb3a3c5711f1f8b743ea0e9c0050b32126b33071fa |
+-------------+----------------------------------------------------------------------------------------------------------+
| MainNet     | auth0_007c6643538f2eadd3e573dd05b9::12205bcc106efa0eaa7f18dc491e5c6f5fb9b0cc68dc110ae66f4ed6467475d7c78e |
+-------------+----------------------------------------------------------------------------------------------------------+

Download the Docker image
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Utility UI resides in the ``canton-network-utility-docker`` JFrog
repository. The Docker images are accessible via
``digitalasset-canton-network-utility-docker.jfrog.io``. For example, version
``0.4.1`` is available via
``digitalasset-canton-network-utility-docker.jfrog.io/frontend:0.4.1``.

Note that you will have to log into this repository before attempting to pull
down the image::

    docker login digitalasset-canton-network-utility-docker.jfrog.io -u "<username>" -p "<password>"

To allow your cluster to pull the necessary artifacts, you will need to create
a Kubernetes secret, and patch your service account to use that secret when
pulling images in that namespace. For example::

    kubectl create secret docker-registry utility-cred \
      --docker-server=digitalasset-canton-network-utility-docker.jfrog.io \
      --docker-username=${ARTIFACTORY_USER} \
      --docker-password=${ARTIFACTORY_PASSWORD} \
      -n $NAMESPACE

    kubectl patch serviceaccount default -n $NAMESPACE \
      -p '{"imagePullSecrets": [{"name": "utility-cred"}]}'
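Before deploying, you can verify that the credentials and the image pull secret
work; the commands below only assume the image version shown above::

    # Pull the UI image locally with the same credentials
    docker pull digitalasset-canton-network-utility-docker.jfrog.io/frontend:0.4.1

    # Confirm the pull secret is now referenced by the default service account
    kubectl get serviceaccount default -n $NAMESPACE \
      -o jsonpath='{.imagePullSecrets[*].name}'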
Deploy
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The UI Docker image expects the following environment variables to be set in
the container:

+-----------------------------------+-------------------------------------------------------------+
| Environment Variable              | Description                                                 |
+===================================+=============================================================+
| ``AUTH_AUTHORITY``                | Your OIDC-compatible IAM URL.                               |
+-----------------------------------+-------------------------------------------------------------+
| ``AUTH_CLIENT_ID``                | The client ID of your UI application.                       |
+-----------------------------------+-------------------------------------------------------------+
| ``AUTH_AUDIENCE``                 | The required audience of the participant.                   |
+-----------------------------------+-------------------------------------------------------------+
| ``UTILITY_APP_OPERATOR_PARTY_ID`` | Set as DA's operator party for the target environment       |
|                                   | (see "Determine the Utility operator party" above).         |
+-----------------------------------+-------------------------------------------------------------+

Using your traffic management solution of choice, you need to configure routing
for the following requests (assuming ports are the default set in the CN helm
charts):

a. ``/api/validator/``

   * Route to ``validator-app`` on port ``5003``.
   * Should be identical to the configuration required for the Wallet and ANS
     UIs.

b. ``/api/json-api/``

   * Route to the participant on port ``7575``.
   * match: ``^/api/json-api(/|$)(.*)$``
   * rewrite: ``/\2``
   * Starting from version ``0.3.5``, if you would like to avoid the path
     rewrite above, you can set the helm chart value of
     ``jsonApiServerPathPrefix`` to ``/api/json-api`` in the
     ``participant-values.yaml`` file when you are setting up your validator
     node.
   * Details on how to set up the validator node can be found in the
     `Validator node setup documentation `_.

The container port for the frontend is ``8080``.

Example deployment manifest
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Here is an example deployment (this is the same regardless of Ingress/Load
Balancer/Traffic Manager)::

    # Create the deployment YAML file
    cat <<EOF > ui-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: utility-ui
      name: utility-ui
      namespace: $NAMESPACE
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: utility-ui
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: utility-ui
        spec:
          containers:
            - name: utility-ui
              image: "$IMAGE_LOCATION:$IMAGE_VERSION"
              env:
                - name: AUTH_AUTHORITY
                  value: ""  # your OIDC-compatible IAM URL
                - name: AUTH_CLIENT_ID
                  value: ""  # the client ID of the 'Utility UI' IAM application
                - name: AUTH_AUDIENCE
                  value: "https://$HOST_ENV_VARIABLE"
                - name: UTILITY_APP_OPERATOR_PARTY_ID
                  value: "$OPERATOR_ID"
              ports:
                - containerPort: 8080
                  name: http
                  protocol: TCP
              resources:
                requests:
                  cpu: 0.1
                  memory: 240Mi
                limits:
                  cpu: 1
                  memory: 1536Mi
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: utility-ui
      namespace: $NAMESPACE
    spec:
      selector:
        app: utility-ui
      ports:
        - name: http
          port: 8080
          protocol: TCP
    EOF
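To apply the manifest and check that the UI comes up, a minimal sketch using
the file generated above::

    kubectl apply -f ui-deployment.yaml

    # Wait for the UI pod to become ready
    kubectl rollout status deployment/utility-ui -n $NAMESPACE
    kubectl get pods -n $NAMESPACE -l app=utility-ui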
Example Ingresses
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The following are ingresses that have been used/tested internally or provided
by clients. They may not work directly out of the box for your environment, but
they should provide valuable starting points/reference points to assist in
configuring your Ingress.

Example Nginx Ingress (GCP)::

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: utility-ingress
      namespace: $NAMESPACE
      annotations:
        nginx.ingress.kubernetes.io/use-regex: "true"
        nginx.ingress.kubernetes.io/ssl-redirect: "false"
        nginx.ingress.kubernetes.io/rewrite-target: /$2
    spec:
      tls:
        - hosts:
            - utility.${HOST_ENV_VARIABLE}
          secretName: ${SECRET_NAME}
      ingressClassName: nginx
      rules:
        - host: utility.${HOST_ENV_VARIABLE}
          http:
            paths:
              - path: /()(.*)
                pathType: Prefix
                backend:
                  service:
                    name: utility-ui
                    port:
                      number: 8080
              - path: /api/json-api(/|$)(.*)
                pathType: Prefix
                backend:
                  service:
                    name: participant
                    port:
                      number: 7575
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: utility-ingress-validator
      namespace: $NAMESPACE
      annotations:
        nginx.ingress.kubernetes.io/ssl-redirect: "false"
        nginx.org/rewrites: "serviceName=validator-app rewrite=/api/validator/"
    spec:
      tls:
        - hosts:
            - utility.${HOST_ENV_VARIABLE}
          secretName: ${SECRET_NAME}
      ingressClassName: nginx
      rules:
        - host: utility.${HOST_ENV_VARIABLE}
          http:
            paths:
              - path: /api/validator/
                pathType: Prefix
                backend:
                  service:
                    name: validator-app
                    port:
                      number: 5003

Example Nginx reverse proxy (AWS)::

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-reverse-proxy
      namespace: validator
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx-reverse-proxy
      template:
        metadata:
          labels:
            app: nginx-reverse-proxy
        spec:
          containers:
            - name: nginx
              image: nginx:1.27.3
              ports:
                - containerPort: 80
              volumeMounts:
                - name: config-volume
                  mountPath: /etc/nginx/nginx.conf
                  subPath: nginx.conf  # ConfigMap where the actual routing is defined
          volumes:
            - name: config-volume
              configMap:
                name: nginx-config
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-reverse-proxy
      namespace: validator
    spec:
      selector:
        app: nginx-reverse-proxy
      ports:
        - protocol: TCP
          port: 80
          targetPort: 80
      type: NodePort  # ALB seems to prefer this (https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/1695)
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-config
      namespace: validator
    data:
      nginx.conf: |
        events {}
        http {
          server {
            listen 80;

            # Requirements for Utility UI
            location /api/json-api/ {
              # Proxy the request to the participant on port 7575; the trailing
              # slash on proxy_pass strips the /api/json-api/ prefix
              proxy_pass http://participant.validator.svc.cluster.local:7575/;
            }
          }
        }
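To sanity-check the reverse proxy before wiring it into the load balancer, you
can port forward its service and confirm that the ``/api/json-api/`` prefix is
stripped (the ``readyz`` path matches the health check used by the ALB ingress
below; the local port 8088 is arbitrary)::

    kubectl port-forward -n validator svc/nginx-reverse-proxy 8088:80 &

    # Should reach the participant's JSON API as /readyz
    curl -fsS http://localhost:8088/api/json-api/readyz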
Example ALB Ingress (AWS), used together with the Nginx reverse proxy above::

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: validator-ingress-utility-api
      namespace: validator
      annotations:
        alb.ingress.kubernetes.io/certificate-arn: "{{ cert_arn }}"
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
        alb.ingress.kubernetes.io/ssl-redirect: '443'
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/target-type: ip
        alb.ingress.kubernetes.io/group.name: "{{ alb_group }}"
        alb.ingress.kubernetes.io/healthcheck-path: /api/validator/readyz
        alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
        alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
        alb.ingress.kubernetes.io/success-codes: '200-399'
        alb.ingress.kubernetes.io/healthy-threshold-count: '2'
        alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
        alb.ingress.kubernetes.io/group.order: '7'
    spec:
      ingressClassName: alb
      rules:
        - host: "utility.validator.{{ validator_hostname }}"
          http:
            paths:
              - path: /api/validator/*
                pathType: ImplementationSpecific
                backend:
                  service:
                    name: validator-app
                    port:
                      number: 5003
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: validator-ingress-utility-participant-api
      namespace: validator
      annotations:
        alb.ingress.kubernetes.io/certificate-arn: "{{ cert_arn }}"
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
        alb.ingress.kubernetes.io/ssl-redirect: '443'
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/target-type: ip
        alb.ingress.kubernetes.io/group.name: "{{ alb_group }}"
        alb.ingress.kubernetes.io/healthcheck-path: /api/json-api/readyz
        alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
        alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
        alb.ingress.kubernetes.io/success-codes: '200-399'
        alb.ingress.kubernetes.io/healthy-threshold-count: '2'
        alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
        alb.ingress.kubernetes.io/group.order: '8'
    spec:
      ingressClassName: alb
      rules:
        - host: "utility.validator.{{ validator_hostname }}"
          http:
            paths:
              - path: /api/json-api/*
                pathType: ImplementationSpecific
                backend:
                  service:
                    name: nginx-reverse-proxy
                    port:
                      number: 80
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: validator-ingress-utility
      namespace: validator
      annotations:
        alb.ingress.kubernetes.io/certificate-arn: "{{ cert_arn }}"
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
        alb.ingress.kubernetes.io/ssl-redirect: '443'
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/target-type: ip
        alb.ingress.kubernetes.io/group.name: "{{ alb_group }}"
        alb.ingress.kubernetes.io/group.order: '9'
    spec:
      ingressClassName: alb
      rules:
        - host: "utility.validator.{{ validator_hostname }}"
          http:
            paths:
              - path: /*
                pathType: ImplementationSpecific
                backend:
                  service:
                    name: utility-ui
                    port:
                      number: 8080
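Once the ingresses are applied, confirm that they were admitted and that the
routed endpoints respond; the hostname below uses the same template variable as
the manifests above, and the ``readyz`` paths match the configured health
checks::

    kubectl get ingress -n validator

    # Spot-check the routed endpoints through the load balancer
    curl -fsS "https://utility.validator.{{ validator_hostname }}/api/validator/readyz"
    curl -fsS "https://utility.validator.{{ validator_hostname }}/api/json-api/readyz"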