Deployment with Existing VPC

CREDEBL AWS Deployment

This release provisions AWS infrastructure for the CREDEBL project using Terraform within an existing VPC.

Prerequisites

  1. Terraform installed.

  2. AWS CLI configured with appropriate credentials.

  3. Access to the existing VPC ID and subnets.

  4. terraform.tfvars file with environment-specific values (DEV/QA/PROD).
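
A quick way to sanity-check these prerequisites from a terminal (the VPC ID and region below are placeholders for your own values):

    terraform -version
    aws sts get-caller-identity
    aws ec2 describe-vpcs --vpc-ids vpc-0123456789abcdef --region ap-south-1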

Directory Structure

terraform-script/
└── aws/
    ├── modules/          # Terraform modules
    └── platform/
        ├── main.tf       # Terraform resources for CREDEBL
        ├── variables.tf  # Input variables
        ├── backend.tf    # Backend configuration for remote state
        └── terraform.tfvars # Environment-specific values

Checkout to Release Version

Before proceeding with the deployment, make sure you are using the correct tagged release of the Terraform scripts.

  1. Clone the repository (if not already done):

    git clone https://github.com/credebl/install.git
  2. Navigate to the install directory:

    cd install
  3. Checkout to the release version used for this deployment:

    git checkout v2.1.2-alpha.1

Note: This ensures you are deploying the exact version of the Terraform scripts tested for CREDEBL infrastructure provisioning. Versions released after v2.1.2-alpha.1 are also compatible with this deployment setup.
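
To confirm the checkout, print the current tag:

    git describe --tags
    # expected output: v2.1.2-alpha.1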

Configuring Environment Variables

  1. Open the terraform.tfvars file in the platform directory:

  2. Add or update environment-specific values. For example:

    vpc_id          = "vpc-0123456789abcdef"
    public_subnets  = ["subnet-01234abcd", "subnet-05678efgh"]
    private_subnets = ["subnet-09876ijkl", "subnet-04321mnop"]
    region          = "ap-south-1"
    project_name    = "CREDEBL"
    environment     = "DEV"
    certificate_arn = "arn:aws:acm:ap-south-1:123456789012:certificate/abcd1234-ab12-cd34-ef56-abcdef123456"
    domain_name     = "example.com"

    Note: Make sure all required variables defined in variables.tf are set in terraform.tfvars.
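
    Hint: If you are unsure which subnet IDs belong to the existing VPC, they can be listed with the AWS CLI (VPC ID and region are placeholders):

    aws ec2 describe-subnets \
      --filters "Name=vpc-id,Values=vpc-0123456789abcdef" \
      --query "Subnets[].{ID:SubnetId,AZ:AvailabilityZone,CIDR:CidrBlock}" \
      --output table --region ap-south-1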

Initialising Terraform

  1. Navigate to the Terraform configuration directory:

    cd terraform-script/aws/platform
  2. Initialise Terraform to download required providers and configure the backend:

    terraform init

Note: Run terraform init whenever providers or modules are updated, or when switching directories.


Planning the Deployment

  1. Check the infrastructure changes before applying:

    terraform plan
  • Shows which resources will be created, changed, or destroyed.

  • Always review the plan output before applying.
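
Optionally, save the plan to a file and apply that exact plan later (standard Terraform flags):

    terraform plan -out=tfplan
    terraform show tfplan   # review the saved plan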


Applying the Deployment

  1. Provision the infrastructure:

    terraform apply
  2. Confirm the apply when prompted (yes).

  • Resources will be created in the existing VPC as defined in terraform.tfvars.
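
If you saved a plan file in the previous step, it can be applied directly; afterwards, terraform output prints any values exported by the modules (the exact output names depend on the modules):

    terraform apply tfplan
    terraform output   # e.g. ALB DNS name, security group IDs, EFS IDs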


Infrastructure Provisioned

This release will create the required infrastructure on AWS for the CREDEBL platform, using the existing VPC. The following services and components will be provisioned:

  • ECS Services

  • ECS Task Definitions

  • Application Load Balancer (ALB)

  • Amazon S3 Buckets

  • Security Groups (SG)

  • Elastic File System (EFS)
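
The provisioned resources can be cross-checked against the Terraform state and the AWS account; the cluster name below is a placeholder for the one created by the scripts:

    terraform state list
    aws ecs list-clusters --region ap-south-1
    aws ecs list-services --cluster <your-ecs-cluster-name> --region ap-south-1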

Post-Execution Steps

After Terraform execution completes successfully, perform the following manual steps:

1. Database Setup

You will need three PostgreSQL databases for the services:

  1. credo-controller – Used by the credo controller service.

  2. keycloak – Used by Keycloak for authentication and identity management.

  3. platform – Used by the main CREDEBL platform application.

You have two options for setting up these databases:

  • Option 1: Create the three PostgreSQL databases manually (see the sketch after this list).

  • Option 2: Use existing databases by providing their connection details in the environment files.
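
If you go with Option 1, a minimal sketch for creating the three databases with psql (host, port, and user are placeholders for your PostgreSQL instance):

    psql -h <DB_HOST> -p 5432 -U <DB_USER> -c 'CREATE DATABASE "credo-controller";'
    psql -h <DB_HOST> -p 5432 -U <DB_USER> -c 'CREATE DATABASE keycloak;'
    psql -h <DB_HOST> -p 5432 -U <DB_USER> -c 'CREATE DATABASE platform;'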


2. Create Environment Files

You will need to create three environment files for the services, using the environment prefix defined in your terraform.tfvars (e.g., dev, qa, or prod).

Example file names:

DEV-credebl.env
DEV-credo.env
DEV-keycloak.env

File references:

  • DEV-credebl.env → Refer to: https://github.com/credebl/platform/blob/main/.env.demo. Because the Terraform script provisions services on ECS Fargate, update or add the following environment variables while configuring DEV-credebl.env:

    # Fargate-specific script
    AFJ_AGENT_SPIN_UP=/agent-provisioning/AFJ/scripts/fargate.sh
    
    # AWS and infrastructure references
    AWS_ACCOUNT_ID=<your_aws_account_id>
    ECS_SECURITY_GROUP_ID=<ECS_CREDO_SG_ID_created_by_TF>
    S3_BUCKET_ARN=<credo_env_file_bucket_ARN>
    ECS_SUBNET_ID=<subnet_ID_for_CREDO_service>
    FILESYSTEMID=<CREDO_EFS_ID>
    AFJ_VERSION=ghcr.io/credebl/credo-controller:latest   # optional: specify a different image; defaults to the GHCR image
    INBOUND_TG_ARN=<CREDO_inbound_target_group_ARN>
    ADMIN_TG_ARN=<CREDO_admin_target_group_ARN>
    
    # Service Connect URLs for ECS services
    NATS_URL=nats://nats-1-4222.dev-credebl-namespace:4222
    REDIS_HOST=redis-sc.dev-credebl-namespace
    REDIS_PORT=6379

    Notes:

    • NATS_URL and REDIS_HOST point to the ECS Service Connect endpoints; the ECS Fargate services use these endpoints to communicate with each other.

    • The Terraform-created IDs and ARNs above can be read from the AWS console or looked up with the AWS CLI (see the sketch after the keycloak template below).

    This ensures the CREDEBL ECS service can start with the correct network, security, and service-connectivity configuration.

  • DEV-keycloak.env → Use the following template:

KEYCLOAK_ADMIN=admin
KEYCLOAK_ADMIN_PASSWORD=admin

KC_HTTP_ENABLED=true
KC_DB=postgres
KC_DB_URL=jdbc:postgresql://HOST:PORT/Database
KC_DB_USERNAME=DB_USERNAME
KC_DB_PASSWORD=DB_PASSWORD
KC_DB_URL_PORT=PORT
PROXY_ADDRESS_FORWARDING=true

KC_HOSTNAME_ADMIN_URL=https://KEYCLOAK_DOMAIN
KC_HOSTNAME_URL=https://KEYCLOAK_DOMAIN

KC_PROXY=edge
KC_HOSTNAME_STRICT=false
KC_LOG=console
KC_HOSTNAME_STRICT_HTTPS=false

KC_HTTPS_ENABLED=true

After Keycloak is running, the following kcadm.sh commands (run from the bin directory inside the Keycloak container) disable the SSL requirement on the master realm:

./kcadm.sh config credentials --server http://0.0.0.0:8080 --realm master --user admin
./kcadm.sh update realms/master -s sslRequired=NONE

Note: Replace placeholders like HOST, PORT, Database, and KEYCLOAK_DOMAIN with actual values.
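
Hint: The Terraform-created values referenced in DEV-credebl.env (security group ID, EFS ID, target group ARNs) can be read from the AWS console, or looked up with the AWS CLI as sketched below; the name filters are illustrative and may need adjusting:

    aws ec2 describe-security-groups --filters "Name=group-name,Values=*credo*" \
      --query "SecurityGroups[].{Name:GroupName,ID:GroupId}" --output table
    aws efs describe-file-systems \
      --query "FileSystems[].{Name:Name,ID:FileSystemId}" --output table
    aws elbv2 describe-target-groups \
      --query "TargetGroups[].{Name:TargetGroupName,ARN:TargetGroupArn}" --output table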


3. Upload Environment Files to S3

After creating the three .env files, upload them to the S3 bucket created by Terraform. The bucket name is:

credebl-dev-env-file-bucket

Note: Replace dev with your environment name if different (e.g., qa, prod).
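
For example, using the AWS CLI (file names follow the DEV prefix used above; adjust both to your environment):

    aws s3 cp DEV-credebl.env  s3://credebl-dev-env-file-bucket/
    aws s3 cp DEV-credo.env    s3://credebl-dev-env-file-bucket/
    aws s3 cp DEV-keycloak.env s3://credebl-dev-env-file-bucket/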

4. Bastion and EFS Configuration

  1. Create a Bastion Server

    • You can either create a new Bastion EC2 instance in the same VPC or use an existing one.

    • Ensure the Bastion host has access to both NATS EFS and CREDO EFS created by Terraform.

  2. Mount EFS File Systems on Bastion

    • Mount both EFS volumes:

      • NATS EFS

      • CREDO EFS

    • Example commands to mount the EFS volumes (the EFS mount helper requires the amazon-efs-utils package; replace fs-xxxxxxx and fs-yyyyyyy with the actual file system IDs):

      sudo mkdir -p /mnt/nats-efs /mnt/credo-efs
      sudo mount -t efs fs-xxxxxxx:/ /mnt/nats-efs
      sudo mount -t efs fs-yyyyyyy:/ /mnt/credo-efs
  3. Configure the EFS Contents

    • In CREDO EFS:

      • Create a folder named token inside the mounted directory:

        mkdir -p /mnt/credo-efs/token
    • In NATS EFS:

      • Place the NATS configuration (nats.config, with NKeys authentication enabled, as described in Step 5.3) in the mounted directory (e.g., /mnt/nats-efs).

5. Start Services

After completing the Bastion and EFS setup, the services can now be started in the correct order.

Step 5.1: Start Keycloak Service

  1. Start the Keycloak service first.

  2. Configure Keycloak following the guide: Keycloak Setup Documentation

  3. After configuring Keycloak, update the DEV-credebl.env file with the Keycloak credentials. For example:

    KEYCLOAK_DOMAIN=https://KEYCLOAK_DOMAIN/
    KEYCLOAK_ADMIN_URL=https://KEYCLOAK_DOMAIN
    KEYCLOAK_MASTER_REALM=master
    KEYCLOAK_MANAGEMENT_CLIENT_ID=adminClient
    KEYCLOAK_MANAGEMENT_CLIENT_SECRET=
    KEYCLOAK_REALM=credebl-platform

Step 5.2: Start Seed Service

  1. Start the Seed Service. This service is used only once to populate initial data in the database.

  2. Monitor the logs. You should see a message like:

    The seed command has been executed
  3. Once the seed command completes, stop the seed service, as it is not required to run continuously.

Note: The Seed Service writes initial data to your database (CREDEBL tables) and will not be needed after this one-time execution.
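
If the Seed Service logs are sent to CloudWatch Logs (the usual setup for ECS tasks using the awslogs driver), they can be followed from the CLI; the log group name below is a placeholder:

    aws logs tail /ecs/<seed-service-log-group> --follow --region ap-south-1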

Step 5.3: Start NATS, Redis, and Remaining Services

  1. Start the NATS service first.

    • Ensure NATS is correctly configured with nats.config and that NKeys authentication is enabled.

  2. Once NATS is running, start the Redis service.

  3. After Redis is running, you can start all other services except the following, which should not be started yet:

    • agent-service

    • agent-provisioning-service

Note: agent-service and agent-provisioning-service will be started later, after the main platform services are confirmed to be healthy.

This ensures the correct service startup order:

  Keycloak → Seed Service → NATS → Redis → All remaining services (excluding agent-related services)
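
On ECS, "starting" a service means raising its desired count above zero; a sketch of starting NATS and Redis in order with the AWS CLI, Keycloak and the Seed Service having already been handled in the previous steps (cluster and service names are placeholders for the ones created by Terraform):

    CLUSTER=<your-ecs-cluster-name>
    aws ecs update-service --cluster "$CLUSTER" --service <nats-service-name>  --desired-count 1
    aws ecs update-service --cluster "$CLUSTER" --service <redis-service-name> --desired-count 1
    # ...then the remaining services, leaving agent-service and
    # agent-provisioning-service at a desired count of 0 for now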

Step 5.4: Start Agent-Provisioning and Agent Services

  1. Start the agent-provisioning service first.

  2. Once the agent-provisioning service has started successfully, start the agent service.

    • This executes a script that provisions the necessary agent resources in the CREDO Controller cluster (it creates the credo-controller service).

6. Verification & Validation

After all services have been started successfully, perform the following checks to verify that the infrastructure and services are healthy and functioning correctly.

  1. Check ECS Services Health: Verify that all ECS services are in the Running state.

  2. Check Application Load Balancer (ALB):

    • The Target Groups are healthy.

    • Each registered target shows “Healthy” status.

If all the above checks pass, your CREDEBL infrastructure is successfully provisioned and all services are up and running on AWS.
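
These checks can also be run from the CLI; a sketch with placeholder cluster, service, and target group names:

    aws ecs describe-services --cluster <your-ecs-cluster-name> --services <service-name> \
      --query "services[].{Name:serviceName,Status:status,Running:runningCount,Desired:desiredCount}"
    aws elbv2 describe-target-health --target-group-arn <target-group-arn>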

7. Troubleshooting & Firewall Configuration

If your existing VPC has a firewall or network restrictions that block outbound internet access, some resources may fail to pull container images or communicate between services.


7.1 Whitelist Required Domains

If services cannot connect to the internet to download images or access container registries, whitelist the following domains in your firewall policy or network access rules:

Domains to whitelist, by purpose:

  • AWS ECR (Public): *.ecr.aws, dkr.ecr.us-east-1.amazonaws.com, .cloudfront.net, .s3.amazonaws.com

  • GitHub Container Registry (GHCR): ghcr.io

  • Redis Image Repository: registry-1.docker.io

  • NATS Image Repository: registry-1.docker.io

  • Keycloak Image Repository: quay.io

Hint: If your environment uses a proxy, ensure these domains are accessible via the proxy for ECS to successfully pull container images.
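
A quick way to test outbound access from a host inside the VPC (for example, the Bastion):

    for host in ghcr.io registry-1.docker.io quay.io; do
      curl -sI --connect-timeout 5 "https://$host" > /dev/null && echo "$host reachable" || echo "$host BLOCKED"
    done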


7.2 Fixing Service Connectivity Issues

If ECS services are unable to connect to NATS or Redis, or if the ALB health checks repeatedly show targets as unhealthy, review and update your Security Group (SG) configurations.

  1. Identify the Security Groups created by Terraform:

    • credo-sg

    • application-sg (API-gateway SG)

    • studio-sg

    • keycloak-sg

  2. For each service, add an Inbound Rule:

    • Port: Service-specific port (for example, 8080 for keycloak service, 3000 for frontend, etc.)

    • Source: VPC CIDR range (e.g., 10.0.0.0/16)

    This allows internal traffic between ECS services and ALB within the same VPC.
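
    The same inbound rule can be added with the AWS CLI; the security group ID, port, and CIDR below are placeholders:

    aws ec2 authorize-security-group-ingress \
      --group-id <sg-id> --protocol tcp --port 8080 --cidr 10.0.0.0/16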


7.3 Fixing NATS and Redis Service Connect Issues

If ECS Service Connect is not functioning properly or other services cannot communicate with NATS or Redis, add explicit inbound rules to their Security Groups:

  • NATS: port 4222, source VPC CIDR (e.g., 10.0.0.0/16). Allows ECS services to connect to NATS.

  • Redis: port 6379, source VPC CIDR (e.g., 10.0.0.0/16). Allows ECS services to connect to Redis.
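
As a sketch with the AWS CLI (the security group IDs are placeholders for the NATS and Redis SGs created by Terraform):

    aws ec2 authorize-security-group-ingress \
      --group-id <nats-sg-id> --protocol tcp --port 4222 --cidr 10.0.0.0/16
    aws ec2 authorize-security-group-ingress \
      --group-id <redis-sg-id> --protocol tcp --port 6379 --cidr 10.0.0.0/16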


After updating the rules or firewall, re-run your ECS services and verify:

  • All container images are successfully pulled.

  • NATS, Redis, and ALB health checks are now Healthy.

  • Internal service-to-service communication works without timeouts or connection errors.
