Deployment with Existing VPC
CREDEBL AWS Deployment
This release provisions AWS infrastructure for the CREDEBL project using Terraform within an existing VPC.
Prerequisites
Terraform installed.
AWS CLI configured with appropriate credentials.
Access to the existing VPC ID and subnets.
A terraform.tfvars file with environment-specific values (DEV/QA/PROD).
Directory Structure
terraform-script/
└── aws/
    ├── modules/              # Terraform modules
    └── platform/
        ├── main.tf           # Terraform resources for CREDEBL
        ├── variables.tf      # Input variables
        ├── backend.tf        # Backend configuration for remote state
        └── terraform.tfvars  # Environment-specific values
Checkout to Release Version
Before proceeding with the deployment, make sure you are using the correct tagged release of the Terraform scripts.
Clone the repository (if not already done):
git clone https://github.com/credebl/install.git
Navigate to the install directory:
cd install
Check out the release version used for this deployment:
git checkout v2.1.2-alpha.1
Note: This ensures you are deploying the exact version of the Terraform scripts tested for CREDEBL infrastructure provisioning. All versions released after v2.1.2-alpha.1 will also be compatible with this deployment setup.
Configuring Environment Variables
Open the terraform.tfvars file in the platform directory:
Add or update environment-specific values. For example:
vpc_id          = "vpc-0123456789abcdef"
public_subnets  = ["subnet-01234abcd", "subnet-05678efgh"]
private_subnets = ["subnet-09876ijkl", "subnet-04321mnop"]
region          = "ap-south-1"
project_name    = "CREDEBL"
environment     = "DEV"
certificate_arn = "arn:aws:acm:us-west-1:123456789012:certificate/abcd1234-ab12-cd34-ef56-abcdef123456"
domain_name     = "example.com"
Note: Make sure all required variables defined in variables.tf are set in terraform.tfvars.
Initialising Terraform
Navigate to the Terraform configuration directory:
cd terraform-script/aws/platform
Initialise Terraform to download the required providers and configure the backend:
terraform init
Note: Run terraform init whenever providers or modules are updated, or when switching directories.
Planning the Deployment
Check the infrastructure changes before applying:
terraform plan
This ensures resources will be created or modified as expected.
Always review the plan output before applying.
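Optionally, the plan can be written to a file and reviewed before applying, using standard Terraform CLI flags:
# Save the plan, then inspect exactly what will change
terraform plan -out=tfplan
terraform show tfplan
A saved plan can later be applied exactly as reviewed with terraform apply tfplan.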
Applying the Deployment
Provision the infrastructure:
terraform apply
Confirm the apply when prompted (yes).
Resources will be created in the existing VPC as defined in terraform.tfvars.
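Several of the post-execution steps below reference IDs and ARNs of resources Terraform creates (security groups, EFS file systems, target groups, the S3 bucket). If the module defines outputs for them, they can be listed after the apply:
# List outputs defined by the module (output names depend on the module)
terraform output
# Machine-readable form, handy when filling in the environment files later
terraform output -json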
Infrastructure Provisioned
This release will create the required infrastructure on AWS for the CREDEBL platform, using the existing VPC. The following services and components will be provisioned:
ECS Services
ECS Task Definitions
Application Load Balancer (ALB)
Amazon S3 Buckets
Security Groups (SG)
Elastic File System (EFS)
Post-Execution Steps
After Terraform execution completes successfully, perform the following manual steps:
1. Database Setup
You will need three PostgreSQL databases for the services:
credo-controller – Used by the credo controller service.
keycloak – Used by Keycloak for authentication and identity management.
platform – Used by the main CREDEBL platform application.
You have two options for setting up these databases:
Option 1: Create the databases manually on a new PostgreSQL instance.
Option 2: Use an existing PostgreSQL instance by providing its connection details in the environment files.
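For Option 1, a minimal sketch using psql (assuming a reachable PostgreSQL instance and an admin role; the host and user are placeholders):
# Create the three databases required by the services
psql -h <db-host> -U <db-admin-user> -d postgres <<'SQL'
CREATE DATABASE "credo-controller";
CREATE DATABASE keycloak;
CREATE DATABASE platform;
SQL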
2. Create Environment Files
You will need to create three environment files for the services, using the environment prefix defined in your terraform.tfvars (e.g., dev, qa, or prod).
Example file names:
DEV-credebl.env
DEV-credo.env
DEV-keycloak.env
File references:
DEV-credebl.env → Refer to: https://github.com/credebl/platform/blob/main/.env.demo
While configuring DEV-credebl.env for CREDEBL, note that the Terraform script provisions services on ECS Fargate, so you must update/add the following environment variables:
# Fargate-specific script
AFJ_AGENT_SPIN_UP=/agent-provisioning/AFJ/scripts/fargate.sh
# AWS and infrastructure references
AWS_ACCOUNT_ID=<your_aws_account_id>
ECS_SECURITY_GROUP_ID=<ECS_CREDO_SG_ID_created_by_TF>
S3_BUCKET_ARN=<credo_env_file_bucket_ARN>
ECS_SUBNET_ID=<subnet_ID_for_CREDO_service>
FILESYSTEMID=<CREDO_EFS_ID>
# Optional: specify a different image; the default uses the GHCR image
AFJ_VERSION=ghcr.io/credebl/credo-controller:latest
INBOUND_TG_ARN=<CREDO_inbound_target_group_ARN>
ADMIN_TG_ARN=<CREDO_admin_target_group_ARN>
# Service Connect URLs for ECS services
NATS_URL=nats://nats-1-4222.dev-credebl-namespace:4222
REDIS_HOST=redis-sc.dev-credebl-namespace
REDIS_PORT=6379
Notes:
NATS_URL and REDIS_HOST point to the ECS Service Connect endpoints; ECS Fargate services will use these for communication.
This ensures the CREDEBL ECS service can start properly with the correct network, security, and service connectivity configuration.
DEV-credo.env → Refer to: https://github.com/credebl/platform/blob/main/agent.env
DEV-keycloak.env → Use the following template:
KEYCLOAK_ADMIN=admin
KEYCLOAK_ADMIN_PASSWORD=admin
KC_HTTP_ENABLED=true
KC_DB=postgres
KC_DB_URL=jdbc:postgresql://HOST:PORT/Database
KC_DB_USERNAME=DB_USERNAME
KC_DB_PASSWORD=DB_PASSWORD
KC_DB_URL_PORT=PORT
PROXY_ADDRESS_FORWARDING=true
KC_HOSTNAME_ADMIN_URL=https://KEYCLOAK_DOMAIN
KC_HOSTNAME_URL=https://KEYCLOAK_DOMAIN
KC_PROXY=edge
KC_HOSTNAME_STRICT=false
KC_LOG=console
KC_HOSTNAME_STRICT_HTTPS=false
KC_HTTPS_ENABLED=true
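# The kcadm.sh commands below are typically run from the Keycloak container's bin directory:
# they log in to the admin CLI and disable the SSL requirement on the master realm.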
./kcadm.sh config credentials --server http://0.0.0.0:8080 --realm master --user admin
./kcadm.sh update realms/master -s sslRequired=NONE
Note: Replace placeholders like HOST, PORT, Database, and KEYCLOAK_DOMAIN with actual values.
3. Upload Environment Files to S3
After creating the three .env files, upload them to the S3 bucket created by Terraform.
The bucket name is:
credebl-dev-env-file-bucket
Note: Replace dev with your environment name if different (e.g., qa, prod).
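A minimal sketch of the upload with the AWS CLI, assuming the DEV prefix and the bucket name above (adjust the object keys if your services expect a different layout):
# Upload the three environment files to the bucket created by Terraform
aws s3 cp DEV-credebl.env s3://credebl-dev-env-file-bucket/
aws s3 cp DEV-credo.env s3://credebl-dev-env-file-bucket/
aws s3 cp DEV-keycloak.env s3://credebl-dev-env-file-bucket/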
4. Bastion and EFS Configuration
Create a Bastion Server
You can either create a new Bastion EC2 instance in the same VPC or use an existing one.
Ensure the Bastion host has access to both NATS EFS and CREDO EFS created by Terraform.
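Mounting EFS from the Bastion requires NFS access (TCP 2049) from the Bastion to the EFS mount targets. If the mount commands below time out, an inbound rule like the following can be added (a sketch; both security group IDs are placeholders from your deployment):
# Allow NFS traffic from the Bastion's security group to the EFS security group
aws ec2 authorize-security-group-ingress \
  --group-id <efs-security-group-id> \
  --protocol tcp \
  --port 2049 \
  --source-group <bastion-security-group-id>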
Mount EFS File Systems on Bastion
Mount both EFS volumes:
NATS EFS
CREDO EFS
Example command to mount EFS:
sudo mkdir /mnt/nats-efs /mnt/credo-efs
sudo mount -t efs fs-xxxxxxx:/ /mnt/nats-efs
sudo mount -t efs fs-yyyyyyy:/ /mnt/credo-efs
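Note: The -t efs mount type is provided by the amazon-efs-utils package. If the mount fails with an unknown filesystem type error, install the helper first (Amazon Linux shown; package names differ on other distributions):
sudo yum install -y amazon-efs-utils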
Configure the EFS Contents
In CREDO EFS:
Create a folder named token inside the mounted directory:
mkdir -p /mnt/credo-efs/token
In NATS EFS:
Add the nats.config file in the root path of the EFS. Reference: https://github.com/credebl/platform/blob/main/nats-server.conf
It is recommended to add authentication to your NATS setup.
To create your NKeys, you can refer to the NATS nk tool as described in their official documentation.
Create a seed folder and add the credebl-master-table.json file:
mkdir -p /mnt/nats-efs/seed
Reference for the JSON file: https://github.com/credebl/platform/blob/main/libs/prisma-service/prisma/data/credebl-master-table.json
To fill in the data for your setup, refer to the CREDEBL documentation.
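As a sketch, both reference files can be fetched directly from the repository onto the mounted NATS EFS, assuming the raw.githubusercontent.com equivalents of the links above (rename the targets to whatever your NATS configuration expects):
# NATS configuration into the EFS root
sudo curl -L -o /mnt/nats-efs/nats.config https://raw.githubusercontent.com/credebl/platform/main/nats-server.conf
# Master-table seed data into the seed folder
sudo curl -L -o /mnt/nats-efs/seed/credebl-master-table.json https://raw.githubusercontent.com/credebl/platform/main/libs/prisma-service/prisma/data/credebl-master-table.json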
5. Start Services
After completing the Bastion and EFS setup, the services can now be started in the correct order.
Step 5.1: Start Keycloak Service
Start the Keycloak service first.
Configure Keycloak following the guide: Keycloak Setup Documentation
After configuring Keycloak, update the .env file with the Keycloak credentials. Example in credebl.env:
KEYCLOAK_DOMAIN=https://KEYCLOAK_DOMAIN/
KEYCLOAK_ADMIN_URL=https://KEYCLOAK_DOMAIN
KEYCLOAK_MASTER_REALM=master
KEYCLOAK_MANAGEMENT_CLIENT_ID=adminClient
KEYCLOAK_MANAGEMENT_CLIENT_SECRET=
KEYCLOAK_REALM=credebl-platform
Step 5.2: Start Seed Service
Start the Seed Service. This service is used only once to populate initial data in the database.
Monitor the logs. You should see a message like:
The seed command has been executed
Once the seed command completes, stop the seed service, as it is not required to run continuously.
Note: The Seed Service writes initial data to your database (CREDEBL tables) and will not be needed after this one-time execution.
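The seed run can be monitored and then stopped with the AWS CLI; a sketch assuming hypothetical log group, cluster, and service names:
# Follow the seed service logs (log group name is a placeholder)
aws logs tail /ecs/<env>-seed-service --follow
# Once seeding completes, scale the seed service down to zero
aws ecs update-service --cluster <cluster-name> --service <seed-service-name> --desired-count 0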
Step 5.3: Start NATS, Redis, and Remaining Services
Start the NATS service first.
Ensure NATS is correctly configured with the nats.config file and that NKeys authentication is enabled.
Once NATS is running, start the Redis service.
After Redis is running, you can start all other services except the following, which should not be started yet:
agent-service
agent-provisioning-service
Note: agent-service and agent-provisioning-service will be started later, after the main platform services are confirmed to be healthy.
This ensures the correct service startup order:
1. Keycloak → 2. Seed Service → 3. NATS → 4. Redis → 5. All remaining services (excluding agent-related services).
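If the ECS services were created with a desired count of 0 (an assumption; adjust to how your module creates them), this order can be driven with the AWS CLI, waiting for each service to stabilise before starting the next:
# Start Keycloak first and wait until it is stable
aws ecs update-service --cluster <cluster-name> --service <keycloak-service> --desired-count 1
aws ecs wait services-stable --cluster <cluster-name> --services <keycloak-service>
# Repeat the same pattern for the seed service, then NATS, then Redis, then the remaining services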
Step 5.4: Start Agent-Provisioning and Agent Services
Start the agent-provisioning service first.
Once the agent-provisioning service has started successfully, start the agent service.
This executes a script that provisions the necessary agent resources in the CREDO controller cluster (it creates the credo-controller service).
6. Verification & Validation
After all services have been started successfully, perform the following checks to verify that the infrastructure and services are healthy and functioning correctly.
Check ECS Services Health: Verify that all ECS services are in the Running state.
Check Application Load Balancer (ALB):
The Target Groups are healthy.
Each registered target shows “Healthy” status.
✅ If all the above checks pass, your CREDEBL infrastructure is successfully provisioned and all services are up and running on AWS.
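These checks can also be scripted with the AWS CLI; a minimal sketch (cluster, service, and target group identifiers are placeholders from your deployment):
# ECS service status: runningCount should match desiredCount
aws ecs describe-services --cluster <cluster-name> --services <service-name> \
  --query 'services[].{name:serviceName,desired:desiredCount,running:runningCount}'
# ALB target health: all targets should report "healthy"
aws elbv2 describe-target-health --target-group-arn <target-group-arn>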
7. Troubleshooting & Firewall Configuration
If your existing VPC has a firewall or network restrictions that block outbound internet access, some resources may fail to pull container images or communicate between services.
7.1 Whitelist Required Domains
If services cannot connect to the internet to download images or access container registries, whitelist the following domains in your firewall policy or network access rules:
AWS ECR (Public)
*.ecr.aws
dkr.ecr.us-east-1.amazonaws.com
*.cloudfront.net
*.s3.amazonaws.com
GitHub Container Registry (GHCR)
ghcr.io
Redis Image Repository
registry-1.docker.io
NATS Image Repository
registry-1.docker.io
Keycloak Image Repository
quay.io
Hint: If your environment uses a proxy, ensure these domains are accessible via the proxy for ECS to successfully pull container images.
7.2 Fixing Service Connectivity Issues
If ECS services are unable to connect to NATS, Redis, or if the ALB health checks are repeatedly showing as unhealthy, review and update your Security Group (SG) configurations.
Identify the Security Groups created by Terraform:
credo-sg
application-sg (API-gateway SG)
studio-sg
keycloak-sg
For each service, add an Inbound Rule:
Port: Service-specific port (for example, 8080 for keycloak service, 3000 for frontend, etc.)
Source: VPC CIDR range (e.g., 10.0.0.0/16)
This allows internal traffic between ECS services and ALB within the same VPC.
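A sketch of adding such a rule with the AWS CLI (the security group ID is a placeholder; Keycloak's port 8080 is used as the example):
# Allow internal VPC traffic to the Keycloak service port
aws ec2 authorize-security-group-ingress \
  --group-id <keycloak-sg-id> \
  --protocol tcp \
  --port 8080 \
  --cidr 10.0.0.0/16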
7.3 Fixing NATS and Redis Service Connect Issues
If ECS Service Connect is not functioning properly or other services cannot communicate with NATS or Redis, add explicit inbound rules to their Security Groups:
NATS: inbound TCP 4222 from the VPC CIDR (e.g., 10.0.0.0/16), to allow ECS services to connect to NATS.
Redis: inbound TCP 6379 from the VPC CIDR (e.g., 10.0.0.0/16), to allow ECS services to connect to Redis.
✅ After updating the rules or firewall, re-run your ECS services and verify:
All container images are successfully pulled.
NATS, Redis, and ALB health checks are now Healthy.
Internal service-to-service communication works without timeouts or connection errors.