
Implementation Guide

This guide provides a detailed, day-by-day implementation plan for building Earna AI’s backend infrastructure without disrupting the existing console application. The strategy focuses on building isolated services that can be gradually integrated.

Core Implementation Principle

Service Isolation Strategy

Console (unchanged) → Kong Gateway (new) → Backend Services (new) → Databases (new)

The console continues to work as-is while we build a complete backend infrastructure in parallel.

Implementation Timeline

Detailed Implementation Phases

Week 1: Core Infrastructure

Day 0-1: Development Environment Setup

Local Kubernetes Setup

# Install Kind for local K8s
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# Create cluster
kind create cluster --config=kind-config.yaml
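The kind create cluster command above references a kind-config.yaml that this guide does not show; a minimal sketch, assuming a one control-plane / two worker layout, could be:

# kind-config.yaml (illustrative sketch)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker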

Docker Development Environment

# docker-compose.yml
version: '3.8'
services:
  tigerbeetle:
    image: ghcr.io/tigerbeetledb/tigerbeetle
    ports:
      - "3000:3000"
    volumes:
      - ./data/tigerbeetle:/data
  temporal:
    image: temporalio/auto-setup
    ports:
      - "7233:7233"
    environment:
      - DB=postgresql
      - DB_PORT=5432
      - POSTGRES_SEEDS=postgres
      - POSTGRES_USER=temporal
      - POSTGRES_PWD=temporal
    depends_on:
      - postgres
  # temporalio/auto-setup needs a reachable Postgres instance to seed
  postgres:
    image: postgres:15-alpine
    environment:
      - POSTGRES_USER=temporal
      - POSTGRES_PASSWORD=temporal
    ports:
      - "5432:5432"
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"

Service Templates

// service-template/src/index.ts
import express from 'express'
import { logger } from './utils/logger'
import { healthCheck } from './middleware/health'
import { routes } from './routes'

const app = express()
app.use(express.json())
app.use('/health', healthCheck)

// Service-specific routes
app.use('/api/v1', routes)

const PORT = process.env.PORT || 3000
app.listen(PORT, () => {
  logger.info(`Service running on port ${PORT}`)
})
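The template imports a healthCheck middleware that is not defined anywhere in this guide; a minimal sketch of what ./middleware/health might export (the JSON response shape is an assumption) is:

// service-template/src/middleware/health.ts (illustrative sketch)
import { Request, Response } from 'express'

// Simple liveness probe: report a static status plus process uptime
export const healthCheck = (_req: Request, res: Response) => {
  res.status(200).json({ status: 'ok', uptime: process.uptime() })
}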

CI/CD Pipeline

# .github/workflows/deploy.yml
name: Deploy Service
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build and push Docker image
        run: |
          docker build -t $SERVICE_NAME:$GITHUB_SHA .
          docker push $SERVICE_NAME:$GITHUB_SHA
      - name: Deploy to Kubernetes
        run: |
          kubectl set image deployment/$SERVICE_NAME \
            $SERVICE_NAME=$SERVICE_NAME:$GITHUB_SHA

Day 2-3: TigerBeetle Setup

Deploy TigerBeetle Cluster

# Deploy 3-node cluster
kubectl apply -f tigerbeetle-statefulset.yaml

# Initialize cluster
kubectl exec tigerbeetle-0 -- tigerbeetle format \
  --cluster=0 \
  --replica=0 \
  --replica-count=3 \
  /data/0.tigerbeetle

# Start replicas
kubectl scale statefulset tigerbeetle --replicas=3
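The tigerbeetle-statefulset.yaml applied above is not included in this guide; an illustrative sketch, in which the storage size and container port are assumptions, could look like:

# tigerbeetle-statefulset.yaml (illustrative sketch)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: tigerbeetle
spec:
  serviceName: tigerbeetle
  replicas: 3
  selector:
    matchLabels:
      app: tigerbeetle
  template:
    metadata:
      labels:
        app: tigerbeetle
    spec:
      containers:
        - name: tigerbeetle
          image: ghcr.io/tigerbeetledb/tigerbeetle
          ports:
            - containerPort: 3000
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi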

Configure Ledger Structure

// Account Types
enum AccountType {
  USER_WALLET = 1,
  CREDIT_CARD = 2,
  CHECKING = 3,
  SAVINGS = 4,
  INVESTMENT = 5,
  BUSINESS = 6,
  TAX = 7,
  ESCROW = 8
}

// Create accounts
const createAccount = async (userId: string, type: AccountType) => {
  return await client.createAccounts([{
    id: generateId(),
    ledger: 1,
    code: type,
    flags: 0,
    debits_pending: 0n,
    debits_posted: 0n,
    credits_pending: 0n,
    credits_posted: 0n,
    user_data: Buffer.from(userId)
  }])
}
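Balances in TigerBeetle only move through transfers between two accounts; a sketch of recording a double-entry transfer with the same client follows (field names may differ slightly between tigerbeetle-node versions, and code = 1 is an assumed transfer type):

// Record a double-entry transfer: debits one account, credits the other
const recordTransfer = async (
  debitAccountId: bigint,
  creditAccountId: bigint,
  amount: bigint
) => {
  return await client.createTransfers([{
    id: generateId(),
    debit_account_id: debitAccountId,
    credit_account_id: creditAccountId,
    amount,
    ledger: 1, // same ledger as the accounts created above
    code: 1,   // assumed transfer type, e.g. card purchase
    flags: 0
  }])
}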

Build REST API Wrapper

// tigerbeetle-service/src/api.ts
import { Router } from 'express'
import { TigerBeetleClient } from './client'

const router = Router()
const tb = new TigerBeetleClient()

router.post('/accounts', async (req, res) => {
  const account = await tb.createAccount(req.body)
  res.json(account)
})

router.post('/transfers', async (req, res) => {
  const transfer = await tb.createTransfer(req.body)
  res.json(transfer)
})

router.get('/accounts/:id/balance', async (req, res) => {
  const balance = await tb.getBalance(req.params.id)
  res.json({ balance })
})

export default router

Day 4-5: Temporal Workflow Engine

Deploy Temporal Server

# temporal-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: temporal-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: temporal
  template:
    metadata:
      labels:
        app: temporal
    spec:
      containers:
        - name: temporal
          image: temporalio/auto-setup:latest
          ports:
            - containerPort: 7233
          env:
            - name: DB
              value: postgresql
            - name: POSTGRES_SEEDS
              value: postgres-service

Create Workflow Templates

// workflows/sync-accounts.ts
import { proxyActivities } from '@temporalio/workflow'

const activities = proxyActivities({
  startToCloseTimeout: '5 minutes'
})

export async function syncAccountsWorkflow(userId: string) {
  // Step 1: Fetch from Plaid
  const accounts = await activities.fetchPlaidAccounts(userId)

  // Step 2: Process each account
  for (const account of accounts) {
    await activities.createOrUpdateAccount(account)
    await activities.syncTransactions(account.id)
  }

  // Step 3: Update balances
  await activities.updateBalances(userId)

  // Step 4: Trigger analytics
  await activities.triggerAnalytics(userId)
}

Configure Worker Pools

// workers/index.ts
import { Worker } from '@temporalio/worker'
import * as activities from './activities'

async function run() {
  const worker = await Worker.create({
    workflowsPath: require.resolve('./workflows'),
    activities,
    taskQueue: 'financial-operations',
    maxConcurrentActivityTaskExecutions: 100,
  })
  await worker.run()
}

run().catch((err) => {
  console.error(err)
  process.exit(1)
})
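Once the worker is polling the financial-operations task queue, another service can start the sync workflow through the Temporal client; a sketch, assuming the server is reachable at temporal-server:7233 and a sync-accounts-<userId> workflow ID scheme, is:

// clients/start-sync.ts (illustrative sketch)
import { Connection, Client } from '@temporalio/client'
import { syncAccountsWorkflow } from '../workflows/sync-accounts'

export async function startAccountSync(userId: string) {
  const connection = await Connection.connect({ address: 'temporal-server:7233' })
  const client = new Client({ connection })

  // Start the workflow on the same task queue the worker listens to
  const handle = await client.workflow.start(syncAccountsWorkflow, {
    taskQueue: 'financial-operations',
    workflowId: `sync-accounts-${userId}`,
    args: [userId],
  })
  return handle.workflowId
}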

Day 6-7: Kong API Gateway

Deploy Kong Gateway

# kong-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-gateway
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kong
  template:
    metadata:
      labels:
        app: kong
    spec:
      containers:
        - name: kong
          image: kong:3.4
          env:
            - name: KONG_DATABASE
              value: postgres
            - name: KONG_PROXY_ACCESS_LOG
              value: /dev/stdout
          ports:
            - containerPort: 8000
            - containerPort: 8443

Configure Routes and Services

# Add TigerBeetle service
curl -X POST http://localhost:8001/services \
  --data name=tigerbeetle \
  --data url=http://tigerbeetle-service:3000

# Add route
curl -X POST http://localhost:8001/services/tigerbeetle/routes \
  --data paths[]=/api/v1/ledger

# Add Plaid service
curl -X POST http://localhost:8001/services \
  --data name=plaid \
  --data url=http://plaid-service:3000

# Add authentication
curl -X POST http://localhost:8001/plugins \
  --data name=jwt \
  --data config.key_claim_name=iss
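If the gateway is run DB-less or managed with decK, the same wiring can be expressed declaratively; a hedged kong.yaml fragment covering only the TigerBeetle service and route above might read:

# kong.yaml (illustrative fragment)
_format_version: "3.0"
services:
  - name: tigerbeetle
    url: http://tigerbeetle-service:3000
    routes:
      - name: ledger
        paths:
          - /api/v1/ledger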

Set Up Rate Limiting

-- kong-rate-limit.lua
return {
  name = "rate-limiting",
  config = {
    minute = 100,
    hour = 10000,
    policy = "redis",  -- use Redis so limits are shared across the gateway replicas
    fault_tolerant = true,
    redis_host = "redis-service",
    redis_port = 6379
  }
}
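The same limits can also be applied at runtime through the Admin API used earlier; a hedged equivalent, applied globally here rather than per-route, is:

# Enable rate limiting via the Admin API (mirrors the config above)
curl -X POST http://localhost:8001/plugins \
  --data name=rate-limiting \
  --data config.minute=100 \
  --data config.hour=10000 \
  --data config.policy=redis \
  --data config.fault_tolerant=true \
  --data config.redis_host=redis-service \
  --data config.redis_port=6379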

Service Dependencies Matrix

Service              | Depends On                  | Provides               | Critical Path
---------------------|-----------------------------|------------------------|--------------
TigerBeetle          | Infrastructure              | Ledger API             | Yes
Temporal             | Infrastructure              | Workflow Engine        | Yes
Kong Gateway         | Infrastructure              | API Routing            | Yes
Plaid Service        | TigerBeetle, Temporal       | Banking Data           | Yes
Transaction Pipeline | TigerBeetle, Plaid          | Enriched Transactions  | Yes
Account Service      | TigerBeetle, Database       | Account Management     | Yes
Analytics Service    | ClickHouse, Kafka           | Analytics API          | No
AI Service           | Analytics, OpenAI           | AI Features            | No
Credit Engine        | TigerBeetle, Credit Bureaus | Credit Features        | No
Payment Service      | TigerBeetle, Hyperswitch    | Payment Processing     | No

Testing Strategy

Unit Testing

// Example unit test
describe('TransactionCategorizer', () => {
  it('should categorize restaurant transaction', async () => {
    const transaction = {
      merchant_name: 'STARBUCKS',
      amount: 5.50,
      mcc: '5814'
    }

    const category = await categorizer.categorize(transaction)

    expect(category.primary).toBe('Food & Dining')
    expect(category.secondary).toBe('Coffee Shops')
    expect(category.confidence).toBeGreaterThan(0.9)
  })
})

Integration Testing

// Integration test example
describe('Account Sync Flow', () => {
  it('should sync accounts from Plaid to TigerBeetle', async () => {
    // Create test user
    const user = await createTestUser()

    // Mock Plaid response
    mockPlaidAccounts(user.id)

    // Trigger sync
    await syncAccountsWorkflow(user.id)

    // Verify in TigerBeetle
    const accounts = await tigerbeetle.getAccounts(user.id)
    expect(accounts).toHaveLength(3)

    // Verify in database
    const dbAccounts = await db.accounts.findAll({
      where: { user_id: user.id }
    })
    expect(dbAccounts).toHaveLength(3)
  })
})

End-to-End Testing

// E2E test
describe('Bill Payment Flow', () => {
  it('should schedule and process bill payment', async () => {
    const user = await setupTestUser()
    const bill = await createTestBill(user.id)

    // Schedule payment
    const scheduled = await api.post('/api/v1/bills/schedule', bill)
    expect(scheduled.status).toBe('scheduled')

    // Wait for processing
    await waitForWorkflow(scheduled.workflow_id)

    // Verify payment
    const payment = await api.get(`/api/v1/payments/${scheduled.payment_id}`)
    expect(payment.status).toBe('completed')

    // Verify ledger
    const transfers = await tigerbeetle.getTransfers(bill.id)
    expect(transfers).toHaveLength(1)
  })
})

Monitoring & Observability

Key Metrics

Service Metrics:
  - Request rate (req/s)
  - Error rate (errors/s)
  - Latency (p50, p95, p99)
  - Availability (uptime %)

Business Metrics:
  - Accounts synced/day
  - Transactions processed/day
  - Payments processed/day
  - Credit scores fetched/day

Infrastructure Metrics:
  - CPU utilization
  - Memory usage
  - Disk I/O
  - Network throughput
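As one way to expose the service metrics above, the Express template from Week 1 could be instrumented with prom-client; the metric and label names in this sketch are assumptions:

// service-template/src/middleware/metrics.ts (illustrative sketch)
import express from 'express'
import client from 'prom-client'

const register = new client.Registry()
client.collectDefaultMetrics({ register })

// Request latency histogram, labeled for rate, error, and percentile queries
const httpRequestDuration = new client.Histogram({
  name: 'http_request_duration_seconds',
  help: 'HTTP request latency in seconds',
  labelNames: ['method', 'route', 'status'],
  registers: [register],
})

export function metricsMiddleware(app: express.Express) {
  app.use((req, res, next) => {
    const end = httpRequestDuration.startTimer()
    res.on('finish', () => {
      end({ method: req.method, route: req.path, status: String(res.statusCode) })
    })
    next()
  })

  // Scrape endpoint for Prometheus
  app.get('/metrics', async (_req, res) => {
    res.set('Content-Type', register.contentType)
    res.send(await register.metrics())
  })
}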

Alerting Rules

Critical Alerts:
  - TigerBeetle down
  - API Gateway down
  - Database connection pool exhausted
  - Payment failure rate > 1%

Warning Alerts:
  - API latency p99 > 1s
  - Error rate > 0.1%
  - Disk usage > 80%
  - Plaid sync failures > 10/hour
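As an illustration, one of the critical alerts could be expressed as a Prometheus alerting rule; the payments_failed_total and payments_total metric names in this sketch are assumptions:

# alerts/critical.yaml (illustrative sketch)
groups:
  - name: critical
    rules:
      - alert: PaymentFailureRateHigh
        expr: rate(payments_failed_total[5m]) / rate(payments_total[5m]) > 0.01
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Payment failure rate above 1%"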

Rollback Strategy

Rollback Procedures

  1. Service Level: Each service can be rolled back independently (see the rollout sketch after this list)
  2. Database Migrations: All migrations must be reversible
  3. Feature Flags: Disable features without deployment
  4. Traffic Shifting: Gradual rollback using Kong
  5. Data Recovery: TigerBeetle provides immutable audit trail
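
A sketch of a service-level rollback with kubectl, reusing the deployment naming from the CI/CD pipeline above:

# Roll a single service back to its previous revision
kubectl rollout undo deployment/$SERVICE_NAME

# Watch the rollback complete
kubectl rollout status deployment/$SERVICE_NAME

# Inspect revision history if a specific version is needed
kubectl rollout history deployment/$SERVICE_NAME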

Success Criteria

Week 1 Success

  • All infrastructure services deployed
  • TigerBeetle processing test transactions
  • Temporal executing workflows
  • Kong routing API requests

Week 2 Success

  • Plaid successfully syncing sandbox data
  • Transactions flowing into TigerBeetle
  • Account balances updating in real-time
  • Transaction categorization working

Week 3 Success

  • Analytics pipeline processing data
  • AI generating insights
  • Credit scores being fetched
  • Payment processing working

Week 4 Success

  • Console connecting to new APIs
  • Real-time updates working
  • All tests passing
  • Ready for production deployment

Next Steps

After completing this implementation:

  1. Performance Testing: Load test all services
  2. Security Audit: Penetration testing and code review
  3. Documentation: Complete API documentation
  4. Training: Team training on new infrastructure
  5. Production Deployment: Gradual rollout with feature flags