Core Technologies
Earna AI’s financial platform is built on a carefully selected stack of best-in-class technologies, each chosen for specific capabilities that enable enterprise-grade financial services at scale.
Technology Selection Criteria
Every technology in our stack was evaluated against these criteria:
- Financial-grade reliability (99.99%+ uptime)
- Regulatory compliance capabilities
- Horizontal scalability for millions of users
- Developer experience and documentation
- Open-source availability when possible
Core Technology Stack
- TigerBeetle - Immutable financial ledger with double-entry accounting
- Plaid - Banking data aggregation across 12,000+ institutions
- Supabase - Full-stack backend infrastructure and authentication
- Hyperswitch - Multi-processor payment orchestration platform
Detailed Specifications
TigerBeetle (Deployed)
Overview
TigerBeetle is our production financial ledger deployed on GKE, providing mission-critical safety and performance. It serves as the immutable double-entry accounting system with ACID guarantees for all financial transactions on the Earna AI platform.
Why TigerBeetle
Financial-Grade Reliability
- Double-entry accounting enforced at database level
- ACID guarantees for all transactions
- Immutable audit trail for compliance
- Zero financial data loss architecture
Extreme Performance
- 1+ million TPS throughput
- < 1ms latency for transactions
- Optimized for NVMe storage
- 10MB binary with minimal resource usage
Built for Finance
- Native debits and credits support
- Automatic balance calculations
- Multi-currency handling
- Designed for regulated environments
Architecture Integration
┌─────────────┐     ┌──────────────┐     ┌──────────────┐
│   Console   │────▶│Credit Engine │────▶│ TigerBeetle  │
│ (Frontend)  │     │  (Backend)   │     │   (Ledger)   │
└─────────────┘     └──────────────┘     └──────────────┘
       │                                        │
       ▼                                        │
┌──────────────┐                                │
│   Supabase   │◀───────────────────────────────┘
│  (Metadata)  │          (Analytics)
└──────────────┘
Implementation Details
Account Structure
interface TigerBeetleAccount {
  id: bigint              // 128-bit unique identifier
  ledger: number          // Ledger ID for multi-ledger support
  code: number            // Account type code
  flags: AccountFlags     // Account behavior flags
  debits_pending: bigint  // Pending debit amount
  debits_posted: bigint   // Posted debit amount
  credits_pending: bigint // Pending credit amount
  credits_posted: bigint  // Posted credit amount
}
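To make the account shape concrete, the sketch below creates a user-wallet account with a recent tigerbeetle-node client. It is a minimal sketch, not the production code: the replica address, ledger number, and account code are assumptions, and the id() helper simply generates a time-based 128-bit identifier.

import { createClient, id, AccountFlags } from 'tigerbeetle-node'

// Assumed local/dev connection details; the production endpoint is shown
// in the deployment section below.
const client = createClient({
  cluster_id: 0n,
  replica_addresses: ['3003'],
})

// Hypothetical account code: 100 = user wallet, on an assumed USD ledger (1).
const accountErrors = await client.createAccounts([
  {
    id: id(),                // time-based 128-bit ID from the client library
    ledger: 1,
    code: 100,
    flags: AccountFlags.none,
    debits_pending: 0n,
    debits_posted: 0n,
    credits_pending: 0n,
    credits_posted: 0n,
    user_data_128: 0n,       // optional reference back to Supabase metadata
    user_data_64: 0n,
    user_data_32: 0,
    reserved: 0,
    timestamp: 0n,           // assigned by the cluster
  },
])
if (accountErrors.length > 0) {
  throw new Error(`account creation failed: ${JSON.stringify(accountErrors)}`)
}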
Transfer Operations
interface Transfer {
  id: bigint               // Unique transfer ID
  debit_account_id: bigint
  credit_account_id: bigint
  amount: bigint           // Amount in smallest currency unit
  pending_id: bigint       // For two-phase commits
  timeout: number          // Timeout for pending transfers
  ledger: number
  code: number             // Transfer type code
  flags: TransferFlags
  timestamp: bigint        // Nanosecond precision
}
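The following sketch posts a single-phase transfer between two existing accounts, reusing the client from the previous sketch. The account IDs, ledger, and transfer code are placeholders, not values from the production system.

import { TransferFlags, id } from 'tigerbeetle-node'

// userWalletId and merchantAccountId are hypothetical, previously created accounts.
const transferErrors = await client.createTransfers([
  {
    id: id(),
    debit_account_id: userWalletId,
    credit_account_id: merchantAccountId,
    amount: 2599n,             // $25.99 expressed in cents (smallest currency unit)
    pending_id: 0n,            // 0 = not part of a two-phase commit
    user_data_128: 0n,
    user_data_64: 0n,
    user_data_32: 0,
    timeout: 0,
    ledger: 1,
    code: 720,                 // hypothetical "card purchase" transfer code
    flags: TransferFlags.none, // use TransferFlags.pending for two-phase commits
    timestamp: 0n,
  },
])
if (transferErrors.length > 0) {
  throw new Error(`transfer failed: ${JSON.stringify(transferErrors)}`)
}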
Use Cases in Earna
- User Wallets: Track user balances and transactions
- Credit Tracking: Monitor credit utilization and payments
- Investment Ledger: Record investment transactions
- Business Accounts: Multi-account business banking
- Tax Tracking: Immutable tax-related transactions
- Audit Trail: Complete financial history
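For the wallet and credit-tracking use cases above, balances are derived from the account fields rather than stored as a single mutable number, since TigerBeetle keeps separate pending and posted debit/credit totals. A hedged sketch, assuming a credit-balance wallet account (balances grow with credits) and a hypothetical walletAccountId:

// Look up the wallet account and compute balances from its posted/pending totals.
const [wallet] = await client.lookupAccounts([walletAccountId])

const postedBalance = wallet.credits_posted - wallet.debits_posted
const availableBalance = postedBalance - wallet.debits_pending // reserve pending debits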
Production Deployment (September 2025)
GKE Cluster Configuration:
- Platform: Google Kubernetes Engine (GKE) Standard
- Region: us-central1
- Nodes: 3 × c3-standard-4-lssd (dedicated pool)
- Storage: 375GB local NVMe SSD per node
- Memory: 15GB RAM per node
- CPU: 4 vCPUs per node
- Network: External LoadBalancer (104.154.31.249:3003)
Monitoring Stack:
- Metrics: Prometheus + StatsD Exporter
- Dashboards: Grafana (http://34.172.102.114)
- Alerts: Configured for replica health, latency, storage
Performance Achieved:
- Throughput: 10,000+ TPS verified
- Latency: Sub-millisecond p99
- Availability: 99.99% uptime target
- Recovery: Automated with StatefulSet
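Application services reach the cluster through the LoadBalancer listed above. A minimal connection sketch with the tigerbeetle-node client, assuming cluster ID 0 (the actual cluster ID is deployment-specific):

import { createClient } from 'tigerbeetle-node'

// Address taken from the GKE LoadBalancer above; cluster_id is an assumption.
const tbClient = createClient({
  cluster_id: 0n,
  replica_addresses: ['104.154.31.249:3003'],
})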
Technology Integration Matrix
| Component | TigerBeetle | Plaid | Supabase | Hyperswitch |
|---|---|---|---|---|
| Account Management | Ledger entries | Account sync | User profiles | Payment methods |
| Transactions | Immutable record | Raw data | Enriched metadata | Payment processing |
| Real-time Updates | Balance changes | Webhooks | WebSockets | Status updates |
| Analytics | Financial reports | Spending patterns | User analytics | Payment analytics |
| Compliance | Audit trail | Bank compliance | Data privacy | PCI compliance |
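As one example of the real-time row above, the Console can subscribe to enriched transaction metadata over Supabase's WebSocket channel while TigerBeetle remains the source of truth for balances. A hedged sketch assuming supabase-js v2, a hypothetical public.transactions table, and an illustrative renderTransaction callback:

import { createClient } from '@supabase/supabase-js'

const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY)

// Push new transaction metadata rows to the Console as they are inserted.
supabase
  .channel('transactions-feed')
  .on(
    'postgres_changes',
    { event: 'INSERT', schema: 'public', table: 'transactions' },
    (payload) => renderTransaction(payload.new),
  )
  .subscribe()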
Infrastructure Requirements
Minimum Production Setup
TigerBeetle Cluster:
- 3 nodes (1 primary, 2 replicas)
- 8 vCPUs, 32GB RAM per node
- 1TB NVMe SSD per node
- 10Gbps network
Application Servers:
- 4 instances (auto-scaling 4-20)
- 4 vCPUs, 16GB RAM each
- Docker/Kubernetes deployment
Database (Supabase):
- Dedicated instance
- 8 vCPUs, 32GB RAM
- 500GB SSD storage
- 2 read replicas
Redis Cluster:
- 3 nodes for HA
- 4 vCPUs, 16GB RAM each
- Persistent storage
Monitoring:
- Prometheus + Grafana
- Sentry error tracking
- Custom dashboards
Scaling Considerations
Scaling Triggers
- TigerBeetle: Add nodes at 70% capacity
- Supabase: Add read replicas at 1000 QPS
- Redis: Scale at 80% memory usage
- App servers: Auto-scale on CPU/memory
Security Architecture
Defense in Depth
Network Security
- VPC isolation
- Private subnets
- WAF protection
- DDoS mitigation
Application Security
- JWT authentication
- Rate limiting
- Input validation
- SQL injection prevention
Data Security
- Encryption at rest (AES-256)
- Encryption in transit (TLS 1.3)
- Key management (AWS KMS)
- PII tokenization
Compliance
- SOC 2 Type II
- PCI DSS Level 1
- GDPR compliance
- CCPA compliance
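For the application-security layer, a common pattern is to validate the Supabase-issued JWT on every backend request before any ledger operation. A minimal sketch, assuming a Fetch-API style Request and the supabase-js service-role client; the environment variable names are illustrative:

import { createClient } from '@supabase/supabase-js'

const supabaseAdmin = createClient(SUPABASE_URL, SUPABASE_SERVICE_ROLE_KEY)

// Reject requests whose bearer token does not resolve to a valid Supabase user.
async function requireUser(req: Request): Promise<string> {
  const token = req.headers.get('authorization')?.replace('Bearer ', '')
  if (!token) throw new Error('missing bearer token')

  const { data, error } = await supabaseAdmin.auth.getUser(token)
  if (error || !data.user) throw new Error('invalid or expired token')

  return data.user.id // caller maps this to the user's ledger accounts
}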
Disaster Recovery
Backup Strategy
- TigerBeetle: Continuous replication + hourly snapshots
- Supabase: Daily automated backups with PITR
- Redis: AOF persistence + daily snapshots
- Documents: S3 versioning with cross-region replication
Recovery Targets
- RTO (Recovery Time Objective): < 1 hour
- RPO (Recovery Point Objective): < 5 minutes
- Availability SLA: 99.99% (≈52 minutes of downtime per year)
Related Documentation
- Implementation Phases - Development timeline
- Architecture Overview - System architecture
- Design Structure Matrix - Dependencies
- Implementation Guide - Execution plan