
Evolve-7: Genius insight via multi-model LLM and evolutionary algorithms - Progress - 30 Nov 2025

Published: December 1, 2025 · 5 min read
#Build in Public#evolve-7#multi-model LLM#Jamie Watters#Solopreneur

One API Key to Rule Them All: Unifying 300+ AI Models – Day 30 of Evolve-7

TL;DR: Replaced 6 separate AI provider API keys with a single OpenRouter integration, added automatic failover between models, and built a real-time pricing system. The platform now has access to 300+ AI models through one unified gateway.

Today's Focus

Today was about simplification and resilience. I completed Phase 38: OpenRouter Integration — a complete overhaul of how Evolve-7 connects to AI models. Instead of juggling separate API keys for OpenAI, Anthropic, and Google, everything now routes through OpenRouter's unified API. One key, 300+ models, automatic failovers.

Key Wins

Sprint 38.1-38.2: Foundation Layer (45 minutes)

Started by building the OpenRouterNode base class — a unified interface for all AI model calls. This replaces the provider-specific logic that was scattered across AnthropicNode, GoogleNode, and OpenAINode.

The key insight: OpenRouter uses OpenAI-compatible endpoints, so the request format stays familiar while gaining access to every major model. I mapped all 19 models from Evolve-7's config to their OpenRouter equivalents and added fallback chains (e.g., if Claude 3.5 Sonnet is unavailable, try GPT-4 Turbo, then Gemini Pro).
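
To make the fallback-chain idea concrete, here is a minimal sketch of what a model map like openrouter-models.ts could contain. The interface name, keys, and exact OpenRouter slugs are illustrative assumptions, not the actual file contents.

// Illustrative model map with fallback chains (names and slugs are assumptions).
// Keys are Evolve-7's internal model names; values carry the OpenRouter ID
// plus an ordered list of similar-capability fallbacks.
interface ModelRoute {
  openRouterId: string;
  fallbacks: string[]; // tried in order if the primary model is unavailable
}

const MODEL_MAP: Record<string, ModelRoute> = {
  'claude-3-sonnet': {
    openRouterId: 'anthropic/claude-3.5-sonnet',
    fallbacks: ['openai/gpt-4-turbo', 'google/gemini-pro'],
  },
  'gpt-4-turbo': {
    openRouterId: 'openai/gpt-4-turbo',
    fallbacks: ['anthropic/claude-3.5-sonnet', 'google/gemini-pro'],
  },
  // ...the remaining models from the config, mapped the same way
};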

Sprint 38.3: Dynamic Pricing Service (30 minutes)

Here's where it gets interesting. AI model pricing changes frequently — sometimes mid-day. Hardcoding prices means billing inaccuracies and user complaints.

I built OpenRouterPricingService with:

  • Real-time API fetching from OpenRouter's /models endpoint
  • 6-hour cache TTL to balance freshness with performance
  • Fallback pricing ($1/1M input, $2/1M output) when the API is unreachable
  • Concurrent refresh prevention so multiple requests don't hammer the API

The result: Evolve-7 always knows the current cost of any model, and billing matches what users actually pay.
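
For a sense of how those pieces fit together, here is a rough sketch of the caching pattern, assuming Node 18+ fetch and the response shape of OpenRouter's /models endpoint. The class, field, and method names are illustrative, not the actual openrouter-pricing.service.ts code.

// Illustrative pricing cache with a 6-hour TTL, fallback pricing, and
// concurrent-refresh prevention. Names and response handling are assumptions.
interface ModelPricing {
  inputPerMillion: number;  // USD per 1M input tokens
  outputPerMillion: number; // USD per 1M output tokens
}

const FALLBACK_PRICING: ModelPricing = { inputPerMillion: 1, outputPerMillion: 2 };
const CACHE_TTL_MS = 6 * 60 * 60 * 1000; // 6 hours

class PricingCache {
  private prices = new Map<string, ModelPricing>();
  private lastRefresh = 0;
  private refreshing: Promise<void> | null = null;

  async getPricing(modelId: string): Promise<ModelPricing> {
    if (Date.now() - this.lastRefresh > CACHE_TTL_MS) {
      // Share one in-flight refresh between concurrent callers.
      if (!this.refreshing) {
        this.refreshing = this.refresh().finally(() => { this.refreshing = null; });
      }
      await this.refreshing;
    }
    // If the refresh failed and nothing is cached, degrade to fallback pricing.
    return this.prices.get(modelId) ?? FALLBACK_PRICING;
  }

  private async refresh(): Promise<void> {
    try {
      const res = await fetch('https://openrouter.ai/api/v1/models');
      if (!res.ok) return; // keep stale cache; getPricing falls back if empty
      const { data } = await res.json();
      for (const model of data) {
        // OpenRouter lists per-token prices as strings; convert to per-1M tokens.
        this.prices.set(model.id, {
          inputPerMillion: parseFloat(model.pricing.prompt) * 1_000_000,
          outputPerMillion: parseFloat(model.pricing.completion) * 1_000_000,
        });
      }
      this.lastRefresh = Date.now();
    } catch {
      // Network error: same graceful degradation as above.
    }
  }
}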

Sprint 38.4: NodeFactory Pattern (25 minutes)

Created a unified factory that decides how to route AI requests:

const node = NodeFactory.create('claude-3-sonnet', {
  routingMode: 'auto'  // 'auto' | 'openrouter' | 'direct'
});

Auto mode (the default): Uses OpenRouter if the API key is available, falls back to direct provider keys if not. This means existing deployments with individual provider keys keep working, while new deployments only need one key.
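
A minimal sketch of that decision, assuming the factory reads the same OPENROUTER_API_KEY environment variable mentioned in the deployment notes below; resolveRouting is a hypothetical helper, not necessarily how NodeFactory.ts is structured.

// Illustrative 'auto' routing decision: prefer OpenRouter when its key is
// configured, otherwise fall back to direct provider keys.
type RoutingMode = 'auto' | 'openrouter' | 'direct';

function resolveRouting(mode: RoutingMode): 'openrouter' | 'direct' {
  if (mode !== 'auto') return mode;
  return process.env.OPENROUTER_API_KEY ? 'openrouter' : 'direct';
}

// Usage inside a factory (sketch):
//   const route = resolveRouting(options.routingMode ?? 'auto');
//   const node = route === 'openrouter'
//     ? new OpenRouterNode(modelId, options)
//     : createDirectProviderNode(modelId, options);  // hypothetical helper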

Sprint 38.5: Workflow Engine Integration (20 minutes)

Replaced the hardcoded provider switch statement in pocketflow-engine.ts with a single NodeFactory.create() call. The PocketFlow workflow engine now automatically uses the best available routing without knowing the details.
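
Conceptually the change looks something like the sketch below; the switch is paraphrased from the description above rather than copied from the real source, and the import path is an assumption based on the file names listed later in this post.

// Paraphrased illustration of the engine change, not the actual
// pocketflow-engine.ts source.
import { NodeFactory } from './NodeFactory';

// Before: provider-specific branching lived inside the engine, roughly
//   switch (provider) {
//     case 'anthropic': node = new AnthropicNode(model); break;
//     case 'openai':    node = new OpenAINode(model);    break;
//     case 'google':    node = new GoogleNode(model);    break;
//   }

// After: the engine hands routing to the factory and stays provider-agnostic.
export function createNodeForStep(model: string) {
  return NodeFactory.create(model, { routingMode: 'auto' });
}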

Added OpenRouter status to the health check endpoint — operators can now see at a glance whether the unified routing is active.

Sprint 38.6: Environment & Documentation (15 minutes)

Updated .env.example and .env.railway.example to prioritize OpenRouter as the primary API configuration. Legacy provider keys are still documented but marked as optional.

Added a clear "AI API Configuration" section to the README explaining the single-key setup.

Sprint 38.7: Testing & Validation (40 minutes)

The final sprint: comprehensive test coverage. Created three test files:

  1. OpenRouterNode.test.ts — Request formatting, response parsing, error handling, cost calculation
  2. openrouter-pricing.service.test.ts — Cache refresh, fallback pricing, concurrent refresh prevention
  3. NodeFactory.test.ts — Routing mode detection, provider detection, configuration passthrough

62 tests total, all passing. The test run caught one issue: I'd used claude-3.5-sonnet in tests but the model map uses claude-3-sonnet. Quick fix, important lesson about referencing actual config values.
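
For flavor, here is a hypothetical Vitest-style case along the lines of the routing-mode tests, reusing the resolveRouting helper sketched earlier; the real suites are likely structured differently.

// Hypothetical test for routing-mode detection; not the actual suite.
import { describe, it, expect } from 'vitest';
import { resolveRouting } from './NodeFactory'; // assumed export, per the earlier sketch

describe('routing mode detection (illustrative)', () => {
  it('prefers OpenRouter when its key is configured', () => {
    process.env.OPENROUTER_API_KEY = 'test-key';
    expect(resolveRouting('auto')).toBe('openrouter');
  });

  it('falls back to direct provider keys when it is not', () => {
    delete process.env.OPENROUTER_API_KEY;
    expect(resolveRouting('auto')).toBe('direct');
  });
});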

What I Learned

The Power of Unified Gateways

Before: 6 API keys to manage, 6 billing dashboards to monitor, 6 rate limit policies to understand. After: 1 API key, 1 billing dashboard, 1 set of documentation.

The complexity reduction isn't just operational — it's cognitive. When something breaks, there's one place to look.

Fallback Chains Need Thought

Automatic failover sounds great until you realize model capabilities vary. If Claude fails and you fall back to a smaller model, the output quality might suffer. I designed the fallback chains to prioritize similar-capability models: Claude → GPT-4 Turbo → Gemini Pro.

Pricing APIs Are Fragile

OpenRouter's pricing endpoint is well-designed, but any external API can fail. The 6-hour cache with fallback pricing means Evolve-7 never blocks on pricing data. Accuracy degrades gracefully rather than breaking completely.

Challenge of the Day

The toughest part was ensuring backward compatibility. Existing Railway deployments have individual provider keys configured. New users should only need the OpenRouter key. Both paths needed to work.

The solution: the "auto" routing mode checks for the OpenRouter key first and falls back to direct providers if it's missing. Existing deployments keep working. New deployments get the simplified setup. Zero breaking changes.

Progress Snapshot

Phase 38 Complete:

  • Sprint 38.1: OpenRouterNode base class
  • Sprint 38.2: Model configuration with fallback chains
  • Sprint 38.3: Real-time pricing service
  • Sprint 38.4: NodeFactory unified routing
  • Sprint 38.5: Workflow engine integration
  • Sprint 38.6: Environment & documentation
  • Sprint 38.7: 62 unit tests passing

Key Files Created:

  • OpenRouterNode.ts — Unified AI model interface
  • openrouter-models.ts — Model mapping and fallbacks
  • openrouter-pricing.service.ts — Real-time pricing with cache
  • NodeFactory.ts — Routing decision logic

Deployment:

  • OPENROUTER_API_KEY added to Railway production
  • Build compiling successfully
  • Deployment in progress

What This Means for Users

  1. Simpler setup — One API key instead of managing multiple providers
  2. Better reliability — Automatic failover if a model is unavailable
  3. Accurate billing — Real-time pricing means no surprises
  4. More model choices — Access to 300+ models through OpenRouter's marketplace

Tomorrow's Mission

Verify the OpenRouter integration in production:

  1. Run debates with all 3 models through OpenRouter
  2. Confirm WebSocket progress shows correct model names
  3. Verify costs match OpenRouter billing dashboard
  4. Test fallback by requesting an unavailable model
  5. Begin Phase 36: Payment & Polish

Part of my build-in-public journey with Evolve-7. Follow along for daily updates!
