The control plane for your AI stack

Intelligent LLM routing, cost management, and resilience for enterprise-grade AI.

Works with

OpenAI Anthropic Gemini DeepSeek Cohere

Ship AI fast. With visibility and control.

We've built observability for everything else. It's time for LLM calls.

The invoice nobody can explain

Your AI spend is one of the fastest-growing line items in the P&L — and nobody knows which team, feature, or experiment is driving it. Majordomo attributes every dollar to every call.

Three teams, zero shared data

Engineering wants GPT-4 for quality. Finance wants costs cut 40%. Product wants three new features shipped. Without data, everyone is guessing.

Single provider, single point of failure

Route across providers, fail over automatically, and enforce policy — without changing application code.

Total darkness on your most expensive API

We trace every database query and log every microservice call. Your LLM calls — the ones that cost a dollar each — deserve the same.

The Suite

Three projects, one goal: make LLM operations observable and reliable.

Gateway

Go

LLM API proxy that routes requests to upstream providers, logs usage metrics, and calculates costs.

  • Transparent reverse proxy for any LLM API
  • Per-request cost calculation
  • Custom metadata via X-Majordomo headers
  • PostgreSQL request logging
  • API key management
go install github.com/superset-studio/majordomo-gateway@latest
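The metadata headers above are what make per-team cost attribution possible: every request carries tags the gateway can log alongside its calculated cost. A minimal sketch of building such a request from a client — the gateway URL, endpoint path, and `X-Majordomo-*` header names here are illustrative assumptions, not the gateway's documented API:

```python
# Sketch: tagging a proxied LLM request with attribution metadata.
# The gateway URL, endpoint path, and header names are illustrative
# assumptions, not documented values.

def build_proxied_request(gateway_url: str, team: str, feature: str) -> dict:
    """Build the URL and headers for a chat call routed through the proxy."""
    return {
        "url": f"{gateway_url}/v1/chat/completions",
        "headers": {
            "Authorization": "Bearer <your-majordomo-api-key>",
            "Content-Type": "application/json",
            # Custom metadata headers let the gateway attribute
            # each request's cost to a team and feature.
            "X-Majordomo-Team": team,
            "X-Majordomo-Feature": feature,
        },
    }

req = build_proxied_request("http://localhost:8080", "search", "autocomplete")
```

Because the tags travel as plain HTTP headers, any client that can set headers — curl, requests, fetch — can participate without an SDK.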

LLM Library

Python

Unified async client for multiple LLM providers with built-in cost tracking and structured output.

  • Single interface for 5 providers
  • Streaming and structured output
  • Cascade failover across providers
  • Per-request cost and token tracking
  • PostgreSQL, MySQL, and SQLite logging
pip install majordomo-llm
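Cascade failover is the core resilience idea: try providers in priority order and fall through to the next on error. The sketch below illustrates the concept only — it is not the majordomo-llm API, and the stub providers stand in for real clients:

```python
import asyncio

# Concept sketch of cascade failover: attempt providers in order,
# moving to the next on any error. Not the majordomo-llm API.

async def cascade(providers, prompt: str) -> str:
    errors = []
    for call in providers:
        try:
            return await call(prompt)
        except Exception as exc:  # a real client would narrow this
            errors.append(exc)
    raise RuntimeError(f"all {len(providers)} providers failed: {errors}")

# Stub providers: the first always fails, the second answers.
async def flaky(prompt: str) -> str:
    raise ConnectionError("upstream timeout")

async def healthy(prompt: str) -> str:
    return f"echo: {prompt}"

result = asyncio.run(cascade([flaky, healthy], "hello"))
```

The caller sees one successful response even though the first provider timed out — the failure is absorbed inside the cascade.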

Frameworks

Python

Framework adapters for routing LLM requests through the gateway with minimal integration effort.

  • Agno framework adapter
  • Pydantic AI adapter
  • Automatic base URL rewriting
  • Header injection for metadata
  • Zero config — just wrap your model
pip install majordomo-frameworks
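The "automatic base URL rewriting" above boils down to swapping the scheme and host of the upstream API for the gateway's, leaving the path intact. A minimal sketch, assuming a local gateway address (illustrative, not a documented default):

```python
from urllib.parse import urlsplit, urlunsplit

# Sketch of base-URL rewriting: point a framework's provider client at
# the gateway instead of the upstream API. The gateway address is an
# illustrative assumption.

def rewrite_base_url(original: str, gateway: str = "http://localhost:8080") -> str:
    """Replace the scheme and host of an upstream API URL with the gateway's."""
    g = urlsplit(gateway)
    o = urlsplit(original)
    return urlunsplit((g.scheme, g.netloc, o.path, o.query, o.fragment))

rewritten = rewrite_base_url("https://api.openai.com/v1/chat/completions")
```

An adapter that applies this rewrite when wrapping a model is why the integration can stay near zero-config: the application keeps speaking the provider's API shape while traffic flows through the proxy.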

How They Work Together

Use each project independently or combine them for full-stack observability.

Gateway Only

Deploy the gateway as a transparent proxy. Any HTTP client — curl, Python requests, Node fetch — sends requests through it.

Your App → Gateway → LLM Provider

Centralized logging, zero code changes

Library Only

Use majordomo-llm directly in Python. Built-in cost tracking per request and optional database logging.

Python App → majordomo-llm → LLM Provider

Unified API, cascade failover, structured output

Frameworks + Gateway

Use Agno or Pydantic AI adapters to route framework calls through the gateway automatically.

Framework Adapter → Gateway → LLM Provider

Minimal integration — just wrap your model

Full Stack

Point majordomo-llm through the gateway for centralized logging across all applications.

Python App → majordomo-llm → Gateway → LLM Provider

Maximum visibility and resilience
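The per-request cost attribution that runs through every layer of the stack reduces to simple arithmetic: token counts times the provider's per-million-token price. A minimal sketch — the models and prices below are illustrative placeholders, not live rates:

```python
# Sketch of per-request cost calculation: token counts multiplied by
# per-million-token prices. Model names and prices are illustrative
# placeholders, not current provider rates.

PRICES = {  # USD per 1M tokens: (input, output)
    "example-large": (2.50, 10.00),
    "example-small": (0.15, 0.60),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request under the price table above."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

cost = request_cost("example-large", 1_000, 500)  # 0.0025 + 0.005 = 0.0075 USD
```

Logged next to the team and feature metadata from the request headers, this is what turns an unexplainable invoice into a per-call ledger.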