Ship AI Features Without Building the Plumbing
Stop building AI infrastructure from scratch. AI Kit drops a production-ready AI platform into your SaaS — a multi-provider gateway that routes between OpenAI, Anthropic, Google, and more with automatic fallbacks; prompt management with versioning; a tool and agent framework for AI-powered workflows; guardrails to keep AI safe; cost analytics with per-org budgets; batch processing; and an interactive playground for testing. 9 dashboard pages and 30+ API endpoints, ready to ship.
Dashboard Pages
API Routes
Database Tables
LLM Providers
Route AI requests across OpenAI, Anthropic, Google, Groq, Together AI, and Ollama through a single endpoint. When a provider fails, the gateway automatically falls back to the next in the chain. Built-in retries, response caching, and streaming support — your users never see downtime.
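The fallback behavior can be sketched as a simple chain: try each provider in order and return the first successful completion. This is an illustrative sketch, not the plugin's actual API — the `Provider` type and function names are assumptions.

```typescript
// Hypothetical sketch of the gateway's fallback chain: providers are
// tried in order; the first success wins, and only total failure errors.
type Provider = (prompt: string) => Promise<string>;

async function completeWithFallback(
  providers: Provider[],
  prompt: string,
): Promise<string> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await provider(prompt); // first successful response wins
    } catch (err) {
      lastError = err; // this provider failed; fall back to the next one
    }
  }
  throw new Error(`All providers failed: ${String(lastError)}`);
}
```

In practice the real gateway layers retries and caching on top of this loop, but the ordering guarantee is the core idea.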
Your users can create, organize, and version prompt templates with variables. Every edit creates a new version they can roll back to. Folders, tags, per-version model parameters, and import/export across environments — dev to staging to production.
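Variable substitution in a template can be sketched as a single interpolation pass. The `{{name}}` placeholder syntax here is an assumption for illustration, not the plugin's documented format.

```typescript
// Hypothetical sketch of prompt-template rendering: replace each
// {{name}} placeholder with its variable value, leaving unknown
// placeholders intact so missing variables are easy to spot.
function renderTemplate(
  template: string,
  variables: Record<string, string>,
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name: string) =>
    name in variables ? variables[name] : match,
  );
}
```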
Build AI agents that call external tools. Register built-in JavaScript functions, webhook endpoints, or custom code as tools. Combine a model, system prompt, and toolset into reusable agents that handle multi-step tasks autonomously.
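Registering a JavaScript function as a tool and dispatching a model's tool call by name can be sketched as a small registry. The names and shapes below are illustrative assumptions, not the plugin's actual API.

```typescript
// Hypothetical sketch of a tool registry: tools are plain functions
// keyed by name, and a tool call from the model is dispatched by
// looking up that name.
type Tool = (args: Record<string, unknown>) => unknown;

const tools = new Map<string, Tool>();

function registerTool(name: string, fn: Tool): void {
  tools.set(name, fn);
}

function dispatchToolCall(
  name: string,
  args: Record<string, unknown>,
): unknown {
  const tool = tools.get(name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool(args);
}
```

An agent in this model is just a system prompt, a model, and a set of registered tool names bundled together.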
Protect your AI pipeline automatically. PII detection, content blocklists, regex pattern matching, and length limits run in the gateway pipeline on every request. Test guardrail rules against sample text before deploying to production.
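A guardrail pass can be sketched as a set of checks that run before a request reaches the model. The email regex and default length limit below are illustrative assumptions, not the plugin's built-in rules.

```typescript
// Hypothetical sketch of a pre-request guardrail pass: regex-based
// PII detection plus a length limit, returning every reason a request
// was blocked rather than stopping at the first.
interface GuardrailResult {
  allowed: boolean;
  reasons: string[];
}

const EMAIL = /[\w.+-]+@[\w-]+\.[\w.]+/;

function checkGuardrails(input: string, maxLength = 4000): GuardrailResult {
  const reasons: string[] = [];
  if (EMAIL.test(input)) reasons.push("possible email address (PII)");
  if (input.length > maxLength) reasons.push("input exceeds length limit");
  return { allowed: reasons.length === 0, reasons };
}
```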
Every AI request is logged with token counts, cost, model, and latency. Set daily or monthly budgets per organization — the gateway rejects requests when exceeded. Dashboards show cost by model, aggregated by period, with a searchable request log.
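The budget check reduces to a simple comparison: would this request's estimated cost push the period's spend past the cap? A minimal sketch, with hypothetical names:

```typescript
// Hypothetical sketch of per-organization budget enforcement: a request
// is admitted only if its estimated cost fits within the remaining
// daily or monthly budget.
function withinBudget(
  spentSoFar: number,    // USD already spent this period
  estimatedCost: number, // USD estimate for the incoming request
  budget: number,        // daily or monthly cap in USD
): boolean {
  return spentSoFar + estimatedCost <= budget;
}
```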
Process hundreds of AI requests at once. Upload a CSV or JSON batch, and the system runs them asynchronously via background jobs. Monitor progress in real time, cancel running jobs, and download completed results.
A built-in testing environment where your users can experiment with models, prompts, and tools. Create sessions, compare outputs side-by-side across different models, and share sessions with team members via tokens.
Replaces the basic Codapult AI chat with a full-featured interface. Tool calls and results appear inline, each message shows token count and cost, users can select which agent to chat with, and conversation history is fully searchable.
Building a product with AI chat, content generation, or copilot features? AI Kit gives you the gateway, cost controls, and guardrails from day one — so you focus on your product, not LLM infrastructure.
Need AI-assisted workflows for your team? Use the agent framework to build custom tools — document summarizers, data extractors, or support assistants — without managing provider APIs directly.
Use GPT-4o for complex reasoning and Claude for content generation. The gateway routes each request to the right provider with fallbacks, so you get the best model for each task without vendor lock-in.
Add to your Codapult project with a single command:
npx @codapult/cli plugins add @codapult/plugin-ai-kit
OpenAI, Anthropic, Google (Gemini), Groq, Together AI, and Ollama. You can also connect any OpenAI-compatible endpoint as a custom provider via the gateway configuration.
It enhances it. The base Codapult template includes basic AI chat — AI Kit upgrades it with multi-provider routing, prompt management, tool call visualization, cost tracking per message, and agent selection. The core chat still works without the plugin.
Yes. The cost analytics system logs every request with token counts and cost. You can set daily or monthly budget limits per organization — when exceeded, the gateway rejects new requests automatically. All costs are visible in the analytics dashboard.
Get all 4 premium plugins for $129 instead of $176.