Overview
AI Kit is a premium plugin that adds a complete AI platform to your Codapult project. It replaces the basic built-in AI chat with a production-grade infrastructure: multi-provider gateway with fallback chains, prompt management with versioning, a tool & agent framework, input/output guardrails, cost analytics with budgets, batch processing, and an interactive playground.
Package: @codapult/plugin-ai-kit · Price: $49 (one-time)
Install with:
npx @codapult/cli plugins add @codapult/plugin-ai-kit
Features
AI Gateway
Route requests across multiple LLM providers through a unified API endpoint. The gateway pipeline handles:
- Provider routing — send requests to OpenAI, Anthropic, Google, Groq, Together AI, Ollama, or any OpenAI-compatible endpoint
- Fallback chains — if the primary provider fails or is unavailable, the request automatically falls back to the next provider in the chain
- Retries — configurable retry count with backoff for transient failures
- Response caching — cache identical requests to reduce costs and latency
- Budget enforcement — check org-level budgets before processing; reject when exceeded
- Streaming & non-streaming — full pipeline support (guardrails, tools, logging) in both modes
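The fallback-chain and retry behavior described above can be sketched as follows. This is an illustrative model of the routing logic, not the plugin's actual internals; the function and type names are assumptions.

```typescript
// A provider call is anything that returns a completion (or throws).
type ProviderCall = () => Promise<string>;

// Retry a single provider with exponential backoff for transient failures.
async function withRetries(call: ProviderCall, retries: number): Promise<string> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await call();
    } catch (err) {
      lastError = err;
      // Backoff between attempts, capped for this sketch.
      await new Promise((r) => setTimeout(r, Math.min(2 ** attempt * 100, 2000)));
    }
  }
  throw lastError;
}

// Try each provider in chain order; move on only after retries are exhausted.
async function routeWithFallback(chain: ProviderCall[], retries = 2): Promise<string> {
  for (const call of chain) {
    try {
      return await withRetries(call, retries);
    } catch {
      // Fall through to the next provider in the chain.
    }
  }
  throw new Error("All providers in the fallback chain failed");
}
```

A chain like `routeWithFallback([callOpenAI, callAnthropic])` only reaches the second provider after the first has failed all of its retry attempts.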
Prompt Management
Create, organize, and version prompt templates:
- Template variables — define reusable prompts with `{{variable}}` placeholders
- Version history — every edit creates a new version; roll back to any previous version
- Folders and tags — organize prompts by use case or team
- Import/Export — move prompts between environments (dev → staging → prod)
- Model parameters — per-version settings for temperature, max tokens, top-p, etc.
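A minimal sketch of `{{variable}}` substitution, assuming unknown placeholders are left untouched (the plugin's real renderer may handle escaping and missing variables differently):

```typescript
// Replace {{name}} placeholders with values from a variables map.
function renderPrompt(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, name: string) =>
    name in vars ? vars[name] : match // leave unknown placeholders as-is
  );
}

// renderPrompt("Summarize {{doc}} in {{lang}}", { doc: "the report", lang: "French" })
// → "Summarize the report in French"
```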
Tool & Agent Framework
Build AI agents that can call external tools:
- Tool types — built-in (JavaScript functions), webhook (HTTP endpoints), or custom code
- Agent configuration — combine a model, system prompt, and set of tools into a reusable agent
- Multi-step execution — agents can call multiple tools in sequence to complete complex tasks
- Import/Export — share tool definitions across projects
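The shapes below illustrate how a tool and an agent might fit together. Field names here are assumptions based on the feature list, not the plugin's exact schema:

```typescript
// A tool is either a built-in JavaScript function or an HTTP webhook.
type Tool =
  | { kind: "builtin"; name: string; run: (args: Record<string, unknown>) => Promise<unknown> }
  | { kind: "webhook"; name: string; url: string };

// An agent combines a model, a system prompt, and a set of tools.
interface Agent {
  model: string;
  systemPrompt: string;
  tools: Tool[];
}

// A builtin tool is just a JavaScript function the agent can call.
const weatherTool: Tool = {
  kind: "builtin",
  name: "get_weather",
  run: async ({ city }) => ({ city, tempC: 21 }), // stubbed result
};

const agent: Agent = {
  model: "gpt-4o-mini",
  systemPrompt: "You are a helpful assistant.",
  tools: [weatherTool],
};
```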
Guardrails
Protect your AI pipeline with input/output filtering:
- PII detection — automatically flag or redact personally identifiable information
- Content blocklist — block messages containing specific words or phrases
- Regex rules — custom pattern matching for domain-specific filtering
- Length limits — enforce minimum/maximum token or character counts
- Output validation — verify AI responses meet format or content requirements
- Test endpoint — test guardrail rules against sample text before deploying
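Conceptually, input guardrail evaluation collects violations from each rule type and allows the request only if none fire. The rule shapes below are illustrative, not the plugin's schema:

```typescript
interface GuardrailResult { allowed: boolean; violations: string[] }

// Evaluate blocklist, regex, and length rules against an input string.
function checkInput(
  text: string,
  opts: { blocklist?: string[]; patterns?: RegExp[]; maxChars?: number }
): GuardrailResult {
  const violations: string[] = [];
  for (const word of opts.blocklist ?? []) {
    if (text.toLowerCase().includes(word.toLowerCase())) violations.push(`blocked term: ${word}`);
  }
  for (const re of opts.patterns ?? []) {
    if (re.test(text)) violations.push(`pattern match: ${re}`);
  }
  if (opts.maxChars !== undefined && text.length > opts.maxChars) {
    violations.push(`exceeds ${opts.maxChars} characters`);
  }
  return { allowed: violations.length === 0, violations };
}
```

A PII-style regex rule, for example, can flag SSN-like strings before they ever reach a provider.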
Cost Analytics & Budgets
Track and control AI spending:
- Per-request logging — token counts, cost, model, latency for every request
- Model pricing — configurable per-model pricing (admin-editable)
- Cost dashboards — summary, by-model breakdown, aggregated by period (daily/weekly/monthly)
- Budget alerts — set daily or monthly budgets per organization; gateway rejects requests when exceeded
- Request log — paginated, searchable log of all AI requests
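Per-request cost follows directly from the logged token counts and the configured per-model pricing. The rates below are placeholders, not the plugin's defaults:

```typescript
// Per-token rates, as an admin might configure them (illustrative values).
interface ModelPricing { inputPerToken: number; outputPerToken: number }

const pricing: Record<string, ModelPricing> = {
  "gpt-4o-mini": { inputPerToken: 0.15e-6, outputPerToken: 0.6e-6 },
};

// Cost of one request = input tokens * input rate + output tokens * output rate.
function requestCost(model: string, inputTokens: number, outputTokens: number): number {
  const p = pricing[model];
  if (!p) throw new Error(`no pricing configured for ${model}`);
  return inputTokens * p.inputPerToken + outputTokens * p.outputPerToken;
}
```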
Batch Processing
Process bulk AI requests asynchronously:
- CSV/JSON upload — submit a batch of prompts as a file
- Background execution — jobs run via Codapult's background job system
- Progress monitoring — track job status and completion percentage
- Cancel support — cancel running batch jobs
- Results download — retrieve completed batch results
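A batch upload reduces to a list of prompt items the background job can work through. This sketch assumes one prompt per CSV line with an optional header row; real parsing would also handle quoting and multiple columns:

```typescript
interface BatchItem { id: number; prompt: string }

// Turn an uploaded CSV string into batch items, skipping a "prompt" header.
function parseBatchCsv(csv: string): BatchItem[] {
  const lines = csv.split("\n").map((l) => l.trim()).filter(Boolean);
  const rows = lines[0]?.toLowerCase() === "prompt" ? lines.slice(1) : lines;
  return rows.map((prompt, i) => ({ id: i, prompt }));
}
```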
AI Playground
Interactive environment for testing and experimenting:
- Session management — create, save, and resume playground sessions
- Side-by-side comparison — test the same prompt across different models
- Shareable sessions — enable/disable sharing via token for team collaboration
- Tool testing — test tool calls within the playground environment
Enhanced Chat
Replaces the core Codapult AI chat with advanced features:
- Tool call visualization — see tool calls and results inline in the chat
- Cost badges — token count and cost displayed per message
- Agent selection — choose which agent to chat with
- Conversation history — full chat memory with search
Supported Providers
| Provider | Package | Example Models |
|---|---|---|
| OpenAI | @ai-sdk/openai (in core) | GPT-4o, GPT-4o-mini, o1, o3 |
| Anthropic | @ai-sdk/anthropic (in core) | Claude Sonnet, Haiku |
| Google | @ai-sdk/google (optional) | Gemini 2.5 Pro/Flash |
| Groq | @ai-sdk/groq (optional) | Llama, Mixtral |
| Together AI | @ai-sdk/togetherai (optional) | Open-source models |
| Ollama | ollama-ai-provider (optional) | Self-hosted models |
| Custom | — | Any OpenAI-compatible endpoint |
Dashboard Pages
The plugin adds 9 pages to the dashboard under /dashboard/ai/:
| Page | Path | Description |
|---|---|---|
| AI Chat | /dashboard/ai/chat | Enhanced chat with tool calls and agents |
| Prompts | /dashboard/ai/prompts | Prompt template management |
| Tools | /dashboard/ai/tools | Tool registry |
| Agents | /dashboard/ai/agents | Agent configuration |
| Playground | /dashboard/ai/playground | Interactive testing environment |
| Batch Jobs | /dashboard/ai/batch | Batch processing management |
| Guardrails | /dashboard/ai/guardrails | Guardrail rule management |
| Analytics | /dashboard/ai/analytics | Cost dashboards and request log |
| Settings | /dashboard/ai/settings | Model pricing configuration |
API Routes
All routes are served under /api/plugins/ai-kit/. The plugin registers 30+ API routes including:
- `POST /gateway` — generate or stream AI response (main endpoint)
- `GET|POST|PUT|DELETE /prompts` — prompt CRUD + versioning
- `GET|POST|PUT|DELETE /tools` — tool CRUD
- `GET|POST|PUT|DELETE /agents` — agent CRUD + `/agents/run`
- `GET /analytics/summary|by-model|aggregated|budget|requests` — cost analytics
- `GET|PUT /pricing` — model pricing management
- `GET|POST|DELETE /batch` — batch job management
- `GET|POST|PUT|DELETE /guardrails` + `POST /guardrails/test` — guardrail CRUD + testing
- `GET|POST|PUT|DELETE /playground/sessions` + `POST /playground/share` — playground sessions
- `POST /export/prompts|tools|guardrails` + `POST /import/prompts|tools|guardrails` — import/export
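As a hypothetical example of calling the main endpoint: the field names below (`model`, `fallback`, `stream`, `cache`, `messages`) are assumptions inferred from the feature list, not the documented request schema.

```typescript
// Assumed shape of a request body for POST /api/plugins/ai-kit/gateway.
const gatewayRequest = {
  model: "gpt-4o-mini",
  fallback: ["claude-3-5-haiku"], // tried in order if the primary fails
  stream: false,
  cache: true, // allow response caching for identical requests
  messages: [{ role: "user", content: "Summarize this release note." }],
};

// In an app you would send it with fetch:
// await fetch("/api/plugins/ai-kit/gateway", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(gatewayRequest),
// });
```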
Database Tables
The plugin creates 9 tables prefixed with ai_:
| Table | Purpose |
|---|---|
| `ai_prompt` | Prompt templates with folders and tags |
| `ai_prompt_version` | Version history per prompt (model params, content) |
| `ai_tool` | Tool definitions (builtin, webhook, javascript) |
| `ai_agent` | Agent configurations (model, system prompt, tools) |
| `ai_request_log` | Per-request metrics (tokens, cost, latency, model) |
| `ai_model_pricing` | Configurable per-model pricing (input/output per token) |
| `ai_batch_job` | Batch processing jobs with status and progress |
| `ai_guardrail` | Guardrail rule definitions (type, config, severity) |
| `ai_playground` | Playground sessions with sharing tokens |
Environment Variables
The plugin requires at least one AI provider API key (already configured if you use the built-in AI chat):
| Variable | Required | Description |
|---|---|---|
| `OPENAI_API_KEY` | One of | OpenAI API key |
| `ANTHROPIC_API_KEY` | One of | Anthropic API key |
| `GOOGLE_AI_API_KEY` | — | Google Gemini API key (optional) |
| `GROQ_API_KEY` | — | Groq API key (optional) |
| `TOGETHER_API_KEY` | — | Together AI API key (optional) |
| `OLLAMA_BASE_URL` | — | Ollama server URL (default: localhost) |
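A minimal `.env` sketch: one provider key is enough to start, and each extra key enables the corresponding routes in the gateway. Keys shown are placeholders.

```shell
OPENAI_API_KEY=sk-...        # primary provider
ANTHROPIC_API_KEY=sk-ant-... # enables fallback to Claude models

# Optional providers:
# GOOGLE_AI_API_KEY=...
# GROQ_API_KEY=...
# TOGETHER_API_KEY=...
# OLLAMA_BASE_URL=http://localhost:11434
```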
Integration with Core Modules
| Module | How AI Kit Uses It |
|---|---|
| Vercel AI SDK (`ai`) | Gateway pipeline, streaming, tool calls |
| `getAppSession()` | Auth check on all API routes |
| `checkRateLimit()` | Rate limiting on gateway endpoint |
| Background jobs | Batch processing execution |
| Database (Drizzle) | All data storage (prompts, tools, analytics, etc.) |