

AI Kit Plugin

Multi-provider AI gateway, prompt management, tool & agent framework, guardrails, cost analytics, batch processing, and playground.

Overview

AI Kit is a premium plugin that adds a complete AI platform to your Codapult project. It replaces the basic built-in AI chat with a production-grade infrastructure: multi-provider gateway with fallback chains, prompt management with versioning, a tool & agent framework, input/output guardrails, cost analytics with budgets, batch processing, and an interactive playground.

Package: @codapult/plugin-ai-kit · Price: $49 (one-time)

Install with:

npx @codapult/cli plugins add @codapult/plugin-ai-kit

Features

AI Gateway

Route requests across multiple LLM providers through a unified API endpoint. The gateway pipeline handles:

  • Provider routing — send requests to OpenAI, Anthropic, Google, Groq, Together AI, Ollama, or any OpenAI-compatible endpoint
  • Fallback chains — if the primary provider fails or is unavailable, the request automatically falls back to the next provider in the chain
  • Retries — configurable retry count with backoff for transient failures
  • Response caching — cache identical requests to reduce costs and latency
  • Budget enforcement — check org-level budgets before processing; reject when exceeded
  • Streaming & non-streaming — full pipeline support (guardrails, tools, logging) in both modes
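
The fallback-chain-with-retries behavior described above can be sketched as a small loop. The names and shapes here are illustrative, not the plugin's actual API:

```typescript
// Sketch of a fallback chain: try each provider in order, retrying each a
// bounded number of times before falling back to the next one.
type Provider = { name: string; call: (prompt: string) => Promise<string> };

async function callWithFallback(
  chain: Provider[],
  prompt: string,
  retriesPerProvider = 2,
): Promise<string> {
  let lastError: unknown;
  for (const provider of chain) {
    for (let attempt = 0; attempt <= retriesPerProvider; attempt++) {
      try {
        return await provider.call(prompt);
      } catch (err) {
        lastError = err; // transient failure: retry, then fall through
      }
    }
  }
  throw lastError; // every provider in the chain failed
}
```

In the real gateway this loop would also consult the cache and budget checks before dispatching, per the pipeline steps listed above.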

Prompt Management

Create, organize, and version prompt templates:

  • Template variables — define reusable prompts with {{variable}} placeholders
  • Version history — every edit creates a new version; roll back to any previous version
  • Folders and tags — organize prompts by use case or team
  • Import/Export — move prompts between environments (dev → staging → prod)
  • Model parameters — per-version settings for temperature, max tokens, top-p, etc.
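
Rendering a template with {{variable}} placeholders could look like the following minimal sketch (illustrative, not the plugin's implementation):

```typescript
// Substitute {{name}} placeholders from a variables map; placeholders with
// no matching variable are left intact rather than replaced with "undefined".
function renderPrompt(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in vars ? vars[name] : match,
  );
}
```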

Tool & Agent Framework

Build AI agents that can call external tools:

  • Tool types — built-in (JavaScript functions), webhook (HTTP endpoints), or custom code
  • Agent configuration — combine a model, system prompt, and set of tools into a reusable agent
  • Multi-step execution — agents can call multiple tools in sequence to complete complex tasks
  • Import/Export — share tool definitions across projects
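
Multi-step execution typically means looping: the model either returns a final answer or requests a tool call, whose result is fed back into the next step. A rough sketch, with all names hypothetical:

```typescript
// One agent step: either a final answer or a tool-call request.
type Step =
  | { type: "answer"; text: string }
  | { type: "tool"; name: string; args: string };
type Model = (history: string[]) => Step;
type Tools = Record<string, (args: string) => string>;

function runAgent(model: Model, tools: Tools, input: string, maxSteps = 5): string {
  const history = [input];
  for (let i = 0; i < maxSteps; i++) {
    const step = model(history);
    if (step.type === "answer") return step.text;
    const tool = tools[step.name];
    if (!tool) throw new Error(`unknown tool: ${step.name}`);
    const result = tool(step.args);            // execute the requested tool
    history.push(`${step.name} -> ${result}`); // feed the result back in
  }
  throw new Error("agent exceeded max steps");
}
```

A cap on steps (maxSteps here) is the usual safeguard against agents looping forever.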

Guardrails

Protect your AI pipeline with input/output filtering:

  • PII detection — automatically flag or redact personally identifiable information
  • Content blocklist — block messages containing specific words or phrases
  • Regex rules — custom pattern matching for domain-specific filtering
  • Length limits — enforce minimum/maximum token or character counts
  • Output validation — verify AI responses meet format or content requirements
  • Test endpoint — test guardrail rules against sample text before deploying
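
Evaluating a set of rules against input text can be sketched as below, covering three of the rule types listed (blocklist, regex, length). The rule shapes are assumptions for illustration:

```typescript
// Return the list of rule types the text violates (empty = text passes).
type Guardrail =
  | { type: "blocklist"; words: string[] }
  | { type: "regex"; pattern: string }
  | { type: "length"; min: number; max: number };

function checkGuardrails(text: string, rules: Guardrail[]): string[] {
  const violations: string[] = [];
  for (const rule of rules) {
    if (rule.type === "blocklist" &&
        rule.words.some(w => text.toLowerCase().includes(w.toLowerCase()))) {
      violations.push("blocklist");
    } else if (rule.type === "regex" && new RegExp(rule.pattern).test(text)) {
      violations.push("regex");
    } else if (rule.type === "length" &&
               (text.length < rule.min || text.length > rule.max)) {
      violations.push("length");
    }
  }
  return violations;
}
```

A regex rule like `\d{3}-\d{2}-\d{4}` is a simple example of flagging SSN-shaped strings as part of PII detection.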

Cost Analytics & Budgets

Track and control AI spending:

  • Per-request logging — token counts, cost, model, latency for every request
  • Model pricing — configurable per-model pricing (admin-editable)
  • Cost dashboards — summary, by-model breakdown, aggregated by period (daily/weekly/monthly)
  • Budget alerts — set daily or monthly budgets per organization; gateway rejects requests when exceeded
  • Request log — paginated, searchable log of all AI requests
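
Per-request cost and budget enforcement reduce to simple arithmetic over per-token pricing. A sketch under assumed field names:

```typescript
// Cost of one request given per-token input/output pricing.
type Pricing = { inputPerToken: number; outputPerToken: number };

function requestCost(inputTokens: number, outputTokens: number, p: Pricing): number {
  return inputTokens * p.inputPerToken + outputTokens * p.outputPerToken;
}

// Budget enforcement: reject before processing when the projected spend
// would push the organization past its daily or monthly budget.
function withinBudget(spentSoFar: number, projected: number, budget: number): boolean {
  return spentSoFar + projected <= budget;
}
```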

Batch Processing

Process bulk AI requests asynchronously:

  • CSV/JSON upload — submit a batch of prompts as a file
  • Background execution — jobs run via Codapult's background job system
  • Progress monitoring — track job status and completion percentage
  • Cancel support — cancel running batch jobs
  • Results download — retrieve completed batch results
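
As one illustration of the upload step, a simple CSV of prompts (one per line, first line a header) could be split into batch items like this; the actual accepted file format is not specified here:

```typescript
// Turn a one-column CSV of prompts into an array of batch items,
// skipping the header row and any blank lines.
function parseBatchCsv(csv: string): string[] {
  return csv
    .trim()
    .split("\n")
    .slice(1)                      // drop the header row
    .map(line => line.trim())
    .filter(line => line.length > 0);
}
```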

AI Playground

Interactive environment for testing and experimenting:

  • Session management — create, save, and resume playground sessions
  • Side-by-side comparison — test the same prompt across different models
  • Shareable sessions — enable/disable sharing via token for team collaboration
  • Tool testing — test tool calls within the playground environment

Enhanced Chat

Replaces the core Codapult AI chat with advanced features:

  • Tool call visualization — see tool calls and results inline in the chat
  • Cost badges — token count and cost displayed per message
  • Agent selection — choose which agent to chat with
  • Conversation history — full chat memory with search

Supported Providers

Provider      Package                          Example Models
-----------   ------------------------------   ------------------------------
OpenAI        @ai-sdk/openai (in core)         GPT-4o, GPT-4o-mini, o1, o3
Anthropic     @ai-sdk/anthropic (in core)      Claude Sonnet, Haiku
Google        @ai-sdk/google (optional)        Gemini 2.5 Pro/Flash
Groq          @ai-sdk/groq (optional)          Llama, Mixtral
Together AI   @ai-sdk/togetherai (optional)    Open-source models
Ollama        ollama-ai-provider (optional)    Self-hosted models
Custom        —                                Any OpenAI-compatible endpoint

Dashboard Pages

The plugin adds 9 pages to the dashboard under /dashboard/ai/:

Page         Path                        Description
----------   -------------------------   ----------------------------------------
AI Chat      /dashboard/ai/chat          Enhanced chat with tool calls and agents
Prompts      /dashboard/ai/prompts       Prompt template management
Tools        /dashboard/ai/tools         Tool registry
Agents       /dashboard/ai/agents        Agent configuration
Playground   /dashboard/ai/playground    Interactive testing environment
Batch Jobs   /dashboard/ai/batch         Batch processing management
Guardrails   /dashboard/ai/guardrails    Guardrail rule management
Analytics    /dashboard/ai/analytics     Cost dashboards and request log
Settings     /dashboard/ai/settings      Model pricing configuration

API Routes

All routes are served under /api/plugins/ai-kit/. The plugin registers 30+ API routes including:

  • POST /gateway — generate or stream AI response (main endpoint)
  • GET|POST|PUT|DELETE /prompts — prompt CRUD + versioning
  • GET|POST|PUT|DELETE /tools — tool CRUD
  • GET|POST|PUT|DELETE /agents — agent CRUD + /agents/run
  • GET /analytics/summary|by-model|aggregated|budget|requests — cost analytics
  • GET|PUT /pricing — model pricing management
  • GET|POST|DELETE /batch — batch job management
  • GET|POST|PUT|DELETE /guardrails + POST /guardrails/test — guardrail CRUD + testing
  • GET|POST|PUT|DELETE /playground/sessions + POST /playground/share — playground sessions
  • POST /export/prompts|tools|guardrails + POST /import/prompts|tools|guardrails — import/export
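
The main gateway endpoint can be called with a plain fetch. The route path comes from the list above; the request and response body shapes below are assumptions for illustration only:

```typescript
// POST to the documented gateway route; the { prompt } payload and the
// JSON response shape are hypothetical, not the plugin's documented schema.
async function generate(prompt: string): Promise<unknown> {
  const res = await fetch("/api/plugins/ai-kit/gateway", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) throw new Error(`gateway error: ${res.status}`);
  return res.json();
}
```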

Database Tables

The plugin creates 9 tables prefixed with ai_:

Table               Purpose
-----------------   -----------------------------------------------------
ai_prompt           Prompt templates with folders and tags
ai_prompt_version   Version history per prompt (model params, content)
ai_tool             Tool definitions (builtin, webhook, javascript)
ai_agent            Agent configurations (model, system prompt, tools)
ai_request_log      Per-request metrics (tokens, cost, latency, model)
ai_model_pricing    Configurable per-model pricing (input/output per token)
ai_batch_job        Batch processing jobs with status and progress
ai_guardrail        Guardrail rule definitions (type, config, severity)
ai_playground       Playground sessions with sharing tokens

Environment Variables

The plugin requires at least one AI provider API key (already configured if you use the built-in AI chat):

Variable            Required   Description
-----------------   --------   ---------------------------------------
OPENAI_API_KEY      One of     OpenAI API key
ANTHROPIC_API_KEY   One of     Anthropic API key
GOOGLE_AI_API_KEY   —          Google Gemini API key (optional)
GROQ_API_KEY        —          Groq API key (optional)
TOGETHER_API_KEY    —          Together AI API key (optional)
OLLAMA_BASE_URL     —          Ollama server URL (default: localhost)
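
A startup check could resolve the first configured provider from these variables. A sketch using the variable names from the table above (the precedence order is an assumption):

```typescript
// Pick the first provider whose API key (or base URL, for Ollama) is set;
// fail fast with a clear message when none is configured.
function resolveProvider(env: Record<string, string | undefined>): string {
  if (env.OPENAI_API_KEY) return "openai";
  if (env.ANTHROPIC_API_KEY) return "anthropic";
  if (env.GOOGLE_AI_API_KEY) return "google";
  if (env.GROQ_API_KEY) return "groq";
  if (env.TOGETHER_API_KEY) return "together";
  if (env.OLLAMA_BASE_URL) return "ollama";
  throw new Error("No AI provider configured: set at least one provider variable");
}
```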

Integration with Core Modules

Module               How AI Kit Uses It
------------------   --------------------------------------------------
Vercel AI SDK (ai)   Gateway pipeline, streaming, tool calls
getAppSession()      Auth check on all API routes
checkRateLimit()     Rate limiting on gateway endpoint
Background jobs      Batch processing execution
Database (Drizzle)   All data storage (prompts, tools, analytics, etc.)
Plugin System        Registration of routes, dashboard pages, and database tables