Deploy
Deploy your Copilot backend to any platform
Your Copilot backend uses standard Web APIs (fetch, Response, ReadableStream), so the same code runs everywhere — Vercel, Cloudflare, Deno, AWS, or your own servers.
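In practice, every target below wires the same Web-standard handler shape into its server entry point. A minimal sketch (the `createRuntime` options mirror the examples that follow; `handleChat` is a name invented here for illustration):

```ts
import { createRuntime } from '@yourgpt/llm-sdk';
import { createOpenAI } from '@yourgpt/llm-sdk/openai';

const runtime = createRuntime({
  provider: createOpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  model: 'gpt-4o',
});

// The portable core: a (Request) => Promise<Response> function.
// Each platform section below is just a different way to mount it.
export async function handleChat(req: Request): Promise<Response> {
  const body = await req.json();
  return runtime.stream(body).toResponse();
}
```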
Vercel
Deploy to Vercel with Next.js. Both the Serverless and Edge runtimes are supported.
Serverless (the default runtime):

```ts
import { createRuntime } from '@yourgpt/llm-sdk';
import { createOpenAI } from '@yourgpt/llm-sdk/openai';

const runtime = createRuntime({
  provider: createOpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  model: 'gpt-4o',
  systemPrompt: 'You are a helpful assistant.',
});

export async function POST(req: Request) {
  const body = await req.json();
  return runtime.stream(body).toResponse();
}
```

Edge functions have faster cold starts (~25ms vs ~250ms) and run closer to users.
Edge:

```ts
import { createRuntime } from '@yourgpt/llm-sdk';
import { createOpenAI } from '@yourgpt/llm-sdk/openai';

export const runtime = 'edge'; // Enable the Edge Runtime

// Named `rt` so it doesn't clash with the `runtime` segment config above
const rt = createRuntime({
  provider: createOpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  model: 'gpt-4o',
  systemPrompt: 'You are a helpful assistant.',
});

export async function POST(req: Request) {
  const body = await req.json();
  return rt.stream(body).toResponse();
}
```

Edge functions have a 30-second execution limit. For long-running agent loops, use Serverless instead.
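If you stay on Serverless for longer agent loops, Next.js route segment config can raise the route's execution window. A sketch, assuming the App Router; the maximum value depends on your Vercel plan:

```ts
// app/api/chat/route.ts (Serverless runtime)
// Allow up to 300 seconds for long-running agent loops.
// The ceiling depends on your Vercel plan.
export const maxDuration = 300;
```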
Non-Streaming
```ts
export async function POST(req: Request) {
  const body = await req.json();
  const result = await runtime.chat(body);
  return Response.json(result);
}
```

Deploy
```bash
npm i -g vercel
vercel
```

Set your environment variable in the Vercel dashboard or via the CLI:

```bash
vercel env add OPENAI_API_KEY
```

Cloudflare Workers
Deploy to Cloudflare's edge network with Workers, which runs in 300+ locations worldwide.
Using the SDK's `createHonoApp` helper:

```ts
import { createRuntime, createHonoApp } from '@yourgpt/llm-sdk';
import { createOpenAI } from '@yourgpt/llm-sdk/openai';

export interface Env {
  OPENAI_API_KEY: string;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const runtime = createRuntime({
      provider: createOpenAI({ apiKey: env.OPENAI_API_KEY }),
      model: 'gpt-4o',
      systemPrompt: 'You are a helpful assistant.',
    });
    return createHonoApp(runtime).fetch(request, env);
  },
};
```

Or handle requests manually (non-streaming):

```ts
import { createRuntime } from '@yourgpt/llm-sdk';
import { createOpenAI } from '@yourgpt/llm-sdk/openai';

export interface Env {
  OPENAI_API_KEY: string;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method !== 'POST') {
      return new Response('Method not allowed', { status: 405 });
    }
    const runtime = createRuntime({
      provider: createOpenAI({ apiKey: env.OPENAI_API_KEY }),
      model: 'gpt-4o',
      systemPrompt: 'You are a helpful assistant.',
    });
    const body = await request.json();
    const result = await runtime.chat(body);
    return Response.json(result);
  },
};
```

Configuration
name = "my-copilot-api"
main = "src/index.ts"
compatibility_date = "2024-01-01"
compatibility_flags = ["nodejs_compat"]Deploy
```bash
npm i -g wrangler

# Add your API key as a secret
wrangler secret put OPENAI_API_KEY

# Deploy
wrangler deploy
```

Your API will be available at `https://my-copilot-api.<your-subdomain>.workers.dev`.
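You can smoke-test the deployed Worker with curl. The path and request body below are illustrative assumptions; use whatever route your app exposes and the message shape your runtime expects:

```bash
# Hypothetical route and body shape; adjust to your app
curl -X POST "https://my-copilot-api.<your-subdomain>.workers.dev/api/chat" \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Hello"}]}'
```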
Deno Deploy
Deploy to Deno's global edge network with zero configuration.
Using `createHonoApp` (`main.ts`):

```ts
import { createRuntime, createHonoApp } from '@yourgpt/llm-sdk';
import { createOpenAI } from '@yourgpt/llm-sdk/openai';

const runtime = createRuntime({
  // Non-null assertion: Deno.env.get() returns string | undefined
  provider: createOpenAI({ apiKey: Deno.env.get('OPENAI_API_KEY')! }),
  model: 'gpt-4o',
  systemPrompt: 'You are a helpful assistant.',
});

Deno.serve(createHonoApp(runtime).fetch);
```

Or handle requests manually (non-streaming):

```ts
import { createRuntime } from '@yourgpt/llm-sdk';
import { createOpenAI } from '@yourgpt/llm-sdk/openai';

const runtime = createRuntime({
  provider: createOpenAI({ apiKey: Deno.env.get('OPENAI_API_KEY')! }),
  model: 'gpt-4o',
  systemPrompt: 'You are a helpful assistant.',
});

Deno.serve(async (req: Request) => {
  if (req.method !== 'POST') {
    return new Response('Method not allowed', { status: 405 });
  }
  const body = await req.json();
  const result = await runtime.chat(body);
  return Response.json(result);
});
```

Deploy
```bash
# Install Deno Deploy CLI
deno install -Arf jsr:@deno/deployctl

# Deploy
deployctl deploy --project=my-copilot main.ts
```

Set environment variables in the Deno Deploy dashboard.
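To try the server locally before deploying (standard Deno permission flags; the inline env var is just one way to supply the key):

```bash
# --allow-net: serve HTTP and call the provider API
# --allow-env: read OPENAI_API_KEY
OPENAI_API_KEY=sk-... deno run --allow-net --allow-env main.ts
```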
AWS Lambda
Deploy to AWS Lambda using SST, Serverless Framework, or AWS CDK.
With SST, the handler reads the key from a linked secret:

```ts
import { createRuntime } from '@yourgpt/llm-sdk';
import { createOpenAI } from '@yourgpt/llm-sdk/openai';
import { Resource } from 'sst';

const runtime = createRuntime({
  provider: createOpenAI({ apiKey: Resource.OpenAIApiKey.value }),
  model: 'gpt-4o',
  systemPrompt: 'You are a helpful assistant.',
});

export async function handler(event: any) {
  const body = JSON.parse(event.body || '{}');

  // Non-streaming (Lambda default)
  const result = await runtime.chat(body);

  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(result),
  };
}
```

`sst.config.ts` (the secret must be defined and linked so `Resource.OpenAIApiKey` resolves):

```ts
export default $config({
  app(input) {
    return {
      name: 'my-copilot',
      home: 'aws',
      providers: { aws: { region: 'us-east-1' } },
    };
  },
  async run() {
    // Set the value once with: npx sst secret set OpenAIApiKey sk-...
    const openAIKey = new sst.Secret('OpenAIApiKey');
    const api = new sst.aws.Function('Chat', {
      handler: 'packages/functions/src/chat.handler',
      url: true,
      link: [openAIKey],
    });
    return { url: api.url };
  },
});
```

With the Serverless Framework:

```ts
import { createRuntime } from '@yourgpt/llm-sdk';
import { createOpenAI } from '@yourgpt/llm-sdk/openai';
import type { APIGatewayProxyHandler } from 'aws-lambda';

const runtime = createRuntime({
  provider: createOpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  model: 'gpt-4o',
  systemPrompt: 'You are a helpful assistant.',
});

export const chat: APIGatewayProxyHandler = async (event) => {
  const body = JSON.parse(event.body || '{}');
  const result = await runtime.chat(body);
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(result),
  };
};
```

`serverless.yml`:

```yaml
service: my-copilot
provider:
  name: aws
  runtime: nodejs20.x
  environment:
    OPENAI_API_KEY: ${env:OPENAI_API_KEY}

functions:
  chat:
    handler: handler.chat
    events:
      - http:
          path: /chat
          method: post
```

Streaming on Lambda
Streaming SSE responses from Lambda requires a Function URL with response streaming enabled; standard API Gateway endpoints do not support streaming responses.
```ts
import { createRuntime, createHonoApp } from '@yourgpt/llm-sdk';
import { createOpenAI } from '@yourgpt/llm-sdk/openai';
import { streamHandle } from 'hono/aws-lambda';

const runtime = createRuntime({
  provider: createOpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  model: 'gpt-4o',
});

export const handler = streamHandle(createHonoApp(runtime));
```
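If you deploy this with SST, response streaming is switched on per function. A sketch, assuming SST v3's `streaming` option on `sst.aws.Function`:

```ts
// In sst.config.ts run(): enable a streaming Function URL
const chat = new sst.aws.Function('ChatStream', {
  handler: 'packages/functions/src/stream.handler',
  url: true,
  streaming: true, // Lambda response streaming for SSE
});
```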
Deploy

```bash
# SST
npx sst deploy

# Serverless
serverless deploy
```

Express / Node.js
Deploy to any Node.js hosting (Railway, Render, Fly.io, DigitalOcean, etc.).
Streaming:

```ts
import express from 'express';
import { createRuntime } from '@yourgpt/llm-sdk';
import { createOpenAI } from '@yourgpt/llm-sdk/openai';

const app = express();
app.use(express.json());

const runtime = createRuntime({
  provider: createOpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  model: 'gpt-4o',
  systemPrompt: 'You are a helpful assistant.',
});

app.post('/api/chat', async (req, res) => {
  await runtime.stream(req.body).pipeToResponse(res);
});

app.listen(3000, () => {
  console.log('Server running on http://localhost:3000');
});
```

Non-streaming:

```ts
import express from 'express';
import { createRuntime } from '@yourgpt/llm-sdk';
import { createOpenAI } from '@yourgpt/llm-sdk/openai';

const app = express();
app.use(express.json());

const runtime = createRuntime({
  provider: createOpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  model: 'gpt-4o',
  systemPrompt: 'You are a helpful assistant.',
});

app.post('/api/chat', async (req, res) => {
  const result = await runtime.chat(req.body);
  res.json(result);
});

app.listen(3000, () => {
  console.log('Server running on http://localhost:3000');
});
```

Or serve both from one server:

```ts
import express from 'express';
import { createRuntime } from '@yourgpt/llm-sdk';
import { createOpenAI } from '@yourgpt/llm-sdk/openai';

const app = express();
app.use(express.json());

const runtime = createRuntime({
  provider: createOpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  model: 'gpt-4o',
  systemPrompt: 'You are a helpful assistant.',
});

// Streaming endpoint
app.post('/api/chat/stream', async (req, res) => {
  await runtime.stream(req.body).pipeToResponse(res);
});

// Non-streaming endpoint
app.post('/api/chat', async (req, res) => {
  const result = await runtime.chat(req.body);
  res.json(result);
});

app.listen(3000, () => {
  console.log('Server running on http://localhost:3000');
});
```

Deploy to Popular Platforms
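The same server deploys unchanged to most Node hosts. A sketch using each platform's own CLI (typical defaults; check your platform's docs for specifics):

```bash
# Railway
railway up

# Fly.io
fly launch
fly secrets set OPENAI_API_KEY=sk-...
fly deploy

# Render / DigitalOcean App Platform: connect the repo in the
# dashboard and set OPENAI_API_KEY as an environment variable.
```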
Docker
Self-host your Copilot backend with Docker.
Using `createHonoApp` with the Node server adapter:

```ts
import { serve } from '@hono/node-server';
import { createRuntime, createHonoApp } from '@yourgpt/llm-sdk';
import { createOpenAI } from '@yourgpt/llm-sdk/openai';

const runtime = createRuntime({
  provider: createOpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  model: 'gpt-4o',
  systemPrompt: 'You are a helpful assistant.',
});

const port = Number(process.env.PORT) || 3000;
serve({ fetch: createHonoApp(runtime).fetch, port }, (info) => {
  console.log(`Server running on http://localhost:${info.port}`);
});
```

Or with a hand-rolled Hono app (non-streaming):

```ts
import { serve } from '@hono/node-server';
import { Hono } from 'hono';
import { createRuntime } from '@yourgpt/llm-sdk';
import { createOpenAI } from '@yourgpt/llm-sdk/openai';

const runtime = createRuntime({
  provider: createOpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  model: 'gpt-4o',
  systemPrompt: 'You are a helpful assistant.',
});

const app = new Hono();

app.post('/api/chat', async (c) => {
  const body = await c.req.json();
  const result = await runtime.chat(body);
  return c.json(result);
});

const port = Number(process.env.PORT) || 3000;
serve({ fetch: app.fetch, port });
```
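A minimal `.dockerignore` keeps the build context small (a typical sketch for this layout):

```
node_modules
dist
.env
```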
Dockerfile

```dockerfile
FROM node:20-slim AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
ENV NODE_ENV=production
EXPOSE 3000
CMD ["node", "dist/server.js"]
```

Docker Compose
```yaml
services:
  copilot:
    build: .
    ports:
      - "3000:3000"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    restart: unless-stopped
```

Run
```bash
# Build and run
docker compose up -d

# Or without compose
docker build -t my-copilot .
docker run -p 3000:3000 -e OPENAI_API_KEY=sk-... my-copilot
```

Bun
Deploy with Bun, which offers fast startup and serves the same Hono app via its built-in HTTP server.
```ts
import { createRuntime, createHonoApp } from '@yourgpt/llm-sdk';
import { createOpenAI } from '@yourgpt/llm-sdk/openai';

const runtime = createRuntime({
  provider: createOpenAI({ apiKey: Bun.env.OPENAI_API_KEY }),
  model: 'gpt-4o',
  systemPrompt: 'You are a helpful assistant.',
});

const app = createHonoApp(runtime);

// Bun serves a default-exported { port, fetch } object automatically.
export default {
  port: 3000,
  fetch: app.fetch,
};
```

Run

```bash
bun run server.ts
```

Connect Frontend
Point your Copilot SDK frontend to your deployed API:
```tsx
'use client';

import { CopilotProvider } from '@yourgpt/copilot-sdk/react';

export function Providers({ children }: { children: React.ReactNode }) {
  return (
    <CopilotProvider
      runtimeUrl="https://your-api.example.com/api/chat"
      // For non-streaming:
      // streaming={false}
    >
      {children}
    </CopilotProvider>
  );
}
```

| Mode | Server method | CopilotProvider prop |
|---|---|---|
| Streaming | `.stream(body).toResponse()` | `streaming={true}` (default) |
| Non-streaming | `await runtime.chat(body)` | `streaming={false}` |
CORS
If your frontend and backend are served from different origins, add CORS headers:
In a Web-standard route handler:

```ts
const corsHeaders = {
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Methods': 'POST, OPTIONS',
  'Access-Control-Allow-Headers': 'Content-Type',
};

// Answer preflight requests
export async function OPTIONS() {
  return new Response(null, { headers: corsHeaders });
}

export async function POST(req: Request) {
  const body = await req.json();
  const response = runtime.stream(body).toResponse();
  // Add CORS headers to the streaming response
  Object.entries(corsHeaders).forEach(([key, value]) => {
    response.headers.set(key, value);
  });
  return response;
}
```

With Hono:

```ts
import { cors } from 'hono/cors';

const app = createHonoApp(runtime);
app.use('*', cors());
```

With Express:

```ts
import cors from 'cors';

app.use(cors());
```

Note that the manual Cloudflare and Deno handlers earlier in this guide return 405 for anything other than POST, which also rejects CORS preflight requests; handle OPTIONS before that method check.

Next Steps
- Server Setup — Full runtime configuration and options
- Tools — Add function calling to your Copilot
- Providers — Configure different LLM providers