
Deploy on Vercel: repository, env variables, preview → production

ChatGPT Apps
Level 7, Lesson 3

1. Why we use Vercel for a ChatGPT App

In previous lectures we ran GiftGenius locally and connected it to ChatGPT via Dev Mode and a tunnel. Now it’s time to take another step toward a “grown‑up” production setup and move the same code to Vercel.

By this point you already have a working GiftGenius (our training app). Locally it runs on Next.js 16 with an MCP endpoint (for example, /api/mcp) and is based on the official ChatGPT Apps SDK Next.js Starter.

You could go the “I rent a VPS, install Node and Nginx by hand, and configure everything myself” route, but for Next.js that’s roughly like writing front‑end with plain document.write in 2025. It works, but you’re definitely making life harder.

Vercel is great for us for several reasons.

First, it natively understands Next.js: it automatically configures builds, SSR, static assets, the edge layer, and serverless functions. For a ChatGPT App this is especially convenient because the widget and the MCP endpoint are deployed with one click and live in the same infrastructure.

Second, Vercel provides CI/CD out of the box: you connect a Git repository — and every push creates a new immutable deployment with a unique URL. Deployments from the main branch are considered production, from other branches — preview.

Third, Vercel has a pleasant story around environments and secrets. It explicitly separates env variables into Development, Preview, and Production, stores them encrypted, and makes it easy to pass them into Next.js. That’s exactly what a ChatGPT App needs, where keys and the MCP server URL must vary by environment.

Fourth, Vercel has convenient rollbacks: if a new release goes wrong, you can quickly promote a previous successful deployment and return the system to a working state. This reduces “deployment fear” and encourages small frequent releases.

And finally, Vercel is the company behind Next.js. They tuned Next.js to their servers, and their servers to Next.js. Using Vercel you’ll feel how smoothly everything works in just a couple of clicks. I guarantee you’ll like it.

2. Starting point: GiftGenius project structure

Per the course plan, our GiftGenius lives in a single repository. There are two organization options, and both are fine for Vercel:

1) Monorepo with multiple apps — for example:

giftgenius/
  apps/
    web/   # Next.js (widget + MCP)
    mcp/   # separate MCP server (if you extracted it)

2) A single Next.js project where both the widget and MCP live together (this is simpler at the first stage and is exactly how the official starter is set up):

giftgenius/
  app/
    page.tsx         # Widget
    api/
      mcp/route.ts   # MCP endpoint
  next.config.mjs
  package.json
  ...

In Module 2 lectures you already cloned the Apps SDK Starter, installed dependencies, and ran npm run dev. We now assume that:

  • the project is already in Git (GitHub / GitLab / Bitbucket);
  • locally you use .env.local with keys (OPENAI_API_KEY, etc.);
  • ChatGPT Dev Mode is connected to your tunnel.

Our goal is to make the same code build and run on Vercel, and to have ChatGPT call a stable HTTPS domain like https://giftgenius.vercel.app instead of a tunnel.

3. Preparing the repository for deployment

Before clicking “New Project” in Vercel, it’s worth tidying up the repository a bit. These are simple steps that will save a lot of time later.

First, make sure .env.local and .vercel do not get into the repository. In .gitignore of the Next.js Starter this is usually already there, but better double‑check:

node_modules
.next
.env.local
.vercel

.env.local is your local config and secrets. It should never be in Git, especially if it contains OPENAI_API_KEY or database keys. On Vercel we will store secrets separately in the UI.

Second, look at package.json. The correct scripts are important for Vercel:

{
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start"
  }
}

By default Vercel will call npm run build (or pnpm build if you use pnpm). This must build the project without errors.

Third, ensure that the Node version is specified and suitable for Next.js 16. In the Next.js 16 release notes the minimum version is 18.18.0. Most often it’s enough to have a field in package.json:

{
  "engines": {
    "node": ">=18.18.0"
  }
}

Vercel will pick an LTS Node version compatible with your app.

If all this is done, push the latest code to Git and move on to Vercel.

4. First project import into Vercel

Now go to the Vercel web interface. If you’re not registered there yet — now is the time.

You log in to Vercel, click “New Project”, and choose your giftgenius repository from the list. At this stage, Vercel checks the repository contents under the hood and almost always detects that it’s a Next.js project, applying the corresponding preset.

In the project settings, Vercel will suggest:

  • Framework = Next.js;
  • Build Command = npm run build (or pnpm build/yarn build);
  • Output Directory — the standard .next (no need to change).

For the first deployment you can skip env variables for now (we’ll add them in a separate step). Click “Deploy” — Vercel clones the repository, installs dependencies, runs npm run build and, if successful, creates the first deployment with an address like https://giftgenius-xyz.vercel.app.

It’s important to understand one thing: each deployment is immutable. If you later push changes, a new deployment with a new URL is created, and the old one remains in history. The production domain (for example, giftgenius.vercel.app or your custom domain) points to a specific deployment and can be switched back when doing a rollback.

Schematically, it looks like this:

flowchart LR
    A[GitHub repo<br/>giftgenius] -->|git push| B[Vercel build]
    B --> C[Preview Deploy #1<br/>unique URL]
    B --> D[Preview Deploy #2<br/>unique URL]
    D --> E[Production Alias<br/>giftgenius.vercel.app]

The Git branch main is usually considered the production branch; everything else is preview. But you can reconfigure this.

5. Environment variables on Vercel

Right now your first deployment is likely not very functional: there’s no OPENAI_API_KEY, the MCP server can’t call external APIs, etc. It’s time to handle env variables.

In Vercel, env variables are stored in Settings → Environment Variables. There you also see three scopes: Development, Preview, and Production.

A quick mental model table:

Scope         Where it is used                                                  Local equivalent
Development   vercel dev and local dev via the Vercel CLI                       .env.local
Preview       all deployments from branches other than the production branch    staging / test
Production    deployments from the production branch (usually main)             the “live” .env.prod

The difference from local .env.local is that Vercel stores values encrypted and automatically injects them as process.env.MY_VAR into Next.js code.
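Besides your own variables, Vercel also sets the system variable VERCEL_ENV to "production", "preview", or "development", which is exactly what an environment badge needs. Below is a minimal sketch of such a helper; `resolveEnvBadge` is an illustrative name, not part of any SDK.

```typescript
// Minimal sketch of an environment badge helper built on Vercel's
// system variable VERCEL_ENV ("production" | "preview" | "development").
// Locally (plain `next dev`) the variable is usually unset, so we fall
// back to "local". The function name is illustrative.
type EnvBadge = 'local' | 'preview' | 'production';

function resolveEnvBadge(vercelEnv: string | undefined): EnvBadge {
  if (vercelEnv === 'production') return 'production';
  if (vercelEnv === 'preview') return 'preview';
  // "development" (vercel dev) and a missing value both count as local here.
  return 'local';
}

// In a Next.js route you would call: resolveEnvBadge(process.env.VERCEL_ENV)
console.log(resolveEnvBadge('preview')); // → preview
```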

It’s very important to understand the NEXT_PUBLIC_ prefix. Anything that starts with NEXT_PUBLIC_ ends up in the browser bundle and is visible to any user (you can see it through DevTools). That’s good for public configuration (NEXT_PUBLIC_ENV=preview, NEXT_PUBLIC_API_BASE_URL=https://giftgenius.vercel.app) but is categorically bad for secrets like OPENAI_API_KEY.

For secrets we use names without NEXT_PUBLIC_ and read them only on the server side: in route handlers, MCP tools, etc.

6. Env setup for GiftGenius: example

Let’s see which env variables our training GiftGenius needs.

A minimal set might look like this:

  • OPENAI_API_KEY — key for calling models / the MCP client;
  • APP_BASE_URL — the app’s base URL (https://giftgenius.vercel.app or a preview URL);
  • possibly GIFTDATA_API_URL or PRODUCTS_API_URL if you have an external catalog.

In local development this lives in .env.local:

OPENAI_API_KEY=sk-local-...
APP_BASE_URL=http://localhost:3000
PRODUCTS_API_URL=https://dev-api.gifts.example.com

On Vercel, go to Settings → Environment Variables and add the same keys and values to the corresponding scopes.

An example of how this looks in the MCP endpoint code:

// app/api/mcp/route.ts
import { NextRequest } from 'next/server';

export async function POST(req: NextRequest) {
  // Read the secret on the server only, and check it before use.
  const apiKey = process.env.OPENAI_API_KEY;
  if (!apiKey) {
    return new Response('Missing OPENAI_API_KEY', { status: 500 });
  }
  // Call OpenAI or another service with apiKey...
  return new Response('ok');
}

The widget can use APP_BASE_URL on the server side, for example to build absolute links, taking into account the ChatGPT iframe and the assetPrefix/basePath configuration from the starter template.

If it needs a public API URL (for example, for window.fetch to your backend), you can introduce NEXT_PUBLIC_API_BASE_URL for that. But under no circumstances NEXT_PUBLIC_OPENAI_API_KEY.
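To keep this split honest in code, it helps to load public and private config through separate functions. The lesson's task suggests Zod for this; here is a dependency-free sketch of the same idea, with all function names being illustrative assumptions.

```typescript
// Dependency-free sketch of a type-safe public/private env split.
// (The lesson's task uses Zod; this shows the same idea without a library.)
// All names here are illustrative, not part of any official API.

function requireEnv(name: string, env: Record<string, string | undefined>): string {
  const value = env[name];
  if (!value) throw new Error(`Missing required env variable: ${name}`);
  return value;
}

// Server-only secrets: importing this from client code should be a red flag.
function loadServerEnv(env: Record<string, string | undefined>) {
  return {
    openaiApiKey: requireEnv('OPENAI_API_KEY', env),
    appBaseUrl: requireEnv('APP_BASE_URL', env),
  };
}

// Public config: only NEXT_PUBLIC_* values, safe to ship to the browser.
function loadPublicEnv(env: Record<string, string | undefined>) {
  return {
    apiBaseUrl: requireEnv('NEXT_PUBLIC_API_BASE_URL', env),
  };
}

// In a real app you would pass process.env and call loadServerEnv only
// from server-side code (route handlers, MCP tools).
```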

7. Preview deployments: staging on steroids

Now the fun part: preview deployments. When you connect a Git repository, Vercel automatically creates a preview deployment for every push to a non‑production branch or for every Pull Request. Each such deployment has a unique URL, like:

https://giftgenius-git-feature-new-layout-username.vercel.app

These deployments use the Preview scope for env variables, so you can set, for example:

# Preview env on Vercel
APP_BASE_URL=https://giftgenius-staging.vercel.app
PRODUCTS_API_URL=https://staging-api.gifts.example.com

and not mix it up with production.

From the ChatGPT Dev Mode perspective, a preview URL is a perfect staging candidate. In your Dev App settings you can temporarily change the endpoint from the tunnel URL to the preview URL and see how the built version of GiftGenius behaves before it becomes a production deployment.

A common approach: for a feature you create a branch feature/smart-recommendations, push changes — Vercel provides a preview link. You go to Dev Mode, change the URL to that link, test flows with GPT (gift selection, card display, MCP tool calls). Only when everything is OK do you merge into main. Production continues its quiet life.

A mental pipeline diagram:

flowchart TD
    A[Local dev<br/>localhost + tunnel] --> B[git push<br/>feature/*]
    B --> C[Preview Deploy<br/>preview-URL]
    C --> D[ChatGPT Dev Mode<br/>App → preview-URL]
    C --> E[Code review / tests]
    E --> F[Merge into main]
    F --> G[Production Deploy<br/>prod-URL]
    G --> H[ChatGPT Prod App<br/>App → prod-URL]

8. Production deployment and rollback

When you merge changes into main (or another branch you’ve chosen as the production branch), Vercel creates a production deployment and assigns the production alias: giftgenius.vercel.app or your own domain.

At this point the ChatGPT Prod App (which you’ll create a bit later) should be configured to use the production URL. In Dev Mode you continue to experiment with the tunnel or a preview URL; regular users in the ChatGPT Store will hit production.

The advantage of immutable deployments is that rollback becomes very simple. If a new release turns out to be unsuccessful (for example, an MCP tool crashes on live data), you don’t need to urgently fix production. You open the list of deployments in Vercel, pick the previous successful one, and click something like “Promote to Production” — the production alias switches behind the scenes, and your domain points to a stable version again.

In the CLI you can automate this via the vercel rollback command, but for our course it’s enough to understand the idea: each deployment is a separate artifact, and the production alias can point to any of them.

9. Specifics of Next.js 16 + MCP on Vercel

From Vercel’s point of view, your MCP endpoint in Next.js is a serverless function (or an edge function if you configured it that way). It lives briefly: wakes up on a request, handles it, and dies. You cannot store state between calls unless you use an external DB or some storage.

This is critical for MCP: if you suddenly decide to store conversation history in a global array let history = [] in route.ts, it will be reset on every cold start. To store state you need an external system (KV, Postgres, etc.), but that’s for future modules.

The second aspect is execution timeouts. On free plans, Vercel serverless functions have a time limit (at the time of writing — around 10 seconds on Hobby, more on Pro). For LLM requests and especially for chains of MCP tools, this may be insufficient.

In Next.js 16 for route handlers you can set maxDuration to explicitly ask Vercel for more time (within your plan’s limits):

// app/api/mcp/route.ts
export const maxDuration = 60; // seconds; on Pro you can go up to 300

export async function POST(req: Request) {
  // long operation: request to OpenAI, external DB, etc.
}

This isn’t a magic “run forever” switch, but it’s the correct way to tell Vercel: “this function may run longer; please don’t kill it too early.”
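It also helps to bound your own slow calls so they fail a little before the platform limit: that way ChatGPT gets a clean error response instead of a killed invocation. A minimal sketch, where `withTimeout` is an illustrative helper and not a Vercel API:

```typescript
// Sketch of keeping a slow external call inside the function's time budget:
// race the real work against a timer so the handler can return a clean
// error instead of being killed mid-flight by the platform.
// `withTimeout` is an illustrative helper, not a Vercel or Next.js API.

function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error(`Timed out after ${ms} ms`)), ms);
    work.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}

// Usage inside a route handler (sketch): give the upstream call a budget
// slightly below maxDuration, e.g.
//   const result = await withTimeout(callOpenAI(req), 50_000);
```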

Finally, don’t forget the specifics of the ChatGPT iframe. In the Apps SDK Starter, assetPrefix and basePath are already configured so that static assets and routes work correctly inside nested iframes on web-sandbox.oaiusercontent.com. Thanks to this, all requests go to your domain, not to the sandbox. When deploying to Vercel this config stays in place, so you get correct widget behavior out of the box.

10. Integration with ChatGPT after deployment

Although formally this belongs to the modules about the Store and production, the app’s lifecycle and the integration logic with ChatGPT after deployment are simple and fit well here.

First you deploy GiftGenius to Vercel and get a production URL. Then in ChatGPT in Dev Mode you create a separate app, for example GiftGenius Prod, and in its settings specify this URL as the endpoint (more precisely, the MCP endpoint like https://giftgenius.vercel.app/api/mcp according to the OpenAI Apps SDK Deploy guide).

For development you continue to use a Dev App pointing at your tunnel or a preview URL. For testing daily/weekly builds you can create a Staging App, linking it to a permanent preview alias. As a result, you get a three‑stage scheme:

Dev App     → local tunnel or dev URL (unstable)
Staging App → stable preview/staging URL on Vercel
Prod App    → production URL on Vercel

For reference, let’s summarize in one table:

What          URL / deployment on Vercel                Scope on Vercel   Who uses it
Dev App       local tunnel / vercel dev                 Development       you / team
Staging App   stable preview alias                      Preview           team / QA
Prod App      giftgenius.vercel.app / custom domain     Production        users

It’s the same local / staging / prod model we discussed at the beginning of the module, now tied to Vercel and ChatGPT Apps. This is the architecture of a mature project, not an eternal localhost.

11. Common mistakes when deploying to Vercel

Mistake #1: secrets are only in .env.local, but not on Vercel.
A very common scenario: locally everything works, you confidently click “Deploy”, the app builds, but MCP tools in production return 500 with the text “Missing OPENAI_API_KEY”. The reason is simple: Vercel doesn’t know about your local .env.local. You need to add the same variables separately in the project settings on Vercel (and in the correct scopes: Preview, Production).

Mistake #2: using NEXT_PUBLIC_ for sensitive data.
The desire to “just make it work” sometimes wins, and a developer writes NEXT_PUBLIC_OPENAI_API_KEY to access the key from client code. As a result the key ends up in the JS bundle and is available to anyone. This isn’t just bad practice; it’s a direct path to leakage and key revocation. Keep all secrets without the prefix and only on the server side.

Mistake #3: environment mismatch between local and Vercel.
Locally you may have one URL for products (http://localhost:4000), on Vercel — another (https://api.gifts-staging.com), and in production — a third. If you don’t keep a careful list of env variables and don’t verify that they’re correctly set in Preview/Production, it’s easy to end up with the production widget calling the staging backend and the staging widget calling prod. Simple discipline helps: document every necessary variable and verify them in each environment.

Mistake #4: ignoring execution time limits for MCP endpoints.
Locally you might wait 30 seconds for a slow external system and not notice issues. On Vercel the same function can time out after 10–15 seconds, and ChatGPT will see an error. If you haven’t set maxDuration and don’t monitor the runtime of MCP tools, this can turn into intermittent failures in production.

Mistake #5: trying to keep MCP state in serverless function memory.
Sometimes it’s tempting to store conversation history or a recommendation cache in a global variable let cache = {} right in the route handler file. Locally, while the dev server runs for a long time, this may even “work”. But on Vercel each serverless function lives briefly and is frequently recreated. As a result some requests “see” an old cache, some — a new one, and some — an empty one. This causes weird bugs that are hard to reproduce. For state you need an external DB or KV store; at the level of this lecture it’s better to treat the MCP endpoint as stateless.

Tasks (ChatGPT Apps, level 7, lesson 3)

1. Health endpoint + environment badge (local / preview / production)
2. Type-safe env configuration (Zod) with public/private split