1. Two paths out: navigation and data
If a typical Next.js developer hears “we need to hit the server,” their hand automatically reaches for fetch or a favorite HTTP client. In the world of ChatGPT Apps, that reflex leads to pain.
In the course section on widget security in ChatGPT Apps, we suggest breaking that old reflex from the very start. A widget does not live on the open internet: it sits in strict isolation, and its network access is filtered and restricted by host policies.
A widget has only three basic windows to the outside:
- Navigation: send the user somewhere out into the world. For that, use openExternal.
- Data exchange: receive/send JSON and talk to your backend. This goes through fetch, but with strong constraints.
- MCP tool calls: invoking tools on the MCP server/backend, which don’t have those limitations.
In this lecture we focus on the first and safest path (navigation) and carefully introduce controlled fetch. In the following modules, we’ll cover MCP and tools as the primary way to do serious server-side work.
2. openExternal: a safe “teleport” for the user
Why you can’t just call window.open
In a regular web app, you’d do something like this:
window.open("https://example.com", "_blank");
In the ChatGPT sandbox this will either not work or behave very strangely. A widget is an isolated iframe with a strict sandbox and does not have the same privileges as a browser tab.
In addition, the ChatGPT host wants to control where and when you take the user, in order to:
- prevent hidden tracking;
- show a clear confirmation UI (especially in mobile/desktop clients);
- ensure consistent link behavior across environments (web, desktop, mobile app).
That’s why there is a dedicated openExternal API available via window.openai or the more convenient React hook useOpenExternal.
What useOpenExternal looks like
In the official Apps SDK examples, the useOpenExternal hook is implemented roughly like this:
import { useCallback } from "react";

export function useOpenExternal() {
  const openExternal = useCallback((href: string) => {
    if (typeof window === "undefined") return;
    if (window?.openai?.openExternal) {
      try {
        window.openai.openExternal({ href });
        return;
      } catch (error) {
        console.warn("openExternal failed, falling back to window.open", error);
      }
    }
    window.open(href, "_blank", "noopener,noreferrer");
  }, []);

  return openExternal;
}
The key idea is simple. First we try to use the native ChatGPT mechanism (window.openai.openExternal). If the widget happens to be rendered outside ChatGPT (e.g., you just opened it in a browser during development), we gracefully fall back to window.open.
In your application this hook already exists in the template (if you took the standard repository from OpenAI), and you should use it this way — not by manually reaching into window.openai.
Example: “View in store” button in GiftGenius
Imagine that the toolOutput of our GiftGenius contains recommendations with a url field pointing to the product page. Let’s add a button to each card that opens the product on your site:
import { useWidgetProps } from "../hooks/use-widget-props";
import { useOpenExternal } from "../hooks/use-open-external";

export function GiftListWidget() {
  const { toolOutput } = useWidgetProps<{
    recommendations: { id: string; title: string; price: string; url: string }[];
  }>();
  const openExternal = useOpenExternal();

  if (!toolOutput) return <p>No recommendations yet…</p>;

  return (
    <div>
      {toolOutput.recommendations.map((gift) => (
        <div key={gift.id} className="flex justify-between gap-2">
          <div>
            <div>{gift.title}</div>
            <div className="text-sm text-muted-foreground">{gift.price}</div>
          </div>
          <button onClick={() => openExternal(gift.url)}>
            Open
          </button>
        </div>
      ))}
    </div>
  );
}
From the user’s perspective: they click the button, ChatGPT may show a system dialog “Open external site?”, and then it will open your page in a new tab or in the default browser. You’re not passing any secrets, tokens, etc.; you’re simply sending the person “from the chat to the site.”
3. window.fetch in the sandbox: this isn’t the fetch you’re used to
What a frontend developer typically expects
Usually the logic is: “Since this is a browser, I can safely call any URL that has CORS set up. Worst case I’ll get an error, but I can at least try.”
In the ChatGPT Apps ecosystem, that’s a dangerous misconception. The sandbox around the widget isn’t just a “minor nit,” it’s a fundamental security requirement: to prevent a widget from tracking the user, calling arbitrary domains, scanning the local network, and generally behaving like a mini-browser inside the browser.
Security docs also emphasize that in an Apps SDK widget, arbitrary network access is either absent or heavily limited — and that’s not a bug, it’s an intentional architectural decision.
What this looks like in practice
In a typical ChatGPT environment:
- fetch may be available, but only to a restricted list of domains (usually your app’s domain where the App runs, and possibly a couple of explicitly allowed APIs);
- requests may go through a special host proxy that filters headers and URLs;
- some methods (PUT, DELETE) or nonstandard headers may be blocked by security policies.
At the same time, you still have a convenient path: if your widget and backend live in the same application (as in the Next.js template, where both the MCP server and the UI are served by one app), requests to your own domain, addressed by an absolute URL, are usually allowed.
The main thing is not to rely on the widget being able to hit any API on the internet. All “heavy” interactions with external services (Stripe, Notion, CRM, etc.) should happen on the MCP/backend side, which ChatGPT treats as a trusted resource.
Insight
In a ChatGPT widget you should immediately forget about relative paths and stick to absolute URLs. The reason is simple: your HTML does not run on the same domain as the backend. ChatGPT reads your HTML, places it on its own host, and renders it inside an isolated iframe. Any "/api/..." or "/static/logo.png" suddenly resolves relative to the ChatGPT domain rather than your application — and everything breaks.
<base> almost doesn’t save you here. Empirically, if the widget does not have widgetCSP set, you can add <base href="https://my-app.dev/">: resources will be pulled from your domain, but scripts, due to the sandbox rules, still won’t run. But this works only in Dev Mode.
As soon as you set a proper openai/widgetCSP (and in production you’ll have to set it anyway for review), the platform resets <base>, and the game is over: resources and scripts load only from CSP-allowed domains, and via absolute links.
Recommendation: in a ChatGPT widget, everything that goes out — fetch, images, CSS, your pages for openExternal — should always be built as a full URL from the application’s base domain that you control via config/ENV, not via relative paths and <base>.
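As a minimal sketch of that recommendation (the env variable name NEXT_PUBLIC_APP_BASE_URL and the fallback domain are assumptions for illustration), the base URL can live in config and every outgoing path can be resolved against it:

```typescript
// Minimal sketch: resolve every outgoing path against a configured base URL.
// NEXT_PUBLIC_APP_BASE_URL and the fallback domain are illustrative assumptions.
const APP_BASE_URL =
  process.env.NEXT_PUBLIC_APP_BASE_URL ?? "https://giftgenius.app";

export function appUrl(path: string): string {
  // new URL() joins the path with the base, so "/api/x" never resolves
  // against the ChatGPT host the widget is actually rendered on.
  return new URL(path, APP_BASE_URL).toString();
}
```

With a helper like this, fetch(appUrl("/api/public/popular-tags")), an image src={appUrl("/static/logo.png")}, and openExternal(appUrl("/checkout")) all point at your domain regardless of where the widget’s HTML is hosted.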
4. Architecture: thin UI, thick backend
The constraints of fetch and the overall sandbox imply a broader architectural principle that matters for the entire course. We’ve said this mantra several times, but now’s the time to reinforce it: the widget is a thin UI layer. It renders what the backend has already prepared (via MCP/tools), shows reactions to user actions, and at most makes a couple of small public requests.
Everything involving auth, access to personal data, secrets, and nontrivial business logic must live on the server side. The course’s security docs emphasize: the frontend (React widget) is a “public place,” a zero-trust zone where secrets must not live.
All my research on this topic frames the goal bluntly: to “hammer the final nail into the coffin of the ‘fat client’ idea” for ChatGPT Apps. The widget is only the head; the body and the brain live in the MCP/backend.
Therefore:
- openExternal — for navigating the user to your “normal” site where you can run a conventional SPA, account area, and more;
- callTool (next module) — the main way to pass a job to the model that your backend will execute;
- fetch from the widget — a rare hero for auxiliary, safe, and preferably public requests to your own application.
5. Practice: openExternal in our GiftGenius
Let’s integrate openExternal into our training app a bit more carefully and think about UX at the same time.
A mini UX rule
If you’re sending the user out, it helps to:
- explicitly indicate where exactly they’ll land;
- avoid unexpected “jumps” without explanation (either the GPT message says “I’ll open the store’s website…,” or you label the button accordingly).
Sample heading and label:
<button onClick={() => openExternal(gift.url)}>
  Open on the store website
</button>
The user understands they’re about to leave the cozy chat and go into the real world with a cart and checkout.
A small refactor of the list component
We previously built a simple GiftListWidget. Suppose in earlier lectures you already implemented a widget that shows a list of gifts from toolOutput. Now we’ll make a slightly more polished version: add a Gift type with a url field and an openExternal button.
import { useWidgetProps } from "../hooks/use-widget-props";
import { useOpenExternal } from "../hooks/use-open-external";

type Gift = {
  id: string;
  title: string;
  priceLabel: string;
  url: string;
};

export function GiftListWidget() {
  const { toolOutput } = useWidgetProps<{ gifts: Gift[] }>();
  const openExternal = useOpenExternal();

  if (!toolOutput || toolOutput.gifts.length === 0) {
    return <p>Nothing found yet. Try adjusting your request.</p>;
  }

  return (
    <div>
      {toolOutput.gifts.map((gift) => (
        <div key={gift.id} className="flex justify-between gap-2">
          <div>
            <div>{gift.title}</div>
            <div className="text-sm text-muted-foreground">
              {gift.priceLabel}
            </div>
          </div>
          <button onClick={() => openExternal(gift.url)}>
            View
          </button>
        </div>
      ))}
    </div>
  );
}
We still don’t work directly with window.openai, but use the convenient hook — it already knows how to fall back to window.open when ChatGPT isn’t present. The Gift structure here is illustrative — in your app you’ll adapt it for your backend.
6. Practice: a careful fetch to our backend
Now let’s deal with fetch. A quick reminder: complex or sensitive operations are better done via tools/MCP. But sometimes you may want the widget to pull something light and public from your own server — for example, a list of popular gift categories.
A simple public API route in Next.js
Add the following handler to our Next.js project:
// app/api/public/popular-tags/route.ts
import { NextResponse } from "next/server";

const tags = ["For kids", "For travelers", "For gamers"];

export async function GET() {
  return NextResponse.json({ tags });
}
This route doesn’t know anything about the user, doesn’t require tokens, and doesn’t call external services — it simply returns a static array. Such code can be shipped to production and the sandbox with minimal risk.
Calling this route from the widget via fetch
Now let’s add loading these tags in a widget component. Given the sandbox restrictions, it’s most convenient to request an absolute URL: the same domain where your app runs — the one you tunnel and register in ChatGPT Dev Mode (we set this up in the Dev Mode and tunneling module).
Important: your widget’s domain will be something like https://genius.web-sandbox.oaiusercontent.com, so don’t use relative paths for data loading, only absolute URLs. Example:
import { useEffect, useState } from "react";

export function PopularTags() {
  const [tags, setTags] = useState<string[] | null>(null);
  const [error, setError] = useState<string | null>(null);

  useEffect(() => {
    let cancelled = false;

    async function loadTags() {
      try {
        const res = await fetch("https://giftgenius.app/api/public/popular-tags");
        if (!res.ok) throw new Error("Bad status");
        const data: { tags: string[] } = await res.json();
        if (!cancelled) setTags(data.tags);
      } catch (e) {
        if (!cancelled) setError("Failed to load popular categories");
      }
    }

    loadTags();
    return () => {
      cancelled = true;
    };
  }, []);

  if (error) return <p>{error}</p>;
  if (!tags) return <p>Loading popular categories…</p>;

  return (
    <div className="flex flex-wrap gap-2 text-sm">
      {tags.map((tag) => (
        <span key={tag} className="rounded border px-2 py-1">
          {tag}
        </span>
      ))}
    </div>
  );
}
It’s important that:
- we handle errors carefully and show a clear message to the user;
- we don’t assume that fetch “will definitely work” — sandbox policies can cut off access at any time if you change the domain or start making unusual requests;
- we don’t pass any tokens/secrets here; if authentication is needed — that’s the job of MCP and the Auth modules.
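Those three points can be wrapped into one small helper. This is a sketch rather than part of the template; the name fetchJson and the default timeout are assumptions:

```typescript
// Sketch: fetch JSON with a timeout and explicit error handling,
// since sandbox network access can be cut off at any time.
// The helper name and 5s default are illustrative choices.
export async function fetchJson<T>(url: string, timeoutMs = 5000): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(url, { signal: controller.signal });
    if (!res.ok) throw new Error(`Bad status: ${res.status}`);
    return (await res.json()) as T;
  } finally {
    // Always clear the timer, whether the request succeeded or failed.
    clearTimeout(timer);
  }
}
```

A component then only decides what to render in the success, loading, and error branches; the network edge cases live in one place.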
7. openExternal vs fetch vs tools (callTool): who does what
To avoid confusion, it’s helpful to keep the following “responsibility matrix” in mind:
| Scenario | What to use | Why this way |
|---|---|---|
| Open a landing page/product/account | openExternal | Explicit user navigation controlled by the host |
| Get public data from the App | fetch("https://my.com/api/...") | Lightweight JSON, your own domain, no secrets |
| Retrieve user data, DB | callTool/MCP | Needs authorization, logic, a secure backend |
| Call external APIs (Stripe…) | MCP/server | No secrets in the frontend; comply with policies |
In this module it’s important to learn to choose the tool intentionally. Move away from the mindset “a widget is a frontend, so we can do everything via fetch,” toward the architecture “a widget is a managed UI layer on top of an LLM+MCP backend.”
Insight
Server interaction in a ChatGPT App is naturally split into two levels:
- ChatGPT ↔ MCP server: the model invokes MCP tools. Each tool call launches or switches a business scenario (gift selection, creating an order, cost calculation, etc.). This is where the “heavy” logic lives — data work, external APIs, and authorization.
- Widget ↔ server: the widget makes light fetch() requests to its own backend and/or invokes the same MCP tools via callTool() already within an active scenario. These are local steps: fetch auxiliary data, update a piece of UI, refine state.
So MCP tool = launching/controlling a business process, while fetch()/callTool() from the widget are small operations within the already chosen scenario, not claiming to change the overall “story” of the dialog.
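To make the second level a bit more concrete, here is a hedged sketch of calling an MCP tool from widget code. The tool name refine_gift_list and the bridge shape are illustrative, and callTool itself is covered properly in the next module:

```typescript
// Hedged sketch: invoke an MCP tool through the host bridge.
// The bridge shape and the tool name "refine_gift_list" are assumptions.
type OpenAiBridge = {
  callTool?: (name: string, args: Record<string, unknown>) => Promise<unknown>;
};

function getBridge(): OpenAiBridge | undefined {
  // ChatGPT injects window.openai inside the sandbox; it is absent elsewhere.
  return (globalThis as unknown as { window?: { openai?: OpenAiBridge } })
    .window?.openai;
}

export async function refineGifts(budget: number): Promise<unknown | null> {
  const bridge = getBridge();
  if (!bridge?.callTool) return null; // outside ChatGPT: no host bridge
  return bridge.callTool("refine_gift_list", { budget });
}
```

Note the same pattern as useOpenExternal: detect the host capability first, degrade gracefully when it isn’t there.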
8. A small practical exercise
To cement the topic with practice, add a small feature to GiftGenius.
Suggested scenario:
- In the gift list, add a “Proceed to checkout” button that uses openExternal to open the checkout page on your dev site.
- Above the gift list, render PopularTags from the example above to show popular categories. If loading fails, provide fallback text and don’t break the entire widget.
- Pay attention to UX: in the GPT response text or in the widget UI, explain to the user that “when you click the button, I’ll open the store page in a new tab.”
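For the first step, the only non-trivial part is building the external checkout URL that the button hands to openExternal. A tiny helper can keep it consistent; the "/checkout" path and "cart" query parameter are assumptions for illustration:

```typescript
// Hypothetical helper: build the checkout URL passed to openExternal.
// The "/checkout" path and "cart" parameter are illustrative.
export function checkoutUrl(baseUrl: string, cartId: string): string {
  const url = new URL("/checkout", baseUrl);
  url.searchParams.set("cart", cartId); // encodes the id safely
  return url.toString();
}
```

The button itself can then stay one line: onClick={() => openExternal(checkoutUrl(base, cart.id))}, with a transparent label such as “Proceed to checkout on the store site.”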
This feature showcases both channels in miniature:
- openExternal for explicit navigation;
- fetch for a small public API living next to your App.
9. Common mistakes when using window.fetch and openExternal
Error #1: trying to use the widget as a full-fledged SPA client to all your APIs.
Old habits push toward “let’s just call our REST/GraphQL directly from React.” In ChatGPT Apps this leads to a head-on collision with the sandbox: some requests simply won’t go through, some will be blocked by policies, and your project’s security will be in question. Complex logic and access to user data should go through MCP/tools, not directly from the widget.
Error #2: storing secrets and tokens in widget code.
It’s tempting to “prototype quickly” and hardcode an API key into the frontend (“I’m just testing”). That’s a bad idea even for a regular SPA, and for ChatGPT Apps it’s a hard no. The widget is a public environment; secrets must live in the server config or secret managers (Vercel env, KMS, etc.).
Error #3: assuming that fetch to any domain will “just work.”
Even if something passes in Dev Mode (e.g., because the tunnel is configured unusually), it will almost certainly break in production: ChatGPT restricts outgoing requests, and arbitrary external domains are not available to the widget. Assume the widget can reliably call only its own domain and a very small whitelist of explicitly allowed resources.
Error #4: using window.open instead of openExternal.
Technically window.open may sometimes work, especially in a browser preview, creating the illusion that “everything’s fine.” But in a real ChatGPT environment, especially native clients, behavior will be unpredictable. The user may not see the navigation at all or may get a strange error. The correct path is to use openExternal (via the useOpenExternal hook), which knows how to open a link correctly in the current environment.
Error #5: not handling fetch errors and not showing a loading state.
In the sandbox, network errors are not exceptions but the norm: the tunnel may drop, the domain may change, policies may block something. If you just do await fetch(...) and then render UI assuming the data exists, you’ll get a weird half-broken interface that “sometimes works, sometimes doesn’t.” Always add try/catch, check res.ok, show “Loading…” and a polite error message.
Error #6: turning openExternal into a hidden redirect.
Sometimes there’s a temptation to send the user to an external site on any click, especially to checkout, without any context in the text. This looks odd both to the user and to Store reviewers. Good practice is to explicitly state what’s about to happen: either the GPT model says “I will open the store page…,” or the button label is transparent enough (“Proceed to checkout on the store site”).
Error #7: forgetting that the widget isn’t the only “owner” of the dialog.
If your UI tries to force a complex scenario with lots of its own links and network requests while ignoring the chat and follow-ups, you’ll get worse UX and poorer model performance. Remember the architecture: GPT decides when to show the App and how to use its results; the widget merely guides and visualizes. Navigation and network calls must be designed to fit the overall dialog rather than steal the show.