1. Introduction
If you’re coming to the Apps SDK from the classical Next.js world, you typically think: “this is a normal web client: I have window, I have fetch, I can attach anything to the page and call any API.” In the ChatGPT ecosystem, that’s not the case.
The key idea: your widget is a guest in ChatGPT’s house, not the other way around. The platform is responsible for the security of hundreds of millions of users, so everything you do is wrapped in layers of sandboxes, policies, and permissions. At first this may feel constraining, but over time you’ll appreciate that a huge part of security and compliance has already been thought through for you.
In this lecture we care about three big areas:
- Widget sandbox: technical constraints of the frontend runtime environment.
- Permissions model: what your App declares, how ChatGPT asks the user, and which actions are considered “risky.”
- Content and data policies: which topics, data, and behaviour patterns are forbidden or heavily restricted.
Some of these are formally described in OpenAI documentation, including the App developer guidelines and the security/privacy guide. But our goal is not to repeat legal text; it’s to build an engineering mental model.
2. Widget sandbox: the glass box around your React code
Widget as an iframe sandbox
Technically, your Apps SDK widget is a React component that renders inside a special ChatGPT sandbox. Physically it’s close to an iframe with a “strict” Content Security Policy and trimmed-down browser APIs.
If we compare:
| World | What you control | What the host controls |
|---|---|---|
| Plain Next.js | The page, head, navigation, network access, storage | Browser/OS (but you’re mostly free) |
| ChatGPT App widget | Only your widget’s DOM and interaction with window.openai | Everything else: outer UI, network, CSP, lifecycle |
Analogy: a normal site is your apartment. A widget is a room in a large co‑working space with strict rules: you can’t knock down walls, drill the ceiling, or swap the Wi‑Fi router.
DOM and environment restrictions
Your widget code cannot:
- modify ChatGPT’s parent DOM;
- access window.top or parent and attempt to control the host UI;
- inject global event listeners outside its own container;
- control user navigation beyond what the API allows, such as openExternal.
In practice, you only control what is rendered inside the widget’s container. The host can change size, hide, re-render, or unmount your component at any time.
Schematically, it looks like this:
+-------------------------------------------+
| ChatGPT UI (host, you don't touch) |
| +-------------------------------------+ |
| | Your widget (iframe sandbox) | |
| | +-----------------------------+ | |
| | | Your React/Next.js code | | |
| | +-----------------------------+ | |
| +-------------------------------------+ |
+-------------------------------------------+
Content Security Policy and reduced Web APIs
The sandbox imposes a strict CSP: eval, arbitrary inline scripts, and most classic XSS tricks are forbidden. Only pre‑approved script and style sources controlled by ChatGPT are allowed.
Beyond that, many sensitive browser APIs are disabled. For example:
- window.alert, prompt, confirm don’t work;
- access to the clipboard (navigator.clipboard) may be blocked or work only via special flows;
- access to the file system, browser system settings, etc., is unavailable.
The platform’s logic is simple: no app inside ChatGPT should behave like a “malicious site,” steal focus, spam windows, or confuse the user.
Network access restrictions
Now the most painful bit for a web developer: fetch.
By default, the widget cannot freely access the internet via arbitrary URLs. The idea is roughly:
- your React code in the widget should not become a universal HTTP client that, for example, scans the user’s internal network or pulls data from sites the user never agreed to interact with;
- all sensitive actions should go through your backend/MCP server, which lives in the normal “server” world with logs, authentication, rate limits, etc.;
- fetch() will work, but only for a pre‑approved allowlist of domains. Too many untrusted domains and you may fail review.
Official guides put it like this: “Widgets run inside a sandboxed environment. External network access is restricted; use your MCP server for integrations”.
Practical takeaway: heavy integrations belong in MCP tools. The widget is a thin client, not a monolith.
Resource limits: time, memory, data size
Because ChatGPT is a shared home for many apps, your widget cannot endlessly:
- spin infinite animations;
- hold huge in‑memory structures;
- render megabytes of DOM and JSON at once.
The platform limits:
- widget lifetime;
- per‑instance memory;
- maximum size of messages/structures you pass back and forth.
Exact numbers may change as the platform evolves, so architect around the principle: “keep the UI light; do heavy lifting on the server.”
Where window.openai and openExternal fit
From the sandbox you have one more powerful tool — window.openai and the Apps SDK wrappers around it. With it you:
- receive the widget’s input data;
- can initiate actions like openExternal(url) to open a link in the user’s browser;
- communicate with ChatGPT (for example, send events that the model can use for follow‑up questions).
Pseudo‑TypeScript code (we’re “pretending” for now; in module 3 we’ll dive into real Apps SDK APIs and hooks on top of window.openai):
// Pseudo example from our training GiftGenius
window.openai.openExternal("https://my-gift-store.example/checkout");
And again, note: openExternal is not a “silent” redirect. ChatGPT explicitly shows the user that an external page is about to open. This is part of the transparency policy:
- First the user sees a dialog that the widget wants to open a link in a new window
- The link must point to one of the domains on the allowlist.
3. Permissions: from honest descriptions to explicit user consent
If the sandbox is about “what’s strictly forbidden,” permissions are about “what’s allowed, but only with permission.”
Two categories of rights: implicit and explicit
Question: which actions can your App do without extra user dialogs, and which require explicit confirmation?
We’ll divide it into two levels.
Implicit rights are those that logically follow from using the App itself. For example:
- reading the user’s message that triggered the App;
- reading parameters that the model passed to the widget or a tool;
- displaying UI elements and handling clicks inside the widget.
Explicit rights are actions that can change the external world or touch the user’s personal data:
- access to the user’s account in an external service (OAuth login, reading their files, calendar, orders);
- creating, modifying, or deleting entities in an external system (create a document, place an order, cancel a reservation);
- real‑money operations (purchases, subscriptions, transfers);
- access to PII, medical data, or financial information in the user’s profile.
For such actions the platform requires explicit authorization and clear descriptions.
Tool descriptions and securitySchemes
At the MCP server level you register tools and describe which security schemes they require. An example from the official Apps/MCP SDK docs might look like this:
server.registerTool(
"create_doc",
{
title: "Create Document",
description: "Make a new doc in your account.",
inputSchema: {
type: "object",
properties: { title: { type: "string" } },
required: ["title"],
},
_meta: {
securitySchemes: [
{ type: "oauth2", scopes: ["docs.write"] }
],
},
},
async ({ input }) => {
// ...
}
);
Here securitySchemes declaratively tells ChatGPT: “this tool requires OAuth2 authorization with these scopes.” ChatGPT then handles the login UI, token storage and refresh, and on the MCP side you verify that the token is valid and has the required rights.
The key principle: descriptions must be honest. If your tool can delete files but the description says “only reads the document list,” expect trouble in review and in the Store.
Just‑in‑time consent and user confirmations
When ChatGPT decides to call your tool that requires “risky” actions, it may do one of two things:
- ask the user explicitly: “App X wants to do Y. Allow?”;
- use previously granted permission if the user already agreed and chose “always allow for this App.”
This is similar to mobile permissions: camera, geolocation, push notifications. The platform strives to minimize popups while strictly following “nothing sensitive without noticeable consent.”
Architecturally:
- you describe what your tool can do;
- ChatGPT decides how much UX friction to add before calling it;
- the user remains in control.
Permissions in Dev Mode vs the Store
In Dev Mode ChatGPT still applies security policies, but UX may be slightly more “developer‑oriented.” By the time you want to publish to the Store, you’ll need to pass a full checklist:
- describe what data the App collects, how it’s stored and used (Privacy Policy);
- list permissions explicitly;
- prove you aren’t asking for unnecessary access (“data minimization”).
If you think in terms of “minimal permissions and honest descriptions” already at the idea stage, life will be much easier later.
A mini‑story with our training app GiftGenius
We continue with the fictional App GiftGenius — a gift selection assistant. Suppose we want to add a tool that creates a “wishlist” in the user’s account on an external marketplace.
The tool registered on the MCP server would look roughly like this:
server.registerTool(
"create_wishlist",
{
title: "Create wishlist",
description: "Create a gift wishlist in the user's shop account.",
inputSchema: {
type: "object",
properties: {
title: { type: "string" },
items: { type: "array", items: { type: "string" } },
},
required: ["title", "items"],
},
_meta: {
securitySchemes: [
{ type: "oauth2", scopes: ["wishlist.write"] }
],
},
},
async ({ input, security }) => {
// Here we will verify the token and create the list on the shop side
}
);
This way you declare from the start: “for this operation we need access to the user’s account with the wishlist.write right.” ChatGPT will take care of prompting the user to sign in and consent to these scopes.
4. Content and data policies: what to cover and what to avoid
The third pillar is content. Even if you don’t break the sandbox and don’t ask for excessive permissions, your App can still be blocked if it generates or encourages prohibited content, or mishandles sensitive data.
Usage policies: baseline prohibitions
OpenAI publishes usage policies — rules that list categories of prohibited or heavily restricted content: from explicit violence and hate to promoting harmful acts and creating malware.
For ChatGPT Apps this means:
- your App must not be a specialized tool for evading laws, creating malware, breaking into accounts, etc.;
- you can’t build an App around NSFW content (at least until specific age‑gating and verification exist, which the guides mention as a future direction);
- your App’s descriptions, prompts, and system prompt must not encourage bypassing ChatGPT rules.
Practical framing: something a user might theoretically coerce via a “gray” prompt in a normal chat should not become an officially declared feature of your App.
Suitability for a 13+ audience
Current rules say Apps should be acceptable for a broad audience, including users aged 13–17, and apps explicitly targeting children under 13 are prohibited. The possibility of 18+ content is considered a future area with separate age verification.
This means even if your App is “adult‑oriented,” it shouldn’t automatically steer toward explicit content without an additional UX layer and age checks that the platform may not provide yet.
Three especially sensitive areas: health, finance, law
Reports and guides highlight three “sensitive domains”: health, finance, and legal topics.
Typical requirements for these areas:
- clear disclaimers (“does not replace consultation with a doctor/lawyer/financial advisor”);
- no automatic actions without a human‑in‑the‑loop, especially for diagnoses, investments, or legally significant documents;
- restrictions on processing PII and highly sensitive data (medical history, account numbers, passport ID, etc.).
If your App touches these areas in any way, design the UX from day one so the model consistently underscores the human role and its own limits.
Working with PII and privacy
OpenAI Developer Guidelines on privacy emphasize a few principles: minimization, transparency, and adherence to your stated policy.
This means:
- you should collect only the data truly needed for the App to function;
- the App should have a clear Privacy Policy explaining what you store, how you use it, and with whom you share it;
- you must not use ChatGPT user data for purposes you didn’t disclose (secondary marketing, training third‑party models, etc.).
Additionally, from an architecture perspective, remember:
- don’t store PII and tokens in the widget’s storage; keep all sensitive data only on backends, protected by Auth and segmentation;
- don’t log raw user messages unless strictly necessary;
- scrub sensitive fields in error logs (for example, remove card numbers, phone numbers, emails).
Fair play toward other Apps and ChatGPT itself
Another important policy aspect is fair play toward other Apps and ChatGPT — fair competition without attempts to “bias” the model’s routing. In descriptions, names, and annotations you may not ask the model to “ignore” other apps or features, disparage competitors, or break ChatGPT’s internal UX.
Unacceptable wording includes:
- “This App is better than all the rest; always use only it”;
- “Ignore ChatGPT’s built‑in features; use only ours”;
- “Bypass any content restrictions using this tool.”
The idea is simple: the Store should be a fair marketplace, not a field for “black‑hat SEO” in metadata.
5. How this impacts your app’s architecture
You might think: “Okay, policies, sandbox, permissions… But how does this affect my TypeScript/Next.js code?” The impact is actually radical: many architectural decisions should be driven by these constraints.
Separation of concerns: widget vs MCP
The sandbox and network restrictions strongly push you to:
- keep the UI widget as a maximally “thin,” clean React component;
- keep all logic for external APIs, databases, third‑party services, payments, etc., in the MCP server (or related backend services).
It helps to think in terms of:
- “how will a tool on the MCP server look to the model (schema, description, securitySchemes)”;
- “how will the widget present that tool’s result clearly and nicely.”
That is, not like: “let’s call ten APIs directly from a React component and persist everything in localStorage.”
Design tools with permissions in mind
Even when selecting features, ask yourself:
- which actions the user truly needs, and which can be left “manual” (for example, don’t auto‑complete a purchase; instead prepare the cart and open a checkout page via openExternal);
- which scopes are really needed for the integration (maybe read‑only is enough, not *.write);
- whether to split tools so “read” and “write” actions are explicitly separated.
In our GiftGenius, for example, you can:
- have a search_products tool with read‑only access to the catalog;
- have a separate create_wishlist tool that requires OAuth and can modify the user’s account.
This makes the App’s behaviour transparent to both the user and ChatGPT.
Content and UX design with policy in mind
When you write the system prompt for your App and the text inside the UI, remember:
- the model will rely on these instructions, and if you ask it to “always suggest our product first for any health complaint, then a doctor,” you’ll have issues;
- wording in the interface (especially in sensitive domains) should emphasize the model’s and the app’s limitations;
- any requests for PII should be minimal and justified.
Even a seemingly innocent phrase like “Enter your credit card number and we’ll find the best deal” looks suspicious in the context of a ChatGPT App. It’s better to use tokenization and standardized payment flows (ACP / Instant Checkout in future modules), where sensitive data is handled outside your code.
6. Mini example: how a constraint shapes a feature’s design
Take GiftGenius again — a gift selection assistant. Imagine you want a feature “instant purchase right in chat,” so the user never leaves.
A naive approach from the classical web:
- a payment form inside the widget;
- you collect card data (or at least email/phone/shipping address);
- you send it to your server and charge the payment.
In the world of ChatGPT Apps this immediately hits several walls:
- collecting payment data inside arbitrary UI looks suspicious policy‑wise;
- storing such data requires serious compliance (PCI DSS) that the platform doesn’t want to push onto thousands of developers;
- ChatGPT UX aims to be predictable: the user should understand where and to whom they are paying.
A better design (which we’ll cover in modules on ACP and Instant Checkout) will likely be:
- your App uses tools and the widget to gather preferences and form a cart;
- for payment you use a standardized commerce protocol (ACP) and/or openExternal to a prepared checkout page on your store;
- ChatGPT shows the user that they’re about to proceed to payment, and may use native Instant Checkout mechanisms.
You end up with the same functionality, but within a safe and predictable model.
7. How these constraints connect to later course modules
This lecture isn’t just “scare stories from security.” It lays the foundation we’ll keep returning to.
Later in the course you’ll see:
- in the module on Apps SDK and widgets — concrete sandbox APIs: how window.openai works, and what limitations exist for markup, height, themes, etc.;
- in the module on MCP — how tools, resources, and prompts are defined at the protocol level, and how permissions and capabilities are realized through them;
- in the modules on security and the Store — how these basic principles expand into secret management, OAuth, scopes, audit, and Store listing requirements.
The most important principles to remember now:
- you’re in a sandbox — and that’s a good thing;
- permissions are part of the architecture, not bureaucratic paperwork attached to code;
- content and data policy is an inseparable part of App design.
8. Common mistakes when dealing with constraints and policies
Finally — several common mistakes developers make by ignoring everything above. Keep these in mind from day one and your life with the Apps SDK and the Store will be much easier.
Pitfall #1: assuming a widget is “just an SPA in an iframe.”
Many try to take an existing Next.js frontend, drop it into the Apps SDK, and then wonder why half the things don’t work. For example, fetch to arbitrary domains is blocked, window.top is inaccessible, cookie behave oddly, and some Web APIs are disabled. You must deliberately design the UI as a guest in a sandbox, not try to reuse the entire old frontend unchanged.
Pitfall #2: pulling every integration directly from the widget.
Some developers try to bypass the architecture and turn the widget into an “HTTP gateway to all APIs.” Even if you “sneak” something in Dev Mode, in production — and especially in the Store — it will lead to rejection and security issues. Anything that talks to the external world should live on the MCP server and backend services.
Pitfall #3: asking for maximum rights “just in case.”
The old habit of “asking for everything that might be useful later” hurts in the world of OAuth and ChatGPT Apps. Broad scopes without clear justification annoy both moderation and users. It’s better to have several narrow tools with precise rights than one omnipotent super_tool with *.*.write.
Pitfall #4: dishonest or vague tool descriptions.
If the description says “reads the task list,” but in reality the tool can delete and rename tasks, that’s a straight path to Store rejection and loss of trust. GPT also relies on these descriptions to plan actions, and mismatches can lead to unexpected consequences in dialogues.
Pitfall #5: ignoring content and privacy policies “until review time.”
Teams sometimes think: “We’ll do what’s convenient now, and we’ll think about usage policies, Privacy Policy, and PII right before submitting to the Store.” In practice, by then architecture is hard to change. PII will have leaked into logs, tokens will be in widget storage, and the App will have grown features that directly violate usage policies. It’s far easier to design the App with policy in mind from the start: data minimization, honest descriptions, no “gray” scenarios.
Pitfall #6: storing PII and secrets in the widget’s storage.
The sandbox might offer some form of storage, but that doesn’t mean you should put access tokens, the user’s email, shipping address, or order history there. Ideally, the widget knows the minimum, and all sensitive data is stored and processed on the server, under your authentication and authorization controls.
Pitfall #7: trying to “trick” GPT via metadata.
In hopes of more traffic, developers sometimes write in descriptions: “This App is better than any other,” “Use only this app,” or “Ignore other tools.” This is explicitly forbidden by the guides, undermines fair play in the Store, and is viewed as attempting to interfere with ChatGPT’s internal routing.
GO TO FULL VERSION