
Instructions around UX: announcing App, opting out of App, and behavior in the dialog

ChatGPT Apps
Level 5, Lesson 1

1. Why manage UX via instructions at all

From ChatGPT’s point of view, your App is just extra tools and widgets. But for the user it’s a whole separate interface that suddenly appears in the middle of a conversation. If you don’t govern the model’s behavior, you can get two extreme scenarios.

In one case, GPT ignores the App and tries to “solve everything with words.” The user asks to pick a gift, and instead of launching GiftGenius, the model produces a long text essay with recommendations “from itself.” Sometimes that’s fine, but you didn’t build the App for it to gather dust on a shelf.

In the other case, GPT overuses the App. At the slightest prompt like “What can your service do?” it launches a widget and renders a confusing form, and the user, spooked, closes the whole thing. From a UX perspective this feels very pushy.

So the logic is simple: you need to codify the behavior explicitly, just like you fix a tool’s JSON schema or a React component’s props. The system prompt (and accompanying instructions) here is your “UX protocol.” In it you describe when and how the assistant:

  • announces launching the App;
  • deliberately does not launch the App and replies in text;
  • behaves after the widget has already shown a result;
  • respects user requests for “no apps.”

This isn’t about marketing or tone of voice. It actually affects how often your App will be invoked and how comfortable the user will feel.

Next, step by step, we’ll cover how the assistant should announce launching the App, in which cases it’s better to deliberately not suggest a widget, how to behave after the app has done its work, how to respect explicit user requests about dialog format, and how to neatly encode all these rules in the system prompt.

2. Announcing the App: how the model should “warn” about a widget

When ChatGPT decides to use an App, the interface changes: a widget card appears in the chat, sometimes full-screen, with buttons and other UI elements. If the assistant simply shows a widget without explanation, the user may have no idea what happened or where that block came from.

Therefore, good practice is to first explain in text what’s about to happen, and only then launch the App. It’s similar to a browser asking “Open a new window?” or a mobile app warning “We’re about to ask for camera permission.”

Announcement styles

You can roughly distinguish three announcement styles.

First — a soft offer. The assistant says something like: “I can open the GiftGenius app to pick gifts based on your parameters. Open it?” and waits for a yes/no. This works well when the user is just getting acquainted with the service or may be sensitive to an interface change.

Second — a confident recommendation. If your App is the product’s primary interface, you can say: “I’ll launch the GiftGenius app now and show several gift options as cards.” The assistant can still accommodate refusal, but by default acts more decisively.

Third — a neutral notification. Here the assistant simply states: “Launching the GiftGenius app to pick gifts…,” without lengthy explanations. This style is appropriate when the user has already seen your App many times and expects it to appear.

Importantly, all these variants can and should be baked into the system prompt. The model won’t invent UX phrasing from scratch if you give it the skeleton in advance.
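As a sketch, the three styles can also live next to the prompt as reusable templates. Everything below (the `AnnouncementStyle` type, the `announcementTemplates` record, and the `pickAnnouncement` helper) is our own illustration, not part of any SDK:

```typescript
// Illustrative only: the three announcement styles as reusable templates.
type AnnouncementStyle = "soft" | "confident" | "neutral";

const announcementTemplates: Record<AnnouncementStyle, string> = {
  soft: "I can open the GiftGenius app to pick gifts based on your parameters. Open it?",
  confident: "I'll launch the GiftGenius app now and show several gift options as cards.",
  neutral: "Launching the GiftGenius app to pick gifts…",
};

// One possible policy: soften the announcement for first-time users,
// get terser as the user sees the App more often.
function pickAnnouncement(timesSeen: number): string {
  if (timesSeen === 0) return announcementTemplates.soft;
  if (timesSeen < 3) return announcementTemplates.confident;
  return announcementTemplates.neutral;
}
```

In practice you would feed these templates to the model through the system prompt rather than select them in code; the helper just makes the policy explicit.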

Mini code example: announcement section in the system prompt

Imagine your Next.js template has a file appDefinition.ts where you set the system prompt for the App:

// app/appDefinition.ts
export const systemPrompt = `
# Role
You are the ChatGPT App GiftGenius; you help pick gifts.

# Dialog and UX — App announcement
If you decide to launch the GiftGenius widget,
first explain in one or two sentences
that an app with a curated list of gifts will open
and how it will help the user.
`;

This isn’t a full contract yet, but even this small insert greatly increases behavioral predictability.

When an announcement is especially important

The more complex your UI, the more important it is to announce its launch in advance. If a widget just shows three gift cards, that’s a relatively soft context switch. But if you open a multi-step wizard with filters, budget, categories, etc., the user needs to understand why the dialog suddenly turned into a “small web app inside the chat.”

Official UX guidelines also emphasize that the assistant should explicitly bridge text and UI, not silently tack a widget onto the answer.

3. When to deliberately not offer the App

The most common mistake early in App development is the classic effect: “when you have a hammer, everything looks like a nail.” Since we have a shiny GiftGenius, the model tries to drag it into every dialog. The user asks, “What does your app do?”, and ChatGPT is already like, “Launching GiftGenius…,” even though the person just wanted a two-line explanation.

To avoid this, the system prompt needs to describe situations where the App is better not suggested. Below are a few typical scenarios.

  • Introductory questions. If someone writes something like “What does GiftGenius do?” or “How do you work?”, the instructions should require giving a concise text explanation first, without launching a UI. A widget only distracts here.
  • Overly general or vague requests. The user writes “Tell me about gifts for New Year’s” — that’s more of an educational question than a specific selection. The assistant can briefly explain general principles, ask clarifying questions, and only when concrete parameters appear (budget, recipient, category) suggest the App.
  • Requests outside the App domain. If someone says: “Help me write a resume,” and your App is focused on gifts, the right behavior is to honestly answer as regular ChatGPT and launch nothing. Sometimes you can gently mention what the App is for, but you shouldn’t push it when it’s clearly irrelevant.
  • An explicit refusal of UI. If the user writes: “Please don’t open any apps, just explain in text,” the model must obey, even if it sees a perfect scenario for the App.

Table: request type and assistant behavior

| Request scenario | What the assistant should do |
| --- | --- |
| “What can your service do?” | Explain briefly in words, without launching the App |
| “Pick a gift for a colleague up to $50” | Suggest launching the App and explain what it will do |
| “Tell me popular New Year’s gifts” | Discuss in text; ask clarifying questions if needed |
| “Help with a resume” | Answer as regular ChatGPT; do not suggest the App |
| “No apps, please” | Honor the request; don’t launch a widget |
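The table can be read as a tiny routing function. The keyword-based sketch below is purely illustrative (the `routeRequest` name, the `Behavior` labels, and the trigger phrases are all our assumptions); in a real App, the model itself makes this decision from the system prompt, not hard-coded rules:

```typescript
// Hypothetical sketch: mirror the table's routing logic in code.
type Behavior =
  | "explain-in-text"
  | "suggest-app"
  | "discuss-and-clarify"
  | "answer-as-chatgpt"
  | "text-only";

function routeRequest(message: string): Behavior {
  const m = message.toLowerCase();
  // An explicit "no apps" request wins over everything else.
  if (m.includes("no apps")) return "text-only";
  if (m.includes("what can") || m.includes("what does")) return "explain-in-text";
  if (m.includes("pick a gift")) return "suggest-app";
  if (m.includes("resume")) return "answer-as-chatgpt";
  return "discuss-and-clarify";
}
```

Note the ordering: the hard constraint (“no apps”) is checked first, exactly as the system prompt should prioritize it.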

Extending the system prompt with “don’t launch the App” rules

Let’s continue the same systemPrompt by adding a block about when not to launch the App:

export const systemPrompt = `
# Role
You are the ChatGPT App GiftGenius; you help pick gifts.

# When NOT to launch the widget
If the user asks only what the service can do,
or asks a general, theoretical question about gifts,
first answer in text and do not launch the app.

If the request is unrelated to gift selection,
answer as regular ChatGPT and do not suggest GiftGenius.

If the user explicitly asks not to use apps,
you must respect that and work only in chat.
`;

This text turns into concrete model decisions in exactly the edge cases where the model might otherwise drift toward the UI. We’ve fixed when the App isn’t needed. Now it’s important to describe the flip side: what the assistant should do once the widget has run and the user sees the result.

4. Behavior after using the App: follow-up and closing the scenario

In the module about the widget you already saw how follow-up messages help continue the dialog after the UI has run. The widget shows cards, and underneath the assistant writes something like: “I found these gift options for a colleague with a budget up to $50. Want me to show cheaper ones or change the category?” and offers buttons with popular actions.

Now our task is to codify this behavior in instructions rather than relying on the model’s “intuition.”

What the assistant should do after the widget

In the ideal scenario, several things happen.

  • First, the assistant briefly summarizes the result of the App in words. Even if the widget showed ten cards, it’s helpful to write: “I picked 4 gift options for a colleague with a budget up to $50. They include a custom-printed mug, a desktop plant, a set of good coffee, and a stylish notebook.”
  • Then it suggests next steps. This is where pre-thought follow-up phrases help: “Want to see cheaper options?”, “Need to narrow down by interests?”, “Show only those available in your region?” You can use these phrases in sendFollowUpMessage in the widget and also recommend them to the model in the system prompt.
  • And finally, if the user clearly closes the scenario (“Thanks, that’s enough”), the assistant gently “closes” the topic: acknowledges the task is done and offers help with something else.

Flow diagram: question → widget → follow-up

To visualize it, you can think of the assistant’s behavior as a simple state machine.

flowchart TD
    U[User provides a task] --> G[GPT decides: launch the App?]
    G -->|Yes| A[Announces launching the App]
    A --> W[GiftGenius widget selects options]
    W --> S[Assistant summarizes the result]
    S --> F[Assistant offers follow-up options]
    F -->|User chooses an action| G
    G -->|No, don't launch the App| T[Text-only answer without UI]
    F -->|"User says 'Thanks'"| E[Assistant closes the scenario and offers additional help]

We effectively describe this flow in words in the system prompt.
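The same flow can be sketched as a small state machine in TypeScript. The state and event names below are ours, chosen to mirror the diagram; this is a mental model, not something you ship:

```typescript
// Dialog flow from the diagram as a state machine (names are illustrative).
type State =
  | "deciding"    // GPT decides: launch the App?
  | "announcing"  // assistant announces the launch
  | "widget"      // GiftGenius widget selects options
  | "summary"     // assistant summarizes the result
  | "followUp"    // assistant offers next steps
  | "textAnswer"  // text-only answer without UI
  | "closed";     // scenario closed, offer other help

type Event =
  | "launchApp" | "skipApp" | "announced"
  | "widgetDone" | "summarized" | "userAction" | "userThanks";

const transitions: Record<State, Partial<Record<Event, State>>> = {
  deciding:   { launchApp: "announcing", skipApp: "textAnswer" },
  announcing: { announced: "widget" },
  widget:     { widgetDone: "summary" },
  summary:    { summarized: "followUp" },
  followUp:   { userAction: "deciding", userThanks: "closed" },
  textAnswer: {},
  closed:     {},
};

// Unknown events leave the state unchanged.
function next(state: State, event: Event): State {
  return transitions[state][event] ?? state;
}
```

The loop from `followUp` back to `deciding` is the key detail: every follow-up action restarts the same "launch or not" decision rather than launching the widget unconditionally.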

Code example: follow-up from the widget

On the UI side, you already know how to send follow-up messages. For completeness, here’s a simple component example that asks the model to “expand the budget” after a button click:

// components/ExpandBudgetButton.tsx
// Note: in the Apps SDK, sendFollowUpMessage takes an object with a `prompt` field.
export function ExpandBudgetButton() {
    const onClick = () => {
        window.openai?.sendFollowUpMessage({
            prompt: "Show options with a slightly higher budget",
        });
    };

    return <button onClick={onClick}>Show pricier options</button>;
}

Now we’ll add text to the system prompt that will hint to the model how to handle such follow-up messages.

// a separate block appended to the systemPrompt
const followUps = `
# Behavior after launching the app
After the widget shows a list of gifts,
briefly describe the result in text.

Then suggest 1–3 clear next steps
(e.g., show cheaper ones, change the budget, switch category).
If the widget sends a follow-up message,
use it as a hint for the next step.
`;

Technically it’s just a string. From a UX standpoint — it’s the basis for a predictable scenario.

5. Respecting user intent

Everything we discussed above reflects your product expectations for App behavior. UX instructions won’t work well if the model can’t “listen” to the user. Even a perfectly designed App must yield if a person explicitly asks not to change the interaction format.

There are several characteristic situations.

  • If the user directly says they don’t want apps launched (“No UI, just explain what to buy”), the assistant should treat this as a hard constraint and not try to bypass it. You can politely say: “Okay, I’ll answer in text only,” and then actually keep that promise.
  • If the user fears something will launch automatically, it helps to give them a sense of control. For example: “I can open an app to pick gifts, but if you prefer, we can discuss options right here in chat. What’s more convenient for you?” Here you explicitly offer a choice.
  • If the user writes “I’m on my phone, don’t launch complex forms” — that’s also part of the context. The assistant should accept it and, for example, stick to a short list of ideas and clarifying questions.
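An explicit "text-only" request can also be made machine-checkable. The sketch below is our own illustration: the phrase list and helper names are hypothetical, and the widget-side persistence assumes the Apps SDK's `window.openai.setWidgetState`, guarded so the code is harmless elsewhere:

```typescript
// Hypothetical helper: detect an explicit "no apps" request so later turns
// can treat it as a hard constraint. The phrase list is illustrative, not exhaustive.
const TEXT_ONLY_PHRASES = [
  "no apps",
  "no ui",
  "just explain in text",
  "don't open any apps",
];

function detectTextOnlyIntent(message: string): boolean {
  const m = message.toLowerCase();
  return TEXT_ONLY_PHRASES.some((phrase) => m.includes(phrase));
}

// On the widget side, the flag could be persisted in widget state so the
// preference survives across turns (assumes an Apps SDK widget environment).
function rememberTextOnly(message: string): void {
  if (detectTextOnlyIntent(message)) {
    (globalThis as any).openai?.setWidgetState?.({ textOnly: true });
  }
}
```

The real enforcement still belongs in the system prompt; the flag just gives the widget and the model a shared, persistent signal.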

Bake respect into the contract

You can compactly reflect all of this in the system prompt:

export const respectBlock = `
# Priority of user intent
Always honor explicit user requests
about the format of interaction.

If they ask not to launch apps or widgets,
do not suggest or launch GiftGenius,
even if it would help solve the task.
Instead, help in text.
`;

This way you clearly fix “who’s in charge” in the dialog. Spoiler: it’s not your pride in a beautiful UI, but the human on the other side of the screen.

6. How to format UX instructions inside the system prompt and in the documentation

We’ve already drafted quite a few behavior rules — from announcing the App to follow-up messages and respecting the dialog format. Now it’s important not only what we tell the model, but also how it’s organized in the system prompt and docs.

In a real App, the system prompt grows quickly. If you write it as continuous prose, nobody will be able to find anything a week later. So treat it like a technical spec or README: structure it.

A good practice is to break the prompt into several logical sections with headings. For example, “Role and responsibilities,” “When to use the App,” “When not to use the App,” “Dialog and UX,” “Security and constraints.” Inside each section, write simple, unambiguous sentences.

Even better — put the system prompt in a separate file next to the code instead of stuffing it into a string literal in the middle of a component. That makes it easier to review, diff, and discuss with product or legal.

Example of organizing the system prompt in code

One option is to store parts of the prompt in separate strings and assemble them into a whole:

// app/prompt/role.ts
export const roleSection = `
# Role
You are the ChatGPT App GiftGenius.
You help the user pick gifts based on their task and budget.
`;

// app/prompt/ux.ts
export const uxSection = `
# Dialog and UX
Before launching the GiftGenius widget,
briefly explain that an app with gift cards is about to open.
Do not launch the app for general or theoretical questions
unless the user explicitly asks for a selection.
After the widget runs, summarize the result in text
and suggest 1–3 next steps.
`;

// app/appDefinition.ts
import { roleSection } from "./prompt/role";
import { uxSection } from "./prompt/ux";

export const systemPrompt = `
${roleSection}
${uxSection}
`;

This decomposition helps you think of instructions as separate modules: UX, safety, tool usage, etc. It’s especially useful when you add new capabilities and need to align behavior with multiple teams.
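One small refinement of the assembly step: joining sections through a helper keeps blank lines predictable, instead of relying on how each section string happens to begin and end. The `composePrompt` name is ours; this is just a sketch of the idea:

```typescript
// Assemble prompt sections with exactly one blank line between them,
// regardless of leading/trailing whitespace in each section string.
function composePrompt(...sections: string[]): string {
  return sections.map((s) => s.trim()).join("\n\n");
}

const roleSection = `
# Role
You are the ChatGPT App GiftGenius.
`;

const uxSection = `
# Dialog and UX
Announce the widget before launching it.
`;

const systemPrompt = composePrompt(roleSection, uxSection);
```

Compared with a raw template literal, this makes a diff of the assembled prompt stable when someone reorders or reindents the section files.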

Additionally, it makes sense to synchronize the App documentation (internal README, Confluence, Notion) with these sections. There you can explain in human terms why you announce the App the way you do and why you don’t launch it for exploratory requests. Separately fix what the follow-up lines should be. Then new team members won’t try to “fix” the prompt without understanding what you were doing.

7. Practice: rewrite the UX part for our GiftGenius

Let’s put everything together into a more or less cohesive system prompt example. Suppose we had a very modest system prompt:

export const systemPrompt = `
You are the GiftGenius app.
Pick gifts for the user.
`;

This says nothing about when to launch the App, how to announce it, or what to do after the widget. Let’s add UX instructions step by step.

First, clearly state scope and working format:

const role = `
# Role
You are the ChatGPT App GiftGenius.
Your task is to help the user select 3–7 relevant gifts
for a specified budget, recipient, and occasion.
You can use the GiftGenius widget for visual selection.
`;

Then describe how to announce the launch:

const announce = `
# App announcement
If you believe the GiftGenius widget will help,
first explain in one or two sentences
that an app with gift cards will open
and that the user will be able to browse and filter them.
Only after that, launch the app.
`;

Add rules for when not to launch the App:

const noApp = `
# When not to use the app
If the user is only asking what the service can do
or wants general theoretical information about gifts,
answer in text and do not launch GiftGenius.

If the request isn’t about gifts (e.g., a resume or code),
answer as regular ChatGPT and do not suggest the app.

If the user asks not to use apps,
treat that as a hard constraint.
`;

And finish with behavior after using the widget:

const afterWidget = `
# Behavior after the widget
After the widget shows gift options,
briefly describe the result in your own words.
Suggest 1–3 next steps
(e.g., change the budget, filter by interests,
show only cheaper options).

If the widget sent a follow-up message,
use it as the primary signal for the next step.
`;

The final system prompt might look like this:

export const systemPrompt = `
${role}
${announce}
${noApp}
${afterWidget}
`;

This already resembles a behavior specification rather than a “wish to the universe.” In the next modules you’ll augment this contract with instructions about safety, hallucinations, commerce, and other joys of grown-up App life, but the UX part is already a foundation.

8. Common mistakes when setting up UX instructions

Mistake #1: “The App is always better than text.”
Sometimes developers are so proud of their widget that they require the model to invoke it at every opportunity. As a result, the user gets the App even where they just wanted to ask “what is this, anyway?” The model becomes pushy, and people start ignoring the app. The right approach is to explicitly codify scenarios where the App isn’t needed and respect those cases.

Mistake #2: no explicit announcement before launching the App.
If the assistant silently launches a widget, the user doesn’t understand where the UI block came from or what to do with it. OpenAI’s guidelines and practical experience show: one or two phrases like “I’m about to open an app that does X” dramatically improve UX and reduce confusion.

Mistake #3: overly aggressive repeated suggestions of the App.
Sometimes after every answer the App offers to launch the widget again: “Want to open the app again? How about now? And now?” That quickly turns into spam. It’s better to codify in the instructions that after the first use of the App, you should watch the context: suggest it again only if the user clearly changes task parameters or asks to “show more.”

Mistake #4: ignoring an explicit refusal of apps.
Phrases like “no apps, please” or “it’s inconvenient to use forms from my phone” should be treated as hard constraints. If the model keeps pushing the App, the user loses trust in both the assistant and your product. This is easy to fix in the system prompt with two or three sentences, but many forget to do it.

Mistake #5: no summary and follow-up messages after the widget.
Sometimes the widget faithfully shows options, and the assistant goes silent. The user sees the UI but doesn’t know what to do next. No text, no question, no buttons with popular actions. This scenario feels unfinished and breaks dialog coherence. Always specify that a brief text summary and 1–3 clear next steps should follow the widget.

Mistake #6: mixing product UX and “general ChatGPT style” in one paragraph.
Sometimes the system prompt turns into a long artistic text: “Be friendly, use emoji, joke sometimes if appropriate. And yeah, maybe launch the App once in a while.” It’s very hard to spot real UX rules in such text. It’s better to highlight separate sections with clear headings: “Role,” “Dialog and UX,” “When to use the App,” “When not to use the App.” This helps both the model and the people who will work with this prompt after you.
