AI Prompting Techniques

Learn to use Markdown, variables, and XML tags to turn basic prompts into advanced ones.

Once you have mastered the Context / Task / Format Framework, the next step is to make outputs cleaner, more reusable, and easier to plug into workflows.

When drafting your AI prompts, there are patterns that can help you turn your basic prompts into more advanced ones:

  • using Markdown

  • using variables

  • tagging variables with XML tags.

These make your prompts readable for humans, reliable for the model, and predictable for downstream blocks.


Using Markdown

Markdown is a lightweight formatting language. It uses simple symbols like #, -, and * to add structure to text — things like headings, bold/italics, bullet lists, numbered lists, code blocks, and tables.

Why it matters in prompting:

  1. Prompts become more readable. You can break up long instructions into clear sections instead of one big block of text.

  2. Responses become more structured. You can tell the AI to output answers as Markdown so they’re consistent, readable, and easy to reuse in docs, tickets, or other tools.

Markdown Formatting Quick Guide:

# H1 Header
## H2 Header
### H3 Header

**bold text**

*italic text*

***italic and bold text***

- bullet point item
- bullet point item

1. numbered list item
2. numbered list item

[linked text](https://www.example.com)

![image](https://www.example.com/image.jpg)

For more information, check out this full Markdown cheat sheet to learn about different ways you can format text using Markdown.

Using Markdown in Prompts (for readability)

When you write a longer prompt, Markdown makes it easier for you (and your teammates) to scan it later. Headings, lists, and code blocks turn your prompt into a mini instruction document rather than a wall of text.

Example:
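For instance, a longer prompt could be structured like this (the role and details here are illustrative):

```markdown
# Role
You are an assistant that drafts replies to customer support emails.

## Instructions
- Keep the tone friendly and professional.
- Address the customer by name.
- End with a clear next step.

## Constraints
- Maximum 150 words.
- Plain language, no jargon.
```

Each heading marks off one concern, so anyone scanning the prompt can jump straight to the part they need to change.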

Using Markdown in AI Responses (for template formatting)

You can also tell the AI to return its output in Markdown. This enforces a repeatable structure and prevents free-form answers that are hard to work with.

Example:
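A sketch of a prompt that requests Markdown output (the structure shown is just one possible template):

```markdown
## Task
Summarize the meeting notes below.

## Output Format
Return Markdown only, using exactly this structure:

### Summary
Two to three sentences.

### Action Items
- A bullet list of tasks, each with an owner.
```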

NOTE: This example uses Markdown in both ways: to structure the prompt itself and to specify the format of the response.


Using Variables

Variables let you insert dynamic values into prompts without rewriting them each time. In MindStudio, variables are always written in double curly braces: {{varName}}.

When your workflow runs, these variables are replaced with live data.

Why Variables Matter

  • Personalization: Insert user inputs or large data sets automatically.

  • Reusability: Add the same variable as context in multiple places in your AI Agent.

  • Flexibility: Variables can be used across all blocks in MindStudio.

How They Look in a Prompt
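Here is a sketch of a prompt that uses two variables (the surrounding wording is illustrative):

```markdown
Write a short follow-up email to {{customerName}}.

Base the email on these meeting notes:
{{meetingNotes}}
```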

When the workflow runs, {{customerName}} is replaced with the actual name and {{meetingNotes}} is filled with content from a transcript or notes block.

Good Habits

  • Use descriptive names:

    • ❌ Bad variable names: var1, a, xyz

    • ✅ Good variable names: renewalDate, firstName, summary

  • Keep long text blobs (like transcripts) in separate sections, not mixed into a single instruction.

Tagging Variables with XML Tags

Sometimes you’ll have multiple variables or large chunks of text. To keep things clear, wrap them in XML tags, which use angle brackets: < >.

Tagging lets you label different pieces of information for the AI model and shows it where each labeled section begins and ends.

How to Use XML Tags:

  • Use opening tags: <tagName>

  • Whatever content you want to label with tags goes in the middle. This can be a {{variable}} or plain text.

  • At the end, make sure to close your tags with </tagName>. Notice the / before the tag name.
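Put together, a tagged variable might look like this (the tag name is just an example):

```markdown
<meetingNotes>
{{meetingNotes}}
</meetingNotes>
```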

NOTE: This is standard prompting practice across major model providers.

See Anthropic’s docs to learn more.

Why Tagging Your Variables Helps

  • Separates multiple variables cleanly.

  • Makes it easy for the AI to ground its answers in the right source.

  • Helps AI models reliably extract specific values from data.

How XML Tags look in an AI Prompt
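A sketch of a prompt that tags both contextual information and an example output (the tag names and structure are illustrative):

```markdown
Summarize the meeting below for {{customerName}}.

<notes>
{{meetingNotes}}
</notes>

<exampleOutput>
### Summary
Two to three sentences.

### Action Items
- Task and owner.
</exampleOutput>
```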

NOTE: Notice how in this example we use XML tags to label the example output in addition to labeling contextual information.


Context Engineering

Basic prompting is about clarity, and context engineering is about giving the AI the right information to work with. The more relevant and well-structured the context you provide, the more accurate and useful the AI’s answers will be.

If you don’t give the AI model the background materials it needs to complete the task, it will just make things up. On the other hand, if you give it too much information with no guidance, it will get overwhelmed.

Context engineering is how you give just the right amount of background, in the right way.

Why Context Matters:

  • Grounding: Prevents the AI from guessing by supplying the facts it should rely on.

  • Relevance: Keeps answers tied to your data, not general internet knowledge.

  • Control: Lets you shape the “memory” of the model so it stays on task.


Context Engineering Techniques

1. System Prompt

In the System Prompt tab, you can include a prompt to guide the AI’s behavior or provide global information that you’d like your AI Agent to know about. This acts like the intern’s “job description.”

Example:
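A System Prompt might look something like this (the company name and rules are made up for illustration):

```markdown
You are an assistant for the Acme customer success team.
Always write in a friendly, professional tone.
Never share internal pricing details in your responses.
```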

2. Reference Materials

You can inject documents, transcripts, notes, or snippets directly into the prompt. This ensures the AI bases its answer on your content, not what it happens to know.

3. Instruction Hierarchy

Make it clear which rules take priority.

Example:

“Always base your answer on the supplied transcript, even if you know other information.”

4. Variables + Tags

Use {{variables}} to pass in dynamic context like customer names, transcripts, or notes. Wrap large chunks in <tags> so the AI knows where they begin and end.


Prompt Example (with Context Engineering)
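Combining all four techniques, a context-engineered prompt could look like this (the variable and tag names are illustrative):

```markdown
## Task
Draft a renewal reminder email for {{customerName}}, whose renewal is due on {{renewalDate}}.

## Context
Only use the supplied material. If something is not covered, say “No information available.”

<accountNotes>
{{accountNotes}}
</accountNotes>

## Priority
Always base your answer on the supplied notes, even if you know other information.

## Format
Return Markdown only: a subject line, then the email body.
```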

Good Practices:

  • Keep context relevant — don’t paste in entire documents if only one section matters.

  • Segment long text with clear tags so the AI doesn’t confuse sources.

  • Be explicit: tell the AI to “only use the supplied material.”


Prompt Writing Checklist

  • Did you provide enough background info for the AI to understand the request?

  • Did you limit context to only what’s relevant?

  • Did you tag variables or long text so the AI can clearly separate them?

  • Did you tell the AI which context takes priority?

  • Did you define the task clearly, with no ambiguity about what should be done?

  • Did you specify the format for the response so it comes back structured and easy to use?

  • Did you keep instructions concise and avoid burying the key ask inside a wall of text?

  • Did you check for consistency — e.g., if you asked for Markdown output, did you state “return Markdown only”?

  • Did you handle edge cases (e.g., “if no data is found, say ‘No results available’”)?

  • Did you include constraints where needed (e.g. length, tone, audience)?
