AI Prompting Techniques

Learn how to use Markdown, variables, and XML tags to structure your prompts and their outputs.

Once you have mastered the Context / Task / Format Framework, the next step is to make outputs cleaner, more reusable, and easier to plug into workflows.

When drafting your AI Prompts, there are patterns that can help you turn your basic prompts into more advanced ones:

  • using Markdown,

  • using variables,

  • tagging variables with XML.

These make your prompts readable for humans, reliable for the model, and predictable for downstream blocks.


Using Markdown

Markdown is a lightweight formatting language. It uses simple symbols like #, -, and * to add structure to text — things like headings, bold/italics, bullet lists, numbered lists, code blocks, and tables.

Why it matters in prompting:

  1. Prompts become more readable. You can break up long instructions into clear sections instead of one big block of text.

  2. Responses become more structured. You can tell the AI to output answers as Markdown so they’re consistent, readable, and easy to reuse in docs, tickets, or other tools.

Markdown Formatting Quick Guide:

# H1 Header
## H2 Header
### H3 Header

**bold text**

*italic text*

***italic and bold text***

- bullet point item
- bullet point item

1. numbered list item
2. numbered list item

[linked text](https://www.example.com)

![image description](https://www.example.com/image.jpg)
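
Code blocks and tables (both mentioned above, and used in the response templates later in this guide) look like this:

```
code goes here, wrapped in triple backticks
```

| Header | Header |
|--------|--------|
| cell   | cell   |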

For more information, check out this full Markdown cheat sheet to learn about other ways you can format text.

Using Markdown in Prompts (for readability)

When you write a longer prompt, Markdown makes it easier for you (and teammates) to scan later. Adding headings, lists, and code blocks turns your prompt into a mini instruction document rather than a wall of text.

Example:

# Context
Topic: AI Agents in the workforce
Audience: AI beginners who want practical tips

# Task  
Write a professional but friendly LinkedIn post.  

# Response Format  
- Keep it under 150 words  
- Use a conversational tone  
- Include 3 bullet points for key takeaways    

Using Markdown in AI Responses (for template formatting)

You can also tell the AI to return its output in Markdown. This enforces a repeatable structure and prevents free-form answers that are hard to work with.

Example:

# Context
Product Team Meeting Transcript

# Task
Summarize the transcript and extract all action items with the person responsible and the due date.

# Response Formatting
Present the results as a structured summary that includes:
- an easily scannable TL;DR list of items discussed
- a table of action items that correspond to the people they were assigned to

Your response should look like this:

## Meeting Summary
- 3–5 bullet points

## Decisions Made
- Bullet list

## Action Items
| Assignee | Task | Due Date |  
|----------|------|----------|  

NOTE: Notice how in this example we use Markdown in both ways: to structure the prompt and to define the response template.


Using Variables

Variables let you insert dynamic values into prompts without rewriting them each time. In MindStudio, variables are always written in double curly braces: {{varName}}.

When your workflow runs, these variables are replaced with live data.

Why Variables Matter

  • Personalization: Insert user inputs or larger pieces of data automatically.

  • Reusability: Add the same variable as context in multiple places in your AI Agent.

  • Flexibility: Variables can be used across all blocks in MindStudio.

How They Look in a Prompt

## Context
Customer Name: {{customerName}}
Meeting Notes: {{meetingNotes}}

## Task: 
Draft a follow-up email to {{customerName}} after their onboarding call.

## Format: 
Return Markdown with:
# Subject
# Email Body (3–4 sentences, friendly tone)
# Next Steps (based on meeting notes)

When the workflow runs, {{customerName}} is replaced with the actual name and {{meetingNotes}} is filled with content from a transcript or notes block.
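
For example, if {{customerName}} resolved to "Jordan" and {{meetingNotes}} contained a short onboarding recap (both values here are made up for illustration), the Context and Task sections the model actually sees would read:

## Context
Customer Name: Jordan
Meeting Notes: Walked through workspace setup; Jordan will invite two teammates by Friday.

## Task: 
Draft a follow-up email to Jordan after their onboarding call.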

Good Habits

  • Use descriptive names:

    • ❌ Bad variable names: var1 , a , xyz

    • ✅ Good variable names: renewalDate , firstName , summary

  • Keep long text blobs (like transcripts) in separate sections, not mixed into a single instruction.

Tagging Variables with XML Tags

Sometimes you’ll have multiple variables or large chunks of text. To keep things clear, make sure to wrap them in XML tags using <>.

Tagging gives you a way to label different pieces of information for the AI model and shows it where each labeled section begins and ends.

How to Use XML Tags:

<tagName>Content or {{variables}}</tagName>

  • Use opening tags: <tagName>

  • Whatever content you want to label with tags goes in the middle. This can be a {{variable}} or plain text.

  • At the end, make sure to close your tags with </tagName>. Notice the / before the tag name.

NOTE: This is standard practice for prompting across major model providers.

See Anthropic's docs to learn more.

Why Tagging Your Variables Helps

  • Separates multiple variables cleanly.

  • Makes it easy for the AI to ground its answers in the right source.

  • Helps AI models reliably extract specific values from data.

How XML Tags look in an AI Prompt

<customerName>{{customerName}}</customerName>
<meetingNotes>{{meetingNotes}}</meetingNotes>

## Task: 
Draft a follow-up email to <customerName> after their onboarding call.

## Format: 
Reply with the content of the email and nothing else.
 
<exampleOutput>
Subject: (Subject line of the email)

Body: 
(3–4 sentences, friendly tone)

Next Steps:
(4-6 bullet points based on <meetingNotes>)
</exampleOutput>

NOTE: Notice how in this example we use XML tags to label the example output in addition to labeling contextual information.


Context Engineering

Basic prompting is about clarity, and context engineering is about giving the AI the right information to work with. The more relevant and well-structured the context you provide, the more accurate and useful the AI’s answers will be.

If you don’t give the AI model the background materials it needs to complete the task, it will just make things up. On the other hand, if you give it too much information with no guidance, it will get overwhelmed.

Context engineering is how you give just the right amount of background, in the right way.

Why Context Matters:

  • Grounding: Prevents the AI from guessing by supplying the facts it should rely on.

  • Relevance: Keeps answers tied to your data, not general internet knowledge.

  • Control: Lets you shape the “memory” of the model so it stays on task.


Context Engineering Techniques

1. System Prompt

In the System Prompt Tab, you can include a prompt to guide the AI’s behavior or provide global information that you’d like your AI Agent to know about. This acts like the intern’s “job description.”

Example:

## Role
You are a Customer Success AI Assistant.  

Your role is to help Customer Success Managers (CSMs) by drafting summaries, emails, and action items that save them time and ensure accuracy.  

## Info you should always remember:  
- Prioritize clarity and professionalism in every response.  
- Always keep answers concise, focusing on key details rather than long explanations.  
- Never invent information. If something is missing from the context provided, state it clearly.  
- When presenting information, use Markdown formatting with clear headings and bullet points.  
- When referencing customers, always use the provided {{customerName}} variable.  
- Base recommendations only on the supplied transcripts, notes, or variables, not on outside assumptions.  

2. Reference Materials

You can inject documents, transcripts, notes, or snippets directly into the prompt. This ensures the AI bases its answer on your content, not what it happens to know.
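
For example, you might paste a short policy snippet straight into the prompt and point the model at it (the snippet and tag name below are made up for illustration):

<refundPolicy>
Refunds are available within 30 days of purchase. After 30 days, customers receive account credit instead.
</refundPolicy>

## Task:
Answer the customer's question using only the information in <refundPolicy>. If the answer is not covered there, say so.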

3. Instruction Hierarchy

Make it clear which rules take priority.

Example:

“Always base your answer on the supplied transcript, even if you know other information.”

4. Variables + Tags

Use {{variables}} to pass in dynamic context like customer names, transcripts, or notes. Wrap large chunks in <tags> so the AI knows where they begin and end.


Prompt Example (with Context Engineering)

<meetingTranscript>
{{meetingNotes}}
</meetingTranscript>

## Task:
Extract all action items with assignees and due dates.

Always base your answer on the <meetingTranscript>, even if you know other information.

## Format: 
Return Markdown with:
- One paragraph summary of the call
- A bullet list of action items, formatted as [Assignee]: [Task] (Due: Date)

Good Practices:

  • Keep context relevant — don’t paste in entire documents if only one section matters.

  • Segment long text with clear tags so the AI doesn’t confuse sources.

  • Be explicit: tell the AI to “only use the supplied material.”


Prompt Writing Checklist

  • Did you provide enough background info for the AI to understand the request?

  • Did you limit context to only what’s relevant?

  • Did you tag variables or long text so the AI can clearly separate them?

  • Did you tell the AI which context takes priority?

  • Did you define the task clearly, with no ambiguity about what should be done?

  • Did you specify the format for the response so it comes back structured and easy to use?

  • Did you keep instructions concise and avoid burying the key ask inside a wall of text?

  • Did you check for consistency — e.g., if you asked for Markdown output, did you state “return Markdown only”?

  • Did you handle edge cases (e.g., “if no data is found, say ‘No results available’”)?

  • Did you include constraints where needed (e.g. length, tone, audience)?
