This guide explains key evaluation criteria such as price, latency, quality, context window, and response size—along with how to use the Profiler tool to compare models side by side.
Choosing the right AI model in MindStudio is essential to balancing cost, performance, and quality. This guide walks through the key considerations and demonstrates how to use the Profiler tool to compare models directly.
When selecting an AI model, consider the following factors:
Each model has a different cost per token for input (prompt) and output (response).
Token cost is measured per million tokens (MTOK).
Tokens roughly equate to words (1 token ≈ 0.75 words).
Cheaper models are suitable for automations and utility tasks. More expensive models often yield better reasoning and generation quality, ideal for final outputs.
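As a back-of-the-envelope sketch, per-run cost can be estimated from word counts and per-MTOK prices using the ~0.75 words-per-token rule above. The prices in this example are illustrative placeholders, not actual MindStudio rates:

```python
# Rough cost estimate for a model priced per million tokens (MTOK).
# The $0.15 / $0.60 prices below are illustrative, not real rates.

def estimate_cost(prompt_words, response_words,
                  input_price_per_mtok, output_price_per_mtok):
    """Estimate run cost in dollars using 1 token ~= 0.75 words."""
    prompt_tokens = prompt_words / 0.75
    response_tokens = response_words / 0.75
    return (prompt_tokens * input_price_per_mtok
            + response_tokens * output_price_per_mtok) / 1_000_000

# e.g. a 750-word prompt and a 1,500-word response
cost = estimate_cost(750, 1500, 0.15, 0.60)
print(f"${cost:.6f}")
```

Even at these small per-run numbers, the difference compounds quickly across thousands of automated runs, which is why cheaper models pay off for utility tasks.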
Latency refers to how long the model takes to generate a response.
Lower-latency models are preferable for interactive or real-time use cases.
Evaluate the coherence, tone, and style of responses.
Some models produce more creative outputs, while others are better for concise summaries or factual tasks.
Quality is best assessed by comparing outputs in the Profiler.
Determines how much information the model can ingest at once.
Ranges from 4,000 tokens to over 1,000,000 tokens depending on the model.
Larger windows are useful for document summarization, legal analysis, or full-site scraping.
Examples:
GPT-4o Mini: 128K tokens
Claude 3.5 Haiku: 200K tokens
Gemini 2.0 Flash: 1M tokens
Controls how long the model’s output can be.
Some models are capped at 4,000 tokens while others can produce 8,000–16,000 tokens or more.
Useful when generating long-form articles, reports, or stories.
MindStudio’s Profiler tool lets you test models side by side:
Open the Model Settings tab.
Click the Profiler button in the top-right corner.
Select two or more models for comparison.
Standardize settings like temperature and max tokens.
Example Comparison:
Claude 3.5 Haiku: More expensive, shorter output, faster start.
GPT-4o Mini: Slightly cheaper, longer and more detailed output.
Gemini 2.0 Flash: Fastest response, low cost, huge context window.
You can open any Generate Text block inside your AI agent and run its prompt through the Profiler to preview output differences across models without altering your workflow.
To select the best model:
Use cheaper models for fast, repetitive tasks.
Choose more capable models for final outputs, reasoning-heavy, or creative tasks.
Evaluate models across:
Cost per token
Choosing the right model ensures your AI agents are both effective and efficient—tailored precisely to your needs.
Input your prompt (e.g., “Write a long-form blog post about space”).
Observe:
Start and finish times
Output length and style
Token usage and cost
Latency
Quality of response
Context capacity
Output size
Use the Profiler tool to directly test and compare models in real time.
Write effective system and task prompts, use templating, control tone, apply markdown formatting, and create structured outputs like JSON.
Writing clear, effective prompts is a foundational skill when building workflows in MindStudio. Whether you're using AI to generate text, images, audio, or video, how you instruct the model makes a significant impact on the result. This guide walks through prompt types, strategies, formatting, and structure.
System prompts define the overall role and expectations of the AI agent. They're usually written once per workflow and serve as a blueprint for how the AI should behave.
A good system prompt includes:
Role of the assistant (e.g., "You are a blog post generator.")
General responsibilities (e.g., "Research, write, and optimize content.")
Formatting guidelines (e.g., "Use markdown.")
Tone/style preferences (e.g., "Professional and concise.")
Use the "Generate Prompt" feature in MindStudio to create a structured draft, and edit it to suit your workflow.
Tip: Use /* */ to add comments in the system prompt that the AI will ignore.
Task prompts are written inside individual blocks like Generate Text, Generate Image, etc. These should be specific to the action that block is performing.
For example:
Add structure and context with templating or example output to improve quality.
Use custom tags like <example_output> to show the AI what the desired output looks like.
Example:
Embed variables using {{ }}:
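A minimal task prompt combining a variable with an example-output tag might look like the following sketch (the variable name `topic` is illustrative):

```
Write a blog post about the following topic: {{topic}}

<example_output>
# Catchy Title
A short introduction, followed by two or three sections.
</example_output>
```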
Markdown lets you structure AI-generated text for easy display in apps or websites.
Common markdown syntax:
# H1 header
## H2 header
**bold**, *italic*
Markdown is especially useful in display blocks or when instructing the AI to output structured content.
Add tone instructions to shape the AI’s voice:
Professional: formal, structured
Casual: friendly, conversational
Spartan: brief, to-the-point
You can mix tones too:
Compare results by previewing the AI's response with and without tone instructions to see the impact.
For advanced workflows, use structured outputs to extract usable data from AI responses. Switch the output schema to JSON.
Example JSON template:
Now the output can be parsed and reused in other blocks by referencing specific fields.
Good prompts lead to better AI behavior. Keep these best practices in mind:
Use system prompts for agent-wide behavior.
Use task prompts for specific actions.
Include example outputs and formatting instructions.
Use variables to make prompts dynamic.
Mastering prompt writing will significantly improve the precision and performance of your AI agents in MindStudio.
Learn the essential techniques for testing and debugging AI agents in MindStudio.
This guide introduces key tools and practices for testing and debugging your AI agents as you build them in MindStudio. You’ll also learn how to identify and fix common errors like variable mismatches, high temperature settings, and missing prompts.
The first step when building any agent is to check the Errors tab. This will surface:
Misconfigured blocks
Referenced variables that don’t exist
Missing required fields
If the tab shows no errors or warnings, proceed to running your agent using the debugger.
The Debugger tracks every step in your agent’s run:
Click Preview, then choose Run in Debugger or open a Draft Agent.
If your agent includes user input, it will prompt you for a value (e.g., topic = “dogs”).
The debugger will log:
Each block’s execution
This lets you verify whether the variables are passed correctly and prompts are resolving as expected.
If an agent runs but doesn’t include user input (e.g., the AI says, “Let me know what you want to write about”), you likely forgot to include the variable in your prompt.
Fix:
Edit the Generate Text block.
Use your variable with opening/closing tags:
Re-run the agent and confirm the input is correctly inserted.
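For example, a prompt that wraps the user's input in custom tags might look like this (the variable name `topic` is illustrative):

```
Write a blog post about the following topic:

<topic>{{ topic }}</topic>
```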
Another common error is referencing a variable that hasn’t been defined—usually due to a typo.
Fix:
Check the Errors tab.
The error will indicate something like: Variable 'top' is referenced but does not exist.
Click the error to highlight the block.
Correct the variable name to topic.
You can test the fix in the debugger using a test value set in the user input configuration.
If your AI outputs gibberish or chaotic text, it's likely caused by a temperature setting that is too high.
Fix:
Go to Model Settings or directly edit the temperature in the Generate Text block.
Lower the temperature to a mid-range value (recommended default).
Rerun your test.
MindStudio displays a warning when a temperature setting is too high and may lead to unstable outputs.
If the prompt field in a Generate Text block is empty, the Errors tab will show:
Fix:
Click the error to highlight the block.
Fill in a prompt like:
Use test values for user inputs when debugging without launching a draft.
Use the debugger expansion panel to trace logic, outputs, and costs.
Regularly check for spelling errors and missing prompt content.
Always validate changes with a test run.
If you run into issues:
Click Help and Support in the sidebar.
Access:
Support forum
Video tutorials
The MindStudio community is active and helpful—many common issues have already been discussed and resolved.
Testing and debugging are essential steps in building reliable AI agents. By mastering these basics—especially using the Errors tab and debugger—you’ll streamline your development process and catch problems early.
AI agents can support partnerships, sales, and business development teams
We show how real AI agents can support partnerships, sales, and business development teams. Led by our Director of Partnerships, Dannielle Sakher, and CEO, Dmitry Shapiro, this practical session explains how to use AI agents that scale your GTM motion, with no costs or setup.
✅ Personalize outreach ✅ Prep for meetings ✅ Surface red flags in deals …and more.
Constraints (e.g., "Today’s date is June 1, 2025.")
- Bullet list
[link text](https://example.com)
Apply markdown for clean, structured results.
Experiment with tone and JSON outputs for flexibility.
Variable values at each step
Inputs and outputs
Total runtime and cost
Documentation
Quick help tabs
Learn how to scrape web content and use it dynamically inside your AI workflows
This guide walks through the process of scraping webpage content in MindStudio and using that content in a custom AI agent. The example agent extracts article content from a URL and turns it into a LinkedIn post.
We’ll build a URL to LinkedIn Post agent that:
Collects a URL from the user.
Scrapes the content from that page.
Uses AI to generate a LinkedIn post based on the page content.
Add a User Input block to your workflow.
Choose the Short Text input type.
Name the variable: url
Set the label: Enter the URL you'd like to write a LinkedIn post about
Add a Scrape URL block.
In the URL field, use the variable: {{ url }}
Set the output variable name: scraped_content
The block will now extract and store webpage content into the scraped_content variable.
Add a Generate Text block.
Write your prompt, including the scraped content:
Choose an appropriate model (e.g., Claude 3.5 Haiku).
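The prompt might look like the following sketch, using the variable names defined in the steps above (the wording and word limit are illustrative):

```
Write an engaging LinkedIn post based on the article below.
Keep it under 200 words and end with a question to drive comments.

<article>
{{ scraped_content }}
</article>
```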
Click Preview and open the draft agent.
Try entering an invalid value (e.g., plain text that isn't a URL) to confirm validation works.
Enter a valid URL or use the test value.
The AI will:
Use the Scrape URL block to pull live content from any webpage.
Always validate user input when collecting URLs.
Store scraped data in a clearly named variable for easy reuse.
Keep the output format as “Text only” for general analysis or “JSON” for structured use cases.
You can further extend this workflow by adding post-processing steps or integration blocks to share or save the generated content.
Every industry needs AI efficiency. EVERY. SINGLE. ONE.
🚀 Every company is DESPERATE to leverage AI, but they don't know HOW.
That's where YOU come in. 👈
Imagine walking into a business and saying: "I can make your team 3x more productive with AI agents"
Here's what I'm teaching in the Monetizing AI Agents Masterclass:
✅ How to approach companies ✅ Packaging your AI services ✅ Pricing your expertise ✅ Building an AI consulting business ✅ Landing high-paying AI roles (if that's what you want)
No technical background required. Just: • Ability to listen • Ability to observe how customers operate • Ability to translate needs into AI solutions
This isn't the future. This is NOW.
Every industry needs AI efficiency. EVERY. SINGLE. ONE.
Learn how to build powerful AI agents
Learn how to build powerful AI agents using MindStudio in this comprehensive tutorial. From basic concepts to advanced techniques, this session covers everything you need to get started with building your own AI applications.
Whether you're completely new to AI agent building or looking to level up your skills, this tutorial provides practical examples and best practices you can immediately apply to create powerful AI applications.
Go from zero to hero in an hour 🚀 💪 🦄
No technical skills required -- you will learn everything you need in the class.
Deep Topic Research: Do hundreds of hours of research and analysis with the click of a button!
Deep People / Entity Research: AI-powered OSINT -- uncover deep insights that can help you understand individuals and organizations.
Get Contact Info: Identify key people mentioned in articles, documents, and web pages, and get their contact info (email, phone, etc).
Fact Checking: Identify inaccuracies, and get the facts!
And over a dozen other AI Agents that can help you get things done and get insights that you've never been able to have before. No mere mortal will be able to compete with you!!!
🦄 How to easily Customize AI Agents for your specific purposes. 👉 How to easily Build New AI Agents in minutes -- no coding required. 👉 How you can build AI Agents for other people / orgs and make $$$
Learn the fundamentals of MindStudio and AI agents in this introductory video
Welcome to the first lesson in the MindStudio Fundamentals course! This video lays the foundation for everything you’ll be learning in upcoming modules. Whether you're new to AI or just new to MindStudio, this guide will help you understand the core concepts you need to get started building powerful AI agents.
MindStudio is an integrated platform for building and deploying AI agents. It provides:
Learn to build AI Agents
Anyone can build AI Agents—no experience required. Whether it's your first time or you're just looking for a quick refresher, this video will walk you through the basics of building your first AI Agent with MindStudio.
In just a few minutes, you’ll learn how to launch, customize, and deploy your own intelligent tools—no coding needed.
```
Write a long-form blog post about the following topic: {{topic}}

<example_output>
# Title of the Post

Opening paragraph with a compelling hook.

## Section Header

Several paragraphs of information.

- Key point one
- Key point two

## Conclusion

Summarize the article.
</example_output>
```

```
Write a blog post about: {{topic}}

Use a Spartan, professional tone.
```

```json
{
  "game": "Chess",
  "book": "1984",
  "dog": "Beagle"
}
```

```
Write a blog post about the following topic:

<topic>{{ topic }}</topic>
```

Message in generate text block cannot be empty

```
Write a blog post about the following topic:

<topic>{{ topic }}</topic>
```

Add placeholder text:
e.g., https://www.theverge.com/...
Enable URL validation to ensure the input is a proper URL.
(Optional) Set a test value for debugging, like a real article URL.
Set the output format to Text only.
Enable Auto-enhance to improve scraping reliability.
Keep the Default scraper selected (Firecrawl is also available if needed).
Leave Screenshot disabled unless required.
Scrape the page.
Analyze the content.
Generate a LinkedIn post for you to copy or repurpose.
Auto-enhance improves scraping accuracy on dynamic or complex websites.
Tools to test, debug, and iterate until the output is just right.
Deployment options across web apps, browser extensions, APIs, and more.
Already used to launch over 200,000 agents, MindStudio supports everyone from individuals to large enterprises.
At its core, an AI agent is:
Something that uses an AI model to perform a task on your behalf.
AI agents in MindStudio:
Leverage 90+ models from OpenAI, Google, Meta, Anthropic, and others.
Execute tasks using structured workflows.
Can collaborate with other agents to complete complex objectives.
MindStudio agents can be deployed in many ways:
AI-Powered Web Apps: Shareable web apps you can bookmark, embed, and reuse.
Chrome Extension: Trigger agents contextually while browsing.
Scheduled Automations: Run background tasks on a recurring schedule.
Email Trigger: Forward threads to a unique email to auto-trigger an agent.
Webhooks: Trigger agents from external tools like Zapier or your own apps.
API Integration: Programmatically call agents to add intelligence to software.
MindStudio agents range from simple to highly advanced. Here’s how to think about their complexity:
Example: Ask an AI to write an email.
Pattern: One AI block that sends a message to a model and returns an output.
Use case: Quick, context-light tasks.
Example: Personalize emails for a list of leads.
Pattern: Multi-step blocks that enrich context before calling AI.
Use case: Tasks requiring background data, logic, or structured input.
Example: Auto-generate, check, and improve content based on rules.
Pattern: Includes logic blocks for decision-making and validation.
Use case: Fully autonomous systems needing high accuracy and quality control.
Goal: Simplify a YouTube transcript.
Type: Level 1
Deployment: Chrome extension
Structure: Single block calling an AI model.
Goal: Generate personalized sales documents.
Type: Level 2
Structure: Form input → AI enrichment → Final generation.
Goal: Analyze a product URL and suggest alternatives.
Type: Level 2
Features: Web scraping, competitor analysis, HTML output.
Goal: Generate a research paper with sources, images, and even podcasts.
Type: Level 3+
Highlights: Logic checks, data enrichment, multimedia generation.
AI agents are just workflows — step-by-step processes that accomplish tasks. With MindStudio:
You can start small and grow into more advanced designs.
The platform supports a wide range of use cases, from everyday productivity to enterprise-grade automation.
Subscribe to our YouTube channel to follow along and level up your AI agent-building skills.
Thanks for watching!
Learn how to navigate the workspace, configure blocks, manage models, use debugging tools, and access advanced agent settings
This guide provides a comprehensive overview of the AI Editor interface in MindStudio. You'll learn how to navigate the workspace, configure resources, manage agent settings, and debug workflows efficiently.
When you create an AI agent, you’re placed inside the AI Editor. The UI is divided into two main sections:
Left Panel (Explorer): Contains resources like data sources, custom functions, user inputs, and workflows.
Right Workspace: Displays contextual editing tools and configuration panels based on what you're working on.
Use the explorer to navigate between tabs such as automations, user inputs, and functions. The main workflow typically starts in main.flow, under the automations tab.
Inside the automations tab:
The start block initializes the workflow.
The terminator block ends the workflow.
You can insert additional logic using the + button to add new blocks.
The canvas supports:
Vertical and horizontal scrolling (use Shift to scroll horizontally).
Zooming, reset view (R key), and auto-arranging blocks for a clean layout.
Quick tools for panning (H key) and selecting (V key).
Useful tools in the canvas include:
Sticky notes for adding comments or reminders.
Diagnostics tool to validate block linkages and optimize workflows (recommended for advanced users).
Errors tab to highlight issues such as missing inputs.
Debugger tab to monitor input/output and cost at each step during agent execution.
Key tabs along the top of the editor:
System Prompt: Set overall instructions for the agent (e.g., blog post generator). You can also use the Generate Prompt tool to automatically create a structured prompt.
Model Settings: Choose a default AI model from over 90 options. This acts as a fallback if a block doesn't specify a model.
Evaluations: Allows you to batch test scenarios (optional for beginners).
Each block has a configurable panel on the right-hand side. For example:
A Generate Text block lets you enter the prompt.
A Run Function block switches the configuration panel to support code-based execution.
You can preview your agent and inspect each step’s behavior through the debugger at the bottom.
Select the root agent folder to access:
General Settings: Name, description, icon, and landing page for sharing.
Sharing & Access: Set visibility (public/private), enable remixing, and configure API access (business plans only).
Advanced Settings: Includes onboarding workflows, global variables, and persistent user data (covered in later videos).
Version History: Revert to previous published versions of your agent.
Most of your work will happen inside the workflow builder, where you’ll add and configure blocks. This video covers the full UI so you know where to find everything as you start building. If you have questions, check the in-app documentation or leave a comment on the video.
Thanks for watching!
Learn how to understand, manipulate, and apply structured data (JSON) in MindStudio workflows
JSON (JavaScript Object Notation) is a powerful and widely used format for storing and exchanging structured data. In MindStudio, understanding how to parse, generate, and utilize JSON is essential for building flexible and powerful AI workflows.
JSON is a structured data format made up of key-value pairs. Each key is a string (in quotes), and its associated value can be:
A string ("Alice")
A number (30)
A boolean (true)
An array (["user", "admin"])
Another object ({"age": 30, "active": true})
JSON also supports nesting, allowing you to build complex hierarchies of data.
All keys must be strings (in double quotes).
Key-value pairs must be comma-separated.
Do not include a trailing comma at the end of the last key-value pair.
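These rules can be verified with any standard JSON parser; a quick sketch using Python's built-in json module:

```python
import json

# Well-formed JSON: string keys in double quotes, comma-separated pairs.
valid = '{"name": "Alice", "age": 30, "roles": ["user", "admin"]}'
data = json.loads(valid)
print(data["roles"][0])  # → user

# A trailing comma after the last pair violates the spec and fails to parse.
invalid = '{"name": "Alice",}'
try:
    json.loads(invalid)
except json.JSONDecodeError as e:
    print("invalid JSON:", e.msg)
```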
Many blocks, like Search Google News, return data as JSON. This allows you to access structured results, such as article titles, links, and thumbnails.
You can:
Display the full JSON
Extract specific values using path expressions
Iterate through lists using each blocks
To get a specific value from JSON, reference its path. For example, a path expression can extract the title of the first article from the GoogleNews variable.
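Assuming the results are stored under an `articles` array (the exact structure depends on the block's output), the path might look like:

```
{{ GoogleNews.articles[0].title }}
```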
Use the #each tag to loop through arrays in JSON. This renders each article's title, link, and thumbnail using Markdown formatting.
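A sketch of such a loop, assuming each item exposes `title`, `link`, and `thumbnail` fields (the field names and `this.` accessor are illustrative, based on common templating syntax):

```
{{#each GoogleNews.articles}}
## [{{this.title}}]({{this.link}})
![thumbnail]({{this.thumbnail}})
{{/each}}
```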
You can use the Generate Text block to ask AI to return structured JSON. For example, extracting all URLs from search results:
Use an output schema to define the format and simplify complex JSON into more usable lists.
JSON can also be applied to generate HTML pages dynamically using the Generate Asset block. You can:
Iterate over content sections
Conditionally render data like headings, key takeaways, and entities
Use extracted structured data to build full HTML pages for summaries or reports
Example usage:
Extract structured information from a Verge article
Use that data to populate an HTML summary page
Present it with images, headlines, and lists using embedded JSON paths
Understanding JSON enables you to:
Parse and manipulate structured responses
Extract only the values you need
Iterate through complex arrays
Generate clean structured output from unstructured content
Once you master JSON in MindStudio, you'll be able to build significantly more advanced and powerful agents with flexible output formatting and robust data handling.
Pull in or send out data to and from external services
Integration blocks in MindStudio let your AI agents connect seamlessly to third-party services. These blocks either bring data into your workflow or send data out to external tools, enabling powerful automation use cases.
You can add integration blocks like any other block using the “+” menu and browsing View All Blocks. They appear slightly differently in the editor to visually distinguish them from native blocks.
Integration blocks fall into two categories:
Input blocks: Pull data into your workflow (e.g. YouTube captions, Google Docs).
Output blocks: Push data out to external services (e.g. LinkedIn posts, Google Sheets, emails).
Each block has its own configuration panel with parameters specific to the connected service.
This workflow extracts captions from a YouTube video and automatically creates and publishes a LinkedIn post summarizing the content.
Blocks Used:
User Input: Accepts a YouTube URL.
Fetch YouTube Captions: Retrieves the video transcript.
Generate Text: Uses AI to summarize the transcript into a LinkedIn post.
Create LinkedIn Post: Publishes the AI-generated post.
Requires signing into LinkedIn and setting post visibility. AI content is passed using a variable to this block.
This setup automates content repurposing from YouTube into social media.
This workflow pulls financial text from two Google Docs, extracts numerical data using AI, and formats the comparison into a Google Sheet.
Blocks Used:
Fetch Google Doc: One block per document (doc1 and doc2).
Generate Text: Parses both documents, extracts values, and formats as CSV.
Create Google Sheet: Inserts the CSV into a new sheet.
Display Content: Displays a link to the generated sheet.
All blocks require signing into your Google account. CSV format enables structured spreadsheet output.
This flow is ideal for automating competitive or financial comparisons.
This workflow searches Google News, extracts top headlines, creates a styled email digest using HTML, and sends it via email.
Blocks Used:
Google News Search: Queries for a specific keyword (e.g. "AI agents").
Generate Text: Converts JSON-formatted news into HTML.
Generate Subject Line: Summarizes the news as a catchy email subject.
Send Email: Delivers the digest to a recipient using Markdown or HTML.
HTML formatting lets you include images, links, and structure. This is a great use case for daily or weekly digests.
Integration blocks enhance AI workflows by connecting to external tools.
Data can flow in or out, depending on the block.
Each service requires setup: You’ll need to sign in to third-party accounts (e.g. Google, LinkedIn).
Variables pass data between blocks, allowing AI-generated outputs to trigger external actions.
MindStudio supports hundreds of integration blocks, allowing for creative automation across marketing, operations, research, and more.
Use them to transform your agents into powerful, connected tools.
Learn how to bulk generate, run, and analyze test cases efficiently to validate your AI agents' behavior across multiple scenarios.
The Evaluations feature in MindStudio allows you to test AI workflows at scale using autogenerated or manually defined test cases. This is especially helpful for validating workflows like moderation filters, where consistent logic must be applied across many inputs.
Manually testing workflows via the preview debugger becomes inefficient as the number of test cases grows. Evaluations allow you to:
Autogenerate test cases with AI
Specify expected outputs
Run tests in bulk
Compare actual vs. expected results
Use fuzzy matching for flexible validation
In this example, an AI workflow is designed to detect spam comments and flag violations based on defined community guidelines. The workflow takes in a comment via a launch variable and outputs:
A boolean indicating whether it's spam
An array of flags indicating types of violations
Navigate to the top-level "Evaluations" tab in your project.
Click New Test Case to manually add a test or use Autogenerate to let AI create test cases for you.
Input guidance like “generate five test cases that are in violation of our guidelines.”
AI will produce sample comments with the correct input structure.
Add expected results (e.g., "is_spam": true, "flags": ["hateful", "off-topic"]).
Click Run All to test all cases in parallel.
MindStudio will show which tests pass or fail based on comparison with expected results.
Each test can be inspected in the debugger.
Repeat the process with a new prompt: “generate five comments not in violation.”
Provide expected results (e.g., "is_spam": false, "flags": []).
Run the new set and verify accuracy.
MindStudio supports two types of result matching:
Literal Match: Requires the actual output to exactly match the expected value.
Fuzzy Match: Allows minor differences or variations in phrasing. Useful for outputs with dynamic AI wording.
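MindStudio's matching is built in, but the distinction can be sketched with a string-similarity ratio. The 0.8 threshold here is an arbitrary illustration, not MindStudio's actual rule:

```python
from difflib import SequenceMatcher

def literal_match(actual, expected):
    """Pass only if the output is byte-for-byte identical."""
    return actual == expected

def fuzzy_match(actual, expected, threshold=0.8):
    """Pass if the strings are 'close enough' by similarity ratio."""
    return SequenceMatcher(None, actual, expected).ratio() >= threshold

expected = "This comment violates the spam guidelines."
actual = "This comment violates our spam guidelines."

print(literal_match(actual, expected))  # False: one word differs
print(fuzzy_match(actual, expected))    # True: nearly identical
```

Literal matching suits structured outputs like booleans and flag arrays; fuzzy matching suits free-form AI wording that varies between runs.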
Run many test cases at once
Easily edit and rerun failing cases
Debug individual results
Improve the reliability of your AI workflows
Evaluations are a key tool for ensuring your AI behaves as expected at scale. Whether you're building content filters, classifiers, or other deterministic logic, this feature helps you confidently validate your workflows.
A prompt is simply the set of instructions you give an AI. It’s the way you tell the AI what you want — whether that’s an explanation, a summary, a creative draft, a detailed analysis, or anything else you can think of.
The quality of its output depends on the clarity of your request. If you’re vague, you’ll get vague answers. If you’re specific, you’ll get much closer to what you are actually looking for.
A simple way to write better prompts is to follow this three-step structure. If you include all three, you’ll almost always get clearer, more useful results.
Context gives the AI the key information it needs in order to follow your instructions. It sets the scene, explains the situation, and helps the AI understand what you’re asking it to do. Without context, the AI has to guess — and its answers will usually be too broad or off-target.
The task is the instruction itself. This is where you tell the AI exactly what you want it to do. The more precise and direct you are, the better the result. Since the AI takes language literally, a well-defined task removes ambiguity and keeps the answer focused on the outcome you actually need.
The format is how you want the response to be delivered. By specifying format, you control the structure and style of the answer, making it easier to read, compare, or use. Without it, the AI decides for you — and that may not align with your goals. Format is what turns a raw answer into something you can apply right away.
❌ Bad Prompt:
✅ Good Prompt:
❌ Bad Prompt:
✅ Good Prompt:
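The original examples are not shown here; a hypothetical pair illustrating the context–task–format structure might look like:

```
❌ Bad: "Write something about remote work."

✅ Good: "I run a 10-person design agency that just went fully remote.
Write a 300-word internal memo proposing three norms for async
communication, formatted as a bulleted list with a one-sentence intro."
```

The good prompt supplies context (who you are), a precise task (three norms, 300 words), and a format (bulleted list with an intro).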
This guide walks you through building a website monitoring agent that checks for changes daily and sends an email if updates are detected.
MindStudio enables you to schedule AI agents to run automatically at specific times. This is useful for automating repetitive workflows like monitoring web content, generating reports, or triggering notifications without manual input.
In this example, we’ll build an AI agent that checks for changes on a website (e.g. OpenAI’s news page) each morning and emails a summary of those changes.
Create a new workflow and add the Track Website Changes module.
In the configuration panel:
Enter the target URL (e.g., https://openai.com/blog).
Use the default variable names unless you need to customize them.
You’ll configure two outcomes:
No Changes: Route to an End Workflow block.
Changes Detected: Route to a Send Email block.
In the Send Email block:
Connect your email account via the integrations menu (gear icon).
Set a subject like:
Changes Detected on OpenAI News
Use the {{changes}} variable as the email body to show the summary of detected content updates.
Click the Start Block, and in the Trigger section on the right:
Change the Run Mode from “On Demand” to Scheduled.
Click Add to define your schedule.
Use natural language or presets like:
Every morning at 9:00 AM
Set your Time Zone
The agent will now run automatically each morning.
Use the Preview button and run the agent in the Debugger:
On the first run, it will detect all visible content (no baseline exists yet).
On subsequent runs, it will only email you if changes are detected.
Scheduled AI agents are powerful for automating:
News monitoring
Report generation
Notifications
Daily or weekly workflows
To configure:
Use the Track Website Changes module.
Send detected changes via Email.
Set the schedule from the Start block.
You now have a self-running AI agent that keeps you informed—automatically, every day.
Learn how deep research works
In this deep-dive session, we break down how Deep Research works, how it’s built, and how you can use it to automate competitive analysis, summarize vast sources, generate insights, and more — all with minimal human input.
✅ What makes Deep Research a next-gen AI Agent
✅ The architecture behind its autonomous workflow system
✅ How it searches, filters, evaluates, and compiles information
✅ Real demos of Deep Research in action
✅ Tips to build or customize agents like this for your own use cases
AI won’t replace great marketers — but great marketers who use AI will replace those who don’t.
In this masterclass led by Dannielle Sakher (Director, Partnerships) and Dmitry Shapiro (CEO), you’ll learn to use these AI Agents that do the content grunt work for you:
✅ Case Study Generator: Turn customer call transcripts into polished success stories
✅ Testimonial Extractor: Pull quotes from customer calls for social, blogs, landing pages, and more.
✅ Webinar → Blog Agent: Turn your Zoom/YouTube webinars into full blog posts, with research-backed content added in.
✅ Webinar → Socials Agent: Generate social content from webinars in seconds.
✅ ICP Rewriter: Instantly rewrite a blog post or email for a different audience (Enterprise, SMB, Developers, etc.)
→ How to customize these Agents for your brand
→ How to build your own (no code required)
→ How to make $$$ building AI Agents for others
Learn how to use AI to generate text, images, audio, video, and more.
This guide walks through the process of generating four types of AI content in a single MindStudio agent:
Text
Image
Audio
Video
Learn how to design user input forms in MindStudio to collect contextual information for your AI workflows.
This guide walks you through how to build structured forms in MindStudio that allow your AI agents to collect valuable information from users. You’ll learn how to create user input blocks, define variable names, configure form fields, and reference them inside prompts.
In this example, we’re building a blog post generator. To create a more personalized AI output, we’ll collect the following information from the user:
Learn how to install the MindStudio Chrome extension and build your first AI agent specifically for use within it.
This guide shows you how to create and deploy an AI agent designed to work within the MindStudio Chrome extension. You’ll learn how to configure the run mode, use webpage content as input, and set up a working summarizer agent from scratch.
Visit the MindStudio website and click Install Chrome Extension.
This guide covers uploading and managing document-based data sources, then querying them for relevant AI context.
MindStudio allows you to create internal data sources directly within your projects. These data sources are ideal for uploading documents—like support guides or product manuals—that your AI agents can reference to generate accurate, contextual responses.
MindStudio supports several types of data sources:
Integration data sources
Learn how to group independent tasks for concurrent execution and measure efficiency gains.
Running independent workflow blocks in parallel can significantly reduce execution time for your AI agents. This approach is especially beneficial when multiple tasks can be performed simultaneously without waiting for each other’s results.
An AI agent is designed to:
Scrape multiple news websites
Example prompt (LinkedIn post):

Write an attention-grabbing LinkedIn post based on the following article:
<content>{{ scraped_content }}</content>

Example prompt (trip planner):

Plan me a trip.
I’m planning a 3-day weekend in Paris on a $500 budget.
Create a travel itinerary.
Present it as a day-by-day schedule with morning, afternoon, and evening activities.

Example prompt (blog post):

Topic: Dogs
Subtopic: Dog Care for new pet owners
Tone: Encouraging and Educational
Write a blog post using the information above.
The blog post should be properly formatted with headers and should include "Quick tips:" bullet points for each section.

Publishing: Click the “Publish” button at the top right to make changes live.
Build advanced outputs like HTML templates dynamically
You can chain multiple integration blocks for more complex workflows (e.g. fetch > analyze > send).
Set a Detection Prompt, such as:
Any changes to the main content of the website. Specifically, we are looking for new news stories.
Set your time zone (e.g., America/Los_Angeles), click Generate Schedule, then Save.
You’ll also learn how to structure your prompts, connect blocks, and display all content together using a unified layout.
MindStudio supports generation of:
Text: Articles, emails, scripts, summaries, and more.
Image: AI-generated visuals from prompts.
Audio: Text-to-speech conversion using voice models.
Video: Short clips based on prompt descriptions.
Each content type has its own dedicated block, model settings, and display method.
We’ll create an agent that takes a single topic input and produces:
A long-form article
A relevant cover image
Audio narration of the article
A related short video
Add a User Input block.
Use Long Text.
Variable name: topic
Label: “What would you like your long-form article to be about?”
Add a Generate Text block.
Prompt:
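For example:

```
Write a long-form article about the following topic:
<topic>{{ topic }}</topic>
Make sure to use markdown formatting.
```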
Save the output to variable: text
Add another Generate Text block.
Prompt:
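For example:

```
Based on the following content, write a simple image prompt for an AI image model:
<content>{{ text }}</content>
```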
Save to variable: image_description
Add a Generate Image block.
Use {{ image_description }} as the image prompt.
Output variable: image
Use your preferred model (e.g., Ideogram V2)
Optional: Set aspect ratio (e.g., 16:10)
Add a Text to Speech block.
Input: {{ text }}
Output variable: audio
Choose a model and voice (e.g., ElevenLabs → Callum, Turbo 2.5)
Add another Generate Text block.
Prompt:
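For example:

```
Based on the following content, write a simple video prompt for an AI video model:
<content>{{ text }}</content>
```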
Save to variable: video_description
Add a Generate Video block.
Use {{ video_description }} as the prompt.
Output variable: video
Select a video model (e.g., Ray 2)
Add a Display Content block and use the following syntax:
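For example (the image line uses standard markdown image syntax and is illustrative; the audio and video tags match the snippets available in QuickHelp):

```html
![Cover image]({{ image }})

{{ text }}

<audio controls>
  <source src="{{ audio }}" type="audio/mpeg">
</audio>

<video controls>
  <source src="{{ video }}" type="video/mp4">
</video>
```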
Alternatively, copy snippets from QuickHelp in the editor for image, audio, and video.
When previewed:
The agent will collect a topic.
AI will generate a markdown-formatted article.
An image, audio narration, and video will be produced from the article.
All outputs are combined into a clean, unified display.
To generate rich AI media in MindStudio:
Use content-specific generation blocks.
Structure prompts clearly and use variables throughout.
Save outputs to variables.
Use the Display Content block with proper syntax to render media.
This pattern allows for powerful, engaging AI experiences from just a single input. Experiment with other media types, models, and formatting to further customize your AI agents.
Desired tone/style
Target length (optional for this demo)
Add a User Input block to your workflow.
On the right-hand panel, click the + button to create your first input field.
Input Type: Short text
Variable Name: topic
Label Text: “What is the topic you'd like to write about?”
Help Text: “Hint: Keep it short”
Placeholder: e.g., dogs, cats, space, race cars
Test Value: dogs (useful for debugger testing)
Only the variable name and label are required. All other fields are optional but help guide the user experience.
To use the user input in your AI prompt:
Always wrap variables in double curly braces and surround them with custom tags for clarity and consistency.
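For example:

```
Write a blog post about the following topic:
<topic>{{ topic }}</topic>
```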
To collect more information:
Click the + next to the User Inputs folder in the left Explorer.
Create a new variable named tone.
Set the input type to Multiple Choice.
Provide options such as:
Professional
Scientific
Playful
Somber
Configure the label: “What is the preferred tone of the article?”
By default, new inputs created in the folder aren't added to the User Input block. To include them:
Open your User Input block
Click the + button
Select the tone variable and add it to the form
Update your prompt with the tone variable:
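For example, append:

```
Make sure to use the following tone:
<tone>{{ tone }}</tone>
```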
This helps tailor the AI’s style based on user preference.
Click Preview to open a draft of your agent.
All inputs will appear on a single screen.
To split inputs across multiple screens, use multiple User Input blocks.
Enter test data (e.g., topic = space, tone = scientific) and proceed to see the generated blog post.
User Inputs are essential for gathering context that improves the relevance and quality of AI outputs. Key tips:
Define clear variable names
Use intuitive labels and help text
Include placeholder and test values where applicable
Always reference variables in prompts using {{ variable_name }} syntax and descriptive tags
User inputs can be created directly from blocks or from the folder. For multi-step forms, use multiple blocks to separate screens.
Keep experimenting with input types and layout to make your AI agents more interactive and tailored to user needs.
You’ll be taken to the Chrome Web Store where you can click Add to Chrome.
After installation, click the MindStudio icon in your browser toolbar.
Sign in or create an account — you’ll receive $5 in free credits to explore agents.
Open the sidebar extension, browse the agent store, and run agents directly from your browser.
You can pin favorite agents to access them quickly and view useful metrics like average runtime and cost before triggering them.
Open a site like theverge.com.
Launch the MindStudio extension.
Choose an agent (e.g., TLDDR summarizer), view its details, and click Run.
The agent will summarize the page or video transcript instantly.
You can pin agents to your extension for one-click access.
To create your own agent:
Go to your MindStudio workspace and click Create New Agent.
You'll be directed to the AI Editor.
Switch to the Automations tab to view the canvas.
Each workflow includes a:
Start block (triggers the agent)
End block (marks completion)
Additional logic in between
Click on the Start block.
Change the Run Mode to Browser Extension.
This exposes launch variables such as:
page_url
metadata
page_content (full text of the current page)
user_selection
html
For summarizing content, use the page_content variable.
Click the + button to add a Generate Text block.
In the prompt field, write:
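For example:

```
Summarize all text on the page:
<content>{{ page_content }}</content>
```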
Choose a model (default: Claude 3.5 Haiku) or select from 50+ available options.
Save your changes.
Click the root folder in the left Explorer.
Rename the project (e.g., Web Page Summarizer).
Add a short description.
Click Publish.
Refresh your current webpage.
Open the MindStudio extension and navigate to My Agents.
Find your newly published agent and click Run.
The summarizer will display results instantly.
Use the Debugger tab in MindStudio to:
View step-by-step execution
See input/output details
Track runtime and cost metrics
This helps verify your agent is functioning as expected and offers insight into performance.
You can explore the agent store to:
Browse public agents
View performance details
Duplicate any agent via the three-dot menu to inspect or customize it
Some agents are simple and rely on strong prompts, while others are more complex workflows.
You now have a working AI agent deployed in your browser. Continue experimenting by:
Creating new Chrome agents with different purposes
Enhancing prompts with formatting or logic
Exploring and remixing agents in the store
Internal databases: Custom backends or structured tables, supported via advanced connections.
Document-based project data sources: Files uploaded directly into your project’s "Data Sources" folder—this is the focus of this guide.
To demonstrate how document-based data sources work, we'll create a support bot that answers questions about MindStudio using uploaded documentation.
Begin your AI agent with a user input block. This block captures the user's question and stores it in a variable, typically called query.
Navigate to the Data Sources section on the left-hand panel. Click the plus button to create a new data source:
Name it (e.g., Mind Studio Docs)
Add a description
Upload documents (up to 150 files, each ≤50MB)
Tip: Use a free PDF compression service if your documents are too large.
As the document uploads, it will be processed into a vector database:
You’ll see a word count and chunk count.
Review the extracted text to ensure formatting looks clean.
Check the chunk preview to understand how the document is split.
Use the index snippet to reference the full document, or query it with natural language.
Insert the Query Data Source block into your workflow:
Select your uploaded data source.
Set the output variable (e.g., query_result)
Use the query variable (from user input) to trigger the search.
Optionally adjust the number of chunks retrieved (default is 3, max is 5).
Use a Generate Text block to create your AI’s response:
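For example, a prompt that wraps the retrieved chunks in a context tag before the user’s question:

```
<context>
{{query_result}}
</context>
Use the info above to answer the following question:
{{query}}
```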
This setup ensures the AI receives relevant context before answering.
Use the Draft Agent preview to test your support bot. As users ask questions, the system:
Queries the vectorized document.
Retrieves relevant text chunks.
Uses those chunks as context to generate an answer.
If your model has a large enough context window (e.g., Claude 3.5 Haiku supports 200k tokens), you can pass the entire document to the AI using the index snippet.
Caution: Passing full documents may reduce performance or make the AI less precise. Use only when full context is necessary.
Data sources in MindStudio allow AI agents to query long-form documents with natural language.
Use them to build agents like knowledge bases, support bots, or product Q&A tools.
Choose between querying small chunks for relevance or referencing full documents for completeness.
Always validate uploaded files by checking the extracted text for formatting issues.
Data sources are a powerful way to give your AI agents domain-specific expertise—using the same documentation your team already relies on.
Query Google News
Generate an email digest with the top stories
By default, each block runs sequentially, causing unnecessary delays if the blocks aren't interdependent.
Look for blocks that:
Perform independent operations
Don’t rely on each other’s output
In this example, four blocks are scraping or querying different news sources. None depend on the output of the others, making them perfect candidates for parallel execution.
Select Blocks Highlight the blocks you want to run in parallel.
Create a Group Right-click one of the blocks and select Create Group.
Change to Parallel Execution Click the group label to toggle from Sequential to Parallel.
Now, these blocks will execute at the same time instead of waiting on one another.
Sequential Run: ~60 seconds
Parallel Run: ~25 seconds
Parallelizing the scraping and news querying tasks led to a 58% reduction in workflow execution time.
Speed Improvements: Reduce total run time significantly.
Better User Experience: Faster responses for time-sensitive tasks.
Scalability: Makes workflows more efficient as complexity grows.
To optimize your workflows:
Identify independent tasks
Group them into a parallel execution block
Re-run and measure the performance impact
Using parallel execution in MindStudio can lead to major gains in efficiency, especially for content aggregation, multi-source processing, and automation workflows.
Learn foundational concepts like workflows, variables, prompt structuring, markdown formatting, and AI model selection as you build and publish agents from scratch.
This guide walks you through creating two AI agents in MindStudio: a blog post generator and a Chrome extension-based summarizer. Along the way, you'll learn about workflows, variables, prompt design, and agent publishing.
To start building:
Navigate to the Build tab in MindStudio.
Click Create New Agent.
You'll land inside the AI Editor on the automations canvas, where the workflow begins with a Start and End block.
Workflows consist of functional blocks connected in sequence. Key block types include:
User Input: Creates form fields and stores input as variables.
Generate Text: Sends prompts to an AI model and returns text.
Display Content: Displays output within the workflow (for testing or UI).
Variables are created in form blocks and referenced using {{ double curly braces }}.
Best practice for prompt formatting:
Separate variables from instructions using custom tags.
Example:
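```
Write a blog post about the following topic:
<topic>{{ topic }}</topic>
```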
This improves clarity and avoids formatting or grammatical issues when users input long or complex text.
Use markdown formatting to improve the structure of the AI's output.
Add instructions:
Make sure to use markdown formatting in your response.
Provide a template using example tags:
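For example:

```
<example>
# Title
A compelling hook for the article.
## Section Header
Multiple paragraphs about the section.
- Key takeaway 1
- Key takeaway 2
## Conclusion
</example>
```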
This gives the AI a clear outline to follow and leads to more consistent, polished outputs.
To publish your agent:
Click the root folder to open agent settings.
Provide a name, and optionally add a description, icon, or landing page.
Click Publish.
You can preview and re-publish at any time. Agents are private until explicitly shared.
Under Model Settings, you can:
Choose from over 90 AI models by Anthropic, OpenAI, Google, Meta, and more.
Adjust the temperature to control randomness:
Higher values = more variation
Lower values = more deterministic
Start with defaults and refine based on behavior and output quality.
To build a Chrome extension-based summarizer:
Create a new agent.
In the Start block, change the Run Mode to Browser Extension.
Use the launch variable page_content.
Name and publish your agent (e.g., Summarize Anything).
Once published, this agent can summarize:
Website content
YouTube transcripts
PDFs and documents
It becomes available inside the MindStudio Chrome extension automatically.
You’ve now built:
A blog post generator using structured markdown prompting.
A content summarizer for the browser using launch variables.
Along the way, you learned:
How to use variables and form inputs
How to structure prompts for better output
How to configure and select AI models
How to publish and test your agents
These techniques will serve as your foundation for building more complex AI agents. Continue experimenting, iterating, and publishing as you expand your skills in future lessons.
Learn how to chain multiple blocks together in a workflow to pass data from one step to the next.
This guide covers how to build more advanced AI workflows by chaining blocks together. You’ll learn how to take the output from one block, save it as a variable, and use it downstream to enrich prompts, create media, and display final outputs.
We’re building a multi-step blog post generator that does the following:
Collects a topic from the user.
Uses AI to generate an outline and key talking points.
Writes a full blog post using those points.
Generates a cover image based on the article.
Displays the final post and image together.
Add a User Input block.
Input type: Short text.
Variable name: topic
Label: “What topic would you like to write about?”
Add a Generate Text block.
Prompt:
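A prompt along these lines works (a sketch, not the exact wording from the guide):

```
Generate an outline and key talking points for a blog post about the following topic:
<topic>{{ topic }}</topic>
```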
In Output Behavior, select “Save to Variable”.
Name the variable: key_points
Add another Generate Text block.
Prompt:
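A sketch of the kind of prompt to use here (the exact wording may differ):

```
Write a full blog post based on the following key points:
<key_points>{{ key_points }}</key_points>
Make sure to use markdown formatting.
```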
Save the output to a new variable: blog_post
Add another Generate Text block.
Prompt:
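A sketch, adapting the image-prompt pattern used elsewhere in these guides:

```
Based on the following content, write a simple image prompt for an AI image model:
<content>{{ blog_post }}</content>
```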
Save output to: image_description
Add a Generate Image block.
Use {{ image_description }} as the image prompt.
Save image output to: image
Add a Display Content block.
Use this markdown-style format to render both the image and blog content:
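For example (the image line uses standard markdown image syntax; the layout is illustrative):

```
![Cover image]({{ image }})

{{ blog_post }}
```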
This combines everything into one nicely formatted response.
Click Preview to open a draft and enter a test topic (e.g., “F1 cars and tech”). The AI will:
Generate an outline and key points.
Write the full blog post.
Generate an image description.
Create an image.
Use the Debugger to monitor each step:
Track inputs and outputs.
View how variables are resolved.
Analyze each block’s execution.
Chaining blocks means passing one block’s output as a variable to another block’s prompt.
Use the Save to Variable setting to store data for reuse.
Wrap variables with tags (e.g., <blog_post>{{ blog_post }}</blog_post>) to clarify structure.
Chaining is a core MindStudio principle that enables powerful, dynamic workflows. Practice chaining in small steps to build confidence and scale up to more complex agents.
Keep experimenting and expanding your workflows by connecting logic, generation, and media blocks together!
Learn how to structure JSON content, design HTML templates, and render polished webpages with dynamic data.
Generating HTML assets enables your AI agents to produce professional, styled web outputs—ideal for reports, articles, or visual summaries.
The example workflow builds a long-form article page through the following steps:
User Input: Captures a topic from the user.
User Context: Gathers additional context via AI-generated questions.
Generate Queries: Uses that context to generate relevant Google search queries.
Run Research Subworkflow:
Searches Google
Scrapes each result
Summarizes each page
Compile Results: Collects all structured data into a JSON object.
Generate Images: Runs a subworkflow that:
Creates image prompts from each article section
Generates images with a model
Combine Content: Uses a custom function (e.g. add images to report) to merge the article JSON and images into a final, structured variable (e.g. updated_report).
Set the Source Type to HTML.
Output variable: e.g., HTML
Format: HTML
Expand the Source Document field to edit and preview your HTML. Use Handlebars-style syntax to bind JSON variables:
Use {{#each}} and {{/each}} to loop through arrays.
Nest variables properly based on your JSON structure.
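For example, looping over an array variable (here a hypothetical GoogleNews list of articles, following the pattern shown in the snippets below):

```
{{#each GoogleNews}}
### {{title}}
[Read Article]({{url}})
{{/each}}
```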
Paste in your sample JSON and variable name (e.g., updated_report) to preview the output live in the editor.
If you don’t want to hand-code HTML:
Use the Generate Asset Helper agent.
Provide:
Your variable name (e.g., updated_report)
Sample JSON
The helper will:
Ask design questions (layout, font, colors, spacing)
Generate a complete HTML template
Provide instructions to paste it into your asset block
Add a Display Content block:
Set the type to HTML
Connect it to the output of your Generate Asset block
Your AI agent will now generate and render a fully styled web page as the final output.
The HTML template can include embedded CSS using <style> tags. You can:
Adjust fonts, spacing, and layout
Change image border radius or alignment
Modify heading sizes (e.g., h1, h2, h3)
Edit and preview changes instantly in the asset editor.
When inserting dynamic content:
Use full paths (e.g., {{updated_report.title}})
Don’t reference standalone keys (e.g., {{title}}) unless the variable is defined globally
The Generate Asset block allows you to:
Display highly customized HTML pages
Combine AI-generated content and imagery
Provide end users with a rich, professional experience
For advanced use cases, explore how other MindStudio agents (like “Generate LinkedIn Carousel” or “Generate Podcast”) implement this technique.
Learn the difference between global variables and launch variables in MindStudio workflows: store data across runs, or pass external inputs into AI agents via API or automation platforms.
MindStudio supports two powerful types of variables that extend the functionality of your AI agents: Global Variables and Launch Variables. Each serves a distinct purpose and can help you build more advanced, persistent, and externally-driven workflows.
Global variables allow you to store values between workflow runs. This is useful when you want to retain state, keep a running record, or reference previous output in future executions.
Prefixed with global.
Stored project-wide and persist between agent runs
Configurable from the Global Variables tab in the root project folder
A story-writing agent appends new chapters to a global variable called global.story.
Each run checks if a story exists:
If it does: generates the next chapter.
If not: starts a new story.
This approach is ideal for accumulating content, maintaining histories, or building AI memory-like features.
Launch variables allow you to inject data into your workflow at runtime, often via an external integration (e.g. API, webhooks, Make.com).
Declared in the Start Block
Replaces the need for user input blocks when running workflows programmatically
Supports structured automation flows like onboarding forms, CRMs, or lead collection
A Google Form collects company data (e.g. name, representative, company info).
A Make.com automation sends these inputs into MindStudio as launch variables.
The MindStudio workflow uses these variables to generate personalized sales content.
The generated output is passed back and saved to a Google Sheet.
Launch variables streamline integrations and automate personalized content generation at scale.
By using global and launch variables strategically, you can create more intelligent, dynamic, and automated AI agents in MindStudio.
Learn how to control the flow of your AI workflows using Menu, Jump, Logic, and Checkpoint blocks.
MindStudio provides four powerful block types that enable dynamic routing and decision-making in workflows. These are ideal for tailoring responses, segmenting processes, and incorporating human feedback.
Purpose: Presents users with a selectable menu to route them to different parts of a workflow.
How It Works:
Learn to dynamically render user choices, gather additional context interactively, and enhance decision-making within your AI agents.
Dynamic user inputs allow your AI workflows to adapt and respond to earlier outputs by presenting users with choices or prompts that reflect prior data. This approach makes your AI agents more interactive, relevant, and powerful.
In a basic setup, you might scrape a URL and extract entities (e.g., people, organizations) mentioned in an article. The next step could involve presenting the user with these entities to choose one for further research.
To do this:
Learn how to run sub-workflows within a parent workflow in MindStudio to process structured data iteratively.
Using sub-workflows in MindStudio enables you to break out a specific task (like scraping a URL or generating summaries) and apply it repeatedly across a list of items. This approach is highly effective when dealing with variable-length structured data such as JSON arrays.
When dealing with dynamic lists (like search results or multiple URLs), manually duplicating logic is inefficient and error-prone. Instead, sub-workflows allow you to:
The following snippets are referenced in the guides above.

Referencing a single item in an array:

{{GoogleNews.articles[0].title}}

Looping over an array with {{#each}}:

{{#each GoogleNews}}
### {{title}}
[Read Article]({{url}})

---
{{/each}}

Example JSON array of links:

{
  "links": [
    "https://example.com/1",
    "https://example.com/2"
  ]
}

Long-form article prompt (with a markdown template wrapped in example tags):

Write a long-form article about the following topic:
<topic>{{ topic }}</topic>
Make sure to use markdown formatting.
<example>
# Title
A compelling hook for the article.
## Section Header
Multiple paragraphs about the section.
- Key takeaway 1
- Key takeaway 2
## Conclusion
</example>

Image prompt generator:

Based on the following content, write a simple image prompt for an AI image model:
<content>{{ text }}</content>

Video prompt generator:

Based on the following content, write a simple video prompt for an AI video model:
<content>{{ text }}</content>

Display Content syntax for audio and video:

<audio controls>
  <source src="{{ audio }}" type="audio/mpeg">
</audio>
{{ text }}
<video controls>
  <source src="{{ video }}" type="video/mp4">
</video>

Blog post prompt:

Write a blog post about the following topic:
<topic>{{ topic }}</topic>

Tone instruction:

Make sure to use the following tone:
<tone>{{ tone }}</tone>

Page summarization prompt:

Summarize all text on the page:
<content>{{ page_content }}</content>

Data source answer prompt:

<context>
{{query_result}}
</context>
Use the info above to answer the following question:
{{query}}

You can also set a response size limit, which defines the maximum output length.
A clear description of the page (e.g., “a long-form article page with images above each section”)
The story is stored and updated in the global.story variable.
You can view and edit global variable values under the Global Variables tab.
Global Variables: save data across runs; persistent between runs; internal (within MindStudio).
Launch Variables: inject data from external systems; one-time use; external (via API or automation).
Define a label (e.g., "What would you like to do?").
Add options such as "Generate Text", "Generate Image", and "Generate Video".
For each option, connect it to a corresponding block using the output node.
Use Case: Allows the end-user to choose an action or path, similar to a multi-choice interface.
Purpose: Transfers control from one workflow to another, optionally passing variables between them.
How It Works:
Add a Jump Block at the end of a workflow.
Select the destination workflow.
Variables (e.g., topic) from the original workflow are automatically passed to the destination.
Use Case: Ideal for reusing workflows across multiple agents or modularizing large projects.
Purpose: Allows the AI to make a decision between multiple branches using its own reasoning.
How It Works:
Add a Logic Block with instructions (e.g., "Decide whether the comment is positive or negative").
Define your conditions (e.g., "The comment is positive", "The comment is negative").
Pass input (like a comment variable) and route based on AI's decision.
Use Case: When you want AI to evaluate inputs and choose an appropriate response path automatically.
Purpose: Inserts a human-in-the-loop approval or revision step in the workflow.
Modes:
Approve/Reject: Route based on user approval.
Revise Variable: Let users manually or interactively revise the AI’s output.
How It Works:
Use after a generation block (e.g., a LinkedIn post draft).
If revision is enabled:
Display the generated result.
Allow manual editing or chat-based revision with the AI.
Once satisfied, the user can approve to continue the workflow or reject to halt it.
Use Case: Perfect for QA workflows, content approvals, or publishing pipelines where manual oversight is needed.
Menu: lets users choose between actions (multiple-choice UI).
Jump: calls and switches to another workflow (modular design).
Logic: lets the AI decide between paths (AI-powered branching).
Checkpoint: inserts human approval or revision steps (human-in-the-loop).
By combining these blocks strategically, you can build workflows that are flexible, intelligent, and user-aware—essential for creating production-grade AI agents.
Use a Generate Text block to return a list of entities in a specific JSON format.
The JSON should be an array of objects, each with label and subtitle keys:
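For example (the entity values are illustrative):

```json
[
  { "label": "OpenAI", "subtitle": "AI research lab" },
  { "label": "Sam Altman", "subtitle": "CEO of OpenAI" }
]
```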
Save this as a variable, e.g., entities.
Add a User Input block.
Select Text Choice.
Set the prompt (e.g., "Which entity would you like to research further?").
Under Dynamic Source, specify the variable holding your JSON (e.g., entities).
The selected label will be stored as the input value.
This enables workflows to dynamically populate input options based on AI-generated data.
Sometimes user input is too broad (e.g., just entering "dogs"). To handle this, MindStudio offers a User Context block to gather deeper context through AI-generated questions.
Collect a general topic input from the user.
Use a User Context block:
Set the topic as input.
Provide a prompt like: "Help the user refine the topic they’d like to research. Gather more contextual information in order to perform a full research report."
Choose the Interview Depth (Quick, Medium, or Thorough).
Set a Maximum Question Limit.
Save the results in a variable, e.g., topicDetails.
Use topicDetails downstream to:
Generate refined search queries
Provide detailed context to summarization or report-generation blocks
This results in much more specific, targeted output.
Collect topic from the user.
Run a User Context block to gather more details.
Use topicDetails to generate Google search queries.
Scrape and summarize each result in sub-workflows.
Aggregate findings and generate a detailed report.
This method creates highly accurate and contextual results.
User-Adaptive: Tailors the experience based on AI or previous input
Flexible: Works with structured JSON or free-text context
Scalable: Enables detailed processing of dynamic lists or open-ended tasks
Use dynamic user inputs when:
The user's next step should be informed by previous AI outputs
You need to collect deeper, more relevant context
You want to make AI workflows more flexible and responsive
Dynamic inputs are essential for building smart, adaptive AI agents that can guide users and gather meaningful context in real time.
Keep workflows modular and organized
Handle variable-length lists
Execute iterations in parallel or sequentially
The parent workflow in this example:
Accepts a topic as input
Runs a Google Search block
Iterates over the returned list of URLs
For each URL, runs a sub-workflow to scrape and summarize content
Aggregates the results and uses them to generate a long-form article
The sub-workflow (scrape URL) should:
Accept a launch variable: URL
Use a Scrape URL block to get page content
Generate a summary, key takeaways, and quotes
Return a structured JSON object with these outputs
Example JSON output from the sub-workflow:
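Matching the outputs listed above, the structure might look like:

```json
{
  "url": "...",
  "summary": "...",
  "takeaways": ["..."],
  "quotes": ["..."]
}
```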
In the parent workflow:
Add a Run Workflow block
Select the sub-workflow you created
Switch mode to Run Multiple Times
Under Input Data, pass the output from the Google Search block (e.g., search)
Use Auto Extract or JSON Array Input:
Auto Extract: provide a prompt like "extract all URLs" and reference the extracted value as item
JSON Array Input: reference fields with dot notation, e.g., item.url
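For example, if the search block's output is an array of result objects (the url field name here is an assumption; check your block's actual output), each run receives one element as item:

```json
[
  { "url": "https://example.com/article-1" },
  { "url": "https://example.com/article-2" }
]
```

For the first run, item.url would resolve to https://example.com/article-1.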
Configure Execution Mode:
Parallel: Recommended for speed if iterations are independent
Sequential: Use when order matters or there's shared state
Set Error Behavior:
Choose whether to fail on errors or ignore failed runs
Set retry attempts if needed
Define Output Variable:
For example: sources (an array of all scraped summaries)
Once all sub-workflows complete:
The output (sources) can be passed into a Generate Text or Generate Asset block
You can format this data into:
An HTML page
A long-form article with footnotes
A structured JSON document
Input topic: "Future of Space Travel"
Google Search returns ~27 results
Each URL is processed in the scrape URL sub-workflow
Resulting summaries are aggregated and used to generate a detailed article with citations and source list
Scalable: Works with any number of inputs
Modular: Easier to maintain or reuse scrape logic
Flexible: You can switch sources, change formats, or reuse logic across different agents
Running sub-workflows in MindStudio allows you to:
Iterate over dynamic lists
Process and transform structured data
Improve workflow performance using parallel execution
Simplify complex builds with modular design
Use sub-workflows whenever you need to apply the same logic repeatedly to parts of a list—especially when dealing with external data, scraping, or transforming structured JSON.
Write a blog post about the following topic:
<topic>{{topic}}</topic>

<example>
## Title of blog post
A compelling hook...
### Key Takeaways
- Point one
- Point two
### Conclusion
</example>

Write a TLDR summary for the following content:
<content>{{page_content}}</content>

Brainstorm an outline and key topics that should be covered when writing an article about the following topic:
<topic>{{ topic }}</topic>

Write a long-form blog post about the following topic:
<topic>{{ topic }}</topic>
Make sure to follow the outline and cover key points mentioned below:
<key_points>{{ key_points }}</key_points>

Generate an image prompt that can be used by an AI image model. Make it about the following blog post:
<blog_post>{{ blog_post }}</blog_post>
{{ blog_post }}

<h1>{{updated_report.title}}</h1>
<p>{{updated_report.subtitle}}</p>
{{#each updated_report.sections}}
<img src="{{image}}" />
<h2>{{header}}</h2>
<p>{{intro}}</p>
{{/each}}

[
{ "label": "NASA", "subtitle": "Space agency mentioned in article" },
{ "label": "SpaceX", "subtitle": "Private aerospace company" }
]

{
"url": "...",
"summary": "...",
"takeaways": ["..."],
"quotes": ["..."]
}
Learn to use Markdown, XML tags, and variables to build more advanced prompts.
Once you have mastered the Context / Task / Format Framework, the next step is to make outputs cleaner, more reusable, and easier to plug into workflows.
When drafting your AI prompts, a few patterns can help you turn basic prompts into more advanced ones:
using Markdown,
using variables,
tagging variables with XML.
These make your prompts readable for humans, reliable for the model, and predictable for downstream blocks.
Markdown is a lightweight formatting language. It uses simple symbols like #, -, and * to add structure to text — things like headings, bold/italics, bullet lists, numbered lists, code blocks, and tables.
Why it matters in prompting:
Prompts become more readable. You can break up long instructions into clear sections instead of one big block of text.
Responses become more structured. You can tell the AI to output answers as Markdown so they’re consistent, readable, and easy to reuse in docs, tickets, or other tools.
Markdown formatting Quick Guide:
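A few of the most common patterns:

```markdown
# H1 Header
## H2 Header
**bold text**
*italic text*
- bullet point item
1. numbered list item
[linked text](https://www.example.com)
```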
For more information, check out a full Markdown guide to learn about other ways you can format text.
When you write a longer prompt, Markdown makes it easier for you (and teammates) to scan later. Headings, lists, and code blocks turn your prompt into a mini instruction document rather than a wall of text.
Example:
You can also tell the AI to return its output in Markdown. This enforces a repeatable structure and prevents free-form answers that are hard to work with.
Example:
NOTE: Notice how in this example we use Markdown in both ways.
Variables let you insert dynamic values into prompts without rewriting them each time. In MindStudio, variables are always written in double curly braces: {{varName}}.
When your workflow runs, these variables are replaced with live data.
Personalization: Insert user inputs or large pieces of data automatically.
Reusability: Add the same variable as context in multiple places in your AI Agent.
Flexibility: Variables can be used across all blocks in MindStudio.
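For example, a prompt template using two variables:

```markdown
## Context
Customer Name: {{customerName}}
Meeting Notes: {{meetingNotes}}

## Task
Draft a follow-up email to {{customerName}} after their onboarding call.
```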
When the workflow runs, {{customerName}} is replaced with the actual name and {{meetingNotes}} is filled with content from a transcript or notes block.
Use descriptive names:
❌ Bad variable names: var1 , a , xyz
✅ Good variable names: renewalDate , firstName
Sometimes you’ll have multiple variables or large chunks of text. To keep things clear, wrap them in XML tags using < >.
Tagging gives you a way to label different pieces of information for the AI model and shows the model where each label begins and ends.
Use opening tags: <tagName>
Whatever content you want to label with tags goes in the middle. This can be a {{variable}} or plain text.
At the end, make sure to close your tags with </tagName> . Notice the / before the tag name.
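Putting the three pieces together:

```xml
<tagName>Content or {{variables}}</tagName>
```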
NOTE: This is standard practice in prompting across major model providers.
Separates multiple variables cleanly.
Makes it easy for the AI to ground its answers to the right source.
Helps AI models reliably extract specific values from data.
NOTE: Notice how in this example we use XML tags to label the example output in addition to labeling contextual information.
Basic prompting is about clarity, and context engineering is about giving the AI the right information to work with. The more relevant and well-structured the context you provide, the more accurate and useful the AI’s answers will be.
If you don’t give the AI model the background material it needs to complete the task, it will just make things up. On the other hand, if you give it too much information with no guidance, it will get overwhelmed.
Context engineering is how you give just the right amount of background, in the right way.
Why Context Matters:
Grounding: Prevents the AI from guessing by supplying the facts it should rely on.
Relevance: Keeps answers tied to your data, not general internet knowledge.
Control: Lets you shape the “memory” of the model so it stays on task.
In the System Prompt tab, you can include a prompt to guide the AI’s behavior or provide global information that you’d like your AI Agent to know about. This acts like the intern’s “job description.”
Example:
You can inject documents, transcripts, notes, or snippets directly into the prompt. This ensures the AI bases its answer on your content, not what it happens to know.
Make it clear which rules take priority.
Example:
“Always base your answer on the supplied transcript, even if you know other information.”
Use {{variables}} to pass in dynamic context like customer names, transcripts, or notes. Wrap large chunks in <tags> so the AI knows where they begin and end.
Prompt Example (with Context Engineering)
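Here the transcript is wrapped in tags, and the instructions state which source takes priority:

```markdown
<meetingTranscript>
{{meetingNotes}}
</meetingTranscript>

## Task:
Extract all action items with assignees and due dates.
Always base your answer on the <meetingTranscript>, even if you know other information.

## Format:
Return Markdown with:
- One paragraph summary of the call
- Action Items bullet list with [Assignee]: [Task] (Due: Date)
```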
Good Practices:
Keep context relevant — don’t paste in entire documents if only one section matters.
Segment long text with clear tags so the AI doesn’t confuse sources.
Be explicit: tell the AI to “only use the supplied material.”
Did you provide enough background info for the AI to understand the request?
Did you limit context to only what’s relevant?
Did you tag variables or long text so the AI can clearly separate them?
Did you tell the AI which context takes priority?
Keep long text blobs (like transcripts) in separate sections, not mixed into a single instruction.
Did you define the task clearly, with no ambiguity about what should be done?
Did you specify the format for the response so it comes back structured and easy to use?
Did you keep instructions concise and avoid burying the key ask inside a wall of text?
Did you check for consistency — e.g., if you asked for Markdown output, did you state “return Markdown only”?
Did you handle edge cases (e.g., “if no data is found, say ‘No results available’”)?
Did you include constraints where needed (e.g. length, tone, audience)?
# H1 Header
## H2 Header
### H3 Header
**bold text**
*italic text*
***italic and bold text***
- bullet point item
- bullet point item
1. numbered list item
2. numbered list item
[linked text](https://www.example.com)
# Context
Topic: AI Agents in the workforce
Audience: AI beginners who want practical tips
# Task
Write a professional but friendly LinkedIn post.
# Response Format
- Keep it under 150 words
- Use a conversational tone
- Include 3 bullet points for key takeaways

# Context
Product Team Meeting Transcript
# Task
Summarize the transcript and extract all action items with the person responsible and the due date.
# Response Formatting
Present the results as a summary that includes:
- an easily scannable TL;DR list of items discussed
- a table of action items that correspond to the people they were assigned to
Your response should look like this:
## Meeting Summary
- 3-5 bullet points
## Decisions Made
- Bullet list
## Action Items
| Assignee | Task | Due Date |
|----------|------|----------|

## Context
Customer Name: {{customerName}}
Meeting Notes: {{meetingNotes}}
## Task:
Draft a follow-up email to {{customerName}} after their onboarding call.
## Format:
Return Markdown with:
# Subject
# Email Body (3–4 sentences, friendly tone)
# Next Steps (based on meeting notes)

<tagName>Content or {{variables}}</tagName>

<customerName>{{customerName}}</customerName>
<meetingNotes>{{meetingNotes}}</meetingNotes>
## Task:
Draft a follow-up email to <customerName> after their onboarding call.
## Format:
Reply with the content of the email and nothing else.
<exampleOutput>
Subject: (Subject line of the email)
Body:
(3–4 sentences, friendly tone)
Next Steps:
(4-6 bullet points based on <meetingNotes>)
</exampleOutput>

## Role
You are a Customer Success AI Assistant.
Your role is to help Customer Success Managers (CSMs) by drafting summaries, emails, and action items that save them time and ensure accuracy.
## Info you should always remember:
- Prioritize clarity and professionalism in every response.
- Always keep answers concise, focusing on key details rather than long explanations.
- Never invent information. If something is missing from the context provided, state it clearly.
- When presenting information, use Markdown formatting with clear headings and bullet points.
- When referencing customers, always use the provided {{customerName}} variable.
- Base recommendations only on the supplied transcripts, notes, or variables, not on outside assumptions.
<meetingTranscript>
{{meetingNotes}}
</meetingTranscript>
## Task:
Extract all action items with assignees and due dates.
Always base your answer on the <meetingTranscript>, even if you know other information.
## Format:
Return Markdown with:
- One paragraph summary of the call
- Action Items Bullet list with [Assignee]: [Task] (Due: Date)