
Domain 3: Implement an Agentic Solution (5–10%)

โ† Domain 2 ยท Domain 4 โ†’


This is a new domain for 2025 focusing on autonomous AI agents that can use tools and reason through multi-step tasks. It carries the lowest weight on the exam (5–10%), but the concepts are distinct enough to trip up candidates who confuse agent patterns with standard LLM calls.

3.1 Azure AI Agent Service

A fully managed service in Azure AI Foundry for building, deploying, and scaling AI agents. It abstracts the orchestration loop so you can focus on configuring tools and instructions rather than managing state yourself.

Reasoning Loop

An agent doesn't generate a single response; it iterates:

  1. Receive the user goal
  2. Plan: decide if a tool is needed
  3. Execute: call the tool (code, search, function)
  4. Observe: process the tool result
  5. Repeat or Respond: loop until the goal is met or the budget is exhausted
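The loop above can be sketched in plain Python. Everything here is illustrative: `plan`, `tools`, and `respond` are hypothetical stand-ins for model-driven decisions, not Agent Service APIs.

```python
def run_agent(goal, plan, tools, respond, max_turns=5):
    """Plan -> execute -> observe, until done or the turn budget is exhausted."""
    observations = []
    for _ in range(max_turns):
        action = plan(goal, observations)        # decide if a tool is needed
        if action is None:                       # no tool needed: respond
            break
        name, args = action
        observations.append(tools[name](*args))  # execute the tool, observe the result
    return respond(goal, observations)

# Toy single-step "calculator" agent.
tools = {"add": lambda a, b: a + b}
plan = lambda goal, obs: ("add", (2, 3)) if not obs else None
respond = lambda goal, obs: f"Result: {obs[-1]}"
print(run_agent("add 2 and 3", plan, tools, respond))  # Result: 5
```

The `max_turns` parameter is the "budget" from step 5: without it, a confused agent could call tools forever.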

Key Exam Concept

The phrase "autonomous multi-step task" in a question → the answer is AI Agent Service, not a plain Prompt Flow run or a single LLM call.

Core Agent Components

| Component | What It Is | Exam Signal |
| --- | --- | --- |
| Instructions | System-level prompt that defines the agent's persona and constraints | "agent must always respond in English" |
| Thread | A conversation session that stores the message history | "maintain conversation state across turns" |
| Run | A single execution of the reasoning loop on a Thread | "trigger the agent to process a message" |
| Tool | A capability the agent can invoke (built-in or custom function) | "agent needs to call an API" |

Built-in Tools

| Tool | Purpose | When to Use |
| --- | --- | --- |
| Code Interpreter | Writes and executes Python in a secure sandbox | Math, data analysis, file conversion, chart generation |
| File Search | Retrieves information from uploaded documents via vector search | "agent searches uploaded PDFs", "agent answers from your files" |
| Function Calling | Calls a custom function/API you define | Real-time data, internal systems, custom logic |
| Azure AI Search | Uses an external AI Search index as a grounding source | "agent searches your indexed knowledge base" |

Code Interpreter vs File Search

Code Interpreter → the agent writes and executes code to solve a problem (e.g., calculate statistics, generate a chart). File Search → the agent retrieves information from documents without executing code.

The exam distinguishes these with phrases like "solve math problems" (Code Interpreter) vs "answer from uploaded documents" (File Search).

Configuration Reference

```python
# Create an agent with tools
# (client is an AIProjectClient from the azure-ai-projects package)
agent = client.agents.create_agent(
    model="gpt-4o",
    name="data-analyst",
    instructions="You are a data analyst. Use code interpreter to analyze data.",
    tools=[{"type": "code_interpreter"}],
)

# Create a thread, add a user message, then run the agent on the thread
thread = client.agents.create_thread()
client.agents.create_message(thread_id=thread.id, role="user", content="Summarize this CSV")
run = client.agents.create_and_process_run(thread_id=thread.id, agent_id=agent.id)
```

Async Pattern

Agent runs are asynchronous. create_and_process_run() polls until completion. For manual polling, call create_run() and then fetch the run status with get_run() until it reaches completed or failed.
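A manual polling loop might look like the sketch below. The terminal status names mirror the run states the service reports; `get_status` is a hypothetical stand-in for a call such as `client.agents.get_run(...)` returning the run's status string.

```python
import time

def wait_for_run(get_status, poll_interval=1.0, timeout=60.0):
    """Poll a run's status until it reaches a terminal state."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("completed", "failed", "cancelled", "expired"):
            return status
        time.sleep(poll_interval)  # back off between polls
    raise TimeoutError("run did not finish within the timeout")

# Fake status source simulating a run that completes on the third poll.
statuses = iter(["queued", "in_progress", "completed"])
print(wait_for_run(lambda: next(statuses), poll_interval=0.0))  # completed
```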


3.2 Microsoft Agent Framework

The Agent Framework provides patterns and components for building sophisticated multi-agent applications. It integrates with Semantic Kernel for orchestration logic.

Key Components

| Component | Purpose |
| --- | --- |
| Planner | Breaks a high-level goal into a sequence of tool calls / sub-tasks |
| Persona | Defines the agent's name, description, and behavioral constraints |
| Memory | Persistent storage across agent turns (conversation history, retrieved facts) |
| Kernel | The Semantic Kernel engine that connects the model to tools and memory |
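To make the Planner's role concrete, here is a deliberately naive, rule-based stand-in that maps a goal to an ordered list of tool calls. A real planner delegates this decomposition to the model; the tool names and step descriptions below are hypothetical.

```python
def plan(goal, available_tools):
    """Break a high-level goal into an ordered list of (tool, sub-task) steps."""
    steps = []
    if "chart" in goal:
        steps.append(("code_interpreter", "generate the chart"))
    if "pdf" in goal or "document" in goal:
        steps.append(("file_search", "retrieve relevant passages"))
    # Keep only steps whose tool is actually attached to the agent.
    return [step for step in steps if step[0] in available_tools]

print(plan("chart the data from the pdf", {"code_interpreter", "file_search"}))
```

The point is the shape of the output, not the rules: a plan is an ordered sequence of sub-tasks, each bound to a tool the agent actually has.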

Multi-Agent Orchestration Patterns

When a single agent isn't enough, multiple specialized agents can collaborate:

| Pattern | Structure | Best For |
| --- | --- | --- |
| Hierarchical | A "Manager" agent delegates sub-tasks to "Worker" agents | Complex workflows where tasks can be decomposed |
| Sequential | Agents form a pipeline; each passes output to the next | Document processing, staged analysis |
| Group Chat / Joint | Agents discuss and critique each other's responses | Research tasks requiring multiple perspectives |

Exam Trigger

"Multiple agents collaborating" → the answer involves multi-agent orchestration (not a single agent with multiple tools).
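The Sequential pattern is the easiest to picture in code: each agent is a stage that transforms the previous stage's output. The three stage functions below are hypothetical placeholders for real agents.

```python
def extract(doc):
    return {"text": doc.strip()}

def summarize(payload):
    return {**payload, "summary": payload["text"][:20]}  # crude truncation "summary"

def review(payload):
    return {**payload, "approved": len(payload["summary"]) > 0}

def sequential(agents, payload):
    """Run agents as a pipeline: each one receives the previous output."""
    for agent in agents:
        payload = agent(payload)
    return payload

result = sequential([extract, summarize, review], "  Quarterly revenue rose 12%.  ")
print(result["approved"])  # True
```

A Hierarchical orchestration would instead have a manager function choosing which worker to call next, and a Group Chat would loop the same payload through all agents repeatedly until they converge.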


3.3 Testing, Monitoring, and Deployment

Constraints and Safety

  • Max turns: Set a budget on the reasoning loop to prevent infinite tool-calling cycles.
  • Content filters: Applied at the model level via Azure OpenAI content filtering settings.
  • Prompt Shields: Detect prompt injection attempts, including indirect attacks embedded in documents or tool outputs the agent consumes.

Evaluation

  • Foundry Tracing: Captures every step of the reasoning loop (tool calls, inputs, outputs, latency). Essential for debugging questions like "why did the agent call that tool?"
  • Evaluation Flows: Run the agent against a dataset of golden Q&A pairs; measure groundedness, relevance, and task completion rate.
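Task completion rate is the simplest of these metrics to compute: run the agent over the golden pairs and count exact matches. The sketch below uses a hypothetical `agent` callable; real evaluation flows also score fuzzier metrics such as groundedness and relevance, typically with an LLM judge rather than exact matching.

```python
def completion_rate(agent, golden_pairs):
    """Fraction of golden questions the agent answers exactly as expected."""
    hits = sum(1 for question, expected in golden_pairs if agent(question) == expected)
    return hits / len(golden_pairs)

golden = [("capital of France?", "Paris"), ("2 + 2?", "4")]
toy_agent = lambda q: {"capital of France?": "Paris", "2 + 2?": "5"}.get(q)
print(completion_rate(toy_agent, golden))  # 0.5
```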

Deployment

Agents are deployed as scalable endpoints within an AI Foundry Project. They inherit the project's security, connections, and compute settings from the parent Hub.

Scope Trap

The exam may ask where agents are deployed. Agents live in a Project (your workspace), not directly in a Hub. The Hub provides shared resources; the Project is where you build and deploy.

