
AI-102: Exam Guide

โ† Overview ยท Cheatsheet โ†’


How the Exam Wants You to Think

The AI-102 exam is for Azure AI Engineers, the developers who build and deploy AI solutions using Microsoft's services and SDKs. It values implementation knowledge: which service to use, how to configure it, and how to integrate it into applications.

Answer Philosophy

  1. Choose the managed service over DIY: Microsoft wants you to use Azure AI services rather than build from scratch. RAG over custom training; prebuilt models over custom when possible.
  2. Foundry is the platform: everything lives in Microsoft AI Foundry. Hubs manage shared infrastructure; Projects are your workspace. When in doubt, the answer involves Foundry.
  3. SDK over REST: the exam prefers DefaultAzureCredential() and the Foundry SDK over raw REST calls or hardcoded keys.
  4. Async patterns for heavy operations: OCR Read API, Document Translation, and batch operations all follow the 202 → Operation-Location → GET async pattern.

Keyword Detection Table

| If you see... | Look for this in the answer... |
| --- | --- |
| "avoid hardcoded keys" / "keyless auth" | Managed Identity + DefaultAzureCredential() |
| "predictable latency" / "high throughput" | Provisioned Throughput (PTU) |
| "data residency" / "offline" / "edge" | Docker container deployment |
| "build, test, deploy AI apps" | Microsoft AI Foundry Project |
| "shared compute, connections, security" | Microsoft AI Foundry Hub |
| "ground the model in your own data" | RAG (On Your Data / AI Search) |
| "specific tone, format, or rare domain" | Fine-tuning |
| "visual LLM workflow" / "evaluate prompts" | Prompt Flow |
| "block prompt injection attacks" | Prompt Shields |
| "autonomous multi-step task" | AI Agent Service |
| "agent uses Python to solve math" | Code Interpreter tool |
| "agent searches uploaded documents" | File Search tool |
| "multiple agents collaborating" | Multi-agent orchestration |
| "handwritten text extraction" | Read API (OCR 4.0) |
| "locate objects with bounding boxes" | Custom Vision (Object Detection) |
| "1:1 face comparison" | Face Verification |
| "1:N face comparison against known people" | Face Identification + PersonGroup |
| "recognize spoken intent / wake word" | Speech SDK (Intent Recognition / Keyword) |
| "translate entire Word/PDF, preserve layout" | Document Translation (async) |
| "utterance → intent + entities" | CLU (Conversational Language Understanding) |
| "multi-turn Q&A from documents" | Custom Question Answering |
| "text from images/tables in complex docs" | Content Understanding / Document Intelligence |
| "enrich documents with AI before indexing" | Azure AI Search Skillset |
| "Power BI analytics from enriched docs" | Knowledge Store (Table Projections) |
| "custom extract logic in skillset" | Custom Skill (Azure Function) |
| "keyword + vector combined search" | Hybrid Search |
| "re-rank to surface single best answer" | Semantic Ranking |
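On the hybrid-search row: Azure AI Search merges the keyword (BM25) and vector result lists with Reciprocal Rank Fusion (RRF). A toy local sketch of the fusion step, assuming the common RRF formulation with k=60 (the doc IDs here are made up):

```python
# Toy Reciprocal Rank Fusion: each ranked list contributes 1/(k + rank)
# per document; documents ranked well in BOTH lists rise to the top.
def rrf(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc3", "doc1", "doc7"]   # BM25 order
vector_hits  = ["doc1", "doc9", "doc3"]   # embedding-similarity order
print(rrf([keyword_hits, vector_hits]))   # doc1 and doc3 lead, appearing in both lists
```

This also clarifies the Semantic Ranking row: RRF produces the fused hybrid result set, and semantic ranking is a separate re-ranking pass applied after retrieval.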

Exam Traps

Watch out for these common mistakes!

  • RAG vs Fine-tuning: RAG = runtime context injection (fast, cheap, updatable). Fine-tuning = baked-in knowledge (expensive, slow, better for tone/format). The exam frequently asks "which approach?": if the data changes often → RAG. If the style/format must be rigid → Fine-tuning.

  • Hub vs Project: Hub = shared infrastructure (compute, connections, role assignments). Project = your workspace inside a hub. A question about "setting up shared compute for multiple teams" → Hub. "Building a specific chatbot" → Project.

  • PTU vs Standard: Standard = pay-per-token, variable latency. PTU = reserved capacity, consistent latency, higher fixed cost. Exam uses "predictable latency" or "guaranteed throughput" as the signal for PTU.

  • Content Understanding vs Document Intelligence: Content Understanding (new) handles multimodal pipelines (images, video, audio + docs) with AI summarization. Document Intelligence = forms and structured extraction with prebuilt/custom models. Both extract content from documents, but they serve different use cases.

  • Custom Skill interface: A Custom Skill must follow the exact input/output schema expected by Azure AI Search. The skill receives a values array and must return a values array with the same record keys. Forgetting this schema is a common mistake.

  • Async OCR pattern: The Read API returns 202 Accepted with an Operation-Location header. You must then GET that URL and poll until status: succeeded. Many candidates try to use the response from the initial POST.

  • PersonGroup vs FaceList: PersonGroup (and LargePersonGroup) is for Identification (1:N, "who is this?"). FaceList (and LargeFaceList) is for Find-Similar (1:N, "find faces similar to this one"). The training step is required for PersonGroup, not FaceList.

  • Semantic Ranking vs Vector Search: Vector search finds semantically similar documents using embeddings. Semantic ranking re-ranks already-retrieved results with deep learning re-ranking models to surface the single best answer. They are not the same: Semantic Ranking is a post-retrieval step.

  • Content filters scope: Azure OpenAI content filters apply to the model's inputs AND outputs. Azure AI Content Safety is a separate standalone service for user-generated content moderation. Don't confuse the two.
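The Custom Skill trap above is easiest to remember by writing the contract out. A minimal sketch of the interface an AI Search skillset expects from a custom skill: the service POSTs a "values" array of records, and the skill must return a "values" array with one output record per input recordId. The "uppercase" enrichment here is just a stand-in for real logic.

```python
# Minimal sketch of the Azure AI Search custom skill contract.
# Input body:  {"values": [{"recordId": "...", "data": {...}}, ...]}
# Output body: same shape, each record echoing its input recordId.
import json

def custom_skill(request_body: str) -> str:
    payload = json.loads(request_body)
    results = []
    for record in payload["values"]:
        text = record["data"].get("text", "")
        results.append({
            "recordId": record["recordId"],      # must match the input recordId
            "data": {"enriched": text.upper()},  # outputs mapped in the skillset definition
            "errors": [],
            "warnings": [],
        })
    return json.dumps({"values": results})

body = json.dumps({"values": [{"recordId": "0", "data": {"text": "azure ai search"}}]})
print(json.loads(custom_skill(body))["values"][0]["data"]["enriched"])  # -> AZURE AI SEARCH
```

In production this function body would sit inside an Azure Function HTTP trigger; the schema, not the hosting, is what the exam tests.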


Decision Quick Reference

"Which generative AI approach?"

Data changes often, reduce hallucinations → RAG (On Your Data)
Specific tone, format, domain jargon    → Fine-tuning
Visual workflow, test prompt variants   → Prompt Flow
Autonomous multi-step reasoning         → AI Agent Service

"Which vision service?"

General image analysis, OCR, tagging   → Image Analysis 4.0 (Azure Vision)
Custom categories / bounding boxes     → Custom Vision
Video insights (faces, brands, topics) → Video Indexer
Real-time movement in video feed       → Spatial Analysis
Face verification / identification     → Face API

"Which NLP service?"

Analyze existing text (sentiment, NER) → Language Service
Understand spoken intent / commands    → CLU + Speech SDK
Q&A from documents                     → Custom Question Answering
Translate text or documents            → Translator Service

"Which search / extraction approach?"

Search structured and unstructured data → Azure AI Search
Extract fields from forms and invoices  → Document Intelligence
AI-enriched extraction pipeline         → AI Search Skillset
New multimodal document pipeline        → Content Understanding

"Which authentication?"

Production app, avoid key rotation   → Managed Identity (DefaultAzureCredential)
Simple testing / scripts             → Subscription key
Cross-service access, audit trail    → RBAC roles
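The exam's favorite row is the first one. DefaultAzureCredential works by trying a chain of credential sources (environment variables, managed identity, Azure CLI login, and so on) and using the first that yields a token. The snippet below is a toy stand-in that illustrates the chain idea in plain Python; the names and fake tokens are illustrative, not the azure-identity SDK.

```python
# Toy illustration of DefaultAzureCredential's chained-lookup behavior.
# Each source either returns a token or None; the first hit wins.
# NOT the real SDK: names and token strings here are made up.
def chained_credential(sources):
    for name, try_get_token in sources:
        token = try_get_token()
        if token:
            return name, token
    raise RuntimeError("no credential source available")

sources = [
    ("environment", lambda: None),              # no service-principal env vars set
    ("managed_identity", lambda: "eyJ...mi"),   # running on Azure: IMDS answers
    ("azure_cli", lambda: "eyJ...cli"),         # never reached in this scenario
]
print(chained_credential(sources)[0])  # -> managed_identity
```

This is why the same code works unchanged in production (managed identity answers) and on a dev box (CLI login answers), with no keys to rotate.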

2025 Exam Domain Weights

| Domain | Weight |
| --- | --- |
| 1: Plan and manage an Azure AI solution | 20-25% |
| 2: Implement generative AI solutions | 15-20% |
| 3: Implement an agentic solution | 5-10% |
| 4: Implement computer vision solutions | 10-15% |
| 5: Implement NLP solutions | 15-20% |
| 6: Implement knowledge mining and information extraction | 15-20% |

High-Value Focus

Domains 1, 5, and 6 each carry 15-20%+ weight and together represent over half the exam. Domain 3 (Agentic) is new and lightly weighted: know the concepts but do not over-invest.


Final Strategy

  • Know your async patterns cold: Read API, Document Translation, and batch operations all follow 202 → Operation-Location → GET. This pattern comes up repeatedly.
  • "Foundry" is the answer to "where": Hubs, projects, Prompt Flow, model catalog, deployments. When a question is about the platform or portal → AI Foundry.
  • Eliminate build-it-yourself answers: if a choice involves building something from scratch when an Azure service exists, it is almost certainly wrong.
  • D1 + D5 + D6 = 50-65% of the exam: prioritise Plan & Manage, NLP, and Knowledge Mining for maximum return on study time.

โ† Overview ยท Cheatsheet โ†’
