# AIF-C01: Quick Refresher
## Final Review
This page is designed for the final "cram" session before stepping into the AIF-C01 exam.
## 🏛️ Domain 1: AI/ML Fundamentals (20%)
### AI vs. ML vs. DL
- AI: Broadest category (mimicking human intelligence).
- ML: Learning from data without explicit rules.
- Deep Learning (DL): Multi-layered neural networks (mimics brain).
### Learning Types
- Supervised: Uses labeled data (e.g., house price prediction).
- Unsupervised: Uses unlabeled data (e.g., customer clustering).
- Reinforcement: Learns via rewards/penalties (e.g., AWS DeepRacer).
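The first two learning types can be sketched in a few lines of plain Python. This is a toy illustration with made-up numbers, not a real training workflow: "supervised" learns a rule from labeled (input, label) pairs, while "unsupervised" finds structure in unlabeled data.

```python
# Toy illustration of supervised vs. unsupervised learning (no ML libraries).

# Supervised: learn from labeled examples (sqft -> price).
# Here the "model" is just an average price-per-sqft learned from labels.
labeled = [(1000, 200_000), (1500, 300_000), (2000, 400_000)]  # (sqft, price)
price_per_sqft = sum(p / s for s, p in labeled) / len(labeled)
predict = lambda sqft: sqft * price_per_sqft
print(predict(1200))  # -> 240000.0

# Unsupervised: find structure in unlabeled data (naive 1-D clustering).
spend = [10, 12, 11, 95, 99, 102]  # customer spend, no labels given
threshold = (min(spend) + max(spend)) / 2
clusters = ["low" if x < threshold else "high" for x in spend]
print(clusters)  # -> ['low', 'low', 'low', 'high', 'high', 'high']
```

Reinforcement learning does not fit a few lines: the agent learns by acting and receiving rewards over many episodes (as in AWS DeepRacer).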
### The ML Pipeline
1. Business Goal ➡️ 2. Data Prep ➡️ 3. Model Training ➡️ 4. Evaluation ➡️ 5. Deployment (Inference) ➡️ 6. Monitoring
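The stages above can be sketched end to end on a toy regression problem. Everything here is hypothetical (made-up data, a closed-form least-squares "model"); it only shows how the stages hand off to each other.

```python
# Minimal sketch of the ML pipeline stages on a toy problem (y ~ w * x).

# 2. Data Prep: split the (hypothetical) data into train/test sets
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.1)]
train, test = data[:3], data[3:]

# 3. Model Training: closed-form least squares through the origin
w = sum(x * y for x, y in train) / sum(x * x for x, y in train)

# 4. Evaluation: mean squared error on held-out data
mse = sum((w * x - y) ** 2 for x, y in test) / len(test)

# 5. Deployment (Inference): the trained model behind a callable
def predict(x):
    return w * x

# 6. Monitoring: flag drift if error exceeds a threshold
if mse > 1.0:
    print("model drifted - retrain")
print(round(w, 2), round(mse, 3))
```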
### Inference Types
- Real-time: Low latency, immediate (e.g., fraud check at checkout).
- Batch: High volume, delayed (e.g., generating monthly recommendations).
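The distinction is about how the same model is served, not about the model itself. A rough sketch (the `model` function and amounts are illustrative stand-ins, not an AWS API):

```python
# Sketch: one model, two serving patterns.

def model(amount):                 # stand-in for any trained fraud model
    return "fraud" if amount > 100 else "ok"

# Real-time inference: one request, immediate answer (e.g., at checkout)
def handle_request(amount):
    return model(amount)           # low latency, one record at a time

# Batch inference: score a whole dataset on a schedule (e.g., nightly job)
def run_batch(amounts):
    return [model(a) for a in amounts]   # high throughput, results can wait

print(handle_request(250))         # -> fraud
print(run_batch([10, 250, 50]))    # -> ['ok', 'fraud', 'ok']
```

On AWS, the real-time pattern maps to a SageMaker real-time endpoint, the batch pattern to a Batch Transform job.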
## ✨ Domain 2 & 3: Generative AI & Foundation Models (52%)
### Foundation Models (FMs)
Huge models pre-trained on massive data; multi-purpose.
### Key GenAI Concepts
- Tokens: Units of text (not always full words).
- Hallucination: Confidently wrong output.
- Temperature: Randomness/creativity setting (low = predictable, high = more creative).
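Temperature is easiest to see as a softmax over next-token scores. A minimal sketch with made-up logits: dividing the logits by a small temperature sharpens the distribution (near-deterministic), a large temperature flattens it (more "creative").

```python
import math

# Sketch: how temperature reshapes next-token probabilities.
# Toy logits for three candidate tokens; the values are made up.
logits = {"the": 2.0, "a": 1.0, "banana": 0.1}

def softmax_with_temperature(logits, temperature):
    scaled = {tok: l / temperature for tok, l in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    return {tok: math.exp(v) / z for tok, v in scaled.items()}

low = softmax_with_temperature(logits, 0.1)   # near-deterministic
high = softmax_with_temperature(logits, 2.0)  # flatter, more random
print(max(low, key=low.get), round(low["the"], 3))
print({tok: round(p, 2) for tok, p in high.items()})
```

At low temperature almost all probability mass lands on the top token; at high temperature even "banana" gets a real chance of being sampled.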
### Model Customization
- Prompt Engineering: Designing better inputs (Zero-shot, Few-shot).
- RAG (Retrieval-Augmented Generation): Connecting the model to external data (Vector DB) for up-to-date, private info.
- Fine-tuning: Re-training the model on specific data to change its weights/behavior.
### Evaluation Metrics
- ROUGE/BLEU: Measure text similarity against a reference (ROUGE mostly for summarization, BLEU for translation).
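The core of ROUGE-1 recall is just unigram overlap. A simplified sketch (real ROUGE implementations add stemming, precision/F-measure, and ROUGE-2/ROUGE-L variants):

```python
# Simplified ROUGE-1 recall: fraction of reference words that also
# appear in the candidate summary.

def rouge1_recall(reference, candidate):
    ref_words = reference.lower().split()
    cand_words = set(candidate.lower().split())
    overlap = sum(1 for w in ref_words if w in cand_words)
    return overlap / len(ref_words)

reference = "the cat sat on the mat"
candidate = "the cat lay on a mat"
print(round(rouge1_recall(reference, candidate), 3))  # 5 of 6 words overlap
```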
## 🛠️ AWS Services Comparison (The "Which Tool" Section)
| Service | Primary Use Case |
|---|---|
| Amazon Bedrock | API-based access to Foundation Models (Claude, Llama, Titan). Fastest for GenAI. |
| Amazon SageMaker | The "Kitchen Sink." Full control over building, training, and deploying custom models. |
| Amazon Q | AI-powered assistant for businesses (Q Business) or developers (Q Developer). |
| Rekognition | Computer Vision (image/video analysis). |
| Polly / Transcribe | Text-to-Speech (Polly) / Speech-to-Text (Transcribe). |
| Comprehend | Natural Language Processing (sentiment analysis, entity extraction). |
| Lex | Building conversational bots (chatbots). |
## 🛡️ Domain 4 & 5: Security & Responsible AI (28%)
### Responsible AI Pillars
Fairness, Explainability, Privacy, Robustness, Governance.
### AWS Tools for Responsibility
- SageMaker Clarify: Detects bias and provides model explainability.
- Bedrock Guardrails: Filters out harmful content or PII from LLM responses.
- Amazon A2I: Adds a "Human-in-the-loop" for reviewing low-confidence predictions.
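To make the guardrail idea concrete, here is a toy output filter. This is emphatically NOT the Bedrock Guardrails API (which you configure in the console or via `boto3`); it is a regex sketch of what "filter PII from LLM responses" means, with made-up patterns.

```python
import re

# Toy guardrail: scrub PII from a model response before returning it.
# Illustrative patterns only - real guardrails cover far more entity types.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane@example.com, SSN 123-45-6789."))
# -> Contact [EMAIL], SSN [SSN].
```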
### Security & Compliance
- Shared Responsibility: AWS handles security *of* the cloud (infrastructure/hardware); you handle security *in* the cloud (your data and configuration).
- Data Privacy: Bedrock does not use customer data to train its base models.
- Governance: Model Cards (documentation) and AI Service Cards provide transparency.
## 🎯 Bedrock vs SageMaker Quick Decision Tree
```
Need to use AI/ML?
├── Want pre-built models via API?
│   ├── GenAI/LLMs? → Amazon Bedrock ✅
│   └── Specific tasks? → AI Services (Rekognition, Comprehend, etc.)
│
└── Want to build/train custom models?
    └── Full ML lifecycle control? → SageMaker ✅
```
### When to Choose Bedrock
- ✅ Need LLMs quickly (no training)
- ✅ Text/Chat generation
- ✅ API-first approach
- ✅ Multi-model access (Claude, Llama, Titan)
- ✅ RAG implementation
- ✅ Quick prototyping
### When to Choose SageMaker
- ✅ Custom model training
- ✅ Full control over ML pipeline
- ✅ Data science workflows
- ✅ Model monitoring & drift detection
- ✅ Specialized use cases
- ✅ MLOps requirements
## 🔑 Key Acronyms to Know
| Acronym | Full Form | Quick Definition |
|---|---|---|
| FM | Foundation Model | Large pre-trained model |
| LLM | Large Language Model | Text-focused foundation model |
| RAG | Retrieval-Augmented Generation | Connect LLM to external data |
| A2I | Amazon Augmented AI | Human review workflows |
| MLOps | ML Operations | DevOps for ML models |
| PII | Personally Identifiable Information | Sensitive personal data |
| ROUGE | Recall-Oriented Understudy for Gisting Evaluation | Text similarity metric |
## 💡 Final Minute Tips
### Service Selection Rules
- If the question asks for "Easy/No-Code/API": Think Bedrock or High-level AI services (Rekognition, Polly).
- If the question asks for "Full Control/Data Scientist": Think SageMaker.
- If the question asks about "Bias": Think SageMaker Clarify.
- If the question asks about "External Knowledge/Real-time data": Think RAG or Knowledge Bases for Bedrock.
### Common Exam Traps
**Watch Out!**
- Bedrock ≠ Training: Bedrock uses pre-trained models only
- Hallucinations: LLMs can be confidently wrong - use RAG or guardrails
- Temperature: Higher = creative but less accurate
- Fine-tuning ≠ Prompt Engineering: Fine-tuning changes the model; prompting doesn't
- Shared Responsibility: You're responsible for data, AWS handles infrastructure
## 📊 Model Selection Quick Guide
### For Text Tasks
| Task | Best Service | Why |
|---|---|---|
| Chat/Conversation | Bedrock (Claude) | Natural dialogue |
| Code Generation | Bedrock (Claude) / Amazon Q Developer (formerly CodeWhisperer) | Optimized for code |
| Summarization | Bedrock (Titan/Claude) | Fast, accurate |
| Translation | Translate (simple) / Bedrock (complex) | Cost vs capability |
| Sentiment | Comprehend | Purpose-built |
### For Vision Tasks
| Task | Best Service | Why |
|---|---|---|
| Object Detection | Rekognition | Pre-built, easy |
| Face Analysis | Rekognition | Specialized |
| Custom Vision | SageMaker | Full control |
| Medical Imaging | SageMaker | Compliance needs |
## 🔍 Responsible AI Quick Checks
### Before Deployment Checklist
- [ ] Fairness: Does model work equally for all groups?
- [ ] Explainability: Can you explain decisions?
- [ ] Privacy: Is sensitive data protected?
- [ ] Safety: Are guardrails in place?
- [ ] Transparency: Is model documented?
- [ ] Monitoring: Is drift detection enabled?
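The fairness check at the top of the list is the kind of disparity measurement SageMaker Clarify automates. A manual sketch on hypothetical predictions (group names, data, and the 0.2 threshold are all illustrative):

```python
# Sketch: demographic-parity check like those SageMaker Clarify automates.
# Hypothetical predictions tagged with a sensitive attribute (group A / B).
rows = [  # (group, predicted_positive)
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def positive_rate(group):
    preds = [p for g, p in rows if g == group]
    return sum(preds) / len(preds)

rate_a, rate_b = positive_rate("A"), positive_rate("B")
disparity = rate_a - rate_b          # demographic parity difference
print(rate_a, rate_b, round(disparity, 2))
if abs(disparity) > 0.2:             # illustrative threshold
    print("Potential bias - investigate before deployment")
```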
### Key Questions to Answer
- Is there bias? → Use SageMaker Clarify
- Need human review? → Use Amazon A2I
- Filter harmful output? → Use Bedrock Guardrails
- Track model performance? → Use SageMaker Model Monitor
## 📚 RAG Implementation Quick Reference
### Components
```
User Query
    ↓
1. Embedding Model (convert query to vector)
    ↓
2. Vector Database (find similar documents)
    ↓
3. Retrieved Context + Query
    ↓
4. LLM (generate answer with context)
    ↓
Answer with Sources
```
### AWS RAG Stack
- Vector DB: OpenSearch, Bedrock Knowledge Bases
- Embeddings: Bedrock Titan Embeddings
- LLM: Bedrock (Claude, Llama)
- Storage: S3 (documents)
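The retrieval half of this flow can be demonstrated offline with hand-made "embeddings". This toy sketch uses word-count vectors and cosine similarity where a real stack would use Titan Embeddings, a vector DB, and a Bedrock LLM; the documents and query are made up.

```python
import math

# Toy RAG retrieval: embed -> vector search -> build augmented prompt.
DOCS = {
    "doc1": "bedrock provides api access to foundation models",
    "doc2": "sagemaker trains and deploys custom models",
}

def embed(text):  # stand-in for an embedding model (bag-of-words vector)
    words = text.lower().split()
    vocab = sorted({w for d in DOCS.values() for w in d.split()})
    return [words.count(w) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

query = "which service trains custom models"
qv = embed(query)
# "Vector database" step: rank documents by similarity, retrieve the best
best = max(DOCS, key=lambda d: cosine(qv, embed(DOCS[d])))
prompt = f"Context: {DOCS[best]}\n\nQuestion: {query}"  # sent to the LLM
print(best)  # -> doc2
```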
## 🎬 Additional Resources
### Essential Video
Watch this AWS Certified AI Practitioner Exam Guide
This video provides a direct comparison between the two heavy-hitters of the examβBedrock and SageMakerβhelping you decide which service fits specific exam scenarios.
## ⚡ Last 5 Minutes Before Exam
### Must Remember
- Bedrock = API access to FMs (no training)
- SageMaker = Full ML lifecycle (build/train/deploy)
- RAG = External knowledge for LLMs
- Clarify = Bias detection and explainability
- Shared Responsibility = AWS secures the infrastructure, you secure your data
### Quick Mental Check
- Can you explain AI vs ML vs DL? ✅
- Do you know when to use Bedrock vs SageMaker? ✅
- Can you describe RAG in one sentence? ✅
- Do you know the Responsible AI pillars? ✅
- Can you name 3 AI services besides Bedrock/SageMaker? ✅
You've Got This!
Take a deep breath. You've studied. Trust your preparation. Good luck! 🍀
Last Updated: 2026-01-14