
AIF-C01: Quick Refresher


Final Review

This page is designed for the final "cram" session before stepping into the AIF-C01 exam.


🏗️ Domain 1: AI/ML Fundamentals (20%)

AI vs. ML vs. DL

  • AI: Broadest category (mimicking human intelligence).
  • ML: Learning from data without explicit rules.
  • Deep Learning (DL): Multi-layered neural networks (loosely modeled on the brain).

Learning Types

  • Supervised: Uses labeled data (e.g., house price prediction).
  • Unsupervised: Uses unlabeled data (e.g., customer clustering); the sketch below contrasts this with supervised learning.
  • Reinforcement: Learns via rewards/penalties (e.g., AWS DeepRacer).
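To make the supervised/unsupervised distinction concrete, here is a minimal scikit-learn sketch (illustrative only, with made-up toy data; the exam does not require writing code): the supervised model fits on labeled prices, while the clustering model finds groups with no labels at all.

```python
# Minimal sketch: supervised vs. unsupervised learning (toy data, illustrative only).
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

# Supervised: features X AND known labels y (house prices).
X = [[1200], [1500], [2000]]           # square footage
y = [200_000, 260_000, 340_000]        # labeled sale prices
price_model = LinearRegression().fit(X, y)
print(price_model.predict([[1800]]))   # predict a price for an unseen house

# Unsupervised: only features, no labels (customer clustering).
customers = [[20, 1], [22, 2], [61, 30], [65, 28]]   # [age, purchases]
groups = KMeans(n_clusters=2, random_state=0).fit_predict(customers)
print(groups)                          # cluster assignments discovered from the data
```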

The ML Pipeline

  1. Business Goal ➡️ 2. Data Prep ➡️ 3. Model Training ➡️ 4. Evaluation ➡️ 5. Deployment (Inference) ➡️ 6. Monitoring.

Inference Types

  • Real-time: Low latency, immediate response (e.g., fraud check at checkout); see the sketch below.
  • Batch: High volume, delayed (e.g., generating monthly recommendations).
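For a feel of what "real-time" means in practice, here is a minimal boto3 sketch against an already-deployed SageMaker endpoint; the endpoint name and CSV payload are hypothetical placeholders.

```python
# Minimal sketch: real-time (synchronous) inference on a deployed SageMaker endpoint.
# "fraud-detector-endpoint" and the CSV payload are hypothetical placeholders.
import boto3

runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName="fraud-detector-endpoint",
    ContentType="text/csv",
    Body="129.99,web,US",              # one transaction, scored immediately
)
print(response["Body"].read())         # low-latency prediction for this single request

# Batch inference would instead run a SageMaker Batch Transform job over a whole
# dataset in S3 and write the results back to S3, with no live endpoint involved.
```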

✨ Domain 2 & 3: Generative AI & Foundation Models (52%)

Foundation Models (FMs)

Very large models pre-trained on massive, broad datasets; adaptable to many tasks rather than built for a single purpose.

Key GenAI Concepts

  • Tokens: Units of text (not always full words).
  • Hallucination: Confidently wrong output.
  • Temperature: Creativity/randomness setting (near 0 = predictable, higher = more varied and creative); see the sketch below.
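The temperature bullet maps directly to an inference parameter. Below is a minimal Bedrock Converse API sketch; the model ID is only an example and assumes you have been granted access to that model in your Region.

```python
# Minimal sketch: passing temperature (and a token limit) to a Bedrock model.
# The model ID is an example; available models depend on your account and Region.
import boto3

bedrock = boto3.client("bedrock-runtime")
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Name one AWS AI service."}]}],
    inferenceConfig={
        "temperature": 0.1,   # near 0 = more deterministic/predictable output
        "maxTokens": 100,     # output is measured (and billed) in tokens
    },
)
print(response["output"]["message"]["content"][0]["text"])
```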

Model Customization

  • Prompt Engineering: Designing better inputs (zero-shot, few-shot); see the example below.
  • RAG (Retrieval-Augmented Generation): Connecting the model to external data (Vector DB) for up-to-date, private info.
  • Fine-tuning: Re-training the model on specific data to change its weights/behavior.
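Zero-shot vs. few-shot is easiest to see side by side. The strings below are illustrative prompts you could send as the user message in a Bedrock call; no model weights change either way.

```python
# Minimal sketch: prompt engineering only changes the input text, never the model.
zero_shot = "Classify the sentiment of: 'The checkout process was painless.'"

few_shot = """Classify the sentiment of each review.
Review: 'Delivery took three weeks.' -> Negative
Review: 'Support resolved my issue in minutes.' -> Positive
Review: 'The checkout process was painless.' ->"""

# Either string would be sent as the user message in a converse() call
# (see the temperature sketch above); only the prompt differs.
```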

Evaluation Metrics

  • ROUGE/BLEU: Measure similarity between generated text and a reference via n-gram overlap (ROUGE is common for summarization, BLEU for translation); a toy example follows below.
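As a rough intuition for how these scores work, here is a toy BLEU calculation with NLTK (assuming NLTK is installed via pip install nltk; the exam only expects you to know what the metrics measure, not how to compute them).

```python
# Minimal sketch: BLEU scores n-gram overlap between generated and reference text.
# ROUGE is the recall-oriented counterpart, commonly used for summaries.
from nltk.translate.bleu_score import sentence_bleu

reference = "the cat sat on the mat".split()
candidate = "the cat is on the mat".split()

# Only unigrams + bigrams, so this tiny example produces a meaningful score.
score = sentence_bleu([reference], candidate, weights=(0.5, 0.5))
print(f"BLEU: {score:.2f}")   # closer to 1.0 = closer to the reference
```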

🛠️ AWS Services Comparison (The "Which Tool" Section)

Service | Primary Use Case
Amazon Bedrock | API-based access to Foundation Models (Claude, Llama, Titan). Fastest route to GenAI.
Amazon SageMaker | The "Kitchen Sink." Full control over building, training, and deploying custom models.
Amazon Q | AI-powered assistant for businesses (Q Business) or developers (Q Developer).
Rekognition | Computer Vision (image/video analysis).
Polly / Transcribe | Text-to-Speech (Polly) / Speech-to-Text (Transcribe).
Comprehend | Natural Language Processing (sentiment analysis, entity extraction); a one-call example follows below.
Lex | Building conversational bots (chatbots).
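To illustrate what "purpose-built AI service" means in practice, here is a single Comprehend call; the sample sentence is made up and standard IAM permissions are assumed.

```python
# Minimal sketch: sentiment analysis with Amazon Comprehend is one API call,
# with no model to build, train, or host yourself.
import boto3

comprehend = boto3.client("comprehend")
result = comprehend.detect_sentiment(
    Text="The new console experience is fantastic.",
    LanguageCode="en",
)
print(result["Sentiment"])        # e.g. POSITIVE
print(result["SentimentScore"])   # confidence per sentiment class
```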

🛡️ Domain 4 & 5: Security & Responsible AI (28%)

Responsible AI Pillars

Fairness, Explainability, Privacy & Security, Safety, Robustness, Governance, Transparency.

AWS Tools for Responsibility

  • SageMaker Clarify: Detects bias and provides model explainability.
  • Bedrock Guardrails: Filters harmful content and PII in prompts and model responses; see the sketch below.
  • Amazon A2I: Adds a "Human-in-the-loop" for reviewing low-confidence predictions.
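For a sense of how Guardrails attach to a model call, here is a hedged Converse sketch; the guardrail ID/version are placeholders for a guardrail you would have already created in Bedrock, and the model ID is an example.

```python
# Minimal sketch: attaching a pre-configured Bedrock Guardrail to a Converse call.
# The guardrail identifier/version below are hypothetical placeholders.
import boto3

bedrock = boto3.client("bedrock-runtime")
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",   # example model ID
    messages=[{"role": "user", "content": [{"text": "List customer phone numbers."}]}],
    guardrailConfig={
        "guardrailIdentifier": "gr-example123",   # hypothetical guardrail ID
        "guardrailVersion": "1",
    },
)
# If the guardrail intervenes, the response carries its configured blocked/masked message.
print(response["stopReason"])
print(response["output"]["message"]["content"][0]["text"])
```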

Security & Compliance

  • Shared Responsibility: AWS is responsible for security of the cloud (the infrastructure); you are responsible for security in the cloud (your data, access, and configuration).
  • Data Privacy: Bedrock does not use customer data to train its base models.
  • Governance: Model Cards (documentation) and AI Service Cards provide transparency.

🎯 Bedrock vs SageMaker Quick Decision Tree

Need to use AI/ML?
├─ Want pre-built models via API?
│  ├─ GenAI/LLMs? → Amazon Bedrock ⭐
│  └─ Specific tasks? → AI Services (Rekognition, Comprehend, etc.)
│
└─ Want to build/train custom models?
   └─ Full ML lifecycle control? → SageMaker ⭐

When to Choose Bedrock

  • ✅ Need LLMs quickly (no training)
  • ✅ Text/Chat generation
  • ✅ API-first approach
  • ✅ Multi-model access (Claude, Llama, Titan)
  • ✅ RAG implementation
  • ✅ Quick prototyping

When to Choose SageMaker

  • ✅ Custom model training
  • ✅ Full control over ML pipeline
  • ✅ Data science workflows
  • ✅ Model monitoring & drift detection
  • ✅ Specialized use cases
  • ✅ MLOps requirements

🔑 Key Acronyms to Know

Acronym | Full Form | Quick Definition
FM | Foundation Model | Large pre-trained model
LLM | Large Language Model | Text-focused foundation model
RAG | Retrieval-Augmented Generation | Connect LLM to external data
A2I | Amazon Augmented AI | Human review workflows
MLOps | ML Operations | DevOps for ML models
PII | Personally Identifiable Information | Sensitive personal data
ROUGE | Recall-Oriented Understudy for Gisting Evaluation | Text similarity metric

💡 Final Minute Tips

Service Selection Rules

  1. If the question asks for "Easy/No-Code/API": Think Bedrock or High-level AI services (Rekognition, Polly).
  2. If the question asks for "Full Control/Data Scientist": Think SageMaker.
  3. If the question asks about "Bias": Think SageMaker Clarify.
  4. If the question asks about "External Knowledge/Real-time data": Think RAG or Knowledge Bases for Bedrock.

Common Exam Traps

Watch Out!

  • Bedrock ≠ Training: Bedrock uses pre-trained models only
  • Hallucinations: LLMs can be confidently wrong; use RAG or guardrails
  • Temperature: Higher = creative but less accurate
  • Fine-tuning ≠ Prompt Engineering: Fine-tuning changes the model; prompting doesn't
  • Shared Responsibility: You're responsible for data, AWS handles infrastructure

📊 Model Selection Quick Guide

For Text Tasks

Task | Best Service | Why
Chat/Conversation | Bedrock (Claude) | Natural dialogue
Code Generation | Bedrock (Claude) or Amazon Q Developer (formerly CodeWhisperer) | Optimized for code
Summarization | Bedrock (Titan/Claude) | Fast, accurate
Translation | Translate (simple) / Bedrock (complex) | Cost vs. capability
Sentiment | Comprehend | Purpose-built

For Vision Tasks

Task | Best Service | Why
Object Detection | Rekognition | Pre-built, easy (call sketched below)
Face Analysis | Rekognition | Specialized
Custom Vision | SageMaker | Full control
Medical Imaging | SageMaker | Compliance needs
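The "pre-built, easy" row really is a single API call. Below is a hedged Rekognition sketch; the bucket and object key are placeholders for an image you own.

```python
# Minimal sketch: pre-built object detection with Amazon Rekognition.
# The S3 bucket and key are hypothetical placeholders.
import boto3

rekognition = boto3.client("rekognition")
result = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "warehouse.jpg"}},
    MaxLabels=5,
)
for label in result["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```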

🎓 Responsible AI Quick Checks

Before Deployment Checklist

  • [ ] Fairness: Does model work equally for all groups?
  • [ ] Explainability: Can you explain decisions?
  • [ ] Privacy: Is sensitive data protected?
  • [ ] Safety: Are guardrails in place?
  • [ ] Transparency: Is model documented?
  • [ ] Monitoring: Is drift detection enabled?

Key Questions to Answer

  • Is there bias? → Use SageMaker Clarify
  • Need human review? → Use Amazon A2I
  • Filter harmful output? → Use Bedrock Guardrails
  • Track model performance? → Use SageMaker Model Monitor


🚀 RAG Implementation Quick Reference

Components

User Query
    ↓
1. Embedding Model (convert query to vector)
    ↓
2. Vector Database (find similar documents)
    ↓
3. Retrieved Context + Query
    ↓
4. LLM (generate answer with context)
    ↓
Answer with Sources

AWS RAG Stack

  • Vector DB: OpenSearch, Bedrock Knowledge Bases
  • Embeddings: Bedrock Titan Embeddings
  • LLM: Bedrock (Claude, Llama)
  • Storage: S3 (documents); a minimal end-to-end Knowledge Bases call is sketched below
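Bedrock Knowledge Bases wraps the whole flow above (embed, retrieve, augment, generate) behind one managed call. A hedged sketch follows; the knowledge base ID and model ARN are placeholders for resources you would have created, and the exact ARN format may differ by Region.

```python
# Minimal sketch: managed RAG with Bedrock Knowledge Bases via RetrieveAndGenerate.
# The knowledge base ID and model ARN are hypothetical placeholders.
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime")
response = agent_runtime.retrieve_and_generate(
    input={"text": "What is our refund policy?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KBEXAMPLE123",   # hypothetical knowledge base ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
        },
    },
)
print(response["output"]["text"])   # answer grounded in your documents
print(response["citations"])        # retrieved source passages ("Answer with Sources")
```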

🎬 Additional Resources

Essential Video

Watch this AWS Certified AI Practitioner Exam Guide

This video provides a direct comparison between the two heavy hitters of the exam, Bedrock and SageMaker, helping you decide which service fits specific exam scenarios.


⚡ Last 5 Minutes Before Exam

Must Remember

  1. Bedrock = API access to FMs (no training)
  2. SageMaker = Full ML lifecycle (build/train/deploy)
  3. RAG = External knowledge for LLMs
  4. Clarify = Bias detection and explainability
  5. Shared Responsibility = AWS secures the infrastructure, you secure your data and configuration

Quick Mental Check

  • Can you explain AI vs ML vs DL? ✓
  • Do you know when to use Bedrock vs SageMaker? ✓
  • Can you describe RAG in one sentence? ✓
  • Do you know the Responsible AI pillars? ✓
  • Can you name 3 AI services besides Bedrock/SageMaker? ✓

You've Got This!

Take a deep breath. You've studied. Trust your preparation. Good luck! 🍀

← Back to Overview | Study Notes | Exam Tips

Last Updated: 2026-01-14

Happy Studying! 🚀