Exam Guide & Traps โ
โ Domain 5 ยท Cheatsheet โ
Exam Traps โ
Trap 1: Bedrock Model Confusion โ
Know which model for which scenario:
| If question mentions... | Think... |
|---|---|
| Long documents (200K tokens) | Anthropic Claude |
| Cost-effective text generation | Amazon Titan Text |
| Embeddings for RAG | Amazon Titan Embeddings |
| Image generation | Stable Diffusion XL |
| Multilingual content | Cohere or AI21 Jurassic |
Trap 2: RAG vs Fine-Tuning vs Prompt Engineering โ
Common mistake: Choosing fine-tuning when RAG or prompting would work.
Correct order:
- Prompt Engineering (try first - cheapest, fastest)
- RAG (need current/private data)
- Fine-Tuning (only if above methods insufficient)
Examples:
โ "Fine-tune the model to access current news"
โ "Use RAG to provide current news as context"
โ "Fine-tune for better output format"
โ "Use few-shot prompting with examples"
Trap 3: ML Metrics Selection โ
| Scenario | Metric | Why |
|---|---|---|
| Spam filter | Precision | False positives costly (important emails marked spam) |
| Medical diagnosis | Recall | False negatives costly (miss diseases) |
| Balanced dataset | Accuracy | Classes are equal |
| Imbalanced data | Precision/Recall | Accuracy misleading |
Trap 4: Responsible AI Tools โ
| Need | AWS Service |
|---|---|
| Detect bias in data/models | SageMaker Clarify |
| Monitor model drift | SageMaker Model Monitor |
| Human review workflows | Amazon A2I |
| Explain predictions | SageMaker Clarify |
Trap 5: Security Best Practices โ
When asked about securing AI systems:
- โ Encrypt at rest (AWS KMS, S3 encryption)
- โ Encrypt in transit (TLS/SSL)
- โ IAM policies for access control
- โ Input validation and sanitization
- โ Monitor for prompt injection attacks
Common trap: Forgetting that Bedrock data does not train public models.
Decision Quick Reference โ
Bedrock vs SageMaker Quick Decision Tree โ
Need to use AI/ML?
โโ Want pre-built models via API?
โ โโ GenAI/LLMs? โ Amazon Bedrock โญ
โ โโ Specific tasks? โ AI Services (Rekognition, Comprehend, etc.)
โ
โโ Want to build/train custom models?
โโ Full ML lifecycle control? โ SageMaker โญDecision Quick Reference โ
"Which AWS AI service?" โ
Extract text from documents โ Amazon Textract
Analyze sentiment โ Amazon Comprehend
Translate languages โ Amazon Translate
Speech-to-text โ Amazon Transcribe
Text-to-speech โ Amazon Polly
Build chatbot โ Amazon Lex
Recommendations โ Amazon Personalize
Fraud detection โ Amazon Fraud Detector
Search documents โ Amazon Kendra
Access foundation models โ Amazon Bedrock
Full ML platform โ Amazon SageMaker"RAG, Fine-Tuning, or Prompt Engineering?" โ
Simple task, common format โ Prompt Engineering (zero-shot)
Custom format, few examples โ Prompt Engineering (few-shot)
Need current data โ RAG
Need private/proprietary data โ RAG
Have 100+ training examples โ Fine-Tuning
Domain-specific terminology โ Fine-Tuning
Budget constrained โ Prompt Engineering"Which type of learning?" โ
Have labeled data (X and y) โ Supervised Learning
No labels, find patterns โ Unsupervised Learning
Learn through rewards โ Reinforcement Learning"How to address concerns about..." โ
Hallucinations โ Use RAG, implement guardrails, human review
Data privacy โ Bedrock doesn't train on your data, encryption
Bias โ Use SageMaker Clarify, diverse training data
Cost โ Choose appropriate model size, optimize prompts, cache
Model drift โ SageMaker Model Monitor, regular retrainingExam Day Reminders โ
Think Like This โ
For Bedrock questions:
- Longest context? โ Claude (200K)
- Cheapest? โ Titan
- Embeddings? โ Titan Embeddings
- Images? โ Stable Diffusion
For ML questions:
- False positives bad? โ Precision
- False negatives bad? โ Recall
- Model too simple? โ Underfitting
- Memorized training data? โ Overfitting
For Responsible AI:
- Detect bias? โ SageMaker Clarify
- Monitor drift? โ SageMaker Model Monitor
- Human review? โ Amazon A2I
For RAG:
- Always: Embeddings โ Vector DB โ Retrieve โ Generate
- AWS vector DB: Amazon OpenSearch Service