Domain 1: Business Value of AI (35-40%)
Exam Tip
The AB-731 exam is for Business Leaders, not developers. When in doubt, choose the answer that focuses on Strategy, ROI, and Governance over technical implementation or coding.
GenAI vs Traditional AI
| Traditional AI | Generative AI |
|---|---|
| Analyzes, classifies, predicts | Creates new content |
| Spam filter, fraud detection | Write emails, generate images |
Foundation vs Specialized Models
- Foundation Models: Large, general-purpose models (like GPT-4o) pre-trained on massive data. They have broad, general knowledge across many domains.
- Specialized Models: Models fine-tuned for a specific task (e.g., medical diagnosis, legal coding). They are more efficient for niche tasks.
Key Terms
| Term | Remember |
|---|---|
| Prompt | Input → affects output quality |
| Token | Text unit → affects cost |
| Context window | Input + output limit |
| Temperature | 0 = predictable, 1 = creative |
Critical
Tokens are the primary cost driver. Understand that context window limits (Input + Output) directly impact both the cost and the richness of the AI's response.
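Because tokens drive cost, a back-of-envelope estimator helps when budgeting an AI initiative. The per-1k-token prices below are illustrative placeholders, not any vendor's real rates; input and output tokens are typically billed separately.

```python
def estimate_call_cost(input_tokens, output_tokens,
                       price_in_per_1k=0.005, price_out_per_1k=0.015):
    """Rough per-call cost. Prices are illustrative, not real vendor rates."""
    return (input_tokens / 1000 * price_in_per_1k
            + output_tokens / 1000 * price_out_per_1k)

# A long grounded prompt (e.g. RAG context) plus a rich answer:
cost = estimate_call_cost(input_tokens=6000, output_tokens=1000)
print(f"${cost:.4f} per call, ${cost * 10000:.2f} for 10k calls/day")
# -> $0.0450 per call, $450.00 for 10k calls/day
```

The same arithmetic explains the context-window trade-off: the input + output token budget is a hard cap, so richer context means both higher cost and less room for the response.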
Data Maturity: The Prerequisite
Before seeing ROI, an organization must have a foundation of Data Maturity.
- Quality: AI is only as good as the data it's grounded in ("Garbage In, Garbage Out").
- Accessibility: Data must be broken out of silos so the AI can "see" it (via RAG).
- Security: Robust permissions must be in place before AI deployment to prevent oversharing.
- Labels: For specialized tasks, high-quality labeled data is required for Fine-tuning.
Business Value Areas
- Productivity: Draft emails, summarize meetings, analyze data
- Decision-making: Faster insights from more data
- Automation: Handle routine tasks (FAQs, document processing)
- Customer experience: Personalized, faster responses
Value Types:
- Efficiency: Time saved, faster completion of routine tasks.
- Growth: New business models, personalized customer insights.
- Risk Mitigation: Better compliance tracking, early anomaly detection.
Trap
Questions often ask about ROI. Include all value types (Efficiency, Growth, and Risk), not just cost savings.
Model Customization Strategies
When the base model isn't enough, leaders must choose how to customize it.
| Strategy | Effort | Cost | Data Needs | Best Use Case |
|---|---|---|---|---|
| Prompt Engineering | Low | Low | None | Controlling style, formatting, and simple tasks using instructions. |
| RAG (Retrieval-Augmented Generation) | Medium | Medium | Knowledge Base | Grounding the model in your latest business data/documents. |
| Fine-tuning | High | High | Labeled Dataset | Achieving a very specific style or deep domain expertise (rarely needed). |
Strategic Rule of Thumb:
- Always start with Prompt Engineering.
- Move to RAG if the model needs to "know" specific business data.
- Only consider Fine-tuning if RAG and Prompts can't achieve the required tone or specialized format.
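The RAG step above can be sketched in a few lines: the model stays frozen, and only the prompt changes. Retrieval here is naive keyword overlap purely for illustration; production systems use vector embeddings, and the policies in the knowledge base are invented examples.

```python
# Minimal RAG sketch: ground the model by injecting retrieved documents
# into the prompt. No model weights are touched.

KNOWLEDGE_BASE = [
    "Policy 12: Remote employees may expense up to $50/month for internet.",
    "Policy 7: All vendor contracts require legal review before signing.",
]

def retrieve(question, docs, k=1):
    """Rank documents by naive word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    def score(doc):
        return len(q_words & set(doc.lower().split()))
    return sorted(docs, key=score, reverse=True)[:k]

def build_grounded_prompt(question):
    context = "\n".join(retrieve(question, KNOWLEDGE_BASE))
    return (f"Answer using ONLY the context below.\n"
            f"Context:\n{context}\n\n"
            f"Question: {question}")

print(build_grounded_prompt("Can remote employees expense internet costs?"))
```

The grounded prompt would then be sent to any off-the-shelf model, which is exactly why RAG is the right answer for "latest business data" questions: updating the knowledge base is cheap, while the model itself never changes.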
Model Weight Updates & Terminology
A common exam trap is confusing which customization method actually changes the model itself.
| Term | Are Weights Updated? | Core Concept |
|---|---|---|
| Pre-training | YES (Extensive) | Creating the model from scratch on massive datasets. This is where the model "learns" language. |
| Fine-tuning | YES (Targeted) | Taking a pre-trained model and continuing training on a small, specific dataset to adapt its behavior/style. |
| RAG | NO | The model remains frozen. It uses external data as "context" in the prompt to ground its answers. |
| Prompt Engineering | NO | Providing instructions and examples to guide the model's existing knowledge. |
Terminology Tip
- Grounding: Providing context via RAG so the model doesn't hallucinate. (No weight change)
- Transfer Learning: The underlying principle of Fine-tuning: building on top of a pre-trained model. (Weight change)
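The weight-update distinction can be made concrete with a toy transfer-learning sketch: the "pre-trained" base weight is frozen, and gradient descent updates only a small task-specific head on a tiny labeled dataset. The model and numbers are invented for illustration, not a real fine-tuning recipe.

```python
# Transfer learning in miniature: freeze the base, train only the head.

base_weight = 2.0   # "pre-trained" knowledge: frozen, never updated
head_weight = 0.0   # small task-specific parameter: fine-tuned

def predict(x):
    return head_weight * (base_weight * x)   # head sits on top of the base

# Tiny labeled dataset for the new task: y = 6x (so the ideal head is 3.0)
data = [(1.0, 6.0), (2.0, 12.0), (3.0, 18.0)]

lr = 0.01
for _ in range(200):
    for x, y in data:
        error = predict(x) - y
        # Gradient w.r.t. head_weight only; base_weight stays frozen.
        head_weight -= lr * error * (base_weight * x)

print(round(head_weight, 2))  # converges to 3.0
```

This mirrors the table above: Fine-tuning/Transfer Learning changes weights (here, `head_weight`), while RAG and Prompt Engineering would leave both weights untouched and change only the input.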
Adaptation vs Retrieval (Fine-tuning vs RAG)
This comparison focuses on where the knowledge comes from.
| Concept | Term | Analogy | Best for... |
|---|---|---|---|
| Adaptation | Fine-tuning | Learning a new skill or language style from a textbook. | Nuanced behavior, specialized industry jargon. |
| Retrieval | RAG | Taking an "open-book" exam with access to a library. | Facts, real-time data, and proprietary internal docs. |
Common Pitfall
Fine-tuning is NOT for real-time data. A common exam trap is asking how to provide an AI with today's stock prices. The answer is RAG/Grounding, not Fine-tuning.
Model Customization Quiz
Which method uses a Knowledge Base to ground answers without updating weights?
Answer: RAG. It grounds answers in retrieved context while the model weights stay frozen.