
Domain 1: Business Value of AI (35-40%)


Exam Tip

The AB-731 exam is for Business Leaders, not developers. When in doubt, choose the answer that focuses on Strategy, ROI, and Governance over technical implementation or coding.


GenAI vs Traditional AI

| Traditional AI | Generative AI |
| --- | --- |
| Analyzes, classifies, predicts | Creates new content |
| Spam filter, fraud detection | Writes emails, generates images |

Foundation vs Specialized Models

  • Foundation Models: Large, generic models (like GPT-4o) pre-trained on massive data. They "know everything" generally.
  • Specialized Models: Models fine-tuned for a specific task (e.g., medical diagnosis, legal coding). They are more efficient for niche tasks.

Key Terms

| Term | Remember |
| --- | --- |
| Prompt | Input → affects output quality |
| Token | Text unit → affects cost |
| Context window | Combined input + output limit |
| Temperature | 0 = predictable, 1 = creative |
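The Temperature row can be made concrete with a toy sampler. This is a minimal sketch with made-up token scores, not a real model API: temperature rescales the probability of each candidate next token before one is picked.

```python
import math
import random

# Toy next-token sampler. The candidate scores are made-up assumptions;
# a real model produces thousands of these per generation step.
def sample(scores: dict[str, float], temperature: float, seed: int = 0) -> str:
    if temperature == 0:
        # Temperature 0: deterministic, always pick the top-scoring token.
        return max(scores, key=scores.get)
    random.seed(seed)
    # Higher temperature flattens the distribution, making unlikely tokens
    # more probable ("creative"); lower temperature sharpens it ("predictable").
    weights = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(weights.values())
    r, acc = random.random() * total, 0.0
    for tok, w in weights.items():
        acc += w
        if acc >= r:
            return tok
    return tok

scores = {"profit": 2.0, "growth": 1.5, "pineapple": 0.1}
print(sample(scores, temperature=0))    # always "profit" (deterministic)
print(sample(scores, temperature=1.5))  # may pick a less likely token
```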

Critical

Tokens are the primary cost driver. Understand that context window limits (Input + Output) directly impact both the cost and the richness of the AI's response.
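The cost mechanics above can be sketched in a few lines. Both the "~4 characters per token" heuristic and the per-1K-token prices below are illustrative assumptions, not any vendor's actual tokenizer or price list.

```python
# Back-of-the-envelope LLM cost estimate. The ~4 chars/token heuristic
# and the prices are illustrative assumptions only.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def estimate_cost(prompt: str, expected_output_tokens: int,
                  price_in_per_1k: float = 0.01,
                  price_out_per_1k: float = 0.03) -> float:
    # Total cost = input tokens at the input rate + output tokens at the
    # (usually higher) output rate; both must fit in the context window.
    input_tokens = estimate_tokens(prompt)
    return (input_tokens / 1000) * price_in_per_1k \
         + (expected_output_tokens / 1000) * price_out_per_1k

report = "Q3 revenue grew 12% year over year. " * 200  # long input document
print(estimate_tokens(report), "input tokens (approx.)")
print(round(estimate_cost(report, expected_output_tokens=300), 4), "USD (approx.)")
```

Note that output tokens typically cost more per token than input tokens, which is why a long, verbose response drives cost as much as a long prompt.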

Data Maturity: The Prerequisite

Before seeing ROI, an organization must have a foundation of Data Maturity.

  • Quality: AI is only as good as the data it's grounded in ("Garbage In, Garbage Out").
  • Accessibility: Data must be broken out of silos so the AI can "see" it (via RAG).
  • Security: Robust permissions must be in place before AI deployment to prevent oversharing.
  • Labels: For specialized tasks, high-quality labeled data is required for Fine-tuning.

Business Value Areas

  • Productivity: Draft emails, summarize meetings, analyze data
  • Decision-making: Faster insights from more data
  • Automation: Handle routine tasks (FAQs, document processing)
  • Customer experience: Personalized, faster responses

Value Types:

  • Efficiency: Time saved, faster completion of routine tasks.
  • Growth: New business models, personalized customer insights.
  • Risk Mitigation: Better compliance tracking, early anomaly detection.

Trap

Questions often ask about ROI. Include all value types (Efficiency, Growth, and Risk), not just cost savings.


Model Customization Strategies

When the base model isn't enough, leaders must choose how to customize it.

| Strategy | Effort | Cost | Data Needs | Best Use Case |
| --- | --- | --- | --- | --- |
| Prompt Engineering | Low | Low | None | Controlling style, formatting, and simple tasks using instructions. |
| RAG (Retrieval-Augmented Generation) | Medium | Medium | Knowledge base | Grounding the model in your latest business data/documents. |
| Fine-tuning | High | High | Labeled dataset | Achieving a very specific style or deep domain expertise (rarely needed). |

Strategic Rule of Thumb:

  • Always start with Prompt Engineering.
  • Move to RAG if the model needs to "know" specific business data.
  • Only consider Fine-tuning if RAG and Prompts can't achieve the required tone or specialized format.

Model Weight Updates & Terminology

A common exam trap is confusing which customization method actually changes the model itself.

| Term | Are Weights Updated? | Core Concept |
| --- | --- | --- |
| Pre-training | YES (extensive) | Creating the model from scratch on massive datasets. This is where the model "learns" language. |
| Fine-tuning | YES (targeted) | Taking a pre-trained model and continuing training on a small, specific dataset to adapt its behavior/style. |
| RAG | NO | The model remains frozen. It uses external data as "context" in the prompt to ground its answers. |
| Prompt Engineering | NO | Providing instructions and examples to guide the model's existing knowledge. |
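The "weights updated?" column is the whole distinction. A toy sketch (the `Model` class is hypothetical; real fine-tuning runs inside a provider's training pipeline) shows that RAG and prompting change only the input, while fine-tuning changes the model itself:

```python
# Toy model illustrating the "weights updated?" column. Hypothetical
# class; real fine-tuning happens in a provider's training pipeline.
class Model:
    def __init__(self, weights: dict[str, float]):
        self.weights = dict(weights)  # the learned parameters

    def fine_tune(self, adjustments: dict[str, float]) -> None:
        # Fine-tuning: the weights themselves change, permanently.
        for name, delta in adjustments.items():
            self.weights[name] = self.weights.get(name, 0.0) + delta

    def answer(self, prompt: str, context: str = "") -> str:
        # RAG / prompt engineering: only the *input* changes, never the weights.
        return f"response to: {context}{prompt}"

m = Model({"w0": 1.0})
frozen = dict(m.weights)
m.answer("What is our refund policy?", context="Refunds within 30 days. ")
assert m.weights == frozen   # RAG/prompting left the model frozen
m.fine_tune({"w0": 0.1})
assert m.weights != frozen   # fine-tuning updated the weights
```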

Terminology Tip

  • Grounding: Providing context via RAG so the model doesn't hallucinate. (No weight change)
  • Transfer Learning: The underlying principle of Fine-tuning, i.e. building on top of a pre-trained model. (Weight change)

Adaptation vs Retrieval (Fine-tuning vs RAG)

This comparison focuses on where the knowledge comes from.

| Concept | Term | Analogy | Best for... |
| --- | --- | --- | --- |
| Adaptation | Fine-tuning | Learning a new skill or language style from a textbook. | Nuanced behavior, specialized industry jargon. |
| Retrieval | RAG | Taking an "open-book" exam with access to a library. | Facts, real-time data, and proprietary internal docs. |

Common Pitfall

Fine-tuning is NOT for real-time data. A common exam trap is asking how to provide an AI with today's stock prices. The answer is RAG/Grounding, not Fine-tuning.
