CloudPath Academy

Your guide to AWS certification success


AWS Certified AI Practitioner (AIF-C01) Domain 2

Fundamentals of Generative AI

Official Exam Guide: AWS Certified AI Practitioner Exam Guide
Skill Builder: AWS AI Practitioner Learning Plan


Domain Overview

Domain Weight: 24% of the exam

This domain tests your understanding of generative AI concepts, foundation models, prompt engineering, and AWS services for generative AI.


How to Study This Domain Effectively

Study Tips

  1. Understand generative AI vs traditional ML - Know the difference between discriminative models (classify/predict) and generative models (create new content). The exam tests whether you understand when to use generative AI.

  2. Learn foundation models - Understand what foundation models are, how they’re trained (pre-training on massive data), and their capabilities (text, images, code generation). Know the difference between foundation models and traditional ML models.

  3. Master prompt engineering - Prompting is how you interact with LLMs. Understand techniques like zero-shot, few-shot, chain-of-thought prompting. Questions test whether you can identify effective prompts for specific tasks.

  4. Know AWS generative AI services - Amazon Bedrock (access foundation models), Amazon Q (business assistant), SageMaker JumpStart. Match services to use cases.

  5. Understand model capabilities and limitations - LLMs can generate text, answer questions, summarize, translate. But they can also hallucinate (generate incorrect information). Know the strengths and weaknesses.

Suggested Study Order

  1. Start with generative AI fundamentals - What it is, how it differs from traditional ML
  2. Learn foundation models - Pre-training, fine-tuning, model types
  3. Study prompt engineering - Techniques for better outputs
  4. Master Amazon Bedrock - AWS’s primary generative AI service
  5. Understand RAG and model customization - Improving model outputs

Task 2.1: Explain the basic concepts of generative AI

Key Concepts

1. Generative AI vs Traditional ML

Why: Understanding the difference is fundamental. Traditional ML classifies or predicts based on patterns. Generative AI creates new content (text, images, code, audio).

Key Differences:

  - Traditional ML: learns patterns to classify or predict from existing data (e.g., spam detection, demand forecasting)
  - Generative AI: learns patterns to create new content (e.g., text, images, code, audio)

Use Cases:

  - Traditional ML: fraud detection, churn prediction, recommendation ranking
  - Generative AI: chatbots, content creation, summarization, code generation

AWS Documentation:
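To make the contrast concrete, here is a toy Python sketch: a rule-based classifier stands in for a discriminative model, and a template filler stands in for a generative one. Both functions are illustrative stand-ins, not real models.

```python
# Discriminative (traditional ML): choose a label from a fixed set.
def classify_sentiment(text: str) -> str:
    """Toy stand-in for a classifier such as spam or sentiment detection."""
    negative_words = {"bad", "terrible", "awful"}
    return "negative" if set(text.lower().split()) & negative_words else "positive"

# Generative: produce new content rather than selecting a label.
def generate_summary(topic: str) -> str:
    """Toy stand-in for a text-generating model."""
    return f"Draft summary about {topic}: ..."

print(classify_sentiment("the support was terrible"))  # negative
print(generate_summary("cloud security"))
```

The point the exam probes is exactly this split: a discriminative model maps input to a label, while a generative model produces new output.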

2. Foundation Models (FMs)

Why: Foundation models are large models pre-trained on vast data that can be adapted for many tasks. Understanding FMs is central to generative AI.

Key Concepts:

Examples:

AWS Documentation:

3. Large Language Models (LLMs)

Why: LLMs are the most common type of foundation model. Understanding LLM capabilities (text generation, question answering, summarization, translation) helps you identify appropriate use cases.

Capabilities:

AWS Documentation:


Task 2.2: Understand prompt engineering

Key Concepts

1. What is Prompt Engineering?

Why: Prompts are how you communicate with LLMs. Better prompts yield better outputs. Understanding prompt engineering helps you get accurate, relevant responses from AI models.

Key Concepts:

2. Prompt Engineering Techniques

Why: Different techniques work for different tasks. The exam tests whether you can identify appropriate prompting strategies.

Techniques:

Examples:

Zero-shot:

Translate this to Spanish: "Hello, how are you?"

Few-shot:

English: cat → Spanish: gato
English: dog → Spanish: perro
English: bird → Spanish: ?

Chain-of-thought:

Let's solve this step by step:
1. First, identify...
2. Then, calculate...
3. Finally, conclude...
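The few-shot pattern above can also be assembled programmatically. This is a generic sketch (the helper name and the `English: ... -> Spanish: ...` template are made up for illustration); real applications should follow the target model's preferred prompt format:

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt from (input, output) example pairs.

    Each pair becomes one demonstration line; the final line leaves the
    answer blank for the model to complete.
    """
    lines = [f"English: {src} -> Spanish: {dst}" for src, dst in examples]
    lines.append(f"English: {query} -> Spanish:")
    return "\n".join(lines)

examples = [("cat", "gato"), ("dog", "perro")]
print(build_few_shot_prompt(examples, "bird"))
```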

AWS Documentation:


Task 2.3: Identify AWS services for generative AI

Key Services

1. Amazon Bedrock

Why: Bedrock is AWS’s primary service for accessing foundation models. Understanding Bedrock is essential for the exam.

Key Features:

Use Cases:

AWS Documentation:
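The Bedrock calling pattern can be sketched with boto3 (the AWS SDK for Python). The model ID and the Anthropic Messages request shape below reflect Bedrock's InvokeModel interface for Claude models, but treat the specifics as an illustration and verify them against the current Bedrock documentation; the network call itself requires AWS credentials and model access, so it is wrapped in a function and not executed here:

```python
import json

def build_claude_body(prompt: str, max_tokens: int = 256) -> str:
    """Build a request body in the Anthropic Messages format used by
    Claude models on Bedrock (field names per Bedrock docs; verify for
    your model)."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

def invoke(prompt: str) -> str:
    """Call a foundation model through the Bedrock runtime.

    Requires AWS credentials and access to the model ID below
    (an illustrative choice), so this is defined but not run here.
    """
    import boto3  # AWS SDK for Python
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        body=build_claude_body(prompt),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]

print(build_claude_body("Summarize what Amazon Bedrock does in one sentence."))
```

Note the design: Bedrock exposes many providers' models behind one API, so switching models is largely a matter of changing the model ID and request body format.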

2. Amazon Q

Why: Amazon Q is AWS’s AI-powered business assistant. It helps with AWS-related questions, code generation, and business intelligence.

Capabilities:

AWS Documentation:

3. Amazon SageMaker JumpStart

Why: JumpStart provides pre-trained models and solutions for quick deployment.

Features:

AWS Documentation:


Task 2.4: Understand model customization and improvement

Key Concepts

1. Retrieval Augmented Generation (RAG)

Why: RAG improves LLM responses by providing relevant context from external knowledge sources. This reduces hallucinations and provides up-to-date information.

How it Works:

  1. User asks a question
  2. System retrieves relevant documents
  3. Documents are added to the prompt
  4. LLM generates response using provided context
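The four steps above can be sketched in miniature. Production systems retrieve with vector embeddings (for example, via a managed knowledge base); this toy retriever scores documents by keyword overlap, then splices the best match into the prompt. All names and documents here are illustrative:

```python
import re

DOCS = [
    "Amazon Bedrock provides API access to foundation models.",
    "Amazon S3 is an object storage service.",
]

def tokens(text: str) -> set:
    """Lowercase word set; a crude stand-in for real text embedding."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question: str, docs: list, k: int = 1) -> list:
    """Return the k documents sharing the most words with the question."""
    scored = sorted(docs, key=lambda d: len(tokens(question) & tokens(d)), reverse=True)
    return scored[:k]

def build_rag_prompt(question: str, docs: list) -> str:
    """Splice retrieved context into the prompt (step 3), ready for the LLM."""
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_rag_prompt("How do I access foundation models?", DOCS))
```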

Benefits:

AWS Services:

AWS Documentation:

2. Fine-tuning

Why: Fine-tuning adapts foundation models to specific tasks or domains using your own data.

Key Concepts:

When to Use:

AWS Documentation:
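Fine-tuning datasets are commonly JSON Lines files of prompt/completion pairs. The sketch below builds such a file in memory; the "prompt"/"completion" field names follow the format Amazon Bedrock uses for model customization training data, but confirm the exact schema for your target model, and note the ticket examples are invented:

```python
import json

examples = [
    {"prompt": "Classify the ticket: 'My invoice is wrong'", "completion": "billing"},
    {"prompt": "Classify the ticket: 'The app crashes on login'", "completion": "technical"},
]

def to_jsonl(records) -> str:
    """Serialize records as JSON Lines: one JSON object per line."""
    return "\n".join(json.dumps(r) for r in records)

print(to_jsonl(examples))
```

A real fine-tuning job would need far more examples than this; the value of fine-tuning comes from consistent, domain-specific pairs at scale.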

3. Prompt Optimization

Why: Improving prompts is often the easiest way to get better outputs.

Techniques:


Understanding Model Outputs and Limitations

Hallucinations

Why: LLMs can generate plausible-sounding but incorrect information. Understanding this limitation is critical.

What are Hallucinations:

  - Outputs that sound plausible and confident but are factually incorrect or fabricated
  - Can include invented facts, citations, or details not grounded in any source

Mitigation Strategies:

  - Use RAG to ground responses in trusted documents
  - Ask the model to cite sources, and verify outputs against authoritative references
  - Keep a human in the loop for high-stakes decisions

Token Limits

Why: Foundation models have maximum context lengths (token limits).

Key Concepts:

  - A token is roughly a short word or word fragment; both input and output tokens count against the model's context window
  - Inputs that exceed the limit must be truncated, summarized, or split into chunks
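As a rough rule of thumb, English text averages around four characters per token, though the exact count depends on the model's tokenizer. The heuristic below only illustrates why long inputs must be budgeted, truncated, or chunked:

```python
# Crude token accounting. Real models use subword tokenizers;
# the 4-characters-per-token ratio is an assumption for illustration.
CHARS_PER_TOKEN = 4

def estimate_tokens(text: str) -> int:
    """Very rough token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def truncate_to_budget(text: str, max_tokens: int) -> str:
    """Trim text so its estimated token count fits the budget."""
    return text[: max_tokens * CHARS_PER_TOKEN]

doc = "word " * 1000
print(estimate_tokens(doc))  # -> 1250
```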


AWS Service FAQs


AWS Whitepapers and Resources

  1. Generative AI on AWS
  2. Amazon Bedrock Workshop
  3. Prompt Engineering Guide

Final Thoughts on Domain 2

Domain 2 (Fundamentals of Generative AI) represents 24% of the exam.

Key Takeaways:

  1. Understand generative AI - Creates new content vs classifies existing content
  2. Master foundation models - Pre-trained on massive data, adaptable to many tasks
  3. Learn prompt engineering - Zero-shot, few-shot, chain-of-thought techniques
  4. Know Amazon Bedrock - Primary AWS service for accessing foundation models
  5. Understand RAG - Improves accuracy by providing context
  6. Recognize limitations - Hallucinations, token limits, bias

Common Exam Patterns:

Master generative AI fundamentals - this is the core of the AI Practitioner exam!