# AI Fluency: The 4D Framework for Working with AI

> AI Fluency teaches a four-competency framework for collaborating with AI: Delegation (deciding what to give the AI), Description (communicating clearly), Discernment (evaluating outputs), and Diligence (taking responsibility). The course is light on technical depth but heavy on the disciplined judgment that separates effective AI users from prompt-and-pray users. Treat it as the operating manual under every other Skilljar course you will take.

**Domain:** D4 · Prompt Engineering (20%)
**Difficulty:** intro
**Skilljar course:** AI Fluency: Framework & Foundations (15 lessons)
**Canonical:** https://claudearchitectcertification.com/knowledge/ai-fluency-framework
**Last reviewed:** 2026-05-06

## Exam mapping

**Blueprint share:** 20% (D4) + spillover into D5

Direct prep for D4 prompt-engineering competencies and the human-judgment side of D5 reliability. The 4D framework (Delegation, Description, Discernment, Diligence) is the meta-skill that frames every other domain on the exam.

## What you'll learn

- What AI Fluency is and the three modes of human-AI engagement (automation, augmentation, agency)
- The 4D framework end-to-end: Delegation, Description, Discernment, Diligence
- How to apply Problem / Platform / Task awareness inside Delegation
- How to write Product, Process, and Performance descriptions when prompting
- Six foundational prompting techniques and how to troubleshoot when output is off
- How the Description-Discernment loop tightens iteratively, and what Diligence statements are for

## Prerequisites

- **Claude 101 (knowledge)** (knowledge · `claude-101`)

## Lesson outline

### 1. Introduction to AI Fluency

AI Fluency means engaging with AI effectively, efficiently, ethically, and safely: framework-driven, not vibes-driven.

### 2. Why do we need AI Fluency?

Generative AI shifts work from execution to judgment; without a framework, output quality regresses to the prompt-and-pray mean.

### 3. The 4D Framework

Four competencies (Delegation, Description, Discernment, Diligence) that compose into every productive AI interaction.

### 4. Generative AI fundamentals

How LLMs are trained, what tokens are, why models hallucinate: the technical floor under all four Ds.

### 5. Capabilities & limitations

What Claude can do well, what it cannot, and why knowing both is the prerequisite for good Delegation.

### 6. A closer look at Delegation

Three sub-skills: Problem Awareness, Platform Awareness, Task Delegation; decide what to give the AI before you start typing.

### 7. Project planning and Delegation

Map a project's tasks, mark which need human judgment vs. which can go to AI, build a delegation plan together with Claude.

### 8. A closer look at Description

Three sub-skills: Product Description (what), Process Description (how), Performance Description (style of interaction).

### 9. Effective prompting techniques

Six techniques: give context, show examples, specify constraints, break into steps, ask AI to think first, define role/tone.

### 10. A closer look at Discernment

Three sub-skills: Product Discernment (output quality), Process Discernment (reasoning), Performance Discernment (interaction).

### 11. The Description-Discernment loop

Description and Discernment form a feedback loop; sharper description improves output, sharper discernment improves your next description.

### 12. A closer look at Diligence

Three sub-skills: Creation Diligence (system choice), Transparency Diligence (disclosure), Deployment Diligence (own the output).

### 13. Conclusion

Course wrap-up: the 4Ds compose; weakness in any one degrades the others; develop them deliberately.

### 14. Certificate of completion

Skilljar issues a verifiable certificate URL; logistics only.

### 15. Additional activities

Optional exercises: draft a Diligence statement, audit a recent AI interaction through the 4D lens.

## Our simplification

The 4D Framework is the operating system. Every other Skilljar course you take (Claude Code, MCP, Tool Use, RAG) teaches a *capability*. AI Fluency teaches the *discipline* under those capabilities: how to decide what to delegate, how to communicate it, how to evaluate the result, and how to take responsibility for what you ship. The four Ds (Delegation, Description, Discernment, Diligence) are not stages; they are competencies that compose. A great prompt with bad delegation is wasted effort. Great evaluation with no diligence is dishonest. Treat the 4Ds as a checklist you run on every meaningful AI interaction, not a one-time mindset.

Delegation is the first D and the most under-practiced. It has three sub-skills: Problem Awareness (do I actually understand what I'm trying to do?), Platform Awareness (what can this specific AI system do, and where does it fall over?), and Task Delegation (which slices go to me, which to the AI, which to a human collaborator). The course's central insight: most AI failures trace back to Delegation, not prompting. People hand off tasks the AI cannot do (deep domain reasoning without context) or keep tasks the AI could do better (mechanical reformatting). The fix is not a better prompt; it's a better split.

Description is the prompting competency, and the course breaks it into three layers most people collapse into one. Product Description is *what* you want: the output, format, audience, length. Process Description is *how* you want the AI to get there: show your work, ask before you guess, break into steps. Performance Description is *how you want to interact*: be concise, push back on bad ideas, ask clarifying questions before starting. Six prompting techniques follow: give context, show examples, specify constraints, break into steps, ask the AI to think first, define role or tone. The biggest unlock is realizing AI is a partner, not a vending machine; Performance Description is the layer that earns you the partnership.
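
The three layers can be sketched as a tiny prompt assembler. This is an illustrative sketch only; the function name, section headers, and example values are mine, not course or Anthropic API terminology:

```python
def build_prompt(product: str, process: str, performance: str, task: str) -> str:
    """Compose the three Description layers (course terminology) into one prompt."""
    return (
        f"## Product (what I want)\n{product}\n\n"
        f"## Process (how to get there)\n{process}\n\n"
        f"## Performance (how to interact)\n{performance}\n\n"
        f"## Task\n{task}"
    )

# Hypothetical usage: each layer stated explicitly instead of collapsed into one blob.
prompt = build_prompt(
    product="A 200-word summary, bullet points, for a non-technical executive audience.",
    process="Read the whole document first, outline, then draft.",
    performance="Ask clarifying questions before starting; push back on ambiguous requests.",
    task="Summarize the attached incident report.",
)
```

Keeping the layers separate makes it obvious when one is missing, which is exactly the self-check the course recommends.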

Discernment is Description's mirror image, and the course makes a sharp claim: you cannot evaluate AI output well without your own expertise. There are three flavors. Product Discernment: is the output accurate, appropriate, coherent, relevant? Process Discernment: did the reasoning track, or did it skip steps and confabulate? Performance Discernment: did the AI behave well during the interaction, or did it sycophantically agree, drift, or refuse to push back? The Description-Discernment feedback loop is where fluency compounds: each round of evaluation sharpens the next round of description, and the AI's output quality climbs with you.

Diligence is the ethics layer and the one that's hardest to formalize. Three sub-skills: Creation Diligence (which system you choose and why: privacy, alignment, capability fit), Transparency Diligence (being honest with everyone who needs to know that AI was involved: readers, clients, students, regulators), and Deployment Diligence (you own the output, full stop: the AI may have drafted it, but *you* shipped it). The course recommends drafting a personal or project-level diligence statement declaring how you use AI in your work. The exam-relevant take is that Diligence is what stops Claude's helpfulness from becoming your liability, and a senior architect operationalizes it with policy, not memos.
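
One way to make a diligence statement concrete is to generate it from a fixed set of questions. The sketch below is illustrative only; the field names and mapping to sub-skills are my own reading of the framework, not Anthropic's published template:

```python
def diligence_statement(systems: list[str], delegated: str,
                        disclosure: str, verification: str) -> str:
    """Render a minimal project-level diligence statement as markdown.
    The four fields loosely mirror Creation, Delegation scope, Transparency,
    and Deployment concerns from the course."""
    return (
        "## Diligence statement\n"
        f"- **Systems used (Creation):** {', '.join(systems)}\n"
        f"- **What I delegate:** {delegated}\n"
        f"- **How I disclose (Transparency):** {disclosure}\n"
        f"- **How I verify before shipping (Deployment):** {verification}"
    )
```

A statement like this answers the "was AI involved, and who checked it?" question before anyone has to ask it.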

Three modes of engagement frame the whole framework. Automation is AI executing a specific task you scoped (write this email, summarize this doc). Augmentation is you and AI as creative partners thinking through a problem together. Agency is you setting up the AI to act independently: agents, autonomous loops, scheduled tasks. The 4Ds apply to all three modes, but the *weight* shifts. In Automation, Description carries the load. In Augmentation, Discernment dominates because you're iterating live. In Agency, Diligence becomes structural; you cannot review every action live, so the system itself has to encode your judgment. Most exam scenarios live in Agency mode, which is why the framework matters for the architect role.

Where this course fits in your prep: if you're coming from Claude 101 or Claude Code 101, AI Fluency is the meta-layer that ties them together. It will not teach you a single API call or a single feature. It will fix the mental model that determines how well every other course you take actually sticks. The course is short (15 lessons, ~75 minutes including videos), heavy on reflection exercises, and intentionally cross-domain; the same framework applies whether you're a writer using Claude.ai, a developer using Claude Code, or an architect designing an agentic system. Skim the lesson videos at 1.5x; the framework itself is what stays with you.

## Patterns

### The 4 Ds at a glance

Each D has three sub-skills. Memorize the structure once and the rest of the course is filling in the cells.

- **Delegation.** Problem Awareness, Platform Awareness, Task Delegation. Decide what to give the AI before you type the first prompt.
- **Description.** Product (what), Process (how), Performance (interaction style). The prompting competency, three layers most people collapse into one.
- **Discernment.** Product (output quality), Process (reasoning), Performance (interaction). You cannot evaluate output you don't understand; expertise gates discernment.
- **Diligence.** Creation (system choice), Transparency (disclosure), Deployment (you own it). The ethics layer, operationalized through policy, not memos.

### 6 foundational prompting techniques

These come from Lesson 9 and map directly to Description sub-skills. Use them as a self-check when output is off.

- **Give context.** What you want, why you want it, who it's for, relevant background. Most prompt failures are missing context, not bad wording.
- **Show examples.** One or two examples of the output style or format you want. Multishot beats explanation almost every time.
- **Specify constraints.** Format, length, what to include, what to exclude. Constraints are how you make output checkable.
- **Break into steps.** Decompose multi-step reasoning. Claude follows numbered plans more reliably than implicit chains.
- **Ask the AI to think first.** Give Claude room to reason before answering. Use scratchpad XML tags or extended thinking when stakes are high.
- **Define role or tone.** Specify who Claude is acting as and how it should communicate. Performance Description in one line.
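
The six techniques above can be treated as slots in a prompt plan; empty slots are the first places to look when output is off. This is a hypothetical sketch, not course material or an Anthropic API; all names are illustrative:

```python
from dataclasses import dataclass, fields

@dataclass
class PromptPlan:
    context: str = ""      # give context: what, why, who it's for
    examples: str = ""     # show examples of the target output
    constraints: str = ""  # format, length, include/exclude
    steps: str = ""        # numbered decomposition of the task
    think_first: str = ""  # e.g. "reason step by step before answering"
    role: str = ""         # who Claude is acting as, and the tone

    def missing(self) -> list[str]:
        """Techniques not yet used: the self-check when output is off."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

    def render(self, task: str) -> str:
        """Join the filled-in slots and the task into one prompt string."""
        parts = [self.role, self.context, self.constraints,
                 self.examples, self.steps, self.think_first]
        return "\n\n".join([p for p in parts if p] + [task])
```

Running `missing()` on a failing prompt turns troubleshooting from guesswork into a checklist.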

## Key takeaways

- The 4D framework (Delegation, Description, Discernment, Diligence) is the operating system under every other AI skill; weakness in one degrades the others. (`4d-framework`)
- Most AI failures trace back to Delegation (wrong split), not prompting; fix the split before you fix the prompt. (`4d-framework`)
- Description has three layers: Product (what), Process (how), Performance (interaction style). Collapsing them into one is the most common prompting mistake. (`prompt-engineering-techniques`)
- Discernment requires domain expertise; you cannot reliably evaluate output you don't understand, which is why human-in-the-loop is structural, not optional. (`evaluation`)
- Diligence is operationalized through Creation (system choice), Transparency (disclosure), and Deployment (you own the output). Write a Diligence statement for any project you ship. (`claude-for-operations`)
- Three modes of engagement (Automation, Augmentation, Agency) shift which D carries the load; in Agency mode (agents), Diligence has to be encoded structurally. (`conversational-ai-patterns`)

## Concepts in play

- **4D framework** (`4d-framework`): the course's central organizing concept
- **Prompt engineering techniques** (`prompt-engineering-techniques`): Description's tactical layer
- **Evaluation** (`evaluation`): Discernment's structural counterpart
- **System prompts** (`system-prompts`): where Performance Description lives in practice
- **Attention engineering** (`attention-engineering`): Process Description at the prompt-architecture level

## Scenarios in play

- **Conversational AI patterns** (`conversational-ai-patterns`): Augmentation-mode use case where Discernment dominates
- **Claude for operations** (`claude-for-operations`): Agency-mode use case where Diligence becomes structural

## Curated sources

- **AI Fluency framework, Anthropic Learn** (anthropic-blog, 2025-06-01): Anthropic's home page for the framework, with the source white paper by Rick Dakan and Joseph Feller plus the diligence-statement template referenced in Lesson 12.
- **Prompt engineering overview, Anthropic documentation** (anthropic-blog, 2025-09-01): Canonical Anthropic prompt-engineering guide that the six Lesson 9 techniques map onto cleanly. Use it as the technical companion to the framework.
- **AI Fluency: A Framework for the AI Age (Dakan & Feller)** (paper, 2024-09-01): Original academic source for the 4D framework; deeper than the Skilljar course on the pedagogical and ethical foundations.

## FAQ

### Q1. What does the 4D framework stand for in AI fluency?

The four Ds are Delegation, Description, Discernment, and Diligence. Delegation is deciding what to give the AI; Description is communicating clearly with the AI; Discernment is evaluating what the AI produces; Diligence is taking responsibility for the outcome. They are competencies that compose, not stages; every meaningful AI interaction touches all four.

### Q2. How is description different from prompt engineering?

Prompt engineering is *one layer* of Description. Description has three sub-skills: Product Description (what you want), Process Description (how the AI should get there), and Performance Description (how you want the interaction to feel). Prompt engineering techniques (context, examples, constraints, decomposition) sit inside Product and Process Description. Performance Description ("push back on bad ideas", "ask before guessing") is the layer most prompt-engineering guides leave out.

### Q3. Why do AI experts say delegation matters more than prompting?

Because most AI failures are wrong-split failures, not wrong-prompt failures. If you hand Claude a task it cannot do well (deep domain reasoning without your context, novel research with no source material), no amount of prompting saves the output. The reverse is also true: if you keep tasks Claude could do faster (mechanical reformatting, boilerplate generation), you're paying a tax for no reason. Fix the split first, then optimize the prompt.

### Q4. What is a diligence statement and why would I write one?

A diligence statement is a short written declaration of how you use AI in a specific project or role: which systems you use, what you delegate, how you disclose, how you verify before shipping. It operationalizes Diligence into policy rather than vague intent. The Skilljar course links to the example diligence statement Anthropic itself published for the AI Fluency course. Useful in academic, professional, regulated, or client-facing work where AI involvement is a question that will be asked.

### Q5. How do I evaluate Claude's output if I'm not an expert in the topic?

You can't, reliably, and the course is direct about that. Discernment depends on domain expertise; without it, you can spot fluency but not accuracy. The practical workarounds: only delegate evaluation-required work in domains where you *are* expert, recruit a human reviewer who is, or design a verification step (independent source, eval set, automated test) that does not require you to judge the content. Trusting AI output you cannot evaluate is the failure mode the framework exists to prevent.

### Q6. What are the three modes of engaging with AI according to the framework?

Automation, where AI executes a specific task you scoped. Augmentation, where you and AI collaborate as creative partners on the same problem. Agency, where you guide AI to act independently on your behalf: agents, scheduled jobs, autonomous loops. The 4Ds apply to all three modes, but the weight shifts: Description carries Automation, Discernment carries Augmentation, Diligence has to be structurally encoded in Agency.

### Q7. Is AI fluency a technical skill or a soft skill?

Both, deliberately. Delegation and Description involve technical literacy (knowing what models can do, how prompts compose, what context windows are). Discernment and Diligence involve judgment, ethics, and communication. The framework's premise is that effective AI use cannot be split into "technical" and "soft"; the architect role specifically requires fluency on both axes, which is why the course is foundational for exam prep.

### Q8. Why does my prompt seem clear but Claude still misunderstands?

Most often you've written strong Product Description (what you want) but skipped Process Description (how to get there) or Performance Description (interaction style). Try adding "think step by step before answering", "ask me clarifying questions if anything is ambiguous", or "show your reasoning in `<scratchpad>` tags". The course's secret-weapon move: ask Claude itself to critique and improve your prompt before you run it on the actual task.

---

**Source:** https://claudearchitectcertification.com/knowledge/ai-fluency-framework
**Vault sources:** Course_05/Lesson_03_the-4d-framework.md; Course_05/Lesson_05_capabilities-and-limitations.md; Course_05/Lesson_06_a-closer-look-at-delegation.md; Course_05/Lesson_08_a-closer-look-at-description.md; Course_05/Lesson_09_effective-prompting-techniques.md; Course_05/Lesson_10_a-closer-look-at-discernment.md; Course_05/Lesson_11_the-description-discernment-loop.md; Course_05/Lesson_12_a-closer-look-at-diligence.md
**Last reviewed:** 2026-05-06
