You'll walk away with
- What AI Fluency is and the three modes of human-AI engagement (automation, augmentation, agency)
- The 4D framework end-to-end: Delegation, Description, Discernment, Diligence
- How to apply Problem / Platform / Task awareness inside Delegation
- How to write Product, Process, and Performance descriptions when prompting
- Six foundational prompting techniques and how to troubleshoot when output is off
- How the Description-Discernment loop tightens iteratively, and what Diligence statements are for
Lesson outline
Every lesson from AI Fluency: Framework & Foundations, each with our one-line simplification. The Skilljar course is the source; we summarize.
| # | Skilljar lesson | Our simplification |
|---|---|---|
| 1 | Introduction to AI Fluency | AI Fluency means engaging with AI effectively, efficiently, ethically, and safely; framework-driven, not vibes-driven. |
| 2 | Why do we need AI Fluency? | Generative AI shifts work from execution to judgment; without a framework, output quality regresses to the prompt-and-pray mean. |
| 3 | The 4D Framework | Four competencies (Delegation, Description, Discernment, Diligence) that compose into every productive AI interaction. |
| 4 | Generative AI fundamentals | How LLMs are trained, what tokens are, why models hallucinate; the technical floor under all four Ds. |
| 5 | Capabilities & limitations | What Claude can do well, what it cannot, and why knowing both is the prerequisite for good Delegation. |
| 6 | A closer look at Delegation | Three sub-skills: Problem Awareness, Platform Awareness, Task Delegation; decide what to give the AI before you start typing. |
| 7 | Project planning and Delegation | Map a project's tasks, mark which need human judgment vs. which can go to AI, build a delegation plan together with Claude. |
| 8 | A closer look at Description | Three sub-skills: Product Description (what), Process Description (how), Performance Description (style of interaction). |
| 9 | Effective prompting techniques | Six techniques: give context, show examples, specify constraints, break into steps, ask AI to think first, define role/tone. |
| 10 | A closer look at Discernment | Three sub-skills: Product Discernment (output quality), Process Discernment (reasoning), Performance Discernment (interaction). |
| 11 | The Description-Discernment loop | Description and Discernment form a feedback loop; sharper description improves output, sharper discernment improves your next description. |
| 12 | A closer look at Diligence | Three sub-skills: Creation Diligence (system choice), Transparency Diligence (disclosure), Deployment Diligence (own the output). |
| 13 | Conclusion | Course wrap-up: the 4Ds compose; weakness in any one degrades the others; develop them deliberately. |
| 14 | Certificate of completion | Skilljar issues a verifiable certificate URL; logistics only. |
| 15 | Additional activities | Optional exercises: draft a Diligence statement, audit a recent AI interaction through the 4D lens. |
The course in 7 paragraphs
The 4D Framework is the operating system. Every other Skilljar course you take (Claude Code, MCP, Tool Use, RAG) teaches a *capability*. AI Fluency teaches the *discipline* under those capabilities: how to decide what to delegate, how to communicate it, how to evaluate the result, and how to take responsibility for what you ship. The four Ds (Delegation, Description, Discernment, Diligence) are not stages; they are competencies that compose. A great prompt with bad delegation is wasted effort. Great evaluation with no diligence is dishonest. Treat the 4Ds as a checklist you run on every meaningful AI interaction, not a one-time mindset.
Delegation is the first D and the most under-practiced. It has three sub-skills: Problem Awareness (do I actually understand what I'm trying to do?), Platform Awareness (what can this specific AI system do, and where does it fall over?), and Task Delegation (which slices go to me, which to the AI, which to a human collaborator). The course's central insight: most AI failures trace back to Delegation, not prompting. People hand off tasks the AI cannot do (deep domain reasoning without context) or keep tasks the AI could do better (mechanical reformatting). The fix is not a better prompt; it's a better split.
Description is the prompting competency, and the course breaks it into three layers most people collapse into one. Product Description is *what* you want: the output, format, audience, length. Process Description is *how* you want the AI to get there: show your work, ask before you guess, break into steps. Performance Description is *how you want to interact*: be concise, push back on bad ideas, ask clarifying questions before starting. Six prompting techniques follow: give context, show examples, specify constraints, break into steps, ask the AI to think first, define role or tone. The biggest unlock is realizing AI is a partner, not a vending machine; Performance Description is the layer that earns you the partnership.
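The three layers are easiest to keep apart when you write them out separately. A minimal sketch of the idea (the section labels, helper name, and example wording are ours, not from the course):

```python
def build_prompt(product: str, process: str, performance: str, task: str) -> str:
    """Assemble a prompt that keeps the three Description layers explicit.

    Illustrative convention only: labeling the layers in the prompt text
    is our study aid, not an official prompt schema.
    """
    return "\n\n".join([
        f"Product (what I want): {product}",
        f"Process (how to get there): {process}",
        f"Performance (how to interact): {performance}",
        f"Task: {task}",
    ])

prompt = build_prompt(
    product="A 200-word summary for a non-technical executive audience.",
    process="Read the full document first, then outline before drafting.",
    performance="Ask one clarifying question if the source is ambiguous.",
    task="Summarize the attached incident report.",
)
print(prompt)
```

Writing the layers as separate arguments makes it obvious when one is missing; a prompt with an empty `performance` slot is usually the vending-machine prompt the course warns about.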
Discernment is Description's mirror image, and the course makes a sharp claim: you cannot evaluate AI output well without your own expertise. There are three flavors. Product Discernment: is the output accurate, appropriate, coherent, relevant? Process Discernment: did the reasoning track, or did it skip steps and confabulate? Performance Discernment: did the AI behave well during the interaction, or did it sycophantically agree, drift, or refuse to push back? The Discernment-Description feedback loop is where fluency compounds: each round of evaluation sharpens the next round of description, and the AI's output quality climbs with you.
Diligence is the ethics layer and the one that's hardest to formalize. Three sub-skills: Creation Diligence (which system you choose and why: privacy, alignment, capability fit), Transparency Diligence (being honest with everyone who needs to know that AI was involved: readers, clients, students, regulators), and Deployment Diligence (you own the output, full stop; the AI may have drafted it, but *you* shipped it). The course recommends drafting a personal or project-level diligence statement declaring how you use AI in your work. The exam-relevant take is that Diligence is what stops Claude's helpfulness from becoming your liability, and a senior architect operationalizes it with policy, not memos.
Three modes of engagement frame the whole framework. Automation is AI executing a specific task you scoped (write this email, summarize this doc). Augmentation is you and AI as creative partners thinking through a problem together. Agency is you setting up the AI to act independently: agents, autonomous loops, scheduled tasks. The 4Ds apply to all three modes, but the *weight* shifts. In Automation, Description carries the load. In Augmentation, Discernment dominates because you're iterating live. In Agency, Diligence becomes structural; you cannot review every action live, so the system itself has to encode your judgment. Most exam scenarios live in Agency mode, which is why the framework matters for the architect role.
Where this course fits in your prep: if you're coming from Claude 101 or Claude Code 101, AI Fluency is the meta-layer that ties them together. It will not teach you a single API call or a single feature. It will fix the mental model that determines how well every other course you take actually sticks. The course is short (15 lessons, ~75 minutes including videos), heavy on reflection exercises, and intentionally cross-domain; the same framework applies whether you're a writer using Claude.ai, a developer using Claude Code, or an architect designing an agentic system. Skim the lesson videos at 1.5x; the framework itself is what stays with you.
The 4 Ds at a glance
Each D has three sub-skills. Memorize the structure once and the rest of the course is filling in the cells.
- Delegation: Problem Awareness, Platform Awareness, Task Delegation. Decide what to give the AI before you type the first prompt. (Concept: 4d-framework)
- Description: Product (what), Process (how), Performance (interaction style). The prompting competency: three layers most people collapse into one. (Concept: prompt-engineering-techniques)
- Discernment: Product (output quality), Process (reasoning), Performance (interaction). You cannot evaluate output you don't understand; expertise gates discernment. (Concept: evaluation)
- Diligence: Creation (system choice), Transparency (disclosure), Deployment (you own it). The ethics layer, operationalized through policy, not memos. (Concept: 4d-framework)
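If it helps the memorization, here is the same 4x3 grid written out as data; this is a study aid of ours, not course material, though the sub-skill names follow the lesson summaries above:

```python
# The 4x3 structure of the framework: four Ds, three sub-skills each.
FOUR_DS = {
    "Delegation":  ["Problem Awareness", "Platform Awareness", "Task Delegation"],
    "Description": ["Product Description", "Process Description", "Performance Description"],
    "Discernment": ["Product Discernment", "Process Discernment", "Performance Discernment"],
    "Diligence":   ["Creation Diligence", "Transparency Diligence", "Deployment Diligence"],
}

# Sanity check: every D has exactly three sub-skills.
assert all(len(subs) == 3 for subs in FOUR_DS.values())
```

Note the symmetry the course exploits: Description and Discernment share the same Product / Process / Performance axis, which is what makes them a feedback loop.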
6 foundational prompting techniques
These come from Lesson 9 and map directly to Description sub-skills. Use them as a self-check when output is off.
- Give context: what you want, why you want it, who it's for, relevant background. Most prompt failures are missing context, not bad wording. (Concept: prompt-engineering-techniques)
- Show examples: one or two examples of the output style or format you want. Multishot beats explanation almost every time. (Concept: prompt-engineering-techniques)
- Specify constraints: format, length, what to include, what to exclude. Constraints are how you make output checkable. (Concept: prompt-engineering-techniques)
- Break into steps: decompose multi-step reasoning. Claude follows numbered plans more reliably than implicit chains. (Concept: prompt-engineering-techniques)
- Ask the AI to think first: give Claude room to reason before answering. Use scratchpad XML tags or extended thinking when stakes are high. (Concept: attention-engineering)
- Define role or tone: specify who Claude is acting as and how it should communicate. Performance Description in one line. (Concept: system-prompts)
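One way to self-check a prompt against all six techniques at once is to draft each as its own block and only then join them. A hypothetical sketch (the release-notes scenario, wording, and tag names are ours, not from the lesson):

```python
# Technique 6 (define role/tone) typically lives in the system prompt.
system = "You are a release-notes editor for a developer tools team."

parts = [
    # Technique 1: give context
    "Context: we ship weekly; readers are developers who skim.",
    # Technique 2: show examples
    "<example>\n- Fixed: CLI crash on empty config\n</example>",
    # Technique 3: specify constraints
    "Constraints: at most 10 bullets, past tense, one line each.",
    # Technique 4: break into steps
    "Steps:\n1. Group the raw commits by area.\n2. Drop internal-only changes.\n3. Draft the bullets.",
    # Technique 5: ask the AI to think first
    "Reason about the grouping inside <thinking> tags before you answer.",
]
user_prompt = "\n\n".join(parts)
print(user_prompt)
```

Because each technique is a separate list element, a missing one is visible at a glance; that is the "self-check when output is off" the lesson describes, made mechanical.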
6 takeaways with cross-pillar bridges
The 4D framework (Delegation, Description, Discernment, Diligence) is the operating system under every other AI skill; weakness in one degrades the others.
Most AI failures trace back to Delegation (wrong split), not prompting; fix the split before you fix the prompt.
Description has three layers: Product (what), Process (how), Performance (interaction style). Collapsing them into one is the most common prompting mistake.
Discernment requires domain expertise; you cannot reliably evaluate output you don't understand, which is why human-in-the-loop is structural, not optional.
Diligence is operationalized through Creation (system choice), Transparency (disclosure), and Deployment (you own the output). Write a Diligence statement for any project you ship.
Three modes of engagement (Automation, Augmentation, Agency) shift which D carries the load; in Agency mode (agents), Diligence has to be encoded structurally.
How this maps to the CCA-F exam
3 hand-picked extras
These amplify the Skilljar course beyond what the course itself covers. Each was picked for a specific reason.
- AI Fluency framework (Anthropic Learn): Anthropic's home page for the framework, with the source white paper by Rick Dakan and Joseph Feller plus the diligence statement template referenced in Lesson 12.
- Prompt engineering overview (Anthropic documentation): the canonical Anthropic prompt-engineering guide that the Lesson 9 six techniques map onto cleanly. Use it as the technical companion to the framework.
- AI Fluency: A Framework for the AI Age (Dakan & Feller): the original academic source for the 4D framework; deeper than the Skilljar course on the pedagogical and ethical foundations.
Concepts in this course
- 4D framework: the course's central organizing concept
- Prompt engineering techniques: Description's tactical layer
- Evaluation: Discernment's structural counterpart
- System prompts: where Performance Description lives in practice
- Attention engineering: Process Description at the prompt-architecture level
Where you'll see this in production
Other course mirrors you may want next
8 questions answered
Phrased the way real students search. Tagged by intent so you can scan to what you actually need.
