Revised Bloom’s Taxonomy for an AI World

We need a taxonomy that treats AI not as an optional bolt-on, but as a collaborator requiring students to develop new literacies—deep knowledge, interpretation, prompt-crafting, critical scrutiny, evaluation, and co-creation—while keeping human expertise firmly in the driver’s seat.


1. Knowing (Instead of Remembering)

Building an active, accurate knowledge base and understanding AI’s limits.

  • What it is: Mastery of domain facts, concepts, and context—plus knowing where AI typically makes confident but incorrect assertions.

  • Why it matters: Without a robust foundation (e.g., planetary radiation levels, biochemical pathways), you can’t recognize AI’s mistakes.

  • Strategies for the classroom:

    • Annotated source-checks: Students document and verify every fact AI suggests (a minimal log sketch follows this list).

    • Concept maps: Visually link key ideas and highlight gaps AI may overlook.

    • Mini‐debates: Teams challenge AI-generated claims, forcing deep recall and justification.
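
To make the annotated source-check concrete, here is a minimal sketch of the kind of digital log a class might keep. The field names and the sample entry are illustrative assumptions, not a fixed format.

```python
from dataclasses import dataclass

@dataclass
class SourceCheck:
    """One row of an annotated source-check log (field names are illustrative)."""
    ai_claim: str       # the fact as the AI stated it
    cited_source: str   # the reference the student tracked down
    verified: bool      # did the source actually support the claim?
    note: str = ""      # correction or context if the claim failed

log = [
    SourceCheck(
        ai_claim="Europa's surface receives negligible radiation.",
        cited_source="NASA Europa fact sheet",
        verified=False,
        note="Jupiter's magnetosphere delivers intense surface radiation.",
    ),
]

# Summarize how often the AI's confident claims actually held up.
accuracy = sum(c.verified for c in log) / len(log)
print(f"Verified {accuracy:.0%} of AI-suggested facts.")
```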


2. Interpreting (Instead of Understanding)

Decoding both subject matter and how AI “thinks.”

  • What it is: Grasping concepts well enough to translate, explain, and spot where AI’s pattern-based responses diverge from true reasoning.

  • Why it matters: You must see not only what AI says, but how it arrives there—and anticipate its blind spots.

  • Classroom moves:

    • Ask students to paraphrase AI outputs in their own words, noting any leaps or missing premises.

    • Have them diagram AI’s reasoning as a flowchart and identify unstated assumptions.


3. Applying with AI

Using knowledge and AI tools together—broken into two complementary sub-skills.

a. Prompt Crafting

  • What it is: The art of writing precise, targeted prompts that steer AI toward useful, accurate answers.

  • Why it matters: A vague prompt yields junk. Effective prompting is a technical skill that underpins every AI interaction.

  • Classroom moves:

    • Prompt‐revision workshops: students iteratively refine queries and compare outputs.

    • Prompt templates: “Explain X under constraint Y,” “Compare A and B with respect to C” (a minimal code sketch of these follows below).
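
As a concrete illustration, here is a minimal Python sketch of such templates. The template names and slots are hypothetical; the point is that a targeted prompt is built from explicit, fillable parts.

```python
# Minimal prompt-template sketch; template names and slots are illustrative.
TEMPLATES = {
    "explain_under_constraint": (
        "Explain {topic} assuming {constraint}. Cite the evidence you rely on."
    ),
    "compare": (
        "Compare {a} and {b} with respect to {criterion}. "
        "List points of disagreement explicitly."
    ),
}

def build_prompt(name: str, **slots: str) -> str:
    """Fill a named template; str.format raises KeyError if a slot is missing."""
    return TEMPLATES[name].format(**slots)

# The same question, asked two targeted ways:
print(build_prompt("explain_under_constraint",
                   topic="abiotic amino-acid formation",
                   constraint="Europa-level surface radiation"))
print(build_prompt("compare",
                   a="abiotic synthesis", b="biotic synthesis",
                   criterion="required energy sources"))
```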

b. Applying AI Outputs

  • What it is: Integrating AI’s suggestions into real-world or project contexts, then refining them with human judgment.

  • Why it matters: AI can draft a lab protocol or design outline—but only you can adapt it to your specific data, constraints, and goals.

  • Classroom moves:

    • Case studies: start from an AI-generated draft, then annotate where you’d modify steps based on local conditions and constraints.

    • Peer review: students swap AI-assisted work and suggest domain-specific tweaks.


4. Decomposing AI Arguments (Instead of Analyzing)

Holding AI accountable by breaking down its claims.

  • What it is: Dissecting AI’s output into individual assertions, evidence, assumptions, and logical steps.

  • Why it matters: True analysis means finding gaps—e.g., AI may list abiotic pathways but ignore radiation constraints.

  • Classroom moves:

    • Claim–evidence charts: isolate each AI claim and flag missing or weak support (sketched in code after this list).

    • Bias checklists: identify where training data or algorithms may skew results.
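
A claim–evidence chart can also live in code. The sketch below is a minimal, illustrative structure; the sample claims echo the Europa example and are placeholders, not vetted science.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One row of a claim-evidence chart (structure is illustrative)."""
    assertion: str
    evidence: list[str] = field(default_factory=list)
    assumptions: list[str] = field(default_factory=list)

    def flag(self) -> str:
        if not self.evidence:
            return "UNSUPPORTED"
        if self.assumptions:
            return "CHECK ASSUMPTIONS"
        return "OK"

chart = [
    Claim("Amino acids can form abiotically on Europa",
          evidence=["Miller-Urey-type chemistry"],
          assumptions=["ignores surface radiation flux"]),
    Claim("Europa's subsurface ocean contacts a rocky seafloor"),
]

for c in chart:
    print(f"[{c.flag()}] {c.assertion}")
```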


5. Evaluating with AI

Judging the validity, reliability, and relevance of AI’s work.

  • What it is: Applying a clear rubric to decide when to trust, modify, or reject AI outputs.

  • Why it matters: AI is fallible and overconfident; sound judgment rests on solid criteria.

  • Evaluation rubric:

    1. Source validity: Are references credible and current?

    2. Logical consistency: Do steps follow rigorously from premises?

    3. Bias detection: Has the model introduced skew or omitted viewpoints?

  • Classroom moves:

    • Rubric-based reviews: score AI responses and justify each rating (a minimal scoring sketch follows this list).

    • Counter-example exercises: find real cases where AI fails each rubric criterion.
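
For classes that want the rubric to be executable, here is a minimal scoring sketch. The 0–2 scale and the trust/modify/reject cutoffs are illustrative choices, not a standard.

```python
# Minimal rubric-scoring sketch; the criteria mirror the rubric above, while
# the 0-2 scale and the trust/modify/reject cutoffs are illustrative choices.
RUBRIC = ("source_validity", "logical_consistency", "bias_detection")

def score_response(ratings: dict[str, int]) -> str:
    """Rate each criterion 0 (fails), 1 (partial), or 2 (solid)."""
    missing = [c for c in RUBRIC if c not in ratings]
    if missing:
        raise ValueError(f"unrated criteria: {missing}")
    total = sum(ratings[c] for c in RUBRIC)
    if total >= 5:
        return "trust"
    if total >= 3:
        return "modify"
    return "reject"

# Shaky sources but sound logic lands in "modify", not "trust".
print(score_response({"source_validity": 1,
                      "logical_consistency": 2,
                      "bias_detection": 1}))
```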


6. Creating with AI

Co-creating original work by blending AI’s strengths with your expertise.

  • What it is: Producing projects—essays, simulations, designs—where AI’s output is only the starting point, refined and directed by your domain knowledge.

  • Why it matters: AI can generate ideas en masse, but only you can ensure they’re accurate, nuanced, and ethically sound.

  • Mini-case narrative (sketched in code after the steps):

    1. Draft hypothesis: A student proposes an abiotic pathway on Europa.

    2. AI simulation: They prompt a model to generate reaction conditions.

    3. Refinement: The student adjusts parameters (radiation dose, temperature range).

    4. Critique: They assess AI’s assumptions—e.g., neglect of ice chemistry—and revise the final model.
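
The same four-step loop can be sketched in code. Everything here is a stand-in: ask_model represents whichever AI tool the class uses, and the parameter names and values are hypothetical placeholders, not Europa data.

```python
# Sketch of the hypothesis -> simulate -> refine -> critique loop above.
def ask_model(prompt: str) -> str:
    return "model response goes here"  # replace with a real AI call

params = {"radiation_dose": 1.0, "temperature_k": 100}  # hypothetical values
hypothesis = "Amino acids form abiotically in Europa's near-surface ice."

for round_num in range(1, 4):  # a few refinement rounds, not a fixed recipe
    prompt = (f"Given conditions {params}, what reaction conditions would "
              f"support this hypothesis? {hypothesis} "
              f"State your assumptions explicitly.")
    draft = ask_model(prompt)
    # Critique: the student, not the model, decides what to revise, e.g.
    # tightening the radiation dose if the draft ignored ice chemistry.
    params["radiation_dose"] *= 0.5  # placeholder adjustment
    print(f"Round {round_num}: {draft}")
```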

From 1928 to 2025: Researching a Minimum-Palmos with the Help of AI

There’s something magical about holding a piece of true history — and something equally fascinating about blending it with today's technology.

Recently, I picked up a Zeiss Ikon Minimum-Palmos — a compact 6.5×9 cm folding plate camera first designed in the early 1900s.
It’s a marvel of German engineering: leather bellows, rack-and-pinion focusing, a ground-glass hood, and a roaring focal-plane shutter capable of 1/750s exposures — all powered purely by springs and gears.

But stepping into that world meant learning an entire system: no manuals, no spare parts, and little modern documentation.
That’s where AI came in — acting like a digital research assistant, technician, and even a design partner.


🛠 What We Did Together

  • Identification:
    We confirmed the model, production range (~1927–28), and the lens serial number (Tessar 12 cm f/4.5).

  • Condition Evaluation:
    Using photos, AI helped walk through a 10-point historical camera inspection — including the shutter curtain, bellows, struts, and light traps — and produced a full condition report from the photographs.

  • Historical Validation:
    We compared it against WWI-era standards and verified it as an excellent match for Weimar-era photography.

  • Accessory Fitting:
    After I sourced a vintage Rada Rollfilm-Kassette, AI analyzed the fit — confirming it was the correct thin DIN-style back that would work without modification.

  • Manual Recreation:
    Together, we even recreated a 1930-style field manual for using the Rada back, written to match the language and formatting of period German photographic handbooks.


📜 What It Showed Me

There’s something poetic about restoring one of the last great mechanical cameras while using one of the newest technologies humanity has created.

  • Old Tech: Pure mechanics. Craftsmanship. Manual focus. Curtain shutters.

  • New Tech: Instant research. Document recreation. 3D printing design. Preservation support.

Instead of feeling like cheating, the AI became a kind of historical interpreter — bridging the language, technical drawings, and obscure knowledge of a century ago to make the restoration not just possible, but meaningful.


🎯 In the End:

  • One vintage folding camera ready for service again.

  • One rollfilm back mounted and usable.

  • One proud owner stepping back into history — with a little help from the future.


Stay tuned — next post, I’ll be running the Minimum-Palmos through a live field test using Fomapan 100 film loaded into the Rada back. Photos (hopefully) to come! 📷✨

My Revised Bloom’s Taxonomy for an AI World

 As a science teacher who’s seen the educational pendulum swing from content knowledge to critical thinking, I believe the current emphasis on critical thinking over content is misdirected in an AI world. Tools like Grok and ChatGPT can churn out analyses and arguments faster than any human, but their frequent errors, biases, and oversimplifications make a strong foundation of factual knowledge more crucial than ever. Drawing on my deep content expertise, I’ve reimagined Bloom’s Taxonomy to reflect how learning should evolve when AI is a classroom reality. This revised hierarchy prioritizes content knowledge as the bedrock for challenging AI’s outputs and introduces skills like prompt engineering and output critique, ensuring students can harness AI’s power without being misled by its flaws.

I’ve always relied on my content knowledge to guide my teaching, whether I’m dissecting a textbook or, more recently, testing scientific ideas with AI. For example, when news buzzed about “life chemistry” on another planet, I used AI to explore whether those molecules could form abiotically, then leaned on my biochemical expertise to challenge its claims. This experience convinced me that Bloom’s Taxonomy—Remembering, Understanding, Applying, Analyzing, Evaluating, Creating—needs an overhaul to prepare students for an AI-driven future. AI can mimic higher-order skills like analysis or creation, but without a robust knowledge base, students can’t spot its mistakes. My revised taxonomy redefines the six levels to elevate content knowledge and integrate AI-specific skills, ensuring learners can collaborate with AI while staying in control.

  1. Knowing (Instead of Remembering)
    I see Knowing as building a deep, accurate knowledge base—not just memorizing facts, but mastering domain-specific details and understanding AI’s limitations. In my classroom, I need to know planetary conditions to question AI’s claims about abiotic chemistry. Students must do the same to avoid being fooled by AI’s confident errors. Unlike Bloom’s “Remembering,” which feels passive, Knowing is active and foundational, the starting point for everything else. Without it, you’re at the mercy of AI’s output.
  2. Comprehending (Instead of Understanding)
    Comprehending means grasping concepts and their contexts deeply enough to interpret information and spot AI’s missteps. When AI suggested abiotic pathways for life molecules, I understood the chemistry well enough to see it ignored planetary constraints. This level also includes knowing how AI works—its pattern-based responses aren’t true reasoning—so I can anticipate where it might go wrong. Compared to Bloom’s “Understanding,” Comprehending demands a sharper awareness of AI’s interpretive gaps, making it critical for navigating its summaries or explanations.
  3. Applying with AI (Instead of Applying)
    I use Applying with AI to mean using my knowledge to apply concepts in real-world or AI-assisted scenarios, especially by crafting precise prompts. When I asked AI, “Can amino acids form abiotically?” and followed up with, “What makes this unlikely on Mars?” I was applying my expertise to get useful responses. This isn’t just using knowledge—it’s about prompting AI effectively to generate ideas I can refine. Unlike Bloom’s “Applying,” this level includes prompt engineering as a core skill, because a bad prompt leads to useless AI output. (A minimal code sketch of this follow-up pattern appears after this list.)
  4. Analyzing with AI (Instead of Analyzing)
    Analyzing with AI is about dissecting AI’s responses to find errors, biases, or gaps, using my content knowledge as the lens. When AI listed abiotic pathways, I broke down its claims, spotting where it overlooked radiation or temperature constraints. This isn’t just analyzing data—it’s scrutinizing AI’s logic against what I know to be true. Bloom’s “Analyzing” didn’t account for tech, but in my version, it’s about holding AI accountable, which requires a rock-solid knowledge base.
  5. Evaluating with AI (Instead of Evaluating)
    Evaluating with AI means judging whether AI’s outputs are valid, reliable, or relevant, deciding when to trust or reject them. I evaluated AI’s abiotic chemistry hypothesis as implausible for Europa because I knew its radiation levels ruled out certain processes. My students did this too, using textbook data to weigh AI’s claims. This goes beyond Bloom’s “Evaluating” by focusing on AI’s fallibility—its biases, missing sources, or overconfidence—making content knowledge the key to sound judgment.
  6. Creating with AI (Instead of Creating)
    Creating with AI is producing original work by blending AI’s insights with my verified knowledge, ensuring accuracy and depth. My students’ essays on Europa’s chemistry combined AI’s suggested pathways with their own corrections, grounded in curriculum facts. This isn’t just creating—it’s collaborating with AI while keeping my expertise in charge. Unlike Bloom’s “Creating,” which assumes solo human effort, my version sees AI as a partner, but one that needs constant oversight.
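
To make the follow-up pattern from Applying with AI concrete, here is a minimal sketch assuming the OpenAI Python SDK purely for illustration; the model name is a placeholder, and any chat tool that keeps conversation history works the same way.

```python
# Minimal multi-turn prompting sketch, assuming the OpenAI Python SDK.
# The model name is a placeholder; adapt to whatever tool your class uses.
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment
messages = [{"role": "user", "content": "Can amino acids form abiotically?"}]

first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
messages.append({"role": "assistant",
                 "content": first.choices[0].message.content})

# The follow-up narrows the question with the asker's domain knowledge.
messages.append({"role": "user",
                 "content": "What makes this unlikely on Mars?"})
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second.choices[0].message.content)
```
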
Why I Revised Bloom’s Taxonomy
I’ve watched education prioritize critical thinking over content, assuming students can analyze or evaluate without a strong factual foundation. But AI changes the game. It can generate analyses or arguments in seconds—faster than any student—but it’s often wrong or simplistic, like when it overstated abiotic formation possibilities without considering planetary realities. My experience challenging AI’s claims with my biochemical knowledge shows that content is the anchor for using AI effectively. By elevating Knowing and Comprehending, my taxonomy ensures students have the facts to question AI’s outputs. Adding skills like prompt engineering and output critique prepares them to navigate AI’s strengths and weaknesses, whether they’re debating alien chemistry or tackling real-world problems.
Key Changes:
  • Content Knowledge Comes First: Knowing and Comprehending aren’t “lower” skills—they’re the foundation for everything. Without them, students can’t challenge AI’s errors, as I did with its chemistry claims.
  • AI-Specific Skills: Prompting AI (Applying) and critiquing its outputs (Analyzing, Evaluating) are new necessities, reflecting how I use AI to test ideas.
  • Collaboration with AI: Higher levels (Creating) involve working with AI, but my expertise ensures the final product is accurate, like my students’ essays.
  • Rebalanced Priorities: Critical thinking is still vital, but it’s hollow without content to ground it, fixing the misdirected shift I see in education.
How This Plays Out in My Classroom
My revised taxonomy mirrors how I teach. When I used AI to explore abiotic chemistry, I started with Knowing (planetary facts), moved to Comprehending (abiotic vs. biotic processes), Applied with AI (prompting targeted questions), Analyzed with AI (dissecting its responses), Evaluated with AI (judging plausibility), and guided students to Create with AI (writing evidence-based essays). This approach ensures students don’t just accept AI’s answers—they control it with knowledge. It’s why I believe content knowledge is non-negotiable in an AI world.
In Practice:
  • Knowing: I teach students key facts, like Europa’s ice composition.
  • Comprehending: We discuss how abiotic processes work in specific contexts.
  • Applying with AI: Students prompt AI to hypothesize chemical pathways.
  • Analyzing with AI: They compare AI’s claims to factual data.
  • Evaluating with AI: They decide if AI’s ideas hold up, citing evidence.
  • Creating with AI: They write arguments integrating AI’s ideas with corrections.
Why This Matters
AI is here to stay, and it’s already shaping how my students learn and think. But its ability to mimic critical thinking—spitting out analyses or creative ideas—can fool them if they don’t have the content knowledge to push back. My revised Bloom’s Taxonomy puts knowing and comprehending at the core, ensuring students can use AI as a tool, not a crutch. It prepares them for a future where AI is everywhere, from astrobiology labs to everyday life, by teaching them to question, refine, and create with confidence. This isn’t just a tweak to an old framework—it’s a call to rethink learning so we empower students to stay one step ahead of the machines.