An LLM literacy project
Building intuition about large language models
Fletcher G.W. Christensen
Statistics, UNM
2026

A series of interactive explainers about how large language models actually work — built for people who use them, are curious about them, or are trying to decide what to think about them.

This project began in April 2026 as a contribution to the University of New Mexico's AI Literacy Faculty Fellows (ALFF) program, a College of Arts and Sciences initiative for faculty developing genuine intuition about AI in their teaching and research. The original audience was a group of fifteen colleagues from a variety of academic domains: languages, humanities, social sciences, and natural sciences. The goal was to build artifacts that would let a non-technical reader come away with the kind of understanding that lets them explain a concept accurately to someone else, not just recognize it when they hear it.

During development, the artifacts took on a life beyond their original audience. The seven explainers below are now public. They're written for anyone who has used an LLM and wants a more honest account of what's happening underneath: students, faculty, family members, friends, advisees, colleagues, anyone who's been told these systems are either magic or trivial and suspects the truth is more interesting than either. Each artifact is self-contained and can be read on its own; they're sequenced to build on each other, but you can jump in at any point that interests you.

Two more series are planned: a usage guide series on how to work with these systems effectively, and an ethics series on the harder questions about them. Topics for both are listed below the current artifacts as speculative placeholders.

Seven interactive artifacts on the mechanisms underneath large language models. Each takes one concept and works it through with examples you can poke at directly. Roughly 10–25 minutes each, depending on how far you follow the dig-deeper branches.

A planned second series on how to use large language models effectively. Where the explainer series is about what these systems are, this series will be about what to do with them — practical questions about getting useful work out of LLMs without overtrusting them or underusing them.

Topics below are speculative and may change during development.

  • Prompting strategies — what actually changes the output, and what's superstition
  • Local LLM setup — running models on your own hardware, what's possible at consumer scale
  • Agentic use — what it means to give an LLM tools, and where the rough edges are
  • Working with long documents — practical strategies for context-heavy tasks (tentative)
  • Verification and sanity-checking — habits for catching what the model gets wrong (tentative)
  • Choosing tools and models — how to think about the choice between Claude, ChatGPT, Gemini, and open-weight alternatives (tentative)

A planned third series on the ethical questions raised by large language models — a more open-ended set of artifacts that try to surface the hard questions honestly rather than resolve them prematurely.

Topics below are speculative and may change during development.

  • Hallucination — what it is, why it happens, and what taking it seriously means in practice
  • Training data and consent — whose work was used to build these systems, and what was or wasn't agreed to
  • Environmental impacts — the energy and water costs of training and inference, and what we do and don't know
  • Academic impacts — what it means for teaching, learning, and research when students and scholars have these tools
  • Economic impacts — what gets disrupted, what gets concentrated, who benefits and who pays
  • Equity and language access — who these systems work well for and who they don't (tentative)
  • Concentration of capability — open-weights vs. closed-frontier models, and what's at stake in that distinction (tentative)
  • Anthropomorphism and overtrust — the cognitive habits LLMs encourage and what to do about them (tentative)