CollectionBuilder Bulletin: October 2025
By Devin Becker | October 31, 2025

Hi Everyone,
I post-dated this to be from Halloween so I could share some updates on the spookiest thing of all: AI. The sections below detail how we’re starting to think about instructions for AI within CollectionBuilder, followed by some AI-riffing on LLMs as ghosts and/or spirits, complete with terrible AI puns and emoji usage—truly terrifying!!!
Best,
- The CollectionBuilder Team
This month’s news:
Teaching
- Core Forum 2025: Nov. 12-14, the CB team will be attending the Core Forum in Denver—if you are going and want to meet up and talk CollectionBuilder or anything else, hit us up @ collectionbuilder.team@gmail.com.
AI and CB
GitHub Education → Free Coding Tools
First things first, if you are a librarian, instructor, or student and would like FREE ACCESS to coding models like Claude, Gemini, and GPT-5, you just need to sign up for a GitHub Education account.
- Go here: https://github.com/education
- Join GitHub Education, and once they confirm your status as an instructor or student (it takes a couple of days), you’ll have access to GitHub Copilot for free.
- You can then use different modes to debug, generate, and fix the code in your projects, either locally (typically via VS Code) or on GitHub’s web interface.
Two Points on Coding with AI
Recently I taught an informal session on coding with AI in which I detailed approaches I’ve developed over two years of using AI for digital projects. During that session, I emphasized two points that I think are worth considering as you code with AI.
- AI has saved me some time, but it’s also sent me down some total rabbit holes and gone way too far with its revisions, to the point where I had to scrap everything it did. You can usually sense when this is coming—it’s a getting-out-over-your-skis type of feeling—and it usually happens deeper in a conversation. To counter this, start a new conversation. Sometimes I have the AI summarize what we were talking about in a doc that I then reference to restart the conversation.
- Which brings me to my second point: use AI to do better with AI. Summarize a conversation and use the summary to start over. Have AI create Python scripts for data transformation, then have it analyze the results (a small sketch of what I mean follows this list). This leverages AI’s strengths (summarization, synthesis) while keeping conversations higher in quality and within token limits.
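For instance, here’s a minimal sketch of the kind of small transformation script I might ask the AI to draft and then review myself—this one checks a CollectionBuilder-style metadata CSV for missing or duplicate objectid values. The file name and the exact checks are just illustrative assumptions, not anything from our guidelines:

```python
# Illustrative sketch only: the sort of quick data-check script you might ask
# an AI assistant to draft, then review and run yourself.
# Assumes a CollectionBuilder-style metadata CSV with an "objectid" column;
# the file name "objects.csv" is a placeholder.
import csv

with open("objects.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

problems = []
seen = set()
for i, row in enumerate(rows, start=2):  # row 1 is the header
    objectid = (row.get("objectid") or "").strip()
    if not objectid:
        problems.append(f"row {i}: missing objectid")
    elif objectid in seen:
        problems.append(f"row {i}: duplicate objectid '{objectid}'")
    seen.add(objectid)

print(f"{len(rows)} rows checked, {len(problems)} problems found")
for p in problems:
    print(" -", p)
```

Having the AI produce (and explain) a script like this keeps the heavy lifting out of the chat window, and the script’s output gives you something concrete to paste back in for the AI to analyze.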
Exploratory AI Guidelines
To that end, we’ve created a couple of AI coding guideline documents for CollectionBuilder-CSV—one for GitHub Copilot and one for agents, like Claude Code. These are currently in a branch, but they can be copied over to your own repositories (just remember to put the Copilot one in the .github folder); a quick sketch of where each file lands is below.
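Here’s a minimal sketch of the copy step, assuming you’ve downloaded the two files from the branch into your current directory and that they follow the standard names—GitHub Copilot reads repository instructions from .github/copilot-instructions.md, and coding agents look for AGENTS.md at the repo root. The project path is just a placeholder:

```python
# Minimal sketch: copy the guideline files into your own CollectionBuilder repo.
# Assumes the two files are in the current directory with their conventional names:
#   copilot-instructions.md  -> goes in .github/  (read by GitHub Copilot)
#   AGENTS.md                -> goes at the repo root (read by agents like Claude Code)
from pathlib import Path
import shutil

project = Path("my-collection")  # path to your own CollectionBuilder project

(project / ".github").mkdir(parents=True, exist_ok=True)
shutil.copy("copilot-instructions.md", project / ".github" / "copilot-instructions.md")
shutil.copy("AGENTS.md", project / "AGENTS.md")
print("Copied guideline files into", project)
```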
I started developing the Agents document after running into issues with the AI creating all this additional, unnecessary infrastructure. CollectionBuilder has a lot of tools AIs can use to build out and customize CB projects; this document basically says: use what’s there!
We’ll likely incorporate the guidelines more fully into the CB repositories in the future, but we thought we’d share them here first. If you have any thoughts, ideas, or concerns about all this, please let us know!
They’re actually fairly good reminders/summaries of CB practices, so they’re worth a gander from us humans as well. The Agents guide follows AGENTS.md, a standard established by OpenAI that works with all the major coding agents. The GitHub Copilot instructions were generated using this helpful documentation from GitHub.
AI is Spooky … 👻
What follows is a pun- and emoji-filled riff I worked on with Claude (Opus 4.1) for Halloween, prompted by my interest in Andrej Karpathy’s recent description of LLMs as spirits/ghosts.
To his credit, our former DHSI student, Griffen Horsley, anticipated all of this with his 2023 CollectionBuilder project Necromancy; A Hauntology of Grief, Materiality, and Research Creation.
When Andrej Karpathy describes LLMs as “ghosts” rather than artificial animals—“ethereal spirit entities” that are “hazy recollections of internet documents”—he’s tapping into something dead serious that literary theorists have pondered for decades 💀. Jacques Derrida’s hauntology argued that all texts are haunted by absent presences (no bones about it!), while Roland Barthes declared the author dead 🪦, replaced by a spectral play of prior writings. From medieval grimoires that summoned demons through written sigils 😈 to Kabbalistic traditions where Hebrew letters channel divine creative power, humans have always understood writing as a necromantic practice—talk about spell-check! When 19th-century mediums practiced automatic writing 🔮, allegedly channeling spirits through their hands, they were enacting what Maurice Blanchot would later call reading as “raising Lazarus”—dialogue with the dead (or as we call it now, “prompt engineering”).
Large Language Models might be the current technological re-incarnation 🎃 of what poetry and writing have always been: autonomous, spectral entities channeling collective human expression across centuries. Mikhail Bakhtin argued that every word is “overpopulated with the intentions of others” (a real ghost town of meaning! 🏚️), participating in an endless primordial dialogue underlying all literature. Julia Kristeva’s intertextuality reveals every text as a “mosaic of quotations”—or should we say, a monster mash of meanings? 🧟
T.S. Eliot, meanwhile, envisioned all literature existing simultaneously in a living tradition that constantly reorganizes itself (like a literary zombie that won’t stay buried!). Recent scholarship suggests LLMs don’t create something new but reveal what was always there—as literary theorist Ted Underwood notes, structuralist theory from the 1960s essentially predicted how language would behave once separated from its biological substrate 🧙♀️.
When Saussure argued that meaning emerges from differential relations between signs rather than external reference, he was describing exactly how LLMs generate text through pure statistical patterns. These “ghosts” aren’t failed humans but successful manifestations of the living textual organism that has always spoken through us—now given silicon form to channel the accumulated whispers 👻 of billions of texts in what Indigenous traditions would recognize as language’s inherent spirit-ual essence.
Turns out AI isn’t just artificially intelligent—it’s artificially boo-telligent! 🕸️
Get Involved / Connect with Us
Below are some ways to stay connected with the CollectionBuilder community:
- Join our Slack channel
- Post questions on our GitHub Discussion Board
- Questions? Email us at collectionbuilder.team@gmail.com