INPUT
ON OUTPUT

Undergraduate Thesis; AI education

How can we talk AI, to those who don’t understand AI?

Input on Output is an educational project that builds an accessible vocabulary around generative AI. Despite being one of the fastest-adopted technologies in history, outpacing the personal computer and the Internet, text-based generative AI remains widely misunderstood. ChatGPT reports over 300 million weekly users, yet most have little understanding of how it works, where it comes from, or why they perceive its output as more “effective” than their own writing.

Input on Output consists of five educational booklets that spark curiosity about generative AI through approachable design and delightful metaphors, grounded in linguistic analysis and a biographical perspective on ChatGPT. Its playful visual language echoes the project’s guiding inquiry into the difference between human and AI output, aiming to meet AI novices where they are without intimidation.






Time: 10 weeks

Role: Writer, Researcher, Designer








Read the full Medium Article here 







Is It AI? 


Input on Output began with a simple, unsettling question: Was that discussion post in class written by AI? That moment of doubt opens the project’s first booklet, which provides context for the post in question and introduces ChatGPT’s growing ubiquity.



The booklet introduces a central metaphor, ChatGPT as Ozempic: “An artificial smoothness. A loss of depth. Something loose where it’s never been before. You spot it and think, ‘Oh, that’s the easy way out.’”


While hopefully funny and relatable, metaphors like these also build a shared emotional and technical vocabulary, in contrast to the overly complicated jargon generally used to discuss ChatGPT and other generative writing AIs.




Behind the AI.


AI output comes from human output, but what is that writing exactly? And can we trust its sources? 

This section of Input on Output consists of three booklets, Input on Sources, Input on Humans, and Input on Trust, that together form a biographical inquiry into ChatGPT, told from a distinctly human perspective.


What’s the Recipe?
Input on Sources traces the massive corpora used to train ChatGPT through a soup-recipe framework. Re-imagining the 500+ billion words in its training data as ingredients demystifies the training process and opens space to question their “organic-ness” and ethics.






Delightful illustrations contrast with the modular print layout, echoing the tension between human and AI output and suggesting how each might benefit the audience.







Data work isn’t spontaneous.
Input on Humans documents the real people behind ChatGPT, from CEOs to data workers to readers, highlighting the invisible labor behind AI tools. It encourages a paradigm shift: de-anthropomorphize AI and re-humanize its work.





What happens when ChatGPT is wrong?
Input on Trust investigates AI hallucinations through the metaphor of an unreliable Rolodex; each entry highlights an individual ChatGPT misrepresents, questioning the perceived accuracy of generative AI.




Each of these artifacts explores a different physical form—foldable, flippable, drawable—transforming abstract, digital concepts into something tactile and intuitive. Like ChatGPT, Input on Output has no age restriction; its approachable, almost childlike quality makes it engaging for novices and experts alike.




A linguistic look at ChatGPT


Can we tell the difference linguistically between human and AI writing? 

Input on Findings returns to the central doubt that sparked this project, using DocuScope CAC, a linguistic analysis tool, to investigate whether a classmate’s discussion post was written by ChatGPT and to build a broader framework for differentiating human writing from AI output.





To develop this AI/human framework, I created a custom corpus of discussion-post responses, half written by students at Carnegie Mellon University and half by ChatGPT.

Using DocuScope CAC, I analyzed the texts token by token to develop potential linguistic markers of AI-generated output.
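The comparison behind this kind of analysis can be sketched in a few lines: tag each token against a marker lexicon, then compare average marker rates across the two halves of the corpus. This is a minimal, hypothetical illustration, not DocuScope CAC itself; the lexicon and the example posts are invented stand-ins for the real categories and corpus.

```python
import re

# Hypothetical marker lexicon standing in for a "social agents" category.
SOCIAL_AGENTS = {"i", "we", "you", "she", "he", "they", "professor", "classmate"}

def marker_rate(text: str, lexicon: set) -> float:
    """Share of tokens in `text` that belong to `lexicon`."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return sum(t in lexicon for t in tokens) / len(tokens)

def mean_rate(texts: list, lexicon: set) -> float:
    """Average marker rate across a set of texts."""
    return sum(marker_rate(t, lexicon) for t in texts) / len(texts)

# Invented mini-corpus: two human-written posts, two AI-written posts.
human_posts = ["I think we misread the author's intent.",
               "My classmate and I disagreed about the ending."]
ai_posts = ["The text presents a nuanced exploration of themes.",
            "This passage demonstrates the complexity of the narrative."]

# Human posts here show a higher rate of social-agent tokens.
print(mean_rate(human_posts, SOCIAL_AGENTS) > mean_rate(ai_posts, SOCIAL_AGENTS))
# → True
```

A real analysis would use DocuScope's own rhetorical categories and a much larger corpus, but the underlying move, comparing category frequencies between the two halves, is the same.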







We can tell the difference. For now...
The findings suggested that human writing tends to feature more social agents, richer descriptive diction, and a greater capacity to compare and contextualize complex ideas—leading to the conclusion that the discussion post in question was likely AI-generated.

But these markers may not last. As ChatGPT continues to develop and learn from its users, and as AI-detection accuracy rates decline, these markers may prove only temporarily true.



Impact


 Featured at Carnegie Mellon University’s Research Symposium, Meeting of the Minds

 Undergraduate BXA Capstone project

 Reframed conversations about generative AI toward a humanistic approach



Home                     Next Project