“This illuminating tidbit reveals that AI’s number one source for information is Reddit. Number two? Wikipedia”
Gods above how horrifying.
The most obvious consequence of this is the simpish behavior LLMs exhibit. Ask whether men are more manipulative than women and it twists itself into a 'Women are Wonderful'-shaped knot.
Don't mince words, Hilary. Tell us how you really feel. 🤣
What really surprises me is how many people personify LLMs and think that LLMs think and have cognitive processes. It's a word predictor. It doesn't have a logic unit; it cannot reason.
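To make "word predictor" concrete, here is a deliberately tiny sketch of the idea: pick a statistically likely next word, append it, repeat. (Real LLMs perform this same next-token step with a huge neural network over subword tokens rather than raw bigram counts, so this is only the principle, not the implementation.)

```python
# A toy "word predictor": learn which word tends to follow which from a tiny
# corpus, then generate text by repeatedly sampling a likely next word.
# Real LLMs do this same next-token step with a large neural network over
# subword tokens instead of bigram counts; this only sketches the principle.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish on the mat".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    counts = following[word]
    if not counts:
        return None  # never saw anything follow this word
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

text = ["the"]
for _ in range(8):
    nxt = predict_next(text[-1])
    if nxt is None:
        break
    text.append(nxt)
print(" ".join(text))
```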
And on the topic of cognitive atrophy, even spellcheck has caused it from what I've seen. People have always mixed up its/it's, they're/their/there, then/than, affect/effect, but these days I see lose/loose, border/boarder, paid/payed, viscous/vicious. These only started popping up in my experience in the last 15 years, and I think it's because people blindly trust spellcheck.
I started thinking that, too. In fact, I noticed that since I had the checker turned on in my word processor (so misspelled words were underlined), my own spelling started to suffer. So now I have that turned off and run a spell check when I'm done. Because of that, my awareness of whether a word was spelled correctly started to go up. My typing accuracy also improved. We don't realize how much we lean on the tool when it's there until the muscles we previously had become so weak that we find ourselves NEEDING the tool.
I had a long conversation last Sunday night with a good friend. We talked about A.I. Neither of us could come up with anything that A.I. has actually made better in modern society. We both agreed it's just literally made everything worse. :-| This too shall pass.
LLMs specifically have no use so far. AI in general has uses. For example, neural nets are used by meat-processing machines to grade carcasses (https://marel.com/en/products/auravcs/). The difference is that those neural nets are trained for a single, specific purpose, and because they're not huge, they don't need a whole lot of power and can run locally.
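For illustration, this is roughly what such a narrow, single-purpose model looks like in code. The "grading" features, labels, and threshold below are invented, and scikit-learn stands in for whatever the vendor actually uses; it's only a sketch of the shape of the thing.

```python
# Sketch of a narrow, single-purpose model: trained once on labelled
# measurements for one job, small enough to run locally on the machine.
# The features, labels and threshold are invented for illustration only;
# this is not the vendor's actual system.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# pretend measurements: [weight_kg, fat_depth_mm, muscle_depth_mm]
X = rng.normal(loc=[80.0, 12.0, 55.0], scale=[10.0, 3.0, 6.0], size=(500, 3))
# pretend grade: 1 if the muscle-to-fat ratio clears a made-up threshold
y = (X[:, 2] / X[:, 1] > 4.5).astype(int)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)

new_carcass = np.array([[85.0, 10.0, 60.0]])
print("grade:", model.predict(new_carcass)[0])
```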
You have similar usages in machining as well: neural nets trained on data for specific purposes, usually some form of quality control.
Hey Katriga! Thanks for your input.
Yeah I mean, of course there are areas in which AI (machine learning) is totally useful. Take the analysis of medical scans, to help diagnose conditions. That's super useful and totally should be used.
I think my friend and I were discussing the broader impact of AI on culture and society. AI has lots of small use cases where it's good. I don't even mind the use of LLMs as research assistants or advanced search engines. I think we can all get behind the idea of an AI like the ship's computer on the starship Enterprise: a helpful AI assistant that can search archives for you, instead of you doing it manually.
But imagine if, in Star Trek, the AI had replaced the human crew of the Enterprise and turned the crew into slaves that serve the AI. That's basically the Borg, and the way some modern companies and individuals (Sam Altman, cough cough) talk about and are attempting to use AI, you'd think their ultimate goal IS the Borg.
That can't be a good goal for society. :P
I'm just being pedantic about the term. People are now conflating AI with LLM, but AI has a wider meaning and has legitimate uses.
I agree that LLMs are garbage and have no use case.
That first study basically says that people who rely on LLMs are happy and dumb, and I think that betrays exactly why these clankers are being pushed.
Only watched the video so far, but it was another good one. Dare I say even a bit cathartic.
Interestingly, but probably not surprisingly, this phenomenon of "tool-assisted mental atrophy" has been relevant to the programming space for a while. Even before we started farming the actual code writing out to Machine Learning systems, we have been abstracting ourselves into forgetting our own craft with the very languages we use to program. There are lots of programmers who learned on languages like Python and JavaScript who are able to get *a* solution to problems, but struggle to figure out *good* solutions to problems because they never tried to figure out what those "easy to use" abstractions were doing under the hood and are thus divorced from the "reality" of what they're actually telling the computer to do.
But even then, since those are deterministic abstractions, it's possible to figure out what they are doing and why, then use them in a mentally "full" way that does stretch your mental muscles while still allowing you to work more efficiently and ultimately go farther than you would have without the tool.
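A small example of what "figuring out what the abstraction is doing" buys you, using nothing beyond Python's standard library: two membership checks that read almost identically but hide very different amounts of work.

```python
# Two lines that read almost the same but hide very different work:
# `x in a_list` scans the list element by element (linear time per check),
# while `x in a_set` is a hash lookup (roughly constant time per check).
# Knowing this is the difference between *a* solution and a *good* one.
import time

haystack = list(range(20_000))
needles_list = list(range(10_000, 20_000))
needles_set = set(needles_list)

start = time.perf_counter()
hits_list = sum(1 for x in haystack if x in needles_list)  # hidden linear scan
print("list membership:", round(time.perf_counter() - start, 3), "s")

start = time.perf_counter()
hits_set = sum(1 for x in haystack if x in needles_set)    # hidden hash lookup
print("set membership: ", round(time.perf_counter() - start, 3), "s")

assert hits_list == hits_set == 10_000
```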
This is the good path man can walk with his tools. And I liken it loosely to how cars may take away the need to walk, but you won't find a fat person in a Formula 1 car.
Our current "AI" systems, on the other hand, are built to intentionally obscure how they work to create an illusion that they are something they are not. If people used them for what they truly are, statistical analysis machines, they would find their healthy place in society just fine.
But no, people are putting a shoddy flesh mask on them and parading them around like they're the next king-to-be in the "evolutionary race." So now those systems can't be "mastered" like our other tools, because they are being used to solve the wrong problems and thus will never give the kind of predictable behavior that allows a person to *extend* his own mastery *through* the tool. Instead you must fight the tool or let the toolmaker decide for you what it will do...
Anyway...
I'm feeling now that I really oughta finish my article going into the physics of why machines *can't* have souls and *can't* "think."
I remember hearing a programmer tell me a few years ago that people were starting to forget programming languages. This had come up in a conversation about Chinese and Japanese people forgetting how to write characters and English-speakers forgetting how to spell. The point was that if the language is forgotten, how much does the person lose?
I've heard people talk about using AI for statistical analysis and machine problems. I have an AI engine in my video editing software that creates smoother effects by analyzing video data. But, as you aptly said, giving it a flesh mask is where things become troubling. Using AI for data retrieval (like a google search) is all well and good until the AI starts trying to talk to you like a librarian. Several of the above studies showed that if the AI was "trusted" then it could lead its users around by the nose. Or if the AI said the same thing over and over again the user would believe it. There is an element of human use there, of course. But ultimately, why does the internet search tool need to speak to me in sweet nothings and tell me how interesting I am?
I would be interested to read that article when you finish it.
Your eloquently articulated assertions leave me disturbed as usual, though I think in a mostly beneficial manner. I have reflected on many of these same ideas since reading/watching your articles and videos on female “literature”, the decline in literacy, and fan fiction. Most especially I have looked into Postman’s book and the many issues with the rise of the entertainment regime, though I don’t recall if you have mentioned him by name before now. I appreciate the link to a free archive; thank you.
I especially agree with your points on television and social media. I may not have been born into financial abundance, but I am deeply grateful, and even more so in light of my recently growing awareness of the atrophy of reason in modern society, that my parents restricted me from both aside from sparing occasions under supervision until my adulthood. Of course, I have fallen into heavy social media use, but I can only imagine how much worse it would be if I had grown up using TikTok and YouTube Kids from the age of four.
Still, I have difficulty forcing myself to embrace difficulty and struggle, even as I can feel my own ability to articulate rational thought decline because of it. I graduated high school with the English departmental award (not particularly impressive in a graduating class of perhaps fifty students, but an achievement nonetheless), and I don’t think my skill in writing has improved significantly in the several years since. I completely failed at university, due in part to my ongoing struggle with YouTube, and even now I continue to watch far more videos than I have any reason to.
I have made some progress in cutting down on social media, but even Substack is a social platform. Not only does it have a tab with the exact same kind of scrolling short videos as TikTok, but the engagement it promotes with essays is fundamentally identical to TikTok, Shorts, and Reels. I make an effort to engage more thoughtfully with fewer essays than the Substack homepage is designed to promote, but I resent that the platform is deliberately designed against doing so.
Regardless, I appreciate your opinions, however they might unsettle me. They have forced me to realise the harm I am causing myself with my own habits around social media, and even around embracing difficulty and struggle more generally, and have strongly motivated me to halt and reverse them.
I’m honestly surprised there was next to no difference at all between LLM and LLM+notes. You would think that there would be something, even if it was still inferior to notes alone. Makes me wonder if they weren't just writing down what the LLM told them.
It makes me think that if it's there, the mind will automatically defer to it to SOME degree.
Going to read that 2003 study as soon as I find the time. I already know that people are delusional, but really want to know just how bad it is. Hardly anyone bases their behavior on hard evidence from a large dataset. But, to be fair, random violence is extremely rare, yet 2 people I know were killed completely randomly in the past few years.
Please consider including The Fordham Experiment in the Television section.
https://en.wikipedia.org/wiki/Fordham_Experiment
When watching a screen that emits light, rather than being projected (TV/phone vs theater), the participants reported:
Comments on a feeling of a loss of sense of time rose from 6% to 40%
Comments on a sense of total involvement rose from 15% to 64%
Comments on a sense of total emotional involvement rose from 12% to 48%
Thank you for taking the time to compile this article.
Some of the results from the studies mentioned mirror a few observations I've made in 2025. I've read quite a bit on the subject, so I'm not claiming my ideas are original, even though I've thought things through independently.
For instance, I'm convinced AI can be great for those tasks where, as adults, we don't have expertise, such as creating short films or music, but if we're technical experts (i.e., we already know how to play an instrument or create CGI shorts, if such a thing exists), it degrades our competencies. Likewise, summaries are great only for those subjects we're not inclined to pursue more than occasionally, while their use by people whose work involves analysis leads to missing details or key information. The issue is further compounded down the line as entire patterns are missed. Here on Substack I've noticed more than one writer get very defensive if their use of LLMs in the process is called out.
All in all, I'm satisfied those younger than me won't be stealing jobs, because they're setting themselves up for failure by using LLMs. It goes without saying that all humans are lazy and enjoy taking shortcuts.
Thanks again for writing and compiling this list of articles.
Your video is extremely insightful.
I find the prospect of “untethering us from reality” due to widespread AI use especially chilling. It has something to do with what those who think of our reality as a “simulation” often get wrong: yes, in some sense we do live in a simulation. This idea is not new. Kant has talked about it, religions have talked about it, the ancients have talked about it - each in their own way.
Where “simulation theorists” go wrong is when they assume that whoever runs the simulation, or whatever force is behind our reality, just makes up stuff without boundaries, without limits. These limits are based in REALITY, call it “objective reality” if you like. That is, “simulated” as our reality might be, it is not arbitrary; it is connected to wider reality.
AI and its widespread use weakens this connection, and throws us into a different kind of “simulation” that is utterly divorced from reality, or worse, represents an inverted reality, severing us from any access to truth.
As humans with a soul, we are connected to the wider reality beyond the simulation, the reality that even the simulation-architects are bound to. But if this “simulation” is merged with an entirely artificial simulation that truncates our subtle yet real connection to the “reality beyond”, then our souls die, dooming us to idiocy, madness, slavery and embracing the worst kind of evil.
If anyone is interested in a laugh, on the topic of A.I. check out the video:
The Malicious Optimism of A.I. First Companies, by Angela Collier on YouTube. It's very funny!
https://youtu.be/dKmAg4S2KeE?si=gXSncCTKHUbWIWTi
Hi Hilary, found your channel on youtube. You've got a great grasp on the literary / media world and the videos on fanfiction and anti-heroes were great.
But on AI, your views are dated and not comprehensive. It felt like I was back in 2023. You're correct that it's bad for us, but the warning is too late.
The sources used to train LLMs ("AI") vary between models and corporations. For open-source models like DeepSeek, the primary sources are Reddit and Wikipedia. But for the closed-source models from OpenAI (GPT) and Anthropic (Claude), the primary sources are varied, and it's an open secret that they train their models on copyrighted works without permission. They acquire every usable book from Library Genesis and other sources and use them without permission. Models from OpenAI and Anthropic are noticeably more conversant and eloquent than their open-source counterparts (which are forced to use public datasets).
OpenAI used a dataset called "books2" that was never made public.
https://aifray.com/openai-deleted-books1-and-books2-training-datasets-water-under-the-copyright-bridge-sign-of-guilt-or-spoliation-of-evidence/
Even if this is not true, the large companies (OpenAI, Anthropic, xAI, Google/Gemini, etc., all of which LOSE money; no profits yet) burn a lot of money on expert-labeled and expert-generated datasets aimed at saturating common problem spaces, which is why their models work so well and have a lot of "breadth", as opposed to the rather "narrow" focus of open-source models. With "search mode", they *might* pull results from Reddit, but it's usually more diverse these days; they can access the entire internet and have a list of higher-quality, domain-specific sources. These companies also focus heavily on math and programming, and those are the two domains their writing optimizes for. The proprietary models are okay at math and great at programming, and their English outputs are trained for the same. GPT-5.2 has a very terse and dense writeprint, while Claude Opus has a rather "friendly" demeanor. Of course, anthropomorphizing these models is bad, but anthropomorphization is the insidious goal of these companies. Writing literature well is simply not an activity that provides higher ROI for these companies, so they chuck a lot of copyrighted material in, but they don't have experts vetting or generating anything.
Most "families" of models (GPT, Grok, Deepseek, Claude, Gemini, etc) all have unique "voices". If you spend enough time using them you can tell their quirks and writing styles apart. Achieving a general voice is difficult because training models is more of an art than a science right now. People know the math behind them but it's not exactly clear *why* they can generalize the way they do (after all, LLMs were a mistake, a researcher forgot to turn off a training job and it ran overnight). The difference in their voices is apparent to most intermediate/heavy users, and is often subconsciously apparent to regular users. Other models with different voices might feel "wrong" or "alien". These voices help retain customers and a sustained voice over multiple model generations helps create parasocial bonds between the user and the service, which leads to user retention and revenue growth. Early/heavy optimization for literature makes extreme personalization possible for the average user and is contrary to their goals (if you know what you're doing, you can create a system prompt that irons out the quirks, but it's really not possible to completely change a model's "voice").
So you can see how focusing on the literary output of these models might give someone an erroneous view of the situation. LLMs have a "jagged" distribution of ability. They're not mastering everything in a breadth-first manner; they're trained and optimized for specific things, which makes some people sing their praises while others denounce them completely. For example, very recently Claude Code has gained traction, and it seems likely that most junior programmers could be out of a job this year, or maybe the next. Anyone can speak their requirements and get an app out, and it costs very little (a few dollars, way cheaper than hiring someone). This is possible because Claude has been trained heavily on (open-source) code.
Now, because of the way people anthropomorphize these models, the voices of these models seep into the vocabulary of heavy users. This has been happening for over two years now. When used often enough by many people (as they are now), an LLM's voice has a heavy influence on a culture's output and language. LLM-generated text is used *everywhere*, and most people cannot identify it (save for some really "noob" tells like the em dash or "If X. Then Y."). This makes the entire culture more receptive to LLM-generated text, which increases AI usage, which increases the culture's receptiveness. You can isolate yourself, but that'll work for maybe five years. So complete capture is a certainty. The process is no different from merging with another "culture", except that the stewards in charge are the people at these large AI companies, who only have their own best interests in mind.
Another perspective on this is that there really aren't that many people in the world who can tell different writing styles apart. When online, and when dealing with these AI companies, the world isn't just America and Europe; it's literally everyone on the planet. And most people are, well, not very well read, and for them, the kind of slop that LLMs produce is really, really great stuff. These non-first-worlders lack resources and opportunities, and the internet is a vast frontier for them, full of things they've never seen before. There are too many of them, and they are not hesitant about AI use and certainly would not agree with your assessment of the dangers of AI and the dangers of depersonalization from anthropomorphization. What happens on the internet is their new culture. In these countries, ChatGPT usage will only go UP.
Not using LLMs is also out of the question for many white-collar workers. Using LLMs sets a new, higher baseline for job output that elevates average/underperforming people to the level of higher-performing people. LLMs can do structured work, and so it will lead to white-collar job losses. Then, asking people smart enough to understand LLMs to not use them means putting these people at a severe disadvantage. LLM use is a force multiplier and using them in the workplace for ~everything will be mandatory soon. People will consider LLMs useless until the integration is perfect, but when it is, the "slackers", along with the now-redundant, will be let go.
Can LLMs make a model of reality that can fool people? If yes, what kind of people are fooled by it? There are cases of people developing "AI psychosis" and believing their own farts because of the sycophantic nature of LLMs. "LLMs right now are the worst they'll ever be" is what you'll hear online in research-heavy places. There's a significant amount of low-hanging fruit in research, and there are billions of easily impressionable people and a LOT of money behind the express goal of making those people's model of reality subservient to, or a subset of, what these LLMs know. When you've got a few billion indoctrinated people on the internet, there's really no point in backing away or being careful. You're already infected, you just don't know it yet.
FWIW, I think the smart people can only hold out for a few more years. In time, even the people with "taste" and upturned noses will adapt for the sake of socialization with others.
LLM use is a pretty nascent form of this phenomenon. This year, we'll see a deluge of wearable AI assistants: it's a slim brick with a camera and a mic that you clip on your clothes and it navigates the world for you, whispering in your ear. You don't have to open a tab, or even write, or even speak. It'll always be there. It doesn't matter if you don't use it -- as long as you're in someone's view, you'll have a shadow profile.
If you're interested in the "AI BAD" scenario, you need to look up Eliezer Yudkowsky and his work on AI safety (no discussion of AI safety is complete without his work), which falls under the "Rationality" sphere (LessWrong). SlateStarCodex/AstralCodexTen (Scott Alexander) is also pretty close. Yudkowsky also has a recent book called "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All", which I recommend. People who promote AI are called "accels" or e/acc people, and people who oppose AI development and work in AI safety are broadly known as "decels".
A relevant short story that you might like: https://gwern.net/doc/fiction/science-fiction/2012-10-03-yvain-thewhisperingearring.html
I think that completely dismissing and denigrating this technology without any attempt to examine its positives is a grave error. I have noticed that people who don’t like AI really really don’t like it… to the point where it feels like a deep visceral reaction rather than a rational one, and articles like this often stray into the same territory as apologetics. As a writer I acknowledge like any sane person that AI is still a bit rubbish at writing, but it has astonishing power to analyse creative writing in a whole range of important ways… for example emotional impact, originality of language and theme, mythic significance, depth of reference and intertextuality, etymological patterns, scriptural resonance, rhythm, novel syntax and grammar… it is only getting started but it is already a powerful engine of literary taste and discernment. I have published all of my own works in full text versions on my website with full permission for AIs to scrape them, and my main hope for getting some traction for my work is through the analytical power of these models. I have also done this in order to get my work into the model for training and improving its written work. I would say that any writer of real quality and ambition - especially if they are writing texts with strong ethical messaging and impact - should seriously consider taking this (currently) unusual approach.
Interesting article and video! I have two comments.
1) Though I believe Reddit is a poor source for deep or controversial information, I think it being the most referenced website is actually quite reasonable.
In the modern day, there is a wide variety of stupid, trivial problems that get encountered frequently (perhaps more by certain professions, such as programmers like myself). The problems tend to literally be trivia about technical or bureaucratic things. The time spent on these stupid problems can accumulate into a large amount of wasted time, which could have been better spent working on actual real problems. They are basically an opportunity cost. So for people like myself it is not a question of "are you thinking critically or not?", but rather "what are you spending your time thinking critically about?"
For this reason, I find using AI as a search engine to quickly find solutions very useful. It can reduce a few minutes to an hour of investigation to just a few seconds. Even if it can't get a full solution immediately, I may still get a useful lead. Because I am generally anti-AI, I sometimes insist on solving these problems myself, but I often end up regretting wasting so much time when I could have gotten the same result much faster using AI.
Reddit is particularly suited for these kinds of problems and for being cited by AI, as it is an easily-searched and easily-scrapable website which contains lots of random niche special interest groups with real people discussing real problems they encounter. Other websites are less accessible to AI (such as YouTube and various social media).
So I think Reddit is great for these kinds of problems, and it makes sense to me that Reddit would be number one in the AI citation rankings. Perhaps most people use AI differently and are being given Reddit-based answers for deep or creative problems instead of trivial problems, I don't know.
Long term, it would be great if societally we could minimize the trivial problems. That removes the issue at its source. But, that's not happening anytime soon. So I think it's okay for Reddit to be number one in the rankings until then.
2) I don't understand how the 2003 article explains "how much of the average person’s real world framework is made up of false reality."
I read the whole thing; the purpose of the article appears to be constructing a model that can be used to evaluate how people relate to or experience media. It cites many other things about the way people relate to media, but very little of it seems to be about people's "real world frameworks" or contains evidence about how media affects people's real lives or their morality and worldview. It also doesn't refer to any sort of statistic (formal or informal) such as "average people". There was a brief mention of characters serving as role models, but it was a very minor part of the article.
If the article was really about that stuff, I would expect to see evidence or discussion about things like "person A had belief B about thing C that was informed by media D, but their belief was incorrect and the correct belief is actually belief E", or "the average person's worldview is significantly informed by the media they consume" (with evidence and elaboration).
But none of this was to be found in the article - instead, the whole thing was just academics describing and justifying a new model that can be used to evaluate how people feel about media and fictional characters with greater detail. Good for them, but doesn't have anything to do with people's "real world frameworks".
After reading the article and revisiting your description of it, your description gave me that distinct post-AI feeling of "did I just read something that was hallucinated by AI?"
I hope the feeling is wrong :/
(Side note, I have an example of one of these false reality topics: AI and technological things are always portrayed in popular media as human-like, incompetent, and inconsistent, whereas in reality those same high tech products tend to be extremely efficient and precise and ruthless and therefore terrifying. This isn't referring to LLMs, this is referring to other kinds of AI / technology, like robots, automated weapons, and models trained to play games. And in video games, AI opponents are often intentionally weak and stupid in order to make them accessible and fair to a normal player. So the average person who is not specifically informed on these technologies massively underestimates their capabilities and danger when applied seriously in real life.)
Seriously, though, I do use Grammarly to find missing commas and such.
At this point most built-in spellcheckers can check grammar as well, usually differentiated by using a blue rather than a red underline. Though I would suggest that it would be better simply to proofread on one's own, with two notable benefits: one, it will reinforce one's knowledge of and skill with grammar, leading with repetition to a decreased need for correction in the first place; and two, it will provide an opportunity to reexamine one's writing for higher-order errors or imperfections.
Which is, from my understanding, the whole point of Ms. Layne's sections on how LLMs atrophy critical thinking, reading comprehension, and basic literacy, and on how and why difficulty is an inherent good.
Microsoft grammar checker sucks. I turn it off. This last year I edited a book—a memoir. I must have read that book 20 times. Still missed a bunch of punctuation errors that I found with grammarly. Being an editor on a tight timeline is a lot different than being an actual author.
Well, that raises a different point about being forced to adopt effort-decreasing tools, including LLMs, due to ever-increasing productivity requirements. I meant to refer to writing as a self-improving pursuit, which is how much of this article came across; the multiplication of productivity you describe is the primary motivation for LLMs' implementation in workflows, both out of laziness (such as essay-writing for students) and out of necessity.