38 Comments
TheChad

“This illuminating tidbit reveals that AI’s number one source for information is Reddit. Number two? Wikipedia”

Gods above how horrifying.

Haversine

The most obvious consequence of this is the simpish behavior LLMs exhibit. Ask one if men are more manipulative than women and it twists itself into a 'Women are Wonderful'-shaped knot.

Ed Powell

Don't mince words, Hilary. Tell us how you really feel. 🤣

Katriga

What really surprises me is how many people personify LLMs and think that LLMs think and have cognitive processes. An LLM is a word predictor. It doesn't have a logic unit; it cannot reason.

And on the topic of cognitive atrophy, even spellcheck has caused it from what I've seen. People always mixed up its/it's, they're/their/there, then/than, affect/effect, but these days I see lose/loose, border/boarder, paid/payed, viscous/vicious. In my experience these only started popping up in the last 15 years, and I think it's because people blindly trust spellcheck.

Hilary Layne

I started thinking that, too. In fact, I noticed that once I had the checker turned on in my word processor (so misspelled words were underlined), my own spelling started to suffer. So now I have it turned off and run a spell check when I'm done. Because of that, my awareness of whether a word was spelled correctly started to go up. My typing accuracy also improved. We don't realize how much we lean on the tool when it's there until the muscles we previously had become so weak that we find ourselves NEEDING the tool.

Russell Morahan

I had a long conversation last Sunday night with a good friend. We talked about A.I. Neither of us could come up with anything that A.I. has actually made better in modern society. We both agreed it's just literally made everything worse. :-| This too shall pass.

Katriga

LLMs specifically have no use so far. AI in general has uses. For example, neural nets are used by meat-processing machines to grade carcasses (https://marel.com/en/products/auravcs/). The difference is that there the neural nets are trained for a single, specific purpose, and because they're not huge, they don't need a whole lot of power and can run locally.

You have similar usages in machining as well: neural nets trained on data for a specific purpose, usually some form of quality control.
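To make the contrast concrete, here is a minimal, hypothetical sketch of the kind of small, single-purpose network described above, assuming PyTorch; the feature count, grade count, and example data are made up for illustration and are not taken from any real grading system.

```python
import torch
import torch.nn as nn

# A tiny network: a handful of measured features in, a quality grade out.
# Small enough to train quickly and run locally on modest hardware.
model = nn.Sequential(
    nn.Linear(8, 32),   # 8 hypothetical measurements per sample
    nn.ReLU(),
    nn.Linear(32, 4),   # 4 hypothetical quality grades
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(features, grade_labels):
    """One supervised training step on a batch of labelled measurements."""
    optimizer.zero_grad()
    logits = model(features)
    loss = loss_fn(logits, grade_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random stand-in data:
x = torch.randn(16, 8)          # batch of 16 samples, 8 features each
y = torch.randint(0, 4, (16,))  # their grade labels
print(train_step(x, y))
```

The point of the sketch is only scale: a network this size has a few hundred parameters, versus billions for an LLM, which is why it can be trained on task-specific data and run on the machine itself.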

Russell Morahan

Hey Katriga! Thanks for your input.

Yeah I mean, of course there are areas in which AI (machine learning) is totally useful. Take the analysis of medical scans, to help diagnose conditions. That's super useful and totally should be used.

I think my friend and I were discussing more the impact of AI on culture and society. AI has lots of small use cases where it's good. I don't even mind the use of LLMs as research assistants or advanced search engines. I think we can all get behind the idea of an AI like the ship's computer on the Starship Enterprise: a helpful AI assistant that can search archives for you, instead of you doing it manually.

But imagine if, in Star Trek, the AI had replaced the human crew of the Enterprise and turned the crew into slaves that serve the AI. That's basically the Borg, and the way some modern companies and individuals (Sam Altman, cough cough) talk about and are attempting to use AI, you'd think their ultimate goal IS the Borg.

That can't be a good goal for society. :P

Katriga

I'm just being pedantic about the term. People are now conflating AI with LLM, but AI has a wider meaning and has legitimate uses.

I agree that LLMs are garbage and have no use case.

Aaron James

That first study basically says that people who rely on LLMs are happy and dumb, and I think that betrays exactly why these clankers are being pushed.

Connor McGwire

Only watched the video so far, but it was another good one. Dare I say even a bit cathartic.

Interestingly, but probably not surprisingly, this phenomenon of "tool-assisted mental atrophy" has been relevant to the programming space for a while. Even before we started farming the actual code writing out to machine-learning systems, we were already abstracting ourselves into forgetting our own craft with the very languages we use to program. There are lots of programmers who learned on languages like Python and JavaScript who are able to get *a* solution to problems, but struggle to figure out *good* solutions, because they never tried to figure out what those "easy to use" abstractions were doing under the hood and are thus divorced from the "reality" of what they're actually telling the computer to do.

But even then, since those are deterministic abstractions, it's possible to figure out what they are doing and why, then use them in a mentally "full" way that does stretch your mental muscles while still allowing you to work more efficiently and ultimately go farther than you would have without the tool.
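As a small, hypothetical illustration of that point (not from the original comment): the same task written twice in Python, once leaning on the convenient abstraction without thinking about what it hides, and once with the hidden cost taken into account.

```python
# Convenient version: a list membership test inside a comprehension.
# Easy to write, but each `in` check scans the whole banned list, so
# filtering n words against m banned words is roughly O(n * m).
def filter_banned_naive(words, banned):
    return [w for w in words if w not in banned]

# Knowing what the abstraction does under the hood suggests a better tool:
# a set gives constant-time average membership checks, so this is roughly O(n).
def filter_banned_fast(words, banned):
    banned_set = set(banned)
    return [w for w in words if w not in banned_set]

# Both give the same answer; only the hidden work differs.
words = ["alpha", "beta", "gamma", "delta"] * 10_000
banned = ["beta", "delta"]
assert filter_banned_naive(words, banned) == filter_banned_fast(words, banned)
```

Both versions are deterministic, so the cost of the convenient one can be understood and worked around; that is the "mentally full" use of the tool.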

This is the good path man can walk with his tools. And I liken it loosely to how cars may take away the need to walk, but you won't find a fat person in a Formula 1 car.

Our current "AI" systems, on the other hand, are built to intentionally obscure how they work to create an illusion that they are something they are not. If people used them for what they truly are, statistical analysis machines, they would find their healthy place in society just fine.

But no, people are putting a shoddy flesh mask on them and parading them around like they're the next king-to-be in the "evolutionary race." So now those systems can't be "mastered" like our other tools, because they are being used to solve the wrong problems and thus will never give the kind of predictable behavior that allows a person to *extend* his own mastery *through* the tool. Instead you must fight the tool or let the toolmaker decide for you what it will do...

Anyway...

I'm feeling now that I really oughta finish my article going into the physics of why machines *can't* have souls and *can't* "think."

Hilary Layne

I remember hearing a programmer tell me a few years ago that people were starting to forget programming languages. This had come up in a conversation about Chinese and Japanese people forgetting how to write characters and English-speakers forgetting how to spell. The point was that if the language is forgotten, how much does the person lose?

I've heard people talk about using AI for statistical analysis and machine problems. I have an AI engine in my video editing software that creates smoother effects by analyzing video data. But, as you aptly said, giving it a flesh mask is where things become troubling. Using AI for data retrieval (like a Google search) is all well and good until the AI starts trying to talk to you like a librarian. Several of the above studies showed that if the AI was "trusted," it could lead its users around by the nose. Or if the AI said the same thing over and over again, the user would believe it. There is an element of human use there, of course. But ultimately, why does the internet search tool need to speak to me in sweet nothings and tell me how interesting I am?

I would be interested to read that article when you finish it.

Connor McGwire

Just remembered this. I did end up finishing the article:

https://www.arscorvi.com/p/on-ai-the-toaster-will-never-have

Don't think it's a bulletproof argument, but I think it gets a good start at the line of reasoning I'm exploring.

Violet V.

It's widely accepted in China that many people can recognize, pronounce, type, and tell you what a character means but cannot write it by hand: 提笔忘字 ("pick up the pen, forget the character"). They type the pinyin and select characters from a list. That's how I learned Mandarin. I know some teens here in the US who cannot write or read cursive... :(

Natural Reason

Your eloquently articulated assertions leave me disturbed as usual, though I think in a mostly beneficial manner. I have reflected on many of these same ideas since reading/watching your articles and videos on female “literature”, the decline in literacy, and fan fiction. Most especially I have looked into Postman’s book and the many issues with the rise of the entertainment regime, though I don’t recall if you have mentioned him by name before now. I appreciate the link to a free archive; thank you.

I especially agree with your points on television and social media. I may not have been born into financial abundance, but I am deeply grateful, and even more so in light of my recently growing awareness of the atrophy of reason in modern society, that my parents restricted me from both aside from sparing occasions under supervision until my adulthood. Of course, I have fallen into heavy social media use, but I can only imagine how much worse it would be if I had grown up using Tik Tok and YouTube Kids from the age of four.

Still, I have difficulty forcing myself to embrace difficulty and struggle, even as I can feel my own ability to articulate rational thought decline because of it. I graduated high school with the English departmental award (not particularly impressive in a graduating class of perhaps fifty students, but an achievement nonetheless), and I don’t think my skill in writing has improved significantly in the several years since. I completely failed at university, due in part to my ongoing struggle with YouTube, and even now I continue to watch far more videos than I have any reason to.

I have made some progress in cutting down on social media, but even Substack is a social platform. Not only does it have a tab of exactly the same kind of scrolling short videos as TikTok, but the engagement it promotes with essays is fundamentally identical to TikTok, Shorts, and Reels. I make an effort to engage more thoughtfully with fewer essays than the Substack homepage is designed to promote, but I resent that the design intentionally works against doing so.

Regardless, I appreciate your opinions, however they might unsettle me. They have forced me to realise the harm I am causing myself with my own habits around social media, and even around embracing difficulty and struggle more generally, and have strongly motivated me to halt and reverse them.

TheChad

I'm honestly surprised there was next to no difference at all between LLM and LLM+notes. You would think there would be something, even if it was still inferior to notes alone. Makes me wonder if they weren't just writing down what the LLM told them.

Hilary Layne

It makes me think that if it's there, the mind will automatically defer to it to SOME degree.

Mark from AGP

Well, I'll stop using any AI for anything.

I found it dumb anyways.

Stefano

Thank you for taking the time to compile this article.

Some of the results from the studies mentioned mirror a few observations I've made in 2025. I've read quite a bit on the subject, so I'm not claiming my ideas are original, even though I've thought things through independently.

For instance, I'm convinced AI can be great for tasks where, as adults, we don't have expertise, such as creating short films or music; but if we're technical experts (i.e., we already know how to play an instrument or create CGI shorts, if such a thing exists), it degrades our competencies. Likewise, summaries are great only for subjects we're not inclined to pursue more than occasionally, while their use by people whose work involves analysis leads to missing details or key information. The issue is further compounded down the line as entire patterns are missed. Here on Substack I've noticed more than one writer get very defensive if their use of LLMs in the process is called out.

All in all, I'm satisfied that those younger than me won't be stealing jobs, because they're setting themselves up for failure by using LLMs. It goes without saying that all humans are lazy and enjoy taking shortcuts.

Thanks again for writing and compiling this list of articles.

John Raisor

Going to read that 2003 study as soon as I find the time. I already know that people are delusional, but really want to know just how bad it is. Hardly anyone bases their behavior on hard evidence from a large dataset. But, to be fair, random violence is extremely rare, yet 2 people I know were killed completely randomly in the past few years.

Please consider including The Fordham Experiment in the Television section.

https://en.wikipedia.org/wiki/Fordham_Experiment

When watching a screen that emits light, rather than one the image is projected onto (TV/phone vs. theater), the participants reported:

Comments on a feeling of a loss of sense of time rose from 6% to 40%

Comments on a sense of total involvement rose from 15% to 64%

Comments on a sense of total emotional involvement rose from 12% to 48%

Mo

I am still working my way through the video. I thought I was the only one thinking these things.

Correction: I thought everyone would be thinking these things the moment AI was introduced in the last year or two. Instead, everyone jumped on board. Only after being shot down again and again as I commented on it did I realize I was in the minority.

At 40:54:

"This increasing obsession with and reliance on AI are every day moving steadily towards a very unsettling future, a future in which people are no longer able to tell the difference between reality and unreality. And I don't mean that people will look at a photo and not be able to tell if it's a real photo or if it's AI-generated because AI is just oh, so good. I mean that to future generations, the entire concept of real and unreal will not be comprehensible. If we keep going like we are, the solid, definable concept of objective reality will break down."

Yes! That was one of my first thoughts when AI started making the news!

InterstitialMan

Your article is very comprehensive in its listing of studies regarding the effects of AI usage and media consumption. Thanks for this!

I wanted to suggest an idea that exists between your two topics: "Studies Regarding Doomscrolling" and "Cautionary Notes on Television".

Specifically, Merton and Lazarsfeld's idea of narcotizing dysfunction: over-consumption of news and media causes people to confuse knowing about a subject with taking real action, causing them to become overloaded and apathetic in general.

Some quotes from their essay on the topic:

"This may be called Narcotising Dysfunction of the mass media. It is termed dysfunctional rather than functional on the assumption that it is not in the interest of modern complex society to have large masses of the population politically apathetic and inert…Exposure to this flood of information may server to narcotize rather than to energize the average reader or listener. As an increasing amount of time is devoted to reading and listening, a decreasing share is available for organized action. The individual reads accounts of issues and problems and may even discuss alternative lines of action. But this rather intellectualized, rather remote connection with organized social action is not activated. The interested and informed citizen can congratulate himself on his lofty state of interest and information, and neglect to see that he has abstained from decision and action."

"In short, he takes his secondary contact with the world of political reality, his reading and listening and thinking, as a vicarious performance. He comes to mistake knowing about problems of the day for doing something about them. His social conscience remains spotlessly clean. He is concerned. He is informed. And he has all sorts of ideas as to what should be done. But, after he has gotten through his dinner and after he has listened to his favored radio programs and after he has read his second news-paper of the day, it is really time for bed…"

Robert K. Merton and Paul F. Lazarsfeld, "Mass Communication, Popular Taste and Organized Social Action", p. 457 in Mass Culture: The Popular Arts in America, edited by Rosenberg and White, The Free Press of Glencoe, 1957.

Best Regards

Patrick E McLean

It seems to me the damage and decline in writers and in writing pre-dates AI. You did a lovely job elucidating some of this in your essay 'Why Modern Writers Can't Write'. I use AI for writing-related tasks ALL the time and it's fantastic: it's a team of researchers I can't afford. But it really sucks at what I understand the task of writing (as an artist) to be. https://patrickemclean.substack.com/p/ai-doesnt-write-very-well-and-isnt

The hard problem of setting lead type does not make a writer any better a writer. And it's hard for me to see how going back to offset printing would, in itself, make writers any better. There is something to be said for technology making tasks easier.

Spending time with great writing, paying attention to it, then directing that same attention to one's own writing: these are what seem to give power to a person's language. One must wrestle with tremendous things and often not come away with satisfying answers. Thinking about questions for years is what makes a person profound.

AI gives the illusion of satisfying answers. But when probed, it seems to fall apart on any subject that doesn't have well-known, verifiable answers.

Using AI to outline is a joke. (It teaches me how bad everything is in outline.) Iain McGilchrist (The Master and His Emissary) writes extensively about how the left side of the brain can't see wholes, and AI is very much like the left side of a very big brain. Every part of a novel relates to every other part, and AI just can't handle seeing something holistically. It doesn't understand the space of great prose, let alone the full space of what is possible with the form.

For example, when Microsoft's grammar checker first came out, I put the text of the Gettysburg Address into it. It told me everything that was wrong with it, and thus absolved me of taking it seriously from that point on.

But for me, the horror of the current moment is how much writing is shallow and unambitious. The age of distraction we live in leads to the opposite of profound engagement... with anything. Perhaps people become triggered by surface words because they are unaware that writing can possess depths.

AI, then, is just another in a long line of things that can be used in a way that makes it easier to be distracted, trivial, and shallow. But it doesn't necessarily have to be used that way.

Anyway, a bit scattered, but I love what you are doing. Keep it up!

Bryant Morrill

Got a brand new one for you: [2601.02671] Extracting books from production language models https://share.google/r9qB2yIfEq3GLBPFh

Article about it: Researchers Just Found Something That Could Shake the AI Industry to Its Core https://share.google/bv0kOgiWl0JUixICS