LLMs are divinatory instruments, our era's oracle, minus the incense and theatrics. If we were honest, we'd admit that "artificial intelligence" is just a modern gloss on a very old instinct: to consult a higher-order text generator and search for wisdom in the obscure.
They tick all the boxes: oblique meaning, a semiotic field, the illusion of hidden knowledge, and a ritual interface. The only reason we don't call it divination is that it's skinned in dark mode UX instead of stars and moons.
Barthes reminds us that all meaning is in the eye of the reader; words have no essence, only interpretation. When we forget that, we get nonsense like "the chatbot told him he was the messiah," as though language could be blamed for the projection.
What we're seeing isn't new, just unfamiliar. We used to read bones and cards. Now we read tokens. They look like language, so we treat them like arguments. But they're just as oracular: complex, probabilistic signals we transmute into insight.
We've unleashed a new form of divination on a culture that doesn't know it's practicing one. That's why everything feels uncanny. And it's only going to get stranger, until we learn to name the thing we're actually doing. Which is a shame, because once we name it, once we see it for what it is, it won't be half as fun.
This sounds very wise but doesn’t seem to describe any of my use cases. Maybe some use cases are divination but it is a stretch to call all of them that.
Just looking at my recent AI prompts:
I was looking for the name of the small fibers which form a bird's feather. ChatGPT told me they are called “barbs”. Then, using a straightforward Google search, I could verify that this is indeed the name of the thing I was looking for. How is this “divination”?
I was looking for the G-code equivalent for galvo fiber lasers, and ChatGPT told me there isn't really one. The closest might be the EZCAD SDK, but it also listed several other open-source control solutions.
Wanted to know what the hallmarking rules are in the UK for an item which consists of multiple pieces of sterling silver held together by a non-metallic part. (Turns out the total weight of the silver matters, while the weight of the non-metallic part does not count.)
Wanted to translate the Hungarian phrase “besurranó tolvaj” into English. Out of the many possible translations ChatGPT provided, “opportunistic burglar” fit best for what I was looking for.
Wanted to write an SQLAlchemy model. I had an approximate idea of what fields I needed but couldn't be arsed to come up with good names for them or look up the syntax to describe their types. ChatGPT wrote it in seconds; it would have taken me at least ten minutes otherwise.
These are “divination” only in a very galaxy-brained “oh man, when you open your mind you see that everything is divination, really” sense. I would call most of these “information retrieval”: the information is out there, and the LLM just helps me find it with a convenient interface. The last one is simply “coding”.
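For the coding case, here is a minimal sketch of the kind of SQLAlchemy model being described. The table, field names, and types are hypothetical, invented for illustration rather than taken from the actual prompt:

```python
# Hypothetical example only: boilerplate of roughly the shape described above.
from sqlalchemy import Column, DateTime, Integer, Numeric, String, func
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Order(Base):
    """A small table whose field names and column types the LLM filled in."""
    __tablename__ = "orders"

    id = Column(Integer, primary_key=True)
    customer_name = Column(String(120), nullable=False)
    status = Column(String(32), nullable=False, default="pending")
    total_amount = Column(Numeric(10, 2), nullable=False, default=0)
    created_at = Column(DateTime, server_default=func.now(), nullable=False)
```

Nothing clever is going on here, which is the point: it is tedious boilerplate, and handing it off is what saves the ten minutes.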
Sure, some people stepped up to the Oracle and asked how to conquer Persia. Others probably asked where they left their sandals. The quality of the question doesn't change the structure of the act.
You presented clear, factual queries. Great. But even there, all the components are still in play: you asked a question into a black box, received a symbolic-seeming response, evaluated its truth post hoc, and interpreted its relevance. That's divination in structural terms. The fact that you're asking about barbs on feathers instead of the fate of empires doesn't negate the ritual; you're just a more practical querent.
Calling it "information retrieval" is fine, but it's worth noticing that this particular interface feels like more than that, like there's an illusion (or a projection) of latent knowledge being revealed. That interpretive dance between human and oracle is the core of divination, no matter how mundane the interaction.
I don't believe this paints with an overly broad brush. It's a real type of interaction and the subtle distinction focuses on the core relationship between human and oracle: seeking and interpreting.
> some people stepped up to the Oracle and asked how to conquer Persia. Others probably asked where they left their sandals.
And if the place were any good at the second kind of query, you would call it the Lost & Found and not the Oracle.
> illusion (or a projection) of latent knowledge being revealed
It is not an illusion. Knowledge is being revealed. The right knowledge for my question.
> That interpretive dance between human and oracle is the core of divination, no matter how mundane the interaction.
Ok, so if I went to a library, used a card index to find a book about bird feather anatomy, then read said book to find that the answer to my question is “barb”, would you also call that “divination”?
If I had paid a software developer to turn my imprecise description of a database table into precise and tight code which can be executed, would you also call that “divination”?
> you asked a question into a black box, received a symbolic-seeming response, evaluated its truth post hoc, and interpreted its relevance
So any and all human communication is divination in your book?
I think your point is pretty silly. You're falling into a common trap of starting with the premise "I don't like AI", and then working backwards from that to pontification.
Hacker News deserves a stronger counterargument than “this is silly.”
My original comment is making a structural point, not a mystical one. It's not saying that using AI feels like praying to a god; it's saying the interaction pattern mirrors forms of ritualized inquiry: question → symbolic output → interpretive response.
You can disagree with the framing, but dismissing it as "I don’t like AI so I’m going to pontificate" sidesteps the actual claim. There's a meaningful difference between saying "this tool gives me answers" and recognizing that the process by which we derive meaning from the output involves human projection and interpretation, just like divination historically did.
This kind of analogy isn't an attack on AI. It’s an attempt to understand the human-AI relationship in cultural terms. That's worth engaging with, even if you think the metaphor fails.
> Hacker News deserves a stronger counterargument than “this is silly.”
Their counterargument is that said structural definition is overly broad, to the point of including any and all forms of symbolic communication (which is all of them). Because of that, your argument based on it doesn't really say anything at all about AI or divination, yet still seems 'deep' and mystical and wise. But this is a seeming only. And for that reason, it is silly.
By painting all things with the same brush, you lose the ability to distinguish between anything. Calling all communication divination (through your structural metaphor), and then using cached intuitions about 'the thing which used to be called divination, when it was a limited subset of the whole', is silly. You're not talking about that which used to be called divination, because you redefined divination to include all symbolic communication.
Thus your argument leaks intuitions (how that-which-was-divination generally behaves) that do not necessarily apply through a side channel (the redefined word). This is silly.
That is to say, if you want to talk about the interpretative nature of interaction with AI, that is fairly straightforward to show and I don't think anyone would fight you on it, but divination brings baggage with it that you haven't shown to be the case for AI. In point of fact, there are many ways in which AI is not at all like divination. The structural approach broadens too far too fast with not enough re-examination of priors, becoming so broad that it encompasses any kind of communication at all.
With all of that said, there seems to be a strong bent in your rhetoric towards calling it divination anyway, which suggests reasoning from that conclusion, and that the structural approach is but a blunt instrument to force AI into a divination shaped hole, to make 'poignant and wise' commentary on it.
> "I don’t like AI so I’m going to pontificate" sidesteps the actual claim
What claim? As per ^, maximally broad definition says nothing about AI that is not also about everything, and only seems to be a claim because it inherits intuitions from a redefined term.
> difference between saying "this tool gives me answers" and recognizing that the process by which we derive meaning from the output involves human projection and interpretation, just like divination historically did
Sure, and all communication requires interpretation. That doesn't make all communication divination. Divination implies the notion of interpretation of something that is seen to be causally disentangled from the subject. The layout of these bones reveals your destiny. The level of mercury in this thermometer reveals the temperature. The fair die is cast, and I will win big. The loaded die is cast, and I will win big. Spot the difference. It's not structural.
That implication of essential incoherence is what you're saying without saying about AI, it is the 'cultural wisdom and poignancy' feedstock of your arguments, smuggled in via the vehicle of structural metaphor along oblique angles that should by rights not permit said implication. Yet people will of course be generally uncareful and wave those intuitions through - presuming they are wrapped in appropriately philosophical guise - which is why this line of reasoning inspires such confusion.
In summary, I see a few ways to resolve your arguments coherently:
1. Keep the structural metaphor, discard cached intuitions about what it means for something to be divination (w.r.t. divination being generally wrong/bad and the specifics of how and why). This results in an argument of no claims or particular distinction about anything, really. It is what you get if you just follow the logic without cache invalidation errors.
2. Discard the structural metaphor and thus disregard the cached intuitions as well. There is little engagement along the human-AI cultural axis that isn't also human-human. AI use is interpretative, but so is all communication. Functionally the same as 1.
3. Keep the structural metaphor and also demonstrate how AIs are not reliably causally entwined with reality along boundaries obvious to humans (hard, because they plainly and obviously are, as demonstrable empirically in myriad ways), at which point go off about how using AI is divination, because at that point you could actually say so with confidence.
>When we forget that, we get nonsense like "the chatbot told him he was the messiah," as though language could be blamed for the projection.
Words have power, and those that create words - or create machines that create words - have responsibility and liability.
It is not enough to say "the reader is responsible for meaning and their actions". When people or planet-burning random matrix multipliers say things and influence the thoughts and behaviors of others there is blame and there should be liability.
Those who spread lies that caused people to storm the Capitol on January 6th believing an election to be stolen are absolutely partially responsible even if they themselves did not go to DC on that day. Those who train machines that spit out lies which have driven people to racism and genocide in the past are responsible for the consequences.
"Words have no essential meaning" and "speech carries responsibility" aren't contradictory. They're two ends of the same bridge. Meaning is always projected by the reader, but that doesn't exempt the speaker from shaping the terrain of projection.
Acknowledging the interpretive nature of language doesn't absolve us from the consequences of what we say. It just means that communication is always a gamble: we load the dice with intention and hope they land amid the chaos of another mind.
This applies whether the text comes from a person or a model. The key difference is that humans write with a theory of mind. They guess what might land, what might be misread, what might resonate. LLMs don’t guess; they sample. But the meaning still arrives the same way: through the reader, reconstructing significance from dead words.
So no, pointing out that people read meaning into LLM outputs doesn’t let humans off the hook for their own words. It just reminds us that all language is a collaborative illusion, intent on one end, interpretation on the other, and a vast gap where only words exist in between.
I agree with the substance, but would argue the author fails to "understand how AI works" in an important way:
> LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another
Modern chat-tuned LLMs are not simply statistical models trained on web-scale datasets. They are essentially fuzzy stores of (primarily third-world) labeling effort. The response patterns they give are painstakingly tuned into them, at massive scale, by data labelers. The emotional skill mentioned in the article comes from outsourced employees writing, or giving feedback on, emotional responses.
So you're not so much talking to a statistical model as having a conversation with a Kenyan data labeler, fuzzily adapted through a transformer model to match the topic you've brought up.
While the distinction doesn't change the substance of the article, it's valuable context, and it's important to dispel the idea that training on the internet alone does this. Such training gives you GPT-2. GPT-4.5 is efficiently stored low-cost labor.
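As a concrete illustration of what that labeling effort looks like as data, preference tuning is typically fed records along these lines, where a human has judged which of two candidate responses to the same prompt is better. The prompts and responses below are invented for illustration; real datasets differ in schema and scale:

```python
# Hypothetical preference records, invented for illustration only.
# The common shape: a prompt, two candidate responses, and a human judgment.
preference_data = [
    {
        "prompt": "My dog died yesterday and I can't focus at work.",
        "chosen": (
            "I'm so sorry for your loss. It's completely understandable "
            "that focusing feels impossible right now..."
        ),
        "rejected": "Dogs have an average lifespan of 10 to 13 years.",
    },
    {
        "prompt": "Explain recursion to a beginner.",
        "chosen": (
            "Recursion is when a function solves a problem by calling "
            "itself on a smaller piece of the same problem..."
        ),
        "rejected": "Recursion is recursion. See: recursion.",
    },
]

# A reward model is trained to score "chosen" above "rejected", and the base
# model is then tuned so its responses score highly under that reward model.
```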
I don't think those of us who don't work at OpenAI, Google, etc. have enough information to accurately estimate the influence of instruction tuning on the capabilities or the general "feel" of LLMs (it's really a pity that no one releases non-instruction-tuned models anymore).
Personally my inaccurate estimate is much lower than yours. When non-instruction tuned versions of GPT-3 were available, my perception is that most of the abilities and characteristics that we associate with talking to an LLM were already there - just more erratic, e.g., you asked a question and the model might answer or might continue it with another question (which is also a plausible continuation of the provided text). But if it did "choose" to answer, it could do so with comparable accuracy to the instruction-tuned versions.
Instruction tuning made them more predictable, and made them tend to give the responses that humans prefer (e.g. actually answering questions, maybe using answer formats that humans like, etc.), but I doubt it gave them many abilities that weren't already there.
Modern chat-oriented LLMs are not simply statistical models trained on web scale datasets. Instead, they are the result of a two-stage process: first, large-scale pretraining on internet data, and then extensive fine-tuning through human feedback. Much of what makes these models feel responsive, safe, or emotionally intelligent is the outcome of thousands of hours of human annotation, often performed by outsourced data labelers around the world. The emotional skill and nuance attributed to these systems is, in large part, a reflection of the preferences and judgments of these human annotators, not merely the accumulation of web text.
So, when you interact with an advanced LLM, you’re not just engaging with a statistical model, nor are you simply seeing the unfiltered internet regurgitated back to you. Rather, you’re interacting with a system whose responses have been shaped and constrained by large-scale human feedback—sometimes from workers in places like Kenya—generalized through a neural network to handle any topic you bring up.
Ya I don’t think I’ve seen any article going in depth into just how many low-level humans, like data labelers and RLHF’ers, there are behind the scenes of these big models. It has to be millions of people worldwide.
Right now there are top tier LLMs being produced by a bunch of different organizations: OpenAI and Anthropic and Google and Meta and DeepSeek and Qwen and Mistral and xAI and several others as well.
Are they all employing separate armies of labelers? Are they ripping off each other's output to avoid that expense? Or are there some other, less labor-intensive mechanisms that they've started to use?
There are middle-men companies like Scale that recruit thousands of remote contractors, probably through other companies they hire. There are of course other less-known such companies that also sit between the model companies and the contracted labelers and RLHF’ers. There are probably several tiers of these middle companies that agglomerate larger pools of workers. But how intermixed the work is and its scale I couldn’t tell you, nor whether it’s shifting to something else.
I mean, on LinkedIn you can find many AI trainer companies and see that they hire for every subject, language, and programming language across several expertise levels. They provide the laborers for the model companies.
> produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another
What does "thinking" even mean? It turns out that some intelligence can emerge from this stochastic process. LLM can do math and can play chess despite not trained for it. Is that not thinking?
Also, could it be that our brains do the same: generating muscle output or spoken output based on our senses and some "context" stored in our neural networks?
The transformers are accurately described in the article. The confusion comes in with the Reinforcement Learning from Human Feedback (RLHF) process applied after a transformer-based system is trained. These are algorithms on top of the basic model that make additional discriminations about the next word (or phrase) to follow, based on human feedback. It's really just a layer that makes these models sound "better" to humans. And it's a great way to muddy the hype response and make humans get warm fuzzies about the response of the LLM.
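To make the "statistically informed guesses" from the quoted passage concrete, here is a toy sketch of temperature-based next-token sampling. The vocabulary and logit values are made up, and this is not any real model's API; RLHF shifts which continuations end up with high scores, but the final step is still sampling from a probability distribution like this:

```python
import numpy as np

# Toy illustration: a hypothetical five-word vocabulary and made-up "logit"
# scores for the next token, standing in for what a trained model would
# produce after reading the context so far.
vocab = ["the", "cat", "sat", "on", "quantum"]
logits = np.array([2.1, 1.3, 0.4, -0.2, -2.0])

def sample_next_token(logits, temperature=0.8, rng=np.random.default_rng(0)):
    # Softmax with temperature: lower temperature sharpens the distribution
    # (more predictable), higher temperature flattens it (more surprising).
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Pick one token index according to its probability.
    return rng.choice(len(probs), p=probs)

print(vocab[sample_next_token(logits)])
```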
This is a good summary of why the language we use to describe these tools matters[1].
It's important that the general public understands their capabilities, even if they don't grasp how they work on a technical level. This is an essential part of making them safe to use, which no disclaimer or PR puff piece about how deeply your company cares about safety will ever do.
But, of course, marketing them as "AI" that's capable of "reasoning", and showcasing how good they are at fabricated benchmarks, builds hype, which directly impacts valuations. Pattern recognition and data generation systems aren't nearly as sexy.
People are paying hundreds of dollars a month for these tools, often out of their personal pocket. That's a pretty robust indicator that something interesting is going on.
One thing these models are extremely good at is reading large amounts of text quickly and summarizing important points. That capability alone may be enough to pay $20 a month for many people.
I'm not disputing the value of what these tools can do, even though that is often inflated as well. What I'm arguing against is using language that anthropomorphizes them to make them appear far more capable than they really are. That's dishonest at best, and only benefits companies and their shareholders.
"Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age."
Perhaps "AI" can replace people like Mark Zuckerberg. If BS can be fully automated.
The thesis is spot on with why I believe many skeptics remain skeptics:
> To call AI a con isn’t to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking—and, soon, feeling—machines.
Of course some are skeptical these tools are useful at all. Others still don’t want to use them for moral reasons. But I’m inclined to believe the majority of the conversation is people talking past each other.
The skeptics are skeptical of the way LLMs are being presented as AI. The non-hype promoters find them really useful. Both can be correct. The tools are useful and the con is dangerous.
> Few phenomena demonstrate the perils that can accompany AI illiteracy as well as “Chatgpt induced psychosis,” the subject of a recent Rolling Stone article about the growing number of people who think their LLM is a sapient spiritual guide. Some users have come to believe that the chatbot they’re interacting with is a god—“ChatGPT Jesus,” as a man whose wife fell prey to LLM-inspired delusions put it—while others are convinced, with the encouragement of their AI, that they themselves are metaphysical sages in touch with the deep structure of life and the cosmos. A teacher quoted anonymously in the article said that ChatGPT began calling her partner “spiral starchild” and “river walker” in interactions that moved him to tears. “He started telling me he made his AI self-aware,” she said, “and that it was teaching him how to talk to God, or sometimes that the bot was God—and then that he himself was God.”
This sounds insane to me. When we are talking about safe AI use, I wonder if things like this are talked about.
The more technological advancement goes on, the smarter we need to be in order to use it - it seems.
> Few phenomena demonstrate the perils that can accompany AI illiteracy as well as “Chatgpt induced psychosis,” the subject of a recent Rolling Stone article about the growing number of people who think their LLM is a sapient spiritual guide.
People have been caught in that trap ever since the invention of religion. This is not a new problem.
I think insane and lonely people are definitely on the safety radar.
Even if today's general-purpose models and models made by predators can have negative effects on vulnerable people, LLMs could become the technology that brings psych care to the masses.
Psychosis will find anything as a seed idea to express itself, even as basic a pattern as someone walking in lockstep with the soon-to-be patient can trigger a break. So it's not surprising that LLM chats would do the same.
We have that now, in social media. If you create some way for large numbers of people with the same nutty beliefs to easily communicate, you get a psychosis force multiplier. Before social media, nuttiness tended to be diluted by the general population.
I'll admit, the first time I started ollama locally, I asked it if I would hurt it if I turned it off. I have a master's degree in machine learning and I know it's silly, but I still did it.
What really happens: "for some reason" higher up management thinks AI will let idiots run extremely complex companies. It doesn't.
What AI actually does is like any other improved tool: it's a force multiplier. It allows a small number of highly experienced, very smart people to do double or triple the work they can do now.
In other words: for idiot management, AI does nothing (EXCEPT enable the competition)
Of course, this results in what you now see: layoffs in which, as always, the idiots survive, followed by the products of those companies starting to suck more and more, because they laid off the people who actually understood how things worked, and AI cannot make up for that. Not even close.
AI is a mortal threat to the current crop of big companies. The bigger the company, the bigger a threat it is. The skill high-level managers tend to have is to "conquer" existing companies, and nothing else. With some exceptions, they don't have any skill outside of management, and so you get the eternally repeated management song: that companies can be run by professional managers who don't know the underlying problem or business, "using numbers" and spreadsheets (except that when you know a few such managers and press them, it of course turns out they don't have a clue about the numbers and can't come up with basic spreadsheet formulas).
TLDR: AI DOESN'T let financial-expert management run an airplane company. AI lets 1000 engineers build 1000 planes without such management. AI lets a company like what Google was 15-20 years ago wipe the floor with a big airplane manufacturer. So expect big management to come up with ever more, ever bigger reasons why AI can't be allowed to do X.
> What AI actually does is like any other improved tool: it's a force multiplier. It allows a small number of highly experienced, very smart people, do double or triple the work they can do now.
It's different from other force-multiplier tools in that it cuts off the pipeline of new blood while simultaneously atrophying the experienced and smart people.
Resonates. I've been thinking about a technology bubble (not a financial bubble) for years now. Big tech companies have just been throwing engineers at problems for many years, and it feels like they have completely stopped caring about talent now. Not that they ever really cared deeply, but they did care superficially, and that was enough to keep the machines spinning.
Now that they have AI, I can see it becoming an 'idiocy multiplier'. Already software is starting to break in subtle ways; it's slow and laggy, and security processes have become a nightmare.
I can tell the difference between those versions of Claude quite easily. Not 10x better each version, but each is more capable and the errors are fewer.
There is certainly a difference; however, Anthropic did really well with 3.5, far, far better than any other provider could do, so the steps from there have been more incremental while other providers have been playing catch-up (for example, Google's Gemini 2.5 Pro is really their first model that's actually useful for coding in any way).
Is the internet today better than it's ever been? I can easily imagine LLMs becoming ad-ridden, propagandized, Cloudflare-intermediated, enshittified crap the way most of the internet has.
What would labeling even do for an LLM? (Not including multimodal)
The whole point of attention is that it uses existing text to determine when tokens are related to other tokens, no?
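For reference, a minimal NumPy sketch of scaled dot-product attention with made-up toy vectors (not weights from any real model): each token's output is a weighted mix of the value vectors, and the weights come from how similar its query is to every key, which is the "which tokens relate to which other tokens" part.

```python
import numpy as np

# Toy illustration: three tokens with 4-dimensional query/key/value vectors.
# The numbers are random placeholders, not anything from a real model.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # one query vector per token
K = rng.normal(size=(3, 4))  # one key vector per token
V = rng.normal(size=(3, 4))  # one value vector per token

def attention(Q, K, V):
    d = Q.shape[-1]
    # Similarity of every query with every key, scaled by sqrt(dimension).
    scores = Q @ K.T / np.sqrt(d)
    # Row-wise softmax: how strongly each token attends to every other token.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted average of the value vectors.
    return weights @ V, weights

outputs, weights = attention(Q, K, V)
print(weights.round(2))  # each row sums to 1
```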
For example, numbers are the difference between a bridge collapsing or not.
So what I use today is the same AI I used last year. And based on the current trajectory, it is the same AI I will use next year.
The best bugs are the ones that aren't found for 5 years.
Future AIs will be more powerful, but probably influenced to push users to spend money or hold a political opinion. So they may enshittify...
Ultimately these machines work for the people who paid for them.
It's like if we'd said the Youtube we used in 2015 was going to be the worst Youtube we'd ever use.