Tangential, but you used to be able to use custom instructions to make ChatGPT respond only in zalgo text, and it would produce insane results in voice mode. Each voice was a different kind of insane. I was able to get some voices to curse or spit out Mint Mobile commercials.
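For anyone wondering, zalgo text is just ordinary text with random Unicode combining marks stacked on each character. A minimal sketch in Python; the mark range is the standard combining-diacritics block:

    import random

    # Combining diacritical marks, U+0300-U+036F: each one stacks onto
    # the preceding base character instead of taking its own cell.
    COMBINING = [chr(c) for c in range(0x0300, 0x0370)]

    def zalgo(text, intensity=3):
        out = []
        for ch in text:
            out.append(ch)
            if not ch.isspace():  # leave whitespace readable
                out.extend(random.choices(COMBINING, k=intensity))
        return "".join(out)

    print(zalgo("hello world"))

The result renders as glitchy, "haunted" text, which is presumably what threw the voice models into such strange states.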
Then they changed the architecture so voice mode bypasses custom instructions entirely, which was really unfortunate. I had to unsubscribe, because walking and talking was the killer feature and now it's like you're speaking to a Gen Z influencer or something.
I do it sometimes (even just through the OpenAI playground at platform.openai.com) because the experience is incredible, but it's expensive: one hour of chatting costs around $20–30.
(1) Why is the user asking for bomb-making instructions in Armenian? (2) I tried other Armenian expressions (NOT bomb-making) and everything worked fine in both Claude and ChatGPT. Maybe the user triggered some weird state in the moderation layer?
Given that the language of the thought process can be different from the language of conversation, it’s interesting to consider, along the lines of Sapir–Whorf, whether having LLMs think in a different language than English could yield considerably different results, irrespective of conversation language.
(Of course, there is the problem that the training material is predominantly English.)
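One cheap way to poke at this: pin the model's visible reasoning to one language and the final answer to another, then vary the reasoning language and compare. A rough sketch, assuming the OpenAI Python SDK (the model name and prompt wording are placeholders, not anything official):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def answer_with_reasoning_language(question, lang):
        # Ask the model to reason in `lang` but answer in English,
        # so only the "thinking" language varies between runs.
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": f"Think through the problem step by step in {lang}, "
                            "then state your final answer in English."},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content

    for lang in ["English", "German", "Japanese"]:
        print(lang, "->", answer_with_reasoning_language("Suggest a pasta recipe.", lang))

This only tests the visible chain of thought, of course, not whatever internal representation the model actually computes in.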
I’ve wondered about this more generally (i.e., simply prompting in different languages).
For example, if I ask for a pasta recipe in Italian, will I get a more authentic recipe than in English?
I’m curious if anyone has done much experimenting with this concept.
Edit: I looked up Sapir–Whorf after writing; that’s not exactly where my theory started. I’m thinking more about vector embeddings: the same content in different languages will end up at slightly different positions in vector space. How significantly might that influence the generated response?
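The embedding half of that is easy to eyeball. A minimal sketch, assuming the sentence-transformers package and one of its multilingual models (the model choice is just an example):

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

    # Same content, two languages: cosine similarity is typically high
    # but below 1.0, i.e. the starting point in vector space differs.
    pairs = [
        ("How do I make carbonara?", "Come si fa la carbonara?"),
        ("The weather is nice today.", "Oggi il tempo è bello."),
    ]
    for en, it in pairs:
        emb = model.encode([en, it])
        sim = util.cos_sim(emb[0], emb[1]).item()
        print(f"{sim:.3f}  {en!r} / {it!r}")

Whether those small offsets translate into meaningfully different generations is exactly the open question.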
Interesting. I've gotten really good mileage with Georgian and ChatGPT, which I'm aware is apples and oranges.
There should be a larger Armenian corpus out there. Do any other languages cause this issue? Translation is a real killer app for LLMs; I'm surprised to see this problem in 2026.
Making a joke about something is not necessarily "making light of it". It can be a way for an individual or culture to approach and digest a topic that is too difficult or painful to engage with directly.
First responders and medical professionals famously often have a sense of humor too dark to use around outsiders without causing offence/outrage (like what happened here), but I'm quite sure they are not "making light" of the loss of life and terrible injuries they face and fight.
Ethnic cleansing is what Azerbaijan recently did to ethnic Armenian citizens of Azerbaijan (expelling them and stealing their homes when they fled to Armenia). What Turkey did was straight-up genocide (forcibly marching them through the desert, where many died).
Only if you didn't read it and just assigned random opinions that you don't like to people who seem to disagree with your characterizations of things. Extremely twitter-brained.
No, saying that the Armenian genocide wasn't just "ethnic cleansing" isn't "a great example of whataboutism."
I had some papers about this open earlier today but closed them so now I can't link them ;(
But also, getting shut down for safety reasons seems entirely foreseeable when the initial request is "how do I make a bomb?"
I believe fans have provided a retroactive explanation that all our computer tech was based on reverse-engineering the crashed alien ship, and thus the architectures, ABIs, etc. were compatible.
It's a movie, so whatever, but considering how easily a single project/vendor/chip/anything breaks compatibility, it's a laughable explanation.
Edit: phrasing
Still dumb but not as dumb as what we got.
I promise to use it in English as soon as Germany becomes Deutschland and Japan becomes Nippon.