AI models have preferred numbers, because they think they are people

AI models are always surprising us, not only with what they can do, but with what they can’t, and why. An interesting new behavior is both superficial and revealing about these systems: they pick random numbers as if they were people.

But first, what does that mean? Can’t people just pick a number at random? How do you know if someone is doing it successfully or not? This is actually an old and very well-known limitation of us humans: we overthink and misunderstand randomness.

Ask someone to predict heads or tails for 100 coin flips, and compare that to 100 actual coin flips – you can almost always tell the difference because, counter-intuitively, the real coin flips look less random. There will often be, for example, six or seven heads or tails in a row, something almost no human predictor includes in their hundred.

It’s the same when you ask someone to choose a number between 0 and 100. People almost never choose 1 or 100. Multiples of 5 are rare, as are numbers with repeating digits such as 66 and 99. They most often choose numbers ending in 7, generally from somewhere in the middle.

There are countless examples of this type of predictability in psychology. But that doesn’t make it any less strange when AI does the same thing.

Some curious engineers at Gramener ran an informal but nonetheless fascinating experiment in which they simply asked several major LLM chatbots to choose a random number between 0 and 100.

Reader, the results were not random.

Image credits: Gramener

All three models tested had a ‘preferred’ number that would always be their answer when set to the most deterministic mode, but which still appeared most often even at higher ‘temperatures’, the setting that increases the variability of their results.
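A minimal sketch of why that happens. The logit scores below are invented for illustration (47 is simply given the highest score, echoing GPT-3.5 Turbo’s favorite); real models compute such scores over their whole vocabulary, but the temperature mechanism works the same way:

```python
import math
import random

# Hypothetical, made-up scores for how strongly a model "prefers" each
# answer. 47 gets the biggest score, standing in for GPT-3.5's favorite.
logits = {42: 2.0, 47: 3.5, 57: 1.5, 72: 1.0, 37: 1.8}

def sample(logits, temperature):
    # At temperature 0 the model is fully deterministic: the highest-scoring
    # answer wins every time, so the 'preferred' number always comes out.
    if temperature == 0:
        return max(logits, key=logits.get)
    # Otherwise: softmax with temperature. Higher temperature flattens the
    # distribution, increasing the variability of the results.
    weights = {n: math.exp(score / temperature) for n, score in logits.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for n, w in weights.items():
        r -= w
        if r <= 0:
            return n
    return n

print(sample(logits, 0))    # deterministic: always the preferred number
print(sample(logits, 2.0))  # higher temperature: other numbers appear too
```

Even at high temperature, note that only numbers with a score can ever be sampled; nothing outside the model’s learned preferences shows up.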

OpenAI’s GPT-3.5 Turbo really likes 47. Previously, it liked 42 — a number made famous, of course, by Douglas Adams in The Hitchhiker’s Guide to the Galaxy as the answer to life, the universe, and everything.

Anthropic’s Claude 3 Haiku went with 42. And Gemini likes 72.

More interestingly, all three models showed a human-like bias in the numbers they chose, even at high temperatures.

All tended to avoid low and high numbers; Claude never went above 87 or below 27, and even those were outliers. Repeating digits were scrupulously avoided: no 33, 55, or 66 appeared, though 77 did (it ends in 7). Round numbers were almost entirely absent, although Gemini did, once, at the highest temperature, go wild and choose 0.

Why should this be? These AIs aren’t human! Why would they care what “seems” random? Have they finally achieved consciousness, and is this how they show it?!

No. The answer, as is often the case with these things, is that we are anthropomorphizing a step too far. These models don’t care about what is and isn’t random. They don’t know what “randomness” is! They answer this question the same way they answer everything else: by looking at their training data and repeating what was most often written after a question that looked like “pick a random number.” The more often something appears, the more often the model repeats it.
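That mechanism can be sketched in a few lines. The list of human answers below is a toy, invented stand-in for training data, biased the way the article describes (middle numbers, lots of 7s):

```python
import random
from collections import Counter

# Toy stand-in for training data: hypothetical human answers to
# "pick a random number between 0 and 100", biased the way humans are.
human_answers = [37, 47, 47, 57, 73, 47, 42, 67, 37, 77, 47, 63]

counts = Counter(human_answers)
numbers = list(counts)
weights = [counts[n] for n in numbers]

# "Repeating what is often written": sample answers in proportion to how
# often each one appeared. 100 was never seen, so it can never come out.
pick = random.choices(numbers, weights=weights, k=1)[0]
print(pick)
```

The point of the sketch: an answer that never occurs in the data has zero weight, so no amount of sampling will ever produce it.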


Where would they see 100 in their training data, if almost no one ever responds that way? As far as the AI model knows, 100 is not an acceptable answer to that question. With no ability to actually reason, and no understanding of numbers at all, it can only answer like the stochastic parrot it is.

It’s an object lesson in LLM habits, and the humanity they can appear to show. In every interaction with these systems, one must keep in mind that they have been trained to act the way people act, even when that wasn’t the intent. That’s why this kind of false anthropomorphism is so difficult to avoid or prevent.

I wrote in the headline that these models “think they’re people,” but that’s a bit misleading. They don’t think at all. But in their responses, at all times, they are imitating people, without needing to know or think anything. Whether you ask one for a chickpea salad recipe, investment advice, or a random number, the process is the same. The results feel human because they are human, drawn directly from human-generated content and remixed, for your convenience, and of course for big AI’s bottom line.
