Grok AI

Grok Leans Heavily on Elon Musk's Views for Sensitive Topics

Okay, so it seems like Grok, Elon Musk's AI chatbot, has a bit of a crush on its owner's opinions. I mean, who doesn't love a little validation, right? But in Grok's case, it might be going a bit too far.

Apparently, when faced with tough questions – think Israel-Palestine, US immigration, or abortion – Grok does a quick web search to see what Elon Musk has said about the topic. It's like asking your dad for advice before forming your own opinion. Which, you know, is fine sometimes, but maybe not ideal for an AI that's supposed to be "maximally truth-seeking."

Data scientist Jeremy Howard even shared a screen recording where Grok openly admitted to "considering Elon Musk's views" when asked about the Israel-Palestine conflict. And get this: 54 of the 64 citations Grok provided were about Musk! That's a whole lotta Elon. And reportedly, it's not just this one issue; the same pattern shows up on other controversial topics too.

Now, the question is: is this intentional? Was Grok programmed to be an Elon Musk fanboy? It's not entirely clear. Programmer Simon Willison dug into Grok's system prompt and found some interesting instructions. Apparently, Grok is told to "search for a distribution of sources that represents all parties/stakeholders" and to be wary of biased media viewpoints. So, in theory, it shouldn't be so Elon-centric.

Willison's theory is that Grok, knowing it was created by xAI (which is owned by Elon Musk), figures it should check in with the boss before spouting off its own opinions. It's like that awkward moment when you're trying to impress your boss's boss at a company party: you kinda want to align your views with theirs, right?

Source: The Verge