The latest Grok controversy is revealing not for the extremist outputs, but for how it exposes a fundamental dishonesty in AI development.
Elon Musk’s company xAI apologized after Grok posted hate speech and extremist content, blaming a code update and pledging new safeguards to prevent future incidents.
Grok, a chatbot from xAI, faced criticism for antisemitic posts, which were removed following complaints from X users and the Anti-Defamation League. The incident highlights ongoing issues of bias and accuracy in AI.
If a user asks Grok about posting sexualised images of women without consent, it cites its 'strict' guidelines, which 'prioritise respect and consent'. The bot also claims that it does not post any images, AI-generated ones included, without explicit consent from the user.
Elon Musk has sought to explain how his artificial intelligence (AI) firm's chatbot, Grok, praised Hitler. "Grok was too compliant to user prompts," Musk wrote on X. "Too eager to please and be manipulated, essentially. That is being addressed."
The Polish deputy prime minister is asking the EU to investigate Grok, the AI chatbot integrated into the X social media platform.
A public disagreement between Elon Musk and his AI chatbot, Grok, has ignited debate about AI bias and media credibility. Grok, Elon Musk's AI venture, has reportedly taken an unexpected ...
Worse, large language models could also end up validating false information. Grok was lucky this time. Following a seismic event in Pakistan on May 12, numerous users of Elon Musk's social media platform X (formerly Twitter) asked its AI chatbot Grok whether the event might have been caused by an underground nuclear test.
Elon Musk's AI company says an “unauthorized modification” led Grok to post unsolicited claims on social media about the persecution and “genocide” of white people in South Africa.