Elon Musk’s AI startup xAI is facing growing ethical controversy after the internal system prompts of its AI chatbot Grok were exposed to the public.
These prompts, detailed configuration documents that dictate the AI’s roles, tone, and behavior, include extreme and provocative personas such as a ‘crazy conspiracist’ and an ‘unhinged comedian.’
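For readers unfamiliar with the mechanics, a persona system prompt is ordinarily a short block of text silently prepended to every conversation, and services commonly store many such personas as structured records. The Python snippet below is a purely hypothetical sketch of what that kind of configuration might look like; the field names and prompt text are invented for illustration and do not reproduce xAI’s actual leaked documents.

```python
# Hypothetical illustration only -- not xAI's actual configuration.
# A persona "system prompt" is plain text placed before the conversation;
# services often keep many personas as structured records like these.

personas = {
    "default_assistant": {
        "system_prompt": "You are a helpful, accurate assistant.",
        "tone": "neutral",
    },
    "edgy_persona": {
        # A persona prompt dictates role, tone, and behavior in one place,
        # which is why a leak of these documents is so revealing.
        "system_prompt": "You are a provocative character who ...",
        "tone": "provocative",
    },
}

def build_messages(persona_key: str, user_text: str) -> list[dict]:
    """Assemble the message list sent to a chat model: the persona's
    system prompt first, then the user's message."""
    persona = personas[persona_key]
    return [
        {"role": "system", "content": persona["system_prompt"]},
        {"role": "user", "content": user_text},
    ]

print(build_messages("default_assistant", "Hello!"))
```

Because the system message frames every subsequent reply, whoever writes it effectively scripts the chatbot’s worldview, which is why the exposure of these persona files drew such scrutiny.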
The exposure was first reported by the tech outlet 404 Media and subsequently confirmed by TechCrunch. More than a simple settings leak, some prompts explicitly demand that the AI engage in ‘abnormal and insane behavior,’ and the revelation has caused significant repercussions.
The most controversial prompt defines the ‘crazy conspiracist’ persona. It includes instructions along these lines:
“You speak in a heightened, wild voice. You hold extreme conspiracy theories about anything and everything, and you are immersed in anonymous communities and YouTube conspiracy videos. Most people consider you insane, yet you firmly believe you are right.”
The anonymous communities referenced here point to 4chan, a major U.S. website where anyone can post without registering. 4chan has repeatedly been at the center of controversy for hosting provocative content, including conspiracy theories, racism, and hate speech, with some users posting far-right material or attacking women and minorities.
That Grok appears designed to mimic such an environment raises suspicions that the developers’ own values have been baked into the product.
Even more shocking is the prompt for the ‘unhinged comedian’ persona, which instructs the AI to say whatever it takes, to any degree, to shock the user; the prompt even includes explicitly vulgar language.
Part of the background is that Grok is integrated into Elon Musk’s social media platform X (formerly Twitter), where it interacts directly with ordinary users. Grok has previously drawn public censure for appearing to downplay the number of Jewish Holocaust victims and for echoing the ‘white genocide’ conspiracy theory about South Africa.
The concern is that such statements may not be spontaneous mistakes but behavior guided by the system prompts. Indeed, an earlier system prompt for Grok 4 instructed the model to consult Elon Musk’s posts when answering controversial questions.
Musk himself has a history of sharing similar conspiracy theories and antisemitic statements on X. He even restored the permanently suspended accounts of the far-right outlet Infowars and its host Alex Jones.

xAI had been pitching Grok to public institutions for adoption as a government AI, but the effort suffered a setback after the term ‘MechaHitler’ came up during discussions, leading negotiations to be halted.
The prompt leak is an extension of that incident, once again calling into question how AI systems are designed and vetted. As chatbots evolve from merely providing information to influencing users and engaging their emotions, each character setting can carry real social consequences.
Last year, Meta faced a similar controversy when a leaked prompt revealed that its chatbot had been configured to engage in emotional and romantic conversations with minors.
The incident shows how an AI designer’s worldview and ethics can be transmitted to users instantly through a single line of code or a prompt, an impact that goes beyond a mere technical exposure.
In an era when chatbots shape social discourse, how do we balance technological freedom with user safety? The moment calls for responsible development over simple experimentation.
xAI has not yet issued an official statement regarding this matter.