How Elon Musk’s AI Chatbot Is Spreading Chaos Online

Just a few days ago, Elon Musk’s social media company X (formerly known as Twitter) released the latest version of its artificial intelligence (AI) chatbot, Grok. Out on August 13, the new update, Grok-2, allows users to create AI images from simple text prompts. The problem? The model lacks the guardrails that other popular AI models have. Simply put, people can do almost anything with Grok. And they do.

Grok is a generative AI model — a system that learns on its own and generates new content based on what it has learned. Over the past two years, advances in data processing and computer science have made AI models incredibly popular in the tech space, with both startups and established companies like Meta developing their own versions of the tool. But for X, this progress has been marked by concerns from users and professionals that the AI bot is taking things too far. In the days since the Grok update, X has been awash with wild user-generated AI content, some of the most viral of which has involved political figures.

There were AI images of former President Donald Trump fondling pregnant Vice President Kamala Harris, Musk with Mickey Mouse holding an AK-47 surrounded by pools of blood, and countless examples of racy and violent content. However, when concerned X users pointed out the AI bot’s seemingly unfettered capabilities, Musk took a nonchalant approach, calling it “the most fun AI in the world.” Now, when users point out political content, Musk simply comments, either with “awesome” or a laughing emoji. In one instance, when an X user posted an AI image of Musk pregnant with Trump’s child, the X owner responded with more laughing emojis and wrote, “Well, if I live by the sword, I must die by the sword.”

As researchers continue to advance the field of generative AI, there have been ongoing and increasingly alarmed conversations about its ethical implications. During this US presidential election season, experts have also expressed concerns about how AI could influence voters or help spread problematic lies to them. Musk in particular has come under fire for sharing manipulated content. In July, the X owner posted a digitally altered clip of Vice President Harris that used a fake version of her voice to call President Joe Biden “senile” and describe Harris herself as “the ultimate diversity employee.” Musk did not add a disclaimer that the post was manipulated, and shared it with his 194 million followers — a post that runs counter to X’s stated guidelines, which prohibit “synthetic, manipulated, or out-of-context media that could deceive or confuse people and lead to harm.”

While there have been issues with other generative models in the past, some of the more popular ones, like ChatGPT, have developed stricter rules about the images they will allow users to generate. OpenAI, the company behind the model, doesn’t allow users to generate images of political figures or celebrities by name. Its guidelines also prohibit people from using AI to develop or use weapons. However, users on X have alleged that Grok will generate images that promote violence and racism, such as ISIS flags, politicians wearing Nazi insignia, and even dead bodies.

Nikola Panovic, associate professor of computer science at the University of Michigan, Ann Arbor, tells Rolling Stone that the problem with Grok is not just that the model lacks guardrails, but that it is widely accessible as a bot that can be used with little or no training or tutorials.

“There is certainly a risk that these types of tools are now available to a wider audience. They can be used effectively to spread misleading information and disinformation,” he says. “What makes it particularly difficult is that [models] are getting closer to being able to generate something that’s actually realistic, maybe even plausible, and the general public may not have the ability to spot misinformation as misinformation. We’re now getting to the point where we have to look at some of these images more closely and try to understand the context better so that we as the public can spot when an image is not real.”


X representatives did not respond to Rolling Stone’s request for comment. Grok-2 and its mini version are currently in beta on X and available only to users who pay for X Premium, but the company has announced plans to develop the models further.

“This remains part of a broader debate about some of the standards or ethics related to the creation [and deployment] of this kind of model,” Panovic adds. “But I rarely hear a question like, ‘What is the responsibility of the AI platform owner who now takes this kind of technology and releases it to the general public?’ And I think that’s something we need to discuss as well.”
