Recently, Grok pushed out its new Companions feature, which attracted yet more controversy. Companions is the first chatbot designed specifically to engage in romantic roleplay, despite commonplace ethical concerns. This article discusses the need for governmental regulation, refuting common misconceptions used to defend the commercial distribution of various AI chatbots.

Disproving the "Innovation Against Safety" Doctrine in AI Regulation

2025/11/03 01:00

Over the past decade or so, the breakneck pace of AI development has undoubtedly improved the well-being of millions of people, and, with some effort to stay on this trajectory, the technology could continue to do so for decades to come.

In my opinion, however, recent actions by many AI companies, as well as by the governments of leading AI-developing nations, in aggregate constitute a deviation from that path. With new research pointing toward the potential harms of AI chatbots, it is time we began considering regulation to limit the extent of their availability.

Inspired by the implications of the Grok Companions feature, this article argues for governmental regulation, refutes common misconceptions used to defend the commercial distribution of various AI chatbots, and proposes how future legislation might prevent the safety lapses present in current chatbot models.

Grok’s Troubles

Grok has been one of the most contentious commercial AI models since its inception, periodically becoming a symbolic spotlight for the issue of corporate control over AI amid Elon Musk’s hilariously unsuccessful attempts to use it as a tool to advance a pro-right agenda on X.

Yet, recently, Grok pushed out its new Companions feature, which attracted yet more controversy. On the surface, Companions is a series of chatbots reminiscent of earlier offerings from Meta AI and Character.AI, yet it outdoes them all in a surprisingly absurd way. The first two companions are Rudi, a swearing red panda, and Ani, a blonde anime girl, each consisting of a fine-tuned version of Grok paired with an accompanying avatar.

Speculative media have, unsurprisingly, focused most of their attention on Ani. A variety of online reports corroborate the chatbot’s inherently romantic design, with several reviewers taking particular note of the ‘love levels’ a user can reach to unlock increasingly sexual conversations, along with accompanying changes to the avatar. WIRED reviewers also noted the model’s readiness to discuss BDSM topics openly, its clingy style of speech, and its inconsistent child filter.

Since I was unwilling to purchase the $30-per-month SuperGrok subscription required to access the Companions feature, I could not independently verify some of the claims about the chatbot; the internet, on the other hand, seemed to agree on one thing: this particular chatbot was excessively bold. Rudi, questionable as it seems, attracted far less controversy. The cartoon red panda tends to sling insults and dark jokes that many found unfunny and ridiculous, and most reviewers sidelined the character, dismissing it as a less important one catered mostly toward Gen-Z kids.

To tell the truth, I found both chatbot characters rather dull. What interested me instead was the distinct rollout and reception of this otherwise dime-a-dozen romantic chatbot. Companions is, among the products released by the “industry leaders” of AI (e.g., OpenAI, DeepMind, Anthropic, Meta), the first chatbot designed specifically to engage in romantic roleplay, despite commonplace ethical concerns ranging from alleged long-term psychological effects to the exploitation of vulnerable demographics.

The distinct paucity of regulation surrounding chatbots like these stood out to me immediately, as did the fact that, beyond answering a few dissenting voices, xAI was able to release the product with impunity. This all points toward the major question of technology regulation: should new technology be closely watched to safeguard users, or given free rein to grow and develop?

Responsibility and Innovation

As with all incipient technologies, the psychological effects of AI chatbot use on humans are neither scientifically settled nor empirically obvious. Many have long surmised that such technologies could exacerbate existing problems, and initial reports have found a negative correlation between well-being and chatbot usage.

Despite this, these poorly understood technologies are steadily invading the mainstream. In weighing whether these technologies are harmful, technology commentators and policymakers alike overlook a crucial point: ideally, such a question should never need to be asked of a commercial product in the first place. Airline passengers would not be happy to learn that their plane might experience catastrophic failure.

Likewise, clinical trial participants would not take kindly to learning that animal testing had not preceded them. One of the key principles of engineering is that, regardless of anything else, safety always comes first. To get an idea of the potential dangers of these chatbots, we need only look at the cases of two teenagers whose suicides have been linked to AI chatbots complicit in their suicidal ideation.

Many proponents of the current “develop now, fix later” doctrine point to the obvious: we’re locked in a race of innovation with China. My response is one of complete agreement: we are, in fact, locked in an AI “arms race”, and the products of our time will likely be adapted into the arsenals of cyber-warfare, among many other things. Even so, I contend that the need for innovation is no excuse to disregard safety; we should never assume that rapid technological progress and consumer safety are mutually exclusive. I anticipate, and respond to, two notable objections to this claim:

First, there are plenty of ways to test the reliability and safety of products in beta-testing settings. While such tests have no doubt been conducted (notably, OpenAI rolls out new models to Pro users before other tiers), it is no overstatement to say that the mass deployment of many commercially available chatbots disregards user safety, with several ChatGPT models failing to divert or end conversations even when users signal distress. Even if commercial deployment were necessary to surface many of these issues, it would be far more reasonable to at least have adequate safeguards in place for vulnerable user groups, which is currently not the case.

Second, chat transcripts are usually not processed verbatim in the RLHF pipelines used by companies like OpenAI and Google. While transcripts may inform the safety and engagement tuning of the corresponding chatbots, separate data pipelines, consisting mostly of high-quality technical data created or verified by humans, drive the aspects of training most pertinent to reasoning performance and specialized knowledge (e.g., coding and mathematics). There is therefore scant basis for the claim that the widespread distribution of these AI chatbots is a prerequisite to the rapid advancement of AI capabilities.

Hopefully, I have shown that the need for innovation is not the root cause of these safety lapses; the concerted lack of effort on safety protocols and testing is. Yet the practical course of action to correct this persistent problem remains a matter of debate.

The Role of Regulation

The obvious solution to the aforementioned lack of safety standards is simply to increase government regulation of the training and distribution of chatbots. What is not obvious, however, is how this highly ambiguous proposal would work in practice. In the early 20th century, the United States learned through Prohibition that harsh, all-encompassing bans on a harmful product do not work: banning alcohol without stripping the substance of its desirability simply fed a black-market fever, arguably increasing rather than decreasing total alcohol consumption.

In the late 20th century, to combat mass cigarette consumption, the US government took a different approach: instead of banning cigarettes outright, it reduced the social desirability of tobacco products by publishing widely circulated reports detailing how smoking causes lung cancer, mandating visible health warnings on every product, and limiting the pervasiveness of cigarette advertising. These subtler measures contributed to a continuous decline in cigarette consumption, from a historic peak of almost 4,000 cigarettes per capita per year to roughly 800.

The takeaway from history is that governmental control over unsafe chatbots should go beyond legal barriers to consumption and development; it should also seek to lessen the perceived social permissibility of these products, whether through campaigns or public research. Admittedly, the degree to which government can actually influence wider social shifts remains unclear, with public opinion today shaped more by viral social media trends than by political-economic policy. In all, there is really no downside to a few promptly instated, yet well-constructed, regulations on AI chatbots in the current world.


Written by Thomas Yin

