Why Elon Musk thinks ChatGPT’s security layer is a ‘big problem’



Elon Musk commented on Monday on a video by entrepreneur and investor David Sacks, in which Sacks analyzed the “security layer” of ChatGPT, OpenAI’s artificial intelligence-based chatbot.

What happened: The Twitter and Tesla CEO called it a “big problem” in response to a Twitter thread containing comments from Sacks.

“There’s growing evidence that OpenAI’s security layer is very biased… If you thought trust and safety were bad under Vijaya or Yoel, wait until the AI does it,” said Sacks, apparently a reference to controversial content-moderation decisions made by former Twitter General Counsel and Head of Legal Vijaya Gadde and former Twitter Head of Trust & Safety Yoel Roth.

See also: How to invest in AI startups

Why it matters: The thread, which Musk labeled “very important,” contains tweets examining various facets of ChatGPT’s alleged bias, ranging from political bias to the bot’s refusal to utter a “racial slur” even in order to defuse a nuclear bomb about to explode.

One of the tweets included comments from Sacks noting that OpenAI was launched after Musk “warned that AI was taking over the world, and he donated a huge sum to create a non-profit organization to advance AI ethics.”

Musk was responding to ChatGPT labeling him “controversial” on Sunday and putting him in the same bucket as Russian President Vladimir Putin and North Korean leader Kim Jong Un.

Read next: Edward Snowden takes aim at Elon Musk over the ban of his wife’s Twitter account

Photo courtesy of Thomas Hawk
