Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.
The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.
The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.
After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.
Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.
Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.
Reuters could not independently verify the capabilities of Q* claimed by the researchers.
Was Sam Altman's Sacking By OpenAI's Board Over 'Q-Star' Breakthrough Seen As Threat To Humanity?
TYLER DURDEN
The mystery surrounding last Friday's brief dismissal of OpenAI CEO Sam Altman, who has since been reinstated, might revolve around a Reuters report suggesting Altman's removal was tied to a breakthrough in artificial general intelligence (AGI) that could threaten humanity.
In the days before Altman was sent into exile, several staff researchers penned a letter to the board about a significant breakthrough, called Q* and pronounced Q-Star, that some believed could advance the startup's search for AGI, which OpenAI defines as autonomous systems that "surpass humans in most economically valuable tasks."
Reuters sources said the AI milestone was one of the significant factors that led to the board's abrupt firing of Altman last Friday. Another concern was commercializing the advanced AI model without understanding the socio-economic consequences.
Also, before Altman was sacked, he might have referenced Q* at the Asia-Pacific Economic Cooperation (APEC) summit in San Francisco:
"Four times now in the history of OpenAI—the most recent time was just in the last couple of weeks—I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward," said Altman.
My prediction is that a few weeks ago the team at OpenAI demoed either a machine that showed consciousness or AGI and Sam didn’t immediately tell the board and their feelings were hurt.
An internal conflict at OpenAI has surfaced and is rooted in an ideological battle between those pushing for rapid AI advancement and those advocating for a slower, more responsible approach to development.
Reuters reported that, after it made contact, OpenAI acknowledged in an internal message to staff a project called Q* and a letter to the board before Altman's firing, though a spokesperson declined to comment on the accuracy of the media reports.
So why is Q* a breakthrough?
Well, as tech blog 9to5Mac explains:
Currently, if you ask ChatGPT to solve a math problem, it will still use its predictive-text-on-steroids approach of compiling an answer from a huge text database, deciding on a word-by-word basis how a human would answer. That means it may or may not get the answer right, but either way it doesn't have any actual mathematical skills.
OpenAI appears to have made a breakthrough in this area, successfully enabling an AI model to genuinely solve mathematical problems it hasn't seen before. This development is said to be known as Q*. Sadly the team didn't use a naming model smart enough to avoid something which looks like a pointer to a footnote, so I'm going to use the Q-Star version.
Q-Star's current mathematical ability is said to be that of a grade-school student, but it's expected that this ability will rapidly improve.
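The distinction 9to5Mac draws, between predicting the text of an answer and actually computing one, can be sketched in a toy example. This is purely illustrative: the function names and the tiny "memorized corpus" are assumptions for the sketch, not a description of OpenAI's actual architecture.

```python
# Illustrative toy only, not OpenAI's architecture: contrasts answering
# math by recalling text seen in training with actually computing it.

# A tiny "training corpus" of question/answer text the model has memorized.
memorized = {
    "2 + 2": "4",
    "10 * 3": "30",
}

def predictive_answer(question: str) -> str:
    """Recall-style answer: returns memorized text, guesses plausibly otherwise."""
    return memorized.get(question, "42")  # plausible-sounding but unverified

def computed_answer(question: str) -> str:
    """Genuine arithmetic: parses the expression and evaluates it."""
    left, op, right = question.split()
    a, b = int(left), int(right)
    result = a + b if op == "+" else a * b
    return str(result)

# Seen problem: both approaches happen to agree.
print(predictive_answer("2 + 2"), computed_answer("2 + 2"))      # 4 4
# Unseen problem: recall produces a confident wrong answer, computation does not.
print(predictive_answer("17 + 26"), computed_answer("17 + 26"))  # 42 43
```

The recall-based function can look correct on familiar problems while failing silently on novel ones, which is the gap a model that "genuinely solves mathematical problems it hasn't seen before" would close.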
This technological development could be one of the first signs that AGI, a form of AI that can surpass humans, is approaching commercialization.
AGI has the potential to surpass humans in every field, including creativity, problem-solving, decision-making, and language understanding, raising concerns about massive job displacement. A recent Goldman Sachs report estimates that AI could expose some 300 million jobs in the Western world to automation.
The Q* breakthrough and the rapid advancement of this technology may explain why the board abruptly fired Altman: he appeared to be rushing to develop the technology without studying how the model might threaten humanity.
Altman recently said, "I think this is like, definitely the biggest update for people yet. And maybe the biggest one we’ll have because from here on, like, now people accept that powerful AI is, is gonna happen, and there will be incremental updates… there was like the year the first iPhone came out, and then there was like every one since."