OpenAI says it has begun training a new flagship AI model

OpenAI said on Tuesday that it has begun training a new flagship AI model that will succeed the GPT-4 technology that powers its popular online chatbot, ChatGPT.

The San Francisco startup, which is one of the world’s leading artificial intelligence companies, said in a blog post that it expects the new model to bring “the next level of capabilities” as it seeks to build “artificial general intelligence,” or AGI, a machine that can do anything a human brain can do. The new model will be the engine for AI products including chatbots, digital assistants similar to Apple’s Siri, search engines and image generators.

OpenAI also said it is creating a new Safety and Security Committee to study how it should address the risks posed by the new model and future technologies.

“While we are proud to build and release models that lead the industry in both capability and safety, we welcome serious debate at this important time,” the company said.

OpenAI aims to move AI technology forward faster than its competitors, while assuaging critics who say the technology is becoming increasingly dangerous, helping to spread misinformation, replacing jobs and even threatening humanity. Experts disagree on when tech companies will reach artificial general intelligence, but companies including OpenAI, Google, Meta, and Microsoft have been steadily increasing the power of AI technologies for more than a decade, demonstrating a noticeable jump roughly every two to three years.

OpenAI’s GPT-4, which was released in March 2023, enables chatbots and other software applications to answer questions, write emails, generate term papers and analyze data. An updated version of the technology, which was unveiled this month and is not yet widely available, can also generate images and respond to questions and commands in a highly conversational voice.

Days after OpenAI showed off the updated version – called GPT-4o – actress Scarlett Johansson said the company had used a voice that sounded “eerily similar to mine.” She said she had rebuffed efforts by OpenAI’s CEO, Sam Altman, to license her voice for the product, and that she had hired a lawyer and asked OpenAI to stop using the voice. The company said the voice was not Ms. Johansson’s.

Technologies like GPT-4o learn their skills by analyzing vast amounts of digital data, including sounds, photos, videos, Wikipedia articles, books and news. The New York Times sued OpenAI and Microsoft in December, alleging copyright infringement of news content related to artificial intelligence systems.

Digitally “training” AI models can take months or even years. After training is complete, AI companies typically spend several more months testing the technology and fine-tuning it for public use.

That could mean the next OpenAI model won’t arrive for another nine months to a year or more.

As OpenAI trains its new model, its new Safety and Security Committee will work to refine policies and processes to safeguard the technology, the company said. The committee includes Mr. Altman, as well as OpenAI board members Bret Taylor, Adam D’Angelo and Nicole Seligman. The company said the new policies could go into effect in late summer or fall.

Earlier this month, OpenAI said that Ilya Sutskever, a co-founder and one of the leaders of its safety efforts, was leaving the company. This has raised concerns that OpenAI is not sufficiently grappling with the dangers posed by AI.

Dr. Sutskever joined three other board members in November to remove Mr. Altman from OpenAI, saying that Mr. Altman could no longer be trusted with the company’s plan to create artificial general intelligence for the good of mankind. After a lobbying campaign by Mr. Altman’s allies, he was reinstated five days later and has since reasserted control of the company.

Dr. Sutskever led what OpenAI called its Superalignment team, which explored ways to ensure that future AI models would not do harm. Like others in the field, he had grown increasingly concerned that AI posed a threat to humanity.

Jan Leike, who ran the Superalignment team with Dr. Sutskever, resigned from the company this month, leaving the team’s future in doubt.

OpenAI has folded its long-term safety research into its larger efforts to ensure its technologies are safe. That work will be led by John Schulman, another co-founder, who previously led the team that built ChatGPT. The new safety committee will oversee Dr. Schulman’s research and provide guidance on how the company will address technological risks.