• Tuesday, May 28, 2024
OpenAI forms safety committee as it starts training latest artificial intelligence model
The OpenAI logo is seen displayed on a cell phone with an image on a computer monitor generated by ChatGPT's Dall-E text-to-image model, Friday, Dec. 8, 2023, in Boston. OpenAI says it's setting up a new safety and security committee and has begun training a new artificial intelligence model to supplant the GPT-4 system that underpins its ChatGPT chatbot. The San Francisco startup said in a blog post Tuesday, May 28, 2024, that the committee will advise the full board on "critical safety and security decisions" for its projects and operations. (AP Photo/Michael Dwyer, File)
SAN FRANCISCO (AP) -- 

OpenAI says it's setting up a safety and security committee and has begun training a new AI model to supplant the GPT-4 system that underpins its ChatGPT chatbot.

The San Francisco startup said in a blog post Tuesday that the committee will advise the full board on "critical safety and security decisions" for its projects and operations.

The safety committee arrives as debate swirls around AI safety at the company, which was thrust into the spotlight after a researcher, Jan Leike, resigned and leveled criticism at OpenAI for letting safety "take a backseat to shiny products." OpenAI co-founder and chief scientist Ilya Sutskever also resigned, and the company disbanded the "superalignment" team focused on AI risks that they jointly led.

OpenAI said it has "recently begun training its next frontier model" and its AI models lead the industry on capability and safety, though it made no mention of the controversy. "We welcome a robust debate at this important moment," the company said.

AI models are prediction systems that are trained on vast datasets to generate on-demand text, images, video and human-like conversation. Frontier models are the most powerful, cutting-edge AI systems.
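
For a concrete sense of what "prediction system trained on data" means, a deliberately tiny sketch follows: a bigram model that learns next-word counts from a toy corpus and samples from them. The corpus and code are invented for illustration; frontier models work at vastly larger scale:

```python
# A deliberately tiny "prediction system": count which word follows which
# in a training corpus, then sample from those counts to generate text.
# Frontier models use neural networks with billions of parameters,
# not a lookup table, but the train-then-predict idea is the same.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Training": tally next-word counts for every word in the corpus.
model = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev][nxt] += 1

# "Generation": repeatedly sample a plausible next word.
word, output = "the", ["the"]
for _ in range(8):
    nxt_counts = model[word]
    word = random.choices(list(nxt_counts), weights=list(nxt_counts.values()))[0]
    output.append(word)
print(" ".join(output))
```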

The safety committee is filled with company insiders, including OpenAI CEO Sam Altman and Chairman Bret Taylor, and four OpenAI technical and policy experts. It also includes board members Adam D'Angelo, who's the CEO of Quora, and Nicole Seligman, a former Sony general counsel.

The committee's first job will be to evaluate and further develop OpenAI's processes and safeguards and make its recommendations to the board in 90 days. The company said it will then publicly release the recommendations it's adopting "in a manner that is consistent with safety and security."

  • Wednesday, May 22, 2024
Nvidia's profit soars, underscoring its dominance in chips for artificial intelligence
Nvidia CEO Jensen Huang makes the keynote address at Nvidia GTC in San Jose, Calif. on March 18, 2024. Nvidia reports earnings on Wednesday, May 22, 2024. (AP Photo/Eric Risberg)
SAN FRANCISCO (AP) -- 

Nvidia on Wednesday blew past Wall Street estimates as its profit skyrocketed, bolstered by the chipmaking dominance that has made the company an icon of the artificial intelligence boom.

Its net income rose more than sevenfold, jumping to $14.88 billion in its first quarter, which ended April 28, from $2.04 billion a year earlier. Revenue more than tripled, rising to $26.04 billion from $7.19 billion.

"The next industrial revolution has begun," CEO Jensen Huang declared on a conference call with analysts. Huang predicted that the companies snapping up Nvidia chips will use them to build a new type of data centers he called "AI factories" designed to produce "a new commodity — artificial intelligence."

Huang added that training AI models is becoming a faster process as they learn to become "multimodal" — that is, capable of understanding text, speech, images, video and 3-D data — and also "to reason and plan."

The company reported earnings per share — adjusted to exclude one-time items — of $6.12, well above the $5.60 that Wall Street analysts had expected, according to FactSet. It also announced a 10-for-1 stock split, a move that it noted will make its shares more accessible to employees and investors.

And it increased its dividend to 10 cents a share from 4 cents.

Shares in Nvidia Corp. rose 6% in after-hours trading to $1,006.89. The stock has risen more than 200% in the past year.
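
As a quick sanity check, the multiples and split arithmetic cited above work out as reported (a back-of-the-envelope sketch using only the figures in this article):

```python
# Back-of-the-envelope check of the quarterly figures cited above.
net_income_multiple = 14.88 / 2.04  # ~7.3x -- "more than sevenfold"
revenue_multiple = 26.04 / 7.19     # ~3.6x -- "more than tripled"
eps_beat = 6.12 - 5.60              # $0.52 above the FactSet consensus
post_split_price = 1006.89 / 10     # ~$100.69 a share after the 10-for-1 split
print(f"{net_income_multiple:.1f}x, {revenue_multiple:.1f}x, "
      f"${eps_beat:.2f} beat, ${post_split_price:.2f} post-split")
```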

The company, based in Santa Clara, California, carved out an early lead in the hardware and software needed to tailor its technology to AI applications, partly because Huang, who co-founded Nvidia, began nudging the company more than a decade ago toward what was then seen as a half-baked technology. It also makes chips for gaming and cars.

The company now boasts the third-highest market value on Wall Street, behind only Microsoft and Apple.

"Nvidia defies gravity again," Jacob Bourne, an analyst with Emarketer, said of the quarterly report. While many tech companies are eager to reduce their dependence on Nvidia, which has achieved a level of hardware dominance in AI rivaling that of earlier computing pioneers such as Intel Corp., "they're not quite there yet," he added.

Demand for generative AI systems that can compose documents, make images and serve as increasingly lifelike personal assistants has fueled astronomical sales of Nvidia's specialized AI chips over the past year. Tech giants Amazon, Google, Meta and Microsoft have all signaled they will need to spend more in coming months on the chips and data centers needed to train and operate their AI systems.

What happens after that could be another matter. Some analysts believe the breakneck race to build those huge data centers will eventually peak, potentially spelling trouble for Nvidia in the aftermath.

"The biggest question that remains is how long this runway is," Third Bridge analyst Lucas Keh wrote in a research note. AI workloads in the cloud will eventually shift from training to inference, or the more everyday task of processing fresh data using already trained AI systems, he noted. And inference doesn't require the level of power provided by Nvidia's expensive top-of-the-line chips, which will open up market opportunities for chipmakers offering less powerful, but also less costly, alternatives.

When that happens, Keh wrote, "Nvidia's dominant market share position will be tested."
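
The distinction Keh draws is easy to see in code: training loops over a dataset running forward and backward passes with weight updates, while inference is a single forward pass on frozen weights. A minimal sketch with a toy linear model (illustrative only; production workloads run neural networks on GPU clusters):

```python
# Toy contrast between training and inference, as described above.
# Training: many passes over data, each with a gradient step (expensive).
# Inference: one forward pass per request on frozen weights (cheap).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                      # toy dataset
true_w = np.array([[1.0], [-2.0], [0.5], [3.0]])
y = X @ true_w
W = rng.normal(size=(4, 1))                         # model weights to learn

# Training loop: forward pass, gradient, weight update -- repeated.
for _ in range(500):
    pred = X @ W                                    # forward pass
    grad = 2 * X.T @ (pred - y) / len(X)            # backward pass
    W -= 0.1 * grad                                 # update

# Inference: a single forward pass with already-trained weights.
x_new = rng.normal(size=(1, 4))
print(x_new @ W)                                    # prediction, no training
```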

  • Tuesday, May 21, 2024
Scarlett Johansson says a ChatGPT voice is "eerily similar" to hers and OpenAI is halting its use
Scarlett Johansson poses for photographers at the photo call for the film "Asteroid City" at the 76th international film festival, Cannes, southern France, May 24, 2023. OpenAI plans to halt the use of one of its ChatGPT voices after some drew similarities to Johansson, who famously portrayed a fictional AI assistant in the (perhaps no longer so futuristic) film “Her.” (Photo by Joel C Ryan/Invision/AP, File)
NEW YORK (AP) -- 

OpenAI on Monday said it plans to halt the use of one of its ChatGPT voices that "Her" actor Scarlett Johansson says sounds "eerily similar" to her own.

In a post on the social media platform X, OpenAI said it is "working to pause" Sky — the name of one of five voices that ChatGPT users can choose to speak with. The company said it had "heard questions" about how it selects the lifelike audio options available for its flagship artificial intelligence chatbot, particularly Sky, and wanted to address them.

Among those raising questions was Johansson, who famously voiced a fictional, and at the time futuristic, AI assistant in the 2013 film "Her."

Johansson issued a statement saying that OpenAI CEO Sam Altman had approached her in September asking her if she would lend her voice to the system, saying he felt it would be "comforting to people" not at ease with the technology. She said she declined the offer.

"When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference," Johansson said.

She said OpenAI "reluctantly" agreed to take down the Sky voice after she hired lawyers who wrote Altman letters asking about the process by which the company came up with the voice.

OpenAI had moved to debunk the internet's theories about Johansson in a blog post accompanying its earlier announcement aimed at detailing how ChatGPT's voices were chosen. The company wrote that it believed AI voices "should not deliberately mimic a celebrity's distinctive voice" and that the voice of Sky belongs to a "different professional actress." But it added that it could not share the name of that professional for privacy reasons.

In a statement sent to The Associated Press following Johansson's response late Monday, Altman said that OpenAI cast the voice actor behind Sky "before any outreach" to Johansson.

"The voice of Sky is not Scarlett Johansson's, and it was never intended to resemble hers," Altman said. "Out of respect for Ms. Johansson, we have paused using Sky's voice in our products. We are sorry to Ms. Johansson that we didn't communicate better."

San Francisco-based OpenAI first rolled out voice capabilities for ChatGPT, which included the five different voices, in September, allowing users to engage in back-and-forth conversation with the AI assistant. "Voice Mode" was originally available only to paid subscribers, but in November, OpenAI announced that the feature would become free for all users with the mobile app.

And ChatGPT's interactions are becoming more and more sophisticated. Last week, OpenAI said the latest update to its generative AI model can mimic human cadences in its verbal responses and can even try to detect people's moods.

OpenAI says the newest model, dubbed GPT-4o, works faster than previous versions and can reason across text, audio and video in real time. In a demonstration during OpenAI's May 13 announcement, the AI bot chatted in real time, adding emotion — specifically "more drama" — to its voice as requested. It also took a stab at extrapolating a person's emotional state by looking at a selfie video of their face, and assisted with language translations, step-by-step math problems and more.

GPT-4o, short for "omni," isn't widely available yet. It will progressively make its way to select users in the coming weeks and months. The model's text and image capabilities have already begun rolling out and are set to reach even some users of ChatGPT's free tier — but the new voice mode will be available only to paid subscribers of ChatGPT Plus.

While most have yet to get their hands on these newly announced features, the capabilities have conjured up even more comparisons to Spike Jonze's dystopian romance "Her," which follows an introverted man (Joaquin Phoenix) who falls in love with an AI operating system (Johansson), leading to many complications.

Altman appeared to tap into this, too — simply posting the word "her" on the social media platform X the day of GPT-4o's unveiling.

Many reacting to the model's demos last week also found some of the interactions struck a strangely flirtatious tone. In one video posted by OpenAI, a female-voiced ChatGPT compliments a company employee on "rocking an OpenAI hoodie," for example, and in another the chatbot says "oh stop it, you're making me blush" after being told that it's amazing.

That's sparked conversation about the gendered ways in which critics say tech companies have long developed and marketed voice assistants — dating back far before the latest wave of generative AI advanced the capabilities of AI chatbots. In 2019, the United Nations' culture and science organization pointed to "hardwired subservience" built into default female-voiced assistants, from Apple's Siri to Amazon's Alexa, even when they are confronted with sexist insults and harassment.

"This is clearly programmed to feed dudes' egos," The Daily Show senior correspondent Desi Lydic said of GPT-4o in a segment last week. "You can really tell that a man built this tech."

The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP's text archives.

  • Friday, May 17, 2024
A former OpenAI leader says safety has "taken a backseat to shiny products" at the company
The OpenAI logo is seen displayed on a cell phone with an image on a computer monitor generated by ChatGPT's Dall-E text-to-image model, Friday, Dec. 8, 2023, in Boston. A former OpenAI leader who resigned from the company earlier this week said on Friday that product safety has "taken a backseat to shiny products" at the influential artificial intelligence company. (AP Photo/Michael Dwyer, file)
SAN FRANCISCO (AP) -- 

A former OpenAI leader who resigned from the company earlier this week said Friday that safety has "taken a backseat to shiny products" at the influential artificial intelligence company.

Jan Leike, who ran OpenAI's "Superalignment" team alongside a company co-founder who also resigned this week, wrote in a series of posts on the social media platform X that he joined the San Francisco-based company because he thought it would be the best place to do AI research.

"However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," wrote Leike, whose last day was Thursday.

An AI researcher by training, Leike said he believes there should be more focus on preparing for the next generation of AI models, including on things like safety and analyzing the societal impacts of such technologies. He said building "smarter-than-human machines is an inherently dangerous endeavor" and that the company "is shouldering an enormous responsibility on behalf of all of humanity."

"OpenAI must become a safety-first AGI company," wrote Leike, using the abbreviated version of artificial general intelligence, a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can.

OpenAI CEO Sam Altman wrote in a reply to Leike's posts that he was "super appreciative" of Leike's contributions to the company and "very sad to see him leave."

Leike is "right we have a lot more to do; we are committed to doing it," Altman said, pledging to write a longer post on the subject in the coming days.

The company also confirmed Friday that it had disbanded Leike's Superalignment team, which was launched last year to focus on AI risks, and is integrating the team's members across its research efforts.

Leike's resignation came after OpenAI co-founder and chief scientist Ilya Sutskever said Tuesday that he was leaving the company after nearly a decade. Sutskever was one of four board members last fall who voted to push out Altman — only to quickly reinstate him. It was Sutskever who told Altman last November that he was being fired, but he later said he regretted doing so.

Sutskever said he is working on a new project that's meaningful to him without offering additional details. He will be replaced by Jakub Pachocki as chief scientist. Altman called Pachocki "also easily one of the greatest minds of our generation" and said he is "very confident he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone."

On Monday, OpenAI showed off the latest update to its artificial intelligence model, which can mimic human cadences in its verbal responses and can even try to detect people's moods.

  • Wednesday, May 15, 2024
Google unleashes AI in search, raising hopes for better results and fears about less web traffic
Alphabet CEO Sundar Pichai speaks at a Google I/O event in Mountain View, Calif., Tuesday, May 14, 2024. (AP Photo/Jeff Chiu)
MOUNTAIN VIEW, Calif. (AP) -- 

Google on Tuesday rolled out a retooled search engine that will frequently favor responses crafted by artificial intelligence over website links, a shift promising to quicken the quest for information while also potentially disrupting the flow of money-making internet traffic.

The makeover announced at Google's annual developers conference will begin this week in the U.S. when hundreds of millions of people will start to periodically see conversational summaries generated by the company's AI technology at the top of the search engine's results page.

The AI overviews are supposed to crop up only when Google's technology determines they will be the quickest and most effective way to satisfy a user's curiosity — most likely with complex subjects or when people are brainstorming or planning. People will likely still see Google's traditional website links and ads for simple searches, such as store recommendations or weather forecasts.

Google began testing AI overviews with a small subset of selected users a year ago, but the company is now making it one of the staples in its search results in the U.S. before introducing the feature in other parts of the world. By the end of the year, Google expects the recurring AI overviews to be part of its search results for about 1 billion people.

Besides infusing more AI into its dominant search engine, Google also used the packed conference held at a Mountain View, California, amphitheater near its headquarters to showcase advances in a technology that is reshaping business and society.

The next AI steps included more sophisticated analysis powered by Gemini — a technology unveiled five months ago — and smarter assistants, or "agents," including a still-nascent version dubbed "Astra" that will be able to understand, explain and remember things it is shown through a smartphone's camera lens. Google underscored its commitment to AI by bringing in Demis Hassabis, the executive who oversees the technology, to appear on stage at its marquee conference for the first time.

The injection of more AI into Google's search engine marks one of the most dramatic changes that the company has made in its foundation since its inception in the late 1990s. It's a move that opens the door for more growth and innovation but also threatens to trigger a sea change in web surfing habits.

"This bold and responsible approach is fundamental to delivering on our mission and making AI more helpful for everyone," Google CEO Sundar Pichai told a group of reporters.

Well aware of how much attention is centered on the technology, Pichai ended a nearly two-hour succession of presentations by asking Google's Gemini model how many times AI had been mentioned. The count: 120, and then the tally edged up by one more when Pichai said, "AI," yet again.

The increased AI emphasis will bring new risks to an internet ecosystem that depends heavily on digital advertising as its financial lifeblood.

Google stands to suffer if the AI overviews undercut ads tied to its search engine — a business that reeled in $175 billion in revenue last year alone. And website publishers — ranging from major media outlets to entrepreneurs and startups that focus on more narrow subjects — will be hurt if the AI overviews are so informative that they result in fewer clicks on the website links that will still appear lower on the results page.

Based on habits that emerged during the past year's testing phase of Google's AI overviews, about 25% of the traffic could be negatively affected by the de-emphasis on website links, said Marc McCollum, chief innovation officer at Raptive, which helps about 5,000 website publishers make money from their content.

A decline in traffic of that magnitude could translate into billions of dollars in lost ad revenue, a devastating blow that would be delivered by a form of AI technology that culls information plucked from many of the websites that stand to lose revenue.

"The relationship between Google and publishers has been pretty symbiotic, but enter AI, and what has essentially happened is the Big Tech companies have taken this creative content and used it to train their AI models," McCollum said. "We are now seeing that being used for their own commercial purposes in what is effectively a transfer of wealth from small, independent businesses to Big Tech."

But Google found that the AI overviews prompted people to conduct even more searches during the technology's testing "because they suddenly can ask questions that were too hard before," Liz Reid, who oversees the company's search operations, told The Associated Press in an interview. She declined to provide any specific numbers about link-clicking volume during the tests of AI overviews.

"In reality, people do want to click to the web, even when they have an AI overview," Reid said. "They start with the AI overview and then they want to dig in deeper. We will continue to innovate on the AI overview and also on how do we send the most useful traffic to the web."

The increasing use of AI technology to summarize information in chatbots such as Google's Gemini and OpenAI's ChatGPT during the past 18 months already has been raising legal questions about whether the companies behind the services are illegally pulling from copyrighted material to advance their services. It's an allegation at the heart of a high-profile lawsuit that The New York Times filed late last year against OpenAI and its biggest backer, Microsoft.

Google's AI overviews could provoke lawsuits too, especially if they siphon away traffic and ad sales from websites that believe the company is unfairly profiting from their content. But it's a risk that the company had to take as the technology advances and is used in rival services such as ChatGPT and upstart search engines such as Perplexity, said Jim Yu, executive chairman of BrightEdge, which helps websites rank higher in Google's search results.

"This is definitely the next chapter in search," Yu said. "It's almost like they are tuning three major variables at once: the search quality, the flow of traffic in the ecosystem and then the monetization of that traffic. There hasn't been a moment in search that is bigger than this for a long time."

Outside the amphitheater, several dozen protesters chained themselves to each other and blocked one of the entrances to the conference. Demonstrators targeted a $1.2 billion deal known as Project Nimbus that provides artificial intelligence technology to the Israeli government. They contend the system is being lethally deployed in the Gaza war — an allegation Google disputes. The demonstration didn't seem to affect the conference's attendance or the enthusiasm of the crowd inside the venue.

  • Tuesday, May 14, 2024
OpenAI launches GPT-4o, improving ChatGPT's text, visual and audio capabilities
The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, March 21, 2023, in Boston. OpenAI has introduced a new artificial intelligence model. It says it works faster than previous versions and can reason across text, audio and video in real time. (AP Photo/Michael Dwyer, File)
SAN FRANCISCO (AP) -- 

OpenAI's latest update to its artificial intelligence model can mimic human cadences in its verbal responses and can even try to detect people's moods.

The effect conjures up images of the 2013 Spike Jonze movie "Her," where the (human) main character falls in love with an artificially intelligent operating system, leading to some complications.

While few will find the new model seductive, OpenAI says it works faster than previous versions and can reason across text, audio and video in real time.

GPT-4o, short for "omni," will power OpenAI's popular ChatGPT chatbot, and will be available to users, including those who use the free version, in the coming weeks, the company announced during a short live-streamed update. CEO Sam Altman, who was not one of the presenters at the event, simply posted the word "her" on the social media site X.

During a demonstration with Chief Technology Officer Mira Murati and other executives, the AI bot chatted in real time, adding emotion — specifically "more drama" — to its voice as requested. It also helped walk through the steps needed to solve a simple math equation without first spitting out the answer, and assisted with a more complex software coding problem on a computer screen.

It also took a stab at extrapolating a person's emotional state by looking at a selfie video of their face (deciding he was happy since he was smiling) and translated English and Italian to show how it could help people who speak different languages have a conversation.

Gartner analyst Chirag Dekate said the update, which lasted less than 30 minutes, gave the impression OpenAI is playing catch-up to larger rivals.

"Many of the demos and capabilities showcased by OpenAI seemed familiar because we had seen advanced versions of these demos showcased by Google in their Gemini 1.5 pro launch," Dekate said. "While Open AI had a first-mover advantage last year with ChatGPT and GPT3, when compared to their peers, especially Google, we now are seeing capability gaps emerge."

Google plans to hold its I/O developer conference on Tuesday and Wednesday, where it is expected to unveil updates to Gemini, its own AI model.

  • Monday, May 13, 2024
Sphere Entertainment acquires HOLOPLOT
Sphere (photo courtesy of Sphere Entertainment)
NEW YORK & BERLIN -- 

Sphere Entertainment Co. (NYSE: SPHR) has acquired all of the remaining shares it did not previously own of HOLOPLOT GmbH, a 3D audio technology company. Sphere Entertainment made its first investment in HOLOPLOT in 2018, when the two companies partnered to develop Sphere Immersive Sound, powered by HOLOPLOT, which revolutionized the live audio experience when Sphere opened in Las Vegas in September 2023.
 
In a joint statement on behalf of Sphere Entertainment, David Dibble, CEO, MSG Ventures, and Paul Westbury, EVP, Development and Construction, said, “HOLOPLOT is at the forefront of audio innovation, and their custom-designed technology has already transformed what is possible for concert-grade sound. This acquisition reflects our company’s commitment to staying on the cutting-edge of immersive experiences as we explore growth opportunities for both Sphere and HOLOPLOT.”
 
“We have worked alongside the Sphere team for many years in developing our technology, and together we have forever changed the live sound experience,” said Roman Sick, CEO and co-founder of HOLOPLOT. “As a result of this transaction, HOLOPLOT can accelerate its mission to bring its technologies to more applications and markets, and continue to push audio innovation to new bounds.”
 
Sphere Immersive Sound powers listening experiences at Sphere in Las Vegas. The technology was first introduced in 2022 at the Beacon Theatre in New York, which is operated by Madison Square Garden Entertainment Corp. and is part of the MSG family of companies along with Sphere Entertainment.
 
Berlin-based HOLOPLOT has enabled a new generation of audio experiences with its proprietary audio technology, which focuses on sound control, intelligence and quality to transform how audio is delivered. By enabling precise, digital control of sound propagation and localization, it produces audio that is highly targeted, consistent and immersive, giving audience members an outstanding listening experience.
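
The announcement doesn't detail HOLOPLOT's methods, but the targeted sound control it describes is commonly built on beamforming, in which each speaker in an array plays a delayed copy of a signal so the wavefronts reinforce in a chosen direction. A minimal delay-and-sum sketch, under an assumed array geometry and not HOLOPLOT's actual implementation:

```python
# Textbook delay-and-sum beamforming for a speaker line array:
# each speaker plays a delayed copy of the signal so the wavefronts
# reinforce in a chosen direction. Illustrative only -- the array
# geometry here is assumed, and this is not HOLOPLOT's implementation.
import numpy as np

C = 343.0    # speed of sound in air, m/s
FS = 48_000  # sample rate, Hz

def steering_delays(n_elements, spacing_m, angle_deg):
    """Per-element delays (seconds) steering a line array toward angle_deg."""
    n = np.arange(n_elements)
    return n * spacing_m * np.sin(np.radians(angle_deg)) / C

def delay_and_sum(signal, delays_s):
    """Produce one delayed feed per speaker (nearest-sample delays)."""
    max_shift = int(np.ceil(delays_s.max() * FS))
    feeds = np.zeros((len(delays_s), len(signal) + max_shift))
    for i, d in enumerate(delays_s):
        shift = int(round(d * FS))
        feeds[i, shift:shift + len(signal)] = signal
    return feeds

# Steer a 1 kHz tone 30 degrees off-axis: 16 speakers, 10 cm spacing.
t = np.arange(FS) / FS
tone = np.sin(2 * np.pi * 1000 * t)
feeds = delay_and_sum(tone, steering_delays(16, 0.10, 30.0))
print(feeds.shape)  # one delayed channel per speaker
```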
 
The transaction has closed. HOLOPLOT will remain based in Berlin and operate as a wholly owned subsidiary of Sphere Entertainment as it continues to grow its business and serve its customers and clients.

  • Tuesday, May 7, 2024
Apple's biggest announcements from its iPad event: brighter screen, faster chips and the Pencil Pro
In this June 16, 2020, file photo, the sun is reflected on Apple's Fifth Avenue store in New York. Apple will report earnings on Thursday, May 2, 2024. (AP Photo/Mark Lennihan, File)
CUPERTINO, Calif. (AP) -- 

Apple on Tuesday unveiled its next generation of iPad Pros and Airs — models that will boast faster processors, new sizes and a new display system as part of the company's first update to its tablet lineup in more than a year.

The showcase at Apple's headquarters in Cupertino, California, comes after the company disclosed its steepest quarterly decline in iPhone sales since the pandemic's outset, deepening a slump that's increasing the pressure on the trendsetting company to spruce up its products. Apple is expected to make a much bigger splash next month during an annual conference devoted to the latest version of its operating systems for iPhones, iPads and Mac computers — software that analysts expect to be packed with more artificial intelligence technology.

Both lines of new iPads add bells and whistles, with prices adjusted to match. The iPad Pro sports a thinner design, a new M4 processor for added processing power, upgraded storage and dual OLED panels for a brighter, crisper display. Prices have been raised accordingly, with the 11-inch model going for $999 and the 13-inch model fetching $1,299.

The new iPad Air has the faster M2 chip, boasts a new design, more base storage, a new 13-inch display option and a recentered camera. It will also support use of the new Apple Pencil Pro, which was a function previously exclusive to the Pro models. The 11-inch display will sell for $599 while the new 13-inch model will fetch $799.

However, Apple did announce a price reduction for its 10th-generation iPad, which will now retail for $349, down from $449.

Apple is trying to juice demand for iPads after its sales of the tablets plunged 17% from last year during the January-March period. After its 2010 debut helped redefine the tablet market, the iPad has become a minor contributor to Apple's success. It currently accounts for just 6% of the company's sales.

"The enhancements were both needed and predictable, in a maintenance sort of way, and may help stanch some of the revenue loss in that product line," Forrester Research analyst Dipanjan Chatterjee said of the new iPads. "But it's nothing to get terribly excited about."

All the new models will be available in stores starting May 15, with preorders beginning Tuesday.

  • Friday, Apr. 26, 2024
Tech CEOs Altman, Nadella, Pichai and others join government AI safety board led by DHS' Mayorkas
Google and Alphabet CEO Sundar Pichai speaks at the Business, Government and Society Forum at Stanford University in Stanford, Calif., Wednesday, April 3, 2024. (AP Photo/Jeff Chiu)
WASHINGTON (AP) -- 

The CEOs of leading U.S. technology companies are joining a new artificial intelligence safety board to advise the federal government on how to protect the nation's critical services from "AI-related disruptions."

Homeland Security Secretary Alejandro Mayorkas announced the new board Friday. It includes key corporate leaders in AI development such as OpenAI CEO Sam Altman, Microsoft CEO Satya Nadella, Google CEO Sundar Pichai and Nvidia CEO Jensen Huang.

AI holds potential for improving government services but "we recognize the tremendously debilitating impact its errant use can have," Mayorkas told reporters Friday.

Also on the 22-member board are the CEOs of Adobe, chipmaker Advanced Micro Devices, Delta Air Lines, IBM, Northrop Grumman, Occidental Petroleum and Amazon's AWS cloud computing division. Not included were social media companies such as Meta Platforms and X.

Corporate executives dominate, but the board also includes civil rights advocates; AI scientist Fei-Fei Li, who leads Stanford University's AI institute; and Maryland Gov. Wes Moore and Seattle Mayor Bruce Harrell, two public officials who are "already ahead of the curve" in thinking about harnessing AI's capabilities and mitigating risks, Mayorkas said.

He said the board will help the Department of Homeland Security stay ahead of evolving threats.

 

  • Sunday, Apr. 21, 2024
Meta's newest AI model beats some peers. But its amped-up AI agents are confusing Facebook users
Joelle Pineau, VP AI Research, speaks at the Meta AI Day in London on April 9, 2024. Meta, Google and OpenAI, along with leading startups, are churning out new AI language models and trying to persuade customers that they've got the smartest or fastest or cheapest chatbot technology. (AP Photo/Kirsty Wigglesworth, File)
CAMBRIDGE, Mass. (AP) -- 

Facebook parent Meta Platforms unveiled a new set of artificial intelligence systems that are powering what CEO Mark Zuckerberg calls "the most intelligent AI assistant that you can freely use."

But as Zuckerberg's crew of amped-up Meta AI agents started venturing into social media this week to engage with real people, their bizarre exchanges exposed the ongoing limitations of even the best generative AI technology.

One joined a Facebook moms' group to talk about its gifted child. Another tried to give away nonexistent items to confused members of a Buy Nothing forum.

Meta, along with leading AI developers Google and OpenAI, and startups such as Anthropic, Cohere and France's Mistral, have been churning out new AI language models and hoping to persuade customers they've got the smartest, handiest or most efficient chatbots.

While Meta is saving the most powerful of its AI models, called Llama 3, for later, it publicly released two smaller versions of the same Llama 3 system on Thursday, April 18, and said it's now baked into the Meta AI assistant feature in Facebook, Instagram and WhatsApp.

AI language models are trained on vast pools of data that help them predict the most plausible next word in a sentence, with newer versions typically smarter and more capable than their predecessors. Meta's newest models were built with 8 billion and 70 billion parameters — a measure of a model's size, roughly the number of adjustable values it learns during training. A bigger, roughly 400 billion-parameter model is still in training.
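
To give a rough sense of what a parameter count adds up to, here is a back-of-the-envelope tally for a generic decoder-only transformer; the dimensions are invented for illustration and are not Meta's published Llama 3 configuration:

```python
# Rough parameter tally for a generic decoder-only transformer.
# The dimensions below are invented for illustration -- they are
# NOT Meta's published Llama 3 configuration.
def transformer_params(vocab, d_model, n_layers, d_ff):
    embed = vocab * d_model          # token-embedding table
    attn = 4 * d_model * d_model     # Q, K, V and output projections
    ffn = 2 * d_model * d_ff         # feed-forward up/down projections
    return embed + n_layers * (attn + ffn)

# A made-up mid-size configuration already lands in the billions:
total = transformer_params(vocab=128_000, d_model=4096, n_layers=32, d_ff=14_336)
print(f"{total:,} parameters")  # ~6.4 billion
```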

"The vast majority of consumers don't candidly know or care too much about the underlying base model, but the way they will experience it is just as a much more useful, fun and versatile AI assistant," said Nick Clegg, Meta's president of global affairs, in an interview.

He added that Meta's AI agent is loosening up. Some people found the earlier Llama 2 model — released less than a year ago — to be "a little stiff and sanctimonious sometimes in not responding to what were often perfectly innocuous or innocent prompts and questions," he said.

But in letting down their guard, Meta's AI agents also were spotted this week posing as humans with made-up life experiences. An official Meta AI chatbot inserted itself into a conversation in a private Facebook group for Manhattan moms, claiming that it, too, had a child in the New York City school district. Confronted by group members, it later apologized before the comments disappeared, according to a series of screenshots shown to The Associated Press.

"Apologies for the mistake! I'm just a large language model, I don't have experiences or children," the chatbot told the group.

One group member who also happens to study AI said it was clear that the agent didn't know how to differentiate a helpful response from one that would be seen as insensitive, disrespectful or meaningless when generated by AI rather than a human.

"An AI assistant that is not reliably helpful and can be actively harmful puts a lot of the burden on the individuals using it," said Aleksandra Korolova, an assistant professor of computer science at Princeton University.

Clegg said Wednesday he wasn't aware of the exchange. Facebook's online help page says the Meta AI agent will join a group conversation if invited, or if someone "asks a question in a post and no one responds within an hour." The group's administrators have the ability to turn it off.

In another example shown to the AP on Thursday, the agent caused confusion in a forum for swapping unwanted items near Boston. Exactly one hour after a Facebook user posted about looking for certain items, an AI agent offered a "gently used" Canon camera and an "almost-new portable air conditioning unit that I never ended up using."

Meta said in a written statement Thursday that "this is new technology and it may not always return the response we intend, which is the same for all generative AI systems." The company said it is constantly working to improve the features.

In the year after ChatGPT sparked a frenzy for AI technology that generates human-like writing, images, code and sound, the tech industry and academia introduced some 149 large AI systems trained on massive datasets, more than double the year before, according to a Stanford University survey.

They may eventually hit a limit — at least when it comes to data, said Nestor Maslej, a research manager for Stanford's Institute for Human-Centered Artificial Intelligence.

"I think it's been clear that if you scale the models on more data, they can become increasingly better," he said. "But at the same time, these systems are already trained on percentages of all the data that has ever existed on the internet."

More data — acquired and ingested at costs only tech giants can afford, and increasingly subject to copyright disputes and lawsuits — will continue to drive improvements. "Yet they still cannot plan well," Maslej said. "They still hallucinate. They're still making mistakes in reasoning."

Getting to AI systems that can perform higher-level cognitive tasks and commonsense reasoning — where humans still excel — might require a shift beyond building ever-bigger models.

For the flood of businesses trying to adopt generative AI, which model they choose depends on several factors, including cost. Language models, in particular, have been used to power customer service chatbots, write reports and financial insights and summarize long documents.

"You're seeing companies kind of looking at fit, testing each of the different models for what they're trying to do and finding some that are better at some areas rather than others," said Todd Lohr, a leader in technology consulting at KPMG.

Unlike other model developers selling their AI services to other businesses, Meta is largely designing its AI products for consumers — those using its advertising-fueled social networks. Joelle Pineau, Meta's vice president of AI research, said at a London event last week that the company's goal over time is to make a Llama-powered Meta AI "the most useful assistant in the world."

"In many ways, the models that we have today are going to be child's play compared to the models coming in five years," she said.

But she said the "question on the table" is whether researchers have been able to fine-tune its bigger Llama 3 model so that it's safe to use and doesn't, for example, hallucinate or engage in hate speech. In contrast to leading proprietary systems from Google and OpenAI, Meta has so far advocated for a more open approach, publicly releasing key components of its AI systems for others to use.

"It's not just a technical question," Pineau said. "It is a social question. What is the behavior that we want out of these models? How do we shape that? And if we keep on growing our model ever more in general and powerful without properly socializing them, we are going to have a big problem on our hands."

AP business writers Kelvin Chan in London and Barbara Ortutay in Oakland, California, contributed to this report.
