By Mary Clare Jalonick & Matt O'Brien
WASHINGTON (AP) -- Senate Majority Leader Chuck Schumer has been talking for months about accomplishing a potentially impossible task: passing bipartisan legislation within the next year that encourages the rapid development of artificial intelligence and mitigates its biggest risks. On Wednesday, he's convening a meeting of some of the country's most prominent technology executives, among others, to ask them how Congress should do it.
The closed-door forum on Capitol Hill will include almost two dozen tech leaders and advocates, among them some of the industry's biggest names: Meta's Mark Zuckerberg, X and Tesla's Elon Musk, and former Microsoft CEO Bill Gates. All 100 senators are invited, but the public is not.
Schumer, D-N.Y., who's leading the forum with Republican Sen. Mike Rounds of South Dakota, won't necessarily take the tech executives' advice as he works with Republicans and fellow Democrats to try to ensure some oversight of the burgeoning sector. But he's hoping that they will give senators some realistic direction as he tries to do what Congress hasn't done for many years — pass meaningful regulation of the tech industry.
"It's going to be a fascinating group because they have different points of view," Schumer said in an interview ahead of the forum. "Hopefully we can weave it into a little bit of some broad consensus."
Rounds, who spoke to AP with Schumer on Tuesday, said Congress needs to get ahead of fast-moving AI by making sure it continues to develop "on the positive side" while also taking care of potential issues surrounding data transparency and privacy.
"AI is not going away, and it can do some really good things or it can be a real challenge," Rounds said.
Schumer says regulation of artificial intelligence will be "one of the most difficult issues we can ever take on," and ticks off the reasons why: It's technically complicated, it keeps changing and it "has such a wide, broad effect across the whole world," he said.
Congress has a lackluster track record when it comes to regulating technology. Lawmakers have lots of proposals — many of them bipartisan — but have mostly failed to agree on major legislation to regulate the industry as powerful tech companies have resisted.
Many lawmakers point to the failure to pass any legislation surrounding social media — bills that would better protect children, regulate activity around elections and mandate stricter privacy standards, among other measures, have stalled in both chambers.
"We don't want to do what we did with social media, which is let the techies figure it out, and we'll fix it later," says Senate Intelligence Committee Chairman Mark Warner, D-Va., on the AI push.
Schumer's bipartisan working group — composed of Rounds, Democratic Sen. Martin Heinrich of New Mexico and Republican Sen. Todd Young of Indiana — is hoping that the rapid growth of artificial intelligence will create more urgency. Sparked by the release of ChatGPT less than a year ago, businesses across many sectors have been clamoring to apply new generative AI tools that can compose human-like passages of text, program computer code and create novel images, audio and video. The hype over such tools has accelerated worries over their potential societal harms and prompted calls for more transparency in how the data behind the new products is collected and used.
"You have to have some government involvement for guardrails," Schumer said. "If there are no guardrails, who knows what could happen."
Schumer says Wednesday's forum will focus on big ideas like whether the government should be involved at all, and what questions Congress should be asking. Each participant will have three minutes to speak on a topic of their choosing, and Schumer and Rounds will moderate open discussions among the group in the morning and afternoon.
Some of Schumer's most influential guests, including Musk and Sam Altman, CEO of ChatGPT-maker OpenAI, have signaled more dire concerns, evoking popular science fiction about the possibility of humanity losing control to advanced AI systems if the right safeguards are not in place.
But for many lawmakers and the people they represent, AI's effects on employment and the flood of AI-generated misinformation are more immediate concerns.
A recent report from the market research group Forrester projected that generative AI technology could replace 2.4 million jobs in the U.S. by 2030, many of them white-collar roles not affected by previous waves of automation. This year alone the number of lost jobs could total 90,000, the report said, though far more jobs will be reshaped than eliminated.
AI experts have also warned of the growing potential of AI-generated online disinformation to influence elections, including the upcoming 2024 presidential race.
On the more positive side, Rounds says he would like to see the empowerment of new medical technologies that could save lives and allow medical professionals to access more data. That topic is "very personal to me," Rounds says, after his wife died of cancer two years ago.
Many members of Congress agree that legislation will probably be needed in response to the quick escalation of artificial intelligence tools in government, business and daily life. But there is little consensus on what that legislation should look like. There is also some division — some members worry more about overregulation, and others more about the potential risks of an unchecked industry.
"I am involved in this process in large measure to ensure that we act, but we don't act more boldly or over-broadly than the circumstances require," says Sen. Young, one of the members of Schumer's working group. "We should be skeptical of government, which is why I think it's important that you got Republicans at the table."
Young says that Schumer has reassured him that he will be "hypersensitive to overshooting as we address some of the potential harms of AI."
Some Republicans have been wary of following the path of the European Union, which signed off in June on the world's first set of comprehensive rules for artificial intelligence. The EU's AI Act will govern any product or service that uses an AI system and classify them according to four levels of risk, from minimal to unacceptable.
A group of corporations has called on EU leaders to rethink the rules, arguing that they could make it harder for companies in the 27-nation bloc to compete with rivals overseas in the use of generative AI.
In the United States, most major tech companies have expressed support for AI regulations, though they don't necessarily agree on what that means.
"We've always said that we think that AI should get regulated," said Dana Rao, general counsel and chief trust officer for software company Adobe. "We've talked to Europe about this for the last four years, helping them think through the AI Act they're about to pass. There are high-risk use cases for AI that we think the government has a role to play in order to make sure they're safe for the public and the consumer."
Adobe, which makes Photoshop and the new AI image-generator Firefly, is proposing its own federal legislation: an "anti-impersonation" bill to protect artists as well as AI developers from the misuse of generative AI tools to produce derivative works without a creator's consent.
Senators say they will figure out a way to regulate the industry, despite the odds.
"Make no mistake. There will be regulation. The only question is how soon, and what," said Sen. Richard Blumenthal, D-Conn., at a Tuesday hearing on legislation he wrote with Republican Sen. Josh Hawley of Missouri.
Blumenthal's framework calls for a new "licensing regime" that would require tech companies to seek licenses for high-risk AI systems. It would also create an independent oversight body led by experts and hold companies liable when their products breach privacy or civil rights or endanger the public.
"Risk-based rules, managing the risks, is what we need to do here," Blumenthal said.
O'Brien reported from Providence, Rhode Island. Associated Press writers Ali Swenson in New York and Kelvin Chan in London contributed to this report.
Rom-Com Mainstay Hugh Grant Shifts To The Dark Side and He’s Never Been Happier
After some difficulties connecting to a Zoom, Hugh Grant eventually opts to just phone instead.
"Sorry about that," he apologizes. "Tech hell." Grant is no lover of technology. Smart phones, for example, he calls the "devil's tinderbox."
"I think they're killing us. I hate them," he says. "I go on long holidays from them, three or four days at at time. Marvelous."
Hell, and our proximity to it, is a not unrelated topic to Grant's new film, "Heretic." In it, two young Mormon missionaries (Chloe East, Sophie Thatcher) come knocking on a door they'll soon regret visiting. They're welcomed in by Mr. Reed (Grant), an initially charming man who tests their faith in theological debate, and then, in much worse things.
After decades in romantic comedies, Grant has spent the last few years playing narcissists, weirdos and murderers, often to the greatest acclaim of his career. But in "Heretic," a horror thriller from A24, Grant's turn to the dark side reaches a new extreme. The actor who once charmingly stammered in "Four Weddings and a Funeral" and who danced to the Pointer Sisters in "Love Actually" is now doing heinous things to young people in a basement.
"It was a challenge," Grant says. "I think human beings need challenges. It makes your beer taste better in the evening if you've climbed a mountain. He was just so wonderfully (expletive)-up."
"Heretic," which opens in theaters Friday, is directed by Scott Beck and Bryan Woods, co-writers of "A Quiet Place." In Grant's hands, Mr. Reed is a divinely good baddie — a scholarly creep whose wry monologues pull from a wide range of references, including, fittingly, Radiohead's "Creep."
In an interview, Grant spoke about these and other facets of his character, his journey...