• Tuesday, Nov. 28, 2023
Amazon launches Q, a business chatbot powered by generative AI
In this Feb. 14, 2019 file photo, people stand in the lobby for Amazon offices in New York. Amazon finally has its answer to ChatGPT. The tech giant said Tuesday, Nov. 28, 2023, it will launch Q – a generative AI-powered chatbot for businesses. (AP Photo/Mark Lennihan, File)
NEW YORK (AP) -- 

Amazon finally has its answer to ChatGPT.

The tech giant said Tuesday it will launch Q — a business chatbot powered by generative artificial intelligence.

The announcement, made in Las Vegas at an annual conference the company hosts for its AWS cloud computing service, represents Amazon's response to rivals who've rolled out chatbots that have captured the public's attention.

San Francisco startup OpenAI's release of ChatGPT a year ago sparked a surge of public and business interest in generative AI tools that can spit out emails, marketing pitches, essays, and other passages of text that resemble the work of humans.

That attention initially gave an advantage to OpenAI's chief partner and financial backer, Microsoft, which has rights to the underlying technology behind ChatGPT and has used it to build its own generative AI tools known as Copilot. But it also spurred competitors like Google to launch their own versions.

These chatbots are a new generation of AI systems that can converse, generate readable text on demand and even produce novel images and video based on what they've learned from a vast database of digital books, online writings and other media.

Amazon said Tuesday that Q can do things like synthesize content, streamline day-to-day communications and help employees with tasks like generating blog posts. It said companies can also connect Q to their own data and systems to get a tailored experience that's more relevant to their business.

The technology is currently available for preview.

While Amazon is ahead of rivals Microsoft and Google as the dominant cloud computing provider, it's not perceived as the leader in the AI research that's led to advancements in generative AI.

A recent Stanford University index that measured the transparency of the top 10 foundational AI models, including Amazon's Titan, ranked Amazon at the bottom. Stanford researchers said less transparency can make it harder for customers that want to use the technology to know if they can safely rely on it, among other problems.

The company, meanwhile, has been forging ahead. In September, Amazon said it would invest up to $4 billion in the AI startup Anthropic, a San Francisco-based company that was founded by former staffers from OpenAI.

The tech giant also has been rolling out new services, including an update for its popular assistant Alexa so users can have more human-like conversations and AI-generated summaries of product reviews for consumers.

  • Friday, Nov. 17, 2023
ChatGPT-maker OpenAI fires CEO Sam Altman, the face of the AI boom, for lack of candor with company
Sam Altman participates in a discussion during the Asia-Pacific Economic Cooperation (APEC) CEO Summit, Thursday, Nov. 16, 2023, in San Francisco. The board of ChatGPT-maker OpenAI says it has pushed out Altman, its co-founder and CEO, and replaced him with an interim CEO. (AP Photo/Eric Risberg, File)

ChatGPT-maker OpenAI said Friday it has pushed out its co-founder and CEO Sam Altman after a review found he was "not consistently candid in his communications" with the board of directors.

"The board no longer has confidence in his ability to continue leading OpenAI," the artificial intelligence company said in a statement.

In the year since Altman catapulted ChatGPT to global fame, he has become Silicon Valley's sought-after voice on the promise and potential dangers of artificial intelligence, and his sudden and mostly unexplained exit brought uncertainty to the industry's future.

Mira Murati, OpenAI's chief technology officer, will take over as interim CEO effective immediately, the company said, while it searches for a permanent replacement.

The announcement also said another OpenAI co-founder and top executive, Greg Brockman, the board's chairman, would step down from that role but remain at the company, where he serves as president. But later on X, formerly Twitter, Brockman posted a message he sent to OpenAI employees in which he wrote, "based on today's news, i quit."

In another X post on Friday night, Brockman said Altman was asked to join a video meeting at noon Friday with the company's board members, minus Brockman, during which OpenAI co-founder and Chief Scientist Ilya Sutskever informed Altman he was being fired.

"Sam and I are shocked and saddened by what the board did today," Brockman wrote, adding that he was informed of his removal from the board in a separate call with Sutskever a short time later.

OpenAI declined to answer questions on what Altman's alleged lack of candor was about. The statement said his behavior was hindering the board's ability to exercise its responsibilities.

Altman posted Friday on X: "i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people. will have more to say about what's next later."

The Associated Press and OpenAI have a licensing and technology agreement allowing OpenAI access to part of the AP's text archives.

Altman helped start OpenAI as a nonprofit research laboratory in 2015. But it was ChatGPT's explosion into public consciousness that thrust Altman into the spotlight as a face of generative AI — technology that can produce novel imagery, passages of text and other media. On a world tour this year, he was mobbed by a crowd of adoring fans at an event in London.

He's sat with multiple heads of state to discuss AI's potential and perils. Just Thursday, he took part in a CEO summit at the Asia-Pacific Economic Cooperation conference in San Francisco, where OpenAI is based.

He predicted AI will prove to be "the greatest leap forward of any of the big technological revolutions we've had so far." He also acknowledged the need for guardrails, calling attention to the existential dangers future AI could pose.

Some computer scientists have criticized that focus on far-off risks as distracting from the real-world limitations and harms of current AI products. The U.S. Federal Trade Commission has launched an investigation into whether OpenAI violated consumer protection laws by scraping public data and publishing false information through its chatbot.

The company said its board consists of OpenAI's chief scientist, Ilya Sutskever, and three non-employees: Quora CEO Adam D'Angelo, tech entrepreneur Tasha McCauley, and Helen Toner of the Georgetown Center for Security and Emerging Technology.

OpenAI's key business partner, Microsoft, which has invested billions of dollars into the startup and helped provide the computing power to run its AI systems, said that the transition won't affect its relationship.

"We have a long-term partnership with OpenAI and Microsoft remains committed to Mira and their team as we bring this next era of AI to our customers," said an emailed Microsoft statement.

While not trained as an AI engineer, Altman, now 38, has been seen as a Silicon Valley wunderkind since his early 20s. He was recruited in 2014 to lead the startup incubator Y Combinator.

"Sam is one of the smartest people I know, and understands startups better than perhaps anyone I know, including myself," read YCombinator co-founder Paul Graham's 2014 announcement that Altman would become its president. Graham said at the time that Altman was "one of those rare people who manage to be both fearsomely effective and yet fundamentally benevolent."

OpenAI started out as a nonprofit when it launched with financial backing from Tesla CEO Elon Musk and others. Its stated aims were to "advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return."

That changed in 2018 when it incorporated a for-profit business, OpenAI LP, and shifted nearly all its staff into the business, not long after releasing its first generation of the GPT large language model for mimicking human writing. Around the same time, Musk, who had co-chaired its board with Altman, resigned from the board in a move that OpenAI said would eliminate a "potential future conflict for Elon" due to Tesla's work on building self-driving systems.

While OpenAI's board has preserved its nonprofit governance structure, the startup it oversees has increasingly sought to capitalize on its technology by tailoring its popular chatbot to business customers.

At its first developer conference last week, Altman was the main speaker showcasing a vision for a future of AI agents that could help people with a variety of tasks. Days later, he announced the company would have to pause new subscriptions to its premium version of ChatGPT because it had exceeded capacity.

Altman's exit "is indeed shocking as he has been the face of" generative AI technology, said Gartner analyst Arun Chandrasekaran.

He said OpenAI still has a "deep bench of technical leaders" but its next executives will have to steer it through the challenges of scaling the business and meeting the expectations of regulators and society.

Forrester analyst Rowan Curran speculated that Altman's departure, "while sudden," likely did not reflect deeper business problems.

"This seems to be a case of an executive transition that was about issues with the individual in question, and not with the underlying technology or business," Curran said.

Altman has a number of possible next steps. Even while running OpenAI, he placed large bets on several other ambitious projects.

Among them are Helion Energy, for developing fusion reactors that could produce prodigious amounts of energy from the hydrogen in seawater, and Retro Biosciences, which aims to add 10 years to the human lifespan using biotechnology. Altman also co-founded Worldcoin, a biometric and cryptocurrency project that's been scanning people's eyeballs with the goal of creating a vast digital identity and financial network.

Matt O'Brien is an AP technology writer. AP business writers Haleluya Hadero in New York, Kelvin Chan in London and Michael Liedtke and David Hamilton in San Francisco contributed to this report.

  • Friday, Nov. 17, 2023
Corporate, global leaders peer into a future expected to be reshaped by AI, for better or worse
OpenAI CEO Sam Altman participates in a discussion entitled "Charting the Path Forward: The Future of Artificial Intelligence" during the Asia-Pacific Economic Cooperation (APEC) CEO Summit, Thursday, Nov. 16, 2023, in San Francisco. (AP Photo/Eric Risberg)
SAN FRANCISCO (AP) -- 

President Joe Biden and other global leaders have spent the past few days melding minds with Silicon Valley titans in San Francisco, their discussions frequently focusing on artificial intelligence, a technology expected to reshape the world, for better or worse.

For all the collective brainpower on hand for the Asia-Pacific Economic Cooperation conference, there were no concrete answers to a pivotal question: Will AI turn out to be the springboard that catapults humanity to new heights, or the dystopian nightmare that culminates in its demise?

"The world is at an inflection point — this is not a hyperbole," Biden said Thursday at a CEO summit held in conjunction with APEC. "The decisions we make today are going to shape the direction of the world for decades to come."

Not surprisingly, most of the technology CEOs who appeared at the summit were generally upbeat about AI's potential to unleash breakthroughs that will make workers more productive and eventually improve standards of living.

None were more bullish than Microsoft CEO Satya Nadella, whose software company has invested more than $10 billion in OpenAI, the startup behind the AI chatbot ChatGPT.

Like many of his peers, Nadella says he believes AI will turn out to be as transformative as the advent of personal computers during the 1980s, the internet's rise during the 1990s and the introduction of smartphones during the 2000s.

"We finally have a way to interact with computing using natural language. That is, we finally have a technology that understands us, not the other way around," Nadella said at the CEO summit. "As our interactions with technology become more and more natural, computers will increasingly be able to see and interpret our intent and make sense of the world around us."

Google CEO Sundar Pichai, whose internet company is increasingly infusing its influential search engine with AI, is similarly optimistic about humanity's ability to control the technology in ways that will make the world a better place.

"I think we have to work hard to harness it," Pichai said. "But that is true of every other technological advance we've had before. It was true for the industrial revolution. I think we can learn from those things."

The enthusiasm exuded by Nadella and Pichai has been mirrored by investors who have been betting AI will pay off for Microsoft and Google. The accelerating advances in AI are the main reason the stock prices of both Microsoft and Google's corporate parent, Alphabet Inc., have soared by more than 50% so far this year. Those gains have combined to produce an additional $1.6 trillion in shareholder wealth.

But the perspective from outside the tech industry is more circumspect.

"Everyone has learned to spell AI, they don't really know what quite to do about it," said former U.S. Secretary of State Condoleezza Rice, who is now director of the Hoover Institution at Stanford University. "They have enormous benefit written all over them. They also have a lot of cautionary tales about how technology can be misused."

Robert Moritz, global chairman of the consulting firm PricewaterhouseCoopers, said there are legitimate concerns behind the "Doomsday discussions" about AI's effects, particularly the likelihood that the technology will supplant the need for people to perform a wide range of jobs.

In past waves of technological upheaval, companies found ways to retrain people who lost their jobs, Moritz said, and that will have to happen again or "we will have a mismatch, which will bring more unrest, which we cannot afford to have."

San Francisco, APEC's host city, is counting on the multibillion-dollar investments in AI and the expansion of payrolls among startups such as OpenAI and Anthropic to revive the fortunes of a city that's still struggling to adjust to a pandemic-driven shift that has led to more people working from home.

"We are in the spring of yet another innovative boom," San Francisco Mayor London Breed said, while boasting that eight of the biggest AI-centric companies are based in the city.

The existential threat to humanity posed by AI is one of the reasons that led tech mogul Elon Musk to spend some of his estimated fortune of $240 billion to launch a startup called xAI during the summer. Musk had been scheduled to discuss his hopes and fears surrounding AI during the CEO summit with Salesforce CEO Marc Benioff, but canceled Thursday because of an undisclosed conflict.

OpenAI CEO Sam Altman predicted AI will prove to be "the greatest leap forward of any of the big technological revolutions we've had so far." But he also acknowledged the need for guardrails to protect humanity from the existential threat posed by the quantum leaps being taken by computers.

"I really think the world is going to rise to the occasion and everybody wants to do the right thing," Altman said.

  • Tuesday, Nov. 7, 2023
ChatGPT-maker OpenAI hosts its first big tech showcase as the AI startup faces growing competition
Sam Altman, left, CEO of OpenAI, appears onstage with Microsoft CEO Satya Nadella at OpenAI DevDay, OpenAI's first developer conference, on Monday, Nov. 6, 2023 in San Francisco. (AP Photo/Barbara Ortutay)
SAN FRANCISCO (AP) -- 

Less than a year into its meteoric rise, the company behind ChatGPT unveiled the future it has in mind for its artificial intelligence technology on Monday, launching a new line of chatbot products that can be customized to a variety of tasks.

"Eventually, you'll just ask the computer for what you need and it'll do all of these tasks for you," said OpenAI CEO Sam Altman to a cheering crowd of more than 900 software developers and other attendees. It was OpenAI's inaugural developer conference, embracing a Silicon Valley tradition for technology showcases that Apple helped pioneer decades ago.

At the event held in a cavernous former Honda dealership in OpenAI's hometown of San Francisco, the company unveiled a new version called GPT-4 Turbo that it says is more capable and can retrieve information about world and cultural events as recent as April 2023 — unlike previous versions that couldn't answer questions about anything after 2021.

It also touted a new version of its AI model called GPT-4 with vision, or GPT-4V, that enables the chatbot to analyze images. In a September research paper, the company showed how the tool could describe what's in images to people who are blind or have low vision.

ChatGPT has more than 100 million weekly active users and 2 million developers, spread "entirely by word of mouth," Altman said.

He also unveiled a new line of products called GPTs — emphasis on the plural — that will enable users to make their own customized versions of ChatGPT for specific tasks.

Alyssa Hwang, a computer science researcher at the University of Pennsylvania who got an early glimpse at the GPT vision tool, said it was "so good at describing a whole lot of different kinds of images, no matter how complicated they were," but also needed some improvements.

For instance, to test its limits, Hwang paired an image of steak with a caption about chicken noodle soup, which confused the chatbot into describing the image as having something to do with chicken noodle soup.

"That could lead to some adversarial attacks," Hwang said. "Imagine if you put some offensive text or something like that in an image, you'll end up getting something you don't want."

That's partly why OpenAI has given researchers such as Hwang early access to help discover flaws in its newest tools before their wide release. Altman on Monday described the company's approach as "gradual iterative deployment" that leaves time to address safety risks.

The path to OpenAI's debut DevDay has been an unusual one. Founded as a nonprofit research institute in 2015, it catapulted to worldwide fame just under a year ago with the release of a chatbot that's sparked excitement, fear and a push for international safeguards to guide AI's rapid advancement.

The conference comes a week after President Joe Biden signed an executive order that will set some of the first U.S. guardrails on AI technology.

Using the Defense Production Act, the order requires AI developers, a group likely to include OpenAI, its financial backer Microsoft and competitors such as Google and Meta, to share information with the government about AI systems being built with such "high levels of performance" that they could pose serious safety risks.

The order built on voluntary commitments set by the White House that leading AI developers made earlier this year.

A lot of expectation is also riding on the economic promise of the latest crop of generative AI tools that can produce passages of text and novel images, sounds and other media in response to written or spoken prompts.

Altman was briefly joined on stage by Microsoft CEO Satya Nadella, who said, to cheers from the audience, "we love you guys."

In his comments, Nadella emphasized Microsoft's role as a business partner using its data centers to give OpenAI the computing power it needs to build more advanced models.

"I think we have the best partnership in tech. I'm excited for us to build AGI together," Altman said, referencing his goal to build so-called artificial general intelligence that can perform just as well as — or even better than — humans in a wide variety of tasks.

While some commercial chatbots, including Microsoft's Bing, are now built atop OpenAI's technology, there are a growing number of competitors including Bard, from Google, and Claude, from another San Francisco-based startup, Anthropic, led by former OpenAI employees. OpenAI also faces competition from developers of so-called open source models that publicly release their code and other aspects of the system for free.

ChatGPT's newest competitor is Grok, which billionaire Tesla CEO Elon Musk unveiled over the weekend on his social media platform X, formerly known as Twitter. Musk, who helped start OpenAI before parting ways with the company, launched a new venture this year called xAI to set his own mark on the pace of AI development.

Grok is only available to a limited number of early users but promises to answer "spicy questions" that other chatbots decline due to safeguards meant to prevent offensive responses.

Asked by a reporter to comment on the timing of Grok's release, Altman said "Elon's gonna Elon."

Much of what OpenAI announced Monday was attempting to address the concerns of businesses looking to integrate ChatGPT-like technology into their operations, said Gartner analyst Arun Chandrasekaran.

Getting cheaper products "was clearly one of the big asks," as was being able to customize AI models to tap into an organization's own internal data sources, Chandrasekaran said. He said another appeal to businesses was a "Copyright Shield" in which OpenAI promises to pay the costs of defending its customers from copyright lawsuits tied to the way OpenAI's models are trained on troves of written works and imagery pulled from the internet.

Goldman Sachs projected last month that generative AI could boost labor productivity and lead to a long-term increase of 10% to 15% to the global gross domestic product — the economy's total output of goods and services.

Altman described a future of AI agents that could help people with various tasks at work or home.

"We know that people want AI that is smarter, more personal, more customizable, can do more on your behalf," he said.

O'Brien reported from Providence, Rhode Island.

  • Wednesday, Nov. 1, 2023
Countries at a U.K. summit pledge to tackle AI's potentially "catastrophic" risks
Tesla and SpaceX's CEO Elon Musk attends the first plenary session on of the AI Safety Summit at Bletchley Park, on Wednesday, Nov. 1, 2023 in Bletchley, England. (Leon Neal/Pool Photo via AP)
BLETCHLEY PARK, England (AP) -- 

Delegates from 28 nations, including the U.S. and China, agreed Wednesday to work together to contain the potentially "catastrophic" risks posed by galloping advances in artificial intelligence.

The first international AI Safety Summit, held at a former codebreaking spy base near London, focused on cutting-edge "frontier" AI that some scientists warn could pose a risk to humanity's very existence.

British Prime Minister Rishi Sunak said the declaration was "a landmark achievement that sees the world's greatest AI powers agree on the urgency behind understanding the risks of AI – helping ensure the long-term future of our children and grandchildren."

But U.S. Vice President Kamala Harris urged Britain and other countries to go further and faster, stressing the transformations AI is already bringing and the need to hold tech companies accountable — including through legislation.

In a speech at the U.S. Embassy, Harris said the world needs to start acting now to address "the full spectrum" of AI risks, not just existential threats such as massive cyberattacks or AI-formulated bioweapons.

"There are additional threats that also demand our action, threats that are currently causing harm and to many people also feel existential," she said, citing a senior citizen kicked off his health care plan because of a faulty AI algorithm or a woman threatened by an abusive partner with deep fake photos.

The AI Safety Summit is a labor of love for Sunak, a tech-loving former banker who wants the U.K. to be a hub for computing innovation and has framed the summit as the start of a global conversation about the safe development of AI.

Harris is due to attend the summit on Thursday, joining government officials from more than two dozen countries including Canada, France, Germany, India, Japan, Saudi Arabia — and China, invited over the protests of some members of Sunak's governing Conservative Party.

Getting the nations to sign the agreement, dubbed the Bletchley Declaration, was an achievement, even if it is light on details and does not propose a way to regulate the development of AI. The countries pledged to work towards "shared agreement and responsibility" about AI risks, and hold a series of further meetings. South Korea will hold a mini virtual AI summit in six months, followed by an in-person one in France a year from now.

China's Vice Minister of Science and Technology Wu Zhaohui said AI technology is "uncertain, unexplainable and lacks transparency."

"It brings risks and challenges in ethics, safety, privacy and fairness. Its complexity is emerging.," he said, noting that Chinese President Xi Jinping last month launched the country's Global Initiative for AI Governance.

"We call for global collaboration to share knowledge and make AI technologies available to the public under open source terms," he said.

Tesla CEO Elon Musk is also scheduled to discuss AI with Sunak in a livestreamed conversation on Thursday night. The tech billionaire was among those who signed a statement earlier this year raising the alarm about the perils that AI poses to humanity.

European Commission President Ursula von der Leyen, United Nations Secretary-General Antonio Guterres and executives from U.S. artificial intelligence companies such as Anthropic, Google's DeepMind and OpenAI and influential computer scientists like Yoshua Bengio, one of the "godfathers" of AI, are also attending the meeting at Bletchley Park, a former top secret base for World War II codebreakers that's seen as a birthplace of modern computing.

Attendees said the closed-door meeting's format has been fostering healthy debate. Informal networking sessions are helping to build trust, said Mustafa Suleyman, CEO of Inflection AI.

Meanwhile, at formal discussions "people have been able to make very clear statements, and that's where you see significant disagreements, both between countries of the north and south (and) countries that are more in favor of open source and less in favor of open source," Suleyman told reporters.

Open source AI systems allow researchers and experts to quickly discover problems and address them. But the downside is that once an open source system has been released, "anybody can use it and tune it for malicious purposes," Bengio said on the sidelines of the meeting.

"There's this incompatibility between open source and security. So how do we deal with that?"

Only governments, not companies, can keep people safe from AI's dangers, Sunak said last week. However, he also urged against rushing to regulate AI technology, saying it needs to be fully understood first.

In contrast, Harris stressed the need to address the here and now, including "societal harms that are already happening such as bias, discrimination and the proliferation of misinformation."

She pointed to President Biden's executive order this week, setting out AI safeguards, as evidence the U.S. is leading by example in developing rules for artificial intelligence that work in the public interest.

Harris also encouraged other countries to sign up to a U.S.-backed pledge to stick to "responsible and ethical" use of AI for military aims.

"President Biden and I believe that all leaders … have a moral, ethical and social duty to make sure that AI is adopted and advanced in a way that protects the public from potential harm and ensures that everyone is able to enjoy its benefits," she said.

Lawless reported from London.

  • Tuesday, Oct. 31, 2023
Biden wants to move fast on AI safeguards and signs an executive order to address his concerns
President Joe Biden signs an executive order on artificial intelligence in the East Room of the White House, Monday, Oct. 30, 2023, in Washington. Vice President Kamala Harris applauds at right. (AP Photo/Evan Vucci)
WASHINGTON (AP) -- 

President Joe Biden on Monday signed an ambitious executive order on artificial intelligence that seeks to balance the needs of cutting-edge technology companies with national security and consumer rights, creating an early set of guardrails that could be fortified by legislation and global agreements.

Before signing the order, Biden said AI is driving change at "warp speed" and carries tremendous potential as well as perils.

"AI is all around us," Biden said. "To realize the promise of AI and avoid the risk, we need to govern this technology."

The order is an initial step that is meant to ensure that AI is trustworthy and helpful, rather than deceptive and destructive. The order — which will likely need to be augmented by congressional action — seeks to steer how AI is developed so that companies can profit without putting public safety in jeopardy.

Using the Defense Production Act, the order requires leading AI developers to share safety test results and other information with the government. The National Institute of Standards and Technology is to create standards to ensure AI tools are safe and secure before public release.

The Commerce Department is to issue guidance to label and watermark AI-generated content to help differentiate between authentic interactions and those generated by software. The extensive order touches on matters of privacy, civil rights, consumer protections, scientific research and worker rights.

White House chief of staff Jeff Zients recalled Biden giving his staff a directive when formulating the order to move with urgency.

"We can't move at a normal government pace," Zients said the Democratic president told him. "We have to move as fast, if not faster, than the technology itself."

In Biden's view, the government was late to address the risks of social media and now U.S. youth are grappling with related mental health issues. AI has the positive ability to accelerate cancer research, model the impacts of climate change, boost economic output and improve government services among other benefits. But it could also warp basic notions of truth with false images, deepen racial and social inequalities and provide a tool to scammers and criminals.

With the European Union nearing final passage of a sweeping law to rein in AI harms and Congress still in the early stages of debating safeguards, the Biden administration is "stepping up to use the levers it can control," said digital rights advocate Alexandra Reeve Givens, president of the Center for Democracy & Technology. "That's issuing guidance and standards to shape private sector behavior and leading by example in the federal government's own use of AI."

The order builds on voluntary commitments already made by technology companies. It's part of a broader strategy that administration officials say also includes congressional legislation and international diplomacy, a sign of the disruptions already caused by the introduction of new AI tools such as ChatGPT that can generate text, images and sounds.

The guidance within the order is to be implemented and fulfilled on timelines ranging from 90 to 365 days.

Last Thursday, Biden gathered his aides in the Oval Office to review and finalize the executive order, a 30-minute meeting that stretched to 70 minutes, despite other pressing matters, including the mass shooting in Maine, the Israel-Hamas war and the selection of a new House speaker.

Biden was profoundly curious about the technology in the months of meetings that led up to the drafting of the order. His science advisory council devoted two meetings to AI, and his Cabinet discussed it at two of its own. The president also pressed tech executives and civil society advocates about the technology's capabilities at multiple gatherings.

"He was as impressed and alarmed as anyone," deputy White House chief of staff Bruce Reed said in an interview. "He saw fake AI images of himself, of his dog. He saw how it can make bad poetry. And he's seen and heard the incredible and terrifying technology of voice cloning, which can take three seconds of your voice and turn it into an entire fake conversation."

The issue of AI was seemingly inescapable for Biden. At Camp David one weekend, he relaxed by watching the Tom Cruise film "Mission: Impossible — Dead Reckoning Part One." The film's villain is a sentient and rogue AI known as "the Entity" that sinks a submarine and kills its crew in the movie's opening minutes.

"If he hadn't already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about," said Reed, who watched the film with the president.

Governments around the world have raced to establish protections, some of them tougher than Biden's directives. After more than two years of deliberation, the EU is putting the final touches on a comprehensive set of regulations that targets the riskiest applications with the tightest restrictions. China, a key AI rival to the U.S., has also set some rules.

U.K. Prime Minister Rishi Sunak hopes to carve out a prominent role for Britain as an AI safety hub at a summit starting Wednesday that Vice President Kamala Harris plans to attend. And on Monday, officials from the Group of Seven major industrial nations agreed to a set of AI safety principles and a voluntary code of conduct for AI developers.

The U.S., particularly its West Coast, is home to many of the leading developers of cutting-edge AI technology, including tech giants Google, Meta and Microsoft, and AI-focused startups such as OpenAI, maker of ChatGPT. The White House took advantage of that industry weight earlier this year when it secured commitments from those companies to implement safety mechanisms as they build new AI models.

But the White House also faced significant pressure from Democratic allies, including labor and civil rights groups, to make sure its policies reflected their concerns about AI's real-world harms.

Suresh Venkatasubramanian, a former Biden administration official who helped craft principles for approaching AI, said one of the biggest challenges within the federal government has been what to do about law enforcement's use of AI tools, including at U.S. borders.

"These are all places where we know that the use of automation is very problematic, with facial recognition, drone technology," Venkatasubramanian said. Facial recognition technology has been shown to perform unevenly across racial groups, and has been tied to mistaken arrests.

While the EU's forthcoming AI law is set to ban real-time facial recognition in public, Biden's order appears to simply ask for federal agencies to review how they're using AI in the criminal justice system, falling short of the stronger language sought by some activists.

The American Civil Liberties Union is among the groups that met with the White House to try to ensure "we're holding the tech industry and tech billionaires accountable" so that algorithmic tools "work for all of us and not just a few," said ReNika Moore, director of the ACLU's racial justice program, who attended Monday's signing.

After seeing the text of the order, Moore applauded how it addressed discrimination and other AI harms in workplaces and housing, but said the administration "essentially kicks the can down the road" in protecting people from law enforcement's growing use of the technology.

  • Wednesday, Oct. 25, 2023
Vodafone Studios boosts A-V production via Blackmagic
Vodafone Germany studio
FREMONT, Calif. -- 

A keystone of the 400m² creative production environment designed and built by Vodafone Germany and systems integrator Sigma-AV is a 15m curved LED wall for virtual production. Conceived in 2020, the project began implementation in mid-2022, and the space has just come online. “There was a clear will internally to become masters of our own content creation,” stated Lukas Loss, digital content producer at Vodafone Germany.

“Previously, we have delivered a vast amount of video, event delivery and TVC production using external studios and partners. That production was expensive and lacked flexibility,” according to Loss. “With the building of our own studios, we could lower production costs and preparation time while simultaneously raising the scope of what our trained production team could deliver internally. Through the pandemic and beyond, we soon realized that virtual meetups and hybrid event delivery would offer a more flexible model for conferences in the future. As a tech company, we wanted to build a state-of-the-art, future-proof studio with extended reality (XR) that would allow that. But beyond that, creating an XR studio with an LED wall and green screen space unlocks new creative possibilities internally.”

A 15m curved LED wall for XR live production and events is at the heart of the main space. It also features a lounge area for talking heads or interviews, a master control room (MCR) for eight operators and a server room. The second studio area is a smaller green screen space with a pack shot area and an audiovisual podcast studio designed for up to four people.

Blackmagic Design was selected as one of the preferred hardware partners for video. Vodafone elected to deploy the URSA Broadcast G2 camera for its versatility. “We get the best of both worlds: 4K broadcast-style live production for streaming or 6K cinematic production with shallow depth of field,” said Loss.

Combined with Blackmagic Fiber Converters, each camera channel requires just two cables: one for the camera, another for the tracking system. The remaining challenge for Loss was ensuring production didn’t run into moiré issues.

“We conducted testing to determine which type of cameras and LED resolutions would fit our budget, avoid any moiré and still give us the best image quality possible. In Blackmagic and Samsung, we have found the ideal combination to balance those requirements.”

Supplementing those is a Blackmagic Studio Camera 4K Pro paired with the Blackmagic Studio Converter and a 21” teleprompter screen. In the control room, an ATEM Constellation 8K live production switcher and ATEM 2 M/E Advanced Panel run the show, with a Smart Videohub 12G 40x40 for routing video and remote camera control via an ATEM Camera Control Panel.

  • Wednesday, Oct. 25, 2023
MRMC launches SR-1 camera robot
MRMC's SR-1 camera robot
SURREY, U.K. -- 

MRMC, a Nikon Company that provides camera robotics solutions, has launched the SR-1, a pan-tilt head designed for use in locations that are inaccessible or hazardous for camera operators. The system is designed as a next-generation remote production tool that will enhance the creativity of shots, help capture new angles and easily achieve shots that would be “impossible” by hand.

The compact, lightweight SR-1 is easy to transport, set up, mount, and control. It is compatible with Nikon cameras including the Z9, D5 and D6. The head has an axis speed of 30 degrees per second and a pan range of approximately 120 degrees. The system is camera- and lens-agnostic, giving users more choice and flexibility to use equipment they already own. The SR-1 has full IP control, is controllable via MRMC’s MHC or third-party systems and is compatible with MRMC’s Polymotion Chat automated tracking software.

Paddy Taylor, head of broadcast solutions for MRMC, said the SR-1 is “perfect for use in situations where it is difficult or dangerous for a human operator to be present, such as in hazardous environments or at great heights. The SR-1 is also a great option for capturing dynamic shots that would be difficult to achieve manually.”

  • Monday, Oct. 23, 2023
Biden names technology hubs for 32 states and Puerto Rico to help the industry and create jobs
President Joe Biden walks to the podium during an event on the economy in the South Court Auditorium of the Eisenhower Executive Office Building on the White House complex, Monday, Oct. 23, 2023. (AP Photo/Jacquelyn Martin)
WASHINGTON (AP) -- 

The Biden administration on Monday designated 31 technology hubs spread across 32 states and Puerto Rico to help spur innovation and create jobs in the industries that are concentrated in these areas.

"We're going to invest in critical technologies like biotechnology, critical materials, quantum computing, advanced manufacturing — so the U.S. will lead the world again in innovation across the board," President Joe Biden said. "I truly believe this country is about to take off."

The tech hubs are the result of a process that the Commerce Department launched in May to distribute a total of $500 million in grants to cities.

The $500 million came from a $10 billion authorization in last year's CHIPS and Science Act to stimulate investments in new technologies such as artificial intelligence, quantum computing and biotech. It's an attempt to expand tech investment that is largely concentrated around a few U.S. cities — Austin, Texas; Boston; New York; San Francisco; and Seattle — to the rest of the country.

"I have to say, in my entire career in public service, I have never seen as much interest in any initiative than this one," Commerce Secretary Gina Raimondo told reporters during a Sunday conference call to preview the announcement. Her department received 400 applications, she said.

"No matter where I go or who I meet with — CEOs, governors, senators, congresspeople, university presidents — everyone wants to tell me about their application and how excited they are," said Raimondo.

The program, formally the Regional Technology and Innovation Hub Program, ties into the president's economic argument that people should be able to find good jobs where they live and that opportunity should be spread across the country, rather than be concentrated. The White House has sought to elevate that message and highlight Biden's related policies as the Democratic president undertakes his 2024 reelection bid.

The 31 tech hubs reach Oklahoma, Rhode Island, Massachusetts, Montana, Colorado, Illinois, Indiana, Wisconsin, Virginia, New Hampshire, Missouri, Kansas, Maryland, Alabama, Pennsylvania, Delaware, New Jersey, Minnesota, Louisiana, Idaho, Wyoming, South Carolina, Georgia, Florida, New York, Nevada, New Mexico, Oregon, Vermont, Ohio, Maine, Washington and Puerto Rico.

  • Thursday, Oct. 12, 2023
Sony's Access controller for the PlayStation aims to make gaming easier for people with disabilities
Martin Shane uses a Sony Access controller, left, to play a video game at Sony Interactive Entertainment headquarters Thursday, Sept. 28, 2023, in San Mateo, Calif. (AP Photo/Godofredo A. Vásquez)
SAN MATEO, Calif. (AP) -- 

Paul Lane uses his mouth, cheek and chin to push buttons and guide his virtual car around the "Gran Turismo" racetrack on the PlayStation 5. It's how he's been playing for the past 23 years, after a car accident left him unable to use his fingers.

Playing video games has long been a challenge for people with disabilities, chiefly because the standard controllers for the PlayStation, Xbox or Nintendo can be difficult, or even impossible, to maneuver for people with limited mobility. And losing the ability to play the games doesn't just mean the loss of a favorite pastime, it can also exacerbate social isolation in a community already experiencing it at a far higher rate than the general population.

As part of the gaming industry's efforts to address the problem, Sony has developed the Access controller for the PlayStation, working with input from Lane and other accessibility consultants. It's the latest addition to the accessible-controller market, whose contributors range from Microsoft to startups and even hobbyists with 3D printers.

"I was big into sports before my injury," said Cesar Flores, 30, who uses a wheelchair since a car accident eight years ago and also consulted Sony on the controller. "I wrestled in high school, played football. I lifted a lot of weights, all these little things. And even though I can still train in certain ways, there are physical things that I can't do anymore. And when I play video games, it reminds me that I'm still human. It reminds me that I'm still one of the guys."

Putting the traditional controller aside, Lane, 52, switches to the Access. It's a round, customizable gadget that can rest on a table or wheelchair tray and can be configured in myriad ways, depending on what the user needs. That includes switching buttons and thumbsticks, programming special controls and pairing two controllers to be used as one. Lane's "Gran Turismo" car zooms around a digital track as he guides it with the back of his hand on the controller.

"I game kind of weird, so it's comfortable for me to be able to use both of my hands when I game," he said. "So I need to position the controllers away enough so that I can be able to to use them without clunking into each other. Being able to maneuver the controllers has been awesome, but also the fact that this controller can come out of the box and ready to work."

Lane and other gamers have been working with Sony since 2018 to help design the Access controller. The idea was to create something that could be configured to work for people with a broad range of needs, rather than focusing on any particular disability.

"Show me a person with multiple sclerosis and I'll show you a person who can be hard of hearing, I can show someone who has a visual impairment or a motor impairment," said Mark Barlet, founder and executive director of the nonprofit AbleGamers. "So thinking on the label of a disability is not the approach to take. It's about the experience that players need to bridge that gap between a game and a controller that's not designed for their unique presentation in the world."

Barlet said his organization, which helped both Sony and Microsoft with their accessible controllers, has been advocating for gamers with disabilities for nearly two decades. With the advent of social media, gamers themselves have been able to amplify the message and address creators directly in forums that did not exist before.

"The last five years I have seen the game accessibility movement go from indie studios working on some features to triple-A games being able to be played by people who identify as blind," he said. "In five years, it's been breathtaking."

Microsoft, in a statement, said it was encouraged by the positive reaction to its Xbox Adaptive controller when it was released in 2018 and that it is "heartening to see others in the industry apply a similar approach to include more players in their work through a focus on accessibility."

The Access controller will go on sale worldwide on Dec. 6 and cost $90 in the U.S.

Alvin Daniel, a senior technical program manager at PlayStation, said the device was designed with three principles in mind to make it "broadly applicable" to as many players as possible. First, the player does not have to hold the controller to use it. It can lie flat on a table or wheelchair tray, or be mounted on a tripod, for instance. It was important for it to fit on a wheelchair tray, since once something falls off the tray, it might be impossible for the player to pick it up without help. It also had to be durable, so that it would survive being run over by a wheelchair, for example.

Second, it's much easier to press the buttons than on a standard controller. It's a kit, so it comes with button caps in different sizes, shapes and textures so people can experiment with reconfiguring it the way it works best for them. The third is the thumbsticks, which can also be configured depending on what works for the person using it.

Because it can be used with far less agility and strength than the standard PlayStation controller, the Access could also be a game changer for an emerging population: aging gamers suffering from arthritis and other limiting ailments.

"The last time I checked, the average age of a gamers was in their forties," Daniel said. "And I have every expectation, speaking for myself, that they'll want to continue to game, as I'll want to continue to game, because it's entertainment for us."

After his accident, Lane stopped gaming for seven years. For someone who began playing video games as a young child on the Magnavox Odyssey — released in 1972 — "it was a void" in his life, he said.

Starting again, even with the limitations of a standard game controller, felt like being reunited with a "long lost friend."

"Just the the social impact of gaming really changed my life. It gave me a a brighter disposition," Lane said. He noted the social isolation that often results when people who were once able-bodied become disabled.

"Everything changes," he said. "And the more you take away from us, the more isolated we become. Having gaming and having an opportunity to game at a very high level, to be able to do it again, it is like a reunion, (like losing) a close companion and being able to reunite with that person again."
