• Friday, Jan. 26, 2024
Small federal agency crafts standards for making AI safe, secure and trustworthy
(AP Illustration/Peter Hamlin)
BOSTON (AP) -- 

No technology since nuclear fission will shape our collective future quite like artificial intelligence, so it's paramount AI systems are safe, secure, trustworthy and socially responsible.

But unlike the atom bomb, this paradigm shift has been almost completely driven by the private tech sector, which has been resistant to regulation, to say the least. Billions are at stake, making the Biden administration's task of setting standards for AI safety a major challenge.

To define the parameters, it has tapped a small federal agency, the National Institute of Standards and Technology. NIST's tools and measures define products and services from atomic clocks to election security tech and nanomaterials.

At the helm of the agency's AI efforts is Elham Tabassi, NIST's chief AI advisor. She shepherded the AI Risk Management Framework published 12 months ago that laid groundwork for Biden's Oct. 30 AI executive order. It catalogued such risks as bias against non-whites and threats to privacy.

Iranian-born, Tabassi came to the U.S. in 1994 for her master's in electrical engineering and joined NIST not long after. She is principal architect of a standard the FBI uses to measure fingerprint image quality.

This interview with Tabassi has been edited for length and clarity.

Q: Emergent AI technologies have capabilities their creators don't even understand. There isn't even an agreed upon vocabulary, the technology is so new. You've stressed the importance of creating a lexicon on AI. Why?

A: Most of my work has been in computer vision and machine learning. There, too, we needed a shared lexicon to avoid quickly devolving into disagreement. A single term can mean different things to different people. Talking past each other is particularly common in interdisciplinary fields such as AI.

Q: You've said that for your work to succeed you need input not just from computer scientists and engineers but also from attorneys, psychologists, philosophers.

A: AI systems are inherently socio-technical, influenced by environments and conditions of use. They must be tested in real-world conditions to understand risks and impacts. So we need cognitive scientists, social scientists and, yes, philosophers.

Q: This task is a tall order for a small agency, under the Commerce Department, that the Washington Post called "notoriously underfunded and understaffed." How many people at NIST are working on this?

A: First, I'd like to say that we at NIST have a spectacular history of engaging with broad communities. In putting together the AI risk framework we heard from more than 240 distinct organizations and got something like 660 sets of public comments. In quality of output and impact, we don't seem small. We have more than a dozen people on the team and are expanding.

Q: Will NIST's budget grow from the current $1.6 billion in view of the AI mission?

A: Congress writes the checks for us and we have been grateful for its support.

Q: The executive order gives you until July to create a toolset for guaranteeing AI safety and trustworthiness. I understand you called that "an almost impossible deadline" at a conference last month.

A: Yes, but I quickly added that this is not the first time we have faced this type of challenge, that we have a brilliant team, are committed and excited. As for the deadline, it's not like we are starting from scratch. In June we put together a public working group focused on four different sets of guidelines including for authenticating synthetic content.

Q: Members of the House Committee on Science and Technology said in a letter last month that they learned NIST intends to make grants or awards through a new AI safety institute — suggesting a lack of transparency.

A: Indeed, we are exploring options for a competitive process to support cooperative research opportunities. Our scientific independence is really important to us. While we are running a massive engagement process, we are the ultimate authors of whatever we produce. We never delegate to somebody else.

Q: A consortium created to assist the AI safety institute is apt to spark controversy due to industry involvement. What do consortium members have to agree to?

A: We posted a template for that agreement on our website at the end of December. Openness and transparency are a hallmark for us. The template is out there.

Q: The AI risk framework was voluntary but the executive order mandates some obligations for developers. That includes submitting large-language models for government red-teaming (testing for risks and vulnerabilities) once they reach a certain threshold in size and computing power. Will NIST be in charge of determining which models get red-teamed?

A: Our job is to advance the measurement science and standards needed for this work. That will include some evaluations. This is something we have done for face recognition algorithms. As for tasking (the red-teaming), NIST is not going to do any of those things. Our job is to help industry develop technically sound, scientifically valid standards. We are a non-regulatory agency, neutral and objective.

Q: How AIs are trained and the guardrails placed on them can vary widely. And sometimes features like cybersecurity have been an afterthought. How do we guarantee risk is accurately assessed and identified — especially when we may not know what publicly released models have been trained on?

A: In the AI risk management framework we came up with a taxonomy of sorts for trustworthiness, stressing the importance of addressing it during design, development and deployment — including regular monitoring and evaluations during AI systems' lifecycles. Everyone has learned we can't afford to try to fix AI systems after they are out in use. It has to be done as early as possible.
And yes, much depends on the use case. Take facial recognition. It's one thing if I'm using it to unlock my phone. A totally different set of security, privacy and accuracy requirements come into play when, say, law enforcement uses it to try to solve a crime. Tradeoffs between convenience and security, bias and privacy — all depend on context of use.

  • Thursday, Jan. 18, 2024
Samsung vies to make AI more mainstream by baking more of the technology into its Galaxy phones
The new lineup of Samsung Galaxy S24 phones on display at a preview event in San Jose, Calif. on Wednesday, Jan. 17, 2024. The sales pitch for the Galaxy S24 phones revolves around an array of new features powered by artificial intelligence, or AI, in contrast to Samsung's usual strategy highlighting mostly incremental improvements to the device's camera and battery life. (AP Photo/Haven Daley)
SAN JOSE, Calif. (AP) -- 

Smartphones could get much smarter this year as the next wave of artificial intelligence seeps into the devices that accompany people almost everywhere they go.

Samsung, the biggest rival to Apple and its iPhone, provided a glimpse of how smartphones are evolving during a Wednesday unveiling of the next generation of its flagship Galaxy models.

The sales pitch for the Galaxy S24 lineup revolves around an array of new features powered by AI.

"We will reshape the technology landscape, we will open a new chapter without barriers to unleash your potential," TM Roh, the president of Samsung's mobile experience division, vowed to a crowd gathered in a San Jose, California, arena usually used for hockey games and concerts.

Besides featuring some of Samsung's own work in AI, the Galaxy S24 lineup will be packed with some of the latest advances coming out of Google.

The technological improvements will also usher in a higher price for Samsung's top-of-the-line phone, the Galaxy S24 Ultra, which will be priced at $1,300 — a $100, or 8%, increase from last year's comparable model. The increase mirrors what Apple did with its fanciest model, the iPhone 15 Pro Max, released in September.

Samsung is holding steady on the prices for the Galaxy S24 Plus, which will sell for $1,000, and the basic Galaxy S24, which will start at $800.

All the new Galaxy phones, due in stores Jan. 31, will be packed with far more AI than before, including a feature that will provide live translation during phone calls in 13 languages and 17 dialects. The Galaxy S24 lineup will also introduce Google's "Circle To Search" that involves using a digital stylus or a finger to circle snippets of text, parts of photos or videos to get instant search results about whatever has been highlighted.

The new Galaxy phones will also enable quick and easy ways to manipulate the appearance and placement of specific parts of pictures taken on the devices' camera. It's a feature that could help people refine their photos, while also making it easier to create misleading images.

Google started a push last fall to infuse its latest Pixel phones with more AI, including the ability to alter the appearance of photos — an effort that the company accelerated at the end of last year with the initial rollout of project Gemini, its next technological leap. Google is also pushing out the Circle To Search tool to its latest phones, the Pixel 8 and Pixel 8 Pro, with plans to expand it to other devices running on its Android software later this year.

Besides introducing Circle To Search, Google also is drawing upon AI to enable users of its mobile app for iPhones as well as Android to point a camera at an object for a summary about what is being captured by the lens. Although Google believes Circle To Search and the Lens option will make its results even more useful, executives have also acknowledged they both may be prone to inaccuracies.

Like virtually all phone manufacturers other than Apple, Samsung relies on Google's Android operating system, so the two companies' interests have been aligned even though they compete against each other in the sale of mobile devices.

Apple is expected to put more AI into its next generation of iPhones in September, but now Samsung has a head start toward gaining the upper hand in making the technology more ubiquitous, Forrester Research analyst Thomas Husson said. It's a competitive edge that Samsung could use, having ceded its longstanding mantle as the world's largest seller of smartphones to Apple last year, according to the market research firm International Data Corp.

"Samsung's marketing challenge is precisely to make the technology transparent to impress consumers with magic and invisible experiences," Husson said.

The increasing use of AI in smartphones comes after the Microsoft-backed startup, OpenAI, thrust the technology into the mainstream last year with its ChatGPT bot capable of quickly creating stories, memos, videos and drawings upon request.

As AI becomes a more integral piece of smartphones, the technology will likely have broad implications on productivity, creativity and privacy, predicted Todd Lohr, U.S. technology consulting leader for KPMG.

"Intelligence is actually coming to your smartphone, which really hasn't been that smart," Lohr said. "You may eventually see use cases where you could have your smartphone listen to you all day and have it provide a summary of your day at the end of it. That could create a challenge in the social construct because if everyone's device is listening to everyone, whose data is it?"

AI isn't quite that advanced yet, but Samsung already is trying to address privacy worries likely to be raised by the amount of new technology rolling out in the Galaxy S24 lineup. Samsung executives are emphasizing that the AI features can be kept on the device, although some applications may need to connect to data centers in the virtual cloud.

The South Korean company also is promising users that their on-device activity will be protected by its "Samsung Knox" security.

Michael Kokotajlo, KPMG's digital transformation partner of telecommunications, thinks Samsung and other smartphone makers are on the way to giving people an "AI assistant in their pockets" — a concept that he expects to be more readily adopted by younger generations that have grown up during the mobile-computing era.

"Millennials and Gen Z are definitely going to be looking for these AI capabilities because they don't have as much concern about privacy and security, but some of the older generations may have more concerns about that or how do you even leverage all of it," Kokotajlo said.

  • Wednesday, Jan. 17, 2024
Here's how ChatGPT maker OpenAI plans to deter election misinformation in 2024
The OpenAI logo is seen displayed on a cell phone with an image on a computer monitor generated by ChatGPT's Dall-E text-to-image model, Dec. 8, 2023, in Boston. ChatGPT maker OpenAI has outlined a plan, spelled out in a blog post on Monday, Jan. 15, 2024, to prevent its tools from being used to spread election misinformation as voters in more than 50 countries around the world prepare to vote in national elections in 2024. (AP Photo/Michael Dwyer, File)
NEW YORK (AP) -- 

ChatGPT maker OpenAI has outlined a plan to prevent its tools from being used to spread election misinformation as voters in more than 50 countries prepare to cast their ballots in national elections this year.

The safeguards spelled out by the San Francisco-based artificial intelligence startup in a blog post this week include a mix of preexisting policies and newer initiatives to prevent the misuse of its wildly popular generative AI tools. They can create novel text and images in seconds but also be weaponized to concoct misleading messages or convincing fake photographs.

The steps will apply specifically to OpenAI, only one player in an expanding universe of companies developing advanced generative AI tools. The company, which announced the moves Monday, said it plans to "continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency."

It said it will ban people from using its technology to create chatbots that impersonate real candidates or governments, to misrepresent how voting works or to discourage people from voting. It said that until more research can be done on the persuasive power of its technology, it won't allow its users to build applications for the purposes of political campaigning or lobbying.

Starting "early this year," OpenAI said, it will digitally watermark AI images created using its DALL-E image generator. This will permanently mark the content with information about its origin, making it easier to identify whether an image that appears elsewhere on the web was created using the AI tool.
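A common way to make an image's origin machine-checkable is to bind provenance metadata to a cryptographic hash of the content. The toy Python sketch below illustrates only that general idea; it is not OpenAI's actual watermarking scheme, and the key, manifest fields and function names are all invented for illustration (a real system would use asymmetric signatures rather than a shared secret).

```python
# Toy sketch of content-provenance tagging: bundle origin info with a
# hash of the image bytes so later edits are detectable. Illustrative
# only -- NOT OpenAI's actual DALL-E watermarking implementation.
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # hypothetical; real systems sign with a private key

def make_manifest(image_bytes: bytes, generator: str) -> dict:
    """Create a manifest tying origin metadata to the content's hash."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    manifest = {"generator": generator, "sha256": digest}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Check the signature AND that the content still matches its hash."""
    claimed = dict(manifest)
    sig = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and hashlib.sha256(image_bytes).hexdigest() == claimed["sha256"])

image = b"\x89PNG...fake image bytes..."
manifest = make_manifest(image, "example-image-generator")
print(verify(image, manifest))         # True: untouched content verifies
print(verify(image + b"x", manifest))  # False: altered content fails
```

The point of binding the hash into a signed manifest is that neither stripping the metadata nor editing the pixels goes unnoticed: a missing or re-forged manifest fails the signature check, and edited content fails the hash check.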

The company also said it is partnering with the National Association of Secretaries of State to steer ChatGPT users who ask logistical questions about voting to accurate information on that group's nonpartisan website, CanIVote.org.

Mekela Panditharatne, counsel in the democracy program at the Brennan Center for Justice, said OpenAI's plans are a positive step toward combating election misinformation, but it will depend on how they are implemented.

"For example, how exhaustive and comprehensive will the filters be when flagging questions about the election process?" she said. "Will there be items that slip through the cracks?"

OpenAI's ChatGPT and DALL-E are some of the most powerful generative AI tools to date. But there are many companies with similarly sophisticated technology that don't have as many election misinformation safeguards in place.

While some social media companies, such as YouTube and Meta, have introduced AI labeling policies, it remains to be seen whether they will be able to consistently catch violators.

"It would be helpful if other generative AI firms adopted similar guidelines so there could be industry-wide enforcement of practical rules," said Darrell West, senior fellow in the Brookings Institution's Center for Technology Innovation.

Without voluntary adoption of such policies across the industry, regulating AI-generated disinformation in politics would require legislation. In the U.S., Congress has yet to pass legislation seeking to regulate the industry's role in politics despite some bipartisan support. Meanwhile, more than a third of U.S. states have passed or introduced bills to address deepfakes in political campaigns as federal legislation stalls.

OpenAI CEO Sam Altman said that even with all of his company's safeguards in place, his mind is not at ease.

"I think it's good we have a lot of anxiety and are going to do everything we can to get it as right as we can," he said during an interview Tuesday at a Bloomberg event during the World Economic Forum in Davos, Switzerland. "We're going to have to watch this incredibly closely this year. Super tight monitoring. Super tight feedback loop." 

  • Thursday, Jan. 11, 2024
Motion Picture Academy to honor 16 scientific and technical achievements
LOS ANGELES -- 

The Academy of Motion Picture Arts and Sciences announced Thursday that 16 scientific and technical achievements will be honored at its annual Scientific and Technical Awards ceremony on Friday, February 23, 2024, at the Academy Museum of Motion Pictures.

“The Academy recognizes and celebrates all aspects of the film industry and the diverse, talented people who make movies,” said Academy CEO Bill Kramer. “Our Scientific and Technical Awards are a critical part of this mission, as they honor the individuals and companies whose discoveries and innovations have contributed in significant and lasting ways to our motion picture industry.”

“Each year, a global group of technology practitioners and experts sets out to examine the extraordinary tools and techniques employed in the creation of motion pictures,” said Barbara Ford Grant, chair of the Academy’s Scientific and Technical Awards Committee, which oversees the vetting of the awards. “This year, we honor 16 technologies for their exceptional contributions to how we craft and enhance the movie experience, from the safe execution of on-set special effects to new levels of image presentation fidelity and immersive sound to open frameworks that enable artists to share their digital creations across different software and studios seamlessly. These remarkable achievements in the arts and sciences of filmmaking have propelled our medium to unprecedented levels of greatness.”

Unlike other Academy Awards® to be presented this year, achievements receiving Scientific and Technical Awards need not have been developed and introduced during a specified period.  Instead, the achievements must demonstrate a proven record of contributing significant value to the process of making motion pictures.

The Academy Awards for scientific and technical achievements are: 

TECHNICAL ACHIEVEMENT AWARDS (ACADEMY CERTIFICATES)

To Bill Beck for his pioneering utilization of semiconductor lasers for theatrical laser projection systems.
Bill Beck’s advocacy and education to the cinema industry while at Laser Light Engines contributed to the transition to laser projection in theatrical exhibition.

To Gregory T. Niven for his pioneering work in using laser diodes for theatrical laser projection systems.
At Novalux and Necsel, Gregory T. Niven demonstrated and refined specifications for laser light sources for theatrical exhibition, leading the industry’s transition to laser cinema projection technology.

To Yoshitaka Nakatsu, Yoji Nagao, Tsuyoshi Hirao, Tomonori Morizumi and Kazuma Kozuru for their development of laser diodes for theatrical laser projection systems.
Yoshitaka Nakatsu, Yoji Nagao, Tsuyoshi Hirao, Tomonori Morizumi and Kazuma Kozuru collaborated closely with cinema professionals and manufacturers while at Nichia Corporation Laser Diode Division, leading to the development and industry-wide adoption of blue and green laser modules producing wavelengths and power levels matching the specific needs of the cinema market.

To Arnold Peterson and Elia P. Popov for their ongoing design and engineering, and to John Frazier for the initial concept of the Blind Driver Roof Pod.
The roof pod improves the safety, speed and range of stunt driving, extending the options for camera placement while acquiring picture car footage with talent in the vehicle, leading to rapid adoption across the industry.

To Jon G. Belyeu for the design and engineering of Movie Works Cable Cutter devices.
The unique and resilient design of this suite of pyrotechnic cable cutters has made them the preferred method for safe, precise and reliable release of suspension cables for over three decades in motion picture production.

To James Eggleton and Delwyn Holroyd for the design, implementation and integration of the High-Density Encoding (HDE) lossless compression algorithm within the Codex recording toolset.
The HDE codec allows productions to leverage familiar and proven camera raw workflows more efficiently by reducing the storage and bandwidth needed for the increased amounts of data from high-photosite-count cameras.

To Jeff Lait, Dan Bailey and Nick Avramoussis for the continued evolution and expansion of the feature set of OpenVDB.
Core engineering developments contributed by OpenVDB’s open-source community have led to its ongoing success as an enabling platform for representing and manipulating volumetric data for natural phenomena. These additions have helped solidify OpenVDB as an industry standard that drives continued innovation in visual effects.

To Oliver Castle and Marcus Schoo for the design and engineering of Atlas, and to Keith Lackey for the prototype creation and early development of Atlas.
Atlas’ scene description and evaluation framework enables the integration of multiple digital content creation tools into a coherent production pipeline. Its plug-in architecture and efficient evaluation engine provide a consistent representation from virtual production through to lighting.

To Lucas Miller, Christopher Jon Horvath, Steve LaVietes and Joe Ardent for the creation of the Alembic Caching and Interchange system.
Alembic’s algorithms for storing and retrieving baked, time-sampled data enable high-efficiency caching across the digital production pipeline and sharing of scenes between facilities. As an open-source interchange library, Alembic has seen widespread adoption by major software vendors and production studios.

SCIENTIFIC AND ENGINEERING AWARDS (ACADEMY PLAQUES)

To Charles Q. Robinson, Nicolas Tsingos, Christophe Chabanne, Mark Vinton and the team of software, hardware and implementation engineers of the Cinema Audio Group at Dolby Laboratories for the creation of the Dolby Atmos Cinema Sound System.
Dolby Atmos has become an industry standard for object-based cinema audio content creation and presents a premier immersive audio experience for theatrical audiences.

To Steve Read and Barry Silverstein for their contributions to the design and development of the IMAX Prismless Laser Projector.
Utilizing a novel optical mirror system, the IMAX Prismless Laser Projector removes prisms from the laser light path to create the high brightness and contrast required for IMAX theatrical presentation.

To Peter Janssens, Goran Stojmenovik and Wouter D’Oosterlinck for the design and development of the Barco RGB Laser Projector.
The Barco RGB Laser Projector’s novel and modular design with an internally integrated laser light source produces flicker-free uniform image fields with improved contrast and brightness, enabling a widely adopted upgrade path from xenon to laser presentation without the need for alteration to screen or projection booth layout of existing theaters.

To Michael Perkins, Gerwin Damberg, Trevor Davies and Martin J. Richards for the design and development of the Christie E3LH Dolby Vision Cinema Projection System, implemented in collaboration between Dolby Cinema and Christie Digital engineering teams.
The Christie E3LH Dolby Vision Cinema Projection System utilizes a novel dual modulation technique that employs cascaded DLP chips along with an improved laser optical path, enabling high dynamic range theatrical presentation.

To Ken Museth, Peter Cucka and Mihai Aldén for the creation of OpenVDB and its ongoing impact within the motion picture industry.
For over a decade, OpenVDB’s core voxel data structures, programming interface, file format and rich tools for data manipulation continue to be the standard for efficiently representing complex volumetric effects, such as water, fire and smoke.

To Jaden Oh for the concept and development of the Marvelous Designer clothing creation system.
Marvelous Designer introduced a pattern-based approach to digital costume construction, unifying design and visualization and providing a virtual analog to physical tailoring. Under Jaden Oh’s guidance, the team of engineers, UX designers and 3D designers at CLO Virtual Fashion has helped to raise the quality of appearance and motion in digital wardrobe creations.

To F. Sebastian Grassia, Alex Mohr, Sunya Boonyatera, Brett Levin and Jeremy Cowles for the design and engineering of Pixar’s Universal Scene Description (USD).
USD is the first open-source scene description framework capable of accommodating the full scope of the production workflow across a variety of studio pipelines. Its robust engineering and mature design are exemplified by its versatile layering system and the highly performant crate file format. USD’s wide adoption has made it a de facto interchange format of 3D scenes, enabling alignment and collaboration across the motion picture industry.

  • Tuesday, Jan. 9, 2024
CES 2024 updates: The most interesting news and gadgets from tech's big show
JH Han, CEO and Head of the Device Experience Division at Samsung Electronics, speaks during a Samsung press conference ahead of the CES tech show Monday, Jan. 8, 2024, in Las Vegas. (AP Photo/John Locher)
LAS VEGAS (AP) -- 

CES 2024 kicks off in Las Vegas this week. The multi-day trade event put on by the Consumer Technology Association is set to feature swaths of the latest advances and gadgets across personal tech, transportation, health care, sustainability and more — with burgeoning uses of artificial intelligence almost everywhere you look.
We will keep a running report of everything we find interesting on the floor of CES, from developments in vehicle tech, to wearables designed to improve accessibility, to the newest smart home gadgets.

YOUR OWN PERSONAL BARTENDER
Ryan Close loves a good cocktail, but he's the first to admit that he is a terrible bartender.

That's why, he said, he created Bartesian, a cocktail-making machine small enough to sit on your kitchen counter. Its newest iteration, the Premier, can hold up to four different types of spirits. It retails for $369 and will be available later this year.

On a small screen, you pick from 60 recipes — like a cosmopolitan or a white sangria — drop the cocktail capsule into the machine, and in seconds you have a cocktail over ice.

Lemon drop is Bartesian's most popular recipe, according to Close.

LETTING THE RIGHT ONES IN
It can be tricky to keep track of your furry friends in and out of the house — but a new pet door might make it a little easier.

Tech startup Pawport has unveiled a motorized pet door that will let your pet come and go as they please — while keeping other critters out. An accompanying collar tag opens the door when your pet is near, and there are also customizable guardrails.

The product, which can slide directly onto existing pet door frames, can be temporarily locked for specific pets or set to "curfews" using the Pawport app or via remote control through compatible virtual assistants like Amazon's Alexa and Google Assistant.

Pawport's pet door and app are currently available for preorder and are set to make their way into homes during the second quarter of 2024.

SMART LOCKS GO BIOMETRIC
It's 2024: of course your face can unlock your phone. And your front door is next.

Lockly, a tech company that specializes in smart locks, is showcasing a new lock with facial recognition technology that allows consumers to open doors without any keys. The new smart lock, dubbed "Visage," is set to hit the market this summer. In addition to facial recognition, this lock will feature a biometric fingerprint sensor and secure digital keypad for alternative ways of entry -- similar to past Lockly products. Visage is also compatible with Apple HomeKey and Apple Home.

AI TWINSIES
Have you ever wondered what it's like to be a twin? Rex Wong, CEO of Hollo AI, says his company has created "AI personalization technology" that can create your digital twin in mere minutes after uploading a selfie and voice memos in a phone app expected to launch later this month.

Wong said he wanted to create a technology that could help digital creators and celebrities connect with their fans in a new way.

Standing next to a television screen projecting her AI clone, Los Angeles-based content creator McKenzi Brooke told AP that her digital twin will allow her to interact 24 hours a day with her followers across various social media platforms – and make money off of it.

"It's not a 9-to-5 job. It's a 24-hour job. There's no break," she said, noting that she posts more than 100 times a day just on Snapchat, a photo-sharing social media platform. "Now I have my AI twin who is able to talk to my audience, but it talks the way I would talk."

PLAYSTATION CONTROLLER MAKES A CAMEO APPEARANCE AT SONY ANNOUNCEMENT
Sony Honda Mobility returned to CES this year with some updates to its Afeela EV. While the car itself may be no closer to moving beyond the concept stage, Sony had some fun with it: the company drove it onto the stage with a PlayStation controller.

President of Sony Honda Mobility Izumi Kawanishi was quick to point out that Afeela owners likely won't be driving cars using controllers in the future.

HYUNDAI SEES A FUTURE IN HYDROGEN
Hyundai on Monday spotlighted its future plans for utilizing hydrogen energy. Beyond hydrogen-powered fuel cell vehicles, the South Korean automaker pointed to the possibilities of moving further into energy production, storage and transportation — as Hyundai works toward contributing to "the establishment of a hydrogen society." Company leaders say this sets them apart from other automakers.

"We are introducing a way to turn organic waste and even plastic into clean hydrogen. This is unique," José Muñoz, president and global Chief Operating Officer of Hyundai Motor Company, said in a Monday press conference at CES 2024.

Hyundai also shared plans to further define vehicles based on their software offerings and new AI technology. With so-called "software defined vehicles," that could include opportunities for consumers to pay for features on demand — such as advanced driver assistance or autonomous driving — down the road. Hyundai also aims to integrate its own large language model into its navigation system.

SAMSUNG AND HYUNDAI TEAM UP TO ADD AI TO YOUR CAR
Samsung has announced that it is collaborating with Hyundai to bring "home-to-car" and "car-to-home" services to all Kia and Hyundai vehicles.

What that means is that you will be able to use Samsung's SmartThings service to set your car's cabin temperature or open its windows, and when you're in your car, you'll be able to control your home's lights and interact with any of your connected smart devices.

Samsung also announced a team-up with Microsoft to bring more Copilot AI functions to its flagship Galaxy smartphones.

A 'PAWFECT' COMPANION FOR YOUR PET?
Busy families with dogs may want to be on the lookout for a new AI-powered robot that promises to play with, feed and even give medicine to your furry best friend.

Consumer robotics firm Ogmen was at CES 2024 to show its new ORo pet companion, an autonomous robot designed to assist with pet care by feeding, providing medicine and even playing with your dog using a ball launcher built into its chest.

TRANSPARENT TVs ARE HERE
Consumer electronics giants LG and Samsung have unveiled transparent TVs at the show, with LG having just announced its OLED-powered display will go on sale later this year.

Almost invisible when turned off, LG's 77-inch transparent OLED screen can switch between transparent mode and a more traditional black background for regular TV mode.

"The unique thing about OLED is it's an organic material that we can print on any type of surface," explains David Park from LG's Home Entertainment Division.

"And so what we've done is printed it on a transparent piece of glass, and then to get the OLED picture quality, that's where we have that contrast film that goes up and down."

Content is delivered wirelessly to the display using LG's Zero Connect Box which sends 4K images and sound.

Why would you need a transparent TV?

When not being watched as a traditional TV, the OLED T can be used as a digital canvas for showcasing artworks, for instance.

Samsung, meanwhile, showed off its transparent MICRO LED-powered display only as a concept.

ADS COMING TO SHOPPING CARTS
Food companies advertise all over the grocery store with eye-catching packaging and displays. Now, Instacart hopes they'll start advertising right on your cart.

This week at CES, the San Francisco-based grocery delivery and technology company is unveiling a smart cart that shows video ads on a screen near the handle. General Mills, Del Monte Foods, and Dreyer's Grand Ice Cream are among the companies who will advertise on the carts during an upcoming pilot at West Coast stores owned by Good Food Holdings.

Instacart says a screen might advertise deals or show a limited-edition treat, like Chocolate Strawberry Cheerios. It might also share real-time recommendations based on what customers put in the cart, like advertising ice cream if a customer buys cones.

Instacart got into the cart business in 2021 when it bought Caper, which makes smart carts with cameras and sensors that automatically keep track of items placed in them. Instacart says it expects to have thousands of Caper Carts deployed by the end of this year.

  • Monday, Jan. 8, 2024
Apple's Vision Pro headset launches next month as company seeks to expand mixed-reality market
The Apple Vision Pro headset is displayed in a showroom on the Apple campus after its unveiling on June 5, 2023, in Cupertino, Calif. Apple's high-priced headset for toggling between the real and digital world will be available in its stores beginning Feb. 2, 2024, launching the trendsetting company's push to broaden the appeal of what so far has been a niche technology. (AP Photo/Jeff Chiu, File)

Apple's high-priced headset for toggling between the real and digital world will be available in its stores beginning Feb. 2, launching the trendsetting company's push to broaden the appeal of what so far has been a niche technology.

Apple unveiled the sleek $3,500 goggles at a software conference held at its Cupertino, California, headquarters eight months ago — an event that was designed to encourage developers to make apps tailored for a device that projects users into three-dimensional simulations of reality.

Apple's announcement coincides with a major consumer electronics show in Las Vegas where the company has long been conspicuously absent.

Apple says the goggles' operating system will be compatible with more than 1 million apps designed for the iPhone and iPad. Pre-orders begin Jan. 19, but buyers will have to go to a store to be properly fitted for the goggles, which are controlled with the eyes and a few simple hand gestures.

Although Facebook owner Meta Platforms and other companies have been making virtual reality headsets for years with limited success, many industry analysts believe Apple has the potential to expand the technology's audience beyond the video gamers and mostly tech nerds that have embraced it so far.

The Vision Pro already has gotten largely enthusiastic reviews among the media who were able to test it in tightly controlled demonstrations monitored by Apple, but the device's price tag probably means relatively few unit sales during its first year on the market.

Even so, Apple's first new product since its smartwatch debut a decade ago could set the stage for the introduction of more affordable versions for a broader audience. Right now, the Vision Pro will cost seven times more than Meta's latest virtual-reality headset, the Quest 3.

In a sign that Apple is expecting the Vision Pro to pave the way to a bigger market, the company included the ability to take 3-D videos that can be viewed through the goggles on its latest premium iPhones, the 15 Pro and 15 Pro Max. These videos are so realistic that the people and other images in them appear to be right in front of the viewer watching them.

Apple is looking for ways to juice its sales after suffering a slight decline in revenue during its last fiscal year ending in September. Apple still raked in $383 billion in sales, with the iPhone accounting for more than half that amount.

Michael Liedtke is an AP technology writer

  • Monday, Jan. 8, 2024
CES 2024 is upon us. Here's what to expect from this year's annual show of all-things tech
People walk through the Las Vegas Convention Center during setup ahead of the CES tech show Saturday, Jan. 6, 2024, in Las Vegas. (AP Photo/John Locher)
LAS VEGAS (AP) -- 

CES, the Consumer Technology Association's annual trade show of all-things tech, is kicking off in Las Vegas this week.

The multi-day event, formerly known as the Consumer Electronics Show, is set to feature swaths of the industry's latest advances and gadgets across personal tech, transportation, health care and more — with burgeoning uses of artificial intelligence almost everywhere you look.

The Consumer Technology Association bills CES as the world's largest audited tech event held in-person. Organizers hope to bring in some 130,000 attendees this year. More than 4,000 exhibitors, including over 1,200 startups, are also expected across 2.5 million net square feet of exhibit space.

That's still below the headcounts of pre-pandemic years and would mark a 24% dip in attendance compared to the show held in early 2020, just before COVID-19 consumed much of everyday life. But 2024 is on track to beat more recent years. The anticipated numbers would surpass 2023's nearly 118,000 attendees, for example.

"People are pumped for this. They're pumped because it's post-COVID (and) they're coming back," Gary Shapiro, president and CEO of the Consumer Technology Association, said. "And the CEO level support from around the world has been amazing."

Big names set to exhibit at CES this year range from tech giants and automakers to leading cosmetics brands — including Amazon, Google, Honda, Mercedes-Benz and L'Oreal. The show will also spotlight the Consumer Technology Association's partnership with the United Nations Human Security for All campaign, which recently added technology as its eighth human security pillar.

After two days of media previews, CES will run from Tuesday through Friday. The show is not open to the general public -- it's a business-to-business event often used for industry professionals to network and connect.

We spoke with Shapiro about CES 2024 and what to expect this week. The conversation has been edited for clarity and length.

CES 2024 IS HERE. WHAT ARE THE MAIN THEMES OF THIS YEAR'S SHOW?
The overall theme of the show, in a sense, is sustainability. It's green. It's the U.N. human securities — including those that focus on clean air, clean water, food as well as health care. And the U.N. just added a new one, which is technology itself. The show is built around these human securities.

From mobility to health care, the exhibiting companies are providing solutions in the post-COVID world. We're also getting older, we're living longer and there are fewer people to take care of us. Technology is the answer.

AI IS EVERYWHERE THIS YEAR. HOW MUCH SAFETY OVERSIGHT IS THERE ON THE DEVICES WE'LL SEE IN THE COMING DAYS?
AI is like the internet itself. It's a huge ingredient that will propel so much innovation. The difference is now generative AI, which can learn from what you've done. And you can apply that to so many different aspects of what we do that will make our lives better — especially in a health care area.

Like any tool since the invention of fire, the government plays a very big role in making sure there are certain safety barriers. We've been working with the U.S. Senate and they've been hearing from every interested party about what we need — including a national privacy law. AI is a tool and it can be used for doing tremendous good, or it could be used for doing harm. And we want to focus on the good.

AUTOMAKERS ALSO HAVE A BIG SPOTLIGHT AT CES. CAN WE EXPECT ANY IMPACT FROM THE RECENT UAW STRIKE?
In terms of a trade event, this is like the biggest car event in the world. We see car companies from all over the world on the floor.

They will be there in different ways, and some choose not to be here for one reason or another. Certainly the strike had an impact for some of the Detroit companies, but the rest of the companies from around the world are very strong — notably from Europe, Vietnam and Japan.

WE SAW VIDEO GAME EXPO E3 BITE THE DUST LAST MONTH. WHAT ROLE DO TRADE SHOWS PLAY TODAY AND HOW CAN CES'S FUTURE BE ENSURED?
Since COVID, trade shows have actually become more important for business leaders — because they understand and appreciate that relationship-building. That face-to-face time is very important. A person who goes to CES, for example, has on average 29 different meetings. What is more efficient than that?

And then there's something you can't get online, which is serendipity. It's discovery. It's learning what you don't know and it's being inspired. Someone said to me on the way here, "I love going to CES because I come back optimistic for the world. I come back with 50 ideas and it energizes me." And that's what's so important. I think we have a great future, and innovation is going to be what fuels us. And we will get there by gathering the world's innovators together.

Video producer James Brooks contributed to this report.

  • Thursday, Jan. 4, 2024
Microsoft's new AI key is first big change to keyboards in decades
The Microsoft logo is shown at the Mobile World Congress 2023 in Barcelona, Spain, on March 2, 2023. Starting in February, some new personal computers that run Microsoft's Windows operating system will have a special "Copilot key" that launches the software giant's AI chatbot. (AP Photo/Joan Mateu Parra, File)

Pressing a button will be one way to summon an artificial intelligence agent as Microsoft wields its computer industry influence to reshape the next generation of keyboards.

Starting this month, some new personal computers that run Microsoft's Windows operating system will have a special "Copilot key" that launches the software giant's AI chatbot.

Getting third-party computer manufacturers to add an AI button to laptops is the latest move by Microsoft to capitalize on its close partnership with ChatGPT-maker OpenAI and make itself a gateway for applications of generative AI technology.

Although most people now connect to the internet — and AI applications — by phone rather than computer, it's a symbolic kickoff to what's expected to be an intensively competitive year as tech companies race to outdo each other in AI applications even as they haven't yet resolved all the ethical and legal ramifications. The New York Times last month sued both OpenAI and Microsoft alleging that tools like ChatGPT and Copilot — formerly known as Bing Chat — were built by infringing on copyrighted news articles.

The keyboard redesign will be Microsoft's biggest change to PC keyboards since it introduced a special Windows key in the 1990s. Microsoft's four-squared logo design has evolved, but the key has been a fixture on Windows-oriented keyboards for nearly three decades.

The newest AI button will be marked by the ribbon-like Copilot logo and be located near the space bar. On some computers it will replace the right "CTRL" key, while on others it will replace a menu key.

Microsoft is not the only company with customized keys. Apple pioneered the concept in the 1980s with its "Command" key marked by a looped square design (it also sported an Apple logo for a time). Google has a search button on its Chromebooks and was first to experiment with an AI-specific key to launch its voice assistant on its now-discontinued Pixelbook.

But Microsoft has a much stronger hold on the PC market through its licensing agreements with third-party manufacturers like Lenovo, Dell and HP. About 82% of all desktop computers, laptops and workstations run Windows, compared to 9% for Apple's in-house operating system and just over 6% for Google's, according to market research firm IDC.

Microsoft hasn't yet said which computer-makers are installing the Copilot button beyond Microsoft's own in-house line of premium Surface devices. It said some of the companies are expected to unveil their new models at next week's CES gadget show in Las Vegas.

Matt O'Brien is an AP technology writer

  • Wednesday, Dec. 27, 2023
The New York Times sues OpenAI and Microsoft for using its stories to train chatbots
A sign for The New York Times hangs above the entrance to its building, Thursday, May 6, 2021 in New York. The New York Times filed a federal lawsuit against OpenAI and Microsoft on Wednesday, Dec. 27, 2023 seeking to end the practice of using published material to train chatbots. (AP Photo/Mark Lennihan, File)
NEW YORK (AP) -- 

The New York Times is striking back against the threat that artificial intelligence poses to the news industry, filing a federal lawsuit Wednesday against OpenAI and Microsoft seeking to end the practice of using its stories to train chatbots.

The Times says the companies are threatening its livelihood by effectively stealing billions of dollars worth of work by its journalists, in some cases spitting out Times' material verbatim to people who seek answers from generative artificial intelligence like OpenAI's ChatGPT. The newspaper's lawsuit was filed in federal court in Manhattan and follows what appears to be a breakdown in talks between the newspaper and the two companies, which began in April.

The media has already been pummeled by a migration of readers to online platforms. While many publications — most notably the Times — have successfully carved out a digital space, the rapid development of AI threatens to significantly upend the publishing industry.

Web traffic is an important component of the paper's advertising revenue and helps drive subscriptions to its online site. But the outputs from AI chatbots divert that traffic away from the paper and other copyright holders, the Times says, making it less likely that users will visit the original source for the information.

"These bots compete with the content they are trained on," said Ian B. Crosby, partner and lead counsel at Susman Godfrey, which is representing The Times.

An OpenAI spokesperson said in a prepared statement that the company respects the rights of content creators and is "committed" to working with them to help them benefit from the technology and new revenue models.

"Our ongoing conversations with the New York Times have been productive and moving forward constructively, so we are surprised and disappointed with this development," the spokesperson said. "We're hopeful that we will find a mutually beneficial way to work together, as we are doing with many other publishers."

Microsoft did not respond to requests for comment.

Artificial intelligence companies scrape information available online, including articles published by news organizations, to train generative AI chatbots. The large language models are also trained on a huge trove of other human-written materials, which helps them to build a strong command of language and grammar and to answer questions correctly.

But the technology is still under development and gets many things wrong. In its lawsuit, for example, the Times said OpenAI's GPT-4 falsely attributed product recommendations to Wirecutter, the paper's product reviews site, endangering its reputation.

OpenAI and other AI companies, including rival Anthropic, have attracted billions of dollars in investments very rapidly since public and business interest in the technology exploded, particularly this year.

Microsoft has a partnership with OpenAI that allows it to capitalize on the company's AI technology. The Redmond, Washington, tech giant is also OpenAI's biggest backer and has invested at least $13 billion into the company since the two began their partnership in 2019, according to the lawsuit. As part of the agreement, Microsoft's supercomputers help power OpenAI's AI research and the tech giant integrates the startup's technology into its products.

The paper's complaint comes as the number of lawsuits filed against OpenAI for copyright infringement is growing. The company has been sued by several writers — including comedian Sarah Silverman — who say their books were ingested to train OpenAI's AI models without their permission. In June, more than 4,000 writers signed a letter to the CEOs of OpenAI and other tech companies accusing them of exploitative practices in building chatbots.

As AI technology develops, growing fears over its use have also fueled labor strikes and lawsuits in other industries, including Hollywood. Different stakeholders are realizing the technology could disrupt their entire business model, but the question will be how to respond to it, said Sarah Kreps, director of Cornell University's Tech Policy Institute.

Kreps said she agrees The New York Times is facing a threat from these chatbots. But she also argued solving the issue completely is going to be an uphill battle.

"There's so many other language models out there that are doing the same thing," she said.

The lawsuit filed Wednesday cited examples of OpenAI's GPT-4 spitting out large portions of news articles from the Times, including a Pulitzer-Prize winning investigation into New York City's taxi industry that took 18 months to complete. It also cited outputs from Bing Chat — now called Copilot — that included verbatim excerpts from Times articles.

The Times did not list specific damages that it is seeking, but said the legal action "seeks to hold them responsible for the billions of dollars in statutory and actual damages that they owe" for copying and using its work. It is also asking the court to order the tech companies to destroy AI models or data sets that incorporate its work.

The News/Media Alliance, a trade group representing more than 2,200 news organizations, applauded Wednesday's action by the Times.

"Quality journalism and GenAI can complement each other if approached collaboratively," said Danielle Coffey, alliance president and CEO. "But using journalism without permission or payment is unlawful, and certainly not fair use."

In July, OpenAI and The Associated Press announced a deal for the artificial intelligence company to license AP's archive of news stories. This month, OpenAI also signed a similar partnership with Axel Springer, a media company in Berlin that owns Politico and Business Insider. Under the deal, users of OpenAI's ChatGPT will receive summaries of "selected global news content" from Axel Springer's media brands. The companies said the answers to queries will include attribution and links to the original articles.

The Times has compared its action to a copyright lawsuit more than two decades ago against Napster, when record companies sued the file-sharing service for unlawful use of their material. The record companies won and Napster was soon gone, but it has had a major impact on the industry. Industry-endorsed streaming now dominates the music business.

AP Technology Writer Matt O'Brien contributed to this story.

  • Wednesday, Dec. 27, 2023
Foliascope, Blackmagic Design get inventive for "The Inventor"
A scene from "The Inventor" (photo by Jean-Marie Hosatte)
FREMONT, Calif. -- 

The Inventor, a stop motion and animation feature film that delves into the life story of Leonardo da Vinci, was edited and graded in Blackmagic Design’s DaVinci Resolve Studio with other aspects of postproduction composited in Fusion Studio.

Co-written and directed by Oscar-nominated screenwriter Jim Capobianco, The Inventor has a cast which includes Stephen Fry, Daisy Ridley, Marion Cotillard, and Matt Berry.

For the film Capobianco turned to Foliascope, an independent animation studio in France. Foliascope CEO Ilan Urroz and his team embarked on an extensive research mission to faithfully recreate da Vinci’s legacy. Detailed archives and da Vinci’s own drawings were used to build sets, machines and accessories. Puppets, central to the film’s unique blend of stop motion and cartoon animation, were meticulously crafted and designed.

“Films of this scale and complexity represent a massive investment in both time and money, with some projects lasting upwards of 24 months, during which multiple stages of production and post are undertaken simultaneously, mixing both offline and online formats. And all that requires us to distribute the work amongst multiple collaborators,” explained Urroz who added, “We mix all sorts of techniques to tell our stories, and in DaVinci Resolve Studio, we have found an ally to help us do just that.”

DaVinci Resolve Studio was the ideal software for managing all aspects of stop motion editing within a single tool, eliminating the need for roundtripping between multiple applications. This allowed Foliascope to carry out editing, VFX, color grading, sound and export tasks in parallel throughout the production.

Furthermore, Foliascope expanded its workflow to encompass audio mixing and mastering with DaVinci Resolve Studio’s Fairlight tools. This included dialogue, foley, and the film’s original soundtrack, composed by Alex Mandel.
