By Frank Bajak, Technology Writer
BOSTON (AP) -- No technology since nuclear fission will shape our collective future quite like artificial intelligence, so it's paramount that AI systems are safe, secure, trustworthy and socially responsible.
But unlike the atom bomb, this paradigm shift has been driven almost entirely by the private tech sector, which has been resistant to regulation, to say the least. Billions are at stake, making the Biden administration's task of setting standards for AI safety a major challenge.
To define the parameters, it has tapped a small federal agency, the National Institute of Standards and Technology. NIST's tools and measures define products and services from atomic clocks to election security tech and nanomaterials.
At the helm of the agency's AI efforts is Elham Tabassi, NIST's chief AI advisor. She shepherded the AI Risk Management Framework published 12 months ago that laid groundwork for Biden's Oct. 30 AI executive order. It catalogued such risks as bias against non-whites and threats to privacy.
Born in Iran, Tabassi came to the U.S. in 1994 for her master's in electrical engineering and joined NIST not long after. She is principal architect of a standard the FBI uses to measure fingerprint image quality.
This interview with Tabassi has been edited for length and clarity.
Q: Emergent AI technologies have capabilities their creators don't even understand. The technology is so new that there isn't even an agreed-upon vocabulary. You've stressed the importance of creating a lexicon on AI. Why?
A: Most of my work has been in computer vision and machine learning. There, too, we needed a shared lexicon to avoid quickly devolving into disagreement. A single term can mean different things to different people. Talking past each other is particularly common in interdisciplinary fields such as AI.
Q: You've said that for your work to succeed you need input not just from computer scientists and engineers but also from attorneys, psychologists and philosophers.
A: AI systems are inherently socio-technical, influenced by environments and conditions of use. They must be tested in real-world conditions to understand risks and impacts. So we need cognitive scientists, social scientists and, yes, philosophers.
Q: This task is a tall order for a small agency, under the Commerce Department, that the Washington Post called "notoriously underfunded and understaffed." How many people at NIST are working on this?
A: First, I'd like to say that we at NIST have a spectacular history of engaging with broad communities. In putting together the AI risk framework we heard from more than 240 distinct organizations and got something like 660 sets of public comments. In quality of output and impact, we don't seem small. We have more than a dozen people on the team and are expanding.
Q: Will NIST's budget grow from the current $1.6 billion in view of the AI mission?
A: Congress writes the checks for us and we have been grateful for its support.
Q: The executive order gives you until July to create a toolset for guaranteeing AI safety and trustworthiness. I understand you called that "an almost impossible deadline" at a conference last month.
A: Yes, but I quickly added that this is not the first time we have faced this type of challenge, that we have a brilliant team, are committed and excited. As for the deadline, it's not like we are starting from scratch. In June we put together a public working group focused on four different sets of guidelines including for authenticating synthetic content.
Q: Members of the House Committee on Science and Technology said in a letter last month that they learned NIST intends to make grants or awards through a new AI safety institute — suggesting a lack of transparency.
A: Indeed, we are exploring options for a competitive process to support cooperative research opportunities. Our scientific independence is really important to us. While we are running a massive engagement process, we are the ultimate authors of whatever we produce. We never delegate to somebody else.
Q: A consortium created to assist the AI safety institute is apt to spark controversy due to industry involvement. What do consortium members have to agree to?
A: We posted a template for that agreement on our website at the end of December. Openness and transparency are a hallmark for us. The template is out there.
Q: The AI risk framework was voluntary but the executive order mandates some obligations for developers. That includes submitting large language models for government red-teaming (testing for risks and vulnerabilities) once they reach a certain threshold in size and computing power. Will NIST be in charge of determining which models get red-teamed?
A: Our job is to advance the measurement science and standards needed for this work. That will include some evaluations. This is something we have done for face recognition algorithms. As for tasking (the red-teaming), NIST is not going to do any of those things. Our job is to help industry develop technically sound, scientifically valid standards. We are a non-regulatory agency, neutral and objective.
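For illustration only, not NIST tooling or guidance: the executive order frames its reporting trigger around training compute, commonly cited as 10^26 operations. A rough sketch of how a developer might check a model against such a threshold could look like the following, with every function name and estimate here hypothetical.

```python
# Illustrative sketch only, not NIST tooling: a toy check of whether a model's
# estimated training compute crosses the executive order's reporting threshold.
# The 10^26-operation figure is the one cited in the order; all names are hypothetical.

REPORTING_THRESHOLD_OPS = 1e26


def estimate_training_ops(params: float, tokens: float) -> float:
    """Rule-of-thumb estimate: roughly 6 operations per parameter per training token."""
    return 6.0 * params * tokens


def requires_reporting(params: float, tokens: float) -> bool:
    """True if the estimated training compute meets or exceeds the threshold."""
    return estimate_training_ops(params, tokens) >= REPORTING_THRESHOLD_OPS


if __name__ == "__main__":
    # A hypothetical 1-trillion-parameter model trained on 15 trillion tokens:
    # 6 * 1e12 * 15e12 = 9e25 operations, just under the 1e26 trigger.
    print(requires_reporting(params=1e12, tokens=15e12))  # False
```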
Q: How AIs are trained and the guardrails placed on them can vary widely. And sometimes features like cybersecurity have been an afterthought. How do we guarantee risk is accurately assessed and identified — especially when we may not know what publicly released models have been trained on?
A: In the AI risk management framework we came up with a taxonomy of sorts for trustworthiness, stressing the importance of addressing it during design, development and deployment — including regular monitoring and evaluations during AI systems' lifecycles. Everyone has learned we can't afford to try to fix AI systems after they are out in use. It has to be done as early as possible.
And yes, much depends on the use case. Take facial recognition. It's one thing if I'm using it to unlock my phone. A totally different set of security, privacy and accuracy requirements come into play when, say, law enforcement uses it to try to solve a crime. Tradeoffs between convenience and security, bias and privacy — all depend on context of use.
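As a purely illustrative sketch, not drawn from NIST's framework, one way that context dependence shows up in practice is that the same match score can be acceptable in one setting and unacceptable in another; the thresholds and names below are invented for the example.

```python
# Hypothetical illustration of context-dependent acceptance thresholds for a
# face-match score between 0 and 1; the numbers are invented for this example.
CONTEXT_THRESHOLDS = {
    "phone_unlock": 0.80,      # convenience-oriented: a false reject is a minor annoyance
    "law_enforcement": 0.99,   # high-stakes: a false match carries serious consequences
}


def match_accepted(score: float, context: str) -> bool:
    """Accept a face-match score only if it clears the bar set for the given context."""
    return score >= CONTEXT_THRESHOLDS[context]


print(match_accepted(0.92, "phone_unlock"))     # True
print(match_accepted(0.92, "law_enforcement"))  # False
```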