By Frank Bajak, Technology Writer
BOSTON (AP) -- No technology since nuclear fission will shape our collective future quite like artificial intelligence, so it's paramount AI systems are safe, secure, trustworthy and socially responsible.
But unlike the atom bomb, this paradigm shift has been almost completely driven by the private tech sector, which has been resistant to regulation, to say the least. Billions are at stake, making the Biden administration's task of setting standards for AI safety a major challenge.
To define the parameters, it has tapped a small federal agency, the National Institute of Standards and Technology. NIST's tools and measures define products and services from atomic clocks to election security tech and nanomaterials.
At the helm of the agency's AI efforts is Elham Tabassi, NIST's chief AI advisor. She shepherded the AI Risk Management Framework published 12 months ago that laid groundwork for Biden's Oct. 30 AI executive order. It catalogued such risks as bias against non-whites and threats to privacy.
Iranian-born, Tabassi came to the U.S. in 1994 for her master's in electrical engineering and joined NIST not long after. She is principal architect of a standard the FBI uses to measure fingerprint image quality.
This interview with Tabassi has been edited for length and clarity.
Q: Emergent AI technologies have capabilities their creators don't even understand. There isn't even an agreed upon vocabulary, the technology is so new. You've stressed the importance of creating a lexicon on AI. Why?
A: Most of my work has been in computer vision and machine learning. There, too, we needed a shared lexicon to avoid quickly devolving into disagreement. A single term can mean different things to different people. Talking past each other is particularly common in interdisciplinary fields such as AI.
Q: You've said that for your work to succeed you need input not just from computer scientists and engineers but also from attorneys, psychologists, philosophers.
A: AI systems are inherently socio-technical, influenced by environments and conditions of use. They must be tested in real-world conditions to understand risks and impacts. So we need cognitive scientists, social scientists and, yes, philosophers.
Q: This task is a tall order for a small agency, under the Commerce Department, that the Washington Post called "notoriously underfunded and understaffed." How many people at NIST are working on this?
A: First, I'd like to say that we at NIST have a spectacular history of engaging with broad communities. In putting together the AI risk framework we heard from more than 240 distinct organizations and got something like 660 sets of public comments. In quality of output and impact, we don't seem small. We have more than a dozen people on the team and are expanding.
Q: Will NIST's budget grow from the current $1.6 billion in view of the AI mission?
A: Congress writes the checks for us and we have been grateful for its support.
Q: The executive order gives you until July to create a toolset for guaranteeing AI safety and trustworthiness. I understand you called that "an almost impossible deadline" at a conference last month.
A: Yes, but I quickly added that this is not the first time we have faced this type of challenge, that we have a brilliant team, are committed and excited. As for the deadline, it's not like we are starting from scratch. In June we put together a public working group focused on four different sets of guidelines including for authenticating synthetic content.
Q: Members of the House Committee on Science and Technology said in a letter last month that they learned NIST intends to make grants or awards through a new AI safety institute — suggesting a lack of transparency.
A: Indeed, we are exploring options for a competitive process to support cooperative research opportunities. Our scientific independence is really important to us. While we are running a massive engagement process, we are the ultimate authors of whatever we produce. We never delegate to somebody else.
Q: A consortium created to assist the AI safety institute is apt to spark controversy due to industry involvement. What do consortium members have to agree to?
A: We posted a template for that agreement on our website at the end of December. Openness and transparency are a hallmark for us. The template is out there.
Q: The AI risk framework was voluntary but the executive order mandates some obligations for developers. That includes submitting large-language models for government red-teaming (testing for risks and vulnerabilities) once they reach a certain threshold in size and computing power. Will NIST be in charge of determining which models get red-teamed?
A: Our job is to advance the measurement science and standards needed for this work. That will include some evaluations. This is something we have done for face recognition algorithms. As for tasking (the red-teaming), NIST is not going to do any of those things. Our job is to help industry develop technically sound, scientifically valid standards. We are a non-regulatory agency, neutral and objective.
Q: How AIs are trained and the guardrails placed on them can vary widely. And sometimes features like cybersecurity have been an afterthought. How do we guarantee risk is accurately assessed and identified — especially when we may not know what publicly released models have been trained on?
A: In the AI risk management framework we came up with a taxonomy of sorts for trustworthiness, stressing the importance of addressing it during design, development and deployment — including regular monitoring and evaluations during AI systems' lifecycles. Everyone has learned we can't afford to try to fix AI systems after they are out in use. It has to be done as early as possible.
And yes, much depends on the use case. Take facial recognition. It's one thing if I'm using it to unlock my phone. A totally different set of security, privacy and accuracy requirements come into play when, say, law enforcement uses it to try to solve a crime. Tradeoffs between convenience and security, bias and privacy — all depend on context of use.