Expectations were high when the White House released its Blueprint for an AI Bill of Rights on Tuesday. Developed by the White House Office of Science and Technology Policy (OSTP), the Blueprint is a non-binding document that outlines five principles meant to guide the design, use and deployment of automated systems, along with technical guidance for implementing those principles, including recommended actions for various federal agencies.
For many, high hopes for dramatic change led to disappointment, including criticism that the AI Bill of Rights is “toothless” against artificial intelligence (AI) harm caused by major tech companies and is just a “white paper.”
Unsurprisingly, there were some mismatched expectations about what the AI Bill of Rights would include, Alex Engler, a research associate at the Brookings Institution, told VentureBeat.
“You could argue that the OSTP has kind of set itself up with this big flashy announcement, not really communicating that they’re a science advisory office as well,” Engler said.
Efforts to contain AI risks
The Biden administration’s efforts to curb AI risks are certainly different from those currently being debated in the EU, he added.
“The EU is trying to set up rules that largely apply to every circumstance you can think of to use an algorithm for which there is a certain societal risk,” Engler said. “We’re seeing almost the opposite approach from the Biden administration, which is a very sectoral and even application-specific approach — so there’s a very clear contrast.”
Abhishek Gupta, founder and principal researcher of the Montreal AI Ethics Institute, pointed out that while there are shortcomings, they are mostly a function of a constantly evolving field where no one has all the answers yet.
“I think it does a very, very good job of moving the ball forward, in terms of what we have to do and how we have to do it,” Gupta said.
Gupta and Engler have outlined three key things they say the AI Bill of Rights actually does — and three things it doesn’t:
What the AI Bill of Rights does:
1. Highlight meaningful and thorough principles.
The Blueprint includes five principles: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback.
“I think the principles make sense on their own,” Engler said. “Not only are they well-chosen and grounded, but they also give some sort of intellectual foundation to the idea that there is systemic, algorithmic damage to civil rights.”
Engler said he believes a broad conceptualization of harm is valuable and thorough.
“I think you could argue it’s too thorough and they should have spent a little more time on other things, but it’s definitely good,” he said.
2. Offer an agency-driven, sector-focused approach.
Having principles is one thing, but Engler points out that the next obvious question is: what can government do about it?
“The obvious subtext for those of us paying attention is that federal agencies will lead the way in the practical application of current laws to algorithms,” he said. “This will be especially useful for many of [the] major systemic concerns around AI. For example, the Equal Employment Opportunity Commission is working on discrimination in hiring. And something I didn’t realize, which is very new, is that Health and Human Services wants to fight racial bias in healthcare, which is a real systemic problem.”
One of the benefits of this kind of sector-specific and application-specific approach is that if the agencies themselves choose which issues to tackle, as the White House encourages, they will be more motivated. “They’re going to pick the issues their stakeholders care about,” he said. “And [there can be] really meaningful and specific policy that considers the algorithms in this broader context.”
3. Recognize organizational elements.
Gupta said he was especially pleased with the Blueprint’s recognition of organizational elements when it comes to how AI systems are procured, designed, developed and deployed.
“I think we tend to overlook how critical the organizational context is — the structure, the incentives, and how people designing and developing these systems interact with them,” he said.
The AI Bill of Rights, he explained, is greatly strengthened by touching on this important element, which is typically not included or recognized.
“It harmonizes engineering design interventions as well as organizational structure and governance as a common goal we want to achieve, rather than two separate streams to address responsible AI issues,” Gupta added.
What the AI Bill of Rights doesn’t do:
1. Create a binding, legal document.
The phrase “Bill of Rights,” unsurprisingly, makes most people think of the binding, legal nature of the first 10 amendments to the U.S. Constitution.
“It’s hard to think of a more spectacular legal term than AI Bill of Rights,” Engler said. “So I can imagine how disappointing it is when what you actually get is step-by-step regulatory adjustment from existing agencies.”
That said, he explained: “In many ways, this is the best and first thing we want: we want specific sector experts who understand the policies they need to be in charge of, be it housing or hiring or workplace safety or healthcare, and we want them to enforce good rules in that area with an understanding of algorithms.”
He continued: “I think that’s the conscious choice we’re seeing, rather than trying to write central rules that somehow regulate all these different things, which is one of the reasons EU law is so confusing and so hard to move forward.”
2. Cover every major sector.
The AI Bill of Rights, Engler said, does reveal the limitations of a voluntary, agency-led approach, as several sectors were notably absent, including access to education, worker surveillance and, most disturbingly, almost everything related to law enforcement.
“One has to question whether federal law enforcement has taken steps to address inappropriate use of algorithmic tools, such as undocumented use of facial recognition, or to really affirmatively say that there are limits to what computer surveillance and computer vision can do, or that weapons detection may not be very reliable,” Engler said. “It’s not clear if they’re going to voluntarily curtail their own use of these systems, and that’s a really significant drawback.”
3. Take the next step to test in the real world.
Gupta said he would like to see organizations and companies try out the AI Bill of Rights recommendations in real pilots and document the lessons learned.
“There seems to be a lack of applied case studies, not just for this specific set of guidelines, which was just released, but for other sets of guidelines and suggested practices and patterns,” he said. “Unless we actually test them in the real world with case studies and pilots, unless we try these things out in the field, we don’t know to what extent the suggested practices, patterns and recommendations work or don’t work.”
Companies must pay attention
While the AI Bill of Rights is non-binding and is aimed primarily at federal agencies, businesses still need to pay attention to it, Engler said.
“If you’re already in a regulated space, and there are already rules on the books that affect your financial system or your real estate appraisal process or your hiring process, and you’ve started doing that with an algorithmic system or software, then there’s a pretty good chance that one of your regulators is going to write a guideline that says it applies to you,” he said.
And while unregulated industries may not have anything to worry about in the short term, Engler added that any industry that touches human services and uses complicated black-box algorithms could come under scrutiny over time.
“I don’t think that will happen overnight, and it would have to come through legislation,” he said. “But there are some requirements in the American Data Privacy and Protection Act, which may be passed this year, that do have some algorithmic protections, so I’d be watching that as well.”
Overall, Gupta said he believes the AI Bill of Rights has continued to increase the importance of responsible AI for organizations.
“Concretely, what it’s doing for companies right now is giving them direction in terms of what they should invest in,” he said, citing a study by MIT Sloan Management Review and Boston Consulting Group that found that companies that prioritize scaling their responsible AI (RAI) program over scaling their AI capabilities experience nearly 30% fewer AI failures.
“I think [the AI Bill of Rights] sets the right direction for what we need in this area of responsible AI going forward,” he said.