Artificial intelligence (AI) may still feel a bit futuristic to many, but the average consumer would be surprised where AI can be found. It is no longer a science fiction concept limited to Hollywood feature films, or top-secret technology found only in the computer science labs of the Googles and Metas of the world – quite the contrary. Today, AI is not only behind many of our online shopping and social media recommendations, customer service inquiries, and loan approvals, but it is also actively making music, winning art contests, and beating people at games that have existed for thousands of years.
Because of this growing awareness gap around AI’s extensive capabilities, a critical first step for any organization using or delivering AI is to form an AI ethics committee. This committee would be charged with two major initiatives: engagement and education.
The ethics committee would not only prevent malpractice and unethical applications of AI in its use and implementation. It would also work closely with regulators to set realistic parameters and formulate rules that proactively protect individuals from potential pitfalls and biases. Further, it would educate consumers and allow them to view AI through a neutral lens, supported by critical thinking. Users need to understand that AI can change the way we live and work, but can also perpetuate bias and discriminatory practices that have plagued humanity for centuries.
The plea for an AI ethics committee
Leading institutions working with AI are probably most aware of its potential both to positively change the world and to do harm. Some may be more experienced in the space than others, but internal oversight is important for organizations of all sizes and with leadership of varying experience. For example, the Google engineer who convinced himself that a natural language processing (NLP) model was actually conscious AI (it wasn’t) is a clear example of why training and internal ethical parameters should take precedence. Getting AI development started the right way is paramount to its (and our) future success.
For example, Microsoft is constantly innovating with AI while putting ethical considerations at the forefront. The software giant recently announced the ability to use AI to summarize Teams meetings, which can mean taking fewer notes and thinking more strategically. But despite this win, not every AI innovation from the company has been a success. Over the summer, Microsoft scrapped its AI facial analysis tools because of the risk of bias.
Although the development was not perfect every time, it shows how important it is to have ethical guidelines to determine the level of risk. In the case of Microsoft’s AI facial analysis, those guidelines determined that the risk outweighed the reward, protecting us all from something that could have had potentially damaging consequences – like the difference between receiving an urgently needed monthly support check and facing an unjustified denial.
Choose proactive over passive AI
Internal AI ethics committees serve as checks and balances for the development and advancement of new technologies. They also enable an organization to fully inform itself and formulate consistent opinions on how regulators can protect all citizens from harmful AI. While the White House proposal for an AI Bill of Rights shows that active regulation is just around the corner, industry experts still need to have informed insights on what is best for citizens and organizations regarding safe AI.
Once an organization has committed to establishing an AI ethics committee, it is important to adopt three proactive, rather than passive, approaches:
1. Build with intention
The first step is to sit down with the committee and finalize together what the end goal is. Be diligent in researching. Talk to technical leaders, communicators, and anyone across the organization who has something to add about the direction of the committee – diversity of input is critical. It can be easy to lose sight of the committee’s scope and primary function if goals and objectives are not established early on, and the final product could diverge from the original intent. Find solutions, build a timeline and stick to it.
2. Don’t boil the ocean
Like the vast blue seas that surround the world, AI is a complex field that stretches far and wide, with many undiscovered trenches. When starting your committee, don’t take on too broad a scope. Be focused and intentional in your AI plans. Know what your use of this technology is trying to solve or improve.
3. Be open to different perspectives
A background in deep tech is helpful, but a well-assembled committee includes diverse perspectives and stakeholders, making it possible to surface valuable opinions on potential AI ethics threats. Include the legal team, creatives, media and engineers. That way, the company and its customers are represented in all areas where ethical dilemmas can arise. Create a company-wide “call to action” or prepare a questionnaire to define goals – remember, the goal here is to broaden your dialogue.
Education and involvement save the day
AI ethics committees facilitate two aspects of success for an organization using AI: education and engagement. By educating everyone in-house, from engineers to Todd and Mary in accounting, on the pitfalls of AI, organizations will be better equipped to educate regulators, consumers and others in the industry, and to foster a society that cares about and becomes informed on artificial intelligence.
CF Su is VP of machine learning at Hyperscience.