We’ve all heard troubling stories involving emerging tools powered by artificial intelligence (AI), in which algorithms yield unintended, biased, or erroneous results. Here are a few examples:
- A monitoring tool for sepsis that performs less well for patients of certain races
- A selection app that prefers certain backgrounds, education, or experience, with no showing of job relatedness or business necessity
- Facial recognition software that struggles with different skin tones
- An employment screening tool that doesn’t account for accents
- A clinical decision support tool for evaluating kidney disease that gives doctors inconsistent advice based on the patient’s race
- Triage software that prioritizes one race over others
The list is long and growing, and companies that use these tools do so at increasing legal, operational, and public relations risk.
Left unchecked, AI-powered tools pose real but hidden risks to our friends, neighbors, and countless others, often limiting economic opportunities or, in the extreme, causing physical harm. For the organizations that deploy them, these tools also create potentially expensive and disruptive legal liability, operational shortcomings that can limit success in the marketplace, and reputational damage in the court of public opinion. Today, the impact of algorithms on organizations and the populations they affect is poorly understood and rarely measured.
Topics
This virtual briefing focuses on the legal risks, methods for finding those risks, and solutions in the form of tailored compliance programs that address AI risks specifically.
Key takeaways for use cases in labor and employment, health care and life sciences, and consumer products:
- Identifying the key laws and regulations implicated in these domains
- Techniques for finding bias and discrimination in algorithms, including formation of multidisciplinary teams
- Developing a holistic approach to establishing a compliance program specific to the creation and use of AI tools in these domains
- Navigating privacy laws while seeking solutions to bias and discrimination
- Predicting the future direction of regulation in this space
Contact
If you have any questions, please reach out to Dionna Rinaldi or Amy Oldiges. Members of the media, please contact Zack Zimmerman.