AI technology comes under fire from critics in Senate hearing

Subcommittee highlights inherent problems preventing transparency, accountability and oversight.

As industries including healthcare adopt artificial intelligence, critics contend that the technology has inherent problems that could negatively affect the lives of Americans.

Rashida Richardson, director of policy research at New York University’s AI Now Institute, told a Senate subcommittee on Tuesday that the use of recommendation algorithms, predictive analytics and inferential systems is rapidly expanding and requires immediate attention and action by Congress.

“Though these technologies affect every American, they are primarily developed and deployed by a few powerful companies, and therefore shaped by these companies’ incentives, values and interests,” testified Richardson. “These companies have demonstrated limited insight into whether their products will harm consumers and even less experience in mitigating those harms. So while most technology companies promise that their products will lead to broad societal benefits, there is little evidence to support these claims. In fact, mounting evidence points to the contrary.”


Richardson pointed to IBM’s Watson Health cognitive computing capabilities, which were developed to help determine the best treatment options for patients. In particular, Watson is meant to help clinicians quickly sift through large volumes of data, providing them with insights on cancer-causing mutations.

“IBM’s Watson supercomputer was designed to improve patient outcomes but recently internal IBM documents showed it actually provided unsafe and erroneous cancer treatment recommendations,” according to Richardson. “This is just one of numerous examples that have come to light in the last year showing the difference between the marketing companies use to sell these technologies and the stark reality of how these technologies ultimately perform.”

She told lawmakers that potential harms to consumers from AI systems arise from risks that current laws and incentive structures fail to adequately address, including the use of “black box” technologies that prevent public transparency, accountability and oversight.

“Technologically, they are black boxes because most of the internal workings are hidden away inside the companies,” Richardson observed. “Legally, technology companies obstruct accountability efforts through claims of proprietary or trade secret legal protections, even though there is no evidence that legitimate inspection, auditing or oversight poses any competitive risks.”

However, Stephen Wolfram, founder and CEO of the computational technology firm Wolfram Research, contended that the “black box” nature of AI algorithms reflects a more fundamental challenge: non-explainability.

“People often assume that computers just run algorithms that someone sat down and wrote, but modern AI systems don’t work that way,” testified Wolfram. “Instead, lots of the programs they use are actually constructed automatically, usually by learning from some massive number of examples. And if you go look inside those programs, there’s usually embarrassingly little that we humans can understand in there. Here’s the real problem: it’s sort of a fact of basic science that if you insist on explainability, then you can’t get the full power of a computational system or AI.”
