The Extreme Costs of Faulty AI and the Vital Role Humans Play

At the Institute for Experiential AI, a recently launched independent initiative from Northeastern University, responsible and ethical AI fuels an application-driven, human-centric approach.

Across the world’s nine largest economies, the artificial intelligence market was valued at more than $93 billion in 2021, with revenue forecast to reach nearly $1 trillion by 2028. It’s no wonder AI remains top of mind for businesses as a means toward growth and evolution.

Nowadays, more and more companies routinely tap into AI as a solution or catalyst for next-generation offerings with huge real-world impacts. With such ambitious undertakings, mistakes go beyond mere inconvenience. AI failures can cause real-life physical, psychological, and emotional harm, and the financial risk to a company’s bottom line can put a significant dent in the reward.

Three examples where faulty AI had lasting repercussions

  1. Uber’s self-driving vehicle kills a woman in Arizona
Back in March of 2018, 49-year-old Elaine Herzberg was struck and killed by an Uber self-driving car while walking her bike across the street at night in Tempe, Arizona. Traveling at 43 mph, the vehicle’s radar and lidar sensors failed to classify her as a pedestrian until 1.3 seconds before impact. The car’s emergency braking mechanism had also been disabled to minimize jerky movements during the ride. The backup driver, who was there as an added safety measure, wasn’t paying attention and failed to react.
  2. Thousands of Dutch families falsely accused of tax fraud
In an attempt to crack down on welfare fraud in the Netherlands, the Dutch government used an algorithm to find abusers of a tax credit that reimbursed a portion of monthly childcare expenses to eligible families. With minimal human oversight, the algorithm flagged residents it deemed “risky” as potential fraudsters. As a result, 26,000 families were falsely targeted and ordered to repay money they didn’t owe, and many were also barred from other government services such as housing and healthcare allowances. After a lengthy investigation, the Dutch Prime Minister eventually resigned, but not before countless families suffered extreme financial and emotional hardship.
  3. Facial recognition software puts innocent men in prison
In recent years, there have been too many occasions where innocent Black men have spent time in prison due to faulty facial recognition software used by police to solve crimes. In one New Jersey case, Nijeer Parks spent eleven days in jail and paid $5,000 to defend himself in court before the case against him was dismissed. Another man, Michigan resident Robert Williams, was wrongfully arrested at his home in front of his children. Williams spent 30 hours in jail before being released on a $1,000 personal bond and was forced to take time off from work to defend himself against a crime for which he had an alibi.

To err is human

Each of the examples above employed some form of AI to solve a problem but failed to deliver a successful solution. Technology alone, however, can’t be blamed: in every case, humans played a vital role in outcomes with dire, lasting consequences.

Camera footage and phone records showed that the human driver in Uber’s self-driving car was streaming video on her phone moments before the crash. This incident ultimately cost Uber months of delayed testing and led the company to sell its Advanced Technologies Group.

In the case of the Dutch benefits scandal, the system was designed to think that the families most in need of financial assistance were also the biggest threat to the program. There was barely any human oversight or follow-up once a family was flagged for fraud. The Ministry of Finance has promised to pay €30,000 to wrongfully accused families, but that barely scratches the surface of what they lost.

And in the cases of false imprisonment, the police departments failed to properly investigate the accused men before placing them under arrest. Instead, officers relied too heavily on software whose training data lacked the diversity needed to correctly distinguish between two Black men. Both Nijeer Parks and Robert Williams have filed civil suits against the departments that wrongfully arrested them. The final cost to these police departments has yet to be determined, and no payout will offset the emotional toll on the men.

This pattern shows that the human element, which is integral to AI technology reaching its full potential, needs immediate attention. Key ethical questions are not being asked, and the conscious and unconscious intent of the humans designing these systems requires serious consideration.

At the Institute for Experiential AI, a recently launched independent initiative from Northeastern University, responsible and ethical AI fueled by an application-driven, human-centric approach is top of mind. When asked how algorithms could produce results so racially and economically biased, the Institute’s Director of Research, Dr. Ricardo Baeza-Yates, explained:

“Bias is a mirror of the designers of the intelligent system, not the system itself. Mainly, it comes from the data fed into the system. But it also comes from the objectives of the learning algorithm and the interaction feedback loop with its users. At EAI, we are going beyond the data to train and educate designers on how to diagnose and treat these types of system flaws.”

The fusion of human and machine intelligence

In keeping with Northeastern University’s “learn by doing” approach, the Institute for Experiential AI plans to solve today’s AI problems through practical, hands-on education and implementation.

“At the institute, we believe that the most interesting fundamental questions can be found and addressed in making AI technology work for specific problems,” said EAI’s Inaugural Executive Director, Dr. Usama Fayyad. “The same is true when determining if human intervention is needed to minimize the risks of negative impact when AI algorithms go awry. Thus, specific problems lead to solutions to big and pragmatic research challenges. It’s an approach that is rooted in human-centric AI that enhances and extends human intelligence rather than one that attempts to replace it.”

To learn more about the cutting-edge work of the Institute and how EAI’s solutions can help your business, visit The Institute for Experiential AI. You can also follow us on LinkedIn, Twitter, Medium, and YouTube to get notified about our latest content.