To AI or Not to AI? That Is the Ethical Question

Posted: July 14, 2022

When was the last time you used artificial intelligence (AI)? Chances are it was more recently than you think! Many of us use AI technology on a daily basis, from unlocking our phones with our faces to getting new TV show recommendations from Netflix to asking Alexa to add toilet paper to the shopping list. While these applications may seem harmless and even helpful, AI, like many scientific and technological innovations, raises ethical questions. As July 16 is Artificial Intelligence Appreciation Day, let's explore AI and ethics.

AI is a broad term for any machine that can solve problems or learn skills associated with human intelligence. The judicial, healthcare, finance and travel industries all incorporate AI technology in some way, and the market for this technology is projected to keep growing over the next couple of years.


Is this a good thing? The answer depends on who you ask.

Each advancement in technology widens our capabilities, often making activities and interactions possible that weren't just a year or two prior. At the same time, the monitoring, regulations and laws surrounding an innovation move slowly, sometimes leaving a gap when new technology is introduced. AI has been around since the mid-20th century, but only in recent decades have courts begun ruling on how AI technology must adhere to the law.

Just as the laws and ethics that govern human behavior are continually debated, AI will have its own set of standards that need continual evaluation. Like human ethics, which grapple with defining right and wrong, AI ethics will likely never have universally agreed-upon standards. Our life experiences and cultures alter our feelings about rights, benefits to society and fairness, leading to differences in law, politics and society. We all face new challenges as we apply our own ethical standards to a technological landscape that now includes AI, and we are already learning from the challenges AI has posed.

Let’s dive into some examples where both AI and ethics come into play.

Self-driving cars are designed to eliminate human error and are projected to decrease traffic, but they aren't perfect. In 2018, a pedestrian was killed in a self-driving car accident. The court ruled that the woman in the driver's seat failed to avoid the pedestrian and was at fault, but the National Transportation Safety Board found that Uber's car had failed to identify the pedestrian as intended. Whether you feel humans bear the responsibility for the car's actions or the technology itself should incur some responsibility, this incident pushed back the release of self-driving cars everywhere as the ethics of this AI-driven technology are debated.

AI tailors content on YouTube, TikTok and other social media platforms based on the user's search history and phone use. It recommends content you might like, keeping you entertained on the app for extended periods, while hiding material and viewpoints it determines would make you more likely to leave. Because of this algorithmic feature, social media has been blamed for contributing to political polarization around the world. While the effect may be unintended, AI-driven recommendations can contribute to the spread of misinformation and disinformation. In Force v. Facebook, a court ruled that social media sites are not responsible for who sees the material published on their platforms based on what AI predicts will attract people, even if it leads someone to commit a terrorist attack.
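To make the idea concrete, here is a deliberately tiny, hypothetical sketch of how an engagement-driven recommender might rank videos. The watch history, candidate videos and scoring rule are all invented for illustration; this is not how any particular platform's system actually works.

```python
# A toy, hypothetical sketch of engagement-driven ranking -- not any platform's
# actual system. Each candidate video is scored from the user's history: topics
# the user already watches a lot score higher, so the feed keeps narrowing
# toward whatever already holds their attention.

from collections import Counter

# Hypothetical watch history: topics of videos the user recently watched.
watch_history = ["politics", "politics", "cooking", "politics", "gaming"]

# Hypothetical candidate videos with a topic and an average watch time (minutes).
candidates = [
    {"title": "Campaign rally highlights", "topic": "politics", "avg_watch_min": 9.0},
    {"title": "Opposing viewpoint explainer", "topic": "news", "avg_watch_min": 4.0},
    {"title": "15-minute pasta", "topic": "cooking", "avg_watch_min": 6.0},
    {"title": "Speedrun world record", "topic": "gaming", "avg_watch_min": 7.0},
]

def engagement_score(video, history_counts, total):
    """Score = how often the user already watches this topic x expected watch time."""
    topic_affinity = history_counts[video["topic"]] / total  # 0 if never watched
    return topic_affinity * video["avg_watch_min"]

history_counts = Counter(watch_history)
ranked = sorted(
    candidates,
    key=lambda v: engagement_score(v, history_counts, len(watch_history)),
    reverse=True,
)

for video in ranked:
    score = engagement_score(video, history_counts, len(watch_history))
    print(f"{score:5.2f}  {video['title']}")

# The unfamiliar viewpoint ("news") scores 0 and sinks to the bottom of the feed,
# which is one way an engagement-only objective can reinforce polarization.
```

In this toy version, nothing about the scoring rule is malicious; it simply optimizes for time on the app, and the narrowing of viewpoints falls out as a side effect.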


Courtrooms around the country use AI software to assess how likely someone convicted of a crime is to reoffend. The software makes that prediction based on criminal profiles from the past, and judges use the results to inform prison sentence lengths and bond amounts. Journalists discovered that Black defendants were more likely to be incorrectly labeled as high risk to reoffend, while white defendants were more likely to be incorrectly labeled as low risk. The AI had learned and applied racial stereotypes from the historical data.
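Here is a minimal, hypothetical sketch of how that can happen: if the historical labels a model learns from are skewed, for instance by uneven policing, the model faithfully reproduces the skew. The groups, numbers and "model" below are made up for illustration and have no relation to any real risk-assessment product.

```python
# A toy, hypothetical sketch of how a risk tool can inherit bias from its
# training data -- not the actual software used in courtrooms. Suppose the
# historical labels reflect past arrests rather than actual reoffending, and
# group A was policed more heavily, so its labels are inflated.

# Hypothetical historical records: (group, labeled_as_reoffender)
historical_records = (
    [("A", True)] * 60 + [("A", False)] * 40 +   # group A: 60% labeled reoffenders
    [("B", True)] * 30 + [("B", False)] * 70     # group B: 30% labeled reoffenders
)

def train(records):
    """'Train' the simplest possible model: the labeled reoffense rate per group."""
    rates = {}
    for group in {g for g, _ in records}:
        labels = [label for g, label in records if g == group]
        rates[group] = sum(labels) / len(labels)
    return rates

def predict_high_risk(model, group, threshold=0.5):
    """Flag anyone from a group whose historical label rate exceeds the threshold."""
    return model[group] >= threshold

model = train(historical_records)
for group in sorted(model):
    verdict = "high risk" if predict_high_risk(model, group) else "low risk"
    print(f"group {group}: learned risk score = {model[group]:.2f} -> flagged {verdict}")

# Even if the two groups reoffend at the same true rate, every member of group A
# gets flagged as high risk, because the skewed labels are all the model ever saw.
```

The point of the sketch is that the model is not "malfunctioning": it is doing exactly what it was trained to do, which is reproduce the patterns in its historical data.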

Another example of flawed data leading to problematic applications is facial recognition software. AI makes it easy to unlock your phone, but the software works better for some people than others. Computer scientists discovered that it had trouble identifying faces with dark skin tones. Having to punch in a code to access your phone might seem minor, but as AI is adopted more widely in hiring, banking, health care and other sectors, inconsistent or biased performance adds to the ethical concerns.
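One reason these gaps went unnoticed is that a system can look accurate "on average" while its errors fall mostly on one group. The short sketch below uses made-up numbers, not results from any real product, to show why breaking accuracy down by group matters.

```python
# A toy, hypothetical audit of face-unlock accuracy by skin tone -- illustrative
# numbers only, not measurements from any real system. Overall accuracy looks
# fine, but the per-group breakdown shows who the errors actually fall on.

# Hypothetical test results: (skin_tone_group, unlocked_correctly)
test_results = (
    [("lighter", True)] * 97 + [("lighter", False)] * 3 +
    [("darker", True)] * 80 + [("darker", False)] * 20
)

def accuracy(results):
    """Fraction of test attempts where the phone unlocked correctly."""
    return sum(ok for _, ok in results) / len(results)

print(f"overall accuracy: {accuracy(test_results):.0%}")
for group in ("lighter", "darker"):
    subset = [(g, ok) for g, ok in test_results if g == group]
    print(f"{group:>8} skin tones: {accuracy(subset):.0%}")

# The overall number (~89%) hides the gap: one group fails to unlock far more
# often, the kind of disparity researchers have reported in commercial systems.
```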

The importance of examining AI ethics is being more widely recognized. Dr. Timnit Gebru was recently named one of TIME's 100 most influential people of the year for her work in AI ethics; her research examines the complex societal implications of AI to find techniques that are both innovative and responsible. Organizations that make or use AI, such as Google and the U.S. Department of Defense, have set principles for future work that minimize bias, increase safety and provide socially beneficial outcomes. Non-governmental organizations, research entities and advocacy groups have entered the AI ethics conversation as well, some serving consumers in a watchdog capacity and others offering evaluation tools and best-practice resources for companies that use AI.

The studies and recommendations from all of these groups are important pieces of the AI ethics puzzle. While there may be ethical principles and broad ideals related to AI that we strive to attain, whether we meet them will always be up for debate.

Learn more from a panel of experts who discussed AI during an October 2020 Lunch Break Science.