Ethical Opportunities for AI: Protecting Humans from Ourselves

This blog post was originally published by Bitfury. It is reprinted here with the permission of Bitfury.

Aside from the immature design of many AI applications, another ethical pitfall presents itself in the way we are using these applications. We are designing AI for tasks that could (and should) be done by humans and neglecting to use AI where it is urgently needed — namely, in repairing our online ecosystem.

Re-designing Our Echo Chambers

It is human nature to have opinions. Many of us pride ourselves on the fact-based nature of those opinions, whether they are grounded in news stories, personal experience, or a book we read recently. However, even the strongest opinions are subject to bias, and it is becoming easier than ever for social media to introduce that bias.

Social media algorithms (the ones that determine which ads you see and which posts you are shown) use sentiment analysis to model our personalities and then mirror them back to us. The result is that you see the ads, news stories and posts that resonate with you most strongly and reinforce your personal views, a phenomenon known as “the algorithm,” the “filter bubble” and, quite often, the “echo chamber.”
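
To make the mechanism concrete, here is a minimal sketch of similarity-based feed ranking, the core idea behind most recommender systems. The vectors, names and data are illustrative assumptions, not any platform's actual code:

```python
# A minimal sketch of similarity-based feed ranking, assuming each user
# and item is already embedded as a preference vector. All names, vectors
# and data here are illustrative assumptions, not any platform's real code.
import numpy as np

def rank_feed(user_profile: np.ndarray, items: np.ndarray) -> np.ndarray:
    """Return item indices ordered by cosine similarity to the user's
    inferred preferences: the more an item resonates, the higher it ranks."""
    sims = items @ user_profile / (
        np.linalg.norm(items, axis=1) * np.linalg.norm(user_profile)
    )
    return np.argsort(-sims)  # most agreeable content first

# Toy usage: a 2-D "taste space" with three candidate posts.
user = np.array([1.0, 0.2])
posts = np.array([[0.9, 0.1],     # closely matches the user's views
                  [0.0, 1.0],     # unrelated viewpoint
                  [-1.0, -0.2]])  # opposing viewpoint
print(rank_feed(user, posts))     # -> [0 1 2]: the echo chamber in miniature
```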

This is a critical issue because a key aspect of formulating and re-evaluating your opinions is having access to different information and points of view. That is how you learn and update your opinions (for example: learning that black swans do indeed exist by seeing a photo of them in their Australian habitat). But in the social media echo chamber, this is no longer happening.

Instead of using artificial intelligence to find and serve us the content and advertisements that resonate with us most, social media companies should adopt AI to fight echo-chamber thinking. The algorithm has already learned our preferences (and biases); it would be just as easy to serve us authentic, fact-based content that challenges them or offers a different point of view. I would like to see a social media platform that shows me news stories contradicting some of my strongly held opinions or misconceptions, or a tool that automatically checks any link I’d like to share for factual accuracy or biased content. And by limiting humans to overseeing this process (instead of executing it), we lessen the odds that an equally biased human moderator could exercise outsized power in this “anti-echo chamber.”
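
An “anti-echo-chamber” feed could flip the same machinery around. The sketch below, again a toy assumption rather than any real platform's logic, reserves a share of feed slots for the least-similar (view-challenging) items:

```python
# A sketch of the inverse objective: interleave the least-similar items
# into the feed instead of burying them. The 30% quota and all names are
# illustrative assumptions.
import numpy as np

def diversified_feed(user_profile: np.ndarray, items: np.ndarray,
                     challenge_share: float = 0.3) -> list:
    """Rank by similarity, then interleave challenging items so that
    dissenting (but still fact-checked) content reaches the user."""
    sims = items @ user_profile / (
        np.linalg.norm(items, axis=1) * np.linalg.norm(user_profile)
    )
    order = list(np.argsort(-sims))               # most agreeable first
    k = max(1, int(len(order) * challenge_share))
    agreeable, challengers = order[:-k], order[-k:]
    feed = []
    while agreeable or challengers:
        if agreeable:
            feed.append(agreeable.pop(0))
        if challengers:
            feed.append(challengers.pop(0))       # one challenger per pair of slots
    return feed
```

In practice, the fact-checking step described above would be essential, so that the “challenging” slots carry authentic content rather than simply more noise.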

AI could also be trained to help us address some of our most pernicious cognitive biases, such as the “gambler’s fallacy,” the belief that because an independent chance event has happened often in the recent past, it is less likely to happen again (a long run of heads convinces us that tails is “due”); “confirmation bias,” seen when we select only data that confirms our position; or “false consensus,” when we reinforce our point of view by believing more people agree with us than is actually the case. AI could be applied to help us correct this behavior online (perhaps by evaluating our posts or the articles we choose to share), or even through individualized digital training. AI could also help us access more facts, rather than opinions and biased stories.
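
To see why the gambler’s fallacy really is a fallacy, consider a toy coin-flip simulation (illustrative only; the seed and streak length are arbitrary choices):

```python
# A toy demonstration that, for independent flips, a streak of heads
# tells us nothing about the next flip. Purely illustrative.
import random

random.seed(0)
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

# Collect the outcome immediately following every run of 5 heads.
after_streak = [flips[i + 5] for i in range(len(flips) - 5)
                if all(flips[i:i + 5])]
print(f"P(heads | 5 heads in a row) = "
      f"{sum(after_streak) / len(after_streak):.3f}")
# Prints roughly 0.500: the "due" tails never materializes.
```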

By helping improve the way we see and judge the world around us, AI applied in this way could vastly improve the quality of content being shared online by helping us to be more insightful and responsible.

Re-assigning Unethical Work

Unfortunately, a darker side of human nature can be found in our cruelest tendencies. The internet has made these tendencies more visible than ever, with websites, subreddits and even Facebook groups dedicated to sexist, violent, and/or illegal activities.

In our collective efforts to fight these activities, companies and governments have enlisted teams of humans to seek out, review and remove offending content. This is unquestionably one of the most unethical steps we have taken in the evolution of the internet. Underpaid, under-supported and undertrained humans are the ones cataloging and removing videos of the Christchurch, New Zealand shooting; tracking the proliferation of child pornography across websites; moderating racist and sexist trolls in discussion forums; and more. This work is deeply affecting, traumatizing and never-ending. Casey Newton covered this topic in depth in interviews with former moderators for Google and YouTube; I highly recommend reading his reporting to understand the emotional and mental toll this work takes.

I believe it is our ethical duty to keep humans from having to do this kind of work any longer. Every company and organization with a digital presence could be helping to advance this application of AI.

Instead of using AI to judge defendants and (often incorrectly) predict their likelihood of committing crimes, judicial systems and police departments could use it to analyze surveillance video and detect crimes as they occur in real time, rather than requiring humans to watch the footage. The same advances would help AI applications deployed online better detect illegal images and video. The possibilities and benefits of this cooperation and ethical focus are limitless, but companies and public organizations alike need to prioritize this application of AI.
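
As a concrete illustration of humans overseeing this work rather than executing it, here is a minimal sketch of confidence-based triage. The thresholds and the score_fn stand-in are hypothetical assumptions, not a description of any deployed system:

```python
# A minimal sketch of confidence-based moderation triage: AI handles the
# clear-cut cases, and humans only review the ambiguous ones. Thresholds
# and score_fn are hypothetical assumptions.
from typing import Callable

AUTO_REMOVE = 0.95   # near-certain policy violation: remove automatically
AUTO_ALLOW = 0.05    # near-certain benign content: publish automatically

def triage(post: str, score_fn: Callable[[str], float]) -> str:
    """Route a post using a policy-violation probability from any trained
    classifier (score_fn stands in for a real moderation model)."""
    p = score_fn(post)
    if p >= AUTO_REMOVE:
        return "removed"        # no human ever has to view it
    if p <= AUTO_ALLOW:
        return "published"
    return "human_review"       # only borderline cases reach a person
```

The design goal is simple: the clearer the model’s verdict, the less often a person has to look at the material at all.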

Human-Centric AI

In September 2019, after a long debate involving the scientific, business and political communities, the European Union (EU) released its guidelines on ethics in artificial intelligence. I encourage everyone to have a look at the briefing. Here is the core principle of the guidelines, which I fully endorse:

“The human-centric approach to AI strives to ensure that human values are central to the way in which AI systems are developed, deployed, used and monitored, by ensuring respect for fundamental rights, including those set in the Treaties of the European Union and Charter of Fundamental Rights of the European Union, all of which are united by reference to a common foundation rooted in respect for human dignity, in which the human being enjoys a unique and inalienable moral status. This also entails consideration of the natural environment and of other living beings that are part of the human ecosystem as well as a sustainable approach enabling the flourishing of future generations to come.”

Artificial intelligence is not something to be feared. If it is designed correctly, overseen appropriately, and applied where it is most needed, AI could be one of the best inventions for humankind yet, by helping to protect us from ourselves.


Here are links to learn more about these issues and the future of artificial intelligence.

EU guidelines on ethics in artificial intelligence
Building in Our Biases: Ethics in Artificial Intelligence
Using Blockchain and AI to Fight “Deepfakes”
Hope on the AI Horizon for Data Privacy
Data is Indeed the New Oil, But Not for the Reason You Expect

Fabrizio Del Maffeo
Head of Artificial Intelligence, Bitfury
