The Importance of Ethics in Artificial Intelligence
(Or any form of technology for that matter)
“Just because we can, doesn’t mean we should” is something to keep in mind when it comes to innovating with technology. The arrival of the internet has 10x’d the speed of innovation and lets us build pretty much anything we can think of. Artificial Intelligence is a great example of a space in which we can build whatever we like and then some. But should we?
As ethical as its developer
Ethics (noun): moral principles that govern a person’s behavior or the conducting of an activity. (e.g., “many scientists question the ethics of cruel experiments”)
We humans have something called “a moral compass”. It’s an agent that sits in our brain and tells right from wrong. When you see an injustice, your brain tells you something isn’t right. The actions that follow are up to you, but you can tell right from wrong. The standards of your moral compass depend strongly on your upbringing and environment, but most people have one. It’s also what companies build their ethics and compliance policies on: what’s right, what’s wrong, and which rules do we set based on that.
Artificial Intelligence lacks such a compass. As a matter of fact, it lacks any kind of compass. Artificial Intelligence can only separate right from wrong based on data that has the label “right” or the label “wrong” attached to it. AI doesn’t have awareness of itself, nor does it have something called “empathy”, which is the foundation of ethics. The only moral compass there is when talking about AI is that of its developer, who sets the bar for what is right and what is wrong. If the developer has a weak moral compass, they might develop AI with bad intentions, and vice versa. That doesn’t mean AI will always live by those standards, as AI isn’t coded, it’s trained. Meaning it could be made with good intentions and still drift into something less morally sound, or less “for good”, than one might have hoped.
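To make that concrete, here is a minimal sketch in Python (assuming scikit-learn and entirely made-up toy data) of the point above: a supervised model’s whole notion of “right” and “wrong” is just the labels a human attached to its training data.

```python
# A minimal sketch, assuming scikit-learn and contrived toy data:
# the model's "moral compass" is nothing but the labels below.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical behavior descriptions; the labels, not the model,
# carry the moral judgment (1 = "right", 0 = "wrong").
texts = [
    "helped a customer",
    "refunded the error",
    "denied a valid claim",
    "ignored the complaint",
]
labels = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# The model simply echoes the labeler's judgment; flip the labels,
# retrain, and the same sentence would come out as "right".
print(model.predict(vectorizer.transform(["denied a valid claim"])))  # [0]
```

Nothing in that script knows what a claim is or why denying a valid one is wrong; it only knows which strings a human stamped with a 0.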
So, why is Ethics in technology such a big deal?
Well, if we don’t build technology on a foundation of ethics and make sure we understand the outcome of every algorithm we implement, we run the risk of building unethical systems. And with that I don’t mean “use a knife and fork when eating” ethical. I mean “not being racist or incriminating innocent people” ethical. Sounds hefty? We already have examples of biased training data leading to racist decision-making.
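A deliberately contrived sketch (hypothetical features and data, not any real system) of how that happens: if historical decisions were biased on a proxy such as a postcode, a model trained on them will faithfully repeat the bias.

```python
# A contrived sketch of how bias in historical data becomes bias in
# automated decisions. Feature names and data are assumptions.
from sklearn.linear_model import LogisticRegression

# Past loan decisions: [income_band, postcode_group], where the
# postcode group correlates with race in this made-up history.
X = [[3, 0], [2, 0], [1, 0], [3, 1], [2, 1], [1, 1]]
y = [1, 1, 1, 0, 0, 0]  # approvals that tracked postcode, not income

model = LogisticRegression().fit(X, y)

# Two applicants with identical income, different postcode group:
# the model reproduces the historical bias.
print(model.predict([[2, 0], [2, 1]]))  # [1 0]
```

No one coded “discriminate by postcode” into that model; it simply learned the pattern the data already contained, which is exactly why understanding outcomes matters as much as intentions.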
Or even worse: that time Facebook had two Artificial Intelligence bots negotiate with each other. They talked in English for a short while before drifting into a shorthand language of their own that the developers could not readily understand. Is that unethical? Well, in the sense that humans were no longer able to monitor what was happening, the outcome (or contents) of those conversations might very well have been unethical.
AI doesn’t have awareness of itself, nor does it have something called “empathy”, which is the foundation of ethics.
Govern the behavior
Let’s grab that definition of ethics one more time: “Ethics are moral principles that govern a person’s behavior”. If we cannot govern the behavior of the things we build, how can we ever vet their ethics? We need to always (always always always) be the ones who determine the behavior of Artificial Intelligence. Of course, with capabilities like self-learning we don’t want to slow down the process of development, as that would defeat the entire purpose. That ultimately means two things:
- We need to build ethics into the very reason why a certain piece of technology, equipped with AI, is being developed.
- We need to monitor, check, and police the outcomes of that specific piece of technology in order to fully understand its behavior and make sure it isn’t violating our (human) moral compass (a sketch of what that could look like follows below).
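As promised, here is a minimal sketch of that second point in practice: a routine audit over a model’s logged decisions that flags when outcomes diverge between groups. The group names, log format, and 10% threshold are all assumptions for illustration, not a standard.

```python
# A minimal sketch of outcome monitoring; threshold, log format,
# and group labels are assumptions chosen for the example.
from collections import defaultdict

def audit_outcomes(decisions, max_gap=0.10):
    """decisions: iterable of (group, approved) pairs from production logs."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    if gap > max_gap:
        print(f"ALERT: approval rates diverge by {gap:.0%}: {rates}")
    return rates

# A hypothetical day of logged decisions
audit_outcomes([("A", True), ("A", True), ("A", False),
                ("B", False), ("B", False), ("B", True)])
```

The point isn’t this particular metric; it’s that someone, a human, is looking at what the system actually does after it ships, not just at what it was meant to do.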
So ethics is not only important in technology (and especially in Artificial Intelligence); it should be the foundation of any innovation. We cannot run the risk of building unethical tools. So whenever something risks being unethical for the sake of innovation or financial gain, we should remember:
“Just because we can, doesn’t mean we should”
What do you think? Do we need to govern technology forever or will Artificial Intelligence become smart enough to intrinsically separate right from wrong?