Grow VC Group

We’re not asking the right questions about AI ethics

6/25/2018

Recently I was watching a discussion about the ethical questions of AI – in other words, how to ensure AI does more good than harm to human beings and societies. Nowadays many discussions are easily polarized, and this seems to be the case for AI ethics too. There are strong opinions on both sides: either people can always use machines (including AI) for their own good, or AI machines will become apocalyptic beasts. Now is the time to evaluate this debate more deeply.

Some people defend AI, saying it is not a threat because it is basically just software code programmed by humans, who can also create rules for its behavior in each situation. They can decide and control what kinds of ethical decisions machines make, and create iron rules that cannot be broken. These people also often argue that which machines – and what kinds of machines – will be allowed is a political decision.

The other side of that coin is that humans are exactly the problem – or at least those humans who will create machines to amplify their own power and position in business and society. Slave machines will do the work of people, sex robots will replace human partners, and fighter robots will populate armies. Basically, the logic is that bad people with bad intentions can – and will – do bad things with AI and machines to gain power in the world.

The reality is much more complex. Here are five questions that explore a few aspects of this debate:

1. Can political processes control what kind of machines and ethics rules are implemented?
It is hard to believe that any political decision can really stop the development of AI. History gives us plenty of examples: if something is technically possible, someone will implement it sooner or later. There are many motivations to do so – making money, improving business, gaining power or simple intellectual curiosity. New solutions and machines will be used for business purposes. Even if governments were to ban them for private use, they would still be developed for military purposes, or perhaps criminals and terrorists would develop them for their own ends. This is not to say that politicians, governments and societies cannot develop rules and laws for machines – the point is that bans and overly restrictive rules never work.

2. Is AI technology only bad, or is it just another step in the natural development of human society that has improved our lives in many ways?
There have always been people who see development or progress as a threat. Of course, AI machines raise many complex questions, not only in terms of how the machines behave, but also the purposes for which they are developed. They can replace workers and change the distribution of wealth, and these changes can create crises for many individuals. At the same time, we have seen these kinds of changes many times before throughout history, such as the shifts from agricultural societies to industrial societies and then service societies. Nevertheless, all parties must take these issues seriously and work to find solutions for them. This means, for example, finding solutions for wealth distribution (perhaps in the form of new tax and basic income systems), human rights and how each human being can maintain her or his dignity.

3. Can we program ethical rules for machines so that everything works based on our rules?
It is still unknown whether machines will ever develop consciousness. At the very least we can say that if they do, it will be different from human consciousness. In any case, some machines are already becoming so complex that we cannot create simple rules to govern how they think and behave. Machines process so much data, and learn so much from it, that it is not possible for us to predict their behavior in every situation, especially when machines are linked to each other and learn from each other too. There is currently work being done to create a kind of ‘moral machine’ inside AI. This can include top-down categorical rules (e.g. “never do this”) and bottom-up learning from different real-world situations. Nowadays it is thought that these moral machines should be based on a hybrid model of rules and learning. But there are still many complex problems to be solved to get this to work.
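The hybrid model described above – categorical top-down rules combined with bottom-up learning from examples – can be sketched in a few lines of Python. This is purely an illustrative toy, not any real system: the action names, the rule set and the frequency-based "learning" are all hypothetical stand-ins for what would in practice be a trained model.

```python
# Hypothetical sketch of a hybrid 'moral machine': hard top-down rules
# veto first, then a bottom-up component learned from past examples decides.

FORBIDDEN_ACTIONS = {"harm_human", "deceive_user"}  # categorical "never do this" rules

def learned_score(action, history):
    """Bottom-up component: a naive approval rate computed from past
    (action, approved) examples. Stands in for a trained model."""
    relevant = [approved for a, approved in history if a == action]
    if not relevant:
        return 0.5  # no experience with this action: stay neutral
    return sum(relevant) / len(relevant)

def decide(action, history, threshold=0.5):
    # Top-down rules apply first, regardless of what was learned.
    if action in FORBIDDEN_ACTIONS:
        return False
    # Otherwise defer to the learned, experience-based judgment.
    return learned_score(action, history) >= threshold

history = [("share_data", False), ("share_data", False), ("assist_user", True)]
print(decide("harm_human", history))   # False: blocked by categorical rule
print(decide("assist_user", history))  # True: approved by learned component
print(decide("share_data", history))   # False: learned approval rate too low
```

Even this toy shows the open problems the text mentions: who writes the forbidden list, whose examples populate the history, and how the two components should interact when they disagree.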

4. Do we even know – and can humans agree on – which ethical rules to implement?
This is one important question that is often ignored in discussions of AI and machine ethics. Not all people are ethical – or, put another way, many people have very different ideas of what constitutes ethical behavior. Even from the philosophical point of view, there are very different approaches – e.g. rule-based deontological models or result-oriented utilitarian models. Then there are further questions, such as how to interpret these models in practical situations. If we want to teach ethics and behavioral rules to machines, we must first define common principles. But even if we do that, there will be people who teach different models to machines – for better or worse – just as people do to each other.

5. Who should take the lead in discussion and decision making on AI ethics?
The simple answer is that everyone must participate, and have the right to participate, in this process. But the reality is more complex. At the very least there will be a combination of technology, business and political processes. Even academic discussion is difficult, because it requires competence in many areas, such as moral philosophy, data science and economics – not many people fully understand even one of those areas, let alone all three. An important starting point is to increase awareness and encourage open discussion and systematic thinking around these matters. But how many politicians, for example, have seriously started to think and talk about this?
As we can see, we have many open and unanswered questions even as AI development is underway – and the truly important questions focus on the interaction between AI machines and human beings, and the impact on the latter, not just on machines and their behavior. In the discussion I mentioned at the beginning of this article, someone made an interesting point: human beings and machines will probably become more similar over time, but not only because machines will become more like humans – it will also be vice versa. As machines take on more important roles, people will start to behave more like machines.

The article was first published on Disruptive.Asia. 
Photo: Wikimedia Commons (Artificial.intelligence.jpg)

© 2009-2023 Grow VC Operations Ltd. All Rights Reserved.