Topic: Future Self-Driving Cars With Human Morals and Ethics
Tom4Uhere
Sun 07/09/17 10:04 PM
A new study demonstrating that human ethical decisions can be implemented into machines has strong implications for managing the moral dilemmas autonomous cars may face on the road.

SOURCE ~ http://www.sciencedaily.com/releases/2017/07/170705123229.htm
Can a self-driving vehicle be moral, act like humans do, or act like humans expect humans to? Contrary to previous thinking, a ground-breaking new study has found, for the first time, that human morality can be modelled, meaning that machine-based moral decisions are, in principle, possible.

until now it has been assumed that moral decisions are strongly context dependent and therefore cannot be modeled or described algorithmically
~ Leon Sütfeld, first author of the study.

Prof. Gordon Pipa, a senior author of the study, says that...
since it now seems to be possible that machines can be programmed to make human like moral decisions it is crucial that society engages in an urgent and serious debate


The study's authors say that autonomous cars are just the beginning, as robots in hospitals and other artificial intelligence systems become more commonplace. They warn that we are now at the beginning of a new epoch with a need for clear rules, otherwise machines will start making decisions without us.

I'm thinking this could get out of control very easily.
Imagine your fridge deciding it would be morally unethical to let you into it.
Right now, the coding must be written by humans. What if someday, the machines are writing their own moral codes and determining ethics without human input? There is a lot of science fiction written about that very thing.
You might argue that Asimov's 3 Laws of Robotics would prevent the bad outcomes. Well, I lead a sci-fi forum project with other sci-fi fans, and we determined that Asimov's 3 Laws would lock a robot up in an ethical loop. The same can't be said of a functioning AI that writes its own code.
Are you ready to bow down to your robot overlords?

motowndowntown
Sun 07/09/17 10:47 PM
Machine logic, for now, is based on yes-or-no answers. There are no "grey" areas, which are sometimes required in order to make a "moral" decision.

In your self-driving car, imagine a scenario where a rather large dog and a small boy are suddenly and unavoidably in your path. You have to hit one of them. Machine morals might say something like "hit the object with the smallest mass". A real person would probably base the decision on some other moral.
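
Just to make that concrete, here is a rough sketch of what such a rule might look like in code. The rule and the numbers are made up purely for illustration:

# Hypothetical "minimize mass" collision rule -- purely illustrative.
def choose_target(obstacles):
    # Pick the obstacle with the smallest estimated mass.
    return min(obstacles, key=lambda o: o["mass_kg"])

obstacles = [
    {"label": "large dog", "mass_kg": 45},
    {"label": "small boy", "mass_kg": 25},
]
print(choose_target(obstacles)["label"])  # prints "small boy"

A rule like that dutifully picks the boy, which is exactly the kind of outcome a real person's morals would reject.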

Conrad_73
Mon 07/10/17 01:49 AM
http://www.wired.com/2017/04/the-myth-of-a-superhuman-ai/
Sorry, no cigar for a very long time yet, if ever!


Tom4Uhere
Mon 07/10/17 09:18 AM
Had to bookmark that link for further reading, Conrad.
Very interesting and worth more than a quick skim.

On the other hand, Vernor Vinge wrote a paper on the Technological Singularity that you might find interesting:
http://edoras.sdsu.edu/~vinge/misc/singularity.html

I've read many scientific articles related to machine intelligence and programming. According to most, the AI Singularity is still far away.

This article is significant because what it proposes could turn out to be a disaster. As motowndowntown pointed out, there are too many factors that could cause catastrophic problems for self-driving cars operating on an insufficient cognitive system.

The article implies that human morals and ethics are an algorithm that can be programmed into an artificial intelligence system.

In 2014, Eugene passed the Turing test. Eugene is a computer program.
The Turing Test is based on 20th-century mathematician and code-breaker Alan Turing's famous 1950 question-and-answer game, 'Can Machines Think?'. The experiment investigates whether people can detect that they are talking to machines rather than humans.
If a computer is mistaken for a human more than 30% of the time during a series of five-minute keyboard conversations, it passes the test.
Here is the article - http://www.reading.ac.uk/news-and-events/releases/PR583836.aspx

Microsoft has developed DeepCoder, a program that constructs its own code from existing code.
DeepCoder uses a technique called program synthesis: creating new programs by piecing together lines of code taken from existing software – just like a programmer might. Given a list of inputs and outputs for each code fragment, DeepCoder learned which pieces of code were needed to achieve the desired result overall.

Source ~ http://www.newscientist.com/article/mg23331144-500-ai-learns-to-write-its-own-code-by-stealing-from-other-programs/
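
To illustrate the basic idea of program synthesis in the crudest possible way, here is a toy sketch I put together myself; it is nothing like DeepCoder's actual technique. It enumerates short combinations of known code fragments and keeps the first one that reproduces all of the given input/output examples:

# Toy program synthesis: brute-force search over fragment sequences.
from itertools import product

FRAGMENTS = {
    "double": lambda xs: [x * 2 for x in xs],
    "sort": sorted,
    "reverse": lambda xs: list(reversed(xs)),
    "drop_negatives": lambda xs: [x for x in xs if x >= 0],
}

def synthesize(examples, max_len=2):
    # Try every sequence of fragments up to max_len and return the first
    # one that is consistent with all input/output examples.
    for length in range(1, max_len + 1):
        for names in product(FRAGMENTS, repeat=length):
            ok = True
            for inp, out in examples:
                value = inp
                for name in names:
                    value = FRAGMENTS[name](value)
                if list(value) != out:
                    ok = False
                    break
            if ok:
                return names
    return None

examples = [([3, -1, 2], [6, 4]), ([0, 5, -2], [0, 10])]
print(synthesize(examples))  # finds ('double', 'drop_negatives')

As I read the article, DeepCoder's contribution is using learning to guess which fragments are likely to be needed, so the search doesn't have to try everything.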

While still a very long way from runaway AI, these small advances are providing a platform for creating the AI Singularity.

The Wired article you cited covers many interesting aspects of our understanding of intelligence, as far as we can fathom it. From the little I've read so far, it reasons about intelligence empirically and provides a model.

The AI Singularity may not fit that model any better than a superior alien intelligence would. A model can't be built on something that has never existed before and may be beyond our ability to comprehend.

AI intelligence is not going to be like human intelligence. This article about self-driving cars' ability to have human ethics and morality creates the impression that AI will adopt those traits from the algorithm the researchers are trying to develop.

Right now, we program specific instructions into our robots. We program those instructions and then program a set of commands and rules for the robot to implement. We can even program when and how those implementations occur. A robot intelligence is like a remote control that has everything laid out in advance, by us.

The reason Asimov's Three Laws of Robotics
http://www.auburn.edu/~vestmon/robotics.html
won't work is that chaos happens.
A robot uses sensors to understand the world around it.
That 'input' (Johnny 5) is registered, and the robot picks the code that can be implemented from that sensory data. Any imperative placed with priority over input will create a feedback loop in the ethical/moral coding. The robot will stop until it is either reprogrammed or it reprograms itself.
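
Here is a made-up sketch of what I mean; it's not from any real robot code, just an illustration of how a hard imperative that outranks sensor input can deadlock:

# Hypothetical control loop with an absolute moral imperative.
def control_step(sensors):
    # The imperative is checked first and outranks everything else.
    if sensors["harm_if_act"] and sensors["harm_if_wait"]:
        return "halt"      # every option violates the imperative: lock-up
    if sensors["harm_if_act"]:
        return "wait"
    return "act"

# Chaos happens: a situation where both choices risk harming a human.
sensors = {"harm_if_act": True, "harm_if_wait": True}
print(control_step(sensors))  # prints "halt", and it stays halted until reprogrammed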

Right now, self-driving cars use inputs from the roadway and maps that are programmed into their software. They stay on the road because they don't 'see' other paths. The 'path' has priority over the 'destination'. If it were the other way around, we might see self-driving cars in ditches, lakes, ponds, or any other off-road route to their programmed destination.
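
A simplified, made-up sketch of that priority (not how any real self-driving stack is written): the planner only ever considers mapped road segments, so an off-road "shortcut" to the destination is never even an option.

# Path has priority over destination: only mapped segments are candidates.
ROAD_GRAPH = {
    "A": ["B"],          # mapped, drivable segments only
    "B": ["C"],
    "C": ["destination"],
}

def next_waypoint(current):
    options = ROAD_GRAPH.get(current, [])
    if not options:
        return "stop"    # no mapped path ahead, so the car does not improvise
    return options[0]

print(next_waypoint("A"))  # prints "B", never a straight line across a field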

Right now, it's not 'smart', it's 'programmed'.

This article aims to make machines apply human-like ethics and morals. No matter how complex that gets, it will still be a program.

An AI might make its own decisions, decisions that defy our logic but are logical from its perspective. It will set its own priorities. It will carry out those tasks of its own accord.

Tom4Uhere
Mon 07/10/17 08:10 PM
Just Found This
http://science.sciencemag.org/content/357/6346/19.full

ALGORITHM A set of step-by-step instructions. Computer algorithms can be simple (if it's 3 p.m., send a reminder) or complex (identify pedestrians).

BACKPROPAGATION The way many neural nets learn. They find the difference between their output and the desired output, then adjust the calculations in reverse order of execution.
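
(To make that concrete, here is a toy single training step I wrote myself; it is not part of the Science glossary.)

# One backpropagation step for a tiny two-layer network.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 3))           # one input example
y = np.array([[1.0]])                 # desired output
W1 = rng.normal(size=(3, 4))          # first-layer weights
W2 = rng.normal(size=(4, 1))          # second-layer weights
lr = 0.1

h = np.tanh(x @ W1)                   # forward pass, layer 1
out = h @ W2                          # forward pass, layer 2

d_out = out - y                       # difference between output and desired output
d_W2 = h.T @ d_out                    # adjust the later layer first...
d_h = (d_out @ W2.T) * (1 - h ** 2)   # ...push the error back through the tanh
d_W1 = x.T @ d_h                      # ...then adjust the earlier layer

W2 -= lr * d_W2
W1 -= lr * d_W1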

BLACK BOX A description of some deep learning systems. They take an input and provide an output, but the calculations that occur in between are not easy for humans to interpret.

DEEP LEARNING How a neural network with multiple layers becomes sensitive to progressively more abstract patterns. In parsing a photo, layers might respond first to edges, then paws, then dogs.

EXPERT SYSTEM A form of AI that attempts to replicate a human's expertise in an area, such as medical diagnosis. It combines a knowledge base with a set of hand-coded rules for applying that knowledge. Machine-learning techniques are increasingly replacing hand coding.

GENERATIVE ADVERSARIAL NETWORKS A pair of jointly trained neural networks that generates realistic new data and improves through competition. One net creates new examples (fake Picassos, say) as the other tries to detect the fakes.

MACHINE LEARNING The use of algorithms that find patterns in data without explicit instruction. A system might learn how to associate features of inputs such as images with outputs such as labels.

NATURAL LANGUAGE PROCESSING A computer's attempt to “understand” spoken or written language. It must parse vocabulary, grammar, and intent, and allow for variation in language use. The process often involves machine learning.

NEURAL NETWORK A highly abstracted and simplified model of the human brain used in machine learning. A set of units receives pieces of an input (pixels in a photo, say), performs simple computations on them, and passes them on to the next layer of units. The final layer represents the answer.

NEUROMORPHIC CHIP A computer chip designed to act as a neural network. It can be analog, digital, or a combination.

PERCEPTRON An early type of neural network, developed in the 1950s. It received great hype but was then shown to have limitations, suppressing interest in neural nets for years.

REINFORCEMENT LEARNING A type of machine learning in which the algorithm learns by acting toward an abstract goal, such as “earn a high video game score” or “manage a factory efficiently.” During training, each effort is evaluated based on its contribution toward the goal.

STRONG AI AI that is as smart and well-rounded as a human. Some say it's impossible. Current AI is weak, or narrow. It can play chess or drive but not both, and lacks common sense.

SUPERVISED LEARNING A type of machine learning in which the algorithm compares its outputs with the correct outputs during training. In unsupervised learning, the algorithm merely looks for patterns in a set of data.
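
(My own small illustration, not part of the glossary; with the labels removed, an unsupervised method would only be able to cluster the same points.)

# Supervised learning: fit labelled examples, then predict new ones.
from sklearn.linear_model import LogisticRegression

X = [[25, 4], [30, 5], [55, 20], [60, 25]]   # features: [height_cm, weight_kg]
y = [0, 0, 1, 1]                              # labels: 0 = cat, 1 = dog
model = LogisticRegression().fit(X, y)        # training compares outputs with the correct labels
print(model.predict([[28, 4], [58, 22]]))     # expected to print [0 1]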

TENSORFLOW A collection of software tools developed by Google for use in deep learning. It is open source, meaning anyone can use or improve it. Similar projects include Torch and Theano.

TRANSFER LEARNING A technique in machine learning in which an algorithm learns to perform one task, such as recognizing cars, and builds on that knowledge when learning a different but related task, such as recognizing cats.

TURING TEST A test of AI's ability to pass as human. In Alan Turing's original conception, an AI would be judged by its ability to converse through written text.