
Oh no, not the Harvard moral psychology argument again. You know the one: a person sees five people on the track below, with a fast-moving train bearing down that they can't see, and another track with just one person on it, while the observer stands at a distance by the switch box that can divert the train from one track to the other. What should they do? It's one of those moral dilemma challenges: if they switch the track, they cause one person to die; if they do nothing, five people are going to be killed. With only seconds to act, what do they do?


Well, in walks the future world of artificial intelligence and autonomous cars. Everyone has been in a situation where we must swerve to avoid something; sometimes we risk damaging our own car to avoid hitting a child who just rode out in front of us on his bicycle. So, here goes the challenge. You see:

There was a fascinating article in "Nature - International Weekly Journal of Science" titled "Machine ethics: The robot's dilemma - Working out how to build ethical robots is one of the thorniest challenges in artificial intelligence," by Boer Deng, July 1, 2015. The article stated:

"In May, a panel talk on driverless cars in the Brookings Institution, a think tank in Dc, changed into attorney at law regarding how autonomous vehicles would behave inside a crisis. What if a vehicle's efforts to save its passengers by, say, slamming on the brakes risked a pile-up with the vehicles behind it? Or let's say an autonomous car swerved in order to avoid a young child, but risked hitting another individual nearby?"

Well, yes, there are those types of dilemmas, but before we get into any of that, or into logic-based probability rules, there are dilemmas that are even more serious to ponder than the former. Let's talk, shall we?

You see, what some from the black-and-white world of programming fail to comprehend is that laws and rules are never that absolute; there are always exceptions and circumstances. Poorly programmed AI would be a disaster for "what's right" in the eyes of those it is supposedly serving. Ethically speaking, this indeed ends up going against everything we stand for in a free country.
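
To make that point concrete, here is a minimal, purely hypothetical Python sketch (the function name, parameters, and rules are illustrative assumptions, not taken from any real vehicle software). A rigid, rule-based decision routine looks clean on paper, but the moment an exception arises, say braking will not stop the car in time and the only swerve path endangers someone else, the fixed rules simply pick an outcome rather than weigh the circumstances.

    # Hypothetical sketch only: a naive, rule-based "ethics" routine for an
    # autonomous car. The rules look reasonable in isolation, but real-world
    # exceptions quickly force them into conflict.

    def choose_action(obstacle_ahead: bool,
                      can_brake_in_time: bool,
                      swerve_path_clear: bool) -> str:
        """Return 'brake', 'swerve', or 'continue' under fixed rules."""
        # Rule 1: always brake for an obstacle.
        # Rule 2: never leave the lane (mimics a hard traffic law).
        if obstacle_ahead:
            if can_brake_in_time:
                return "brake"
            # Exception the rules cannot really express: braking won't work,
            # and swerving violates Rule 2 and may endanger someone nearby.
            if swerve_path_clear:
                return "swerve"   # breaks the "never leave the lane" rule
            return "brake"        # hits the obstacle anyway
        return "continue"

    # A child rides out in front of the car; braking alone won't stop in time,
    # and the only swerve path has a pedestrian on it.
    print(choose_action(obstacle_ahead=True,
                        can_brake_in_time=False,
                        swerve_path_clear=False))  # -> "brake"

Whichever branch the programmer writes first effectively pre-decides the outcome, which is exactly the quicksand described next.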

So how should programmers approach this dilemma as they pre-decide who might live or die in some hypothetical situation in the future? Yes, you can see the philosophical moral quicksand here - this and other challenges will follow these future concept autonomous cars, and mind you, they will be here in no time.