Not again, not the Harvard moral psychology argument again. You already know the one: a bystander sees five people on the track below, with a fast-moving train they can't see bearing down on them, and another track with only one person on it, while the bystander stands some distance away at the switch box, able to divert the train from one track to the other. What happens if they throw the switch? It's one of those classic moral dilemma challenges: if they switch the track, they have caused someone to die; if they do nothing at all, five people will be killed; and they have only seconds to act. What should they do?

Well, in walks the brand new future world of artificial intelligence and autonomous cars. We've all been in situations where we had to swerve to avoid something, sometimes risking damage to our own car to avoid hitting a kid who just rode out in front of us on his bicycle. So, here is the task at hand - you see:

There was an interesting article in "Nature - International Weekly Journal of Science" titled "Machine ethics: The robot's dilemma - Working out how to build ethical robots is one of the thorniest challenges in artificial intelligence," by Boer Deng, published on July 1, 2015. The article stated:

"In May, a panel talk on driverless cars on the Brookings Institution, a think tank in Washington DC, converted into a discussion regarding how autonomous vehicles would behave in the crisis. Suppose a vehicle's efforts in order to save its passengers by, say, slamming around the brakes risked a pile-up with all the vehicles behind it? Or what if an autonomous car swerved to prevent a youngster, but risked hitting somebody else nearby?"

Well, yes, there are those types of dilemmas, but before we get into any of that, or into logic-based probability rules, there are even more serious dilemmas to ponder first. Let's talk, shall we?

You see, what some in the black-and-white world of programming fail to understand is that laws and rules are never really that absolute, since there are always exceptions and extenuating circumstances. Poorly programmed AI would be a disaster for "what's right" in the eyes of the very people it is supposedly serving. Ethically speaking, it ultimately ends up going against everything we stand for in a free country.
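To make that concrete, here is a deliberately naive sketch of the kind of fixed-rule evasive logic being criticized above. All names and numbers in it are hypothetical, not drawn from any real vehicle's software; it simply counts the people at risk on each path, which is exactly the trolley-switch calculus, with no room for context or exceptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Hazard:
    kind: str           # e.g. "pedestrian", "vehicle", "debris"
    people_at_risk: int

def naive_evasive_action(ahead: Hazard, swerve_path: Optional[Hazard]) -> str:
    """Pick whichever action puts fewer people at risk."""
    if swerve_path is None:
        # Nothing detected in the escape path, so swerving looks "free".
        return "swerve"
    if swerve_path.people_at_risk < ahead.people_at_risk:
        # A pure body count decides - the trolley-switch calculus,
        # with no allowance for uncertainty, context, or exceptions.
        return "swerve"
    return "brake"

# Five people ahead, one person on the alternate path: the rule swerves,
# mechanically "choosing" who is put in danger.
print(naive_evasive_action(Hazard("pedestrian", 5), Hazard("pedestrian", 1)))
```

A rule this rigid has already pre-decided the outcome of every crisis before it happens, which is precisely the moral quicksand discussed next.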

So, just how should programmers approach this dilemma as they pre-decide who might live or die in some hypothetical future situation? Yes, you can see the philosophical and moral quicksand here - more of this and other such challenges will follow these future concept autonomous cars, and mind you, they will be here before you know it.