
Not again, not the Harvard moral-psychology argument again: you know the one, about a person who sees five people on the track below and a fast-moving train they cannot see coming, while another track has only a single person on it. The observer, safely out of the way, is sitting at the switch box that can divert the train from one track to the other. It's one of those moral-dilemma challenges: if they switch the track, they have caused one person to die; if they do nothing, five people are going to be killed. They have seconds to act. What do they do?

Well, in walks the new future arena of artificial intelligence and autonomous cars. Most of us have been in situations where we had to avoid something and swerve; sometimes we risk damaging our own car to avoid hitting a kid who just rode out in front of us on his bicycle. So, here goes the challenge.

You see, there was an interesting article in Nature (International Weekly Journal of Science) titled "Machine ethics: The robot's dilemma - Working out how to build ethical robots is one of the thorniest challenges in artificial intelligence," by Boer Deng, July 1, 2015. The article stated:

"In May, a panel talk on driverless cars on the Brookings Institution, a think tank in Dc, converted into a discussion about how exactly autonomous vehicles would behave inside a crisis. Let's say a vehicle's efforts to save its passengers by, say, slamming for the brakes risked a pile-up with all the vehicles behind it? Or imagine if an autonomous car swerved to avoid a kid, but risked hitting someone else nearby?"

Well, yes, those sorts of dilemmas do exist, but before we get into any of that, or into logic-based probability rules, there are other dilemmas that are even worse to ponder than the earlier ones. Let's talk, shall we?

You see, what some in the black-and-white world of programming fail to understand is that laws and rules are never really that, as there are always exceptions and circumstances. Poorly programmed AI would be a disaster for "what's right" in the eyes of the people it is supposedly serving. Ethically speaking, this indeed ends up going against everything we stand for in a free country.
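To make that point concrete, here is a minimal sketch, in Python, of the kind of rigid, hard-coded rule being criticized. The function name, inputs, and casualty counts are illustrative assumptions for this article, not anything drawn from a real vehicle's software:

```python
# Illustrative only: a naive, rule-based "ethics" routine.
# All names and inputs here are hypothetical assumptions.

def choose_maneuver(stay_course_casualties: int, swerve_casualties: int) -> str:
    """Pick whichever action minimizes the casualty count.

    A rigid utilitarian rule: it has no notion of uncertainty,
    exceptions, or special circumstances.
    """
    if swerve_casualties < stay_course_casualties:
        return "swerve"
    return "stay_course"

# The trolley-style case: five people ahead, one on the alternate path.
print(choose_maneuver(stay_course_casualties=5, swerve_casualties=1))  # swerve
```

The trouble, as argued above, is that real situations rarely arrive as clean integers: the counts are uncertain estimates made in milliseconds, and a rule this rigid has no way to encode the exceptions that human judgment handles.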

So, just how should programmers approach this dilemma as they pre-decide who might live or die in some hypothetical future situation? Yes, you can see the philosophical and moral quicksand here. This and other challenges will follow these future concept autonomous cars; nevertheless, they will be here before long.