
Not again, not the Harvard moral psychology argument again. You know the one: a bystander looks down at the tracks and sees five people on one track, a fast-moving train they cannot see coming, and another track with only one person on it. The bystander happens to be standing at the switch box and can divert the train from one track to the other. Should they? It is one of those classic moral dilemma challenges: if they throw the switch, they cause one person to die; if they do nothing, five people are killed. With only seconds to act, what do they actually do?


Well, enter the emerging arena of artificial intelligence and autonomous cars. We have all been in situations where we had to avoid something and swerve, sometimes risking damage to our own car to keep from hitting a child who just rode out in front of us on his bicycle. So, here is the challenge. Consider this:

There was an interesting article in Nature - International Weekly Journal of Science titled "Machine ethics: The robot's dilemma - Working out how to build ethical robots is one of the thorniest challenges in artificial intelligence," by Boer Deng, published on July 1, 2015. The article stated:

"In May, a panel talk on driverless cars on the Brookings Institution, a think tank in Dc, changed into a conversation about how autonomous vehicles would behave within a crisis. Suppose a vehicle's efforts to save its passengers by, say, slamming about the brakes risked a pile-up together with the vehicles behind it? Or let's say an autonomous car swerved to avoid a youngster, but risked hitting another person nearby?"

Well, yes, those are the kinds of dilemmas we run into once we get into any of that, and once logic-based, probability-driven rules are involved, there are dilemmas even more serious than the ones above to ponder. Let's talk, shall we?
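To make the dilemma concrete, here is a minimal sketch, in Python, of the kind of expected-harm calculation a naive, purely logic-and-probability-based planner might run in the crisis the Nature piece describes. The maneuver names, probabilities, and harm figures are all hypothetical, invented for illustration; nothing here reflects how any real vehicle is programmed.

<pre>
# Hypothetical illustration: a naive planner scores each emergency
# maneuver by expected harm (probability of a collision multiplied
# by the number of people at risk) and picks the minimum.
# All names and numbers below are invented for this example.

maneuvers = {
    "brake_hard":     {"p_collision": 0.4, "people_at_risk": 3},  # pile-up behind
    "swerve_left":    {"p_collision": 0.2, "people_at_risk": 1},  # bystander nearby
    "stay_on_course": {"p_collision": 0.9, "people_at_risk": 1},  # child ahead
}

def expected_harm(option):
    # Expected harm = chance of collision * people exposed to it.
    return option["p_collision"] * option["people_at_risk"]

best = min(maneuvers, key=lambda name: expected_harm(maneuvers[name]))
print(best)  # "swerve_left": 0.2 expected harm beats 0.9 and 1.2
</pre>

The arithmetic is trivial; the trouble is everything the numbers hide: who counts as "at risk," how those probabilities were estimated in a fraction of a second, and whether minimizing a sum of harms is even the right objective in the first place.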

The thing is, what some in the black-and-white world of programming fail to understand is that laws and rules are never truly that; there are always exceptions and circumstances. Poorly programmed AI could be a disaster for "what's right" in the eyes of the very people it is supposedly serving. Ethically speaking, this ends up going against everything we stand for in a free country.
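As a toy example of that black-and-white thinking, consider a hard-coded rule such as "never cross the center line." The sketch below is hypothetical, with an invented rule set and scenario; it simply shows how a rigid rule, applied with no notion of circumstance, forbids the one maneuver a human driver would obviously take.

<pre>
# Hypothetical illustration of a rigid, rule-based driving policy.
# The rules and the scenario are invented for this example.

HARD_RULES = [
    # rule name,                 predicate over the proposed action
    ("never_cross_center_line", lambda a: not a["crosses_center_line"]),
    ("never_exceed_50_kmh",     lambda a: a["speed_kmh"] <= 50),
]

def rule_allows(action):
    """A black-and-white policy: an action is permitted only if it
    violates no rule, regardless of the circumstances."""
    return all(pred(action) for _, pred in HARD_RULES)

# A child darts into the lane; the only clear path crosses the
# center line, and there is no oncoming traffic at all.
swerve = {"crosses_center_line": True, "speed_kmh": 45}

print(rule_allows(swerve))  # False: the rule forbids the obvious choice
</pre>

A human driver treats "never cross the center line" as a default with exceptions; the rigid policy above cannot, which is exactly the gap the paragraph is pointing at.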

So how should programmers approach this dilemma as they pre-decide who might live or die in some hypothetical future situation? Yes, you can see the philosophical moral quicksand here. More of these and other challenges will follow these future concept autonomous cars, and make no mistake, they will be here before you know it.