
Not again, not the Harvard moral psychology argument again. You know the one: a person sees five people on the track below, with a fast-moving train bearing down that they cannot see, and another track with only one person on it. The observer, well away from the danger, is standing at the switch box that can divert the train from one track to the other. What should they do? It's the classic moral dilemma: if they throw the switch, they have caused one person to die; if they do nothing, five people are likely to be killed. They have seconds to act. What do they do?


Well, in walks the new realm of artificial intelligence and autonomous cars. Everyone has been in a situation where we had to swerve to avoid hitting something; occasionally we risk damaging our own car to avoid hitting a child who has just ridden out in front of us on his bicycle. So, here is the challenge - you see:

There is an interesting article in Nature - International Weekly Journal of Science titled "Machine ethics: The robot's dilemma - Working out how to build ethical robots is one of the thorniest challenges in artificial intelligence," by Boer Deng, July 1, 2015. The article stated:

"In May, a panel talk on driverless cars on the Brookings Institution, a think tank in Dc, changed into a conversation about how precisely autonomous vehicles would behave within a crisis. What if a vehicle's efforts to avoid wasting its passengers by, say, slamming about the brakes risked a pile-up with the vehicles behind it? Or what if an autonomous car swerved to avoid a kid, but risked hitting another person nearby?"

Well, yes, those are the types of dilemmas you run into, but before we get into any of that, or into logic-based probability rules, there are even more serious dilemmas to ponder first. Let's talk, shall we?

You see, what some in the black-and-white world of programming fail to comprehend is that laws and rules are never that black-and-white, as there are always exceptions and circumstances. Poorly programmed AI would be a disaster for "what's right" in the eyes of those it is supposedly serving. Ethically speaking, this indeed ends up going against everything we stand for in a free country.
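To make that point concrete, here is a minimal sketch of what a rigid, rule-based crash policy might look like, and where the exceptions creep in. Everything in it is a hypothetical illustration for this discussion - the Hazard type, the choose_action function, and the rules themselves are assumptions, not anyone's actual system:

 from dataclasses import dataclass

 @dataclass
 class Hazard:
     kind: str    # e.g. "child", "adult", "vehicle" (hypothetical labels)
     count: int   # how many people are at risk on that path

 def choose_action(current: Hazard, alternative: Hazard) -> str:
     """Pick 'brake', 'swerve', or 'stay' from rigid if/else rules."""
     # Rule 1: brake if the current path is clear of people.
     # (But what if hard braking risks a pile-up from behind?)
     if current.count == 0:
         return "brake"
     # Rule 2: minimize the number of people harmed.
     # (But who decided how to weigh a child against two adults?)
     if alternative.count < current.count:
         return "swerve"
     # Rule 3: otherwise, hold course.
     # (The programmer has now pre-decided who lives and who dies.)
     return "stay"

 # A child ahead, two adults on the swerve path: the rule says "stay".
 print(choose_action(Hazard("child", 1), Hazard("adult", 2)))

Each rule looks reasonable in isolation, yet every comment marks a circumstance the rule cannot see. That is exactly the exceptions-and-circumstances problem: long before any crisis occurs, the programmer has quietly pre-decided the outcome.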

So, just how should programmers approach this dilemma as they pre-decide who might live or die in some hypothetical future situation? Yes, you can see the philosophical moral quicksand here. In reality, these and other challenges will follow these future-concept autonomous cars, and frankly, they'll be here before you know it.