
Not again, not the Harvard moral psychology argument again. You already know the main one: a person looks down at the track below and sees five people on it, with a fast-moving train bearing down that they cannot see, while the other track has a single person on it. The individual, standing at a distance, has the switch box that can divert the train from one track to the other. What should they do? It's one of those moral dilemma challenges: if they switch the track, they have caused one person to die; if they do nothing, five people will be killed. They have seconds to decide. What do they do?


Well, in walks the future world of artificial intelligence and autonomous cars. Most of us have experienced situations where we had to swerve to avoid something, sometimes risking damage to our own car to keep from hitting a child who had just ridden out in front of us on his bicycle. So, here is the challenge - you see:

There was an interesting article in "Nature - International Weekly Journal of Science" titled "Machine ethics: The robot's dilemma - Working out how to build ethical robots is one of the thorniest challenges in artificial intelligence," by Boer Deng, published on July 1, 2015. The article stated:

"In May, a panel talk on driverless cars with the Brookings Institution, a think tank in Dc, converted into a conversation about how precisely autonomous vehicles would behave in a crisis. Suppose a vehicle's efforts to avoid wasting its own passengers by, say, slamming on the brakes risked a pile-up using the vehicles behind it? Or suppose an autonomous car swerved to avoid a child, but risked hitting someone else nearby?"

Well, yes, those are the types of dilemmas we face before we even get into any of the logic-based or probability rules, and there are even more dilemmas that are far more serious to ponder than the previous ones. Shall we talk?

The thing is, what some in the black-and-white world of programming fail to understand is that laws and rules are never really that, as there are always exceptions and circumstances. Poorly programmed AI would be a disaster for "what's right" in the eyes of the people it is supposedly serving. Ethically speaking, this indeed ends up going against everything we stand for in a free country.
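
To make that concrete, consider a minimal, purely hypothetical Python sketch of what a hard-coded rule table for a crash decision might look like. Every name in it (Scenario, choose_action, the three fields, the rule ordering) is invented here for illustration; no real autonomous-vehicle software is this simple:

 # A deliberately naive, hypothetical sketch of hard-coded crash-decision rules.
 # Every name here (Scenario, choose_action, the rule ordering) is invented for
 # illustration; it is not how any real autonomous-vehicle system works.
 from dataclasses import dataclass
 
 @dataclass
 class Scenario:
     people_ahead: int        # people in the car's current path
     people_if_swerve: int    # people in the alternate path
     can_brake_in_time: bool  # would braking alone avoid all harm?
 
 def choose_action(s: Scenario) -> str:
     # Rule 1: brake if braking alone avoids the collision.
     if s.can_brake_in_time:
         return "brake"
     # Rule 2: swerve only if it strictly reduces the number of people at risk.
     if s.people_if_swerve < s.people_ahead:
         return "swerve"
     return "stay_course"
 
 # The trolley-style case: five people ahead, one on the alternate path,
 # and no time to brake. The rule table dutifully answers "swerve".
 print(choose_action(Scenario(people_ahead=5, people_if_swerve=1,
                              can_brake_in_time=False)))

The point is not the answer but everything the rule table cannot see: whether the one person is a child, whether swerving risks a pile-up with the cars behind, whether the sensor counts are even right. Each exception demands another field and another rule, and that regress is exactly the problem described above.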

So how should programmers approach this dilemma, as they pre-decide who might live or die in some hypothetical situation in the future? Yes, you can see the philosophical moral quicksand here - much more of this and other challenges will follow these future-concept autonomous cars, but mind you, they will be here before long.