
Not again, not the Harvard moral psychology argument again. You know the one: a bystander sees five people on the track below, with a fast-moving train bearing down that they cannot see, and a second track with only one person on it. The bystander stands far away at a switch box that can divert the train from one track to the other. What should they do? It's one of those classic moral dilemmas: if they throw the switch, they cause one person to die; if they do nothing, five people will be killed, and they have only seconds to act. So what do they do?


Well, in walks the brave new world of artificial intelligence and autonomous cars. Most of us have been in situations where we had to swerve to avoid hitting something, sometimes risking damage to our own car to avoid hitting a kid who just rode out in front of us on his bicycle. So, here is the challenge, you see:

There was a fascinating article in "Nature - International Weekly Journal of Science" titled "Machine ethics: The robot's dilemma - Working out how to build ethical robots is one of the thorniest challenges in artificial intelligence," by Boer Deng, July 1, 2015. The article stated:

"In May, a panel talk on driverless cars in the Brookings Institution, a think tank in Washington DC, become legal representative regarding how autonomous vehicles would behave in the crisis. What if an automobile's efforts to save its own passengers by, say, slamming for the brakes risked a pile-up with the vehicles behind it? Or suppose an autonomous car swerved in order to avoid a kid, but risked hitting another individual nearby?"

Well, yes, there are those types of dilemmas, but before we get into any of that, or into logic-based probability rules, there are even more serious dilemmas to ponder first. Let's talk, shall we?

The truth is, what some from the black-and-white world of programming fail to understand is that laws and rules are never absolute, because there are always exceptions and circumstances. Poorly programmed AI could be a disaster for "what's right" in the eyes of those it is supposedly serving. Ethically speaking, this indeed ends up going against everything we stand for in a free country.
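To make that point concrete, here is a minimal, purely hypothetical sketch of the kind of hard-coded rule a naive system might use. The function name, inputs, and scenario are invented for illustration; no real autonomous vehicle works this simply, which is exactly the problem:

```python
# Hypothetical sketch: a naive, hard-coded "ethical" rule for a crash dilemma.
# Everything here is invented for illustration only.

def choose_action(stay_course_casualties: int, swerve_casualties: int) -> str:
    """Pick whichever option harms fewer people. A pure body-count rule."""
    if swerve_casualties < stay_course_casualties:
        return "swerve"
    return "stay_course"

# The rule looks clean in black and white:
print(choose_action(stay_course_casualties=5, swerve_casualties=1))  # "swerve"

# But real situations carry exceptions the rule cannot express:
# - How certain are the casualty estimates in the first place?
# - Is the one person a child? A passenger the car is charged with protecting?
# - Does swerving violate a traffic law that exists for good reasons?
# None of that fits in two integers, which is the author's point about
# rules never being absolute.
```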

So how should programmers approach this dilemma as they pre-decide who might live or die in some hypothetical future situation? Yes, you can see the philosophical moral quicksand here. This and other challenges will follow these future concept autonomous cars, but mind you, they will be here before you know it.