
Oh no, not the Harvard moral psychology argument again. You know the one: a person sees five people on the track below, in the path of a fast-moving train they can't see coming, and another track with only one person on it, while the individual at a distance sits at the switch box that can divert the train from one track to the other. What should they do? It's one of those moral dilemma challenges: if they switch the track, they have caused someone to die; if they do nothing at all, five people are going to be killed. With seconds to act, what do they actually do?


Well, in walks the newest future world of artificial intelligence and autonomous cars. Everyone has been in a situation where we need to avoid hitting something and swerve; occasionally we risk damaging our own car to prevent hitting a child who just rode out in front of us on his bicycle. So, here is the challenge - you see:

There was an interesting article in "Nature - International Weekly Journal of Science" titled "Machine ethics: The robot's dilemma - Working out how to build ethical robots is one of the thorniest challenges in artificial intelligence," by Boer Deng, July 1, 2015. The article stated:

"In May, a panel talk on driverless cars with the Brookings Institution, a think tank in Washington DC, turned into a discussion about how autonomous vehicles would behave in a crisis. Imagine if a vehicle's efforts to save a unique passengers by, say, slamming around the brakes risked a pile-up with all the vehicles behind it? Or what if an autonomous car swerved to avoid a young child, but risked hitting another individual nearby?"

Well, yes, those sorts of dilemmas exist, but before we get into any of that, or into logic-based probability rules, there are other dilemmas that are even worse to ponder than the first sort. Let's talk, shall we?

You see, what some inside the black-and-white arena of programming fail to comprehend is that laws and rules are never really that absolute; there are exceptions and extenuating circumstances. Poorly programmed AI would be a disaster for "what's right" in the eyes of those it is supposedly serving. Ethically speaking, this indeed winds up going against everything we stand for in a free country.
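
To make that point concrete, here is a minimal sketch of the kind of rigid, count-based rule being criticized. It is purely illustrative: the names (Obstacle, choose_action) and the scenario values are invented for this example and do not reflect any real vehicle's software.

```python
# A deliberately naive, hypothetical sketch of "black-and-white" crash logic.
# Everything here is invented for illustration; no real autonomous-vehicle
# stack decides this way.

from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str   # e.g. "child", "adult", "pedestrians"
    count: int  # how many people are at risk

def choose_action(ahead: Obstacle, swerve_target: Obstacle) -> str:
    """Pick 'brake' or 'swerve' by comparing the number of people at risk.

    This is the hard-coded rule: minimize the count. It has no notion of
    exceptions, context, or degrees of risk, which is exactly the problem
    described above.
    """
    if swerve_target.count < ahead.count:
        return "swerve"
    return "brake"

# The rule looks reasonable at first...
print(choose_action(Obstacle("pedestrians", 5), Obstacle("adult", 1)))  # swerve

# ...until an exceptional case arrives that a fixed count cannot express:
# at 1 vs. 1, braking toward a child who darted out and swerving into a
# bystander are very different choices, but the rule is silent on that.
print(choose_action(Obstacle("child", 1), Obstacle("adult", 1)))  # brake
```

The second case is the blind spot: the rule can compare counts, but it has no vocabulary for the exceptions and circumstances that human judgment handles routinely.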

Now how should programmers approach this dilemma as they pre-decide who might live or die in some hypothetical future situation? Yes, you see the philosophical moral quicksand here - more of these and other challenges will follow these future concept autonomous cars, and yet they will be here before you know it.