
Oh no, not the Harvard moral psychology argument again. You know the one: a bystander sees five people on the track below, in the path of a fast-moving train they cannot see, while a single person stands on the other track, and the bystander is sitting at the switch box that can divert the train from one track to the other. What should they do? It's one of those moral dilemma challenges: if they throw the switch, they cause one person to die; if they do nothing, five people are going to be killed. With only seconds to act, what do they do?


Well, in walks the brand new future realm of artificial intelligence and autonomous cars. Most of us have been in situations where we needed to avoid something and swerved, sometimes risking damage to our own car to avoid hitting a kid who just rode his bicycle out in front of us. So, here goes the challenge. Consider this:

There was an interesting article in "Nature - International Weekly Journal of Science" titled "Machine ethics: The robot's dilemma - Working out how to build ethical robots is one of the thorniest challenges in artificial intelligence," by Boer Deng, published July 1, 2015. The article stated:

"In May, a panel talk on driverless cars at the Brookings Institution, a think tank in Washington DC, turned into attorney at law about how precisely autonomous vehicles would behave in the crisis. Imagine if a vehicle's efforts in order to save its own passengers by, say, slamming about the brakes risked a pile-up with the vehicles behind it? Or let's say an autonomous car swerved to prevent a kid, but risked hitting another person nearby?"

Well, yes, you will find those kinds of dilemmas once we get into any of that, whether rule-based logic or probability-based reasoning, and there are even worse dilemmas to ponder than the one above. Let's talk, shall we?

The truth is, what some in the black-and-white world of programming fail to comprehend is that laws and rules are never absolute; there are always exceptions and circumstances. Poorly programmed AI would be a disaster for "what's right" in the eyes of the people it supposedly serves. Ethically speaking, it ends up going against everything we stand for in a free country.
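To make that point concrete, here is a minimal, purely hypothetical sketch in Python. The scenario fields, the policy functions, and the rules themselves are all invented for illustration; no real vehicle software works this way. It simply shows how a black-and-white rule breaks down, and how every exception a programmer hand-codes becomes a moral decision made in advance:

```python
# Hypothetical sketch of a naive rule-based driving "ethics" policy.
# All names and rules here are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Scenario:
    obstacle: str        # what is in the car's path
    braking_safe: bool   # can we brake without causing a pile-up behind us?
    swerve_clear: bool   # is the adjacent lane actually empty?

def naive_policy(s: Scenario) -> str:
    """The black-and-white rule: always brake for an obstacle."""
    return "brake"

def patched_policy(s: Scenario) -> str:
    """Each exception we discover forces another hand-written branch,
    and each branch is a pre-decided answer to a moral question."""
    if s.braking_safe:
        return "brake"
    if s.swerve_clear:
        return "swerve"
    # Neither option is safe: the programmer has now pre-decided
    # who bears the risk, long before the situation ever occurs.
    return "brake"

if __name__ == "__main__":
    child = Scenario("child on bicycle", braking_safe=False, swerve_clear=False)
    print(naive_policy(child))    # "brake" -> risks the pile-up behind us
    print(patched_policy(child))  # still "brake", but now the choice is explicit
```

Notice that adding more branches never removes the dilemma; it only moves the moment of decision from the roadside to the keyboard.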

So, just how should programmers approach this dilemma, given that they are pre-deciding who might live or die in some hypothetical future situation? Yes, you can see the philosophical and moral quicksand here. This challenge and others like it will follow these future-concept autonomous cars, and yet, they will be here before we know it.