Sunday, November 16, 2014

Machines Will Kill Us - And It's Not That Bad

All transportation carries a certain amount of risk. For as long as transportation has existed, there has been the risk of falling overboard, falling off a horse, crashing a car, a train, a plane, and so on. Danger in travel isn't new. What's different this time around is that death can happen not from human error, but from machine error.

But it's not that bad. Studies suggest that, however unfortunate any death is, we'll be better off with machines at the wheel than with ourselves there. So far, in the hundreds of thousands of miles driven by autonomous cars, there have been only two accidents. The first was the fault of the driver behind the car, who rear-ended it at a stop light; the second was the fault of the car's occupant, who was driving the autonomous car manually. Why haven't the cars themselves crashed when car accidents are so common? Simply because we're too human: the vast majority of car accidents, around 90%, are attributable to human error or distraction.

First "autonomous car" accident


So overall, we'll be much better off having the roads swarming with these excellent drivers, and deaths, injuries, and repair costs from auto accidents will plummet.

But, as any programmer knows, no algorithm is perfect. Eventually, sometime in the future, a driving algorithm will miscalculate, a falling rock will hit a car, or a car will fail to account for hydroplaning or black ice, and someone's death will be at the wheel of the machine.

How would you feel if a loved one died in such an incident? You would be angry, of course. You would need closure, some type of retribution, someone to blame.

Who is to blame here? The person for taking the risk of being a passenger in an autonomous car? The manufacturer for producing the car? The programmer, for making the algorithm not perfect enough to avoid the dangers that come with travel? Or, maybe, the car itself?

The answer isn't any one of the above, but the question alone is enough to cause problems. It raises several moral, legal, and ethical questions about the incorporation of autonomous vehicles into our lives (and deaths).

The question can't be ignored, but the vehicles shouldn't be kept off the road: the safety of millions of travelers shouldn't be sacrificed over the philosophical conundrum of blaming something without free will for the death of a person. Like any new technology, this is a hurdle it will have to face, and one we'll need to cope with for the betterment of humanity.

Sources:

image: Business Insider http://static5.businessinsider.com/image/4e3c15c769bedd7a5a000032/google-car-accident.jpg

6 Comments:

At November 18, 2014 at 12:17 PM , Blogger Unknown said...

Hi Nathan,
I really enjoyed your post because of the topic. It’s scary how quickly the future of autonomous cars is approaching. It’s still a process, and as you mentioned, no algorithm is perfect, so it will continue to need improvement. I agree with your point about the difference between a human killing us and a machine killing us. For example, if a drunk driver hit another driver, causing serious harm or loss of life, I would be extremely angry at the individual who hit the sober driver. If a piece of machinery caused serious harm to an individual, the cause is most likely an accident. Overall, your post was great. Keep up the good work!

 
At November 18, 2014 at 2:02 PM , Blogger James V said...

Nathan,

Very interesting topic. It would be helpful to provide more information about the autonomous-car experiment in your second paragraph; I am assuming it is about the Google car, judging from your source. I do agree with your statement that many accidents come from human mistakes: because we are human, we make mistakes. Being a car enthusiast, I enjoy driving, and the idea of autonomous cars does not appeal to me much. However, I know for certain that many people would benefit from not driving manually. Accidents would drop, but as you mention, if an accident does occur, who is to blame? Great thought-provoking blog; I am eager to read how you will expand on this topic.

 
At November 19, 2014 at 5:18 PM , Blogger davidfleming said...

This is a conundrum. As it currently stands, when a single machine causes an accident, it's just that: an accident. But in some cases, usually when there is a recurring theme, the fault could lie with the designer or developer. In that case, whom do you blame, and how much of it is really their fault? As it is, companies are willing to take a risk when releasing a product, betting that if there are issues there will be few, so that they don't have to lose money on a recall. Of course, that brings up the question of whether that risk is less than the risk of death from human error. Either way, there will probably need to be some sort of regulation to make sure that a certain level of quality and precision is maintained.

 
At November 19, 2014 at 9:10 PM , Blogger Unknown said...

I think the legal issue surrounding machine error isn't actually as big of a deal, ethics-wise, as we think. While I agree that a loved one dying in a glitch would be absolutely terrible, I don't think society would object to assuming some of that risk in exchange for the benefits that would come with such a culture. We already legally sacrifice so many of our rights and privileges when we sign forms without reading the fine print, or even when using social media apps. I think this type of case, assumed risk, is something society would easily accept. Is it just? Probably not. But then again, I don't write the rules for society...

 
At November 20, 2014 at 9:04 PM , Blogger Unknown said...

I really like this post. My previous comment on your first post asked about the negative consequences, and you really highlighted them for me here. At first, reading the first post, I thought these cars would cause more accidents, but now I can see why they would help us in the long run. I found the statistic you presented interesting: only two accidents, and neither could be blamed on the actual car! This definitely holds water and could apply to the future.
I do agree with your point about whom to blame if an error occurs and a crash results. I would hope that if a court case like this were to arise, the scientists behind the algorithm would not be at fault, since an algorithm that isn't "perfect enough" shouldn't be blamed on them; rather, it was the individual's choice to ride in one of these cars. Awesome job!

 
At November 22, 2014 at 2:46 PM , Blogger Carlos Ayala said...

I really like this post; it made me think a lot. But I don't agree with the idea: how would a computer react if a child ran into the street and, in order to avoid the child, the car had to swerve into oncoming traffic, putting other people's lives in jeopardy? This situation can quickly escalate if the "child" is really a dog, and the autonomous vehicle just took someone's life for an animal mistaken as a child. So I don't think we're ready for it yet, or I should say, the technology isn't ready.

 
