When Does Technological Progress Actually Become Dangerous?
So we're all familiar with the doomsday scenario, right? You know, the one where the human race develops artificial intelligence that supersedes our own and takes over, rendering us all obsolete.
It's been tackled in countless books and Hollywood blockbusters, and is even explored in the recent hit Channel 4 drama "Humans".
But exactly how far-fetched is it? Could it actually happen? Will we allow it to?
These are all quite deep scientific, theological and philosophical questions, but if we stop and really think about it for a few minutes, I'm sure we can all recall "micro" instances where this has already happened. What about cars that park themselves? That's now mainstream technology. Or simple things like voice recognition used to activate "stuff".
It's crept up on us all, and it demonstrates our appetite for a direction of travel where the end game is AI comparable to our own. To have our own "pet" robot remains the stuff of dreams for many.
So when I came across the story of the "Samsung Safety Truck" being tested in Argentina, it got me thinking that this could be an example of technology becoming dangerous. Not in the "Hollywood" sense where robots take over the world, but in a real-world sense, where we try to use technology to solve one problem but end up creating a much bigger one.
Road safety attracts huge investment. Whether driven by compliance or ethics, it's one of those areas where the motor industry and governments look for continual improvement. Quite right too! But is the Samsung Safety Truck an example where over-reliance on technology will in fact cost lives, rather than save them?
So the concept consists of four cameras mounted at the front of the truck, with the feed wirelessly displayed on four exterior monitors fitted to the back. The system operates in both day and night modes. This sounds simple, right? What could possibly go wrong?
Here's a video which shows the Samsung truck 'in action':
I’m sure you are already ahead of me here, but let me suggest a few “real world” problems that might occur:
- A driver rear-ending the truck after believing the road ahead to be clear
- Overtaking the truck based on the image displayed, but misjudging the distance to oncoming traffic
- Simply misreading the image and not seeing the oncoming traffic at all
Could this run the risk of creating a false sense of security? Encourage bad driving habits? And what would happen if reliance on the technology caused an accident? Can you imagine the legal mess?
Unless the cars following the truck had technology that prevented overtaking based on the image displayed (which isn’t being developed), I think this has the potential to create a much bigger set of problems than it solves, compared with simply improving the basic skills of driving and road safety. So it could be argued that this is a perfect example of where technology won’t help? Where technology actually becomes dangerous?
Based on testing already completed, Samsung expects this technology to take off. I’m sure there will be arguments for and against long after it’s either been dropped or adopted, but for my own peace of mind where driving is concerned, I think I will continue to trust my own eyes and judgement based on what I can see, without the use of a camera!
What do you think? Do you agree with the Samsung Safety Truck idea? Or do you have an opinion on how it could be executed better? Let me have your thoughts by commenting below.