The Curious Incident of the Robot in the Daytime
When I read Andy’s blog last week about the pilot scheme for the new Samsung Safety Truck, I found myself nodding along to most of the points he made.
If you haven’t read it yet, it’s a really interesting one. The Samsung Safety Truck was designed to address Argentina’s widespread problem of road deaths caused by drivers overtaking when they shouldn’t. Four screens on the back of the truck display a live feed of the road ahead, showing the driver behind when the path is clear to overtake.
Andy’s take on this was that there is a certain point at which technology can defeat the purpose it’s trying to serve. He also proposed some possible unintended consequences of this initiative, such as drivers fixating on the screens and rear-ending the truck, or misjudging the distance to oncoming traffic.
I agree that an over-reliance on technology can introduce bad habits; this applies to all sorts of scenarios. In a situation like being behind the wheel, it certainly has the potential to put not only ourselves but, more importantly, countless other people in danger.
What struck me most, however, wasn’t the technology or how good, bad or well thought out it was. (And don’t get me wrong – if it saves more lives than it takes, then it’s certainly something worth considering.) It was the fact that human activity still had a massive impact, even though the intention was to take the human factor out of the equation.
Technology can help you make better decisions, I’m not denying that. But when, how, and to what extent you make that decision is still down to you. Artificial intelligence? Nope – this is still human intelligence; you’re just acting on a different set of information.
Andy spoke at the beginning of his blog about the ‘doomsday’ scenario, i.e. robots becoming so intelligent that they gain the ability to trigger the apocalypse. It’s this scenario that prompts many of us to withhold a certain amount of trust in the technology behind AI.
But have we ever considered the possibility that maybe we are the ones not to be trusted?
Take the recent tale of HitchBOT. HitchBOT was a robot: part of an experiment conducted by Canadian university professors to test how human beings would interact with AI they hadn’t been forewarned about.
Before I go on, I will say that there is a degree of silliness to this story. So if you don’t like silliness, I won’t mind a jot if you decide to leave me at this point. But I hope the point I’m making remains valid regardless.
So, I say HitchBOT ‘was’ a robot. Because unfortunately, HitchBOT is no more…
HitchBOT was solar powered, had the ability to engage in conversation, took photos of its surroundings every 20 minutes, and came with an outstretched thumb and instructions (as the name suggests) for humans who could help the robot travel across the United States in their cars. His journey was supposed to take him from the east coast to the west coast, with San Francisco as the intended destination.
Now, I don’t mean to be cruel, but HitchBOT might just be the least impressive robot I’ve ever seen. And yes, I’ve seen Peter Crouch’s dance moves.
If it had looked a bit more like Optimus Prime, then I would understand perhaps a little more hesitancy to acquiesce to the robot’s demands. (Come to think of it, that was a bad example. Why would Optimus Prime need to hitchhike when he comes with his own set of wheels?)
But it had a smiley face and a polite demeanour, and so HitchBOT, having already successfully travelled 10,000km across Canada earlier in the year, started its journey in Massachusetts. Many intrigued/kind-hearted/had-nothing-better-to-do Americans took note of the instructions and helped HitchBOT on its way.
Its journey came to an abrupt end far short of the intended destination, however, due to the violent actions of one lone vandal in Philadelphia. HitchBOT had its arms and legs ripped off and was decapitated. The experiment was swiftly concluded because the robot was beyond repair.
So, is this the way we all see technology? Personally, I believe that the actions of one individual do not a sweeping statement make – and the outcry on the internet, as well as the vast majority of people who wanted to help HitchBOT in the first place, suggests that this isn’t how we will all act when put face to face with artificial intelligence.
But that’s really not the issue. I think this story highlights that we are still in charge of our own decisions, and emotional capacity (or lack thereof) remains the trigger. This is despite the arrival of ever more technology that could suggest we don’t have to make any decisions at all. We might have different decisions to make, but they are still there to be made. We are all responsible for our own actions and the consequences they create. And I really don’t see that changing.
By the way, for anyone concerned about the legacy of HitchBOT, its makers have stoically confirmed that they will go on. HitchBOT will return.
My trip must come to an end for now, but my love for humans will never fade. Thanks friends: http://t.co/DabYmi6OxH pic.twitter.com/sJPVSxeawg
— hitchBOT (@hitchBOT) August 1, 2015