Ray Delany

Artificial Intelligence Starts With Natural Intelligence

Updated: Apr 19


Recently I listened to a podcast in which a representative of the NZ Institute of Directors was talking about digital disruption. As often happens in these kinds of discussions, the subject of artificial intelligence and driverless cars came up, complete with the usual tired old scenario, which can be summarised as: "what happens when the car has to decide between killing the pedestrian in front of it or killing the driver?"

A few years ago I travelled with my wife and another couple to the Great Australian Desert, in the middle of summer. At one point, somewhere between Coober Pedy and Kulgera, I was at the wheel of the car, travelling in air-conditioned comfort along an absolutely straight and flat road at a perfectly legal 110 km/h, when a large lizard ran out into the middle of the road and stopped dead a couple of hundred metres in front of the car.

I learned later that this is not uncommon. Since reptiles are cold-blooded they seek out warm places early in the day, and the black strip of tarmac running right through their habitat looks perfect to them. They might be smart enough to realise that only a small number of humans are crazy enough to be driving along it at the height of summer, and that the risk is therefore low, but I doubt it. I suspect lizards are, like humans, more driven by instinct than calculation.

Now apply that situation to a driverless car. What is likely to happen goes like this (there is a rough sketch in code after the list):


  • The car's sensors detect the lizard as an obstacle to be avoided.

  • A calculation is made taking into account the speed of the vehicle, the stopping distance required and, potentially, a complex equation involving the capability of the ABS braking system and the state of the road surface.

  • If this calculation indicates that the car can stop safely, the brakes are applied to exactly the extent required to make that happen without losing traction or throwing the passengers about too much.
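To make the second and third bullets concrete, here is a minimal sketch in Python of the kind of calculation involved. Every number and name in it (the assumed deceleration, the latency, the safety margins) is an illustrative assumption of mine, not anything taken from a real vehicle's control system:

```python
# A minimal, illustrative sketch of the stopping decision described
# above. All figures and thresholds are assumptions for the example.

def stopping_distance_m(speed_kmh: float, decel_ms2: float,
                        system_latency_s: float = 0.05) -> float:
    """Distance needed to stop from speed_kmh, given an assumed
    achievable deceleration (limited by ABS and road surface) and
    a small sensing/actuation latency."""
    v = speed_kmh / 3.6                      # convert km/h to m/s
    latency_travel = v * system_latency_s    # distance covered before braking starts
    braking = v ** 2 / (2 * decel_ms2)       # classic v^2 / 2a braking distance
    return latency_travel + braking

def decide_braking(speed_kmh: float, obstacle_m: float,
                   decel_ms2: float = 8.0) -> str:
    """Return a (very simplified) braking action for an obstacle ahead."""
    needed = stopping_distance_m(speed_kmh, decel_ms2)
    if obstacle_m > needed * 1.5:
        return "brake gently"      # ample margin: stop smoothly
    elif obstacle_m > needed:
        return "brake firmly"      # enough room, but use it
    else:
        return "emergency brake"   # stamp harder than any human would

# The scenario from the story: 110 km/h, lizard roughly 200 m ahead.
# At an assumed 8 m/s^2 (dry tarmac, good ABS) the car needs only
# about 60 m, so it stops comfortably.
print(decide_braking(110, 200))   # -> "brake gently"
```

A real autopilot would re-run this kind of decision many times a second against a stream of sensor data; a single function call is only the shape of the idea.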

Is the vehicle "deciding" what to do? Of course not. The machine is simply applying an algorithm devised by its human creators to a predictable scenario. According to the Turing test, if you can't tell the difference between how a human would respond and how a machine would respond, then the machine is for all practical purposes thinking, but let's leave that aside and explore the scenario a bit further.

The likely reality is that a car equipped with a sophisticated automatic pilot will also have lots of other state-of-the-art engineering on board. Most (human) drivers do not have the skill to deploy that technology aggressively and effectively in emergency situations. In simple terms, we react slowly and cautiously, more so as we get older and more experienced. (Every parent knows how heavy a teenager's foot can feel on both accelerator and brake pedal.) Few of us have the speed of reaction and the nerve to do what we probably should do in a real emergency stop, which is to stamp harder on the brakes than we've ever done before.

But the car's tech has no such inhibition. It can process the scenario and react in milliseconds, far faster than even the fastest reactions of a highly trained Formula One driver. By all accounts, being in a vehicle on auto-pilot when this happens is an unnerving experience for the few moments before the human passengers realise that they are safe, the car has stopped and the lizard has survived.

The professional software developer does not, of course, stop with a single scenario, or with one as simple as that described above. Questions are always asked to probe the boundaries and ethical dilemmas of the problem (a sketch of how such questions become tests follows the list):


  • What if the car can't stop in time?

  • What if the lizard moves to the left or right?

  • What if it turns and runs towards the approaching vehicle?
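In practice, this kind of questioning ends up encoded as scenario tests. Here is a hypothetical sketch that assumes the decide_braking function from the earlier example is defined in the same module; the scenarios and expected actions are my illustrative assumptions, not a real autopilot test suite:

```python
# Hypothetical scenario tests for the edge cases above. Assumes the
# decide_braking sketch from earlier is defined in this module.
import unittest

class LizardScenarioTests(unittest.TestCase):
    def test_ample_stopping_room(self):
        # 200 m of warning at 110 km/h: the car stops gently.
        self.assertEqual(decide_braking(110, 200), "brake gently")

    def test_cannot_stop_in_time(self):
        # Obstacle appears only 40 m ahead, less than the ~60 m
        # needed, so the car must brake as hard as physics allows.
        self.assertEqual(decide_braking(110, 40), "emergency brake")

    def test_obstacle_closing_the_gap(self):
        # The lizard turns and runs towards the car: the effective
        # distance shrinks, so the decision must be re-evaluated on
        # every sensor cycle, escalating as the gap closes.
        actions = [decide_braking(110, d) for d in (150, 80, 50)]
        self.assertEqual(actions,
                         ["brake gently", "brake firmly", "emergency brake"])

if __name__ == "__main__":
    unittest.main()
```

The third test carries the real point: because the lizard can move, the decision is not made once but re-evaluated continuously.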


Even these are the simple, straightforward cases. Other, more arcane conditions have to be explored as well. What if an alien probe flies in front of the car at the same instant, momentarily interrupting the sensor data-stream? (OK, a bird then.)

Most software engineers I know have a fine sense of black humour, but I don't know any who would seriously work the scenario "at what point do we kill the lizard?" That's because software engineers are oriented toward success, not failure: they work to achieve situations where everyone survives, every time. No single points of failure, all imaginable cases considered and allowed for. Everything tested, reworked and tested again. To those not used to this process it can seem endlessly time-consuming and frustrating. But we want it to be that way, because to a soulless piece of tech there's no practical difference between a large lizard and a small child.

Sometimes this process of refining requirements and scenarios takes years, and it always takes a lot of time (and therefore money). Great software is usually not built in the first version. Rather, it grows like a tree: a slender sapling at first, still strong and true within its limits, but no-one should expect a trunk thick enough to withstand a big storm until some time has passed.

Business leaders don't need to occupy their minds with arcane discussions about artificial intelligence. They need to use their own natural intelligence to ensure that the people they employ, whether on staff or as external contractors, are discussing real scenarios with real facts. The ability to discern real facts from the alternatives will be the determining factor in success for the foreseeable future.

And yes, I know AI is not just about algorithms (but more on that later, when I'll also reveal what happened to the lizard).

This post was originally published on www.designertech.co.nz
