Almost every day, there is another headline touting the latest advancement in artificial intelligence (AI).
And almost every time, the subject of the story isn’t actually AI.
While we have made huge advancements in automation and natural language processing (which is usually what those articles are about), neither of these is really AI.
Both are crucial steps on the journey towards AI, but a home speaker that turns the lights on when I ask it to isn’t exactly intelligent – artificially or otherwise.
That’s not for want of trying. With Australia’s AI spending heading for $3.6 billion, and $44 million in Government grants available to develop AI and digital capability centres, it’s a destination we’re determined to reach. But we have not moved as far over the past decade as many would believe.
The challenge lies in how we define AI. In its simplest form, it should be a program that is able to draw on its own past experiences, think for itself, and come up with new answers, creations, or processes on its own.
Some call this level of advancement ‘singularity’ or ‘General AI’, but can anything less really be called artificial intelligence?
Today’s breed of ‘AI’ consists of algorithms with a finite and defined purpose – also known as ‘weak’ or ‘narrow’ AI. They are constrained by how they have been programmed and essentially boil down to a series of ‘if x, then y’ commands.
The machine learning and automation we have today definitely make our lives easier, but they are reactive – they need to be told what to do and how to do it before acting.
Take shopping centre parking lots as an example.
What is currently being described as ‘AI’ is a program that monitors how many parking spaces are occupied and how many are free. Perhaps it does this through sensors in each spot; maybe it is achieved through object recognition via smart CCTV. Either way, the end result is something like this: occupied spots display a red light above them, available spots show a green light, and an LED screen at the car park’s entrance tallies up and displays the number of remaining free spots.
While this is an impressive feat of automation and convenient for shoppers, it is far from AI.
It still relies on an algorithm that essentially tells the program that if there is a car in a spot, show a red light; if there is not, show a green light, then count the free spots and display that number (if x, then y).
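To make the distinction concrete, here is a minimal sketch of that ‘if x, then y’ logic in Python. The sensor input and the light and sign functions are hypothetical stand-ins, not any real car park system’s API – the point is simply that every behaviour is spelled out in advance by the programmer.

```python
# A minimal sketch of the car-park automation described above.
# Every rule is hard-coded: the program can only do what it was told.

def update_car_park(spots):
    """spots: list of booleans, True if a sensor reports a car present."""
    free_count = 0
    for spot_id, occupied in enumerate(spots):
        if occupied:
            set_light(spot_id, "red")     # if x (occupied), then y (red light)
        else:
            set_light(spot_id, "green")   # otherwise, green light
            free_count += 1
    display_entrance_total(free_count)    # tally shown at the entrance

def set_light(spot_id, colour):
    print(f"Spot {spot_id}: {colour} light")

def display_entrance_total(count):
    print(f"Entrance sign: {count} spaces free")

# Example run: three occupied spots, two free
update_car_park([True, False, True, True, False])
```

However the occupancy data is gathered, the decision-making never goes beyond these fixed rules – which is exactly why it is automation rather than intelligence.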
True AI in the shopping centre car park would make far more intelligent decisions without necessarily being told what it should be looking for. What might this look like?
By analysing the makes and models of each car entering the car park, it could estimate the average purchasing power of shoppers, discover trends in when different cohorts visit the shopping centre, and advise each store on optimum staffing levels.
To make this more accurate, it might track the brands on the shopping bags patrons are leaving with, drawing on that experience over time to uncover when specific stores are likely to be busier.
On the loss prevention front, it might identify that a patron recently caught shoplifting has entered the car park and notify stores to be vigilant. Perhaps it could identify when a stolen car, or a car with stolen licence plates, enters the facility.
If it identifies that more parents take their children to the centre at particular times and days, this insight could be shared with centre management to make sure the big fluffy mascot makes an appearance or children’s entertainers are prioritised at that time.
With a true AI, the point is that we wouldn’t necessarily know what insights it would discover or what connections it would make between seemingly disparate data points.
A big hurdle between where we are today and where we think we are is the bottleneck of computing and storage.
For an AI to be able to take in all its past experiences and all the necessary data points and create its own thought processes, it needs truly massive amounts of data it can analyse immediately.
Until we figure out how to overcome this bottleneck, we’ll continue making better mouse traps – but they won’t be telling us anything new about mice.