Consider your repetitive daily tasks. Do you, or your employees, follow a process to perform them? If so, then this job, or at least its repetitive part, will almost certainly be taken over by a robotic usurper.
No industry will be untouched by the “robot revolution”. Disruption is already occurring in many of the commoditised functions of accounting and legal advice, for example. Templated legal document services have already eroded simple legal advice, and computing systems can perform initial audits more efficiently and cost-effectively than humans, flagging inconsistencies for human review. Electronic settlement service PEXA, which digitalises property settlement, is a big step towards a self-directed conveyancing service. Even medicine is set to see much of its diagnostic imaging analysed by machines.
Perhaps under greatest threat are industries like finance, whose general functions are, in simple terms, the application of knowledge to data. Mortgage broking and insurance services in particular, which rely heavily on compiling client data and then comparing rates or premiums, are certain to see much lower levels of human involvement.
In the research paper “The Future of Employment”, published in 2013 by Oxford University’s Carl Frey and Michael Osborne, it was estimated that 47% of American jobs are at high risk of being automated.
What do we mean by robots?
Robots, in the sense we are talking about, are the programmatic application of algorithms to data sets: less the humanoid machines of sci-fi films like I, Robot or Bicentennial Man, and more akin to complex computational apps, such as the virtual assistants Siri and Cortana. The more of a task an app can reduce to computation, the more powerful its disruption. And when applications can be programmed to adapt their algorithms to incorporate new data and outcomes, we enter the exciting world of Artificial Intelligence (AI).
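To make that definition concrete, here is a minimal sketch of a “robot” in this sense: a fixed algorithm applied to a data set, echoing the audit example above. The transactions, the threshold and the audit rules are all invented purely for illustration.

```python
# A toy "robot": simple audit rules applied programmatically to a data set,
# flagging inconsistencies for human review. All data and rules are
# hypothetical, invented for illustration only.
transactions = [
    {"id": 1, "amount": 120.00, "approved_by": "A. Smith"},
    {"id": 2, "amount": 9800.00, "approved_by": None},
    {"id": 3, "amount": 45.50, "approved_by": "B. Jones"},
]

def flag_for_review(records, amount_threshold=5000):
    """Apply fixed audit rules to every record; return the inconsistencies."""
    flagged = []
    for record in records:
        if record["approved_by"] is None:
            flagged.append((record["id"], "missing approver"))
        if record["amount"] > amount_threshold:
            flagged.append((record["id"], "amount above threshold"))
    return flagged

print(flag_for_review(transactions))
# -> [(2, 'missing approver'), (2, 'amount above threshold')]
```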
Artificial Intelligence
According to Oxford philosopher Nick Bostrom, artificial intelligence can be broken into three categories:
1. Narrow AI
Narrow AI specialises in one area. Facebook’s ability to automatically tag friends in photos, or Netflix and Spotify serving personalised recommendations, are examples of narrow AI.
2. General AI
General AI is defined as a system as smart as a human across any intellectual task. Key traits of general AI would be the ability to think creatively, reason in the abstract and, importantly, apply learnings from a variety of experiences to problems in seemingly unrelated areas.
General AI is at least a few years off. For general AI to become a reality, we need to further refine the sensory interactions of computers with the world. These external stimuli would then need to be fed into an “artificial neural network”: a style of computing storage modelled on a biological brain, in which data inputs are stored in interconnected nodes rather than discretely. How the nodes interact with each other is shaped by learned experience; if a connection to seemingly unrelated data contributes to a success, that connection between nodes may be strengthened for the future, and vice versa.
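To make “strengthening connections” concrete, here is a minimal Python sketch of a single artificial node whose incoming connection weights are nudged up or down after each outcome. The training data and learning rate are invented purely for illustration; real neural networks contain millions of such connections.

```python
# Minimal sketch: one artificial "node" whose incoming connection weights
# strengthen or weaken based on outcomes (a simple perceptron).
# The training data below is invented purely for illustration:
# two inputs -> should the node "fire" (1) or not (0)?
training_data = [
    ([1.0, 0.0], 1),
    ([0.0, 1.0], 0),
    ([1.0, 1.0], 1),
    ([0.0, 0.0], 0),
]

weights = [0.0, 0.0]   # strength of each incoming connection
bias = 0.0
learning_rate = 0.1    # how strongly each outcome adjusts the connections

for epoch in range(20):
    for inputs, target in training_data:
        # A weighted sum of the inputs decides whether the node fires
        activation = sum(w * x for w, x in zip(weights, inputs)) + bias
        output = 1 if activation > 0 else 0
        # A wrong outcome weakens or strengthens connections accordingly
        error = target - output
        weights = [w + learning_rate * error * x
                   for w, x in zip(weights, inputs)]
        bias += learning_rate * error

print("learned connection strengths:", weights, "bias:", bias)
```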
Work on the building blocks of all of the above is already well under way.
3. Artificial Superintelligence or Singularity
The singularity occurs when computer intelligence surpasses human intelligence.
How is this possible? If true general AI is achieved, computers will be able to self-learn in a hypothetical digital environment, with simulated scenarios repeated at speeds limited only by energy movement and computational power. With enough computational power, computing intellect would advance very rapidly; so rapidly and profoundly that it could quickly become a billion times more powerful than all human intelligence. Whether this is actually possible or rests in the realm of science fiction remains unknown.
Have we reached a tipping point?
Potentially, yes! Evidence emerged this year when AlphaGo (a project of Google DeepMind) beat one of the world’s best human Go players. Go is an ancient Chinese board game with more possible move combinations than there are atoms in the universe. As a result, computers had previously been unable to beat top humans by using brute computational force to simulate combinations.
So how did AlphaGo win?
AlphaGo used two artificial neural networks: a “policy” network that filtered out poor moves and proposed a limited set of ‘intelligent’ candidates, and a “value” network that judged how promising each resulting position was. The shortlisted options could then be simulated far more manageably, and the program executed the move with the best predicted outcome. To train these neural-network ‘filters’, 30 million moves from games between expert human players were fed into the networks.
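As an illustration of that filter-then-simulate pattern, here is a minimal Python sketch. The functions policy_scores and simulate_outcome are invented stand-ins for AlphaGo’s policy and value networks; they return random scores purely so the example runs.

```python
import random

# Hypothetical stand-ins, invented for illustration: policy_scores() plays
# the role of the policy network (how promising is each move?) and
# simulate_outcome() the role of the value estimate for a position.

def policy_scores(board, legal_moves):
    """Score each legal move; a real policy network is a deep neural net."""
    return {move: random.random() for move in legal_moves}

def simulate_outcome(board, move):
    """Estimate the win probability after a move; stands in for the value network."""
    return random.random()

def choose_move(board, legal_moves, top_k=5):
    # 1. The policy network filters the vast move space down to a shortlist
    scores = policy_scores(board, legal_moves)
    candidates = sorted(legal_moves, key=scores.get, reverse=True)[:top_k]
    # 2. Only the shortlisted moves are simulated, keeping search manageable
    # 3. Execute the candidate with the best predicted outcome
    return max(candidates, key=lambda m: simulate_outcome(board, m))

print(choose_move(board=None, legal_moves=list(range(50))))
```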
Then, “once AlphaGo reached a certain degree of proficiency, it was trained further by being set to play large numbers of games against other instances of itself, using reinforcement learning to improve its play”[1]. The computer learnt new strategies by playing thousands of simulated games against itself. This is a big leap in the move towards general AI.
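Here is a heavily simplified sketch of that self-play loop, with every detail invented for illustration: two copies of the same agent play each other, and the moves on the winning side are reinforced.

```python
import random

# Toy self-play reinforcement learning. The moves "a", "b", "c", the win
# rule and the learning rate are all invented for illustration only.
preferences = {move: 0.0 for move in ["a", "b", "c"]}  # learned move values
learning_rate = 0.05

def pick_move(prefs):
    # Softly prefer moves with higher learned value (exploration kept simple)
    weights = [2.718 ** prefs[m] for m in prefs]
    return random.choices(list(prefs), weights=weights)[0]

def play_game(prefs):
    """Both 'players' share the same preferences: that is the self-play."""
    moves_p1 = [pick_move(prefs) for _ in range(3)]
    moves_p2 = [pick_move(prefs) for _ in range(3)]
    # Toy win rule, invented for illustration: more 'a' moves wins
    p1_wins = moves_p1.count("a") >= moves_p2.count("a")
    return moves_p1, moves_p2, p1_wins

for _ in range(2000):
    moves_p1, moves_p2, p1_wins = play_game(preferences)
    winner = moves_p1 if p1_wins else moves_p2
    loser = moves_p2 if p1_wins else moves_p1
    for m in winner:   # reinforce the winning side's choices
        preferences[m] += learning_rate
    for m in loser:    # and discourage the losing side's
        preferences[m] -= learning_rate

print(preferences)  # 'a' should end up with the highest learned value
```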
What should businesses do?
If your business’s service or product is the output of a series of processes or systematic functions, then you may find that within two to ten years competition emerges from robotic digitalisation. Ensuring that the value to clients or customers is connected to an intangible point of difference, whether it be people or goodwill, will help protect you from digital disruption.
And if, like many businesses, you have input elements that can be digitalised, then you may be a beneficiary of the robotic revolution. If your business is completely process- or systems-based, then perhaps you should consider leading the digital disruption!
About the author:
Matt Vickers CFP® is the principal adviser of
[1] https://research.googleblog.com/2016/01/alphago-mastering-ancient-game-of-go.html