At the start of this year, we all breathed a sigh of relief, thinking the era of unprecedented events was behind us. But as the year continued, it became clear that change on a macro scale is here to stay, and we now find ourselves in a perfect storm.
While the federal treasurer is confident Australia won’t head into recession this year, investment firm Deutsche Bank is forecasting one in 2023, driven by rising unemployment.
That aside, conflicts continue to disrupt global markets, and organisations all over the world are scrutinising their bottom lines, working out which investments are smart and why.
We’re already seeing the effects on the tech landscape: VC funding is declining, tech is de-coupling, access to data skills remains scarce and more complex regulations are coming into force.
With so much pressure to innovate, it can be hard to know what to focus on. But what’s clear is that achieving decision accuracy, and integrating siloed and distributed data sets to accurately see the big picture in real time, will be vital to survival and future success.
That’s why we’ve outlined five key trends that every data-driven business should act upon in 2023.
AI moves deeper into the data pipeline
As economic uncertainty continues, many will see a pullback on investment and hiring. However, with the global skills shortage continuing to impact companies of all sizes, ensuring technologies such as Artificial Intelligence (AI) and Machine Learning (ML) are able to automate some of the more menial data preparation tasks will be crucial.
By moving AI deeper into the data pipeline before an application or dashboard has even been built, we can finally start to shift the breakdown of time spent on data preparation versus data analytics.
Doing this would free hard-to-come-by data talent to focus on adding value, cross-pollinating ideas and generating insights that weren’t possible before. That is a far more productive use of their time.
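To make the idea concrete, here is a minimal sketch of the kind of menial preparation step such a pipeline could automate before any dashboard is built: detecting missing numeric fields and imputing them with the column median. The function name and data are illustrative, not from the source, and real AI-assisted tooling would do far more.

```python
from statistics import median

def auto_clean(records):
    """Fill missing numeric fields with the column median -- a toy stand-in
    for the menial preparation an AI-assisted pipeline would automate."""
    # Collect the observed numeric values for each column
    columns = {}
    for row in records:
        for key, value in row.items():
            if isinstance(value, (int, float)):
                columns.setdefault(key, []).append(value)
    medians = {key: median(values) for key, values in columns.items()}
    # Impute: replace each None with that column's median
    return [
        {key: (medians.get(key, value) if value is None else value)
         for key, value in row.items()}
        for row in records
    ]

raw = [
    {"revenue": 120, "units": 10},
    {"revenue": None, "units": 12},
    {"revenue": 80, "units": None},
]
clean = auto_clean(raw)
```

Automating even this small step shifts human effort from preparation towards analysis, which is the shift the trend describes.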
Invest more in derivative and synthetic data to prepare for unprecedented events
If the last few years have taught us anything, it’s the value of investing time and resources into risk prediction and management. Unfortunately, prior to COVID-19, there wasn’t enough real data on pandemics readily available to the average operation to prepare for such a crisis, but this is precisely where synthetic data plugs the gap.
Research suggests that models trained on synthetic data can be more accurate than those trained on real data alone, and synthetic data also avoids some of the privacy, copyright and ethical concerns associated with real data. Derivative data, meanwhile, lets us repurpose data for multiple needs and enables the scenario planning crucial to preparing for future issues and crises.
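As an illustration only: the simplest form of synthetic data fits a distribution to a small real sample and then draws as many artificial records as you need. The sample values and function below are hypothetical; production synthetic-data tools model much richer structure (correlations, categorical mixes, time dependence).

```python
import random
from statistics import mean, stdev

def synthesize(real_values, n, seed=42):
    """Draw n synthetic values matching the mean and spread of a real
    sample -- a toy Gaussian sketch of synthetic data generation."""
    mu, sigma = mean(real_values), stdev(real_values)
    rng = random.Random(seed)  # fixed seed for reproducibility
    return [rng.gauss(mu, sigma) for _ in range(n)]

# A small hypothetical "real" sample, e.g. daily demand during a disruption
real = [95, 102, 98, 110, 87, 105]
synthetic = synthesize(real, n=1000)
```

Because the synthetic sample is as large as you like and contains no real records, it can feed scenario models without exposing the underlying data.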
Be ready for natural language capabilities to rival humans
Many organisations have been using language AI in its basic form for some time now. Think about how often you’ve interacted with a customer support chatbot to resolve an issue with your bank or insurance provider. The popularity of this technology is set to grow at around 18% a year over the next few years, but it will also evolve dramatically. Several new models in development are significantly more powerful than anything we use today.
We can only imagine where those will take us, but we know that natural language capabilities will have huge implications for how we query information and how it’s interpreted and reported. We’ll find not only the data we’re looking for but also the data we hadn’t thought to ask about. That’s why businesses need to capitalise on this.
Mitigating supply-chain disruption with real-time data
The aftershocks of COVID-19 and continued global conflicts are still compromising supply chains. Anyone who has attempted to buy a new car (or a computer, or even something as basic as toilet paper) in the last few years knows how seriously supply chains have been impaired.
Disruption shows no sign of abating over the next few years, and neither does the need to react quickly, or ideally to “pre-act”: forecasting issues before they even start. The power to analyse data in real time and in context is key to this. It’s no wonder IDC predicts that by 2027, 60 per cent of data capture and movement tech spending will go towards enabling real-time simulation, optimisation and recommendation capabilities.
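A minimal sketch of what “reacting in real time” can mean in practice: watching a stream of readings and flagging any value that deviates sharply from the recent rolling window. The shipment figures and thresholds are invented for illustration; real supply-chain monitoring would combine many signals.

```python
from collections import deque
from statistics import mean, stdev

def monitor(stream, window=5, threshold=3.0):
    """Flag readings that deviate sharply from the recent rolling window --
    a toy sketch of real-time, in-context analysis. All values here are
    illustrative, not from the source."""
    recent = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(stream):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            # Alert when the new reading sits far outside recent behaviour
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                alerts.append((i, value))
        recent.append(value)
    return alerts

# Simulated hourly shipment counts with one sudden shortfall
shipments = [100, 102, 98, 101, 99, 100, 40, 101]
alerts = monitor(shipments)  # the drop to 40 is flagged as it arrives
```

The point is timing: the anomaly is surfaced the moment it appears in the stream, not in next week’s report, which is what gives teams the chance to “pre-act”.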
An X-fabric connects data as governance becomes even more complex
Investment in data and analytics has dramatically accelerated thanks to the pandemic and will continue to do so, with 93% of companies indicating they plan to keep increasing budgets in these areas. But rapidly shifting rules and regulations around privacy, as well as the distribution, diversity and dynamics of data, are holding back organisations’ ability to squeeze the best competitive edge out of it.
This becomes especially challenging in a fragmented world as data governance becomes even more complex. Improving access, real-time movement and advanced transformation of data between sources and systems across the enterprise is crucial to organisations realising the full power of data.
This is why an increasing number of businesses are turning to a data control plane architecture: an “X-fabric” not just for your data but also for your applications, BI dashboards and algorithms, enabled by catalogues and cloud data integration solutions.
This is a critical component in the modern distributed environment for any organisation that wants to act with certainty.
The good news is that after the last few years, we’re all better prepared to roll with the punches than ever before. As data and analytics professionals, we need to adjust to more fragmentation, with disparate data centres, disrupted supply chains, the consistent need for innovation, and hampered access to skilled labour.
And in a world where crisis has become a constant, calibrating for it becomes a core competence, so we can react in the moment and anticipate what’s coming next.