Amid the rapid integration of generative AI tools into the workplace, Australian enterprises are wrestling with the security implications of their employees’ widespread adoption of these technologies while lacking a comprehensive grasp of the associated risks.
ExtraHop’s report delves into the strategies organizations have adopted to secure and govern the use of generative AI tools, revealing a significant cognitive dissonance among IT and security leaders.
Generative AI has clearly taken hold in the workplace, with a staggering 74% of IT and security leaders acknowledging that their employees frequently or occasionally use generative AI tools and Large Language Models (LLMs) on the job. Paradoxically, those same leaders appear uncertain about how to effectively address the resulting security vulnerabilities.
Security takes a back seat
Surprisingly, the research shows that IT and security leaders are more concerned about receiving inaccurate or nonsensical responses (40%) from generative AI tools than about security-related issues. Exposure of customer and employee personally identifiable information (PII) (36%), leakage of trade secrets (33%), and financial losses (25%) take a back seat in their list of worries.
Ineffectiveness of generative AI bans
Nearly one-third (29%) of respondents disclosed that their organizations have banned the use of generative AI tools, a proportion that roughly matches the share (34%) with high confidence in their ability to shield their systems against AI threats. The bans appear largely ineffective, however, as only 5% report that their employees never use these tools at work.
Demand for guidance, particularly from the government
The research underscores the need for guidance in navigating the generative AI landscape. An overwhelming 85% of respondents express a desire for government involvement, with 51% advocating for mandatory regulations and 34% endorsing government-set standards that businesses can voluntarily adopt.
Lack of basic hygiene in security measures
While 80% of IT and security leaders express confidence in their current security stack’s ability to thwart generative AI threats, the report exposes a lack of attention to basic security hygiene. Fewer than half of the surveyed organizations have invested in technologies that monitor generative AI tool usage, only 45% have implemented policies governing acceptable use, and just 34% train their employees on the safe use of these tools.
Since the launch of ChatGPT in November 2022, enterprises have had less than a year to grapple with the risks and rewards of generative AI tools. Given how quickly these technologies are being adopted, it is paramount for business leaders to understand how their employees use generative AI so they can identify potential security gaps and prevent the unauthorized sharing of sensitive data or intellectual property.
Raja Mukerji, Co-founder and Chief Scientist at ExtraHop, commented on the findings, saying, “There is a tremendous opportunity for generative AI to be a revolutionary technology in the workplace.
However, as with all emerging technologies that have become integral to modern businesses, leaders require more guidance and education to grasp how generative AI can be harnessed across their organizations and the potential risks it entails. By combining innovation with robust safeguards, generative AI can continue to reshape entire industries in the years ahead.”