(Image credit: Jonathan Kemper)

Don’t type your secrets into ChatGPT

Does it ever feel like you’re “talking” to Google or ChatGPT?

The disarming use of everyday language leads many of us to treat these tools as though they were a magic mirror connected only to our keyboard. Phrases like “ask Google” or “check ChatGPT” trick us into thinking these tools are private – but they aren’t private at all.

It’s easy to forget that both Google and ChatGPT are connected to enormous databases, run by complex algorithms and backed by deep data analytics. The average user likely never considers that real people will almost certainly see whatever is typed into those harmless-looking search bars.

Earlier this April, South Korean technology giant Samsung painfully relearned this lesson.

According to reports, three of the company’s IT staff thought it was a good idea to enter confidential information into ChatGPT. One employee asked the chatbot to check sensitive source code for errors, another asked it to help optimise a string of crucial hardware code, and a third fed it a recorded team meeting and asked it to generate minutes.

Presumably, the three developers were under pressure and moving too quickly. That’s the most likely cause of the blunder. But it’s also possible they considered their session with the chatbot to be secure. 

Whatever the motive behind the mistake, it hardly matters now: OpenAI, the company behind ChatGPT, effectively has access to some of Samsung’s trade secrets, and there’s nothing Samsung can do about it.

ChatGPT does warn people about what happens when they use the system. Its data policy explicitly cautions users not to share sensitive information in conversations with the chatbot, since data typed into the system may be used to train OpenAI’s artificial intelligence models.

It’s hard to believe Samsung’s highly talented software engineers didn’t know the internet could be dangerous. After all, the first maxim of the internet is to assume your session is compromised and act accordingly.

(It is also worth pointing out that Samsung is working on its own ChatGPT-like AI service, so this whole story may well be carefully crafted PR.)

Assuming the reports about Samsung are accurate, the events offer plenty of lessons for protecting critical intangible assets such as data and confidential information.

The first lesson is this: if such a simple blunder can happen at a large company like Samsung, which relies on intangible assets to generate nearly all its revenue, what are the chances your company is teaching staff good privacy and security hygiene?

Unfortunately, EverEdge sees very little evidence that businesses are aware of the intangible assets they own, and even less evidence that staff know the risks of leaking those assets into the wild.

Simply put, companies generally have no idea how close they regularly come to losing everything due to simple human error. 

The second lesson is that it makes no difference that the Samsung staff weren’t acting maliciously. They knew their work was confidential and were acting in good faith, yet something still didn’t click to stop them from uploading those assets to an external system like ChatGPT. Why not?

The terrifying answer is that no one told them not to.

When staff are under pressure to produce new products and software, they should be able to fall back on rules set by management. There is always some scope for initiative, but following a clear procedure should be encouraged as the primary path.

That’s all fine in theory, but what happens when management doesn’t understand its most valuable intangible assets and therefore hasn’t set a robust procedure or strategy to mitigate the risk of leakage? This is a disaster waiting to happen.

A few years ago, EverEdge saw a similar situation that effectively scuttled a software start-up just as it was getting ready to expand into a global market.

The company had spent years developing a brilliant new technology for in-car navigation systems. It started small, but the success of the service surprised even the founders, who quickly realised they lacked enough in-house software engineers to develop the system to its full potential. The decision was made to outsource this task to a Germany-based software company.

A mid-level engineer was asked to send the German company the relevant code and the build was completed successfully and to a high standard. But when the time came to issue a new update, the German company seemed to have disappeared.

Then, six months later and in a completely different jurisdiction, the (previously German) company reappeared under a different name with its own in-car navigation software suite that looked suspiciously similar. 

The rival company had also raised €25 million in venture capital. To top off the betrayal, it was positioned much closer to the core European markets also targeted by the EverEdge client.

It turned out that the mid-level engineer had sent the outsourcing company the entire source code because that’s what he was told to do. He was following procedure.

The problem was that management hadn’t considered the source code to be a critical intangible asset. It wasn’t on the balance sheet or featured in the P&Ls, so it failed to appear on anyone’s risk radar. The company ultimately suffered not just the loss of its intangible assets but also over $150 million worth of forecasted revenue.

This brings us to the third and final lesson of this story.

In the Samsung case, neither ChatGPT nor the staff were directly at fault. The chatbot was only doing its job, as were the three software developers.

Blaming humans doesn’t help since it’s unfair to expect staff to be continually vigilant for the security implications of every decision they make while online. Some level of security training is worthwhile, but this training hits the point of diminishing returns pretty quickly.

A far more effective approach is to introduce good information controls at multiple levels – including, if necessary, restricting access to services like ChatGPT.
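What might those controls look like? Network-level restrictions (blocking or proxying access to external AI services) are one layer; scanning outbound text before it leaves the company is another. The Python sketch below is a minimal, hypothetical illustration of that second layer; the patterns, function names and blocking behaviour are all assumptions for demonstration, not a description of any real product or of Samsung’s systems.

    import re

    # Illustrative patterns only: a real deployment would rely on a
    # maintained DLP rule set, not a hand-rolled list like this one.
    SECRET_PATTERNS = [
        re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),      # PEM private keys
        re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                          # AWS access key IDs
        re.compile(r"(?i)\b(?:api[_-]?key|secret|password)\s*[:=]"),  # labelled credentials
    ]

    def contains_likely_secret(text: str) -> bool:
        """Return True if any secret pattern matches the outbound text."""
        return any(pattern.search(text) for pattern in SECRET_PATTERNS)

    def guard_outbound(prompt: str) -> str:
        """Refuse to forward a prompt that appears to contain a secret."""
        if contains_likely_secret(prompt):
            raise PermissionError("Possible confidential material detected; request blocked.")
        return prompt

    # A prompt like this would be stopped before it ever left the network:
    # guard_outbound("Can you debug this? api_key = 'abc123'")

Pattern-matching like this will never catch everything, but as a backstop behind training and sensible access restrictions it turns an unwritten rule (“don’t paste secrets into chatbots”) into an enforced one.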

This approach requires that management understand the value of the company’s intangible assets and create procedures to protect those assets. While there will always be human error in every process, that’s no excuse for laziness.

And in this case, laziness at the top of Samsung was precisely the problem.


Caitlin Burnett

Caitlin Burnett is a registered Trans-Tasman patent attorney and senior strategist at EverEdge, a global intangible asset advisory, corporate finance and investment firm.
