AI Apps in the Enterprise: Safeguarding Employee Usage of AI

The surge in generative AI usage is reshaping the modern workplace. In February, ChatGPT set the record for the fastest-growing user base of any consumer application in history, much of that growth fuelled by businesses looking to leverage AI applications to optimise operations. Yet in the absence of clear regulation, enterprises adopting AI face glaring data challenges, with risks including sensitive data leakage and cyber attacks.

AI app landscape: growing adoption and input sensitivity

The true impact of the AI applications being used across businesses has yet to be fully determined. Netskope’s recent Cloud and Threat Report, analysing data from millions of users across thousands of global organisations, found that the number of users accessing AI applications increased by 22.5% from May to June this year. At the current growth rate, the popularity of AI apps in the enterprise will double within the next seven months.

ChatGPT is the most popular: the versatile chatbot has more than 8x as many daily active users as any other AI app. However, Google Bard is currently growing fastest, adding users at a rate of 7.1% per week, and on its current trajectory is on course to catch up with ChatGPT in just over a year.
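Both figures are simple compound-growth projections. As a rough back-of-the-envelope sketch in Python, numbers in this ballpark can be reproduced if we assume the 22.5% increase spans the whole May–June window and that ChatGPT itself keeps growing at a modest weekly rate; the 3% figure below is a purely hypothetical assumption, not a number from the report.

import math

# Doubling time for enterprise AI app usage, assuming the 22.5% increase
# covers the two-month May-June window and growth keeps compounding steadily.
two_month_growth = 0.225
monthly_rate = (1 + two_month_growth) ** 0.5 - 1
doubling_months = math.log(2) / math.log(1 + monthly_rate)
print(f"Usage doubles in roughly {doubling_months:.1f} months")   # ~6.8 months

# Weeks for Bard to close an 8x gap in daily active users with ChatGPT.
bard_weekly = 0.071     # reported Bard growth rate
chatgpt_weekly = 0.03   # hypothetical ongoing ChatGPT growth (assumption)
weeks = math.log(8) / math.log((1 + bard_weekly) / (1 + chatgpt_weekly))
print(f"Bard closes the gap in roughly {weeks:.0f} weeks")        # ~53 weeks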

As the number of daily enterprise users continues to grow, so does the attack surface for attackers seeking to exploit system vulnerabilities and steal sensitive information. In January this year, Microsoft acknowledged the problem and warned employees against sharing confidential information with AI applications at work. However, employer warnings have had little success in reducing the problem, and Netskope has found that in the average large organisation, sensitive data is uploaded to generative AI apps multiple times per day.

Source code, the text that defines how a computer programme works and is routinely proprietary corporate intellectual property, accounts for the largest share of sensitive data being exposed to ChatGPT, at a rate of 158 incidents per 10,000 enterprise users per month.

This trend is not entirely unexpected, considering ChatGPT’s ability to review and explain code, pinpoint bugs and identify security vulnerabilities. However, entrusting source code to ChatGPT means gambling with potential breaches, accidental leaks, and legal entanglements. Case in point: when Twitter’s source code was leaked onto GitHub this year, the social media company took legal action against the platform to have the code removed and to force it to reveal the identity of the leaker.

AI app vulnerabilities: exploitation and countermeasures

The hype surrounding ChatGPT and other generative AI apps has drawn the attention of scammers and attackers looking to capitalise on interest in an emerging technology to fool potential victims, whether for profit or for other malicious ends. The exploitation of the AI boom by malicious actors is unsurprising: attackers tend to gravitate towards popular services and trending topics, leveraging novelty and hype for illicit gain.

Netskope’s Cloud and Threat Report monitored various cases involving phishing campaigns, malware distribution initiatives, and a rise in spam and deceitful websites, and found that over 1,000 harmful URLs and domains are exploiting the allure of ChatGPT. This activity has grown recently with the rise of WormGPT and FraudGPT – LLMs sold on the dark web that help criminals write malware and phishing emails.

All of this underscores the necessity for a robust, multi-layered strategy to protect users from attackers attempting to capitalise on the hype and popularity surrounding any significant event or trend. This approach should include domain filtering, URL filtering, and content inspection to protect against both known and unknown attacks.
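To make that layering concrete, here is a minimal Python sketch of how the three checks might be chained for an outbound web request. The blocked domains, URL patterns, and payload signatures are invented for illustration only; a real secure web gateway draws on live threat-intelligence feeds and far deeper content inspection.

import re
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"chatgpt-free-login.example", "openai-giveaway.example"}    # hypothetical
SUSPICIOUS_URL_PATTERNS = [re.compile(r"chat-?gpt.*(crack|premium|free-login)", re.I)]
PAYLOAD_SIGNATURES = [re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----")]

def allow_request(url: str, body: str = "") -> bool:
    """Return False if any filtering layer flags the outbound request."""
    host = urlparse(url).hostname or ""
    if host in BLOCKED_DOMAINS:                                   # layer 1: domain filtering
        return False
    if any(p.search(url) for p in SUSPICIOUS_URL_PATTERNS):       # layer 2: URL filtering
        return False
    if any(p.search(body) for p in PAYLOAD_SIGNATURES):           # layer 3: content inspection
        return False
    return True

print(allow_request("https://chatgpt-free-login.example/promo"))  # False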

Balancing AI access and security

Faced with the risks of data loss via AI platforms, many companies are considering a total ban on generative AI in the workplace. However, there is little evidence that a total ban will eradicate, or even effectively reduce, the use of AI apps in the enterprise. Instead, employees will likely turn to shadow AI – unsanctioned tools used outside IT’s visibility – to retain the productivity these tools provide. Shadow AI leaves even less visibility over what information is shared with a platform, increasing the risk of sensitive data leakage by an order of magnitude.

Netskope Threat Labs’ findings reveal that financial services, healthcare, and the technology sector have been at the forefront of taking action to regulate ChatGPT usage. These controls vary by industry vertical. In financial services and healthcare, both highly regulated industries, nearly 1 in 5 organisations have implemented a blanket ban on ChatGPT, while in the technology vertical, only 1 in 20 organisations operate a ban.

Instead of outright blocking ChatGPT, organisations can – and should – aim to enable the safe adoption of AI apps. This can be done by identifying permissible apps and implementing controls that empower users to choose their preferred app while safeguarding the organisation from risk. That means combining three things in particular: cloud access security controls via a cloud access security broker (CASB), data loss prevention (DLP), and user coaching. DLP can be used to identify potentially sensitive data being posted to AI apps, including source code, regulated data, passwords and keys, and intellectual property. 1 in 4 of Netskope’s technology customers involved in the research are already using DLP controls to detect specific types of sensitive information (especially source code) being posted to ChatGPT.
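As a toy illustration of what such a DLP check on outbound prompts might look like, the Python sketch below flags a few kinds of sensitive content with simple pattern matching. The detector names and patterns are assumptions made for the example; commercial DLP engines rely on far richer techniques (exact data matching, document fingerprinting, ML classifiers) rather than a handful of regexes.

import re

DETECTORS = {
    "private_key": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    "aws_key_id":  re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "source_code": re.compile(r"^\s*(def |class |import |#include )", re.M),
    "card_number": re.compile(r"\b\d{13,16}\b"),
}

def classify_prompt(text: str) -> list[str]:
    """Return the names of the detectors that match the outbound prompt text."""
    return [name for name, pattern in DETECTORS.items() if pattern.search(text)]

print(classify_prompt("def rotate_keys():\n    secret = 'AKIAABCDEFGHIJKLMNOP'"))
# ['aws_key_id', 'source_code']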

User coaching combined with DLP is an impactful way to improve user compliance with company policy on the use of AI apps. One example is to prompt users when they attempt to share confidential data with a platform, leaving the decision of whether or not to proceed in their hands. The Cloud and Threat Report found 1 in 5 technology organisations implement real-time user coaching to remind users of company policy and the risks that come with ChatGPT and other AI apps.
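Continuing the toy example above, coaching can sit directly on top of the DLP check: a match triggers a policy reminder, and the final call stays with the user. This is a sketch of the pattern only, not how any particular vendor implements it.

def coach_and_decide(prompt_text: str) -> bool:
    """Warn the user when the toy DLP detectors above fire, then let them decide."""
    findings = classify_prompt(prompt_text)
    if not findings:
        return True                                    # nothing sensitive detected; allow
    print(f"Policy reminder: this prompt appears to contain {', '.join(findings)}.")
    print("Company policy restricts sharing sensitive data with AI apps.")
    return input("Proceed anyway? [y/N] ").strip().lower() == "y"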

Additional methods to safeguard enterprises during employee use of AI tools include regularly reviewing AI app activity, trends, and data sensitivity to identify risks to the organisation. Enterprises should also block access to apps that do not serve any legitimate business purpose or that pose a disproportionate risk. Furthermore, as opportunistic attackers seek to exploit the popularity of AI apps, pre-emptive measures such as blocking known malicious domains and URLs and inspecting all HTTP and HTTPS traffic can protect enterprises.

As with many technological advancements, security is often an afterthought. Safely enabling the adoption of AI apps in the enterprise is a multifaceted challenge requiring a considered approach. Businesses must recognise the advantages of AI and, most importantly, ensure their employees use it safely and securely to position themselves for success.

Ray Canzanese is Director of Netskope Threat Labs.

Netskope’s mission is to research the threat landscape to provide cloud-enabled enterprises the knowledge and tools to protect themselves. You can find their work on the Netskope blog and at top security conferences worldwide.
