ChatGPT Raises Security Concerns for Businesses: Should It Be Banned?


OpenAI’s ChatGPT Is All the Rage in ’23
(But You Already Knew That)

OpenAI’s revolutionary ChatGPT has been the talk of digital spaces for months, making it virtually impossible to miss hearing about the innovative tool. It is an understatement to say that ChatGPT has taken the world by storm: following its launch in late November 2022, it amassed over 100 million monthly users by January 2023, just over two months after release. To put that into perspective, it took TikTok nine months to gain 100 million users. ChatGPT’s seemingly magical talents for generating essays, summaries, code, and even song lyrics can be attributed to its being a Large Language Model (LLM), according to CNBC, “which is programmed to understand human language and generate responses based on large corpora data.” ChatGPT’s current LLM is called GPT-3.5, a newly updated version of GPT-3. If you’re still wondering what makes ChatGPT and other LLMs so impressive, here’s your answer: the AI tool can generate intelligent, human-like responses on request, in a fraction of the time it would take to write a paragraph, essay, or piece of code on your own.
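For readers curious what “generating responses” looks like in practice, here is a minimal sketch of querying the GPT-3.5 model through OpenAI’s Python client as it existed in early 2023. The prompt is invented for illustration, and model names and client interfaces change over time.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code a real key

# Ask the GPT-3.5 model behind ChatGPT for a short piece of copy.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize our Q1 launch in two sentences."}],
)

print(response["choices"][0]["message"]["content"])
```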


Why ChatGPT Brings Up Security Concerns

As more people in the workplace enjoy the rapid pace at which they can now produce copy, many business leaders are growing wary of the technology’s possible consequences, specifically how it affects their organization’s security. The issue of employees inputting sensitive company information into the AI tool has recently emerged, leaving employers worried about the security of that information. In a recent report, security firm Cyberhaven found 3,381 attempts to paste sensitive corporate data into ChatGPT per 100,000 employees. ChatGPT utilizes user-inputted data to develop its capabilities and refine its comprehension. This dynamic approach keeps the tool constantly learning and up to date on modern conversational trends, but it also means that sensitive data entered as input can resurface as output. Below is a real-life example from Cyberhaven of how this can become a security issue.


“Using ChatGPT, a doctor enters a patient’s name and medical condition details to automatically generate a letter to the patient’s insurance company to support a medical procedure request. In the future, if a third party queries ChatGPT about the patient’s medical issue, ChatGPT can respond based on the information entered by the doctor.”
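One common mitigation is to scrub obvious identifiers before a prompt ever leaves the corporate network. The following is a minimal, hypothetical sketch, not any vendor’s product: a few regular expressions catch structured identifiers such as Social Security and phone numbers, while unstructured details like a patient’s name slip through, which is exactly why pattern matching alone is not enough.

```python
import re

# Hypothetical patterns for structured identifiers; real data-loss-prevention
# tools use much broader detection (names, record numbers, entity recognition).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholders before text leaves the network."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

draft = "John Doe, SSN 123-45-6789, phone 555-867-5309, requests prior authorization."
print(redact(draft))
# John Doe, SSN [SSN REDACTED], phone [PHONE REDACTED], requests prior authorization.
```

Note that “John Doe” survives the scrub: simple filters reduce exposure but cannot guarantee it, which is part of why some employers opt for outright bans instead.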

To make matters worse, fake malicious versions of ChatGPT have been deployed to steal users’ sensitive information. Guardio first reported that a Chrome extension claiming to give Facebook users quick access to ChatGPT had been hijacking accounts and installing backdoors onto devices. More specifically, the fake extension reportedly hijacked high-profile Facebook business accounts and used bots and paid ads to lure other Facebook users into downloading it. Guardio has since reported that the extension has been removed from the Chrome Web Store.


Companies Begin Banning Employee Use of ChatGPT 

With these security issues emerging, many global business leaders are cracking down on employee use of the AI writing tool. According to sources cited by CNN, financial giant JPMorgan Chase has prohibited its global employees from using ChatGPT due to compliance concerns around third-party software, and its employees were instructed not to input confidential information into OpenAI’s chatbot. JPMorgan is not alone in restricting staff use of the popular AI tool: Amazon has reportedly prohibited its team members from entering confidential customer information into ChatGPT, and Verizon and Accenture have implemented comparable measures. Other companies have also spoken out about the tool; IKEA’s VP for Digital Ethics, Nozha Boujemaa, commented on the possible risks that could result from AI tools.
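For organizations that do choose a ban, enforcement usually happens at the network edge rather than on the honor system. The sketch below is a hypothetical host check of the kind a web proxy or egress filter might apply; the listed hosts are examples only, and real deployments typically rely on commercial web filters or DNS policy rather than custom code.

```python
# Example hosts an IT team might block; actual policy and hostnames vary.
BLOCKED_HOSTS = {"chat.openai.com", "chatgpt.com"}

def is_allowed(host: str) -> bool:
    """Return False for a blocked host or any subdomain of one."""
    host = host.lower().rstrip(".")
    return not any(host == blocked or host.endswith("." + blocked)
                   for blocked in BLOCKED_HOSTS)

print(is_allowed("chat.openai.com"))   # False: blocked outright
print(is_allowed("sub.chatgpt.com"))   # False: subdomain of a blocked host
print(is_allowed("example.com"))       # True: allowed through
```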

As businesses continue to grapple with the challenges posed by ChatGPT, it remains unclear whether the technology should be banned altogether or if alternative measures should be implemented to mitigate potential security risks. While some argue that the tool offers unprecedented efficiency and productivity gains, others are concerned about its potential to compromise sensitive data. As the conversation around ChatGPT and other AI tools continues to evolve, it is clear that businesses must approach their use of these technologies with caution, prioritizing security and compliance to avoid potential consequences. Ultimately, deciding whether to ban or continue to use ChatGPT will depend on a range of factors, including individual companies’ security protocols, compliance requirements, and risk tolerance levels.

Written by Peter Burg

Peter Burg is Director of Business Development at AgileBlue, partnering with organizations that are looking for ways to make IT and cybersecurity work. Peter currently resides in Minnesota and is a big baseball fan.

March 16, 2023
