
Dawn of Generative AI: Can ChatGPT Overcome Privacy Concerns?

Generative AI tools like ChatGPT have taken the world by storm. Within two months of its release, ChatGPT reached 100 million active users, making it the fastest-growing consumer application ever launched. Recently, however, it has been in the news over the potential privacy risks it poses for users and businesses. As AI capabilities continue to evolve, technology companies are getting locked into a fierce arms race. The problem is, this race is fuelled by our personal data.

Amit Singh

If you have been following the recent news or scanning through your social media feed, then you must have noticed the rage of the hour: ChatGPT. Since its debut in November last year, ChatGPT has been attracting headlines for its highly anticipated potential, which—depending on your view of artificial intelligence (AI)—will either transform work for the better by helping us with mundane and complicated tasks or will encroach disastrously into territory best navigated by humans.

Owing to its chat box format, ChatGPT allows users to request information on highly specific topics—from quantum computing and processing code to the mechanics of the human heart or the principles of flower arrangement. In this manner, it mimics the question-and-answer format of Google search, although instead of producing multiple answers on a search engine results page, ChatGPT acts as the sole authority delivering responses.

“The text-based AI bot is showcasing the immense power of AI to open a world of amazing new abilities, including fixing a coding bug, generating cooking recipes, creating 3D animations, and even composing songs. At the enterprise level, we are seeing companies integrating ChatGPT into their operations to improve automation in critical domains such as customer experience, online education, content creation, data analysis, and more,” says Sean Duca, Vice President, Regional Chief Security Officer – Asia Pacific & Japan, Palo Alto Networks.

Furthering its capabilities, the Microsoft-backed startup OpenAI recently launched GPT-4, which builds on ChatGPT’s wildly successful technology. Enhanced capabilities of the new model include the ability to generate content from both image and text prompts. Furthermore, GPT-4 scored 40 percent higher on factual accuracy tests and is 82 percent less likely to respond to requests for disallowed content than its predecessor. GPT-4 can also handle longer documents of up to 25,000 words.

There is no doubt that ChatGPT will have a significant impact on how humans work in the future. However, along with these opportunities come privacy risks associated with using ChatGPT.

In fact, Italy has become the first country to block the advanced chatbot temporarily, saying it has improperly collected and stored information. Italy’s privacy watchdog, Garante, charged Microsoft-backed OpenAI with failing to verify the age of ChatGPT’s users who are required to be aged 13 or above.

According to Garante, ChatGPT lacks any legal justification for the extensive gathering and storage of personal data needed to train the chatbot. OpenAI has 20 days to respond with corrective measures; otherwise, it runs the risk of being fined up to 20 million euros ($21.68 million) or up to 4% of its annual global turnover.

Privacy risks

If you’ve ever written a blog post or product review or commented on an article online, there’s a good chance this information was consumed by ChatGPT.

“More than 300 billion words have been fed to ChatGPT by OpenAI from data sets collected from books, articles, websites, and posts on the internet, including personally identifiable information acquired without permission. Considering the large amounts of data the platform already possesses, users are never asked if ChatGPT can use their data, which raises privacy concerns. What’s more, ChatGPT currently offers no means for users to check whether the company stores personal information or even deletes any previously shared information. Threat actors can easily take advantage of any underlying vulnerabilities in the platform to gain access to this sensitive data and use it for malicious purposes,” highlights Duca of Palo Alto.

Generative AI tools like ChatGPT can collect personal information without the user’s consent, such as IP addresses, browser types and settings, email addresses, and other contact information, and use this data to personalize the user’s experience and target ads to them.

Another possible privacy risk, according to experts, relates to the information that ChatGPT receives from user prompts. When we interact with the tool, we could inadvertently put sensitive information in the public domain without realizing it.

Moreover, OpenAI offers no procedures for individuals to check whether the company stores their personal information, or to request that it be deleted. This right is guaranteed under the European General Data Protection Regulation (GDPR), although it is still not clear whether ChatGPT is compliant with GDPR requirements.

“OpenAI did state that it crawls millions of websites containing private information to collect training data. Additionally, there is still the matter of contextual integrity, which is the idea that personal information shouldn’t be disclosed outside of the initial context in which it was given,” states Ramprakash Ramamoorthy, Director, AI Research, ManageEngine.

Even when data is publicly available, its use can violate the principle of contextual integrity, a fundamental concept in discussions pertaining to privacy. Contextual integrity ensures that individuals’ data is not disclosed outside of the context in which it was initially created or collected.

The platform generates responses to user queries, which may include copyrighted or trademarked material. ChatGPT does not own the intellectual property rights associated with these responses, and users must obtain proper permissions before using any copyrighted or trademarked material generated by the platform.

Multiplying the risk, there’s a possibility of ChatGPT being misused to track the user’s activities and preferences or even being used to commit identity theft or phishing scams.

Identity theft/phishing scams

ChatGPT’s ability to mimic language with a high degree of fluency, and to incorporate national idioms, will surely be exploited by hackers for phishing attacks via email or text. Currently, the stilted language used in these attacks makes them easy to spot, but ChatGPT could make it easier for criminals to send out emails that come across as trustworthy and authoritative. The tool could also be used by hackers to create malicious code.


In fact, real-life use cases of ChatGPT being used for phishing have already surfaced. A team representing Singapore’s Government Technology Agency at the Las Vegas Black Hat and Defcon security conferences recently demonstrated AI’s ability to craft phishing and spear-phishing emails more effective than those written by humans. “Researchers combined OpenAI’s GPT-3 platform with other AI-as-a-service products, focused on personality analysis, and generated phishing emails customized to the backgrounds and characters of their colleagues. Eventually, the researchers developed a pipeline that groomed and refined the emails before hitting their targets,” shares Duca of Palo Alto.

“We’re seeing ChatGPT itself being used to improve the templates for phishing emails as well as attempts to steal cryptocurrency from users seeking to access GPT-4. Hoping to cash in on the massive interest around OpenAI’s GPT-4, scammers have launched phishing campaigns via email and Twitter designed to steal cryptocurrency,” adds Satnam Narang, Senior Staff Research Engineer, Tenable.

While ChatGPT is probably still some way from revolutionizing cyber-attacks, it certainly has the ability to make attacks more efficient, accurate, and impactful. The immediate risks right now are those related to phishing and social engineering attacks, which continue to be a pain in the neck for security professionals. ChatGPT has the potential to be used in these situations to enhance attacks, shares Reuben Koh, Director, Security Technology & Strategy, APJ, Akamai Technologies.

Further, ChatGPT can serve as an upgrade to malware as a service, which has existed for some time now. There are security risks associated with using these generative AI models to create more authentic phishing emails or automate the generation of malicious code, which could result in many uninitiated users testing these methods as quick and simple ways to launch cyber-attacks, shares Jhilmil Kochar, Managing Director, CrowdStrike India.

User awareness is crucial

It is often challenging to pinpoint who is responsible for the actions of a generative AI system. The negative impacts of the system may be difficult to address due to this lack of accountability. However, this also means people must become more vigilant and guardrails have to get higher.

Users need to be cautious while granting permissions to the application and should be aware of the risks related to sharing personal data. “In addition, employees using ChatGPT within their network can lead to data breaches, making it crucial for organizations and their security teams to be prepared to battle this threat. Creating employee awareness of the associated risks, having a response plan in place, and conducting regular security audits are some important proactive measures. A proactive approach can reduce the potential risk associated with ChatGPT and other AI platforms,” says Dhananjay Ganjoo, Managing Director, India & SAARC, F5.

From an employee perspective, data privacy can be one of the biggest concerns. If an employee is using ChatGPT for work, they should always verify the information received. Inaccurate or unreliable content can easily slip through and set the wrong context, adds Ranga Jagannath, Senior Director, Agora.

Narang of Tenable opines that it’s less about permissions and more about what data users share with these generative AI services. “It can’t be overstated just how important it is for users to recognize that the information they share with services like ChatGPT helps to improve its model in future iterations.”

Organizations must ensure that individuals are fully informed about how their data is used and that all data is collected and used with explicit consent. Strong security measures should also be implemented to prevent unauthorized access to or disclosure of personal data, and organizations should be prepared to notify the concerned parties in the event of a data breach, says Ramamoorthy of ManageEngine.

He further adds that along with legal compliance, the ethical ramifications of generative AI must be taken into account, and measures must be taken to guarantee that the technology is used in a transparent, accountable, and responsible manner. Also, generative AI models can be audited regularly to make sure they are being utilized properly. This will help build clear standards and procedures for the responsible use of generative AI.

Kochar of CrowdStrike shares that the best way for enterprises to deal with these challenges is to increase their focus on cybersecurity, invest in threat hunting, and use ML-based next-generation AV/EDR. “ChatGPT is experimental and in its early stages at this time. While there are privacy and ethical concerns on one hand, there are cybersecurity concerns on the other. As it progresses to further advanced versions, it is yet to be seen whether this develops into an alarming tool,” she states.

