
Businesses need to be Wary of Loopholes in OpenAI’s Data Collection and Sharing: ManageEngine

Ramprakash Ramamoorthy, Director, AI Research, ManageEngine, puts the responsibility on users when it comes to the data they share on generative AI platforms like ChatGPT. In a quick interaction with Amit Singh, he highlights the risk that users' IP addresses and on-site interactions might be shared with third parties for their business objectives

What are the privacy risks ChatGPT poses to users due to its ability to collect personal data?

Generative AI can be complex in terms of privacy, but it also largely depends on the information users put on the internet. For example, OpenAI did state that it crawls millions of websites containing private information to collect training data. Additionally, there is still the matter of contextual integrity, which is the idea that personal information shouldn’t be disclosed outside of the initial context in which it was given.

In response to these obvious risks to privacy, regulations such as the European Union’s GDPR, Canada’s PIPEDA, and California’s CCPA have recently introduced policies to help mitigate these concerns. It is becoming increasingly important for all organizations, from startups to larger enterprises, to be aware of these risks as well as the global rules being introduced to respond to them.

Is there a possibility of ChatGPT being misused to track users’ activities and preferences or being used to commit identity theft or phishing scams?

As ChatGPT is still in its nascent stages, OpenAI's privacy policies are constantly evolving. Recently, OpenAI clarified that it no longer uses data submitted through its API for model training. While this is promising, there is still no policy or mechanism that prevents a user from voluntarily including personal information in a prompt. This is where it gets tricky, as OpenAI would not be able to remove such data once entered. As far as we know, OpenAI does crawl through personal data, but there is no public documentation describing exactly how this takes place.

What are the security measures offered by ChatGPT? Are they enough to safeguard user privacy and data?

Generative AI as a whole is evolving and is in a nascent stage at present. Privacy regulations are still being worked on, so for the time being, it is the users’ responsibility when it comes to the data they put on these platforms.

Organizations and individuals need to be wary of loopholes in OpenAI's data collection and sharing, including those concerning users' IP addresses and the on-site interactions that take place. There is also the risk that this information might be shared with third parties for their business objectives.
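Since the onus is on users for now, one practical precaution is to strip obvious personal data from prompts before they leave the organization's network. The sketch below is purely illustrative, not an endorsed ManageEngine or OpenAI tool; the regex patterns are deliberately simple, and real PII detection would need a dedicated library or service.

```python
import re

# Illustrative patterns only -- real PII detection is far harder
# than a handful of regexes; treat this as a minimal sketch.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IP_ADDRESS": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace obvious PII with labeled placeholders before the
    prompt is submitted to an external generative AI service."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com from 10.0.0.12"))
# -> Contact [EMAIL] from [IP_ADDRESS]
```

A redaction step like this does not eliminate the risks discussed above, but it reduces what a third party could learn from prompt contents alone.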

With its growing popularity as a platform for generating responses to user queries, ChatGPT is being used by many employees inside their organizational networks. What are the implications of its usage in the organizational network, and what precautions do employees need to take to avoid any uncomfortable situations?

All enterprises, from startups to global corporations, should be concerned about the possible effects that generative AI may have on data privacy. Organizations must ensure that individuals are fully informed about how their data is used and that all data is collected and used with explicit consent. Strong security measures should also be implemented to prevent unauthorized access to or disclosure of personal data, and organizations should be prepared to notify the concerned parties in the event of a data breach.

Along with legal compliance, the ethical ramifications of generative AI must be taken into account, and measures must be taken to guarantee that the technology is used in a transparent, accountable, and responsible manner. Also, generative AI models can be audited regularly to make sure they are being utilized properly. This will help build clear standards and procedures for the responsible use of generative AI.
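The regular audits mentioned above presuppose that usage is recorded in the first place. As a minimal sketch of that idea, assuming an organization can intercept prompt submissions, the hypothetical class below keeps an append-only log of usage metadata (not raw prompt text) for later review:

```python
import json
import time

class PromptAuditLog:
    """Minimal append-only record of generative AI usage, kept so
    periodic audits can see who used the tool, and when.
    Hypothetical example; not part of any real product."""

    def __init__(self):
        self.entries = []

    def record(self, user: str, prompt: str) -> None:
        # Store metadata rather than the prompt itself, so the audit
        # trail does not become a second copy of sensitive data.
        self.entries.append({
            "timestamp": time.time(),
            "user": user,
            "prompt_length": len(prompt),
        })

    def export(self) -> str:
        """Serialize the log for an auditor."""
        return json.dumps(self.entries)

log = PromptAuditLog()
log.record("alice", "Summarize this contract for me")
print(log.export())
```

Logging metadata only is a deliberate choice here: an audit trail that captured full prompts would itself raise the privacy concerns the audits are meant to address.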

How are enterprises and security teams gearing themselves to fight against any unforeseen disaster arising due to ChatGPT in their organization?

It is becoming increasingly important for privacy regulators, legislators, and IT workers to be aware of the privacy risks that generative AI poses. Although generative AI can fundamentally alter how businesses collect and use data, it also comes with substantial privacy threats. To reduce these dangers, it is essential to put privacy safeguards in place, maintain legal compliance, and comprehend the ethical ramifications. By taking these actions, companies can use generative AI tools with the knowledge that they are utilizing innovation safely with the good of society in mind.
