
More Than One-Third of Sensitive Business Data Entered into AI Apps is Regulated Personal Data: Netskope Threat Labs

Netskope, a leader in Secure Access Service Edge (SASE), has released new findings indicating that more than one-third (over 33%) of the sensitive business information shared with generative AI (genAI) applications is regulated data, exposing organizations to risks such as costly data breaches.

According to Netskope Threat Labs’ latest research, genAI adoption has more than tripled over the past year, yet many enterprises still struggle to balance enabling secure usage with managing the associated risks. The study finds that 75% of surveyed businesses currently block at least one genAI app outright to reduce the risk of sensitive data leaks. However, fewer than half of these organizations apply data-centric controls that prevent sensitive information from being shared in input queries, underscoring a lag in deploying the robust data loss prevention (DLP) solutions needed for safe genAI use.
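Such data-centric controls typically inspect prompts inline before they reach a genAI app. The report does not describe any specific implementation; the Python sketch below is a minimal, hypothetical illustration of a pattern-based prompt check of the kind a DLP policy might apply. The pattern set and function names are assumptions for illustration only, not Netskope functionality.

```python
import re

# Illustrative patterns for a few regulated data types; production DLP engines
# use far richer detectors (checksums, exact-data matching, ML classifiers).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a genAI prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Block the prompt when any regulated-data pattern matches, otherwise allow it."""
    findings = scan_prompt(prompt)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}")
        return False
    return True

if __name__ == "__main__":
    allow_prompt("Summarise this contract for jane.doe@example.com")     # blocked
    allow_prompt("Draft a follow-up note about the Q3 product roadmap")  # allowed
```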

Based on global datasets, the research reveals that 96% of enterprises now use genAI, up significantly from previous years. On average, organizations use nearly 10 genAI applications, while the heaviest adopters average 80 apps. This surge in usage has corresponded with a rise in incidents involving the sharing of proprietary source code via genAI apps, which now accounts for 46% of documented data policy violations. These trends highlight the complexity enterprises face in managing genAI risk and the urgency of stronger DLP strategies.

The report also notes positive strides in proactive risk management: 65% of enterprises now use real-time user coaching to guide interactions with genAI apps. Coaching has proven pivotal in mitigating risk, with 57% of users changing their behavior after receiving a coaching alert.

James Robinson, Chief Information Security Officer at Netskope, emphasized the importance of robust risk management strategies amidst the rapid expansion of genAI across enterprises. He stressed that while genAI offers transformative potential, it also introduces new vulnerabilities that can inadvertently expose sensitive data or propagate malicious content, necessitating comprehensive security measures to safeguard data integrity, reputation, and business continuity.

Netskope’s Cloud and Threat Report: AI Apps in the Enterprise highlights:
– ChatGPT remains the most widely adopted genAI app, used by over 80% of enterprises.
– Microsoft Copilot has shown significant growth since its launch in January 2024, with a 57% increase in adoption.
– 19% of organizations have imposed a blanket ban on GitHub Copilot due to security concerns.

Key Recommendations for Enterprises:
Netskope advises enterprises to adapt their risk programs specifically to AI and genAI, drawing on resources such as the NIST AI Risk Management Framework. Tactical steps include assessing current AI usage, implementing fundamental security controls such as access management and encryption, and building advanced measures like threat modeling and anomaly detection to spot abnormal data movements and behaviors across cloud environments.
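As a simple illustration of the anomaly-detection step, the sketch below flags a user's daily upload volume to genAI apps when it deviates sharply from their recent baseline. This is a hypothetical example using a basic z-score threshold, not a description of Netskope's products or of the NIST framework.

```python
from statistics import mean, stdev

def flag_abnormal_uploads(daily_upload_mb: list[float], threshold: float = 3.0) -> bool:
    """Flag the most recent day's upload volume to genAI apps if it deviates
    more than `threshold` standard deviations from the user's prior baseline."""
    *baseline, latest = daily_upload_mb
    if len(baseline) < 7:          # need at least a week of history for a baseline
        return False
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return latest > mu         # any increase over a perfectly flat baseline stands out
    return (latest - mu) / sigma > threshold

# Example: a sudden 900 MB upload after days of ~10 MB gets flagged.
history = [8, 12, 9, 11, 10, 9, 13, 10, 900]
print(flag_abnormal_uploads(history))  # True
```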

 
