Study: 1 in 10 AI Prompts May Expose Sensitive Data
A new study from data security startup Harmonic Security found that nearly one in 10 prompts used by enterprise users when interacting with generative artificial intelligence tools may inadvertently disclose sensitive data.
The study, conducted in the fourth quarter of 2024, analyzed prompts across generative AI platforms such as Microsoft Copilot, OpenAI's ChatGPT, Google Gemini, Claude, and Perplexity. While the majority of AI usage by employees involved mundane tasks like summarizing text or drafting documentation, 8.5% of prompts posed potential security risks.
Sensitive Data at Risk
Among the concerning prompts, 45.8% risked exposing customer data, including billing and authentication information. Another 26.8% involved employee-related data, such as payroll details, personal identifiers, and even requests for AI-assisted employee performance reviews.
The remaining sensitive prompts included:
- Legal and finance information (14.9%): Sales pipeline data, investment portfolios, and merger and acquisition activity.
- Security data (6.9%): Penetration test results, network configurations, and incident reports, which could be exploited by attackers.
- Sensitive code (5.6%): Access keys and proprietary source code.
Harmonic Security's report also flagged concerns about employees using free-tier generative AI services, which often lack robust security measures. Many free-tier services explicitly state that user data may be used to train AI models, creating further risks of unintended disclosure.
Free-Tier Usage Raises Red Flags
The study revealed significant reliance on free-tier AI services, with 63.8% of ChatGPT users, 58.6% of Gemini users, 75% of Claude users, and 50.5% of Perplexity users opting for non-enterprise plans. These services often lack critical safeguards found in enterprise versions, such as the ability to block sensitive prompts or warn users about potential risks.
"Most generative AI use is mundane, but the 8.5% of prompts we analyzed potentially put sensitive personal and company information at risk," said Alastair Paterson, co-founder and CEO of Harmonic Security, in a statement. "Organizations need to address this issue, particularly given the high number of employees using free subscriptions. The adage that 'if the product is free, you are the product' rings especially true here."
Recommendations for Risk Mitigation
Harmonic Security urged companies to implement real-time monitoring systems to track and manage data entered into generative AI tools. The firm also recommended:
- Ensuring employees use paid or enterprise AI plans that do not train on input data.
- Gaining visibility into prompts to understand what information is being shared.
- Blocking or warning users about risky prompts to prevent data leakage (a minimal illustrative sketch of this kind of screening follows the list).
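The report does not describe how such screening is implemented, but the general idea of a pre-submission prompt filter is simple to illustrate. The short Python sketch below is an assumption-laden toy, not Harmonic Security's tooling: the risk categories, regular expressions, and the decision to block rather than warn are all placeholders chosen for demonstration, loosely mirroring the report's categories such as access keys and customer billing data.

```python
import re

# Illustrative only: a toy pre-submission filter in the spirit of the
# "block or warn on risky prompts" recommendation. The categories and
# patterns below are assumptions for demonstration, not a real ruleset.
RISK_PATTERNS = {
    "access key": re.compile(r"\b(AKIA[0-9A-Z]{16}|sk-[A-Za-z0-9]{20,})\b"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn-like id": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "password in text": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any risk categories detected in the prompt."""
    return [name for name, pattern in RISK_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    sample = "Summarize this log. password: hunter2, card 4111 1111 1111 1111"
    findings = screen_prompt(sample)
    if findings:
        # A real deployment might warn the user, redact the match, or block submission.
        print(f"Blocked: prompt appears to contain {', '.join(findings)}.")
    else:
        print("Prompt passed screening.")
```

Enterprise-grade tools would go further, for example by adding machine-learned classifiers and audit logging, but even a simple pattern check captures the "visibility plus warning" idea the recommendations describe.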
While many organizations have begun implementing such measures, the report highlighted the need for broader adoption of these safeguards as generative AI becomes increasingly integrated into workplace processes.
"Generative AI tools hold immense potential for enhancing productivity, but without proper safeguards, they can become a liability. Organizations must act now to ensure sensitive data is protected while still leveraging the benefits of AI technology," Paterson said.
The full report is available on the Harmonic Security site.
About the Author
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He has been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he has written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].