Chatbots are gaining popularity and prominence worldwide, with billions accessing them. Now, new research has revealed that a silent problem of privacy breaches is coming to a head, exposing companies that process personal data to greater risk, Zawya reports.
Cybersecurity firm Harmonic Security analysed over 176,000 prompts entered by about 8,000 users into popular generative AI platforms, including ChatGPT, Google’s Gemini, Perplexity AI, and Microsoft’s Copilot, and found that significant amounts of sensitive information were reaching the platforms through those prompts.
In the quarter to March 2025, around 6.7 per cent of the prompts it tracked reportedly contained sensitive information, including customer personal data, employee data, company confidential legal and finance details, or even sensitive code.
Approximately 30 per cent of the sensitive data was legal and finance material covering companies’ planned mergers or acquisitions, investment portfolios, legal matters, billing and payments, sales pipelines, and even financial projections.
In addition, customer data such as credit card numbers, transactions, or profiles had made their way to these platforms through the prompts, together with employee information, including payroll details and employment profiles.
Developers using genAI tools to improve or refine their code had also inadvertently fed copyrighted or proprietary material, security keys, and network information into the bots, exposing their companies to fraudsters.
If questioned about the safety of such information, chatbots such as ChatGPT reportedly state that the information is safe and is not shared with third parties. Their terms of service emphasise this, but experts are raising a red flag.
Even if the information appears secure within the bots, cybersecurity experts have warned that it is time for companies to start checking and restricting what their employees feed into these platforms, or risk massive data breaches.
“One of the privacy risks when using AI platforms is unintentional data leakage,” said Anna Collard, senior vice president for content strategy at cybersecurity firm KnowBe4 Africa. “Many people don’t realise just how much sensitive information they’re inputting.
“Cyber hygiene now includes AI hygiene. This should include restricting unsupervised access to genAI tools, or only allowing those the company has approved.
“While a majority of companies around the globe now acknowledge the importance of AI in their operations and are beginning to adopt it, only a few organisations have policies or checks for AI output,” Ms Collard said.
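The kind of “AI hygiene” check Ms Collard describes can be as simple as screening prompts before they leave the company network. The sketch below is a minimal, hypothetical illustration in Python; it is not taken from the Harmonic report or from any named vendor's tooling, and the patterns for card numbers and API-key-like strings are simplified examples rather than a complete data loss prevention policy.

```python
import re

# Hypothetical sketch of a pre-submission check: scan a prompt for obviously
# sensitive patterns (payment-card numbers, API-key-like strings) before it
# is sent to an external genAI platform. Simplified illustration only.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # rough payment-card shape
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b", re.IGNORECASE),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    example = "Summarise this payment: card 4111 1111 1111 1111, key sk-a1b2c3d4e5f6g7h8i9"
    hits = flag_sensitive(example)
    if hits:
        print("Blocked before sending:", ", ".join(hits))
    else:
        print("No obvious sensitive data found; prompt may be forwarded.")
```

In practice, organisations would pair such screening with lists of approved tools and human review of AI output, as the experts quoted above recommend.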
According to McKinsey’s latest State of AI survey of business leaders worldwide, only 27 per cent of companies fully review content generated by AI, while 43 per cent check less than 40 per cent of it.
Yet AI use is growing by the minute. Large language models (LLMs) such as ChatGPT are now challenging the social media apps that have long been digital magnets for user visits and hours of daily interaction.
Numerous studies, including the one by McKinsey, reportedly reveal that today, nearly three in four employees use genAI to complete simple tasks like writing a speech, proofreading copy, composing an email, analysing a document, generating a quotation, or even writing computer programmes.
The rapid proliferation of China-based LLMs such as DeepSeek is also seen as increasing the threat of data breaches for companies. Over the past year, an avalanche of new Chinese chatbots has emerged, including Baidu’s Ernie Bot, Alibaba’s Qwen Chat, Manus, and Moonshot AI’s Kimi.
In a recent report, Harmonic wrote, “The Chinese government can likely just request access to this data, and data shared with them should be considered property of the Chinese Communist Party.”
Source: Zawya
(Quotes via original reporting)