November 26, 2024
Data privacy concerns surge among AI users, reveals new survey
- 54% of consumers don’t know how much personal data AI tools collect
- 71% of AI users have regretted sharing their data with an AI tool
New research from Liverpool City Region-based data privacy software company Syrenis has found that 71% of AI users have regretted sharing their data with an AI tool after discovering the extent of what they shared.
Based at the national science and innovation campus, Sci-Tech Daresbury, Syrenis has brought Cassie to market: a consent and preference management platform that centrally manages over 1.2 billion customer records for large organisations worldwide.
Handling high-volume, complex data, the privacy software platform ensures that an organisation’s consent data is fully auditable and up-to-date across its entire tech stack. The North West software-as-a-service (SaaS) business serves some of the world’s biggest brands in pharmaceuticals, banking, automotive, and retail.
When it comes to AI, consent is a vital element. Consumers need to feel like they can trust brands with their data, but do they trust AI?
Cassie has reported that not only do most consumers regret sharing their information with AI, but that 54% don’t know how much personal data their AI tools even collect.
This is according to data from Cassie’s research report, The AI Trade Off, which collected responses from over 600 individuals across 31 US states. While the report found that 71% of AI users have regretted sharing their data with an AI tool, it also revealed that 79% of respondents who claim not to use AI admit they aren’t entirely sure which tools are AI-powered.
The report highlights growing concerns about AI as it becomes more integrated into everyday life. Many consumers fear that potential privacy violations could expose them to risks such as unauthorised data storage or leaks, and in response, users are increasingly demanding greater transparency and control over how AI operates. Consumer bodies are increasingly calling for clear disclosure of when AI is being used with the goal of protecting personal information.
In fact, 89% of consumers believe AI isn’t inherently bad but needs more regulation, and 79% would be more likely to opt in to data sharing if stricter regulations were in place.
Nicky Watson, Founder and Chief Architect of Cassie, commented: “The world of AI is changing, and the rules surrounding privacy and data feel like they’re shifting all the time. When using AI tools, it’s increasingly difficult for consumers to track how their personal data is used and stored, resulting in individuals having low confidence when making decisions online.
“To combat this, companies need to adopt complete transparency when using AI on their platform, informing consumers when AI is present and explaining exactly how their personal data will be used by the technology before consent is given.
“With the growth of smart technology in household appliances, cars, and an array of new innovations, our compliance management is crucial to help businesses navigate the complex world of data privacy, ensuring consumers are in complete control of their personal information and businesses are compliant.”
John Leake, Business Growth Director at Sci-Tech Daresbury, added: “It’s gratifying to see such important, topical research carried out on campus here at Sci-Tech Daresbury. Cassie is a key part of the growing cluster of companies and organisations on the campus developing and applying AI solutions across a variety of sectors. AI is already changing the world for the better in many ways, but transparency around potential pitfalls of the technology is imperative.
“While Cassie may be based in the Liverpool City Region, its compliance management work operates on the world stage, and it’s research like this that can help businesses and consumers across the globe navigate the complications of using AI.”