Rising data breaches highlight AI privacy & compliance concerns

Perforce’s 2025 State of Data Compliance and Security Report reveals that organisations continue to struggle with protecting sensitive data in AI and software development environments.

Key Findings

Based on a survey of 280 organisations globally, 60% reported data breaches or theft in development, testing, or AI systems, an 11% rise on last year. Despite these risks, 84% still allow compliance exceptions in non-production environments, most citing data-driven decision-making as justification. Sensitive data remains widely used: 95% of organisations use it in testing, 90% in AI, and 78% in general development. Around one-third of respondents faced audit issues, and 22% reported regulatory breaches or fines, underscoring the compliance risks of using real data outside production systems.

Conflicting Views on AI Data Use

The report highlights contradictions in attitudes toward AI. While 91% believe sensitive data should be allowed in AI model training and 82% view doing so as safe, most are also concerned about theft (78%) and compliance failures (68%). Perforce’s experts warn that organisations face a “dual challenge”: pressure to innovate with AI while maintaining data privacy. They advise against using personally identifiable information (PII) in model training and recommend synthetic data as a safer alternative.
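
The report does not describe how synthetic data should be produced, but the idea behind the recommendation can be illustrated with a minimal Python sketch: rather than copying real customer records into a training set, fully fictitious records with a similar structure are generated. The field names and value ranges below are hypothetical, chosen only for the example.

```python
# Minimal sketch of the "synthetic data instead of PII" idea: generate
# fictitious training records that mimic the shape of real customer data.
# Field names and value ranges are illustrative assumptions.
import random

FIRST_NAMES = ["Alex", "Sam", "Jordan", "Riya", "Chen"]
LAST_NAMES = ["Smith", "Patel", "Nguyen", "Garcia", "Kim"]
PLANS = ["free", "standard", "enterprise"]

def synthetic_customer(seed: int) -> dict:
    """Produce one fully fictitious record with a realistic structure."""
    rng = random.Random(seed)
    first, last = rng.choice(FIRST_NAMES), rng.choice(LAST_NAMES)
    return {
        "full_name": f"{first} {last}",
        "email": f"{first.lower()}.{last.lower()}{seed}@example.com",
        "plan": rng.choice(PLANS),
        "monthly_spend": round(rng.uniform(10, 500), 2),
    }

if __name__ == "__main__":
    # A small synthetic training set; no real person is represented.
    dataset = [synthetic_customer(i) for i in range(5)]
    for row in dataset:
        print(row)
```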

Privacy Investment and Ongoing Risks

To address these risks, 86% of organisations plan to invest in AI data privacy solutions within two years, nearly half already use synthetic data, and 95% apply static masking to limit exposure. However, many still perceive masking and compliance steps as burdensome, which can lead to persistent vulnerabilities. Perforce urges organisations to close these gaps by embracing automation and AI-driven privacy tools.
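
As a rough illustration of the static masking mentioned above (not Perforce’s Delphix implementation), the sketch below irreversibly replaces PII fields before a record leaves production, so test and AI environments never see the real values. The column names and masking rules are assumptions made for the example.

```python
# Illustrative sketch of static data masking: PII fields are irreversibly
# replaced before a dataset is copied to test or AI environments.
# Field names and masking rules here are hypothetical.
import hashlib
import random

def mask_email(email: str) -> str:
    """Replace the local part with a stable hash and use a generic domain."""
    digest = hashlib.sha256(email.encode("utf-8")).hexdigest()[:10]
    return f"user_{digest}@example.com"

def mask_name(_: str) -> str:
    """Substitute a realistic but fictitious name."""
    first = random.choice(["Alex", "Sam", "Jordan", "Riya", "Chen"])
    last = random.choice(["Smith", "Patel", "Nguyen", "Garcia", "Kim"])
    return f"{first} {last}"

def mask_record(record: dict) -> dict:
    """Apply per-field masking rules; non-sensitive fields pass through."""
    rules = {"email": mask_email, "full_name": mask_name}
    return {k: rules[k](v) if k in rules else v for k, v in record.items()}

if __name__ == "__main__":
    production_row = {
        "customer_id": 1042,          # non-sensitive identifier, kept as-is
        "full_name": "Jane Doe",      # PII: replaced with a fictitious value
        "email": "jane.doe@corp.com", # PII: replaced with a hashed placeholder
        "plan": "enterprise",
    }
    print(mask_record(production_row))
```

Static masking of this kind is typically applied once to a copy of the data before it is delivered to non-production environments, in contrast to dynamic masking applied at query time.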

Industry Outlook

In response, Perforce has expanded its Delphix DevOps Data Platform to include AI-powered synthetic data generation, integrating masking, delivery, and privacy solutions for secure AI development. The report concludes that organisations are entering a phase of rapid AI adoption, combined with heightened regulatory risk, driving urgent demand for privacy technologies and governance frameworks that safeguard sensitive data while enabling innovation.

Source: IT Brief Australia
