The double-edged sword: The risks and rewards of AI examined in new study—does productivity boost counter security and privacy risks?

by Richard Carufel | May 17, 2024 | Public Relations

AI security and governance have taken a back seat to the business-boosting promise of AI so far in the young life of the wunderkind tech, as business leaders froth at the mouth imagining the realization of AI’s potential. Newly announced research from data security firm Immuta, The AI Security & Governance Report, more closely explores how organizations are adopting AI, navigating emerging security and privacy challenges, and updating governance guidelines to safely capitalize on the technology’s potential.

The firm’s 2024 State of Data Security report, for which it partnered with market research agency UserEvidence to survey nearly 700 engineering leaders, data security professionals, and governance experts on their outlook, finds that AI adoption remains sky-high, with more than half of data experts (54 percent) saying that their organization already leverages at least four AI systems or applications. More than three-quarters (79 percent) also report that their budget for AI systems, applications, and development has increased in the last 12 months.

However, this fast-paced adoption also carries massive uncertainty

For example, 80 percent of data experts agree that AI is making data security more challenging. Experts expressed concern about the inadvertent exposure of sensitive data by LLMs and about adversarial attacks by malicious actors via AI models. In fact, 57 percent of respondents have seen a significant increase in AI-powered attacks in the past year.

While rapid AI adoption is certainly introducing new security challenges, the optimism around its potential is pushing organizations to adapt. Data leaders believe, for example, that AI will enhance current security practices through AI-driven threat detection systems (40 percent) and the use of AI as an advanced encryption method (28 percent). With these benefits weighed against the security risks, many organizations (83 percent) are updating internal privacy and governance guidelines and taking steps to address the new risks:

  • 78 percent of data leaders say that their organization has conducted risk assessments specific to AI security.
  • 72 percent are driving transparency by monitoring AI predictions for anomalies.
  • 61 percent have purpose-based access controls in place to prevent unauthorized usage of AI models.
  • 37 percent say they have a comprehensive strategy in place to remain compliant with recent and forthcoming AI regulations and data security needs.


“Current standards, regulations, and controls are not adapting fast enough to meet the rapid evolution of AI, but there is optimism for the future,” said Matt DiAntonio, VP of product management at Immuta, in a news release. “The report clearly outlines a number of AI security challenges, as well as how organizations are looking to AI to help solve them. AI and machine learning are able to automate processes and quickly analyze vast data sets to improve threat detection and enable advanced encryption methods to secure data. As organizations mature on their AI journeys, it is critical to de-risk data to prevent unintended or malicious exposure of sensitive data to AI models. Adopting an airtight security and governance strategy around generative AI data pipelines and outputs is imperative to this de-risking.”

Despite so many data leaders saying that AI makes security more challenging, 85 percent are somewhat or very confident that their organization’s data security strategy will keep pace with the evolution of AI. That marks a shift from research just last year, in which 50 percent strongly or somewhat agreed that their organization’s data security strategy was failing to keep up with the pace of AI evolution. It suggests a maturity curve: many organizations are plowing ahead on AI initiatives despite the risks because the expected payoff is worth it.

The rapid changes in AI are understandably exciting, but also unknown

This is especially true as regulations are fluid and many models lack transparency. Data leaders should pair their optimism with the reality that AI will continue to change—and the goalposts of compliance will continue to move as it does. No matter what the future of AI holds, one action is clear: there is no responsible AI strategy without a data security strategy. Companies need to establish governance that supports a data security strategy that isn’t static, but rather one that dynamically adapts as innovation delivers results for the business.

Download the full report here.

Richard Carufel
Richard Carufel is editor of Bulldog Reporter and the Daily ’Dog, one of the web’s leading sources of PR and marketing communications news and opinions. He has been reporting on the PR and communications industry for over 17 years, and has interviewed hundreds of journalists and PR industry leaders. Reach him at richard.carufel@bulldogreporter.com; @BulldogReporter
