In recent years, privacy has evolved from just a compliance issue to a business imperative and customer demand. And with the explosive growth of artificial intelligence, the focus on privacy will only increase.
That's why Cisco's annual data privacy benchmark study is so important. Along with the company's consumer privacy research, it identifies key trends in data governance, investments, and the impact of rapidly changing technology.
To learn more about Cisco's 2024 Data Privacy Benchmark Study, based on a survey of 2,600 security and privacy professionals across 12 countries, we spoke with Deb Stahlkopf, Chief Legal Officer of Cisco.
Thanks for joining us, Deb. Cisco has been conducting data privacy research for seven years, most recently in its 2024 Data Privacy Benchmark Study. What are some of the key trends that have emerged?
Thank you, Kevin. Two themes stand out to me. First and foremost, organizations recognize the relationship between privacy and trust. Customers increasingly want to buy from organizations they trust. In fact, 94% of respondents said their customers won't buy from them if their data isn't properly protected. Second, organizations believe that the return on their privacy investment exceeds the cost. Privacy spending has more than doubled since we began our research, yet organizations report that their ROI remains high. Our data shows that organizations receive an estimated $160 in benefits for every $100 they spend on privacy.
The survey also shows increasing support for privacy laws from both organizations and consumers. What is driving that trend?
Yes, privacy laws impose additional costs and requirements on organizations. Still, 80% of respondents said privacy laws had a positive impact on their organizations, while only 6% said they had a negative impact.
So why are organizations so supportive of regulations that add cost and effort? It comes back to trust. Organizations recognize that privacy is a driver of customer trust. Globally interoperable privacy laws will also help drive a more consistent approach to handling personal data throughout the data lifecycle and ecosystem.
Our research also shows that consumers want governments to take a leading role in data protection. Strong privacy regulations increase customers' confidence that an organization is handling their data well.
Technology is changing rapidly, and with that change comes new privacy concerns. How do you think privacy laws and regulations will evolve in the coming years?
Over 160 countries currently have omnibus privacy laws in place. And dozens more are being drafted and enacted as we speak. The next generation of privacy laws will continue to promote transparency, fairness, and accountability in areas such as data collection and use, cross-border data flows, and verifiable compliance. Each of these areas is broader than just privacy, but privacy is at the core of many of these issues.
Unsurprisingly, AI was a major topic in this year's data privacy benchmark survey. As Cisco's Chief Legal Officer, how will you address the changing intersection of AI and privacy?
Privacy is the foundation of AI. Much of what we have built in privacy over the past decade—policies, standards, tools, frameworks—is being leveraged to build responsible AI programs. While some of the biggest risks of AI arise from the collection and use of personal data, AI risk extends far beyond privacy to include intellectual property, human rights, accuracy and reliability, and bias, to name a few. Our research shows that 60% of consumers have already lost trust in organizations due to their use of AI. It was therefore a business imperative for us to build a governance program at Cisco aligned with the new use cases and impact of AI.
How is this put into practice during product development?
We have a dedicated privacy team that leverages the Cisco Secure Development Lifecycle (CSDL) to embed privacy by design as a core component of our product development methodologies. As the use of AI became more prevalent, we developed an AI Impact Assessment based on our Responsible AI Principles to evaluate the development, use, and deployment of AI at Cisco, and we have incorporated that assessment into our CSDL and vendor due diligence processes. These assessments consider various aspects of AI and product development, including models, training data, fine-tuning, prompts, privacy practices, and testing methods. The goal is to help us identify, understand, and manage AI risks so we maintain the trust of our employees, customers, and stakeholders.
What are the main risks associated with AI and how can they be mitigated?
Our recent survey found that 92% of organizations view GenAI as a fundamentally different technology, one that brings new challenges and concerns and requires new techniques to manage data and risk. 69% cite the potential for GenAI to compromise their organization's legal and intellectual property rights as their biggest concern. 68% were concerned that the information they entered could be shared publicly or with competitors. And 68% were concerned that the information returned to users might be incorrect. These are real risks, but they can be managed with a thoughtful approach to governance.
In an AI-driven environment, how can businesses ensure they leverage the potential of AI while protecting the privacy rights of their customers and employees?
Companies must conduct their own risk assessments. But I believe there is a way forward if governance is in place. At Cisco, for example, we have both Responsible AI Principles and a framework to guide our approach. We also created a generative AI policy on the acceptable use of these new tools. We conduct an AI impact assessment to identify and manage AI-specific risks before allowing sensitive information to be used with a GenAI tool. Once we've verified that the tool adequately protects sensitive information and are satisfied with the security and privacy protections in place, we make the tool available for further exploration and innovation by our employees.
Do you have any final advice for businesses trying to navigate this current environment?
We are still in the early stages of AI, and we need to approach this new technology with excitement and humility. There are still many things we don't know, and new concerns are being raised every day. Businesses need to be agile and responsive to changing regulations, consumer concerns, and evolving risks. We also need strong partnerships across the public and private sectors. AI has great potential for good, but industry, governments, developers, adopters, and users all need to work together to foster responsible innovation without compromising privacy, security, human rights, and safety.