By Fast Company

Shortly after ChatGPT’s public launch, a slew of corporate giants, from Apple to Verizon, made headlines when they announced bans on the use of the technology at work. But a new survey finds that those companies were far from outliers.

According to a new report from Cisco, which polled 2,600 privacy and security professionals last summer, more than one in four companies have, at some point, banned the use of generative AI tools at work. Meanwhile, 63% of respondents said they’ve limited what data their employees can enter into those systems, and 61% have restricted which generative AI tools employees at their companies can use.

At the heart of these restrictions is the concern that employees may inadvertently leak private company data to a third party like OpenAI, which can then turn around and use that data to further train its AI models. In fact, 68% of respondents said they’re concerned about that kind of data sharing. OpenAI does offer companies access to a paid enterprise product, which promises to keep business data private. But the free, public-facing version of ChatGPT and other generative AI tools like Google Bard offer far fewer guardrails.

That, says Cisco chief legal officer Dev Stahlkopf, can leave companies’ internal information vulnerable. “With the influx of AI use cases, an enterprise has to consider the implications before opening a tool for employee use,” Stahlkopf says, noting that Cisco conducts AI impact assessments for all new AI products from third parties. “Every company needs to make their own assessments of their risk and risk tolerance, and for some companies prohibiting use of these tools may make sense.”

Companies like Salesforce have attempted to turn this uncertainty into a market opportunity, rolling out products that promise to keep sensitive data from being stored by the system and to screen model responses for toxicity. And yet, it’s clear the popularity of off-the-shelf tools like ChatGPT is already causing headaches for corporate privacy professionals. Despite the restrictions the majority of companies have enacted, the survey found that 62% of respondents have entered information about internal processes into generative AI tools. Another 42% say they’ve entered non-public company information into these tools, and 38% say they’ve put customer information into them, as well.

But it’s not just employees leaking private data that businesses are worried about. According to the survey, the biggest concern among security and privacy professionals when it comes to generative AI is that the AI companies are using public data to train their models in ways that infringe on their businesses’ intellectual property. (In addition, 58% see job displacement as a risk.)

Already, the IP issue is bubbling up in the courts. Last month, The New York Times sued OpenAI over allegations that the AI giant used the Times’ news articles to train the models that run its chatbot. OpenAI has said the suit is “without merit” and that training using those articles is fair use, legally speaking. The suit joins a mounting number of cases, brought by the likes of comedian Sarah Silverman and others, which make similar infringement claims against companies including Meta and Stability AI.

The survey results suggest that, for the vast majority of companies, addressing these privacy risks — both to their own data and their clients’ data — is a top priority, and many seem to welcome legislation that would enshrine privacy protections into law. While the U.S. has yet to pass long-promised federal privacy legislation, Cisco’s global survey found that some 80% of respondents said privacy legislation in their region had actually helped their companies, despite the increased investment required.

“Organizations believe the return on privacy investment exceeds spending,” Stahlkopf says. “Organizations that treat privacy as a business imperative, and not just as a compliance exercise, will benefit in this era of AI.”