
Salesforce CEO Marc Benioff scanned the crowd of corporate executives, journalists, and enthusiastic Salesforce fanboys sitting before him in a hotel ballroom in Manhattan on Monday and took an informal poll. “How many people here have already used ChatGPT? Raise your hands,” he asked the group, which had gathered for Salesforce’s AI Day to hear more about how the enterprise software giant plans to incorporate generative AI into its products.

Not a second later, the answer to Benioff’s question looked unanimous. “I didn’t have to count the hands, did I?” he said.

That this technology had become so ubiquitous in such a short period of time was a testament to the “amazing experience” generative AI tools can offer users, Benioff said. But this overnight adoption was also risky, he warned—particularly for Salesforce’s corporate clients, who are potentially putting money, confidential information, and brand reputation at risk by experimenting with these ever-evolving, frequently hallucinating, entirely unregulated systems. “We’ve all seen the movies,” Benioff said. “We’ve all seen where this can go.”

Technologists have plenty of reasons to caution against the risks of generative AI. For some, it’s a moral imperative. For others, it’s an olive branch to regulators. For Benioff, it’s a heck of a marketing pitch.

In the midst of this arms race, where research-focused companies like OpenAI are barreling ahead, testing the limits of what large language models can do, Salesforce is positioning itself as a kind of responsible intermediary, offering corporations access to this transformational technology in a way that won’t land them in the middle of a major compliance crisis or a privacy scandal.

On Monday, Salesforce launched its own AI Cloud, a suite of products that acts as a middleman between corporate clients and powerful large language models. The tools let corporations plug their own data into those models while stripping out sensitive information, preventing that data from being retained to train the models further, and monitoring AI outputs for toxicity. Salesforce is framing these products as the softer, safer side of this technical revolution.
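Salesforce didn’t spell out on Monday exactly how AI Cloud performs those steps, but the general pattern is easy to picture: redact sensitive fields before a prompt ever leaves the company, ask the provider not to retain the request for training, and screen the response before it reaches a user. Here is a minimal sketch of that pattern in Python; call_llm and toxicity_score are hypothetical placeholders for a company’s own model endpoint and moderation service, not any actual Salesforce API.

```python
import re

def call_llm(prompt: str, retain_data: bool = False) -> str:
    # Placeholder: a real deployment would call a hosted model here.
    return "(model response to: " + prompt[:40] + "...)"

def toxicity_score(text: str) -> float:
    # Placeholder: a real deployment would call a moderation classifier.
    return 0.0

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def trusted_completion(prompt: str) -> str:
    # 1. Strip obviously sensitive fields before the prompt leaves the company.
    redacted = SSN.sub("[REDACTED-SSN]", EMAIL.sub("[REDACTED-EMAIL]", prompt))
    # 2. Ask the provider not to retain the request for future training.
    answer = call_llm(redacted, retain_data=False)
    # 3. Screen the output before it reaches an end user.
    if toxicity_score(answer) > 0.5:
        return "Response withheld: flagged by content moderation."
    return answer
```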

“That is really the burden of our AI team over here,” Benioff told the crowd. “They have to really be able to use these next generation models, but have that capability to deliver a trusted experience to our customers.”

At the core of the problem with generative AI—or the opportunity, if you’re Benioff—is the fact that these models have been trained on vast quantities of data, not all of it reliable, and that they’re constantly learning from the information users feed into them. For companies in tightly regulated industries like banking, or even in less regulated industries where companies still want to keep user data secure, widely available generative AI tools offer little guarantee about where information is coming from or where it’s going. In Salesforce’s own research, the results of which it released Monday, the company found that while 61% of the sales, service, marketing, and commerce employees surveyed said they are using or plan to use generative AI tools, nearly 60% also said they don’t know how to do so while drawing only on trusted data sources and keeping company data secure. Nearly three-quarters of those surveyed said they believe generative AI creates new security risks.

While some companies—even prominent tech companies like Apple and Amazon—are already banning the use of ChatGPT at work, Salesforce is offering organizations access to these generative AI tools in a way that, it says, is less likely to lead to hallucinations and chatbots gone rogue. “The main reason why hallucinations happen is because the model on the other side doesn’t have enough data, so it’s acting on either old data or it just doesn’t have enough context to give you an answer that makes sense,” Patrick Stokes, executive vice president of platform at Salesforce, tells Fast Company. Enabling companies to use these models on their own data, Stokes says, will “mitigate the amount of hallucinations that happen, because the model has more data.”
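What Stokes describes resembles what practitioners call grounding, or retrieval-augmented generation: rather than letting the model answer from whatever it absorbed in training, the prompt is packed with the company’s own current records. A rough sketch of that idea follows, using toy data and a placeholder model call, and assuming nothing about how Salesforce actually wires it up.

```python
COMPANY_DOCS = {
    "returns": "Customers may return items within 30 days with a receipt.",
    "warranty": "Hardware carries a one-year limited warranty.",
}

def call_llm(prompt: str) -> str:
    # Placeholder for a real model client.
    return "(model response grounded in the supplied context)"

def grounded_answer(question: str) -> str:
    # Pull the company's own records that mention terms from the question,
    # then instruct the model to answer only from that context.
    context = [text for key, text in COMPANY_DOCS.items() if key in question.lower()]
    prompt = (
        "Answer using only the context below. If it is insufficient, say you don't know.\n\n"
        "Context:\n" + "\n".join(context) + "\n\nQuestion: " + question
    )
    return call_llm(prompt)

print(grounded_answer("What is your returns policy?"))
```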

In some ways, all of this makes Salesforce a sort of laboratory for a more responsible, private approach to generative AI technology. Already, governments in Europe have expressed concerns over user privacy, prompting OpenAI to release a feature that allows users to delete their data. With its products, Salesforce is promising clients their data will never be retained in the first place.

Of course, even as Benioff frames Salesforce’s products as the “trust layer” for generative AI, his company is also playing an active role in building up the very technology it’s attempting to police. Along with these product announcements Monday, Salesforce also announced it was doubling its generative AI fund—which has already invested in AI companies Anthropic and Cohere—from $250 million to $500 million. After all, the bigger the industry becomes, the more work there is for Salesforce.
