
Data loss prevention is most often discussed in terms of fortifying networks and other sensitive systems against external attacks by threat actors. We talk about updates, patches, multi-factor authentication, and user authorization. When AI systems and AI-dependent systems enter the discussion, the focus tends to shift to what people send out into the world, wittingly or unwittingly, through lax digital security measures or lapses in following them; in other words, internal threats.

The rapid, widespread introduction of large language models (LLMs), such as ChatGPT, onto the world stage has added another threat surface to the mix: “conversations” with LLMs in which the prompts received and the answers provided are recorded and saved, not for posterity, but for future model training and, subsequently, as additional content for that model’s knowledge base. If a prompt includes sensitive or proprietary company data, intellectual property (IP), or personally identifiable information (PII) about employees or customers, the organization immediately faces risks from every direction: system integrity; consumer, shareholder, and employee trust; reputational harm; and financial impacts. And nothing can be done to undo that loss of data.

Human-dependent safeguards, such as employee training and strong system configurations, are part of a solid first line of defense. But LLMs require a new, specialized approach to “perimeter defense,” not so much because of their structure or purpose as because of the way we interact with them. We know they are systems with access to staggering amounts of information and incredible compute power. And yet using them can have a lulling effect that suggests a safe, private space.

This false sense of privacy, even with LLMs that are not “conversational,” can trigger a reflex not unlike that produced by a well-crafted phishing email: a person who ought to, and often does, know better does exactly what they should not do, and then it’s game over. The information is irretrievably out there.

The initial response to LLMs by organizations from school districts to banking behemoths was a total ban. But that is a stop-gap measure at best. A better solution, and one that is now available to organizations, is CalypsoAI Moderator: a user-friendly, proactive governance and usage system. It reviews prompts for organization-specific issues, such as bias, toxicity, profanity, and personal information, as well as for source code, and blocks flagged prompts from being sent to the LLMs of choice, preventing company-related content from becoming part of another organization’s knowledge base. It also scans LLM responses for content such as malware and spyware and prevents it from entering your organization’s ecosystem.
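The scan-then-block workflow described here can be sketched conceptually. The pattern set, category names, and function names below are illustrative assumptions for a generic prompt-screening gateway, not CalypsoAI’s actual implementation, which applies far richer, organization-specific policies:

```python
import re

# Illustrative detection patterns for a few sensitive-content categories.
# A real system would use configurable, organization-specific policies.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "source_code": re.compile(r"\b(?:def |class |#include|import |function\s*\()"),
}

def scan_prompt(prompt: str) -> list:
    """Return the list of sensitive-content categories found in a prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]

def gateway(prompt: str) -> dict:
    """Screen a prompt before it reaches an external LLM.

    Blocks the prompt if any category matches, so the user can edit and
    resubmit; otherwise marks it as safe to forward.
    """
    findings = scan_prompt(prompt)
    return {"allowed": not findings, "reasons": findings}
```

For example, `gateway("Email jane.doe@example.com the draft")` would come back blocked with `"email"` among the reasons, while an innocuous question would pass through. Responses returning from the LLM could be screened symmetrically before entering the organization’s ecosystem.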

CalypsoAI Moderator has a clear, simple interface and responds with personable replies when prompts need to be edited to meet acceptable use policies. Because it is LLM-agnostic, it can be used with any language model and enables the organization to offer multiple options: for instance, ChatGPT, Cohere, and AI21 for general use; Bloomberg for Finance teams; Harvey for Compliance, Legal, and Lobbying teams; and other specialized, task-specific options to fit the organization’s business needs.

Scalable across the enterprise, CalypsoAI Moderator is the first solution to provide a safe, secure research environment and fine-grained content supervision with no effect on response speed. Chat histories are fully tracked for usage, content, and cost, and are fully auditable. With a setup time of less than 60 minutes, CalypsoAI Moderator is a groundbreaking solution that can accelerate deployment of reliable, resilient, trustworthy AI within your organization today.