Sunday, February 23, 2025

Rick Caccia, CEO and Co-Founder of WitnessAI – Interview Series


Rick Caccia, CEO and Co-Founder of WitnessAI, has extensive experience launching security and compliance products. He has held leadership roles in product and marketing at Palo Alto Networks, Google, and Symantec. Caccia previously led product marketing at ArcSight through its IPO and subsequent operations as a public company, and served as the first Chief Marketing Officer at Exabeam. He holds multiple degrees from the University of California, Berkeley.

WitnessAI is developing a security platform focused on ensuring the safe and secure use of AI in enterprises. With every major technological shift, such as web, mobile, and cloud computing, new security challenges emerge, creating opportunities for new industry leaders. AI represents the next frontier in this evolution.

The company aims to establish itself as a leader in AI security by combining expertise in machine learning, cybersecurity, and large-scale cloud operations. Its team brings deep experience in AI development, reverse engineering, and multi-cloud Kubernetes deployment, addressing the critical challenges of securing AI-driven technologies.

What inspired you to co-found WitnessAI, and what key challenges in AI governance and security were you aiming to solve?

When we first started the company, we thought that security teams would be concerned about attacks on their internal AI models. Instead, the first 15 CISOs we spoke with said the opposite: widespread corporate LLM rollout was a long way off, but the urgent problem was protecting their employees' use of other people's AI apps. We took a step back and saw that the problem wasn't fending off scary cyberattacks, it was safely enabling companies to use AI productively. While governance may be less sexy than cyberattacks, it's what security and privacy teams actually needed. They needed visibility into what their employees were doing with third-party AI, a way to enforce acceptable use policies, and a way to protect data without blocking use of that data. So that's what we built.

Given your extensive experience at Google Cloud, Palo Alto Networks, and other cybersecurity firms, how did these roles influence your approach to building WitnessAI?

I've spoken with many CISOs over the years. One of the most common things I hear from CISOs today is, "I don't want to be 'Doctor No' when it comes to AI; I want to help our employees use it to be better." As someone who has worked with cybersecurity vendors for a long time, this is a very different statement. It's more reminiscent of the dotcom era, back when the Web was a new and transformative technology. When we built WitnessAI, we specifically started with product capabilities that helped customers adopt AI safely; our message was that this stuff is like magic, and of course everyone wants to experience magic. I think security companies are too quick to play the fear card, and we wanted to be different.

What sets WitnessAI apart from other AI governance and security platforms on the market today?

Well, for one thing, most other vendors in the space are focused primarily on the security half, and not on the governance half. To me, governance is like the brakes on a car. If you really want to get somewhere quickly, you need effective brakes in addition to a strong engine. No one is going to drive a Ferrari very fast if it has no brakes. In this case, your company using AI is the Ferrari, and WitnessAI is the brakes and steering wheel.

In contrast, most of our competitors focus on theoretical scary attacks on an organization's AI model. That is a real problem, but it's a different problem than getting visibility and control over how my employees are using any of the 5,000+ AI apps already on the internet. It's a lot easier for us to add an AI firewall (and we have) than it is for the AI firewall vendors to add effective governance and risk management.

How does WitnessAI balance the need for AI innovation with enterprise security and compliance?

As I said earlier, we believe that AI should be like magic – it can help you do amazing things. With that in mind, we think AI innovation and security are linked. If your employees can use AI safely, they'll use it often and you'll pull ahead. If you apply the traditional security mindset and lock it down, your competitor won't do that, and they will pull ahead. Everything we do is about enabling safe adoption of AI. As one customer told me, "This stuff is magic, but most vendors treat it like it was black magic, scary and something to fear." At WitnessAI, we're helping to enable the magic.

Can you talk about the company's core philosophy regarding AI governance – do you see AI security as an enabler rather than a restriction?

We regularly have CISOs come up to us at events where we've presented, and they tell us, "Your competitors are all about how scary AI is, and you are the only vendor that's telling us how to actually use it effectively." Sundar Pichai at Google has said that "AI could be more profound than fire," and that's an interesting metaphor. Fire can be incredibly destructive, as we've seen recently. But controlled fire can make steel, which accelerates innovation. Sometimes at WitnessAI we talk about creating the innovation that allows our customers to safely direct AI "fire" to create the equivalent of steel. Alternatively, if you think AI is akin to magic, then perhaps our goal is to give you a magic wand, to direct and control it.

In either case, we absolutely believe that safely enabling AI is the goal. Just to give you an example, there are lots of data loss prevention (DLP) tools; it's a technology that's been around forever. People try to apply DLP to AI use, and maybe the DLP browser plug-in sees that you've typed in a long prompt asking for help with your work, and that prompt inadvertently has a customer ID number in it. What happens? The DLP product blocks the prompt from going out, and you never get an answer. That's restriction. Instead, with WitnessAI, we can identify the same number, silently and surgically redact it on the fly, and then unredact it in the AI response, so that you get a useful answer while also keeping your data safe. That's enablement.
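To make the redact-and-unredact flow concrete, here is a minimal sketch in Python. This is not WitnessAI's implementation; the ID format, placeholder scheme, and function names are all invented for illustration.

```python
import re

# Toy redaction flow: scrub sensitive identifiers before a prompt leaves the
# network, then restore them in the model's response so the user still gets a
# useful answer. "CUST-######" is an assumed customer-ID format for the demo.
CUSTOMER_ID = re.compile(r"\bCUST-\d{6}\b")

def redact(prompt: str):
    """Replace each customer ID with a placeholder; return text plus mapping."""
    mapping = {}
    def _sub(match):
        token = f"<ID_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token
    return CUSTOMER_ID.sub(_sub, prompt), mapping

def unredact(response: str, mapping: dict) -> str:
    """Restore the original identifiers in the model's response."""
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response

safe_prompt, ids = redact("Summarize the complaint from CUST-123456.")
print(safe_prompt)  # the model only ever sees the placeholder, not the real ID
model_reply = "Here is a summary for <ID_0>: ..."  # stand-in for an LLM call
print(unredact(model_reply, ids))  # the real ID reappears only on the user's side
```

The point of the round trip is that the prompt is never blocked: the sensitive value is swapped out in flight and swapped back in on return.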

What are the biggest risks enterprises face when deploying generative AI, and how does WitnessAI mitigate them?

The first is visibility. Many people are surprised to learn that the AI application universe isn't just ChatGPT and now DeepSeek; there are literally thousands of AI apps on the internet, and enterprises take on risk from employees using those apps. So the first step is getting visibility: which AI apps are my employees using, what are they doing with those apps, and is it risky?

The second is control. Your legal team has built a comprehensive acceptable use policy for AI, one that ensures the safety of customer data, citizen data, and intellectual property, as well as employee safety. How will you enforce this policy? Is it in your endpoint security product? In your firewall? In your VPN? In your cloud? What if they're all from different vendors? So, you need a way to define and enforce acceptable use policy that's consistent across AI models, apps, clouds, and security products.

The third is protection of your own apps. In 2025, we'll see much faster adoption of LLMs within enterprises, followed by faster rollout of chat apps powered by those LLMs. So, enterprises need to make sure not only that the apps are safe, but also that the apps don't say "dumb" things, like recommend a competitor.

We address all three. We provide visibility into which apps people are accessing and how they're using those apps, policy that's based on who you are and what you are trying to do, and very effective tools for stopping attacks such as jailbreaks or unwanted behaviors from your bots.

How does WitnessAI's AI observability feature help companies track employee AI usage and prevent "shadow AI" risks?

WitnessAI connects to your network easily and silently builds a catalog of every AI app (and there are literally thousands of them on the internet) that your employees access. We tell you where those apps are located, where they host their data, and so on, so that you understand how risky those apps are. You can turn on conversation visibility, where we use deep packet inspection to observe prompts and responses. We can classify prompts by risk and by intent. Intent might be "write code" or "write a corporate contract." That's important because we then let you write intent-based policy controls.
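To illustrate what "classify prompts by intent" means in practice, here is a deliberately simple sketch. A real system would presumably use an ML classifier; this keyword lookup, and the labels and keywords in it, are only a toy stand-in.

```python
# Toy intent classifier: map a prompt to a coarse intent label so that
# intent-based policy can be applied downstream. Keyword matching is a
# stand-in for whatever model a real product would use.
INTENT_KEYWORDS = {
    "write code": ("function", "script", "bug", "python"),
    "write a corporate contract": ("contract", "agreement", "clause"),
}

def classify_intent(prompt: str) -> str:
    """Return the first intent whose keywords appear in the prompt."""
    text = prompt.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(word in text for word in keywords):
            return intent
    return "other"

print(classify_intent("Help me fix this Python bug"))   # -> write code
print(classify_intent("Draft a non-compete clause"))    # -> write a corporate contract
```

Once every prompt carries an intent label, policy rules can key off the label rather than off brittle string matching against raw prompts.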

What role does AI policy enforcement play in ensuring corporate AI compliance, and how does WitnessAI streamline this process?

Compliance means ensuring that your company is following regulations or policies, and there are two parts to ensuring compliance. The first is that you must be able to identify problematic activity. For example, I need to know that an employee is using customer data in a way that might run afoul of a data protection regulation. We do that with our observability platform. The second part is describing and enforcing policy against that activity. You don't want to merely know that customer data is leaking, you want to stop it from leaking. So, we built a unique AI-specific policy engine, Witness/CONTROL, that lets you easily build identity- and intention-based policies to protect data, prevent harmful or illegal responses, and so on. For example, you can build a policy that says something like, "Only our legal department can use ChatGPT to write corporate contracts, and if they do so, automatically redact any PII." Easy to say, and with WitnessAI, easy to enforce.
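Witness/CONTROL's rule language is proprietary, but the general shape of an identity- and intent-based policy check can be sketched. Everything below (the `Policy` class, `evaluate`, the group and app names) is invented for illustration, not the product's API.

```python
from dataclasses import dataclass

# Hypothetical identity- and intent-based policy check. A rule binds an app
# and a prompt intent to the identity groups allowed to do it, plus an action
# (here: whether to redact PII before the prompt leaves the network).
@dataclass
class Policy:
    app: str                 # which AI app the rule governs
    intent: str              # classified intent of the prompt
    allowed_groups: set      # identity groups permitted for this intent
    redact_pii: bool         # scrub PII if the action is allowed

def evaluate(policy: Policy, user_groups: set, app: str, intent: str):
    """Return (allowed, redact) for a prompt, given who sent it and why."""
    if app != policy.app or intent != policy.intent:
        return True, False   # rule doesn't apply; no extra action
    if policy.allowed_groups & user_groups:
        return True, policy.redact_pii
    return False, False      # rule matched but the user isn't permitted

# "Only legal can use ChatGPT to write corporate contracts; redact PII."
rule = Policy("chatgpt", "write corporate contract", {"legal"}, True)
print(evaluate(rule, {"legal"}, "chatgpt", "write corporate contract"))        # allowed, with redaction
print(evaluate(rule, {"engineering"}, "chatgpt", "write corporate contract"))  # blocked
```

The key design point the example tries to capture is that the decision is a function of identity, app, and intent together, rather than a blanket allow/deny per app.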

How does WitnessAI address concerns around LLM jailbreaks and prompt injection attacks?

We have a hardcore AI research team, and they're really sharp. Early on, they built a system to create synthetic attack data, in addition to pulling in widely available training data sets. As a result, we've benchmarked our prompt injection detection against everything out there; we're over 99% effective and regularly catch attacks that the models themselves miss.

In practice, most companies we speak with want to start with employee app governance, and then a bit later they roll out an AI customer app based on their internal data. So, they use Witness to protect their people, then they turn on the prompt injection firewall. One system, one consistent way to build policies, easy to scale.

What are your long-term goals for WitnessAI, and where do you see AI governance evolving in the next five years?

So far, we've only talked about a person-to-chat-app model here. Our next phase will be to address app-to-app, i.e., agentic AI. We've designed the APIs in our platform to work equally well with both agents and humans. Beyond that, we believe we've built a new way to get network-level visibility and policy control in the AI age, and we'll be growing the company with that in mind.

Thank you for the great interview; readers who wish to learn more should visit WitnessAI.
