Guardrails AI wants to crowdsource fixes for GenAI model problems


It doesn’t take much to get GenAI spouting mistruths.

This past week provided an example, with Microsoft’s and Google’s chatbots declaring a Super Bowl winner before the game even started. The real problems start, though, when GenAI’s hallucinations get harmful — endorsing torture, reinforcing ethnic and racial stereotypes and writing persuasively about conspiracy theories.

A growing number of vendors, from incumbents like Nvidia and Salesforce to startups like CalypsoAI, offer products they claim can filter unwanted, toxic content out of GenAI. But these products are black boxes; short of testing each one independently, it’s impossible to know how they compare, or whether they actually deliver on their claims.

Shreya Rajpal saw this as a major problem — and founded a company, Guardrails AI, to attempt to solve it.

“Most organizations … are struggling with the same set of problems around responsibly deploying AI applications and struggling to figure out what’s the best and most efficient solution,” Rajpal told TechCrunch in an email interview. “They often end up reinventing the wheel in terms of managing the set of risks that are important to them.”

To Rajpal’s point, surveys suggest complexity — and by extension risk — is a top barrier standing in the way of organizations embracing GenAI.

A recent poll from Intel subsidiary Cnvrg.io found that compliance and privacy, reliability, the high cost of implementation and a lack of technical skills were concerns shared by around a fourth of companies implementing GenAI apps. In a separate survey from Riskonnect, a risk management software provider, over half of execs said that they were worried about employees making decisions based on inaccurate information from GenAI tools.

Rajpal, who previously worked at self-driving startup Drive.ai and, after Apple’s acquisition of Drive.ai, in Apple’s special projects group, co-founded Guardrails with Diego Oppenheimer, Safeer Mohiuddin and Zayd Simjee. Oppenheimer formerly led Algorithmia, a machine learning operations platform, while Mohiuddin and Simjee held tech and engineering lead roles at AWS.

In some ways, what Guardrails offers isn’t all that different from what’s already on the market. The startup’s platform acts as a wrapper around GenAI models, specifically open source and proprietary (e.g. OpenAI’s GPT-4) text-generating models, to make those models ostensibly more trustworthy, reliable and secure.

Image Credits: Guardrails AI

But where Guardrails differs is its open source business model — the platform’s codebase is available on GitHub, free to use — and crowdsourced approach.

Through a marketplace called the Guardrails Hub, Guardrails lets developers submit modular components called “validators” that probe GenAI models for certain behavioral, compliance and performance metrics. Validators can be deployed, repurposed and reused by other devs and Guardrails customers, serving as the building blocks for custom GenAI model-moderating solutions.

“With the Hub, our goal is to create an open forum to share knowledge and find the most effective way to [further] AI adoption — but also to build a set of reusable guardrails that any organization can adopt,” Rajpal said.

Validators in the Guardrails Hub range from simple rule-based checks to algorithms that detect and mitigate issues in models. There are about 50 at present, from hallucination and policy-violation detectors to filters for proprietary information and insecure code.
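For a sense of how those pieces fit together, here’s a minimal sketch of applying a Hub validator with the open source guardrails Python package. It assumes a Hub-installed toxic language check; exact module names, parameters and return types may differ from what the library actually ships.

```python
# Minimal sketch, assuming the open source guardrails-ai package and a
# Hub-installed ToxicLanguage validator; exact names and signatures
# may differ from the shipped library.
#
#   pip install guardrails-ai
#   guardrails hub install hub://guardrails/toxic_language

from guardrails import Guard
from guardrails.hub import ToxicLanguage

# A Guard wraps one or more validators around a model's text output.
# on_fail="exception" raises on a failed check instead of silently
# passing the text through.
guard = Guard().use(ToxicLanguage, threshold=0.5, on_fail="exception")

# Validate a model response before it reaches the end user.
raw_output = "..."  # text returned by GPT-4 or an open source model
outcome = guard.validate(raw_output)
print(outcome.validation_passed)
```

Because a single Guard can stack several validators, teams can compose Hub-sourced checks into a moderation pipeline rather than writing one from scratch.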

“Most companies will do broad, one-size-fits-all checks for profanity, personally identifiable information and so on,” Rajpal said. “However, there’s no one, universal definition of what constitutes acceptable use for a specific organization and team. There’s org-specific risks that need to be tracked — for example, comms policies across organizations are different. With the Hub, we enable people to use the solutions we provide out of the box, or use them to get a strong starting point solution that they can further customize for their particular needs.”
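The same interface supports that customization: developers can register their own validators alongside the Hub’s. Below is a hedged sketch of an org-specific check following the register_validator pattern the open source package documents; the “forbidden-phrases” name and the banned-phrase list are hypothetical, and import paths vary between library versions.

```python
# Hedged sketch of an org-specific validator; the "forbidden-phrases"
# name and phrase list are hypothetical, and these import paths may
# vary across guardrails versions.
from typing import Any, Dict

from guardrails.validators import (
    FailResult,
    PassResult,
    ValidationResult,
    Validator,
    register_validator,
)


@register_validator(name="forbidden-phrases", data_type="string")
class ForbiddenPhrases(Validator):
    """Fails if output contains phrases an org's comms policy bans."""

    PHRASES = ("guaranteed returns", "medical advice")  # hypothetical policy

    def validate(self, value: Any, metadata: Dict) -> ValidationResult:
        # Flag any banned phrase found in the (lowercased) model output.
        hits = [p for p in self.PHRASES if p in value.lower()]
        if hits:
            return FailResult(
                error_message=f"Output contains banned phrases: {hits}"
            )
        return PassResult()
```

Once registered, such a validator plugs into a Guard the same way a Hub validator does, which is what lets the out-of-the-box checks serve as a starting point rather than a ceiling.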

A hub for model guardrails is an intriguing idea. But the skeptic in me wonders whether devs will bother contributing to a platform — and a nascent one at that — without the promise of some form of compensation.

Rajpal believes they will, if for no other reason than recognition, and a selfless interest in helping the industry build toward “safer” GenAI.

“The Hub allows developers to see the types of risks other enterprises are encountering and the guardrails they’re putting in place to solve for and mitigate those risks,” she added. “The validators are an open source implementation of those guardrails that orgs can apply to their use cases.”

Guardrails AI, which isn’t yet charging for any services or software, recently raised $7.5 million in a seed round led by Zetta Venture Partners with participation from Factory, Pear VC, Bloomberg Beta, GitHub Fund and angels including renowned AI expert Ian Goodfellow. Rajpal says the proceeds will be put toward expanding Guardrails’ six-person team and additional open source projects.

“We talk to so many people — enterprises, small startups and individual developers — who are stuck on being able to ship GenAI applications because of the lack of assurance and risk mitigation needed,” she continued. “This is a novel problem that hasn’t existed at this scale, because of the advent of ChatGPT and foundation models everywhere. We want to be the ones to solve this problem.”
