I ran our AI demo on a live production error. Here’s what happened.
I joined Celigo in 2017. This is a photo from my first visit to our product lab in Hyderabad.

In 2019, before it was cool, we were running a GPU-filled machine to build a natural language processing model that could classify error messages. That was our first real foray into AI. It was foundational, and error management became one of the cornerstones of Celigo’s product.
I’ve been watching this platform evolve for almost a decade. And when we announced our AI launch this year, it was a real “back to the future” moment. It was the future because we raised the bar on what our customers should expect from automation platforms. We shipped:
- Agent Builder so you can build AI agents that take action across your systems with low-code simplicity.
- MCP Server to connect any AI agent to your systems through a secure, enterprise-grade layer — giving agents the access they need without sacrificing oversight or control.
- Celigo Ora, which gives anyone on your team a single, natural language interface to build, manage, and troubleshoot integrations across the entire platform.
But it was a look back too, because all three of these capabilities are built on the same foundation we’ve been extending since 2017.
I co-hosted a webinar last week where we took a look at all three capabilities, but one of them stood out for me.
The live demo of Celigo Ora: handling a real error with no safety net
When I demoed Ora live in front of hundreds of customers, I went back to our roots. I pulled up our own live Celigo instance, the one our entire company runs on, and I found a flow with real errors sitting in it.
The flow I found was for our professional services team. It synchronizes Jira tasks with Smartsheet. Smartsheet is for project tracking; Jira is for individual task management. We use Smartsheet in our communication with customers, so it matters when this breaks.
I didn’t know exactly what was wrong with it, but I found four errors and decided to let Ora work through it. No safety net.
The first thing it did wasn't to jump to a diagnosis. It asked itself the same questions a human would ask: was this working before? Did something change recently? It ran a historical analysis, found similar errors in the past, and noticed that the flow had also succeeded within the last 21 days. So it wasn't fundamentally broken.
Then it pulled documentation from Smartsheet to understand the specific 404 error it was seeing. About a minute later, it came back with a recommendation. This looks like a record-specific issue, it said. Not systemic. The flow is running most of the time. It’s just failing on specific records.
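That distinction, record-specific versus systemic, comes down to a simple signal: is the flow mostly succeeding and only failing on particular records? Here's a minimal Python sketch of that kind of heuristic; the function name and the threshold are my own illustration, not Ora's internals.

```python
def classify_failure(total_runs: int, failed_runs: int, succeeded_recently: bool) -> str:
    # If the flow succeeds most of the time and has run cleanly in the
    # recent window, failures are probably tied to specific records.
    failure_rate = failed_runs / total_runs if total_runs else 1.0
    if succeeded_recently and failure_rate < 0.1:  # threshold is illustrative
        return "record-specific"
    return "systemic"

print(classify_failure(total_runs=200, failed_runs=4, succeeded_recently=True))
# -> record-specific
```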
From there, I asked it to assign every error like this to me.
It found all four errors. Then it paused and asked me to confirm before doing anything. That’s an intentional step. We’ve designed Ora, as well as the rest of our AI capabilities, for human-in-the-loop reviews and guardrails.
That moment of asking before acting is how we make Ora safe by design. It won't make changes on your platform without your explicit approval. Because right now, I don't think any of us are quite ready to hand the automation keys fully over to the AI. Maybe that changes. But for now, especially while Ora is in beta, we think that's the right call.
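If you were building that confirm-before-acting loop yourself, the control flow is simple. Here's a minimal Python sketch of the pattern; all of the names and the sample errors are hypothetical, not Ora's actual implementation.

```python
def propose_action(errors: list[str]) -> str:
    # The agent only proposes; nothing executes yet.
    return f"Assign {len(errors)} matching errors to the current user"

def apply_action(errors: list[str]) -> None:
    # Reached only after explicit approval.
    print(f"Assigned {len(errors)} errors.")

def confirm_then_act(errors: list[str]) -> None:
    proposal = propose_action(errors)
    answer = input(f"Proposed: {proposal}. Proceed? [y/N] ")
    if answer.strip().lower() == "y":
        apply_action(errors)
    else:
        print("No changes made.")  # the safe default is to do nothing

confirm_then_act(["404 on record A", "404 on record B",
                  "404 on record C", "404 on record D"])
```

The point of the pattern is the default: if nobody says yes, nothing changes.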
Here’s what we heard from you
We ran out of time to answer all the questions we received during the webinar, so I thought I would capture some here for the record.
The biggest concern you have about AI agents is governance.
Before the demo, we ran a poll. We asked the audience: when it comes to AI agents in your business, what’s your biggest concern right now?
Governance, guardrails, and trust came back as the clear number one. Connectivity, the complexity of connecting agents to systems, was second.
We hear this from customers constantly. That’s why governance is foundational to our platform. We know the question our customers are asking isn’t whether AI can do something useful. The question is: can you trust it?
“What if my workflow isn’t fully agentic and has rule-based components?”
This is the normal case, not the exception. We have an accounts receivable example that illustrates this. The agent reads an email from an inbox: that's deterministic. Looking up the customer in a system of record by domain: also deterministic. But what is the email actually asking? Is this customer happy or frustrated? What should happen next based on that? That's where you bring in the LLM step. And then the actions downstream — send an account statement, retrieve the W-9 — those can be deterministic again.
It's not one or the other. It's knowing where in the workflow judgment is actually needed, and putting the intelligence there. The agents themselves are also API-addressable, so they can be decoupled from flows entirely and called directly if that's what you need.
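To make the shape of that accounts receivable example concrete, here's a minimal Python sketch of the pattern. Every name here is hypothetical, and the LLM step is stubbed with a constant; the only point is where the judgment call sits between deterministic steps.

```python
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    body: str

def lookup_customer(domain: str) -> dict:
    # Deterministic: query the system of record by email domain (stubbed).
    return {"domain": domain, "name": "Acme Corp"}

def classify_intent(body: str) -> str:
    # LLM step: judgment is needed to decide what the email is asking and
    # whether the customer is happy or frustrated. Stubbed with a constant;
    # in practice this is the one call that goes to a model.
    return "requesting_statement"

def send_account_statement(customer: dict) -> None:
    # Deterministic downstream action.
    print(f"Sending statement to {customer['name']}")

def handle_email(email: Email) -> None:
    domain = email.sender.split("@")[-1]  # deterministic parsing
    customer = lookup_customer(domain)    # deterministic lookup
    intent = classify_intent(email.body)  # the LLM judgment call
    if intent == "requesting_statement":
        send_account_statement(customer)  # deterministic again

handle_email(Email(sender="ap@acme.com", body="Could you send our latest statement?"))
```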
“How do guardrails actually work? Are they there to stop the agent from doing something it shouldn’t?”
Yes, but they’re more flexible than people expect. Guardrails are just another kind of flow step. They can sit before the agent, after the agent, or both, and they can be used outside of the agent context entirely.
Out of the box, we have things like PII detection — don’t leak Social Security numbers — and hate speech filtering. But you can also write your own in natural language. I wrote one myself that strips out any kanji characters from a text string. You can be as specific as you need to be. The point is you’re validating what goes into the agent and what comes out before anything executes downstream.
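As a rough illustration of what that kanji guardrail does in effect, here's a minimal Python sketch. In the product you'd express the rule in natural language rather than code, and the Unicode range below is an approximation on my part: the CJK Unified Ideographs block, which kanji shares with Chinese hanzi.

```python
import re

# CJK Unified Ideographs (U+4E00..U+9FFF). Approximate: kanji shares this
# block with hanzi, and supplementary ideograph blocks exist beyond it.
KANJI = re.compile(r"[\u4e00-\u9fff]")

def strip_kanji(text: str) -> str:
    # Hypothetical guardrail body: remove kanji before anything downstream
    # executes on the value.
    return KANJI.sub("", text)

print(strip_kanji("Order 注文 #1234 confirmed 確認"))  # -> "Order  #1234 confirmed "
```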
“Is there a risk that using your AI capabilities means our data is being used to train the model?”
No. Underneath Celigo Ora we use an enterprise OpenAI license that explicitly prohibits the use of any customer data for training. If you want the specifics, they're in our Trust Center, which customers have full access to. I wanted to be really clear on that one because it's an obvious and legitimate question, and it deserves a straight answer.
“Can I train Ora to automatically assign errors to me or a teammate? Can I assign errors to Ora itself?”
I think there’s a temptation right now to assume every problem is an AI problem. Some of it isn’t. Detecting a specific 404 error from Smartsheet and routing it might just be a basic rule. So we’re looking at this holistically: proactive error management that builds on the foundation we’ve had since 2019, combined with the ability to define rules where rules make sense, combined with Ora where judgment is actually needed. More on that soon.
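To show what "a basic rule" means in this context, here's a minimal Python sketch of a deterministic routing rule; the field names and the owner address are hypothetical, not how our error management configures it.

```python
def route_error(error: dict) -> str | None:
    # Plain rule, no model call: a Smartsheet 404 goes straight to a named owner.
    if error.get("connector") == "smartsheet" and error.get("status") == 404:
        return "assign:owner@example.com"
    # Anything the rule doesn't cover falls through to where judgment lives.
    return None

print(route_error({"connector": "smartsheet", "status": 404}))
# -> assign:owner@example.com
```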
Top of mind: governance and guardrails
The recurring theme of governance and guardrails is the right concern to have.
In our view, IT should still define the rules: role-based access control, what an agent is allowed to do, and so on. But within those constraints, a monitor-only user can go figure out what’s going on with a flow. Someone with edit access in a given workspace can go ahead and make a change. The goal is that more parts of the business can get engaged with automation because the platform makes it safe for them to step in.
Want to see it in your own environment? Request a demo and our team can help you get your next AI win.