2 min read

Improved UX for branching and filters

Published Aug 20, 2025 Updated Jan 27, 2026
Priyanka Koundal

Product Manager

We’ve introduced a redesigned rule editor for filters and branching in Flow Builder, focused on improving readability and control when configuring logic.

The updated interface supports both simple and nested conditions with precise operator handling, reusable actions, and a consistent experience across rule types, with no scripting required.

Filters and branching demo 

See what’s new in the rule editor, including easier logic grouping with AND/OR/NOT, searchable operands, drag-and-drop reordering, and a unified experience across filters and branches.

A cleaner, more visual layout for logic

The rule editor has been restructured for readability. With collapsible branches and nested condition groups, it’s easier than ever to navigate even complex logic structures.

Define logic with AND, OR, and NOT

You can now define rule logic using AND, OR, and NOT operators—each with clear visual labels and structured condition blocks. Expand or collapse groups as needed to focus only on what matters.
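Conceptually, a nested rule evaluates its condition groups recursively. The sketch below illustrates that evaluation model only — it is not the rule editor's internal implementation, and the node shape and operator names are assumptions for illustration:

```javascript
// Minimal sketch of how nested AND/OR/NOT condition groups evaluate.
// A group combines child results with one operator; children may be
// leaf conditions or nested groups.
function evaluate(node, record) {
  if (node.operator === 'AND') return node.children.every((c) => evaluate(c, record));
  if (node.operator === 'OR') return node.children.some((c) => evaluate(c, record));
  if (node.operator === 'NOT') return !evaluate(node.children[0], record);
  // Leaf condition: { field, op, value }
  const actual = record[node.field];
  if (node.op === 'equals') return actual === node.value;
  if (node.op === 'greaterThan') return actual > node.value;
  return false;
}

// Example: status is "open" AND (priority > 2 OR NOT assigned)
const rule = {
  operator: 'AND',
  children: [
    { field: 'status', op: 'equals', value: 'open' },
    {
      operator: 'OR',
      children: [
        { field: 'priority', op: 'greaterThan', value: 2 },
        { operator: 'NOT', children: [{ field: 'assigned', op: 'equals', value: true }] },
      ],
    },
  ],
};

console.log(evaluate(rule, { status: 'open', priority: 1, assigned: false })); // true
```

Collapsing a group in the editor hides one subtree of this structure without changing how it evaluates.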

Search, customize, and edit operands

Fields are searchable, operators are automatically scoped by datatype, and you can add custom values directly, so missing mock data doesn’t block rule setup.

Clone, clear, and reconfigure without starting over

You can clone individual rules or clear condition sets without deleting the entire block. This gives you more control and faster ways to adjust your logic without rewriting it.

Drag, drop, and reorder rules and groups

Rules and groups are now fully movable via drag-and-drop, making it easy to restructure your logic visually and quickly.

Delete conditions or entire branches

Delete individual conditions or entire branching blocks while choosing which branch to retain as the primary path, so your downstream flow steps remain intact.

Consistent across filters and branches

The new rule editor works the same way in both filter and branching contexts. Whether you’re configuring input/output filters or conditional branches, you’ll get a unified, predictable experience.

Redesigned for clarity and control

The new rule editor removes unnecessary complexity, making it faster to define and manage logic across your flows without relying on scripts.


4 min read

Resolve issues faster with trace key visibility

Published Jul 15, 2025 Updated Feb 13, 2026
Vani Amara

Principal Product Manager

In any integration workflow, visibility into individual records is critical. Whether you’re syncing customer data, processing transactions, or managing inventory, even minor data issues (such as a missing value or formatting error) can cause a workflow to fail and impact downstream systems.

Celigo addresses this with built-in error management that combines automatic resolution with tools that make it easier for both technical and business users to investigate and take action when needed.

A key part of this capability is the trace key, which, when enabled, provides granular tracking and visibility into each individual record as it moves through the integration flow. The trace key helps uniquely identify records, allowing Celigo’s runtime engine to detect duplicates, isolate errors, and apply auto-resolution logic more effectively.
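The duplicate-detection idea behind trace keys can be sketched as follows. This is an illustration of the concept, not Celigo's runtime code; the trace key here is simply the record's unique identifier, such as an order number:

```javascript
// Illustrative sketch of trace-key-based duplicate detection.
// The trace key uniquely identifies a record as it moves through a flow.
const seen = new Set();

function processRecord(record, traceKeyField) {
  const traceKey = record[traceKeyField];
  if (seen.has(traceKey)) {
    return { traceKey, status: 'duplicate' }; // flag instead of reprocessing
  }
  seen.add(traceKey);
  return { traceKey, status: 'processed' };
}

console.log(processRecord({ orderId: 'SO-1001' }, 'orderId')); // { traceKey: 'SO-1001', status: 'processed' }
console.log(processRecord({ orderId: 'SO-1001' }, 'orderId')); // { traceKey: 'SO-1001', status: 'duplicate' }
```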

Trace keys in the export preview screen

With the July release, trace keys are now visible directly in export previews. This gives teams earlier insight into how records are handled, making it easier to identify issues during design and improve reliability before the flow runs.

To enable it, check the “Show trace keys” box in the export preview.

Built-in differentiation

Other platforms lack the ability to identify when a previously failed record has been corrected and reprocessed, so the original error remains unresolved unless manually cleared.

Celigo leverages trace keys natively, making exception management a first-class capability in your automation lifecycle. By integrating traceability into the design phase, Celigo provides full visibility and control over every record, helping teams resolve issues faster and build more resilient workflows.

This feature provides early visibility into how records are identified and processed across the integration flow and enables teams to:

  • See trace keys before running the flow, not just when an error occurs
  • Confirm that trace keys are configured properly
  • Quickly locate problem records later in the Exception Manager

It’s a subtle but powerful improvement that enhances error handling across all phases of integration.

Proactive error management

Previously, trace keys were only embedded in error messages and were visible only after an issue occurred. Now, they appear before execution, making it easier to set and validate them as part of your standard integration build process.

This empowers users to:

  • Move from reactive troubleshooting to proactive flow design
  • Reduce time spent hunting for the source of an issue
  • Accelerate remediation and auto-resolution

Auto-resolution accuracy

Celigo’s auto-resolution engine leverages trace keys to intelligently identify when previously failed records are reprocessed. When a record with the same trace key is detected, the platform automatically resolves the original error without any user intervention.

If the record is successfully processed, no new error is logged. However, if it fails again, a new error is logged under the same trace key.

This approach minimizes manual reprocessing, accelerates error resolution, and helps efficiently clear exception queues as upstream data issues are corrected.
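The auto-resolution behavior described above can be sketched like this. It is an illustrative model, not Celigo's actual engine — the error store here is a simple in-memory map keyed by trace key:

```javascript
// Sketch of trace-key-driven auto-resolution (illustrative only).
// openErrors maps a trace key to its unresolved error, if any.
const openErrors = new Map();

function recordProcessed(traceKey, succeeded) {
  if (openErrors.has(traceKey)) {
    openErrors.delete(traceKey); // same trace key reappeared: original error auto-resolves
  }
  if (!succeeded) {
    openErrors.set(traceKey, { traceKey, at: Date.now() }); // log a new error under the same key
  }
}

recordProcessed('SO-1001', false); // first run fails: error logged
recordProcessed('SO-1001', true);  // retry succeeds: error auto-resolved, no new error
console.log(openErrors.size); // 0
```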

This update helps ensure better consistency in record tracking, resulting in:

  • Fewer false positives in error detection
  • Higher success rates for automated retries
  • Less time spent on manual follow-up

Benefits

Sales ops & business users

  • Faster issue resolution keeps orders moving and revenue recognized faster
  • Fewer bottlenecks in quote-to-cash
  • Greater confidence in automation reliability

Integration specialists

  • Easier trace key configuration lowers support overhead
  • Proactive setup means fewer urgent escalations

Data stewards

  • Improved traceability of individual records
  • More accurate auto-resolution with cleaner data corrections
  • Better oversight of data quality across systems

Powering resilient, intelligent automation

This release reflects Celigo’s commitment to delivering automation that’s not only fast but also resilient, governed, and built to scale. By bringing trace key visibility into the design phase, we’re reinforcing our platform’s ability to proactively surface issues, accelerate resolution, and support intelligent workflows from the start. 

Whether you’re designing integrations or managing exceptions, this enhancement helps your team move faster, with confidence, clarity, and control.

4 min read

How to use Lookup Cache for managing environment-specific variables

Published Apr 15, 2025 Updated Jan 16, 2026
Automate low inventory alerts with Lookup Cache.
Priyanka Koundal

Product Manager

Lookup Cache is a versatile tool for storing and retrieving frequently used reference data in a centralized location. Beyond mapping static data such as country codes, it is particularly valuable for managing dynamic, environment-specific variables that may change over time, such as inventory thresholds, pricing rules, or region-specific settings.

In another article, we show you how to use a Lookup Cache for large dataset mappings.

Here, we’ll explore how Lookup Cache can be used to manage environment-specific variables through APIs.

Lookup Cache demo

Example use case

Automate buffer stock alerts with Lookup Cache and Slack

Managing buffer stock is essential for preventing stockouts and maintaining operational continuity—especially when inventory levels vary due to seasonality, shifting demand, or regional nuances. In this use case, we’ll demonstrate how to automate Slack notifications when inventory in Shopify falls below predefined buffer thresholds.

By leveraging Lookup Cache and the integrator.io API, you can dynamically manage these thresholds in a centralized repository, enabling real-time alerts and proactive inventory control.

Let’s say you need to monitor inventory levels in Shopify and notify your team when stock for specific SKUs drops below a certain threshold. Because these thresholds often differ by SKU, location, or season, managing them with static data isn’t scalable. While it’s possible to maintain these values manually using CSV files or external tools, that process is time-consuming and hard to maintain as your product catalog grows.

Instead, you can use Lookup Cache to store and manage inventory thresholds dynamically—automating updates and enabling seamless retrieval across your integration flows.

In this example, we will:

  • Automatically populate and update a Lookup Cache with SKU inventory thresholds using the integrator.io API.
  • Retrieve threshold values dynamically from the Lookup Cache and trigger Slack notifications for SKUs with low stock.

Populate the Lookup Cache with thresholds

To manage thresholds dynamically, we’ve set up a flow that:

  • Exports inventory data, including SKUs, stock levels, and item IDs, from Shopify.
  • Uses the integrator.io API to populate a Lookup Cache with this data.

Here’s how the Lookup Cache is structured:

  • Key: Inventory Item ID (unique identifier for each item).
  • Value: A JSON object containing the SKU, location, and threshold.
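For instance, a single cache entry might look like this (the IDs and values are illustrative):

```json
{
  "key": "inv_item_48203",
  "value": { "sku": "TSHIRT-BLK-M", "location": "Warehouse-East", "threshold": 25 }
}
```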

Steps in detail:

1. Export inventory data: Create a flow to fetch inventory details from Shopify, including SKUs and stock levels.

2. Import to Lookup Cache: Add an import step to the flow that uses the integrator.io API to update the Lookup Cache. The API supports bulk updates in a key-value array format.

3. Default thresholds: Use the Mapper to set a default threshold value during import. These thresholds can later be adjusted directly in the Lookup Cache UI or via CSV for bulk updates.

Once the flow runs, the Lookup Cache is automatically populated with SKU-specific thresholds, creating a centralized and consistent repository.
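The populate step can be sketched as a bulk key-value upsert. Note that the endpoint path and payload shape below are assumptions for illustration, not the documented integrator.io API contract — check the API reference for the actual one:

```javascript
// Build the bulk key-value payload from inventory records.
// The default threshold of 25 stands in for the Mapper default described above.
function buildCacheEntries(inventory, defaultThreshold = 25) {
  return inventory.map((item) => ({
    key: String(item.inventoryItemId), // cache key: Inventory Item ID
    value: { sku: item.sku, location: item.location, threshold: item.threshold ?? defaultThreshold },
  }));
}

// Hypothetical upload call -- the URL path here is an assumption.
async function upsertThresholds(inventory, apiToken, cacheId) {
  const entries = buildCacheEntries(inventory);
  const res = await fetch(`https://api.integrator.io/v1/lookupCaches/${cacheId}/data`, {
    method: 'PUT',
    headers: { Authorization: `Bearer ${apiToken}`, 'Content-Type': 'application/json' },
    body: JSON.stringify(entries),
  });
  if (!res.ok) throw new Error(`Cache update failed: ${res.status}`);
  return entries.length;
}
```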

Trigger notifications for low stock

Now that the Lookup Cache is populated, let’s create a main flow to monitor stock levels and send Slack notifications when stock is low.

Steps in detail:

1. Fetch inventory levels: Set up an export step to fetch current inventory levels from Shopify.

2. Retrieve threshold values: Use the Lookup Cache API to dynamically retrieve the threshold for each SKU based on its Inventory Item ID.

  • Send the Inventory Item ID as the key in the API request.
  • Retrieve the associated value from the Lookup Cache, which includes the SKU, location, and threshold.

3. Validate stock levels: Add an input filter to check if the current stock is below the threshold or empty. Only records meeting these conditions proceed to the next step.

4. Send Slack notifications: For SKUs with low stock, trigger a Slack notification alerting your team.
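The filter and notification steps above can be sketched as follows. Here `cache` stands in for values retrieved from the Lookup Cache, and the Slack call is a plain incoming-webhook POST with a placeholder URL — both are illustrative assumptions:

```javascript
// Low-stock check: a record passes when stock is below its threshold or empty.
function isLowStock(record, cache) {
  const entry = cache[record.inventoryItemId];
  if (!entry) return false; // no threshold configured for this item
  return record.available === null || record.available < entry.threshold;
}

// Post an alert to a Slack incoming webhook (URL is a placeholder).
async function notifySlack(webhookUrl, record, entry) {
  await fetch(webhookUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      text: `Low stock: ${entry.sku} at ${entry.location} (${record.available} left, threshold ${entry.threshold})`,
    }),
  });
}

const cache = { 48203: { sku: 'TSHIRT-BLK-M', location: 'Warehouse-East', threshold: 25 } };
console.log(isLowStock({ inventoryItemId: 48203, available: 10 }, cache)); // true
```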

With Lookup Cache, managing environment-specific variables like inventory thresholds becomes seamless. You can automate updates, dynamically retrieve data, and maintain consistency across flows—all without the need for manual intervention.

  • Centralized management: Store thresholds in one place, making updates simple and consistent across workflows.
  • Dynamic updates: Automatically populate and update thresholds using flows or APIs, eliminating manual maintenance.
  • Flexibility: Retrieve and consume data in real time through APIs, enabling responsive workflows.
  • Scalability: Handle complex variables like SKU-specific thresholds, pricing rules, or regional settings easily.

Make environment-specific variables easier to manage and scale

By using Lookup Cache as a dynamic, centralized data layer, teams can streamline complex logic across environments without relying on static files or manual workarounds.

Whether you’re scaling your operations or fine-tuning individual flows, Lookup Cache gives you the control, consistency, and automation you need to build smarter, more adaptive integrations.

→ Learn more about the Lookup Cache.

Let’s get started

Integrate, automate, and optimize every process.

Already a user? Log in now.

2 min read

Migrate your Shopify flows from REST to GraphQL in minutes

Published Apr 10, 2025 Updated Feb 13, 2026
Adam Peña

Technical Product Marketing Associate

Shopify is phasing out its REST API in favor of a GraphQL-based implementation. While Celigo customers have a temporary extension until July 1, 2025, now is the time to begin converting your flows to ensure long-term continuity and performance.

To make this transition easier, we’ve released a new REST to GraphQL Conversion Tool, now available in the Celigo Playground as part of our April 2025 release.

Here, we’ll show you how to quickly convert Shopify REST API steps to GraphQL with Celigo’s new Conversion Tool.

Watch the quick demo

This tool automatically converts your Shopify REST API Import and Export steps to their GraphQL equivalents, saving you countless hours of manual work, ensuring uninterrupted integration functionality, and eliminating the need to handcraft GraphQL queries.

  • Access the Conversion Tool: Go to Tools > Playground > REST to GraphQL Converter.
  • Find Your REST Steps: All eligible Shopify Import and Export steps will be listed.
  • Click ‘Clone to GraphQL’: The tool automatically generates a new GraphQL-based flow step.
  • Swap It In: Clone your original flow, remove the REST step, and replace it with your new GraphQL step.
  • Adjust Mappings: Review your data mapping—GraphQL responses may differ slightly from REST.

Best Practice

Clone your flow before editing to keep your original REST version intact for reference and side-by-side testing.

Why This Matters

  • Future-proof your flows before Shopify sunsets REST APIs
  • No manual query writing—GraphQL steps are generated for you
  • Save time and effort with guided mapping suggestions
  • Ensure stability across your integrations now and into 2025

Ready to Migrate?

Head to the Celigo Playground to get started and convert your Shopify REST flows to GraphQL in just a few clicks.


8 min read

Optimize integrations faster with Celigo’s AI Assistants

Published Mar 17, 2025 Updated Jan 16, 2026
Adam Peña

Technical Product Marketing Associate

Celigo offers a range of AI tools to help you build, optimize, and troubleshoot integrations throughout their lifecycle. These tools accelerate development by automating tasks, generating scripts, and providing intelligent recommendations.

In this article, we’ll explore how these AI assistants enhance efficiency, reduce manual effort, and optimize automation workflows. We’ll highlight key features and demonstrate when and how to use them.

Use case

We’ll use an ITSM use case to demonstrate how Celigo’s AI assistants—Celigo GPT, Script AI Assistant, and Handlebar AI Assistant—streamline integration development and troubleshooting. However, these tools are applicable to any automation scenario.

Suppose we’re the internal support lead in our organization, building a flow that transfers support ticket information from ServiceNow to a spreadsheet. The goal is a clear, immediate picture of support across the organization without anyone having to enter ServiceNow. It’s also a helpful workaround when ticket information needs to be shared with teams that don’t have ServiceNow licenses.

Here is the resulting spreadsheet we get from running the flow so far:

The flow is working, but the result is not ideal. We’ve identified a few features that would improve this integration:

  • Flow configuration: From the start, we’re only receiving the internal IDs for fields that show who the ticket was assigned to and who opened the ticket. This is not particularly useful. We can use a lookup step to find the actual names of these individuals, but there may be a faster, easier way to get these results.
  • Date field transformation: Transforming the date field to something more immediately readable would make this spreadsheet more readily understood. This could be done with a JavaScript transformation, though we’re somewhat unfamiliar with the syntax needed to write the relevant script.
  • Advanced mapping: We could map a Hyperlink function into our spreadsheet to make cells clickable. With this feature, we could immediately go from this spreadsheet to the ticket or employee of our choice for deeper detail or record modification in ServiceNow. However, this involves some difficult construction with handlebar expressions, especially for the ticket assignee, a field that is sometimes empty if nobody has been assigned to the ticket yet.

    We will start by trying to get names in addition to IDs. With both, we can display the names of individuals (rather than their internal IDs) and construct hyperlinks using those names.

    Celigo GPT

We need more descriptive data for our spreadsheet than the internal IDs of records associated with tickets, like who opened the ticket and who is assigned to it. The ServiceNow step might be able to give us links if it is configured to do so.

    We could dive into ServiceNow API documentation to understand some of the more low-level details, but an AI assistant could save us a lot of time here.

    In this case, Celigo’s custom GPT is the AI assistant best suited for this task. This custom-built GPT is trained to understand all things Celigo, making it particularly helpful for troubleshooting issues related to configuring flow steps for the wide range of connectors Celigo’s integration platform supports.

    We’ll present our problem to Celigo GPT and see if we can make any changes to our flow to get links for ServiceNow records.

Providing an explanation of our goal (and, optionally, a screenshot) gives the AI what it needs to help troubleshoot the issue. Immediately, the AI assistant homed in on the query parameters.

    It appears that by setting the ‘sysparm_display_value’ parameter to “all” we can get both the id and display name of records associated with tickets, like who the ticket is assigned to and who opened the ticket.

With some minor mapping changes, we can now display the names of records in our spreadsheet rather than the IDs. By keeping the IDs in the flow in addition to the names, we can construct hyperlinks on these names with some handlebar functions.

    Before moving on to this hyperlink construction step, let’s put the date in a more readable form.

    Script AI Assistant


    To alter the data coming from the ServiceNow export, we’ll add a transformation to the first step of the flow. We can use handlebars here or a script. We’ll use a script to change the field ‘opened at’ to a more readable date format.

    Once we’ve named and saved our script, we can edit it. By clicking the AI icon in the bottom right, we can use a special AI assistant whose goal is to write a script for us based on our description.

    This AI will not only reference our prompt and current script but will also consider the ‘function input,’ which is the data in your flow up until this point. This means we can directly reference the data our flow uses in our prompt.

    We can broadly explain what we want the script to do; the AI can largely infer the purpose of the data, though it helps to reference fields specifically and give examples of the output you would like.

    Immediately, the AI helper generates a script that we can test for functionality. 

The AI assistant applies best practices in the script: the generated code checks that the field being transformed is present in the record before attempting to alter it. This also helps with error management later, since the field being referenced might be missing for some records.
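The generated script might look something like this. It's a sketch, not the assistant's literal output — the field name and date format are illustrative, and the `transform(options)` entry point that receives the record in `options.record` is assumed from the common transform-script pattern:

```javascript
// Sketch of a transform script that reformats 'opened_at' into a
// readable date. Mirrors the existence checks described above.
function transform(options) {
  const record = options.record;
  if (record && record.opened_at) { // only transform when the field is present
    const d = new Date(record.opened_at);
    if (!isNaN(d.getTime())) {
      record.opened_at = d.toLocaleDateString('en-US', {
        year: 'numeric', month: 'short', day: 'numeric',
      });
    }
  }
  return record; // records without the field pass through unchanged
}

console.log(transform({ record: { opened_at: '2025-03-17T12:00:00Z' } }).opened_at);
```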

    That was easy! Now, let’s fix our mapping. We will create hyperlinks in our spreadsheet using the id of records like the employee responsible for managing the ticket, the individual who opened the ticket, and the ticket itself.

    Handlebar AI Assistant

    We need to create hyperlinks on records like who the ticket was assigned to, who opened the ticket, and the ticket itself. These links will allow us to immediately enter ServiceNow from the spreadsheet to dive deeper into detail or modify details related to the ticket. 

    Links to incidents/tickets in ServiceNow follow this format:

    https://{{your-instance-name}}.service-now.com/incident.do?sys_id={{sys_id.value}}

    Links to users, like the ticket opener or assignee, follow this format:

    https://{{your-instance-name}}.service-now.com/sys_user.do?sys_id={{assigned_to.value}}

    Links in Sheets can be made using the “Hyperlink” function in the format:

    =HYPERLINK("{{link}}", "{{text}}")

We’ll have to use handlebar mapping to build these hyperlink functions in our spreadsheet. Handlebars will allow us to reference multiple fields and inject other text around them.

    We can also use handlebar helpers to add dynamic decision-making logic to our mapping.

    For example, while all incidents have internal ids and have someone responsible for opening the ticket, some tickets may not be assigned yet to someone on the support team. We can use handlebar helpers to display “UNASSIGNED” if the ticket has no assignee.

    So, let’s take the ‘assigned_to’ field for example.

For this field, we want to create a multi-field mapping that builds a hyperlink of the form =HYPERLINK("https://{{your-instance-name}}.service-now.com/sys_user.do?sys_id={{assigned_to.value}}", "{{assigned_to.display_value}}"). But we only want to map that value over when someone is assigned to the ticket (the field isn’t empty).
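One way to express this with a conditional helper looks like the sketch below — the `#if` helper shown is standard Handlebars, but your instance name and exact field paths will differ:

```handlebars
{{#if assigned_to.value}}=HYPERLINK("https://your-instance.service-now.com/sys_user.do?sys_id={{assigned_to.value}}", "{{assigned_to.display_value}}"){{else}}UNASSIGNED{{/if}}
```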

    With our goal in mind, let’s explain what we want with Celigo’s AI handlebar helper.

    Just like Celigo AI helped with the script, we can ask the AI to create our desired handlebar expression. The AI will be aware of the fields accessible to it, so feel free to reference those in your prompt.

    We’ll test out our expression with the preview button. Once we’re sure it works, we can move on and use the AI to create hyperlinks for the incident to the “Ticket” column and the user who opened the ticket to the “Opened By” column.

    Our flow is complete! Here is the finished result of a final run.

With nothing more than AI assistants, a spreadsheet of difficult-to-parse dates and IDs has not only been made readable but now features a collection of hyperlinks that tie it directly to its source of truth, ServiceNow.

    When you’re having issues with a connection or the connector itself, Celigo GPT can be a great tool to diagnose your problem. When you’re having trouble creating a script, transformation, or handlebar expression, Celigo’s internal AI assistants can get you there. 

    Celigo’s AI assistants can not only bridge technical knowledge gaps but also greatly accelerate integration development.


    3 min read

    Celigo’s new Flow Builder

    Published Mar 6, 2025 Updated Jan 16, 2026
    A modern approach to integration.
    Laurie Smith

    Sr. Product Marketing Manager, Content

    Building integrations should be fast, efficient, and adaptable — enabling teams to automate processes without unnecessary complexity. With Celigo’s March release, the new Flow Builder introduces a modern UI, improved navigation, and new productivity features to enhance usability and performance.

    Designed for both business users and developers, the new Flow Builder simplifies integration development with smarter automation, reusable components, and built-in error handling. These enhancements make it easier than ever to connect applications and streamline workflows.

    What’s new?

    • Compact view optimizes screen space, reducing clutter while keeping essential tools accessible.
    • Redesigned tool menu helps users quickly find the right options without searching through multiple menus.
    • Clone option allows users to duplicate any step instantly, eliminating the need for manual rebuilding.
    • Flow step bottom bar provides a clearer view of attached tools, making it easier to track modifications.
    • Simplified branch merge/unmerge options for easier branch management.
    • Refreshed icons and a cleaner UI create a more polished, modern experience.
    • New UI toggle lets users switch between the classic and updated experience at their own pace. The new and classic UIs are fully compatible, so you can switch back and forth without losing work. 

    Flow Builder demo

    A smarter way to build integrations

    At the core of Flow Builder is its visual programming model, which allows users to:

    • Drag and drop integration steps
    • Define logic without writing code
    • Connect applications efficiently

    Every flow is structured and interactive, providing a real-time view of how data moves across systems. Instead of manually configuring each step, users can leverage prebuilt templates, reusable components, and guided interactions to build integrations faster and more accurately.

    Flow Builder also takes a data-driven approach to integration. Users can preview how data flows between applications at any stage, ensuring transformations work as expected before deployment. Real-time validation eliminates guesswork so that users can troubleshoot and refine integrations instantly.

    AI and automation reduce complexity

    Flow Builder simplifies integration development with AI-powered tools that assist with field mapping and scripting:

    • AI code assistants generate JavaScript, SQL, SOQL, GraphQL, and Handlebars transformations based on natural language descriptions.
    • Reusable components eliminate redundant work—any integration step (export, import, mapping, script) can be repurposed across multiple flows, reducing setup time and ensuring consistency.

    For example, if a team builds a flow to sync customer records between NetSuite and Salesforce, they can reuse that same transformation logic for other integrations without rebuilding it from scratch.

    Built-in error handling

    Managing integration errors can be time-consuming and complex, often requiring custom logic and manual troubleshooting.

    Celigo’s Flow Builder eliminates this burden with built-in exception management that proactively detects, categorizes, and resolves errors, keeping workflows running smoothly.

    • Automatic error resolution reduces disruptions by retrying failed records due to server-side issues like API rate limits or network timeouts.
    • Proactive issue detection flags errors such as missing fields or misconfigured data, ensuring nothing goes unnoticed.
    • Exception management dashboard enables users to categorize, assign, and track errors to streamline resolution and improve visibility.
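The automatic-retry behavior for transient, server-side failures can be sketched as a retry-with-backoff loop. This is an illustrative model of the pattern, not Celigo's internal implementation; the status codes checked are examples of transient errors:

```javascript
// Illustrative retry-with-backoff for transient failures such as
// rate limits (429), temporary unavailability (503), or timeouts.
async function withRetry(fn, { retries = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const transient = err.status === 429 || err.status === 503 || err.code === 'ETIMEDOUT';
      if (!transient || attempt >= retries) throw err; // permanent errors surface immediately
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt)); // exponential backoff
    }
  }
}
```

Permanent data errors (missing fields, misconfigured mappings) are not retried this way; they are surfaced for a person to fix, which is what the exception management dashboard is for.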

    A more streamlined, intuitive experience

    The Flow Builder is the most powerful, user-friendly integration experience available. With an intuitive visual approach, AI-powered assistance, built-in error handling, and a streamlined UI, Celigo enables teams to build integrations faster, with less effort, and greater accuracy.

Learn more about the new Flow Builder user interface →

    3 min read

    How to use a Lookup Cache for large dataset mappings

    Published Feb 11, 2025 Updated Jan 16, 2026
    Priyanka Koundal

    Product Manager

    Lookup Caches act as a central repository for frequently used data, stored in a key-value format. This allows you to store, find, and reuse data across flows without relying on repeated API calls or external database queries.

    Learn more about the Lookup Cache.

    Lookup Cache demo

    Lookup Cache overview

    Why use Lookup Caches in your flows?

    • Faster data lookups: Reduces dependencies on external systems, improving flow efficiency.
    • Scalable mappings: Handles large datasets better than static or dynamic lookups.
    • Reusable and flexible: Acts as a lookup table, environment-specific variable store, or centralized reference data repository.
    • Easier maintenance: Data can be loaded via CSV files or updated dynamically using integrator.io APIs.

    Many integrations require mapping and transforming data between different formats.

    For example:

    • One system may store country and state data as two-letter codes, while another requires full names.
    • Maintaining these mappings manually is time-consuming, error-prone, and difficult to scale, especially with large datasets.

    A Lookup Cache solves these challenges by storing transformation rules centrally, ensuring faster, more reliable, and scalable mappings.

    Use case: Mapping country codes to full names

    Imagine syncing customer data from Microsoft Dynamics 365 Business Central to Shopify:

    • Business Central stores country and state information as two-letter codes.
    • Shopify requires full country and state names.

    Instead of relying on:

    • Static lookups, which are difficult to maintain
    • External API calls, which slow down processing

    You can use Lookup Cache to transform values within your integration instantly.

    How to use Lookup Cache for mappings

    Step 1: Configure the mapper

    • Open Mapper 2.0 in your flow.
    • Select the destination field (e.g., “Country” in Shopify).
    • Select the source field (e.g., “Country Letter Code” from Business Central).

    Step 2: Create a Lookup Cache

    • Change the field mapping type to Lookup and select Lookup Cache.
    • If an existing Lookup Cache is available, select it. Otherwise, create a new one.
    • Upload a CSV file containing mappings for two-letter country codes and full names.
    • Choose the key column (e.g., “Alpha-2” for two-letter country codes).
    • Choose the value column (e.g., “Full Country Name”).
    • Configure whether the data should persist when cloning or moving the flow.

    Step 3: Apply the Lookup Cache in the mapper

    • Select Lookup Cache as the mapping source.
    • Set the value field to return the full country name.
    • Save and apply the changes.

    Step 4: Run the flow and verify the data

    • Execute the flow to sync customer records.
    • In Shopify, confirm that country names now appear in full instead of two-letter codes.

    Benefits of using the Lookup Cache

    Lookup Cache provides a scalable, efficient solution for transforming large datasets in integrations. Storing and applying mappings improves accuracy, reduces reliance on external systems, and accelerates data processing.

    • Faster processing: Avoids repeated API calls for lookups.
    • Scalability: Handles large datasets without performance issues.
    • Easy maintenance: Update lookup values centrally without modifying each flow.
    • Greater accuracy: Eliminates errors from manual mappings.

    Learn more about the Lookup Cache.


    8 min read

    Real-time integration: How to create webhook flows

    Published Feb 6, 2025 Updated Feb 13, 2026
    Adam Peña

    Technical Product Marketing Associate

Celigo’s iPaaS platform offers users various ways to trigger integration flows. Unlike scheduled flows, which run on a user-defined schedule, real-time flows use webhook listeners to run in response to specific events in your applications.

    Real-time flows respond immediately to events involving time-sensitive data. Because of this responsiveness and immediacy, webhook flows are well suited to many time-sensitive business use cases, such as:

    • Lead handoff: In response to a prospect reaching a specific stage or showing interest by submitting a targeted form, Marketing will hand lead information to Sales for follow-up. A webhook flow can immediately trigger the transfer of lead data to Salesforce or a shared Slack channel when the prospect meets the criteria, ensuring Sales can act on the lead without delay.
    • Order management: Order management encompasses tracking and processing customer orders, from placement to fulfillment. Using a webhook flow, changes in order status can instantly trigger inventory updates, invoice compilation, and more, ensuring faster order fulfillment and fewer delays in the system.
    • Shipping updates: This process involves providing customers with real-time information about the status of their shipments. A webhook can send automatic, real-time shipping notifications to customers as soon as updates are available, improving transparency and customer satisfaction.
    • Support ticketing: Support ticketing involves managing customer issues and requests submitted through a helpdesk or support portal. A webhook can automatically create or update support tickets in real-time, and even handle assignments as soon as the ticket is submitted, ensuring quick responses and faster resolution of customer problems.
    • Transaction flagging: This process involves monitoring transactions for anomalies, such as fraud or unusual behavior. A webhook can instantly flag transactions for review when they meet pre-defined criteria, helping mitigate fraud risks and improve response time.

    Here, we’ll build webhook flows in the platform using two examples: one centered on Salesforce and another on HubSpot. Both flows implement a simple lead handoff process.

    Webhook demo

    The configuration screen for a webhook may look different between the two applications, but the idea is the same. Understanding the webhook export configuration screen and navigating your source application allows you to build real-time flows easily.

    Everyday business events that demand a time-sensitive response can be handled by an integration, ensuring a predictable, satisfactory, and quick process every time.

    HubSpot webhook

    Builders Toolkit - Webhook: HubSpot Flow

    The above flow uses a webhook (also known as a real-time listener) to gather lead information. A lead’s information is recorded when they express interest by filling out a marketing form. Once the form is filled out, a Contact record in HubSpot is created to represent the lead. The creation of a contact triggers the flow to run.

    Next, we use a lookup to gather the Contact’s information and pass that along to the final step. In the final step, the details of the lead are posted to a Slack channel shared between Sales and Marketing. In real time, the Sales team is alerted to the acquisition of a new lead and can follow up accordingly.

    Builders Toolkit - Webhook: HubSpot Webhook Config

    Above is the webhook created in HubSpot. Notice the subscription type is “contact.creation” and the event is “Created.” Whenever a contact is created, this webhook will run.

    To learn more about how we can loop the platform into these events, let’s build the first flow step from scratch.

    Creating a webhook export

    When creating a new export, you’ll have a few options for how the step will export its information to the rest of the flow. In HubSpot’s case, we can export records from our application or “Listen for real-time data.”

    The “Listen for real-time data” option indicates that the application offers webhook functionality the platform can leverage.

    Builders Toolkit - Webhook: Webhook Selection Option

    Once you’ve selected this option and hit ‘Next’, the calendar icon for scheduling options in the top right corner of the flow builder screen will disappear. Since a webhook responds to events to decide when it runs, no scheduling is needed.

    Next, we’ll move on to the form used to configure the webhook listener.

    Webhook configuration

    Like any flow step, creating a webhook listener export involves a form that guides you through the information required to properly configure the step.

    Builders Toolkit - Webhook: Webhook Form

    Aside from a name and description, there are a few vital aspects you will likely see across webhooks for many applications.

    Key (secret)

    Many applications associate a secret key with each webhook created by the user to ensure the authenticity and security of transmitted data. Setup involves going to the source application, finding the secret key, and pasting it here.

    Every application may have a different process for accessing this secret key, so it’s important to be familiar with your source application and read the documentation if finding this secret is difficult. However, not every application uses this approach, as we’ll see in our second example.

    In HubSpot, this key is found in the private app we created that acts as a webhook for this flow.
    Builders Toolkit - Webhook: HubSpot Secret
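    As an illustration of why the secret matters: many applications sign each webhook payload with the shared secret, commonly using HMAC-SHA256, so the receiver can verify authenticity. The exact signing scheme varies by application and version, so treat this Python sketch as a generic example rather than HubSpot's specific algorithm:

```python
import hashlib
import hmac

def verify_webhook_signature(secret, payload, received_signature):
    """Recompute the HMAC-SHA256 of the raw payload with the shared secret
    and compare it to the received signature in constant time."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_signature)

secret = "whsec_example"   # hypothetical shared secret
payload = b'{"objectId": 101, "subscriptionType": "contact.creation"}'
signature = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
print(verify_webhook_signature(secret, payload, signature))   # True
```

    Constant-time comparison (`hmac.compare_digest`) matters here because a naive `==` check can leak timing information to an attacker probing signatures.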

    Configure response for challenge requests

    Some apps will further authenticate your request to receive information from a webhook using a ‘Challenge Request.’ You can think of this as a special test for your webhook: a ‘challenge’ is issued, and if the app requesting data from the webhook answers incorrectly, it fails and does not receive the data.

    Most flows, including this one, don’t require touching this option. However, if you do have special requirements around challenge requests, you can configure your response here.
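    For apps that do issue challenge requests, a common pattern is simply echoing a token back to prove the endpoint is yours. A hedged Python sketch (the `challenge` parameter name is an assumption; consult your app's documentation for the real scheme):

```python
def answer_challenge(params):
    """Echo the challenge token back so the source app trusts this endpoint.
    The 'challenge' parameter name is hypothetical; apps differ."""
    token = params.get("challenge")
    if token is None:
        return {"status": 400, "body": "missing challenge"}
    return {"status": 200, "body": token}

print(answer_challenge({"challenge": "abc123"}))
# {'status': 200, 'body': 'abc123'}
```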

    Public URL

    Unlike the secret that you retrieved from the source application, this link is something you supply to the source application. This link, generated by the platform, is where your application will send real-time data.

    Now, when information is posted to this link, the webhook listener will notice this request, run the flow, and send along the data retrieved from the application.

    Builders Toolkit - Webhook: Target URL HubSpot

    We’ve successfully finished creating a webhook export by supplying the key/secret to this connected flow step and giving HubSpot a link from the platform to send its real-time data.

    Whatever else we do in this flow following this step will run immediately after the events specified in the HubSpot webhook occur. Pretty powerful, I know!

    However, no two business applications are the same, as most people reading this (perhaps painstakingly) already know. As such, not every application handles its webhooks the same way.

    To demonstrate how the platform simplifies these differences and guides you through varied processes, we’ll take a quick look at a recreation of this flow using Salesforce.

    Salesforce webhook

    Builders Toolkit - Webhook: Salesforce Webhook

    This flow accomplishes the same goal as the first, only with Salesforce. Let’s take a closer look at the webhook listener export step and see how Salesforce webhooks are handled in the platform.

    You can see that the options here differ from those for HubSpot.

    sObject type

    Specify the sObject (Salesforce object; the record type) that the webhook listener should watch. The object selected for this flow was “Contact.”

    Required trigger

    Salesforce uses a special function-like syntax for “triggers,” which determines how and when the trigger responds.

    Depending on your selected record, the platform will help you start by generating a simple trigger template you’ll supply to Salesforce. You can modify this template to be more advanced, but if you need to know when a record is created or updated, this is all you’ll need.
    Builders Toolkit - Webhook: Salesforce Trigger

    Referenced fields

    According to the help bubble, we can add lookup fields to our returned data. In HubSpot’s case, the webhook returned only the id of the new contact, requiring a lookup step to get basic fields like name, phone number, and email.

    In Salesforce’s case, all this information is immediately returned from the record that triggers this webhook. However, since an account is a parent of a contact record, we could add account details to be returned here, such as the account’s address or name.

    Since we immediately get all the information about the record that triggered our webhook from this step, Salesforce webhooks have one more feature that’s very nice to have.
    Builders Toolkit - Webhook: Salesforce Filter

    Salesforce Webhooks also have a built-in filter to define conditions for your webhook. In this example, we ensured this webhook will only send along new Contacts created for a specific company, giving us a more granular approach with a simple webhook.

    Once again, we’ve successfully configured another webhook by filling in a few simple fields and getting things squared away on our source application.

    Even though Salesforce handles webhook functionality differently than HubSpot, the platform guides us through the details needed and changes that need to be made in our source application to get set up. Hopefully, this has shown you that building webhooks from the platform is simple, especially if you know your source application well.

    Webhooks are powerful tools that enable your integrations to respond in real time to urgent events. Setup is often straightforward, too: follow the steps in the platform and know your source application.

    When you find yourself tasked with a process that requires a time-sensitive response, consider a webhook!


    8 min read

    Concurrency best practices for large data volumes

    Published Feb 6, 2025 Updated Feb 13, 2026
    Laurie Smith

    Sr. Product Marketing Manager, Content


    In an iPaaS, concurrency is the platform’s ability to execute multiple tasks, processes, or workflows simultaneously. This enables integrations to run in parallel, efficiently managing overlapping operations such as concurrent API calls and large data transfers. By dividing workloads into smaller, parallel units, concurrency enhances performance, speeds up processing, and optimizes resource utilization.

    Think of concurrency like a highway system. Without it, data moves along a single-lane road, forcing each task to wait for the one ahead, leading to slowdowns and bottlenecks. With concurrency, the system expands into a multi-lane highway, allowing multiple processes to run simultaneously, reducing congestion and improving speed. Just as traffic signals optimize vehicle flow, concurrency ensures efficient resource allocation, keeping integrations running smoothly without unnecessary delays.

    Managing concurrency is critical for high-volume data processing. The right configuration speeds up data synchronization, prevents API throttling, and ensures consistent performance. Poorly managed concurrency, on the other hand, leads to delays, resource limits, and missed SLAs.

    Here, we’ll explore how Celigo’s tools help you configure, monitor, and scale concurrency settings to keep your integrations fast, efficient, and reliable—even under heavy workloads.

    Key benefits of concurrency management

    1. Enhanced performance and speed
      Processing tasks in parallel significantly reduces workflow completion times.

      • Example: Syncing 10,000 records with concurrency set to five threads can be five times faster than processing one record at a time.
    2. Improved scalability
      Concurrency ensures systems can handle growing data volumes without performance degradation, adapting to the demands of scaling businesses.
    3. Reduced latency for time-sensitive workflows
      Processes like inventory synchronization or payment processing benefit from concurrency by minimizing delays and meeting real-time requirements.
    4. Optimized resource utilization
      Concurrency ensures efficient use of system resources, minimizing idle time and maximizing throughput.
    5. Support for complex workflows
      Concurrent processing enables businesses to handle interdependent tasks, such as syncing parent and child records, without bottlenecks.
    6. Compliance with API governance
      Configuring concurrency appropriately helps avoid exceeding API rate limits, prevent throttling errors, and maintain consistent system performance.
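    The speedup in the record-syncing example above can be demonstrated with a small thread-pool sketch (the latency figure is illustrative; real gains depend on the remote API):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def sync_record(record_id):
    """Stand-in for one API call; the 10 ms sleep mimics network latency."""
    time.sleep(0.01)
    return record_id

records = list(range(50))

# Serial baseline: each record waits for the one ahead of it.
start = time.perf_counter()
for r in records:
    sync_record(r)
serial = time.perf_counter() - start

# Concurrent: five workers process records in parallel, like a
# connection-level concurrency setting of 5.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(sync_record, records))
parallel = time.perf_counter() - start

print(f"serial: {serial:.2f}s, concurrent: {parallel:.2f}s")
```

    Because the work is I/O-bound (waiting on a remote system), threads overlap the waits; with five workers the total wall time approaches one fifth of the serial run.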

      Examples of concurrency in an iPaaS

      • Ecommerce order processing
        Concurrency allows multiple customer orders to be processed simultaneously, syncing with an ERP system in parallel. This prevents bottlenecks and ensures timely order fulfillment, even during peak sales periods.
      • Real-time inventory updates
        Concurrency enables parallel API calls across multiple platforms (ERP, marketplaces, and warehouse management systems), significantly reducing the time required to update stock levels in real-time and preventing overselling and discrepancies.
      • Employee onboarding automation
        When a new employee is hired, concurrency allows multiple onboarding tasks (such as account creation, payroll setup, and software access provisioning) to be processed simultaneously across different systems, accelerating the onboarding experience and reducing manual delays.

      Core principles of concurrency in iPaaS

      • Parallel process execution
        Execute multiple workflows or tasks simultaneously while efficiently managing resources to prevent contention and performance degradation.
      • Dynamic workload scaling
        Automatically scale workloads up or down in response to demand, leveraging cloud-based architectures to optimize resource allocation for concurrent processes.
      • Error management and resilience
        Isolate and handle errors within concurrent processes independently, preventing failures from cascading and ensuring uninterrupted workflow execution.
      • Throughput and latency optimization
        Maximize data throughput while minimizing latency to maintain high-speed, efficient processing, even under heavy concurrent workloads.

      Best practices for managing large data volumes

      Optimize flow design

      • Partition data intelligently: Break down large datasets into logical segments (e.g., geographic regions, product categories) to enable parallel processing and reduce load on individual API calls.
      • Use prebuilt templates: Leverage Celigo’s preconfigured templates to streamline integration setup for common use cases.
      • Example: Partitioning order data by region (e.g., North America, EMEA, APAC) helps distribute processing load, reduces API throttling risks, and improves overall runtime efficiency.
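      The region-partitioning example can be sketched in a few lines of Python (record shape and field names are illustrative):

```python
from collections import defaultdict

orders = [
    {"id": 1, "region": "North America"},
    {"id": 2, "region": "EMEA"},
    {"id": 3, "region": "APAC"},
    {"id": 4, "region": "EMEA"},
]

def partition_by(records, key):
    """Group records into segments that parallel flows can process independently."""
    segments = defaultdict(list)
    for rec in records:
        segments[rec[key]].append(rec)
    return dict(segments)

segments = partition_by(orders, "region")
print({region: len(batch) for region, batch in segments.items()})
# {'North America': 1, 'EMEA': 2, 'APAC': 1}
```

      Each segment can then feed its own flow, spreading API load across regions instead of one large run.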

      Configure concurrency settings effectively

      Tailor concurrency settings to align with system capacity and API rate limits to optimize performance.

      Steps to configure:

      1. Review API rate limits and thresholds: Check API documentation for rate limits, batch processing capabilities, and recommended concurrency settings.
      2. Set connection concurrency: Configure concurrency levels to match the API’s processing capacity without exceeding limits.
      3. Validate in a sandbox: Test configurations in a non-production environment to prevent disruptions in live operations.

      Enable delta data processing

      Delta processing improves efficiency by syncing only modified or newly created records, reducing API usage and processing time.

      How to configure:

      1. Create a filter: Define conditions to extract only records updated since the last successful sync.
      2. Activate delta processing: Enable delta settings in the flow configuration to track and process incremental changes.
      3. Validate results: Compare synced records against source logs to confirm accuracy.
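      The filter in step 1 amounts to a timestamp comparison against the last successful sync. A minimal Python sketch, assuming each record exposes an ISO-8601 `last_modified` field (the field name is an assumption):

```python
from datetime import datetime, timezone

records = [
    {"id": 1, "last_modified": "2026-02-10T08:00:00+00:00"},
    {"id": 2, "last_modified": "2026-02-12T09:30:00+00:00"},
    {"id": 3, "last_modified": "2026-02-13T11:15:00+00:00"},
]

def delta_filter(records, last_sync):
    """Keep only records modified after the last successful sync."""
    return [r for r in records
            if datetime.fromisoformat(r["last_modified"]) > last_sync]

last_sync = datetime(2026, 2, 12, 0, 0, tzinfo=timezone.utc)
changed = delta_filter(records, last_sync)
print([r["id"] for r in changed])  # [2, 3]
```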

      Monitor and scale dynamically

      • Use Celigo’s monitoring tools to track key performance metrics such as execution time, error rates, and throughput.
      • Scale runtime resources dynamically based on data volume fluctuations, ensuring smooth operations during peak loads.

      Leverage parallel flows

      • Running datasets through multiple concurrent flows reduces runtime while maintaining system stability.
        • Example: Instead of processing all inventory data in a single flow, split it by product categories (e.g., electronics, apparel) to prevent bottlenecks and improve processing speed.

      Celigo’s approach to concurrency

      Celigo provides powerful tools to configure, monitor, and scale workflows, delivering reliable performance even under high-volume demands.

      Key features include:

      Dynamic paging and parallel processing

      • What it does: Divides large datasets into smaller, manageable chunks for parallel execution, reducing processing time and system strain.
      • Benefits: Optimizes API efficiency, minimizes load on target systems, and improves data throughput.
      • Example: Adjusting Salesforce SOQL query page sizes to 200–1000 records balances performance while preventing timeouts and API rate limit violations.
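      Dynamic paging boils down to slicing a dataset into fixed-size chunks so each request stays within safe limits. A minimal sketch (page size taken from the example above):

```python
def paginate(records, page_size):
    """Yield fixed-size pages so each API call stays within safe limits."""
    for start in range(0, len(records), page_size):
        yield records[start:start + page_size]

records = list(range(2500))
pages = list(paginate(records, 1000))
print([len(p) for p in pages])  # [1000, 1000, 500]
```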

      AI-powered optimizations

      • Predictive monitoring: AI continuously analyzes historical and real-time data to detect anomalies and suggest proactive adjustments, preventing issues like API throttling.
        • Example: If API usage nears rate limits, Celigo recommends reducing thread counts or increasing throttle delays to maintain smooth processing.
      • Dynamic scaling insights: AI optimizes key settings such as page sizes, thread counts, and throttle delays based on actual workload patterns, ensuring workflows automatically adapt to peak loads.

      Built-in exception management

      • Automated retries: Resolves transient errors (e.g., timeouts, temporary API outages) without manual intervention.
        • Example: A network outage triggers automatic retries, preventing workflow disruptions.
      • Detailed error logs: Captures persistent issues (e.g., invalid credentials, permission errors) for manual review.
      • Tagging and assignment: Classifies exceptions by type and assigns them to the appropriate team members for faster resolution.
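      The automated-retry behavior can be sketched as a simple retry loop. This illustrates the pattern only, not Celigo's implementation; real retry logic would also use exponential backoff:

```python
import time

class TransientError(Exception):
    """Stand-in for a timeout or temporary API outage."""

def with_retries(call, attempts=3, delay=0.0):
    """Retry a call on transient errors; re-raise once attempts are exhausted."""
    for attempt in range(1, attempts + 1):
        try:
            return call()
        except TransientError:
            if attempt == attempts:
                raise
            time.sleep(delay)  # back off before the next attempt

failures = {"left": 2}
def flaky_call():
    """Fails twice with a transient error, then succeeds."""
    if failures["left"] > 0:
        failures["left"] -= 1
        raise TransientError("timeout")
    return "ok"

print(with_retries(flaky_call))  # ok
```

      Persistent errors (the equivalent of invalid credentials or permission failures) would not raise `TransientError` and so surface immediately for manual review.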

      Connection-level concurrency controls

      • What it does: Configures concurrent API requests to maximize throughput while preventing throttling.
      • Example: If an API has a 60-requests-per-minute limit, Celigo can limit concurrency to 5 requests with a 1-second throttle delay, ensuring efficient and compliant execution.
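      One simple way to stay under a per-minute cap is to issue requests in capped batches and pause between batches so the aggregate rate never exceeds the limit. This sketch illustrates the pattern only; the platform applies these controls at the connection level:

```python
import time

REQUESTS_PER_MINUTE = 60           # API limit from the example above
BATCH_SIZE = 5                     # concurrent requests per batch
# Gap between batches that keeps the overall rate at or below the cap.
BATCH_INTERVAL = BATCH_SIZE * 60 / REQUESTS_PER_MINUTE   # 5.0 seconds

def send_batches(record_ids, call, batch_size=BATCH_SIZE, interval=BATCH_INTERVAL):
    """Issue requests in capped batches, pausing between batches so the
    aggregate request rate never exceeds the per-minute limit."""
    results = []
    for start in range(0, len(record_ids), batch_size):
        batch = record_ids[start:start + batch_size]
        results.extend(call(r) for r in batch)   # would run concurrently
        if start + batch_size < len(record_ids):
            time.sleep(interval)
    return results

# interval=0 here only to keep the demo instant; real runs keep the gap.
print(send_batches(list(range(7)), lambda r: r * 2, interval=0))
# [0, 2, 4, 6, 8, 10, 12]
```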

      Preserving record order

      For workflows requiring sequential data processing, Celigo’s Preserve Record Order feature ensures data integrity, making it ideal for:

      • Financial transactions (e.g., invoice processing)
      • Parent-child relationships (e.g., hierarchical customer or product data)

      Elastic scalability

      • What it does: Celigo’s cloud-native infrastructure dynamically scales to handle workload spikes, ensuring consistent performance without manual intervention.
      • Powered by:
        • Amazon SQS – Efficient message queuing for parallel processing
        • Amazon S3 – Secure and scalable storage for large datasets
        • MongoDB – High-performance data persistence for fast retrieval

      Recommended concurrency settings for common scenarios

      Recommended settings by scenario:

      • APIs with strict rate limits: concurrency of 5 requests with a 1-second throttle delay. Ensure the total number of requests does not exceed API rate limits.
      • High-latency APIs: page size of 50–100 records with 2–3 threads. Smaller page sizes reduce payload size and avoid timeouts caused by slow responses.
      • Low-latency, high-capacity APIs: page size of 1000+ records with 5–10 threads. Larger pages and higher concurrency leverage system capacity for faster data processing.
      • Data requiring sequential processing: concurrency of 1 thread with Preserve Record Order enabled. Ensures data is processed in the correct order.
      • Dynamic workloads: enable dynamic scaling and monitor using dashboards. Adjust concurrency during peak loads based on real-time metrics.
      • Error-prone APIs: enable automated retries with a retry delay of 1–5 seconds. Configure retries to handle transient errors like timeouts or temporary unavailability.

      Customization may be required based on the specific API or workload, and regular monitoring is essential for maintaining optimal performance.

      Beyond performance tuning, proactive error management helps detect and address potential issues before they escalate. Automated alerts can notify teams of problems, such as exceeding API rate limits or connection failures due to high request volumes, allowing immediate action to prevent disruptions.

      Maximize throughput and keep data flowing

      Effective concurrency management is essential for optimizing large data volume processing in an iPaaS. By configuring parallel execution, leveraging AI-driven optimizations, and implementing robust exception handling, organizations can achieve higher throughput, reduced latency, and seamless scalability.

      By following these best practices, teams can ensure smooth, efficient, and resilient integrations—even under the most demanding data conditions.

      Additional concurrency resources

      3 min read

      Simplify platform DevOps with Celigo’s multi-environment feature

      Published Feb 2, 2025 Updated Jan 16, 2026
      Naveen Venkatesh

      Senior Product Manager


      Celigo’s multi-environment feature enables organizations to configure and manage multiple environments (e.g., development, testing, QA, staging, production) to support their software development lifecycle (SDLC) and DevOps workflows.

      Multi-environment demo

      Key benefits of multi-environment support

      These features enable organizations to configure, manage, and optimize multiple environments for a smoother development lifecycle.

      Custom environment setup – Configure environments based on DevOps, CI/CD, and testing needs.

      Improved access control and security – Assign user roles and permissions per environment for better governance.

      Seamless deployment and promotion – Clone and promote integrations, flows, and resources across environments.

      Scalability and flexibility – Supports enterprise-grade version control and release management for complex workflows.

      Exploring key benefits

      By leveraging multi-environment support, organizations can enhance efficiency, reduce risk, and improve governance across their integration processes.

      Proactive workflow management – Isolate development, testing, and production to prevent disruptions and ensure stability.

      Streamlined governance – Apply consistent policies to maintain data integrity, security, and compliance.

      Efficient resource utilization – Scale workflows based on demand and free up resources by enabling/disabling environments as needed.

      Step 1: Navigate to the multi-environment dropdown

      • Upon logging into Celigo, you’ll see a new dropdown menu in the top-right corner.
      • This replaces the old production/sandbox toggle and lists all available environments (e.g., production, dev, QA, staging).
      • Select any environment to access it directly.

      Step 2: Review environment entitlements

      • Navigate to Subscription → Environments to check your entitlements.
      • Your plan determines how many environments you can create.
        • Example: An enterprise plan may allow three environments, with one active.
      • Additional environments can be purchased for professional or standard plans.

      Step 3: Enable or disable environments

      • Go to the environments tab to view all available environments.
      • Production is always enabled by default.
      • Toggle environments (e.g., test, UAT, dev) on or off as needed.
      • If you’ve reached your limit, you can:
        • Disable an existing environment
        • Upgrade your plan to increase entitlements

      Step 4: Create a new environment

      • Go to Environments → Click “Create Environment.”
      • Enter a name (e.g., “Performance”) and an optional description.
      • Save the environment (it starts in a disabled state).
      • Enable the environment if entitlements are available.
        • If entitlements are exceeded, the system will notify you.

      Step 5: Manage users and permissions

      • Navigate to the Users section within each environment to invite or manage user roles.
      • Environment-specific access: Invite users to non-production environments with specific roles (e.g., admin, monitor, developer).
      • Production admin access: If a user is assigned admin in production, they automatically gain access to all non-production environments.
      • Admin access to a non-production environment does not extend to production unless explicitly granted.

    Key points to remember

      • Entitlements dictate limits – Your production entitlements determine the maximum number of environments you can have.
      • Each environment operates independently – While following the same entitlement structure, they function autonomously.
      • Easy navigation – Quickly switch environments using the dropdown menu or the environments tab.

    Implementing a multi-environment strategy

    By leveraging Celigo’s multi-environment feature, teams can streamline workflows, minimize risks, and accelerate time to production. With customizable access controls, seamless deployment options, and scalable resource management, organizations can optimize their integration processes and maintain operational efficiency at every stage.

    Whether you’re managing a small team or an enterprise-level infrastructure, multi-environment support provides the flexibility and control needed to drive continuous innovation and growth.
