
On Demand Webinar

Quarterly Integrator.io Product Update

Please join us for our Quarterly Integrator.io Product Update. Check out what’s new with Celigo integrator.io, see demos of our cool new features, and interact with our product team. 

In this session, we have lots of exciting updates to share, including improvements to real-time and batch flow debugging, data security enhancements, new AI Automapping, and more!

Topics discussed include:

  • New connectors available
  • Latest AI-powered feature – Automapping
  • Improvements to real-time and batch flow debugging
  • Enhancements to PGP encryption
  • Enabling SSO 

Watch now!

Full Webinar Transcript
Good morning, good afternoon, good evening, everyone. Welcome to today’s quarterly product update on integrator.io. My name is Kim Loughead. I’m VP of Product Marketing with Celigo, and I have with me my esteemed colleague, Matt Graney, who’s VP of Product Management for Celigo. We have quite a full agenda for you today, which we’ll get to in one second. First, a couple of quick housekeeping items. This session is being recorded, and you’ll get a copy of the recording in your inbox shortly after it’s over. There’s a Q&A panel on your screen where you can ask questions, and we’ll get them answered during the course of the webinar; if we don’t have time to get to all the questions, we’ll respond to you afterward. But please, we want this to be as interactive as possible, so feel free to ask your questions as we go. So, with that, I’m going to hand it over to Matt to get us going.

Thanks, Kim. And thanks, everyone, for joining us today. We’re always so glad to present these updates to you, and this one’s no exception in terms of being full of content. I have some slides to begin with, and then a good amount of demo so you can see firsthand some of the new things we’ve added.

Let’s begin with an overview of our calendar. What we’re mainly focused on here is our June 15th release. We have a maintenance release coming up next month, and then another platform release in September. We tend to avoid doing major platform updates as we get close to the holiday shopping period; we have so many customers in the e-commerce space, and we respect the need for platform stability around Black Friday, Cyber Monday, and so on. So our next major release after that will be early in 2022. We’re roughly on a quarterly schedule. Our team in India, of course, was impacted a bit by COVID, so we’ve been working very closely with them as they’ve battled through that and done an amazing job, really, under extreme duress. We’ve adjusted the calendar a little as a result, so this year there are just three releases; in a typical year, we’ll be looking at four.

All right, so what are we going to get to? I’m going to begin with the connectors we’ve updated and newly introduced. We have an exciting new feature around automapping of fields; improvements in the way we handle real-time listeners and logging of webhook requests; a new feature to help with debugging of flows, especially when you’re not sure what happened during a flow, such as which records succeeded or were ignored; what we’ve done around PGP encryption; and then SSO, single sign-on, something we’ve been looking forward to introducing for a while. Then I’ll get into a demo of a few of these features, not all of them. Of course, you can always find more information in our release notes. And I’ll talk a bit about our community as well, towards the end of the webinar.

Okay. So, with that: we’ve introduced a set of new connectors, and we tend to orient them around the business processes they support. You’ll see that more and more from us moving forward. On the order-to-cash side, we’ve released support for Walmart Canada and Mexico, a connector for Fulfillment.com, and 3PL fulfillment; on the quote-to-cash side, Mavenlink; and from a reporting and analytics point of view, Yotpo and Google Ads. We’re continuing to add support for these.
Of course, you can always integrate via our HTTP or REST adapters, so you have a lot of flexibility. But it’s always a bit easier if we’ve done the work for you, so if you have any requests, or things pop up that you’d like to see pre-built into the product, do let us know.

On the connector update side, one of the important parts of being an integration platform is, of course, making sure we keep these fresh and up to date, so you’ll notice some of these are just about getting up to date with the latest and greatest. In our Q1 release, we added delete support for NetSuite, and we’ve now done the same for Salesforce. As we’ve mentioned in the past, we’d typically avoided the delete operation because it presents huge power, right? You could accidentally delete all your records if you’re not careful. But having heard the request over and over, and seeing how customers were working around the fact that we didn’t have this in the connector, we decided to add it. For Google Shopping, and skipping down one, for Shopify, we’ve just moved up to the latest version and seamlessly migrated all users to that version. For integrator.io itself, which has a great API, we’ve added support for Error Management 2.0 API resources, and there’s separate information about that on the knowledge base. If you’re interested in Error Management 2.0, again, the knowledge base provides instructions on how you can get it. One of the things this connector then supports is the ability to automate some of your error management processing, so there are a lot of interesting use cases there. We have done some webinars specifically about error management, and I’m sure we’ll do more in the future, but do let us know if you have any questions about that. We’ve made some updates to eBay related to filter formats. As you saw in the previous slide, we’ve launched Walmart Canada and Mexico, so we just needed to rename our Walmart connector to be specific and say it’s Walmart US. We’ve rearranged the Redshift and BigQuery connectors to put them under our database section when you’re building a connection. And Orderful, our partner for API-based EDI integrations, we’ve updated as well to support their transaction endpoints. So plenty going on there. We have a dedicated team focused on building connectors and keeping them up to date; we monitor improvements in APIs and add those to the product as we go. But again, if there are things you’d like to see supported in the product, let us know, and if it makes sense, we’ll certainly be happy to add them. I’ll also put out a plug: if any of you are working with major systems you’re interested in, I can think of some like Workday, for example, and would like to partner with us on building up those types of connectors, we’d love to hear from you specifically on that. So feel free to reach out.

Okay, so now into the major features of the release. The first is automapping. We’ve added AI capabilities in other areas of the product, such as error management, to support the automatic classification of errors coming back from APIs. Now we’ve added automapping, which essentially allows you, at the click of a button, to populate the mapping table based on everything we’ve learned from all the integrations on the platform.
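To make the idea concrete: this is not Celigo’s actual model (which, as described below, is trained on anonymized mappings across the platform), just a naive sketch of what “suggest a mapping from field names” can look like, matching fields by normalized-name similarity.

```typescript
// Illustrative only: a naive version of the automapping idea, matching
// source fields to target fields by normalized-name similarity.

type FieldMapping = { source: string; target: string; confidence: number };

// Normalize "firstName", "first_name", "First Name" to "firstname".
const normalize = (name: string): string =>
  name.toLowerCase().replace(/[\s_-]/g, "");

// Dice coefficient over character bigrams as a cheap similarity measure.
function similarity(a: string, b: string): number {
  const bigrams = (s: string) =>
    new Set(Array.from({ length: s.length - 1 }, (_, i) => s.slice(i, i + 2)));
  const [ba, bb] = [bigrams(a), bigrams(b)];
  const overlap = [...ba].filter((g) => bb.has(g)).length;
  return ba.size + bb.size === 0 ? 0 : (2 * overlap) / (ba.size + bb.size);
}

// Suggest a target field for each source field, skipping weak matches.
function suggestMappings(sourceFields: string[], targetFields: string[]): FieldMapping[] {
  const suggestions: FieldMapping[] = [];
  for (const source of sourceFields) {
    let best: FieldMapping | null = null;
    for (const target of targetFields) {
      const score = similarity(normalize(source), normalize(target));
      if (!best || score > best.confidence) best = { source, target, confidence: score };
    }
    if (best && best.confidence >= 0.5) suggestions.push(best);
  }
  return suggestions;
}

// Example: NetSuite contact fields against Salesforce lead fields.
console.log(suggestMappings(["firstName", "email", "companyName"], ["FirstName", "Email", "Company"]));
```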
Integrator.io is a cloud-based multi-tenant platform, which means we can see, in an anonymized, aggregated fashion, all the different mappings, and so we can learn what the typical mappings are between applications. When you see this, and I’ll show it in the demo later, the button actually has a beta tag on it. But we will be careful not to overwrite any of your existing mappings, and we’ll give you a chance to review the mappings we’ve added. It’s just a quick way to build accurate mappings based on flows that we know are working: we can see flows across the customer base that are working given a set of mappings, and then, using natural language processing and so on, we’re able to make pretty good guesses as to what the mappings should be. So I’ll demonstrate that to you a little later in the webinar.

We’ve made improvements to debugging for listeners, or webhooks, essentially. In the past, we know it’s been a bit difficult to see the payloads as they come in, to see what error messages integrator.io might be sending back to the source system, and so on. This just makes it a whole lot easier for users to understand what’s going on, particularly when they’re building flows. I’ll demo later the ability to enable debugging and then use that data effectively, in multiple ways, in fact. So I’ll show that to you in the demo section.

We’ve also added a reports capability. At the moment, the only report available is one that shows what happened in a flow. For a given flow, or even a set of flows, you can generate a report covering any three-day period that we have in our records, going back, I think, over the last 90 days or so. It happens offline, asynchronously: we generate a report that you can then download. It produces a CSV file, and that file will show you exactly what went through the flow. As you know, integrator.io caches data for the purposes of error handling, so when an error occurs in a flow, you can see the retried data, examine the payload, even make edits to the payload, which enables you to make any fixes you need and have the flow continue to run. We don’t log success and ignore details; that was an early design choice, and it is something we’re going to revisit. But in the meantime, this is a way of at least seeing the identifiers of the records that were successfully processed, or indeed ignored, right? I’ll explain that more when we get into the demo. We have a screenshot on this slide that shows, first, the ability to define a trace key. A trace key is essentially the primary key, the unique identifier that can be used to understand which record is being processed by a flow. This is, of course, a detailed view using the identifiers of given flows and the exports and imports used in those flows. But for someone who is truly doing some forensics to understand what happened in a flow, including working with our support team to understand how a record that should have been processed, like a sales order, went missing, this is a way we’ll be able to support you in tracking down exactly what happened. So again, I’ll show this a bit more in the demo, but it’s a key next step in our continuing evolution. In the future, we will be providing logging of full payload data, but that’s something we explicitly don’t do today.
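Before moving on, a quick illustration of the composite trace key idea from the slide. In integrator.io this is configured with handlebars (the slide shows a template along the lines of `{{record.id}}-{{record.email}}`); the record shape and field names in this TypeScript sketch are assumptions for the example.

```typescript
// Illustrative sketch of the composite trace key idea.

type RecordLike = { id: string; email: string };

// If `id` alone is reused across records, combining it with another field
// restores a unique identity that reports and Error Management 2.0 can trace.
const traceKey = (r: RecordLike): string => `${r.id}-${r.email}`;

const a = { id: "1042", email: "pat@example.com" };
const b = { id: "1042", email: "sam@example.com" }; // same id, different record

console.log(traceKey(a)); // "1042-pat@example.com"
console.log(traceKey(b)); // "1042-sam@example.com" — now distinguishable
```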
And as for payload data: in fact, we wouldn’t change that until we revised even the terms and conditions around the product, because part of our agreement with you is that we don’t store sensitive data of yours, right? So it’s part of a journey we’re on, and we’ll be providing more information on the continued evolution in the future. I’ll come back to this when we get to the demo.

So, the trace key is a unique identifier, but I want to mention, too, that it’s possible to override the trace key. This only applies in the context of the reports I was talking about, and indeed in Error Management 2.0. If, for example, the trace key that integrator.io assumes to be the unique identifier of a given record turns out not to be unique for some reason, you have the ability to augment it using the handlebars syntax you’re used to. In this case, we’re combining the ID of the record with the email address on that record, and if that combination is unique, it allows us to handle reporting better and essentially trace that a given record was successfully processed. So this is just a way of ensuring you have better visibility into what’s going on in your flows, and that Error Management 2.0 is better able to understand when a record has been successfully processed, even if it had errors in the past.

On the crypto side, we’ve made some improvements to the way we handle PGP encryption and decryption for our file connectors. We’ve always supported PGP, but we’ve improved that by adding the ability to configure the compression algorithm, plus support for ASCII armor and the signing hash. If you know what those things mean, you’ll realize why that’s important. This often comes up when working with FTP sites, for example, where the data needs to be encrypted; it just gives you a lot more flexibility to meet the external requirements you may face when integrating with various file providers.

And the final major item is the introduction of single sign-on, SSO, via OIDC, the OpenID Connect standard. This is simply about enabling integrator.io to be better integrated with your corporate security requirements. It allows the configuration of an SSO identity provider and, once that’s enabled, the ability to decide on a user-by-user basis which users should be forced to sign in via SSO. There are still valid use cases for users signing in with email and password. For example, the account owner will always be able to log in with email and password, just in case the identity provider is down; you still want the owner to be able to log in to the product to correct any issues that might be happening there. And we’ll see in the product that it’s very easy to control the configuration on a per-user basis. Here we are supporting OIDC, OpenID Connect. There is another, older standard out there called SAML, and in our survey of the market, in talking even with the major identity providers like Okta and Auth0 (now, in fact, Auth0 is part of Okta), the strong guidance from them was to focus on OIDC. All the major identity providers support OIDC, and it’s the more modern standard: it’s based essentially on OAuth 2.0, so it’s quite seamless in that respect, and SAML seems to be falling out of favor. That’s why we’ve opted for OIDC. If you have any, let’s say, pressing needs around SAML, do let us know.
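Circling back to the PGP improvements for a moment: in integrator.io these are connector settings, not code you write, but a minimal sketch using the `openpgp` npm package (v5 API assumed) shows what the three new knobs — compression algorithm, ASCII armor, and signing hash — correspond to.

```typescript
// A minimal sketch of the PGP options described above, using the `openpgp`
// npm package (v5 API assumed) rather than integrator.io itself.
import * as openpgp from "openpgp";

async function encryptAndSign(text: string, publicKeyArmored: string,
                              privateKeyArmored: string, passphrase: string) {
  const encryptionKey = await openpgp.readKey({ armoredKey: publicKeyArmored });
  const signingKey = await openpgp.decryptKey({
    privateKey: await openpgp.readPrivateKey({ armoredKey: privateKeyArmored }),
    passphrase,
  });
  return openpgp.encrypt({
    message: await openpgp.createMessage({ text }),
    encryptionKeys: encryptionKey,
    signingKeys: signingKey,   // produces a signed-and-encrypted message
    format: "armored",         // ASCII armor: a text-safe payload for FTP
    config: {
      preferredCompressionAlgorithm: openpgp.enums.compression.zlib, // compression choice
      preferredHashAlgorithm: openpgp.enums.hash.sha256,             // signing hash
    },
  });
}
```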
On SAML, though, I will say at this point it’s not something we’re looking to prioritize, because OIDC does seem to be the default for new technology these days.

Okay. There’s plenty more that’s gone into the release, and I encourage you to check out the release notes. But meanwhile, I’m going to go into the demo, and if the demo gods cooperate, I should be able to slide across into my Chrome browser, which you should be able to see now.

All right. The first thing I want to talk about is debug logs for listeners. Here I have a simple flow that’s listening to events via webhook; what it’s doing with them, dumping to FTP, isn’t really what’s interesting here. I can see from my dashboard that this flow has been running pretty steadily over the last twenty-four hours. That’s interesting, but I want to know what’s actually happening at the next level of detail. So I can go into the listener, and at the top here we have this option to view debug logs. If I didn’t have it configured, I think it would say, “Start debugging,” but here I can see that I’m actually still in a debug period. Twenty-two minutes remaining; I could extend that for, say, the next forty-five minutes, and you’ll notice the refresh logs button just enabled itself. That means I’m continuing to get more data. In this case, this webhook, and it’s not really the point of the story, is actually people using integrator.io themselves, right? So if I click on here, we’re going to see — I limited it just to celigo.com email addresses, but you can see all these people logging in, and we can see information coming from this external system that is monitoring our use of integrator.io itself. I get all this information here: I can see exactly the body of the HTTP request, the header information, and so on. That includes the actual URL endpoint, the method, and so forth. And on the HTTP response, I can see the status code, the 204 that integrator.io is sending back, okay? And any other information there might be. In this case, everything is successful. But you can imagine — and I think in my case this listener is just secured via a secret URL, right? There’s no HMAC. There are other options here: basic auth, HMAC, or token, right? So this is a very simple example. But if I had an example where, say, the authentication was failing, I would be able to see that, because I would see the body, the headers, and the response code. It would give me a much better idea of what’s actually going on.

The other thing I can do, because I have a real sample — so here’s Matthew Murphy from Celigo, who’s logged in — is grab all this, in fact just hit the copy button, go back to the flow, and populate this as my sample data, so that when I go downstream, for example, and add a filter, I can use this directly; I have access to all the information from the sample data. This is something that was just more difficult than it should have been before, frankly, because you were forced to rely on information from the API docs of the vendor, the producer of the webhooks. And sometimes that information is spotty; you don’t know exactly what the payload looks like. With debugging of these webhooks, you can see exactly what’s coming in, and it makes it much easier to work with webhooks. We will be doing similar things with other types of listeners as well, so for Salesforce or for NetSuite.
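As an aside on the listener auth options Matt mentions (secret URL, basic auth, HMAC, token): here’s a sketch of what HMAC verification amounts to on the receiving side. The header name and hex encoding are assumptions; real webhook providers vary.

```typescript
// Sketch of HMAC webhook verification: the sender signs the raw body with a
// shared secret; the receiver recomputes the digest and compares.
import { createHmac, timingSafeEqual } from "node:crypto";

function verifyHmac(rawBody: string, signatureHeader: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  // timingSafeEqual throws on length mismatch, so check first.
  return a.length === b.length && timingSafeEqual(a, b);
}

// A request failing this check is exactly the kind of thing the new debug
// logs make visible: the incoming body and headers, plus the error response
// the platform sent back.
```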
At the moment, though, this is for HTTP-based listeners.

All right, so that’s that. Let me move on now to briefly show you a bit about the trace keys. As I mentioned, we now have this new report — well, this is the first type of report we can generate; we’ll be adding others in the future. If I go to the reports menu, there are some I’ve already produced. But if I click run report here, I can choose a given integration, then choose the flows (I could choose multiple), and choose a date range. Let’s say the last 12 hours, apply, and then click run report. As I said, it’s asynchronous, right? Offline, integrator.io goes through all the archives to gather this information, and when I return later it will be completed. Once it’s completed, you can view the report details to see who requested it, when, and what the time frame was. Again, this covers any three-day period within the time we have in our archives. And I’m just going to show you one: if I download it, it comes down as a CSV file. I took one of those CSV files and put it in a Google Sheet so we can see it easily without me having to open Excel. This is, in fact, the same example I used in the slide. You get information for a given flow, and you can see the trace key. Again, this is an example where we’ve defined our own trace key as a concatenation of some ID and the email. This is the Unix epoch time. Here you can see exactly which stage, or what was happening, and so we can even see that records were ignored or that there were errors. In the case of errors, you can actually see where the error occurred, even what the error message was, all the way up here, and how the error was classified by EM 2.0, Error Management 2.0 — as a parsing error. So this is, of course, intended more for forensics, right? If something went wrong and I need to understand, “Hang on, was this given record processed correctly or not?”, I can now go back into the system and use these reports to do that forensics, again potentially working with our support team, and understand exactly what went on. Now we can piece together what happened. In the future, we’ll be looking to provide similar types of reporting to what we can do with error reporting directly inside the product, where you have access to all the payloads throughout the duration of a flow. But this is the next step in that journey, and it’s a way, without leaking any information about the actual payload data itself, of giving me enough to continue my investigations and understand exactly what happened in the flow.

Okay. Now I just need to switch accounts for a second to show you the next part of the demo, which I’ve actually got over here: automapping. I have a couple of simple flows that I built just to enable me to show this. Here I’m pulling some contacts out of NetSuite, and I want to write those as leads. As you can see, I deliberately haven’t done the mapping on this yet; that’s actually what I want to show you. If I go to mapping, you see, of course, the Salesforce assistant showing me the lead form, but I don’t have anything set up. And now I have this auto-map fields button. If I click that, it’s going to populate automatically for me.
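Before the automapping preview, a quick aside on those flow-event reports: since they download as CSV, they are easy to post-process. A hedged sketch of slicing one to find errored records by trace key — the column names (`status`, `traceKey`) are assumptions for illustration; check the actual header row of your report.

```typescript
// Sketch: group error rows in a downloaded flow-events CSV by trace key.
import { readFileSync } from "node:fs";

type ReportRow = Record<string, string>;

function parseCsv(path: string): ReportRow[] {
  const [header, ...rows] = readFileSync(path, "utf8").trim().split("\n");
  const cols = header.split(",");
  return rows.map((line) => {
    const cells = line.split(","); // naive split; quoted CSVs need a real parser
    return Object.fromEntries(cols.map((c, i) => [c, cells[i] ?? ""]));
  });
}

// Group error rows by trace key so each failing record is easy to chase down.
const errorsByTraceKey = new Map<string, ReportRow[]>();
for (const row of parseCsv("flow_events.csv")) {
  if (row.status !== "error") continue;
  const rows = errorsByTraceKey.get(row.traceKey) ?? [];
  rows.push(row);
  errorsByTraceKey.set(row.traceKey, rows);
}
console.log(`${errorsByTraceKey.size} distinct records errored`);
```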
Back in the automapping demo: just for the sake of it, I can click preview, so it takes the sample record I already pulled from NetSuite and shows me how that’s going to be mapped in. Some of it, of course, is going to be obvious, such as name. In this case, we’ve got name mapping to full name, since on the source side we don’t have things broken out into first name and last name. It will learn over time; of course, we’ll make improvements over time to help support those types of use cases, but we’re able to immediately populate some of these fields. And in a similar example, I think I’ve done one on contacts, you’ll see a similar type of thing happening here. Again, auto-populated, and again, if I click preview, it gives me the information there. So it’s a good way of very quickly getting fields set up. These are fairly simple examples, but again, it’s based on machine learning operating across the entire set of integration flows we have in the whole system. That means it will learn over time; it will get smarter as we go. And if there’s any instance of system A talking to system B, we should be able to leverage that so the next person coming along can take advantage. Now, the typical way this will work, of course, is that you’d hit that button to populate a lot of the base fields, the simple stuff, and then go through and do the review yourself. But a lot of the time it will save you quite a bit of effort, and I think improve accuracy, because many of the obvious fields will be mapped for you automatically. That launches as a beta in the June release. Of course, you can still do it all the way you’ve been doing it, so there are no changes there. And if you already have mappings, hitting this button won’t cause them to be overwritten, so you can rest assured there’s no impact to what you’re already doing.

Okay, so the final thing I wanted to cover was single sign-on. Here I have an example of single sign-on where I first log into Okta. This is a typical pattern where businesses force users to sign into an identity provider; in this demo, it’s Okta. If I click sign in here, it takes me to an Okta dashboard. And this is not Celigo-specific: this would be where your company might have the launch screens for Salesforce, for NetSuite, for some of your other Oracle products, or ServiceNow, Jira, whatever it might be, everything listed here. There would be an icon for Celigo, and just clicking on that launches me straight into integrator.io. So this is the basis of SSO: you log in once. Of course, it also means that if for whatever reason I was kicked out of Okta, say I left the company, then IT simply has to disable that in one place and everything is taken care of; I would no longer be able to log in directly to integrator.io.

From the admin point of view, if I switch into this account, the first thing you’ll see — I’ll just stop here first — is that I’m an administrator on this account. This is the account I logged in as. And this is just a plug and a reminder that we introduced the new administrator role in our Q1 release; as an administrator, I have access to the SSO settings as well. If I switch over to the security tab, you can see how it’s configured. For those in the SSO world, this is all pretty straightforward, just using the various URLs. And, of course, there’s a client secret that was provided previously.
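For context on what that security-tab configuration drives, here’s a sketch of a standard OIDC authorization-code setup using the `openid-client` npm package (v5 API assumed). This is not integrator.io code; the Okta URL, client ID/secret, and callback are placeholders.

```typescript
// Sketch of the OIDC handshake behind an SSO login (openid-client v5 assumed).
import { Issuer } from "openid-client";

async function startLogin() {
  // Discovery: fetch the provider's endpoints from its well-known metadata.
  const issuer = await Issuer.discover("https://your-org.okta.com");

  // These values mirror what you'd enter when configuring SSO: issuer URL,
  // client ID, and client secret from your identity provider.
  const client = new issuer.Client({
    client_id: "YOUR_CLIENT_ID",
    client_secret: "YOUR_CLIENT_SECRET",
    redirect_uris: ["https://app.example.com/sso/callback"],
    response_types: ["code"], // authorization code flow
  });

  // The user is sent here to authenticate; the provider redirects back
  // with a code that is exchanged for tokens.
  return client.authorizationUrl({ scope: "openid email profile" });
}
```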
This enables a very straightforward way of integrating with the identity provider. Then there’s each individual user. Aman here is the account owner, so he can’t change anything about his own setup for SSO. But it does mean that for individual users we can turn this on and off, right? And this could be important. Let’s say you’re working with a partner: you want to give them access to the account, but you don’t want to add them to your identity provider. You could simply allow them to log in with username and password. So you have full control over that.

Okay, so I think that’s the end of the demo, and I think we do have a few questions. But before we get to those, I’ll just go back to the slides for a couple of things. The first thing I wanted to mention is that we’ve launched badges for our community. If you go to the help center, what we call the knowledge base, you’ll also see a community section at the top of the page. In fact, why don’t I just do that for you while we’re here? Oops, sorry, I hit the wrong button there. Go to the help center; you can see there’s a community link, and there’s a lot of good content in here, but badges are what I wanted to mention. On the community pages there are now badges for different users, so you’ll be able to see easily who is, say, a Celigo employee, which partners and so on are certified through Celigo University (which I also invite you to check out), and who has specific certifications for our integration apps. This is just a way of elevating the profile of individuals, and the ultimate goal for us, of course, is to build a vibrant, dynamic community where there’s a lot of interaction happening even independently of us. As an example of that, I took a screenshot, and this is all directly accessible in the community pages: our friend Ramesh had a question, our customer Justin responded, and Ramesh got what he was looking for. You can see this happened within about a 24-hour period. That’s the type of thing we’re looking to encourage. If you haven’t visited the help center recently, it has changed enormously, especially over the last 18 months: a lot more content, and the look and feel is great; it’s a pleasure to use. Our own team has spent a lot of time seeding the community with valuable content, and now we’re seeing questions being posed. Sometimes they verge on support tickets, of course, so we tend to route those to support. But often they’re more like how-tos: how do I do this with the product? And out of that, we not only get the questions answered; they typically also result in new community posts or new docs that we add to the knowledge base. So this is a great way of getting your questions answered, and if you were to visit, I think you’d find you might learn something from your fellow members of the Celigo community. So I really invite you to participate in that.

Okay, that’s the end of the prepared content, and we do have a bit of time for Q&A. So, Kim, back to you.

Thanks, Matt. That was great. I encourage you all to put your questions in the Q&A section. We do have a few that came in as you were going through, but don’t be shy, those folks out there. The first one goes back to the beginning, on the integrator.io API: in general, a bit more on what those APIs are and how customers tend to use them, and then, how do you get access to them? Yeah.
Okay, so the first thing is that all the integrator.io API docs are available on GitHub; they’re also linked from the docs. There are many different use cases. First, there’s a set of CRUD (create, read, update, delete) operations you can perform on each of the resources we have, like connections or exports or flows or imports, etc. So that’s one thing. Then in terms of use cases, well, there are plenty, hard to say, but it could be invoking the running of a flow, or the running of an individual export, from an API. Let’s take an example. Imagine I had a SQL query, and I’d built an integrator.io export to essentially run that query. I can then use the integrator.io API to invoke that export from the outside world, secured via a token. And I think I can show that here. Which account am I in? Yes, I should be able to see here: I can create an API token, right? And specify the scope of that token; maybe it’s a custom token that only has access to a given connection or a given export. Basically, I can then expose that API to another application to invoke. So that’s just one example, but there are others, of course, and we’d be happy to talk about those with you. It’s also possible to access the integrator.io API via JavaScript, within hooks, so there are some use cases for that as well. And maybe the question prompts the fact that — and in fact, Kim and I have been talking about the need for — a sort of technical webinar series to get into these things more deeply than we can in a fly-by where I’m just touching on them. So that’s definitely something we could revisit. But the integrator.io API is actually pretty powerful, and more or less anything you can do in the product, you can do via the API. That’s the short answer.

So the next question is around the debugging capabilities: are there any special privileges an account needs to be able to access the debugging logs or the reports? Do they need to be a certain level, or can anybody access them? No, anyone can access them, because in a way it’s the same as monitor access. Like monitor access, it’s essentially read-only and allows users to see the payload information. We’re assuming that if they have at least monitor access, which is pretty much anyone, then they should also be able to generate these sorts of data reports and access the debugging information. So no, no special privileges are needed there.

Got it. Okay. So back to the API: can the API use one endpoint to trigger another? Yes. And I don’t know if we can take the question off mute to elaborate, but in short, yes. Let me see if I can find — looking for Todd. Oh, there’s Todd. Okay, Todd, your line’s open if you want to elaborate. Hi. Okay, my question is: in using Celigo, I know that I can use Celigo to connect my API-driven endpoints to communicate back to a master hub. My question is, can I use the triggering of one endpoint, which would automatically send information to one of my systems, to trigger another integration because it occurred? Yeah, you could. I think I’ll need to switch accounts again here, but if we take this simple example — so are you saying, like in this case, there’s an event coming inbound, and then I do something with it, right? Are you asking if I could run an additional flow because of that occurring? Yes. All right, so there’s a really simple way.
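To ground the API discussion above before Todd’s follow-up: a hedged sketch of invoking an integrator.io export from outside with a scoped API token. The endpoint path and response shape here are assumptions for illustration only; the real API reference lives in the docs on GitHub, as Matt mentions.

```typescript
// Hypothetical sketch: invoke an export via the integrator.io API with a
// scoped bearer token. The `/invoke` path is an assumption, not the
// documented route — consult the API docs for the actual endpoint.
const BASE = "https://api.integrator.io/v1";

async function runExport(exportId: string, token: string): Promise<unknown> {
  const res = await fetch(`${BASE}/exports/${exportId}/invoke`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`, // scoped API token from the tokens page
      "Content-Type": "application/json",
    },
  });
  if (!res.ok) throw new Error(`Export failed: ${res.status}`);
  return res.json(); // e.g. the rows returned by the SQL query behind the export
}
```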
And more importantly, Todd continued, not just can I add another step on that flow, but can I branch it, so that I’m looking at something where there’s a branching operation and I now have a flow that goes in different directions based on different conditions? Yes. Yes, you can. So I’m jumping between accounts because I have some things in one where I can demo, but I don’t have all the privileges that I want. One of the things we have that might meet your needs, Todd, is something we call myAPI. That enables you to expose an API endpoint, right? Essentially, it exposes a REST API, and when you call it with whatever method you want, whether it’s a put or post, a patch, delete, get, whatever, it invokes a JavaScript script. And that script can have whatever logic you like in it, which could include invoking other flows or imports or exports. So I think the short answer is yes. And obviously this is probably not the ideal forum to get further into it, but I think, yes, we should be able to support you there. That was it. Thanks, Todd. Thank you, Todd.

All right, so next question: are there plans to add true branching logic to flows someday? Yes. We’ve got that all spec’d out, actually. And, well, I don’t want to overpromise, but we will definitely see it in 2022. I’m confident we’ll see it in the first half of 2022, and I’m optimistic, hopeful, that we could see it in Q1. We have it spec’d out, and I think it’s going to be great, because it will leverage — if I just go back a couple of steps here; I think I confused it when I did that — it will leverage the same sort of design pattern we have for filters. You’ll essentially be able to define the filter logic per branch and then do true parallel execution of those different branches. As you know, we sort of emulate that type of functionality today by placing bubbles in serial with different filter criteria. Obviously, that’s not true branching, but we are indeed working on true branching and would be happy to discuss those plans with you further offline if you’d like.

Great, thanks. So the next question is back on — oh, we got a hurray from Jeff. So, hurrays. Yes, I’m with you, Jeff, hurray. [laughter] Yeah. On the debugging: are there any plans to be able to set the debuggers to run longer than 60 minutes? Essentially, if you’re trying to trap an error, the likelihood of it happening within a 60-minute period may be low, and you’d then have to constantly reset the debugger to try to trap the error. All right, I understand. So at the moment, no, but I will take a note of that. As I said, we do recognize that ultimately you want full payload logging; I mean, that’s the end goal, right? As I said earlier — and I hope I didn’t confuse anyone when I said it — at the beginning, we made a choice to say we don’t want to store your data, because it raises a whole bunch of questions. That goes back a few years, to the early days when we didn’t have SOC 2 Type 2 compliance and things like that, right? So we didn’t want the risk, in a sense, of holding on to your data, except to handle the errors. Now, of course, we do that with confidence; we have all the security certifications anyone would want. And now I think it’s something we need to get towards.
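Going back to myAPI for a moment: a hypothetical sketch of the pattern Matt describes — an exposed REST endpoint backed by a script with whatever logic you like, including kicking off other flows. The handler signature and the `invokeFlow` helper here are illustrative stand-ins, not the actual integrator.io script API.

```typescript
// Hypothetical sketch of a myAPI-style handler: routing an inbound request
// to one of several flows based on a condition (branch-like logic).

type MyApiRequest = { method: string; body: { orderId?: string } };
type MyApiResponse = { statusCode: number; body: unknown };

// Stand-in for kicking off another flow; a real script would call the
// integrator.io API or its script helpers here.
async function invokeFlow(flowId: string, payload: unknown): Promise<void> {
  console.log(`would invoke ${flowId} with`, payload);
}

export async function handleRequest(req: MyApiRequest): Promise<MyApiResponse> {
  if (req.method !== "POST" || !req.body.orderId) {
    return { statusCode: 400, body: { error: "POST with an orderId required" } };
  }
  // Route the event to a different flow per condition.
  const flowId = req.body.orderId.startsWith("US-") ? "flow-us" : "flow-intl";
  await invokeFlow(flowId, req.body);
  return { statusCode: 202, body: { routedTo: flowId } };
}
```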
So some of what you see with myAPI — oh, I’m sorry, the logging that I showed you — is actually based on some underlying infrastructure we’ve built that will support handling a lot more information in logging. And I think, yes, we will be looking to add more. So while the debug capability there may be limited to a 60-minute period, in the future we could extend that, and we’d be happy to understand more of the need. Our intention with this one was much more about the initial development and testing of these flows, where you’re deep in it, doing active testing, and less about ongoing monitoring. On the ongoing monitoring side, I think in the future, when we have more payload logging across the board, we’ll handle that as a matter of course. But yeah, if you could share more on the sorts of use cases where you’d expect something to suddenly go wrong after working reliably — and as I say that, I realize these things do happen, because you don’t always control what’s happening on the other end. But certainly the request is noted. Thank you for that one.

So we have another one here, and this one may be better suited for our support team, but I’ll see how you do, Matt. The question is more brass tacks on how to set up a particular type of flow: can you give me an idea of how we can transfer multiple discounts — so I think this is line-item discounts — from NetSuite to NetSuite via Celigo so that they are trackable? My feeling is that this is a question better suited for our support team, but I’ll let you comment first, Matt. Yes. And as a how-to question, it may even be a good candidate for the community. I don’t know if it relates to a specific integration app or a custom integration that you’re building, but whoever asked that question, I’d suggest you check out the community. There are quite a number of internal folks — much smarter than me, I should add — who get automatically notified when there are community posts, so our turnaround time on that is pretty good. Or, of course, you could also ask via the support team.

Great. Okay. And then we still have time for a couple more questions. We have one here back on the SSO: I don’t know if you specified whether there are any specific SSO or identity management providers that we support, or if it’s just anybody that supports OIDC. Do we only support Okta or Active Directory, or is it sort of open? Well, this is the point of standards, of course, but we support Azure Active Directory, OneLogin, and Okta. When I say support, those are the ones we explicitly did our QA on. However, because OIDC is a standard, and because it’s built on OAuth 2.0, I wouldn’t be surprised if it worked with others as well. Again, that’s available in the docs, and I just found the doc there, which you see on my screen now; it covers Azure, Okta, and OneLogin. We have dedicated docs, and that’s where it helps, of course: say, for Okta, we’ll tell you exactly what you need to do in Okta, which links you might need, etc., in order to get it configured. So it should definitely be possible with others as well.

Perfect. So that looks like the last question we have. With that, I’m going to thank Matt for his time. Thank you, everybody on the call, for your time as well. And if you haven’t already seen them, we have a couple of webinars coming up.
One is on a new product we have coming to market around payout reconciliation. If you’re in the e-commerce space and you or a customer is dealing with trying to reconcile payouts from e-commerce platforms, payment gateways, etc., into NetSuite, this webinar may be of interest to you; we have a solution that automates that fully for you. The other one is around BigCommerce. If you’re thinking about, or already using, BigCommerce and NetSuite, we have an integration app for you, and this webinar will cover all the details of how that integration app between BigCommerce and NetSuite works. So if you’re thinking about moving to BigCommerce, or are already a customer and not yet automating your integrations with NetSuite, this webinar might be of interest to you as well. So, again, thanks, everybody, for your time, and that concludes our webinar for today. Thank you. Thank you.

About The Speaker

Kim Loughead

VP Product Marketing
Celigo

Kim oversees Celigo’s product marketing team, where she is responsible for go-to-market strategies, pricing, product messaging, and content, working closely with the sales, product, and marketing teams.

Prior to joining Celigo, Kim was VP of Marketing at Knowi, an augmented analytics startup, and Sr. Director of Product Marketing at Informatica. Kim has over 20 years of experience in the data integration space both as a customer and running product marketing organizations at various software companies in Silicon Valley.

Kim holds a B.S. in Business Administration and MIS and an M.S. in Management Systems, both from Notre Dame de Namur University.

Matt Graney

VP Product
Celigo

Matt Graney is a seasoned product management leader with over 15 years of experience in the discipline across B2B software enterprises and startups. At Celigo, Matt is responsible for the company’s overall product vision, strategy, and roadmap. Prior to joining Celigo, Matt held senior product management roles at Axway, an integration middleware vendor, where he was responsible for the global portfolio strategy. Before that, Matt led product management for strategic products at Borland Software and Telelogic (now part of IBM Rational). Matt began his career as a software engineer in the defense and telecommunications industries in Australia. Matt holds a B.E. in Computer Systems Engineering from the University of Adelaide, Australia.

Meet Celigo

Celigo automates your quote-to-cash process with an easy & reusable integration platform-as-a-service (iPaaS), trusted by thousands of eCommerce and SaaS companies worldwide.

Use it now and later to expedite integration work without adding more data silos, specialized technical skillsets or one-off projects.


Related Resources

SaaS Executive eBook

This guide is for software companies who want to grow and scale their business quickly. One of the most critical ...