Thank you, everyone, for joining us today. As we talk about using data and analytics to power a data-driven culture, you'll have myself, the VP of Strategy here at Celigo, and I'll also be joined by David Cody, our Director of Operations, and Ray Fang, our Senior BI Analyst. It'll actually be kind of fun today. We're going to take a look at how we've used our own products to solve a very common problem that we see out there today, which is getting better insights and access to your data. So we're going to start things off by talking about the changes we've seen in the modern data stack, being in the integration and data space, and looking at emerging best practices to streamline your data and analytics. We'll then get into the Celigo journey, and David and Ray will share with you what we've done and how we've gotten here, and we'll take a look at how iPaaS can help you achieve these objectives, along with some parting tips. And we'll go to Q&A at the end so everyone can ask questions of our team. So the modern data architecture space has been rapidly changing for a lot of reasons. The power of modern SaaS applications is absolutely amazing, and their ability to quickly solve business problems can transform your business. What's amazing is how many of these applications you can find in your business, and this is regardless of size. At this point at Celigo we have over 40 different business applications in use within the business. We've got this fantastic proliferation of applications for solving what previously were some very difficult, challenging point problems. You can now pick up a specialized tool, activate it, and implement it very quickly. And this is extremely powerful for business. It's really increasing productivity and the ability to solve problems within a department or sub-department. It's fantastic. But it also presents some challenges along the way.
These tools often have either no integration or limited point-to-point capabilities that are specific to some basic ecosystem assumptions, to some foundational apps. The end result is that you quickly end up with data silos. So you've solved some problems and you've enabled your teams, but you've also created a whole other set of problems. The whole objective here is to become more efficient and to scale better, and often we find that the solution creates a side effect; I've sometimes called it an app hangover. The apps are there, and that's great, but now you can't quite get to the source of truth. You're not sure where it lives. It's often a combination of data out of multiple systems, and integration between the systems can solve some of that. That's what we specialize in, and a lot of what we're doing is integrating those business processes. But you often need more holistic views, and that's really where a modern data stack, a modern data warehousing approach, comes into play. Otherwise, as a business you very quickly end up slowing or inhibiting your growth because you don't have visibility and access to the data you need to make an informed business decision. So either your decisions slow down, or the quality of your decisions goes down, or some combination of the two. And to be honest, we certainly ran into that ourselves during our growth journey at Celigo, which is some of what we'll talk about today. So as a business, you need to move faster: quickly pull your applications and their data together, bring in what you need (not everything, generally, but what you need to solve specific problems) and move forward. And the landscape is changing, both in the business systems that are out there and in the data warehousing and data repository tools that are available, because it doesn't have to be a grandiose data warehousing vision to solve a lot of these problems either.
And so this has been challenging some of the previous patterns about the best way to architect this. For us, working in the iPaaS space with customers that are leveraging Integrator.io, they've been coming to us and asking for help on this. They're like, "Hey, we're using Integrator.io to connect our CRM to our ERP," with connections from maybe the ERP back into their product databases, for example. "Now, can you help us get this data somewhere else?" So we've definitely seen a pull into this, and not just from large customers. I've been working in the business systems space for close to 15 years now, and previously I only used to see centralized data analysis at big customers: a hundred million plus in revenue, planning to go IPO. Over the last several years we've seen consolidated data analysis moving downstream rapidly, and now it's becoming a key business advantage for a lot of our customers in verticals where it previously wasn't common. Relatively small e-commerce companies, for example, fairly early in the revenue journey, are stepping forward. They're pulling their data together and analyzing it with very sophisticated methods, even to the extent of hiring data scientists well before other companies in the space traditionally would, and they're using this to their advantage to accelerate growth. So earlier and earlier, getting access to data is a key part of many companies' growth strategy, and that's definitely changed the landscape. So we get these customers coming in and asking: how does Celigo and Integrator.io fit into a data solution? Should I use a legacy ETL tool? Should I use an iPaaS-based approach? Is Integrator.io the right tool for that? And also, "Hey, where should my data transformations live?"
Where's the best place to put those? And we've seen an emerging pattern come out of this, and that's what this diagram represents: a new pattern focused on using the best tool for the job at the appropriate point in the process. The previous concept was extracting data, transforming it, and then loading it into the data warehouse. When you're talking about several big systems, that paradigm can work, but we're often seeing a proliferation of smaller systems, and they often don't have the file-based output that legacy ETL systems are intrinsically looking for. And that's where we've seen this pattern of leveraging iPaaS solutions to take the data from your different SaaS applications, generally just starting with your foundational ones and then expanding from there, and mirroring the data that you need to solve your problem at that point in time. So you mirror that data over into a set of what we're calling staging tables here; you could call them anything. You start out by getting that data mirrored over into a data repository. A term everyone commonly throws around is a data warehouse, but that can have a connotation of a big, complex solution with a canonical data model. It can be that on one end, but honestly, on the other end it can be something much simpler: just getting a mirror of a few tables out of your ERP and your CRM and being able to join and query those in a repeatable fashion with BI tools. And with this process, we see the transformations actually living in a different place. This is a key part of this architecture: it frees up companies from trying to do the major transformations as they're bringing the data in. Instead, you get the data from the different systems into these staging tables in your data warehouse and then apply a best-of-breed transformation tool.
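To make the staging-then-transform pattern concrete, here's a minimal sketch. This is not from the webinar: the table names, sample data, and the use of SQLite as a stand-in for a Postgres repository are all illustrative assumptions.

```python
import sqlite3

# Stand-in for a Postgres data repository; schema and data are hypothetical.
db = sqlite3.connect(":memory:")

# 1. An iPaaS flow mirrors raw records from each SaaS app into staging tables.
db.executescript("""
CREATE TABLE stg_crm_accounts (account_id TEXT, name TEXT, segment TEXT);
CREATE TABLE stg_erp_invoices (invoice_id TEXT, account_id TEXT, amount REAL);
""")
db.executemany("INSERT INTO stg_crm_accounts VALUES (?,?,?)",
               [("A1", "Acme", "SMB"), ("A2", "Globex", "Enterprise")])
db.executemany("INSERT INTO stg_erp_invoices VALUES (?,?,?)",
               [("I1", "A1", 500.0), ("I2", "A1", 250.0), ("I3", "A2", 900.0)])

# 2. The "big T" transformation lives inside the repository: join the mirrored
#    tables into a reporting view that a BI tool can query directly.
db.execute("""
CREATE VIEW rpt_revenue_by_segment AS
SELECT a.segment, SUM(i.amount) AS revenue
FROM stg_crm_accounts a
JOIN stg_erp_invoices i ON i.account_id = a.account_id
GROUP BY a.segment
""")

# Revenue per segment, computed from data that lives in two different SaaS apps.
print(dict(db.execute("SELECT * FROM rpt_revenue_by_segment").fetchall()))
```

The point of the sketch is the separation of duties: the iPaaS only mirrors rows into `stg_*` tables, and the join/aggregate logic stays in the repository where it can be reused by any BI tool.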
So we see companies out there using tools like dbt (data build tool) and Apache Airflow for this. Different platforms also have solutions built in on the data side; Snowflake, for example. Different database tools have tools built into them, like SSIS within some of the Microsoft stack. Companies use some of these tools to do the big transforms once they have all their data together in one spot, and that's emerged as a really strong pattern that we've seen a good number of customers have success with. It also allows different tools, for example iPaaS, to really focus on rapid, rich connectivity to the SaaS applications. Bring the data in, have a best-of-breed transformation process and tool within your data system, and then from there leverage your BI tools to get insights. Another advantage of this, and we'll talk about this a little bit later as well, is that iPaaS enables you just as well to access the data when you're done with your analysis: both the complex big-T transformations, but also maybe more sophisticated calculations and insight-based data that you can then pick up and push back out into other systems. In our diagram, that's what we're representing in the lower right: pushing it back out to Salesforce or Gainsight for our CSMs to have access, or pushing messages out to Slack. So we combine that data analysis and BI with an iPaaS for inbound and outbound, and we're really finding that's a best-of-breed solution that customers can very affordably and quickly get in place and leverage to their advantage. So why do we see the market moving to this model? The big thing is everyone needs to get data and insights faster. It's a key part of the speed of business, and demands are changing. I think 2020 showed everyone how quickly the world can change.
Some businesses out there were clear winners, some were clear losers, and there were some businesses in the middle that, through their ability to adapt and evolve rapidly, capitalized on new opportunities. The pandemic is just one example of that. But this increasing speed to insight is something we see across companies of all sizes, from the Fortune 50 down through startups and SMBs. So how to get that data faster is at everyone's forefront, and leveraging iPaaS in this data model is a way to do that. And it's really similar to what we've experienced here at Celigo. By leveraging an iPaaS you can better reuse your data pipelines by moving your big transformations closer to where the data is actually housed, instead of keeping them in the pipelines. That allows for better reuse and more efficiency, and this model leads to a more agile development paradigm: more quickly iterating over, refining, and expanding what you have. You're not doing the big-bang approach; you're able to tackle point problems, solve those, and then move forward more quickly in a more adaptable manner. We also find that with this model, through leveraging an iPaaS tool and modern BI, we're able to get empowerment across the organization and get more people in departments to engage in this process. It's not just IT or the BI group, or maybe one individual in a smaller company. Departments get involved in moving their own data by leveraging an easier-to-use iPaaS platform that still has the power behind it that you need. Taking that approach moves some of the work downstream, and it moves it closer to the business experts that know those endpoint systems. And we see increases in efficiency with that, often very dramatic ones. So with that, we want to jump into what we've done on the IT side and talk a bit more about that. With that in mind, I will hand things over to David Cody to talk about our journey.
Thank you, Mark. And good morning, good afternoon to the attendees. I'm very excited to share with everyone how we leveraged integrator.io at Celigo to evolve our data analytics motions. So first, where we started. Like many companies, Celigo began its reporting journey in Excel: exporting multiple data sets, joining data manually, saving off versions of these Excel files with timestamps as points of reference. The business was doing this over and over again, week after week, month after month, quarter after quarter. This manual process was incredibly time-consuming and painful, often resulted in data integrity issues, and really didn't provide the business with any real-time insight. By the time the manual reporting cycle was finished, the business had already moved on. So not having that real-time insight didn't really give leadership and stakeholders the opportunity to review the data in a meaningful way. And despite our cumbersome processes and manual work, the appetite for data and insights was growing significantly as the business was growing. Leaders across the business were looking for real-time, unfettered access to data. This spanned the spectrum from the executive team wanting a real-time lens into our key metrics, to our sales leadership needing to look at real-time plan-versus-actual performance, to customer success needing to look into churn and contraction and double-click into different segments of business and product, to teams looking to better understand our product adoption and utilization. So we had a situation where we had this very strong appetite for data and insights, and at the same time we were kind of stuck in this paralysis of the manual Excel work. So what we did is we went out on a journey, and our journey really had two workstreams for phase one of our big data evolution.
So the first workstream was to replicate what we were doing in Excel in a BI visualization platform to provide real-time performance metrics such as ARR, contraction, bookings, etc. To do this, we leveraged Celigo's own integrator.io platform to export the data from our SaaS tools like NetSuite, Salesforce, HubSpot, etc., and put it into a data warehouse in Postgres. The second workstream was really about enabling our stakeholders with dashboards where they could go and self-serve without feeling like they needed to wait for the manual Excel work to be done or wade through multiple spreadsheets. We wanted to give them really unfettered access to a dashboard so they could feel informed about their particular business. And to share more about using Celigo's integrator.io to build the data warehouse, I'm going to pass things over to our Senior BI Analyst, Ray Fang. Thank you, David. And hello, everybody. To kick things off, I just want to start by saying that getting our data out of these Excel workbooks and sheets and into a data warehouse, our Postgres database, using IO was really unparalleled. It allowed us to start delivering our data reporting in a timely manner and to deliver it accurately. But most importantly, it allowed us to scale. It allowed us to go from extracting reports from our different data systems like Salesforce and NetSuite to having it all centralized in one location, so that there's no debate about whether data is accurate or not. When it comes to reporting, the most important thing is making sure that all of our data and reports align with each other. There was this constant problem where one analysis was done a week ago, another person does their analysis today, and none of the data lines up. By centralizing our data in this one database in Postgres, we were able to resolve that issue. And the steps are actually quite simple.
To use the IO tool, all you have to do is essentially set up your credentials in the form of a connection, and then you create flows from one point to the other. And that's essentially it. From our perspective, our two major sources of data were NetSuite and Salesforce. For NetSuite, it was as simple as leveraging NetSuite saved searches, and for Salesforce, we were able to just leverage queries. Even if you're not super technical in terms of writing queries against Salesforce, you can always use other tools like Workbench or some other tool that can help create those queries for you. All of our flows are set to run on a schedule, and this really allows our team to gain more insight and visibility into when the data was last run and when it was last integrated into our database. By shifting to this structure, rather than downloading the reports, pasting them into another report, and going through all these manual processes, we alleviated many common problems such as copying and pasting the wrong data, pulling data at different points in time, and overall just using the wrong datasets. This helped our team save a lot of time and overhead in terms of reporting. Like David mentioned before, a lot of our time was spent on creating these reports rather than doing the analytics behind them, and by the time we finished a report, all the useful information driven from it was already obsolete: the month's gone by and you still haven't used that insight in any useful manner. So now, once that data is loaded into our database, we layer our BI tool on top of it, which essentially gives the BI tool the building blocks to transform the data the way you want. The BI tool we use right now is Domo, and Domo has a built-in ETL. So we basically gave our stakeholders access to the raw data behind everything.
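The scheduled mirror flow described above can be sketched in a few lines. This is not Celigo's actual implementation: the fetch function, the SOQL comment, and the table schema are all hypothetical stand-ins, with SQLite standing in for the Postgres warehouse. The key idea it shows is an idempotent upsert keyed on the source system's record id, so a schedule can re-run safely.

```python
import sqlite3
from datetime import datetime, timezone

def fetch_changed_opportunities():
    # Stand-in for a real extraction step, e.g. a NetSuite saved search or a
    # Salesforce query along the lines of:
    #   SELECT Id, Amount, StageName FROM Opportunity WHERE LastModifiedDate > :last_run
    return [{"Id": "006A", "Amount": 1200.0, "StageName": "Closed Won"},
            {"Id": "006B", "Amount": 300.0, "StageName": "Negotiation"}]

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE stg_opportunities (
    id TEXT PRIMARY KEY, amount REAL, stage TEXT, synced_at TEXT)""")

def run_sync():
    # Stamp each row with the sync time, giving visibility into data freshness.
    now = datetime.now(timezone.utc).isoformat()
    for rec in fetch_changed_opportunities():
        # Upsert keyed on the source record id: re-running never duplicates rows.
        db.execute("INSERT OR REPLACE INTO stg_opportunities VALUES (?,?,?,?)",
                   (rec["Id"], rec["Amount"], rec["StageName"], now))

run_sync()
run_sync()  # a second scheduled run still leaves exactly one row per record
print(db.execute("SELECT COUNT(*) FROM stg_opportunities").fetchone()[0])  # 2
```

Because every run is an upsert rather than an append, the staging table is always a clean mirror of the source, which is what makes "is this data in sync?" an answerable question.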
And they can transform the data as needed. Once you have your data transformed, you're able to build dashboards, create visualizations, and create other analytics behind it. This really allowed our stakeholders to gain more access to their data and slice it up themselves. That's awesome, Ray. I can tell you, personally, I don't miss the proliferation of Excel analysis and Google Sheets that we were having to rely on before, pulling data out and pushing it around. Was this updated? When was it? Is it in sync? Is it not? I think it's easy to discount how much inefficiency there was in that. The dashboards in our BI platform are, honestly, just amazing. Another thing I wanted to mention: a few people in the audience might be looking at the diagram here and seeing PostgreSQL. Anyone with a little more of a technical background or more familiarity with data solutions might be saying, "Oh, that's just a standard relational database. Why didn't you go with a data warehouse? Why didn't you use something more sophisticated?" And that's an excellent question; I would be asking it if I were looking at this. What we did was actually start our journey there. We started by saying, "Hey, we need a data warehousing solution," and we were honestly taking too big of a bite to start, trying to go out and select the perfect end solution. Should we use Snowflake? Should we use Amazon Redshift, or something in Azure? Should we use a BI tool that has some of those capabilities within it? Should we push this data to BigQuery? All of these are great questions. But we tried to solve that problem first.
That problem got in the way of actually solving the business problems that we had. What this diagram solved for us were some really key questions, like: how does product utilization by our customers impact renewal and churn? Being able to do analysis on the predictors of churn, for example. We didn't need a big data warehouse to do that; we just needed our data in one spot so we could start analyzing. And I think you can sometimes start these projects and lose sight of that by focusing too much on the framework. While that's really important, in a project like this it matters less than building momentum and getting started. We came to this realization: "Hey, we just need to get some of this data that's spread across our organization into one spot to solve a single problem. Take a look, and go from there." And that got a lot of excitement building. We got early wins, and then that just started to build on itself. From there, we'll evolve in different directions as we grow. So moving on, David, do you want to take us through the results of the process? Yeah, absolutely. Just to hit on what Ray said, this really gave us more capacity to do more strategic analyses. Having that real-time look at ARR and churn, having everything automated, just freed us up a boatload, and it was a really, really good win. Second is the insights: by giving stakeholders that unfettered access to real-time data, their appetite for data does not stop. It actually increases, but it increases in a more strategic way, really double-clicking not just into the top-level metric, but by segment, by team, by channel, to pinpoint those business problems and drive value. In terms of agility, we're giving people the ability to self-serve. A great thing about Domo is the usability.
Our team publishes data sets and gives our users the ability in Domo to build dashboards and charts themselves, which is great. On data flexibility, we have set up a framework to add more and more data sets, and we're continuing to do this week over week. Having that framework, leveraging integrator.io, really helps accelerate the utilization of data. And lastly, we're able to turn around data more frequently, both on a recurring basis and on an ad hoc basis, because we have automated the recurring analysis of our key metrics: churn, contraction, bookings, etc. Yeah, David, you mentioned that recurring frequency. The one that I think of is our daily bookings, essentially our daily sales data, coming off the dashboard in our BI platform in Domo. Now it's fully automated. We can see where we are against target and how the teams are doing, individually and in aggregate, and it's just functioning hands-off. That's been a big load off the operational team from what I've seen. Yeah, absolutely. Getting that delivered to stakeholders every morning, and for them to be able to double-click into the dashboard, has been great. It's great every morning getting a question from a sales leader: "Hey, tell me more about this," or "What do you think's happening here?" And we're able to double-click and add value. It's very exciting. Definitely. I think another result we saw is a reduction in data confusion overhead. And this is where I encourage companies to embark on this journey as early as possible, earlier than you think you need to, because we had this subtle drag, in particular in leadership meetings, where there was always a question about a lot of the data: is that accurate or is it not? A lot of data came with an asterisk.
And so we could only get it so close, and yet we were trying to make decisions. So there was often a lot of overhead on, "Okay, is that correct? Is that right? How right is it?" And that's just an organizational drag; it pulls you backward. We really started to cut away at that in a major way, and in a lot of cases it's completely disappeared. I see us being able to focus on insights and action rather than data discussions, simply through being able to move quickly. And where we do find ourselves agonizing over the data we have, we quickly identify it and say, "Okay, we need to run another data project there to get this consolidated and working." We know we can do it because we've done it in other areas, and we keep expanding in an agile approach. I've seen a tremendous amount of value in that, more than I would have guessed or predicted. The other thing along with that is that a lot of organizations rely really heavily on the built-in reporting capabilities in their foundational apps. By foundational apps we mean the big ones you might have, like your ERP and your CRM; for us, that's NetSuite and Salesforce. They've got some really strong capabilities, to be honest. But they have their limits like anything does, and you start to bump against those. In some ways you become so accustomed to those limits that you don't realize what you can do when you break through them. At the risk of being too technical, there's this concept of what we call time-series data, where problems appear as soon as you start slicing your data over time. Most financial platforms can do this around your transactional data; that's part of the process. But you have other attributes.
You want to slice your sales data by an attribute, maybe a customer segmentation value, that changes over time. As soon as you start to do that, you run into problems reporting on data that changed over your time period. By moving to this model, we were able to build out capabilities around time-series data reporting. Ray did a tremendous amount of work there and, as a data specialist, really helped enlighten us on the right methods and techniques to do that. I've seen that absolutely transform the quality of the data and our ability to analyze it in a much more confident manner, and I don't think that can be overstated as a key part of the journey. Yeah, and just to touch on that a little bit: like Mark said, there's always a limitation depending on where your data is. For us, there's some data that's in Salesforce, some data that's in NetSuite, and some data that's elsewhere, and sometimes that data is useful in our analyses. By pushing our data into this centralized area, our warehouse, we're able to join those data pieces together and gain more insights than we would have if we had just left them in their individual silos. So, yeah, it was a really useful way to go about doing our analyses. Definitely right. It's been huge for us to have that capability, and I feel like we've only just begun to tap into it. So moving on to end the presentation, I wanted to talk about some tips for building for agility. We just touched on some of those, but continuing with that theme, if you look at the diagram here, it's just trying to be representative. We'd have a bunch of spaghetti if we showed you everything we wanted to push in here, all the systems we want to get in here over the next 12 months.
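The time-series problem described above, an attribute like customer segment changing over time, is usually solved with an effective-dated history table and a point-in-time join. Here's a minimal sketch under assumed names and data (SQLite standing in for the warehouse; this is not the actual method Ray built, just the standard technique):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
-- Each segment row is valid over a date window (hypothetical schema).
CREATE TABLE segment_history (account_id TEXT, segment TEXT,
                              valid_from TEXT, valid_to TEXT);
CREATE TABLE sales (sale_id TEXT, account_id TEXT, sale_date TEXT, amount REAL);
""")
db.executemany("INSERT INTO segment_history VALUES (?,?,?,?)", [
    ("A1", "SMB",        "2020-01-01", "2021-06-30"),
    ("A1", "Enterprise", "2021-07-01", "9999-12-31"),
])
db.executemany("INSERT INTO sales VALUES (?,?,?,?)", [
    ("S1", "A1", "2021-03-15", 100.0),   # while the account was SMB
    ("S2", "A1", "2021-09-01", 400.0),   # after the upgrade to Enterprise
])

# Point-in-time join: pick the segment whose validity window covers the sale
# date, so each sale reports under the segment that was true at the time.
rows = db.execute("""
SELECT s.sale_id, h.segment
FROM sales s
JOIN segment_history h
  ON h.account_id = s.account_id
 AND s.sale_date BETWEEN h.valid_from AND h.valid_to
ORDER BY s.sale_id
""").fetchall()
print(rows)  # [('S1', 'SMB'), ('S2', 'Enterprise')]
```

Built-in reports that only know the account's current segment would attribute both sales to "Enterprise"; the history table is what lets the warehouse answer the question correctly for any past period.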
But really, this is just trying to exemplify that the one constant is a state of change. We've been very successful leveraging a fairly basic cloud-based data repository, but we're starting to see the limits of that: the success of the data repository is leading us to spend more time tuning, we're starting to run into performance problems, and we realize, "Hey, we need to step up in sophistication. We want more in-platform transform capabilities." Looking into the future, we've already started the planning and assessment of where to go next, whether that's Snowflake, which we've represented here, or any of the other tools that are out there. We know our journey has to continue going in that direction. Simultaneously, our journey continues by getting more system data in: bringing Gainsight in, bringing website and Google Analytics data in, bringing our HubSpot data in from our marketing platforms, and several more systems beyond that. We're also expanding the data sets we started with. We started reasonably small; you don't have to bring everything in from a system. You bring in what you need to solve a particular problem, and then it becomes a continual evolution of bringing more and more in. And then a big thing for us is the box on the lower right here: taking the insights and processes that Ray and his team have been building in Domo, our BI platform, and getting the outputs from those pushed back out to the systems. We've started that journey already.
We're leveraging tools there to systematically score customers' product usage, applying an algorithm to their usage of our product versus their entitlement to decide, "Hey, are they effectively using our product?" Because we know that underutilization, if they bought more than they're using, is more likely to result in churn. So we do the analysis in Domo and push it back out to Salesforce and Gainsight so our customer success managers and account executives can see it and say, "Hey, we've got a renewal coming up in six months and the customer's underutilizing. We need to dig in here, schedule a meeting, and see if there's something we can do to help the customer be more successful." That was previously very difficult; you had to go to external systems and do it remotely. We're seeing that have a big impact, and we're being agile with it: not trying to bite everything off, but as these needs arise, we identify, focus, and make some movement there to improve. Going through this process, you should realize that every single business team is going to have reporting use cases that they need. Identify those early on and try to figure out what you need to do there. And importantly, think about reporting very early in the process. Whenever you start a new initiative, whether you're making a business process change or looking at adding a new system, ask yourself about the reporting needs as early as possible. That's going to save you a lot of headache and help build the right reporting and data mindset into the functioning of your organization. Are we going to need to report on this? Are we going to need to get this into the repository in phase one? When should it come in? From my background, I spent a lot of time in business systems consulting, selling cloud-based ERP systems and implementing them.
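The usage-versus-entitlement scoring described a moment ago can be sketched as a tiny function. The threshold, field names, and labels here are illustrative assumptions, not Celigo's actual algorithm; in practice the real score lives in the BI layer and an iPaaS flow pushes the resulting flag back out to the CRM and customer success tools.

```python
def churn_risk_flag(entitled_units: int, used_units: int,
                    underuse_threshold: float = 0.5) -> str:
    """Flag accounts using well below their entitlement as at-risk.

    Hypothetical scoring rule: if utilization (used / entitled) falls below
    the threshold, the account bought more than it is using, which the talk
    notes correlates with churn.
    """
    if entitled_units <= 0:
        return "unknown"  # no entitlement on record; nothing to compare against
    utilization = used_units / entitled_units
    return "at-risk" if utilization < underuse_threshold else "healthy"

print(churn_risk_flag(entitled_units=10, used_units=3))  # at-risk
print(churn_risk_flag(entitled_units=10, used_units=8))  # healthy
```

A flag like this, synced nightly to a field on the account record, is what lets an account executive see "renewal in six months, underutilizing" without leaving their own system.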
One of the big lessons I learned early on after I started leading the practice was that we needed to move reporting discussions to the very beginning of an implementation process, say for an accounting or ERP system. And it can be really transformational. You should be talking about the reporting needs at the beginning, and you'll find that you save a tremendous amount of headache. Like I said, you'll build this into everything. Along the way, do everything you can to take what we call a federated approach, if you can. You might have a central center of excellence in your organization, or centralized management and oversight in some way, and you especially need that around data. Make sure you've got compliance issues addressed. Make sure you've got the correct source of truth. You have to have some control, but also decide where you don't need absolute control. You can set up a sector of this that, say, the marketing team gets to work on and has access to; they can manage what they do, and marketing ops can build out some of their flows and some of their reporting. Endeavoring to do that empowerment as early as possible, both on the integration side and on the data side, will build this culture within the organization. By using that federated approach, you'll get what you need from a stability, security, governance, and compliance standpoint, but paired with speed of movement. That's a much more agile approach than just looking centrally and expecting one team or person to do everything on that side of your organization. So, iPaaS enables analytics; we've kind of sprinkled discussion of that in along the way.
But we feel, for us at Celigo and from what we’re seeing with our customers, that leveraging an integration platform as a service has really become the modern paradigm for powering data transfer into a warehouse: a tool that’s focused on SaaS applications and that has power but also ease of use. Leveraging a platform like our own integrator.io, where you can use pre-built templates and more sophisticated connectors into both your SaaS applications and your data tools, is going to result in faster time to value, reduced costs, and quicker insights. That approach is just more flexible. You’re going to be able to iterate, evolve, and move quickly as the pace of your business continues to increase. You also get better reuse. You can centralize your big, heavy transformation work in the data repository, your best-of-breed tool, and let the iPaaS do what it’s best at, which is moving that data and doing lighter shaping. If you need to, you still have the power to do big transformations in the iPaaS, but you can be very purposeful about when you do. And you get a lot of flexibility with an iPaaS because you can build this model so that you have real-time data transfer in certain places. You might have a reporting need where you want data accurate within minutes. You can facilitate that with a sophisticated connector into your ERP or your CRM: the iPaaS receives a new record, say a new sales order in Salesforce, pushes it into your ERP immediately, and also sends it out to the data repository very quickly. So in a matter of minutes it’s in your data repository, and it can show up in your real-time BI reporting just as quickly.
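The fan-out pattern just described, one inbound event delivered to both an operational system and the warehouse, can be sketched in a few lines. This is a minimal illustration of the pattern only; the function names, field names, and in-memory stand-ins for the ERP and warehouse are invented for the example and are not integrator.io’s actual API.

```python
# Minimal sketch: a new Salesforce sales order fans out to two destinations.
# Lists stand in for the ERP endpoint and the warehouse staging table.
erp_records = []        # stand-in for the ERP system
warehouse_rows = []     # stand-in for the data repository's landing table

def to_erp_format(order: dict) -> dict:
    """Light shaping in the iPaaS: rename fields to what the ERP expects."""
    return {"external_id": order["Id"], "total": order["Amount"]}

def handle_new_sales_order(order: dict) -> None:
    # 1. Near-real-time operational sync into the ERP.
    erp_records.append(to_erp_format(order))
    # 2. Land the raw record in the repository for BI, tagged with its source.
    warehouse_rows.append(dict(order, _source="salesforce"))

handle_new_sales_order({"Id": "SO-1001", "Amount": 2500.0})
```

The design point is that the heavy modeling happens later in the warehouse; the iPaaS step only routes and lightly shapes each record, which is what keeps the latency down to minutes.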
There are areas where that can bring tremendous value to your business, having insights on a very short timing threshold when you need them. It’s not the best approach for everything, because it puts load on all these systems. But you can pick where you get value from a more real-time approach, use it there, and use a scheduled approach elsewhere, whether hourly, every few hours, or once a day. You can mix and match with an iPaaS-based data transfer paradigm. And again, the key thing to keep in mind is not just, “Can we move this into the data repository and analyze it?” but also, “How do we get it back out? How do we move it back to the business systems?” This is something we see continually missed when customers are doing their planning. It’s so centered on, “Oh, how can we analyze it?” Well, you analyze it. Then what? You’re going to take some actions based on it, but you’ll also likely need to get a lot of that data back upstream. How are you going to do that? Make sure you’ve got a tool like an iPaaS, like integrator.io, that can facilitate that just as quickly and efficiently. Those are some of the ways we’re seeing an iPaaS really change how you approach this data and drive value from it. And with that, I’d like to wrap up our presentation and move to some Q&A. Let’s see, does anybody out there have any questions? Looks like we’ve got a couple of questions that came in. I’m going to throw this question at Ray, actually: what skill set is needed on our team to build out the integrations, do we need developers? Well, personally, I’d never actually developed an integration before I started here at Celigo, and picking it up was actually really simple. Really the hardest part is looking at the endpoints that you’re connecting to and getting a feel for how those specific endpoints need to be written to.
But in regard to the data warehouse piece of it, I just went through Celigo University, went through the help articles, and talked to some of our support team, and it was a pretty easy way to pick things up. After that, the flows are pretty similar to one another if you’re running from the same endpoints, so it wasn’t too difficult. You don’t really need a team of developers to create these flows; going through a bit of Celigo University, maybe level one or even a little of level two, and really utilizing those help articles will get you a long way. Awesome, thanks, Ray. I think that’s a really good perspective for everybody. All right, we’ve got a question here about data warehouse connections: does Celigo have any plans to improve the Snowflake connector to automate the table and column creation based on the source system’s fields and schema being mapped in? This would significantly improve the time to implement and maintain our data warehouse connections. I’ve got to laugh to myself a little bit here, because the answer to this is very close to my heart, honestly. What we’re viewing as basically the next step for our data connectors is solving exactly the problem you’re talking about: schema replication and then automated population of that data. I’ve been working with our product team, and they’ve actually specced this out specifically for Snowflake and are in the process of building it right now. The goal is to do exactly what you asked: make it much easier, so that instead of doing as much manual building, or actually building out your table schema yourself, everybody gets a quick start on their staging.
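To make the schema-replication idea concrete, the core of it is generating warehouse DDL from the fields discovered on a source object. This is a rough sketch under stated assumptions: the type mapping, table name, and field list (including the custom field) are invented for illustration and do not reflect the actual connector’s design.

```python
# Illustrative only: turn a discovered field list (standard + custom fields)
# into a Snowflake-style CREATE TABLE statement. Type map is simplified.
TYPE_MAP = {
    "string": "VARCHAR",
    "number": "NUMBER",
    "date": "DATE",
    "boolean": "BOOLEAN",
}

def build_ddl(table: str, fields: dict) -> str:
    """Emit a CREATE TABLE for the given {field_name: source_type} mapping."""
    cols = ",\n  ".join(f"{name} {TYPE_MAP[ftype]}" for name, ftype in fields.items())
    return f"CREATE TABLE IF NOT EXISTS {table} (\n  {cols}\n);"

ddl = build_ddl("SALES_ORDER", {
    "id": "string",
    "trandate": "date",
    "amount": "number",
    "custbody_region": "string",  # a custom field replicated alongside standard ones
})
```

The point being made in the answer is that knowing the source application’s metadata, standard and custom fields alike, is what lets a tool emit a schema that is actually useful rather than generic.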
The goal is to be able to go in and point at your other SaaS system, take NetSuite for example, though it could be any ERP, and say, “I want to replicate my sales orders, customers, and invoices. Where are they going? Okay, they’re going into Snowflake.” Push a button, and we analyze both the standard fields and your custom fields, build out that schema for you automatically, and build out the flows automatically, because we’re really just replicating that data, so we know where it needs to go and we know the types, and then populate those tables automatically. So at the push of a button you have a sophisticated method of doing this that’s also intelligent about the SaaS system we’re connecting to. There are some other tools out there today that do a little bit of this and try to focus on it, but they tend to be naive about the SaaS application, so you end up with schemas or models that, honestly, are next to useless sometimes, or that create way more work. You’re better off with a flow-based approach like ours because you get exactly the right representation of your data. We know we can leverage our experience with the endpoint systems and provide a solution that will greatly speed this up, and we’re starting with Snowflake and then looking to move beyond it. Okay, looking here, we’ve got another question, and it looks like we have time for a few more. “We know our business is limited by not having the ability to report across multiple SaaS systems; however, we’re still not sure how to start.” Gosh, that’s an excellent question, and I certainly have thoughts on it, but let me give David Cody a chance to answer one of the questions as well. Yeah, Mark, really good question. I think you can tackle this from two directions.
The first one is: take one problem you have today, whether it’s, hey, I want to look at plan versus actual bookings, or current churn and contraction by segment, and try to tackle that problem first. Be very pinpointed about how you approach it: how you query the data from the source system, how you put it in your SQL Server, and then how you visualize it. The second approach is to start with something basic. We happen to use Salesforce, so just create a table on the Account object and do a deep dive or a summary of your customer base at the account level, and you’ll quickly see how easy it is both to query the data using integrator.io and to put it into a SQL Server. That’s awesome, David, thank you. I would add to that a little bit: just don’t be afraid to start. Pick something, like David said, as simple as you can. Call it a proof of concept and start playing with it. What’s a burning question? What’s something we can’t solve right now, or that’s really painful? Just get going and get that process moving. I think for us, just getting started on this journey was maybe the hardest part. If you get over that and build a little momentum, you’ll see the value, and all of a sudden it’ll be a lot easier to get more resources, or more people in your organization will want to join in because they see how successful it is. All right, I don’t see any other questions out there, and we’re just about out of time anyway. We covered a lot here today. As everyone gets a chance to think about this, there are going to be more questions that come up, and I want to encourage everybody here: we’re very happy on our side to work with you, whether you’re an existing customer or you’re new to Celigo as a prospect. Just reach out and engage with us. Shoot an email over to [email protected]
and we’ll engage one of our team members, and let’s brainstorm on your problems around integration in general, but specifically on the data architecture topic and how we can help you. Honestly, we’re here to help you solve your problems, and we’ve got a good perspective to draw on from our experience with about 3,500 customers; most likely we’ve seen the problem you’re facing before and have already helped solve it for a customer. So with that, I want to thank everyone for joining and attending today. We really appreciate your time. Thank you.