#1-ranked iPaaS on G2 among 236 competing solutions

On Demand Webinar

Knowledge is Power: Driving Growth with Product Metrics

Every product and business leader looks for ways to measure customer experience, accelerate adoption, and improve customer retention for their product. Measuring the right things requires gathering data from multiple systems so cross-functional teams can automatically gain insights from that data and are empowered to identify growth potential or churn risk.

Integration platforms (iPaaS) play a key role in addressing the many app integration challenges product teams face in modern SaaS companies. In this session, Matt Graney, VP of Product at Celigo, shares his expertise on which metrics are important to track and how Celigo uses this framework to measure customer health and drive growth.

Topics that we discuss include:

  • Customer experience metrics that matter
  • Integrations that built Celigo’s customer health score
  • Closing the product metrics feedback loop
  • Practical tips for getting started with product metrics

Watch Now!

Full Webinar Transcript
We have a lot to cover, so I’m speaking to you from a beautiful spring morning here in the San Francisco Bay Area where Celigo is based. I lead product there. Celigo is an integration platform. And that turns out to be pretty significant and kind of germane for our topic today. So let’s get into it. So the first thing I’ll say is we’re actually going to be talking about a lot of different products today. But I don’t want you to think that this is some sort of cheap product placement for the various apps that we use, because I think you’ll find that no matter which app you’re using to handle some of the things I’m going to talk about, you’ll be able to get very similar results. Of course, we work with some very fine vendors, but that’s not really what it’s about. However, there is one product, of course, that I am here to talk about, and that’s our own. And that’s because it’s not only our product and this is about product-led growth, but it’s also the way in which we measure our product. So we measure our product using our product and move the data around to be able to take action on it. So this is kind of a bit of a special case, I think. And that’s really, I think, an exciting part of the opportunity to speak with you today. Okay, so in terms of agenda, let’s have a look at what we’re going to cover. Four main areas, all related to how to get the power from product metrics. The first is to think about customer experience metrics that matter. And we’ll talk about both sort of adoption metrics and user metrics. Then from our own experience, I’ll share how we built a health score, first related to providing power and the ability to act for customer success. But then how we’ve gone beyond that. I think one of the key parts of this, too, then, is how to close the loop. It’s not just about gathering the metrics, but what can you do about it? And I’ll share a little bit about how we’re automating that as well. 
And the final thing is just a few tips to get started on your own journey. And I think you’ll find you can begin with some pretty modest steps. And throughout the presentation, you’ll see some of what we have to do along the way. All right. So I mentioned we use our product to measure our product. So let me just spend a moment to talk about the product. Again, it’s not really an advertisement for the product, but I think it’s important because it highlights some of the particular challenges that we have. So our platform is called Integrator.io and we’ve been in the integration space for a long time. Integrator.io is a cloud-native integration platform as a service, or iPaaS for short. And what iPaaSes typically do is enable business processes to be automated by integrating business applications that all need to work together in service of some business process. So in order to support our customers in doing that, we have hundreds of pre-built connectors connecting to all the typical apps you’ll see. I’ve even tried to get a bit of an A to Z there, from Amazon to Zendesk and many, many in between. We’ve tried to find the balance between full power for developers and a friendly enough UI, with a lot of pre-built integrations available in our marketplace, and then a set of integration apps. So basically SaaS apps that sit on top of the integration platform. So that’s the product, and it actually poses some interesting challenges. This is not a product where all the action happens when a user is logged in. And so imagine, as a user, you’re building an integration. We can monitor the use of the platform, of course. But it’s not just about what the user does in the UI. Because if they were to build an integration and set it to run, maybe every time events trigger or on a schedule, what really is interesting from a value point of view is what happens when the integration runs. So this is a typical challenge in middleware. 
You have the user interaction part that happens during daylight hours and then you have everything that has to happen reliably throughout the day and night. And somehow combining those things is one of the key challenges we’ve had to face. And I’m sure some of you out there, whether you’re in the middleware space or not, will probably recognize this same challenge. So how do you go about dealing with product metrics when you’re faced with a product like this? Now, of course, dealing with product metrics means moving a lot of data around, getting it into the right places. And we have this advantage because, well, our platform is an integration platform designed to do exactly that, which is why we’re able to use our product to help measure our own product. I love that sort of bootstrap idea. Okay, so when we think about understanding the user, there’s really a couple of dimensions that we like to think about. The first is about the user journey. And so this is about how the user experiences the product, how they first onboard and begin to get value and derive some sentiment. And the other dimension we’ve shown here on the Y-axis is around adoption. So how far is the product adopted? And of course, if you’re in the bottom left of this simple 2×2, you’re in trouble because you have a negative user sentiment and the product is not well adopted. And of course, you want to get to a point in the upper right, as always with these quadrants, where you have a very positive user journey, where users are able to onboard themselves and move through the process smoothly and the product is widely adopted. And in our case, wide adoption, of course, means lots of different integrations driving critical business processes. So these are the main signals that we care about. And I think you could, again, apply this simple model to your own business as well. Okay. 
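The 2×2 just described can be sketched in a few lines. This is purely illustrative: the inputs (NPS as a sentiment proxy, running flows as an adoption proxy) and the cutoffs are invented stand-ins, not Celigo’s actual model.

```python
# Hypothetical sketch of the user-journey x adoption quadrant:
# classify an account by sentiment (journey axis) and adoption.
# NPS and flow-count thresholds here are invented for illustration.

def quadrant(nps: int, running_flows: int) -> str:
    sentiment = "positive" if nps >= 0 else "negative"
    adoption = "wide" if running_flows >= 10 else "narrow"
    if sentiment == "positive" and adoption == "wide":
        return "healthy"   # upper right: keep investing
    if sentiment == "negative" and adoption == "narrow":
        return "at risk"   # bottom left: churn danger
    return "mixed"         # off-diagonal: dig deeper

print(quadrant(nps=40, running_flows=25))   # healthy
print(quadrant(nps=-20, running_flows=2))   # at risk
```

Even a crude classifier like this gives a customer success team a shared vocabulary for triaging accounts before a more sophisticated health score exists.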
So in terms of the metrics that matter for us, we have a health score that is really comprised of both the user journey and adoption. We then think in terms of stickiness. And I’m going to drill into this in a second. But that’s related to value, engagement, and advocacy. And then the final is about proficiency. And I won’t get too far into this. But again, proficiency is about how capable are the users? Are they building very complex integrations? Are they using the product in a very sophisticated way? And how capable are they of doing the job themselves, as opposed to maybe engaging a third-party services firm to do the work for them? And really, it’s this combination of metrics that is critical to us. And I think you’ll see probably in your own business, even if the specific metrics might vary, this idea of an overall health score, some measure of stickiness, and a way of assessing users’ proficiency will probably fit your model as well. So just to take a moment to talk a bit more about stickiness, again, there’s the three dimensions: value, engagement, and advocacy. So if I think about value for us, again, this is specific to us. But I think it’s important for you to understand the way we have to look at our own problem. When we think about value, well, it’s about how much data is being moved, how many integration flows are running, how many apps are connected. How much are they doing with templates from our marketplace? So this is truly the value that they get when they think at the end of the day, “Do we have a product that delivers on the ROI promise?” This is what our customers are thinking about. We have to think, too, then about how engaged they are. Are they logging in often enough? Are they taking care of error messages as they come up? Have they gone through our certification process through our university site? Are they reading the documentation versus just submitting tickets? So these are proxies for engagement. 
And the final is around advocacy, and these, I think, are pretty typical: net promoter score, customer references, and so on. So that’s a lot of different metrics and, of course, you’re not going to get them all right. You’re not even going to be able to measure them all at once, but these are the sorts of measures that we have in mind when we think about how sticky our product is with our customers. So how do you boil that down? Well, in some spaces, you might have heard about the North Star metric, and that’s something that we’ve adopted as a way of trying to wrap all of that into a single metric, of course, with all these contributing metrics, but a way of understanding really how our product is doing. And for us, that’s the number of flows, running flows, that were built or deployed by our customers’ own users. And there’s a lot going on there. It’s important for us that the flows are running. This is, of course, how users experience value. If you have middleware and it’s not actually running any flows, it’s probably not delivering value. So that’s key, but what’s also important here is that our customers are capable of running or deploying pre-built integrations themselves. So this is what we use to ensure that the user experience we’re building is serving our market. We typically focus on mid-market businesses, maybe with small IT teams that might need to get integrations up and running and then hand them off to the line of business for ongoing maintenance. So it’s important for us to use this metric as kind of a proxy for the usability of the product and, if this is going up, it means a lot of those contributing metrics are moving in the right direction, as well. So there are some of the key metrics for us, but again, I think if you consider the dimensions that I’ve talked about, you’ll find you’ll be able to apply them to your own applications, as well. 
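The North Star metric described above reduces to a simple count once the underlying data is in one place. Here is a toy version; the record fields (`running`, `built_by`) are hypothetical names, not Celigo’s actual schema.

```python
# Toy version of the North Star metric: count running flows that were
# built or deployed by the customer's own users, not a services team.
# The flow-record fields are invented for illustration.

def north_star(flows: list) -> int:
    """Number of running, customer-self-built flows."""
    return sum(1 for f in flows
               if f["running"] and f["built_by"] == "customer")

flows = [
    {"running": True,  "built_by": "customer"},   # counts
    {"running": True,  "built_by": "services"},   # running, but not self-built
    {"running": False, "built_by": "customer"},   # self-built, but not running
]
print(north_star(flows))  # 1
```

The point of the compound condition is exactly what the talk emphasizes: the metric only moves when customers both realize value (flows running) and can do the work themselves (self-built), so it acts as a proxy for usability as well as adoption.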
So I talked a bit at the beginning about a health score, and I’ll just dive into this in a bit more detail because, in the top-right there, I’ve got a little picture. This is a screenshot from Gainsight CS. Again, not really an advertisement for that product as much as we benefit from it, but in order to build that health score, it was quite a journey. So I just want to share some of the key integrations that got us there and to point out that this was indeed an incremental effort. It didn’t happen overnight. It was step-by-step and each step delivered concrete value. So at Celigo, our main system of record for customers is actually NetSuite. So the first thing we did was to get customer contact information from NetSuite into Gainsight. And that was really a foundational step because it allowed then the CSMs, our customer success managers, to have everything they needed in Gainsight. So it began to become their single pane of glass, as they say. The next was about subscription data. So what have our customers bought? When are they up for renewal? So getting that also in there, but then we started to overlay some of the product information. Beginning with actually another Gainsight product, Gainsight PX, for product experience. So we started to move product usage information and NPS data. In the past, we had NPS surveys we were sending via email. And that wasn’t really a good method. By moving to in-app NPS, we improved response rates by about 5X, which allowed us to get very timely feedback from customers and feed it directly into the dashboard for customer success. And it became immediately a very key contributor to the overall health score. We then, of course, folded in support tickets, and it allowed us to be very proactive about managing escalations and ensuring that CSMs could step in if it looked like support tickets were at risk of breaching their SLAs. The next part was product data. Again, remember, this is middleware. 
It’s not just about what happens inside the product, it’s about what happens as the product runs. So we use Splunk for logging on the back end of the product. So we routed that data, for various reasons, first through NetSuite. In a sense, we’re using NetSuite as kind of a general-purpose database, using custom records and such. So, getting that data in and then routing that into Gainsight. So that meant a really important thing for the CSMs because they are now able to see trends of usage. If a renewal was coming up, for example, and customer success managers saw product usage falling off a cliff, then they would know that the renewal was at risk. It allowed us to get ahead by prioritizing different use cases. We could sort of begin to see a measure of value directly within the customer success team. And in fact, this proved to be very useful for us during the pandemic. Celigo does a lot of business with e-commerce vendors. And by having a customer success team with ready access to this information, we were able to see the way in which some of our customers in the software space were suffering while those in e-commerce were actually picking up as the pandemic forced an acceleration in digital transformation initiatives. So that was a really important insight for us and allowed us, I think, to anticipate the sorts of things that we saw play out during 2020. And the final thing was about project data. We use FinancialForce as a professional services automation tool. And by bringing all that information together, it saved our CSM team an enormous amount of time, right? So, this is whole-product metrics, right? So it’s not just the product metrics coming out of the software itself, but it’s everything around it. 
For Zendesk, it might be the docs that they’re reading, the support tickets they’re raising, how the product is being used through the UI and at runtime, as we can see through Splunk, and all the other business-related information as well. And this enabled us, in fact, to better scale the CSM team, to allow CSMs to more easily pick the signal from the noise and therefore actually serve a larger number of customers without putting anything at risk. Okay, so this is really then about knowledge is power, we could say. By going through these steps, we’re able to provide CSMs a 360-degree view of their customers, proactive monitoring of at-risk customers, a way of stepping in when support tickets were at risk of missing their SLAs, proactive monitoring of sentiment. Any time we see a low NPS score, CSMs see it immediately because of this integration and they can step in. More success when it came to onboarding and ultimately, reduced churn, both in terms of initial churn, where customers just never get off the ground and maybe churn out in the first 12 months, as well as long-term churn, because we’re able to monitor exactly what’s going on inside the account. So this was a step-by-step process, as you could see. We were leveraging NetSuite here as kind of a general-purpose database, you could say, and that got us a fair way. But we needed to do more to get even more product information into the mix. So for that, we turned to a data warehousing solution, and we began by using PostgreSQL, just an Amazon-hosted instance of PostgreSQL, and pulling information from a range of sources. Again, we had this customer information coming out of NetSuite. But then, from the product point of view, we had even more metrics we could use. We had the same usage and error logs we were getting from Splunk. But the static configuration of customer integrations is managed by the product in MongoDB. 
So we’re able to get a lot of that information into PostgreSQL and then also into InfluxDB, which we use as a time-series database. So by bringing all this information together all in one place, we could get much more in the way of usage stats, do a lot more mining of customer data, overlay tiers of customers and see how different tiers are behaving, and use that to proactively identify upsell. So to look for patterns in product behavior that point to an upsell opportunity. Maybe the customers might not be building new flows, but they’re creating new connections. They’re visiting the marketplace. And we use that as a way of flagging to a growth account exec, like, hey, there’s an opportunity here. We can also see how product features are being adopted. We can use this as a way of influencing the product roadmap and ultimately, of course, increasing sales. But this has been an incremental thing again. Right? So this whole initiative began first as what we might call a skunkworks project. We just quickly stood up a PostgreSQL database. We dumped a few of the tables we thought were interesting, obviously going from Mongo, which is a NoSQL database, into regular SQL, and we had to figure out a few things along the way. And we didn’t get it all right at once. No question of that. But that wasn’t the point. It was really about bringing all the information together and making it actionable. And since that time, we’re continuing to expand, bringing in Salesforce opportunity data, other analytics from Google Analytics, even more information from Gainsight PX. And we’re looking to move towards Snowflake as this dataset grows and grows, in fact, with Domo for BI on top of that. So again, this is not about product placement. I’m just pointing out that in order to get this done, we need to be thinking about what it is we’re trying to measure, as well as all the different apps that are at play. 
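The “dump Mongo collections into SQL” step above hides one recurring chore: NoSQL documents are nested, while SQL rows are flat. A minimal sketch of that flattening, with invented field names (this is not Celigo’s schema or pipeline code):

```python
# Sketch of flattening a nested NoSQL document (e.g. an integration
# config from MongoDB) into a flat row suitable for a SQL table.
# Field names below are hypothetical, for illustration only.

def flatten(doc: dict, parent_key: str = "", sep: str = "_") -> dict:
    """Recursively flatten nested dicts into a single-level row."""
    row = {}
    for key, value in doc.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            row.update(flatten(value, new_key, sep))
        else:
            row[new_key] = value
    return row

doc = {"flow_id": "f1", "schedule": {"type": "cron", "expr": "0 * * * *"}}
print(flatten(doc))
# {'flow_id': 'f1', 'schedule_type': 'cron', 'schedule_expr': '0 * * * *'}
```

For a skunkworks warehouse, this kind of naive flattening is often enough to get the first useful queries running; lists, schema drift, and type coercion are the things you end up “figuring out along the way,” as the talk puts it.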
So in terms of closing the loop, having done all of this work, we have to now think, okay, what do we do with this data? And I think that’s one of the traps sometimes for product metrics. You can end up almost having the data as some sort of museum piece, or it’s just a set of dashboards that are presented at a business review and it’s not necessarily actionable. So again, what we’re doing, whether we’re taking data from Gainsight, from Snowflake, or from Domo sitting on top of Snowflake, is to push that back into the product. Okay? So we’re synthesizing new insights, things that you couldn’t tell from any one of these products but you’re able to tell through the combination of all this data, and feeding that back into the product so that then inside the product, we can engage the customers better. We can help with new user onboarding because we know what typical users do. We can help with feature adoption: “Hey, are you aware that this feature exists?” And we drive that from an understanding of what’s going on in the backend, whether the product configuration has been set through the UI or through the API. It doesn’t matter. We have a consolidated view of that and we can drive customers to take the actions that we know lead to this success. So just a couple more points to make. The first is just a brief point on integration. Obviously, we’re in the integration space and we’re trying to use integration to solve this product metrics problem. So I have plenty of thoughts on this and I’ll leave you with three ideas. The first is that some of these integrations that we’re using are the out-of-the-box integrations. Say, a standard integration between Jira and Zendesk, right? But it’s important to know the limits of those integrations. They’re designed with generic use cases in mind, and they probably weren’t designed with precisely your use case in mind. So use them where you can, but know that you may need to look at alternatives because your business process is yours, right? 
And an out-of-the-box integration wasn’t necessarily designed with you in mind. Second, although our product teams are very capable, with lots of engineers who say, “Hey, this is just another problem to solve,” just beware of doing the integration yourself, as in writing a bunch of custom code. It’s a case of just because you can, it doesn’t mean you should. This isn’t necessarily where you should be spending your own engineering bandwidth. And there are tools out there. Obviously, Celigo is one of them. But there’s a good reason for them, because integration platforms are thinking about: what do you do when endpoints are not available? How do you back up data? How do you handle situations where one system can produce information faster than the other can consume it? How do you deal with error handling, and so on? It’s not that simple. And the final thing is, don’t take it on all at once. Like anything, you need a backlog. And I think what you saw as we built the health score, in a sense, we were executing against a backlog. We were organized, we prioritized, and we iterated. We didn’t expect to get it right the first time. Okay. So what could you do to get started? Here are a few practical tips, which I think you could see from the rest of the presentation. The first is to define the metrics that matter. When you think about the user journey and adoption, think about the metrics that make sense for your business. And that’s really the key place to start. You don’t have to get them all right at once. This is meant to be an iterative process. Then, in order to fulfill those metrics, where is the data and where does it need to be? What’s the system of record for different data? Which stakeholders in the business need to be able to act on that data? How are you going to route that data where it needs to be? And it’s not a matter of doing it once. It’s about automating it, right? 
In order to scale and in order to iterate, you want it automated so you can just set it and forget it, and know that the data is going to be where it needs to be when it is needed. And the final thing is really this feedback loop, just like a fancy thermostat. You’re taking measurements, but you then need to act. Turn those analytics into action. All right. So I know we covered a lot there, but I think you can see why I began with the product placement idea. There are a lot of different products out there. In our case, we are able to use our product to help with the metrics journey for our own business, which I think is a pretty interesting use case. But I think you’ll find for your own business that there are plenty of little things that you can do to get started on this journey. So with that, I think we have some time for Q&A. I’ll hand it back to Heather to moderate for us. How do you define what data matters? Well, that’s a question I think that will vary from business to business. As I said, when you think about a health score, what is it that represents success for your customer, both from adoption, so the customer as a business, as well as the individual users? So I think you need to dig deep to really understand how your users experience value and then how your customers actually enjoy the ROI from the product. So I don’t think there’s one answer I can give you, but that’s where I would suggest you start looking. Can you please share an example of what you’re able to do with closing the loop on insights driven from your metrics? Yeah, I think one example is really just around feature adoption. We’re able to see features adopted sort of at the first level. 
But by integrating other pieces of information, such as subscription information that we might have, customer success information, even support tickets, we’re able to sort of highlight to a particular segment of users, “Hey, you should be using this feature.” So, for example, related to an integration, you have to think about concurrency. How many parallel pipes should you open in order to handle integration volumes? That’s not the sort of thing you can tell just by looking at the UI. You need to look at the back end, see how information is queuing up, see how much is being done, and indeed what features are being used. Essentially, we’re therefore identifying the set of customers who specifically would benefit from using the feature and then targeting them directly, as a small cohort. So that’s just one example. But I think there are many like that if you’re in a situation where it’s really the combination of this data that provides new insights that you wouldn’t get from any one piece of data. Awesome. Thanks, Matt. We got: when the system is not in place, what is the one metric you would choose to get started? Again, I think it would vary, right? I would begin with how our customers experience value. So for us, it would simply be the number of flows in the platform. If I didn’t have anything else, it would be that, because if the number of flows that people are building using our product is not increasing, we have a problem. So that would be in our case. And I think probably, again, each app will differ. But you’d probably want to go for a single metric, like this North Star metric, that is that one thing that is really a proxy for a whole range of different things, whether it be the usability of the product, the general adoption, the ramp-up, the ability of the customer to do the job themselves. So this isn’t something we arrived at overnight. It takes a bit of digging to think through. 
But it’s well worth it. And in fact, the metric that I showed, this North Star metric, is one we defined probably six months before we were even able to measure it, right? So it was somewhat aspirational, but we realized that, “Yeah, that is the direction we want to head. And if that is trending in the right direction, then we’re doing our job.” Nice. Greg would like to know, how do you balance new product development versus support requests that can put customers at risk? Well, I think that is a great question that could probably fit in any one of the sessions for the festival, no doubt. In our case, I tell everyone there are three different things that I look at. It’s the voice of the current customers. It’s the voice of the market where we’re trying to be, and that might not actually be the same as what current customers are telling us. And then the voice of the company strategy. And how you balance those three voices, I think, will vary depending on the stage of the company, maybe if the company’s in the middle of some sort of inflection point. So, I don’t think there’s a hard and fast rule. But you at least need to know what those different voices are saying. I think that’s a really good way of approaching it. And how you weigh them will vary depending on your own needs, I think. To implement this product monitoring in our organization, what are the companies doing? Do they have dedicated team members that work on this exclusively? What is the time frame to implement such a system? Yeah. Well, I think if you take an incremental and iterative approach, you can get started very easily. In the examples that I’ve shown, until, really, the last three to six months, it was all just done out of existing teams, right? Because it was solving specific problems that we had. If a product manager had a problem, let’s say, they wanted to understand feature adoption, they went and solved it themselves. And yes, it helps a bit that we have at our disposal our own product to help with this. 
But, I think, the key here is to start small, to look for those small wins, and be pretty modest. And so, I think if you take that approach, you could probably get started within a couple of months, really. Because even if you’re just moving data, even manually, some way of downloading data out of your product, getting those metrics, and then putting them somewhere where you can begin to analyze them, slice and dice, even if it’s plain SQL, even without a BI solution on top, I think you can begin to make progress and probably, again, sort of bootstrap this metrics journey that we’re all on. Nice. Maria would like to know, could you provide an example of a proficiency self-service metric? So, again, in our case, proficiency in self-service would be related to how complex an integration a customer is capable of building. So, maybe all they’re doing is downloading a CSV file from Salesforce and putting it on our FTP site. That’s going to be pretty basic, right? That doesn’t tell me a user’s particularly proficient. Whereas if they’re triggering a flow in real time, and they’re doing a lookup into another system to sort of cross-reference something, and then calling out to multiple other systems along the way, right, in our case, we can think of that as a complex type of integration. Maybe they’re talking to a custom API. These are the sorts of things that would indicate that a customer is pretty proficient. And again, if we can see it’s the users at that company doing the job themselves, rather than engaging our own services team or a third-party service provider, then we can be pretty sure they have a growing competency. And it feeds into this notion that they’re engaged, likely to be successful with the product, and show good growth potential. So, again, I don’t want to keep saying, “It depends.” But, I think, it really does. I mean, it’s just these dimensions of proficiency and adoption and the user journey that are generic. 
And then, the precise examples will vary from product to product. Nice. We’ve got one more from Artie. If you are in a startup, knowing the resource constraints, how do you balance putting all this instrumentation in place to capture data versus building out the actual product? Well, I think, we’re a startup too. So, I don’t think that’s kind of either an excuse or a special case. I think, this really applies to any business. And for Artie, I guess the question would be, without doing this, how do you know you’re building the right thing, right? And I think there’s a big aspect of this that really relates to, if you’re building features, and if you’re not doing enough to measure and understand that they’re being adopted, then you could ask yourself, “Are you doing the right thing?” So, it’s more like, “Can you afford not to put these measures in place as a way of closing that loop and making sure that you’re actually solving the right problem?” And this is not a startup-only kind of thing, or something that a startup should say, “Oh, well, we’re too small to worry about it.” I really think it’s fundamental as the checks and balances to make sure that you’re building the right product for your customers. Thanks, everyone, for joining.

About The Speaker

Matt Graney

VP Product
Celigo

Matt Graney is a seasoned product management leader with over 15 years’ experience in the discipline across B2B software enterprises and startups. At Celigo, Matt is responsible for the company’s overall product vision, strategy, and roadmap. Prior to joining Celigo, Matt held senior product management roles at Axway, an integration middleware vendor, where he was responsible for the global portfolio strategy. Before that, Matt led product management for strategic products at Borland Software and Telelogic (now part of IBM Rational). Matt began his career as a software engineer in the defense and telecommunications industries in Australia. Matt holds a B.E. in Computer Systems Engineering from the University of Adelaide, Australia.

Meet Celigo

Celigo automates your quote-to-cash process with an easy & reusable integration platform-as-a-service (iPaaS), trusted by thousands of eCommerce and SaaS companies worldwide.

Use it now and later to expedite integration work without adding more data silos, specialized technical skillsets or one-off projects.

Related Resources