The principles & 4 metrics you need to know to crush consumer mobile

Giuseppe Stuto
10 min read · May 25, 2021

Over the past few years in particular I’ve found myself giving many consumer mobile entrepreneurs advice stemming from prior consumer building experiences. We built Fam, a group video conferencing app (pre Apple’s group FaceTime), and as my former Fam cofounders Kevin Flynn and Franky Iudiciani wrote, we went from 0 to 1 million users in just 10 days with little to no marketing spend. Fam was then acquired by DraftKings, where I spent time in product & strategy and gained even more insight into what makes for an engaging, highly retaining consumer mobile experience. Now, via 186 Ventures, I have the opportunity to invest in companies that nail many of the items I lay out in this piece, in addition to my role today at Pison.

Founders are always asking me what principles & metrics matter when attempting to build a world-changing consumer mobile startup. I finally decided to put together the thoughts I’ve shared with many over the years, in hopes that they will be helpful to at least one builder, and so that I don’t have to keep digging up the same email and forwarding it along to the next person!

Note that there are a few caveats at the end of this article that should be considered when digesting the following guidance.

Rapidly iterate on actual insights drawn from qualitative and quantitative data points

In the pre-PMF (product market fit) days for a consumer mobile product, it is not as much about KPIs as it is about developing qualitative pattern recognition around user behavior. Gathering qualitative feedback from users / customers, observing trends, and internalizing those trends in your product & engineering cycle are prerequisites for any successful consumer mobile product team.

In other words, data and analytics are only as good as the actions you take based on them. The analogy I always like to give relates to taking a difficult math class and doing well in it. Setting up the right metrics and benchmarks is akin to making sure you have the right supporting documentation to learn from, access to a good tutor / mentor, and all the guidance needed in the event you hit a roadblock. But getting a good grade actually comes down to the hard work and iteration that goes into the homework and exams. Likewise, getting a good grade when it comes to building successful consumer mobile experiences comes down to actual engineering velocity and prompt iteration in response to data and user feedback.

  1. Implement a high-level process around routinely extracting insights from your analytics setup; the end goal is to constantly collect user feedback and inform your product roadmap, both short- and long-term.
  2. Know what metrics to track (see next section) to understand how well or how poorly your product is performing, and integrate constant measurement of those metrics into the workflow of your operation. One way of doing this is having one person responsible for gathering data on a weekly basis and presenting it (at a high level) to the rest of the team during a weekly strategy sync of sorts. From there you will organically grow the process and find what works for your team, such as folding it into a periodic product & technology sync; you will want your roadmap to stay somewhat flexible given the plethora of data and feedback you will be gathering on such a routine basis.
  3. Along with the raw numeric data that I will get into in the next section, charting key trends in user behavior and running correlation analysis are helpful and will further inform your building (more on this later).

Focus on the right metrics

Now that you have the right mindset for developing a consumer product, it is imperative to establish and maintain a strong mobile analytics assessment framework. The end goal is to track the right data against the proper consumer mobile benchmarks for your product experience. Lay out which performance indicators are important for your user experience, both high-level usage and low-level event-based.

The following KPIs are typically strong indicators of product performance, good or bad:

  1. Cohort retention — % of users who come back on any given day following their download / registration date, e.g. if 10 out of 50 users who downloaded the product on Day 0 are still using the product 7 days later, your product has a 20% Day 7 cohort retention. Assuming your product is meant to be used daily, this is usually a simple, high-level measure of how useful your product really is to the end user. As a general benchmark, for a consumer mobile product you want Day 7 cohort retention to be 40%+ and Day 30 to be 25%+. Below you’ll see an illustration of this in table form (chart form is also useful — you want flattening cohort retention curves), and a code sketch after this list shows how this and the next two metrics can be computed. Cohort retention specific to one cohort will tell you how users are responding to your experience; cohort retention specific to any given day will tell you how your product is trending as a whole as you iterate on it.
Cohort retention graph: CleverTap.com
  2. 5-day Stickiness — % of weekly active users who use the product 5 or more days per week. The general benchmark here varies quite a bit and is relative to the scale your product has achieved, but for products at scale, 10%+ is an indicator of strong daily necessity. In this case, daily necessity means that a significant portion of your users have come to rely on your product experience on a daily basis, and consequently have likely integrated it into their daily lives & routine. Any consumer mobile product meant to be used daily is a good candidate for this measure. Trends that emerge within this data set (as well as your cohort retention data sets) will be immensely telling of how well (or not) your product is solving a particular consumer problem.
  3. DAU / WAU / MAU ratio — this ratio is directly correlated to the aforementioned engagement metrics, but is also a very easy way to measure whether a meaningful portion of the users your product acquires find the service useful. Fred Wilson lays out one perspective on benchmarking this — 30% of your total user base should use the product monthly, and 10% should use it daily. Another benchmark worth noting is that 50%+ of your MAU should be using the product weekly. Again, like any metric or benchmark, it is all relative to the business & product experience you are building. I would take these benchmarks and apply them to the type of usage you need. For example, if you know how much / what kind of usage is needed from a particular user to create strong network effects (if applicable), you should map out the different journeys the user can take on a periodic basis to achieve this goal for themselves, and consequently for your general user base.
  4. DAU growth — track the growth of your product (once it is publicly launched) and measure its virality via the number of referrals / shares on a per-user basis. If you initially tested your product in a beta environment, this simple metric will become increasingly important as you move out of private beta, and there are ways, via a referral system of sorts (among many others), to create a strong viral and network effect. Understanding the power of network effects is critical — this piece breaks it down nicely, and bakes in some of my aforementioned retention markers: A16Z on network effects and growth
  5. Session-based data around average time spent, average number of sessions, and any other engagement metrics you find important to your core UX should be tracked. All of these should trend upward as you enrich your product experience with added features, robustness, etc.
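To make these definitions concrete, here is a minimal sketch of how Day-N cohort retention, the DAU / MAU ratio, and 5-day stickiness could be computed from a flat event log. The data and helper functions are hypothetical; in practice, a platform like Amplitude computes these for you.

```python
# Minimal sketch: cohort retention, DAU / MAU, and 5-day stickiness
# from a flat (user_id, active_date) event log. Data is hypothetical.
from collections import defaultdict
from datetime import date, timedelta

events = [
    ("u1", date(2021, 5, 1)), ("u1", date(2021, 5, 8)),
    ("u2", date(2021, 5, 1)),
    ("u3", date(2021, 5, 1)), ("u3", date(2021, 5, 8)),
]

# A user's first active day defines their cohort (Day 0).
first_seen = {}
for user, day in sorted(events, key=lambda e: e[1]):
    first_seen.setdefault(user, day)

def day_n_retention(cohort_day: date, n: int) -> float:
    """% of the Day 0 cohort that is active exactly n days later."""
    cohort = {u for u, d in first_seen.items() if d == cohort_day}
    target = cohort_day + timedelta(days=n)
    returned = {u for u, d in events if u in cohort and d == target}
    return len(returned) / len(cohort) if cohort else 0.0

def dau_mau_ratio(as_of: date) -> float:
    """DAU on `as_of` divided by MAU over the trailing 30 days."""
    dau = {u for u, d in events if d == as_of}
    mau = {u for u, d in events if as_of - timedelta(days=30) < d <= as_of}
    return len(dau) / len(mau) if mau else 0.0

def five_day_stickiness(week_start: date) -> float:
    """% of that week's active users who were active on 5+ days."""
    days_active = defaultdict(set)
    for u, d in events:
        if week_start <= d < week_start + timedelta(days=7):
            days_active[u].add(d)
    sticky = sum(1 for days in days_active.values() if len(days) >= 5)
    return sticky / len(days_active) if days_active else 0.0

print(day_n_retention(date(2021, 5, 1), 7))   # 2 of 3 users return -> ~0.67
print(dau_mau_ratio(date(2021, 5, 8)))        # 2 DAU / 3 MAU -> ~0.67
print(five_day_stickiness(date(2021, 5, 1)))  # nobody hit 5 days -> 0.0
```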

Put in place a quality mobile analytics platform, don’t just rely on Google

It’s easy to underestimate the learnings & findings that can come from an analytics platform. You may think Google Analytics suffices, but integrating a “dedicated” mobile analytics platform pays dividends, and the sooner the better. There are two kinds of analytics platforms in my opinion — one that you rely on for quantitative insights and another that you rely on for qualitative insights. The former is pretty straightforward and you’ve probably heard of many of them; the latter is not as well known.

For pure quantitative insights, Amplitude is the best I’ve seen and what I’ve used in the past. I know many also like Mixpanel, or Tableau for more custom analytics build-outs, and there are many others that are quality analytics tools. All of the aforementioned metrics can be tracked by these platforms, and there are many, many other things these tools offer that you absolutely should explore, e.g. onboarding funnels, correlation analysis, etc. I may publish learnings on advanced metrics in the future.
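As a rough illustration of what an onboarding funnel report does under the hood, here is a hypothetical sketch of step-by-step funnel conversion. The step names and per-user event lists are made up, and real platforms also handle time windows, ordering edge cases, and scale far more rigorously.

```python
# Hypothetical onboarding funnel: count users who completed each step
# in order. Step names and data are made up for illustration.
funnel = ["app_open", "signup_started", "signup_completed", "first_video_created"]

user_events = {  # ordered event names per user (hypothetical)
    "u1": ["app_open", "signup_started", "signup_completed", "first_video_created"],
    "u2": ["app_open", "signup_started"],
    "u3": ["app_open"],
}

def reached_step(events: list[str], step_index: int) -> bool:
    """True if the user hit funnel steps 0..step_index in order."""
    i = 0
    for e in events:
        if i <= step_index and e == funnel[i]:
            i += 1
    return i > step_index

for idx, step in enumerate(funnel):
    count = sum(reached_step(ev, idx) for ev in user_events.values())
    print(f"{step}: {count}/{len(user_events)} users")
```

The value is in reading the drop-off between steps: whichever step loses the most users is where your onboarding needs work.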

For pure qualitative insights, check out Smartlook as a secondary platform to gather insights from. Platforms like this video-record user sessions (anonymizing / removing personal info) and allow you as a developer to see where on the screen users tap and so on — you would never believe the kinds of things you will learn from simply observing users actually “using” your app. This is basically the purest form of a focus group, except you don’t need to get a bunch of people into the same room and there is little to no bias involved in the actual recorded user session. I think it’s important to centralize as much of the analytics process as possible within your primary platform, e.g. Amplitude. But if you are unable to get enough “viewings” of your users interacting with your product (without asking them questions or leading them on), a solution like this can help you get that form of data.

Most early product building efforts should be in pursuit of the “wow factor”

Developing a fundamental understanding of what makes for genuine user engagement and whether it is core to your product can make or break whether you develop an experience that consumers love. Sarah Tavel’s piece on product engagement factors & measures deconstructs the different layers of user engagement and what to look out for in the early days of building a product. This is a must read for anyone who is building / is on a team building a consumer software product.


One of the biggest conclusions from this is to focus on getting the user to experience the “wow factor” as quickly as possible, which in turn implies two things: 1) you have identified, validated, and thoroughly understand what keeps a user on your platform and what the biggest value proposition is for them. This is usually more specific than you would think; one simple feature could mean the entire world to your user base, perhaps a daily push notification communicating specific information about their health (a total hypothetical). And 2) you have successfully built that “need” into the user experience and made it very easy for the user to unlock that value, perhaps through a simple onboarding, or by making that experience the core feature of your application (at Fam we did this: creating & sharing a group video into a group iMessage chat was the core feature).

What’s the exact formula for finding the “wow factor”? Well, that’s the multi-billion dollar question. There are the many tidbits Sarah lays out in her piece, and many others beyond those. One particularly useful tool is correlation analysis. There are many ways, via correlation analysis within your metrics, to find plausible reasons why a user may be obsessed with your platform, e.g. you’d segment out cohorts of users who exhibit very strong retention, conduct various analyses of that cohort’s behavior throughout your product, and see which events they triggered most (a simple sketch of this follows below). Then you could use those insights to experiment with different ways of organizing the user journey and helping users get to the “wow factor” as quickly as possible. But ultimately, finding the wow factor is usually a culmination of piecing together the right product iteration cadence, coming to play with the same level of resilience and intensity day in & day out, and religiously following user data & feedback.
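For illustration, here is a minimal sketch of that segmentation analysis, assuming you can export (user_id, event_name) rows from your analytics tool. The event names and the high-retention segment are hypothetical.

```python
# Hypothetical sketch: compare per-user event rates for your
# best-retained users vs. everyone else to surface "wow factor"
# candidates. All names and data are made up.
from collections import Counter

events = [  # (user_id, event_name) rows exported from analytics
    ("u1", "group_video_created"), ("u1", "group_video_created"),
    ("u1", "chat_opened"),
    ("u2", "chat_opened"),
    ("u3", "group_video_created"), ("u3", "profile_edited"),
]
high_retention = {"u1", "u3"}  # e.g. users still active at Day 30

def event_rates(users):
    """Average events per user for a segment, keyed by event name."""
    counts = Counter(e for u, e in events if u in users)
    return {e: c / len(users) for e, c in counts.items()}

retained = event_rates(high_retention)
others = event_rates({u for u, _ in events} - high_retention)

# Events over-represented among retained users are candidates for
# the experience you should pull forward in the user journey.
for event, rate in sorted(retained.items(), key=lambda kv: -kv[1]):
    print(f"{event}: {rate:.1f}/user retained vs {others.get(event, 0):.1f}/user others")
```

From there you could reorder onboarding to surface the winning feature earlier, then watch whether your cohort retention curves flatten at a higher level.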

Above all, when it comes to consumer mobile experiences, nothing is a replacement for high velocity iteration and savvy growth tactics to keep the feedback loop going.

A few caveats to consider when digesting all of this:

  1. Not all consumer mobile products are premised on daily active usage. The benchmarks contained here should be filtered and applied to a product experience on a case-by-case basis.
  2. Although I emphasize “mobile”, these same benchmarks are valid for web / desktop counterparts of these products or similar consumer products.
  3. The benchmarks contained here are not completely exhaustive for all consumer mobile experiences. For example, if you were building a free consumer mobile experience, I would argue these are pretty thorough and enough to focus on at first; but when you layer in in-app purchases, monthly subscriptions, and so on, you have a whole other layer of metrics to base decisions on, such as cost of acquisition, subscription churn, renewal, and other factors related to the monetization workflow. Overall though, I am very confident that the above are prerequisites to most consumer experiences working out, free or paid.

Big thank you to Kendall Tucker and Kevin Flynn for the fantastic feedback in finalizing this piece; Kendall is a narrative savant. If you’re working on a cool consumer product, or an enterprise product that hits consumers, that has the ability to change the world or a select industry, feel free to reach out at giuseppe@186ventures.com.
