“I’m analytical” implies that you either have it or you don’t. The truth is that we are not born to do anything; we’re born with the potential to do many things.
I'm known as an analytical product manager, but I started as a lost beginner. I’ve learned that you don’t have to be a “numbers person” to be sufficiently savvy. You also don’t have to write queries to identify insights. They’re bonuses, not blockers!
Being analytical is simply a way of thinking. It can help you crack dreaded analytical interview questions and problem-solve like a ninja on the job. And it starts with a healthy skepticism and curiosity towards numbers.
Here are some examples of using an analytical approach, ordered from easy to more advanced.
Know your denominator
My stomach dropped: only 2% of people clicked on the new emails. My manager was knowingly skeptical: what’s the denominator?
Always know the denominator, and how it’s defined. A percentage is meaningless if you don’t understand what went into the calculation.
Example: 80% of people placed an order! You can’t tell if this is good without knowing who the “people” are. Visitors to a website? Email recipients? Those on the last step of the order flow? The narrower the denominator, the less impressive the 80% becomes.
In my case, I freaked out over a 2% click-through rate, but the denominator was only 50; it was too early to tell. Curb your enthusiasm or concern by understanding your denominator.
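If you like seeing it in numbers, here’s a tiny sketch (Python, with made-up figures) of why the same 2% can mean very different things depending on the denominator:
```python
# Made-up figures: the same 2% click-through rate, wildly different certainty.
clicks, recipients = 1, 50            # 1 / 50 = 2%, but only 50 people got the email
# clicks, recipients = 400, 20_000    # also 2%, and now it's a real signal

rate = clicks / recipients
# A rough 95% interval (normal approximation) shows how little 50 recipients can tell you.
margin = 1.96 * (rate * (1 - rate) / recipients) ** 0.5
print(f"CTR = {rate:.1%} ± {margin:.1%}")   # 2.0% ± 3.9% with n=50, vs ± 0.2% with n=20,000
```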
Hacking percentages
There are two ways to boost a percentage. You can increase the numerator (usually the real intent), or you can decrease the denominator (usually bad and unintended).
Example: sign-up rate increased to 50%! But what change was made? You can boost the rate by paying people to sign up, changing the definition of the denominator, or cutting the # of steps (maybe good, but could also hurt retention).
Before you pop the champagne, make sure to dig into the specific change, and how the numerator and denominator shifted.
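Here’s a quick before/after sketch with made-up numbers showing the trap: the rate doubles even though the numerator never moves.
```python
# Made-up numbers: "sign-up rate jumped from 25% to 50%!"
before = {"signups": 500, "visitors": 2_000}   # 500 / 2,000 = 25%
after  = {"signups": 500, "visitors": 1_000}   # 500 / 1,000 = 50%

for label, d in (("before", before), ("after", after)):
    print(label, f"{d['signups'] / d['visitors']:.0%}")

# The rate doubled, yet not a single extra person signed up:
# only the denominator shrank (say, the definition of "visitor" got narrower).
```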
Know your baseline
Percent changes (e.g., 20% lift) are commonly used to report on AB tests. But lifts mean little without a baseline measure (what the metric is today).
Example: last year, our experiments generated over a 50% lift in sales; this year it sank to 20%. Why do we suck now?
Assuming your results were real, your baseline has grown. If your baseline used to be $1000, then growing to $2000 is a 100% lift. If your baseline is $1M then you would need to grow to $2M(!) for it to qualify as a 100% lift.
As your baseline improves, percent changes get harder to unlock. A lift only captures relative change; what you’re really optimizing for is a higher baseline. Knowing the baseline helps you contextualize change.
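The same idea, sketched with made-up figures: an identical absolute gain shrinks into a smaller and smaller lift as the baseline grows.
```python
# Made-up figures: the same $500 of incremental sales against a growing baseline.
absolute_gain = 500
for baseline in (1_000, 10_000, 1_000_000):
    print(f"baseline ${baseline:>9,} -> {absolute_gain / baseline:.1%} lift")
# $1,000 -> 50.0%, $10,000 -> 5.0%, $1,000,000 -> 0.1%.
# The team didn't get worse; the baseline got better.
```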
80/20 numbers
Are you drowning in a sea of numbers? When you interview at a company or join a new team, the most important numbers to know are the big drivers for the business: traffic, conversion, sales, profit.
A small number of things tend to have outsized impact. They deserve outsized attention.
I once spent hours diagnosing why an AB test had gone south. None of the changes looked suspicious. Turns out it was a subtle change: one of our highest-converting recommendation carousels got bumped down by half a page.
The experience looked harmless, but that carousel was a big driver of traffic, so a tiny change made a huge difference. A solid grasp of these key drivers is far more important than remembering all the minutiae.
Intent is hard to change
We like big numbers, so we gravitate towards working on them. Often, this leads us to upstream changes like making sign-up smoother. Sadly, a 10% lift in sign-up will not directly translate into a 10% lift in the bottom line.
By making sign-up smoother, you have increased the number of lower-intent customers. You haven’t meaningfully raised their intent, which means a greater share of them will not end up buying. Some of the lift may flow to sales or profit, but it’s not guaranteed.
What can you do then? Diagnose the bottlenecks for your high-intent customers. For under-optimized products, focus on friction that prevents people who intend to buy from doing so.
Once that’s taken care of, you can move up the funnel, but keep in mind that there’s leakage at every step. The further your change is from what you truly care about, the more diluted the impact.
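A toy funnel (made-up rates, not from any real product) shows the dilution:
```python
# A 10% lift in sign-ups does not become a 10% lift in orders,
# because the extra sign-ups convert at a lower rate (lower intent).
signups, order_rate = 1_000, 0.30            # existing sign-ups and how they convert
extra_signups, extra_order_rate = 100, 0.10  # the marginal, lower-intent sign-ups

orders_before = signups * order_rate                             # 300 orders
orders_after = orders_before + extra_signups * extra_order_rate  # 310 orders

print(f"sign-ups: +{extra_signups / signups:.0%}")                          # +10%
print(f"orders:   +{(orders_after - orders_before) / orders_before:.1%}")   # +3.3%
```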
Segmentation is your friend
Whenever you see a notable number, try to segment until you get to a specific insight.
Example: order conversion grew from 15% to 20%! Cool, but there’s more to the story. Which customers were affected: first-time or returning? Did they come from a specific acquisition channel? Did they order through search or browse? What was the timeframe: 1-day or 7-day?
The devil is always in the details. Knowing the details gives you a deeper understanding of what’s happening, and plants the seeds for new ideas. Perhaps they reveal an opportunity to double down on an overlooked channel, or that a small sacrifice in immediate conversion leads to better overall conversion.
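If you do have access to the data, the mechanics are simple. Here’s a sketch with a hypothetical dataset and column names (pandas, but the same idea works in SQL):
```python
import pandas as pd

# Hypothetical session-level data: one row per session, with a converted flag.
sessions = pd.DataFrame({
    "customer_type": ["new", "new", "returning", "returning", "new", "returning"],
    "channel":       ["email", "search", "email", "search", "search", "email"],
    "converted":     [0, 1, 1, 1, 0, 1],
})

# The headline conversion rate...
print(f"overall: {sessions['converted'].mean():.0%}")

# ...and the same rate sliced by customer type and channel.
print(sessions.groupby(["customer_type", "channel"])["converted"].mean())
```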
Cohorts speak the truth
A special type of segmentation is the cohort: a group of people who share something in common, typically a start date. Cohorts give you a cleaner read, especially when measuring retention (i.e., whether customers are sticking around).
Example: 90% of your sign-ups are active on a weekly basis! Seems promising, but when you look at the monthly cohorts, the real story emerges: most sign-ups from 3+ months ago have actually stopped using the product, but a recent surge in sign-ups has juiced the headline number.
Cohorts show you whether older customers behave differently from newer customers. When it comes to retention, older customers are a more reliable signal because they’ve actually been around long enough to, well, retain.
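Here’s a minimal sketch of building a cohort table from a hypothetical activity log (the column names are made up, and real data would use proper dates):
```python
import pandas as pd

# Hypothetical activity log: one row per (user, month active), plus the month they signed up.
activity = pd.DataFrame({
    "user":         [1, 1, 1, 2, 3, 4, 5],
    "signup_month": ["Jan", "Jan", "Jan", "Jan", "Mar", "Mar", "Mar"],
    "active_month": ["Jan", "Feb", "Mar", "Jan", "Mar", "Mar", "Mar"],
})

cohort_size = activity.groupby("signup_month")["user"].nunique()
active = activity.groupby(["signup_month", "active_month"])["user"].nunique().unstack(fill_value=0)

# Share of each sign-up cohort active in each month: the Jan cohort has slipped to 50%,
# even though the March totals look healthy thanks to the new sign-ups.
print(active.div(cohort_size, axis=0))
```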
Trash in, trash out
Don’t spin fancy insights out of complete trash. Before you do anything, check the quality of the dataset by reviewing a random sample row by row. Scan for missing data, wacky numbers, duplicates and other things that don’t make sense.
Sophisticated calculations can fabricate insights that look real but are total BS. Even if you’re not the one wrangling the data, at least one trusted person should be reviewing it by hand. This is critical if you are using the data to make hard-to-reverse decisions or quoting it in investor pitches.
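The checks don’t have to be fancy. A sketch against a hypothetical orders.csv (file and column names are made up):
```python
import pandas as pd

orders = pd.read_csv("orders.csv")        # hypothetical dataset

# Eyeball a random sample row by row before computing anything fancy.
print(orders.sample(20, random_state=0))

# Then scan for the usual suspects: gaps, duplicates, numbers that make no sense.
print(orders.isna().mean())               # share of missing values per column
print(orders.duplicated().sum())          # exact duplicate rows
print(orders["order_total"].describe())   # negative totals? absurd outliers?
```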
If your foundation is off, the tower will eventually crumble.
Metrics are the worst, except for all the others
There’s no such thing as a perfect metric. Everything can and will be hacked unless you put checks in place.
Common pairings of metrics and checks:
- Sales, check: profit — fun to grow, but are you doing so profitably?
- Number of accounts, check: average contract size — fun to get more accounts, but are they high-quality?
- Number of shipped features, check: impact of features, # bugs detected — good to ship, but are they worthwhile and high-quality?
Often, the primary metric measures whether your product is delicious, and the check metric ensures that it’s also nutritious. There’s a natural tension between the two, and many products skew towards empty calories.
Verbs come first
None of these learnings are programmed into your hardware; they require repeated exposure and practice to be built into your software. Everyone starts as a beginner.
“I’m analytical” is catchy, but a more empowering internal dialogue is “I’m the type of person who methodically breaks down problems”. It’s a thing you do continuously, not an identity you inherit.
A poet once said, we like to be the noun without doing the verb. But we are nothing without our verbs.