# Ten Data Upgrades That Make All Levels of Leaders More Credible

Want to make your mark in the meeting? Whether you’re presenting in the boardroom or a team standup, here’s how to be more impactful with your storytelling.

Call me a numbers nerd, but it’s a title I’ll wear proudly. I strongly believe that a command of data analysis is the fastest way to build credibility, both internally within an organization and externally with investors and customers.

Throughout a career that has spanned Wall Street, high-growth startups, and most recently an assortment of board seats, I’ve watched professionals at every level, from early-career analysts to seasoned executives, invite confusion and critical feedback through the way they present metrics and data.

Over the years, those reps have helped me develop a perspective on how even the not-so-mathy among us can articulate their points more effectively. Here are a few of my favorite techniques.

## 1. You can’t tell a story without a timeline

Standalone metrics in a business presentation are a dangerous thing. If you’re presenting a number, it needs *context*: if you simply report the value of new logos closed last quarter, the reader has no way to know whether that number is good or bad, better or worse than the target, and so on. Using a simple timeline to show how a metric is trending is an extremely straightforward technique for providing that context.

## 2. Turn pie charts into stacked bars to offer a richer perspective

If you find yourself pulling together a pie chart, I’ll remind you of the famous words of Nancy Reagan: “just say ‘no.’” Pie charts are ineffective for storytelling for two reasons: first, they are hard to read, and second, they lack context because the pieces of the pie are effectively just standalone numbers.

The pie chart below reveals that Company X lost 50% of its closed lost deals on account of some perception around price/value, but we lack any context around how that has trended over time. In translating our pie chart into a stacked bar chart (and adding in data from prior quarters), we quickly see that the company consistently loses ~50% of its opportunities due to the price/value question, and we also observe (through the decline in the orange bars) that the company has significantly *improved* its ability to identify why it is losing deals, perhaps because it launched closed-lost interviews to capture better intelligence on why deals were lost.

The stacked bar is a powerful visual for explaining shifts in a mix: portion of contracts that are one-year vs. multi-year deals, portion of contracts that are paying monthly vs. quarterly vs. semi-annually, etc.

## 3. State the change, but don’t forget basic arithmetic when you do it

When presenting charts, *do the math* for your audience; if there’s a material change, name the numbers in the headlines. And while it deeply pains me to have to write this next sentence, rest assured that I’m penning it for a reason: make sure you are calculating percentage change correctly. It is baffling to me how often executives, MBAs included, miscalculate the basic percentage change formula.

The % change calculation is straightforward: (new value ÷ old value) - 1. If your win rate increases from 20% to 30% between Q2 and Q3, the % change is (30% ÷ 20%) - 1, or 50%. The % change is not 10%, and if you report it as 10%, you are wildly underreporting your improvements (and just plain wrong).

(Editor’s Note: If you want to double-check your math, there are plenty of websites that will do the correct calculation for you.)
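In code, the formula is a one-liner. This minimal Python sketch (illustrative, not from any particular tool) shows why the win-rate example works out to 50%:

```python
def pct_change(old, new):
    """Percentage change: (new / old) - 1, returned as a fraction."""
    return (new / old) - 1

# Win rate rising from 20% to 30% is a 50% improvement,
# not "10%" (that would be the point difference, a separate concept).
improvement = pct_change(0.20, 0.30)  # ≈ 0.50
```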

## 4. …and let’s not forget high school statistics, either!

While we’re on the topic of grade school throwbacks, let’s make our way back to high school statistics and the concept of statistical significance, which is important for explaining the materiality of any change.

My fellow math nerds out there will recall that when we test for statistical significance, we are ultimately trying to establish that the difference between two numbers is unlikely to be due to random chance. The key drivers of statistical significance are 1) the sample size and 2) the magnitude of the difference between the metrics being compared.

Despite both charts below reporting identical open rates in their respective A/B tests, the smaller sample size in the chart on the left is not enough to achieve statistical significance.

That said, it is important to note that statistical significance *can* be achieved with smaller sample sizes; doing so just requires a larger delta between the numbers being compared. The charts below hold sample size constant, but the chart on the right is able to reach significance because of the magnitude of the difference between A’s open rate (24%) and B’s open rate (12%), versus the smaller difference in the chart on the left.

(Editor’s Note: SurveyMonkey offers a free tool for basic statistical significance testing.)
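For readers who would rather script it than use a web tool, an A/B open-rate comparison like the one above can be sketched as a two-proportion z-test. This is a stdlib-only Python sketch with illustrative sample sizes (the charts’ actual sample sizes aren’t given here):

```python
import math

def two_proportion_test(opens_a, sends_a, opens_b, sends_b):
    """Two-proportion z-test: is the gap between two open rates likely real?"""
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / se
    # Two-tailed p-value from the normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Same 24% vs. 12% open rates: significant at 1,000 sends per arm, not at 50.
p_big = two_proportion_test(240, 1000, 120, 1000)   # well below 0.05
p_small = two_proportion_test(12, 50, 6, 50)        # above 0.05
```

Note how the identical rates flip from significant to not significant purely on sample size, which is exactly the point of the charts above.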

It’s not lost on me that earlier-stage companies often will not have the luxury of statistical significance, and that is okay! Companies with less data at their disposal should get comfortable making decisions on directional insights, but it is absolutely imperative to *revisit *data over time to ensure any assumptions made hold true (and to double-check for significance down the line).

Statistical significance is important for storytelling with data; it’s not worth having a cow over a change in metrics when the difference does not qualify as significant!

## 5. Embrace first principles thinking: are you evaluating the right metric?

Before reporting on *any* metric, it is important to consider the merits of that metric and, more importantly, to ask whether there is a more compelling way to tell your story. If Company X only reported on average first reply time on support tickets (blue line below), the reader would likely walk away thinking that support had improved. But when Company X opts to report on customer wait time (how long the customer waits for a reply from a support agent), we see a very different story: customer experience has worsened. Great data storytellers always challenge their own thinking on the metric that matters most.

*Pick the metric that really matters to tell an intellectually honest story.*

## 6. Normalize your data to tell a more compelling story

What if I told you that the side-by-side charts below actually reflect the exact same data? It’s true! The chart on the left reports support ticket volume by year, but the chart on the right normalizes that support ticket volume against ARR (e.g., 2020 is 770 tickets against $10MM ARR, or 77 tickets per $1MM ARR; 2021 is 1,785 tickets against $35MM ARR, or 51 tickets per $1MM ARR).
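The normalization itself is just division; a quick Python sketch using the figures above:

```python
# Normalize raw support ticket volume by ARR (in $MM) to surface a quality signal.
tickets = {2020: 770, 2021: 1785}
arr_mm = {2020: 10, 2021: 35}  # ARR in $ millions

tickets_per_mm_arr = {yr: tickets[yr] / arr_mm[yr] for yr in tickets}
# 2020 → 77 tickets per $1MM ARR; 2021 → 51. Volume is up, but the rate is down.
```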

Before presenting an absolute number, always consider whether there is a rate or ratio to be used to help augment the number (a quantity metric) with a quality signal, as the quality metric will often tell a different story from the volume trend.

*Example: Normalize BDR Meeting Volume with a Meetings-per-BDR Metric*

In this example, BDR meeting count is increasing, but the BDR team has become less efficient (fewer meetings set per BDR). This decline in efficiency may be okay (perhaps there are new BDRs ramping on the team), but it could also be a symptom of a bigger problem.

## 7. A secondary axis is a powerful way to juxtapose quantity and quality

Using a secondary chart axis is an easy way to juxtapose a quantity metric (absolute numbers) with a quality indicator (rates and ratios). The chart below shows an improvement in marketing qualified leads (MQLs) over time, but an increase in MQLs is immaterial if it does not yield an increase in *sales* qualified leads (SQLs) further down the funnel. By adding in the red line (MQL>SQL conversion), the marketing leader at Company X can now brag that in addition to increasing MQL counts overall, they also brought in better quality leads (outside of perhaps that seasonal dip in December!).

## 8. Be sensitive to sensitivity drivers

Forecasts and projections are key ingredients in any thoughtful business plan, but when those projections are based on assumptions, sensitivity tables and analysis make it easy for your audience to understand the implications and risks associated with *not* achieving those assumptions.

The three tables below model out sensitivity scenarios for lifetime value calculations. The first table reflects confidence in the ACV and gross margin assumptions, but walks the reader through what happens with any variability in churn. The second example assumes that ACV and churn are reasonably baked, but highlights how LTV changes with different margin attainment. The final example shows how fluctuations in ACV impact LTV even when gross margin and churn stay constant.
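As a sketch of how such a table is built: one common LTV formulation is annual gross profit divided by annual churn. The formula choice and the dollar figures below are illustrative assumptions, not taken from the tables above:

```python
def ltv(acv, gross_margin, annual_churn):
    """One common SaaS LTV formulation: annual gross profit / churn rate."""
    return acv * gross_margin / annual_churn

# First-table pattern: hold ACV ($50K) and gross margin (80%) constant,
# then sensitize on churn to expose the risk in that one assumption.
churn_sensitivity = {churn: ltv(50_000, 0.80, churn)
                     for churn in (0.05, 0.10, 0.20)}
# Doubling churn halves LTV; the table makes that risk explicit.
```

The other two tables follow the same pattern, simply swapping which variable is allowed to move.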

## 9. Avoid “elevator” analysis; talk the walk

This is a lesson I learned during my first week of Wall Street analyst training back in 2005 and will never forget: “elevator” analysis is when we just say that a metric went up or down. And it is not helpful. Bridge/walk charts are a tried-and-true way to transcend elevator analysis and show your work: *why* did the number go up or down? In the first example below, instead of just reporting starting and ending sales pipeline for the period, we walk through all of the inflows and outflows. In the second example, we do the same for gross margin: we “talk the walk” for how gross margin will improve from 2023 to 2024.
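The arithmetic behind a pipeline walk is just an ordered ledger of inflows and outflows; a minimal sketch with made-up figures:

```python
# A pipeline "walk": open with the starting balance, then show every
# inflow/outflow so the reader sees why the number moved (illustrative figures).
pipeline_walk = [
    ("Starting pipeline", 10_000_000),
    ("New pipeline created", 4_000_000),
    ("Closed won", -2_500_000),
    ("Closed lost", -1_500_000),
    ("Pushed to next quarter", -500_000),
]

ending_pipeline = sum(amount for _, amount in pipeline_walk)
# Ending pipeline is $9.5MM, and every step of the move is accounted for.
```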

## 10. Fear the forward; let your headlines tell a standalone story

Last but certainly not least, always fear the forward! If a constituent passes along an analysis or deck without clear takeaways, who knows how the information might be interpreted! An important best practice in storytelling with data is ensuring that your slide headlines hold standalone value. Said another way, if a reader of your deck only scanned the slide headlines and never even examined the charts and graphs, they’d still walk away with a strong command of the story you sought to tell.