
A Case Study: Growing DTC CPG Revenue with Smarter Data Analysis

Mariam Ahmed
Co-founder & CTO


Every DTC brand has data. Most of it sits in dashboards nobody checks, exported CSVs nobody opens, and platform reports that tell you what happened without explaining why. The difference between brands that grow and brands that plateau often comes down to whether anyone is actually asking the right questions.

This is the story of how one consumable CPG brand (a specialty food company selling direct through Shopify) went from feeling data-rich and insight-poor to running a meaningfully more profitable business. The specifics have been generalized to protect confidentiality, but the patterns and lessons are real.

The Starting Point

The brand was doing around $3 million in annual revenue, mostly through DTC with a small wholesale presence. Growth had been strong for the first two years but had started to flatten. Customer acquisition costs were creeping up. ROAS on Meta and Google looked decent but not great. The team had a sense that something was off but couldn’t pinpoint what.

They had the usual stack: Shopify for orders, Klaviyo for email, Meta and Google for paid ads, and Google Analytics for site traffic. Each platform had its own dashboard, its own metrics, and its own version of the truth. Nobody had connected them in a meaningful way.

The founder’s instinct was to spend more on ads to push through the plateau. But before doing that, they decided to take a month to actually understand what their data was telling them.

Step One: Understanding True Customer Profitability

The first question they asked was simple: which customers are actually making us money?

Platform data showed a blended CAC of around $35. Average order value was $55. That looked fine on the surface. But when they dug into Shopify data and segmented by customer cohort, a different picture emerged.

About 30% of customers bought once and never returned. Their average order value was lower ($42), and a disproportionate share came from discount-driven campaigns. When you factored in acquisition cost, shipping, COGS, and payment processing, these customers were break-even at best.

Another 25% of customers bought twice within six months. These were solidly profitable after the second purchase.

The remaining 45% were repeat buyers (three or more purchases), and they accounted for nearly 70% of total profit. Their acquisition cost was often higher, but their lifetime value more than made up for it.
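The break-even math behind this segmentation can be sketched as a simple contribution-margin calculation. The cost figures below (COGS rate, shipping, processing fees) are illustrative assumptions, not the brand's actual numbers:

```python
# Sketch of per-cohort unit economics. COGS rate, shipping cost, and
# processing rate are illustrative assumptions for a consumable CPG brand.

def cohort_profit(aov, orders_per_customer, cac, cogs_rate=0.40,
                  shipping=6.50, processing_rate=0.029):
    """Contribution profit per customer after acquisition cost."""
    revenue = aov * orders_per_customer
    cogs = revenue * cogs_rate
    ship = shipping * orders_per_customer
    fees = revenue * processing_rate
    return revenue - cogs - ship - fees - cac

one_and_done = cohort_profit(aov=42, orders_per_customer=1, cac=35)
repeat_buyer = cohort_profit(aov=55, orders_per_customer=3, cac=45)

print(f"one-and-done: ${one_and_done:.2f}")  # negative with these assumptions
print(f"repeat buyer: ${repeat_buyer:.2f}")  # profitable across three orders
```

The exact figures matter less than the structure: once every cost is counted, a one-time discount buyer can look fine on a revenue dashboard while quietly losing money.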

The insight wasn’t complicated, but it changed how they thought about marketing. The goal wasn’t to acquire the most customers at the lowest cost. It was to acquire the right customers, even if they cost more upfront.

Step Two: Finding the Product Problem

Next, they looked at product-level performance. Revenue by SKU was easy to pull, but revenue doesn’t equal profit. They needed to factor in production costs, shipping weight, return rates, and whether a product led to repeat purchases or dead-ended customer relationships.

One of their top sellers by revenue turned out to be a problem. It was a lower-priced item that attracted first-time buyers, but those buyers rarely came back. The product also had the highest return rate in the catalog (issues with packaging that caused damage in transit) and the lowest margin. It was generating sales while eroding profitability.

On the other hand, a mid-tier product that had never been a marketing focus showed surprising strength. Customers who bought it had the highest repeat purchase rate and the highest average order value on subsequent purchases. It was a gateway to loyalty, not just a transaction.

This led to a straightforward but impactful decision: reduce ad spend on the problem product and shift resources toward promoting the mid-tier product to new customers. They also fixed the packaging issue, which cut the return rate in half.

Step Three: Connecting Ads Data to Real Outcomes

The brand had been optimizing Meta and Google campaigns based on platform-reported ROAS. The numbers looked reasonable: around 3.5x on Meta, 4x on Google. But platform ROAS doesn’t account for returns, customer quality, or lifetime value.

When they matched ad campaign data to Shopify customer records, they found significant variance. Some campaigns with strong reported ROAS were actually acquiring low-value customers who bought once and disappeared. Other campaigns with mediocre platform metrics were bringing in customers who became long-term buyers.

One prospecting campaign that looked like a failure (2.1x ROAS, which was below their target) turned out to be their best performer when measured by 90-day customer value. The customers it acquired had a 60% repeat purchase rate within three months. The “successful” retargeting campaign with a 5x ROAS, by contrast, was mostly capturing people who would have bought anyway or who were cherry-picking discounts.
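The comparison above boils down to attributing each customer's first 90 days of revenue to the campaign that acquired them. A minimal sketch of that logic, with made-up order records standing in for the joined ads and Shopify data:

```python
# Hedged sketch: measuring campaigns by 90-day customer value instead of
# platform-reported ROAS. The order records below are illustrative.
from datetime import date
from collections import defaultdict

# (customer_id, campaign, order_date, order_value)
orders = [
    ("c1", "prospecting_A", date(2024, 1, 5), 48.0),
    ("c1", "prospecting_A", date(2024, 2, 20), 62.0),  # repeat within 90 days
    ("c2", "retargeting_B", date(2024, 1, 8), 55.0),   # one-and-done
]

first_touch = {}                  # customer -> (campaign, first order date)
value_90d = defaultdict(float)    # campaign -> revenue within 90 days of acquisition
acquired = defaultdict(set)       # campaign -> customers it acquired

for cust, campaign, d, value in sorted(orders, key=lambda o: o[2]):
    if cust not in first_touch:
        first_touch[cust] = (campaign, d)
        acquired[campaign].add(cust)
    acq_campaign, acq_date = first_touch[cust]
    if (d - acq_date).days <= 90:
        value_90d[acq_campaign] += value

for campaign in value_90d:
    per_customer = value_90d[campaign] / len(acquired[campaign])
    print(campaign, round(per_customer, 2))
```

Note that all revenue is credited to the acquiring campaign (first touch), which is exactly why a prospecting campaign with mediocre immediate ROAS can come out ahead once repeat purchases are counted.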

They restructured their campaign measurement to weight toward cohort-based LTV rather than immediate ROAS. This meant accepting lower short-term numbers on some campaigns while trusting that the payoff would come later. It required patience and buy-in from stakeholders, but the results validated the approach.

Step Four: Fixing the Email Gap

Klaviyo data showed that email was driving around 25% of total revenue. That sounded healthy, but further analysis revealed a problem: almost all of that revenue came from campaigns (one-time blasts) rather than flows (automated sequences).

Their flows were underbuilt. The welcome series was only two emails. The post-purchase sequence was a single thank-you message. There was no replenishment flow for consumable products, no win-back sequence for lapsed customers, and no VIP program for top spenders.

They rebuilt the email program from scratch. A seven-email welcome series that introduced the brand story and guided first-time buyers toward the high-LTV product. A post-purchase flow with usage tips, recipe ideas, and a timely prompt to reorder. A win-back sequence that triggered at 60 days since last purchase (their data showed that customers who didn’t return within 60 days had only a 15% chance of ever returning).
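The 60-day win-back trigger is simple to express in code. A sketch (customer records and field names are illustrative; in practice a tool like Klaviyo's flow builder handles the scheduling):

```python
# Sketch of the 60-day win-back trigger. The threshold comes from the
# article's finding; the customer records here are made-up examples.
from datetime import date, timedelta

WINBACK_AFTER = timedelta(days=60)  # past 60 days, only ~15% ever return

customers = [
    {"email": "a@example.com", "last_purchase": date(2024, 1, 10)},
    {"email": "b@example.com", "last_purchase": date(2024, 3, 1)},
]

def winback_candidates(customers, today):
    """Customers whose last purchase is 60+ days old and should enter the flow."""
    return [c["email"] for c in customers
            if today - c["last_purchase"] >= WINBACK_AFTER]

print(winback_candidates(customers, today=date(2024, 3, 20)))
# ['a@example.com']
```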

Within three months, flow revenue went from 8% of email revenue to 35%. More importantly, repeat purchase rate across all customers improved by 12 percentage points.

Step Five: Building a Cohesive View

The biggest shift wasn’t any single tactic. It was connecting data sources that had been siloed.

Previously, the ads team looked at ads data, the email team looked at email data, and the founder looked at Shopify revenue. Nobody was stitching it together to see the full customer journey.

They started using Menza to pull Shopify data into a format where they could ask questions directly. Instead of exporting CSVs and building pivot tables, they could query things like “which products have the highest repeat purchase rate among customers acquired in Q1” or “what’s the average time between first and second purchase for customers who came from Meta versus Google.” This cut analysis time dramatically and made it easier to spot patterns that would have taken hours to find manually.

They also set up a weekly review ritual: 30 minutes every Monday looking at the same five metrics across platforms. New customer count and source, repeat purchase rate for recent cohorts, email flow performance, top and bottom products by margin, and CAC versus 90-day LTV by campaign. The discipline of consistent review surfaced issues early and kept the whole team aligned.

The Results

Over the following 12 months, revenue grew 40% while ad spend increased only 15%. More importantly, profitability improved significantly. They weren’t just selling more; they were selling smarter.

Some specific outcomes:

Customer acquisition cost actually increased slightly (from $35 to $38), but 90-day customer value increased more (from $72 to $105). The math worked out in their favor.

Return rate dropped from 8% to 4% after fixing the packaging issue and reducing promotion of the problem product.

Repeat purchase rate improved from 35% to 47%, driven by better email flows and a deliberate focus on acquiring customers with higher retention potential.

Email revenue as a percentage of total increased from 25% to 32%, with most of the gain coming from automated flows rather than additional campaign sends.
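A quick check of the CAC math above: the 90-day value-to-CAC ratio improved even as acquisition cost rose.

```python
# Ratio of 90-day customer value to CAC, before and after the changes.
before = 72 / 35    # ~2.06x
after = 105 / 38    # ~2.76x
print(round(before, 2), round(after, 2))
```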

The founder reported spending less time guessing and more time making decisions with confidence. That’s hard to quantify, but it might be the most valuable outcome of all.

What Made It Work

Looking back, a few things stand out about why this effort succeeded where previous “data projects” had stalled.

They started with questions, not dashboards. Instead of building reports and hoping insights would emerge, they began with specific business problems: why is growth slowing, which customers matter most, where is money being wasted. The data served the questions, not the other way around.

They connected data across silos. Ads data alone, Shopify data alone, and email data alone each told an incomplete story. The value came from linking them to see how a customer acquired through one channel behaved over time and responded to another.

They acted on what they found. Every insight led to a decision: cut spend here, fix this product, build this email flow, change how we measure that campaign. Analysis without action is just expensive trivia.

They made it a habit, not a project. The weekly review ritual meant that data analysis wasn’t a one-time audit. It became part of how the business operated, which allowed them to catch issues early and compound small improvements over time.

Tools That Helped

Menza was used to query Shopify data without needing to export and manipulate spreadsheets. Being able to ask questions in plain English made it easier for non-technical team members to participate in analysis.

Klaviyo’s analytics improved significantly once they had proper flows in place. The platform’s cohort reporting and flow performance metrics became more useful once there was actual data to analyze.

Google Looker Studio was used to build a unified dashboard pulling from multiple sources. It took some setup time but provided a single view that everyone could reference.

Triple Whale was evaluated but not implemented. For this brand’s size and complexity, the cost wasn’t justified yet. They kept it on the list for when they scale further.

Data doesn’t fix businesses. Decisions fix businesses. But good data, organized well and reviewed consistently, makes better decisions possible. This brand didn’t hire a data science team or invest in enterprise software. They just started asking better questions and built the habit of finding answers. That’s available to anyone willing to put in the work.

Stop guessing. Start knowing.

Menza connects to your Shopify, Klaviyo, ad platforms, and 650+ other data sources. Ask questions in plain English and get answers you can trust — no spreadsheets, no code, no waiting.