Most loyalty programs in cannabis retail are designed once, launched with optimism, and then quietly bleed margin for years. The discounts get baked into customer expectations. The point math gets generous because nobody wants to be the one who "took something away." And the program's actual contribution to retention or visit frequency rarely gets measured against what it costs.
This is the story of a six-budtender, single-location Colorado dispensary that ran the math, redesigned the program over two weeks, and — measured across the first 90 days post-launch — recovered roughly $38,000 in annualized margin. No customers were lost. Average ticket went up. The data did the heavy lifting.
The Setup: A Loyalty Program on Autopilot
The dispensary launched its loyalty program in 2023 with a structure that made sense at the time: customers earned 1 point per dollar spent, and points could be redeemed at 100 points = $10 off. Effectively, every loyalty customer was earning a permanent 10% discount on top of the regular menu and any in-store promotions.
By late 2025, loyalty redemptions accounted for about 14% of total transactions, and total discounts (loyalty + promotional + medical) were eating roughly 18-22% of gross revenue. The owner suspected something was off — net margin had been compressing for three quarters even as top-line revenue stayed flat — but had no clear read on which discount layer was the culprit.
What kicked off the audit wasn't a strategy session. It was a quarterly P&L showing gross margin down 3.4 points year-over-year with no obvious vendor-cost driver. The team pulled 14 months of POS data and went looking.
What 14 Months of POS Data Showed
The first cut was simple: segment loyalty customers into three tiers based on annual spend and look at redemption behavior, visit cadence, and basket composition for each.
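That first cut is easy to reproduce against any POS export. A minimal sketch in plain Python, using made-up member IDs and spend figures and the 20/50/30 splits described here:

```python
# Hypothetical annual-spend totals per loyalty member (illustrative only,
# not the dispensary's actual data).
annual_spend = {
    "c01": 4800, "c02": 3900, "c03": 2600, "c04": 2100, "c05": 1700,
    "c06": 1400, "c07": 950, "c08": 600, "c09": 250, "c10": 90,
}

# Rank members by annual spend, highest first, then split 20% / 50% / 30%.
ranked = sorted(annual_spend, key=annual_spend.get, reverse=True)
n = len(ranked)
cut1 = round(n * 0.20)          # top 20%  -> Tier 1
cut2 = cut1 + round(n * 0.50)   # next 50% -> Tier 2

tiers = {
    "tier1_high_spend": ranked[:cut1],
    "tier2_mid_frequency": ranked[cut1:cut2],
    "tier3_dormant": ranked[cut2:],
}
```

From here, redemption behavior, visit cadence, and basket composition get computed per tier rather than in aggregate.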
The patterns that fell out were uncomfortable.
Tier 1: High-Spend Regulars (top 20% by annual spend)
These customers were responsible for about 58% of loyalty-driven revenue. They visited weekly or more, had high baskets ($90-120 average), and redeemed points consistently. On the surface, they looked like the program's success story.
But cross-referencing their behavior against a control group of high-spenders who didn't enroll in loyalty revealed something the team hadn't expected: the two groups visited at nearly identical frequencies, and their average tickets differed by less than $4. In other words, the loyalty program was capturing customers who would have shopped at this frequency anyway. The 10% they redeemed was, in margin terms, almost entirely incremental discount on guaranteed revenue.
Tier 2: Mid-Frequency Customers (middle 50% by annual spend)
This was where the loyalty program was doing real work. These customers showed clear behavioral lift after enrollment — visit frequency rose roughly 22% in the six months following sign-up versus the six months prior, and basket size rose modestly. The redemption rate here was healthy but not aggressive.
This was the segment where the program was actually buying retention. About $11,000 of annual margin contribution could be traced to this tier's incremental visits.
Tier 3: Dormant Accumulators (bottom 30%)
A surprisingly large share — roughly 31% of enrolled members — had accumulated points but redeemed zero or one time in the trailing 12 months. Many had stopped visiting altogether. The points sat on the books as a deferred liability with no behavioral upside.
The program was, in effect, paying its biggest discounts to customers who didn't need them and offering nothing meaningful to the ones who had drifted away.
The Three Levers in the Redesign
The redesign was deliberately conservative. The owner's directive: nothing that risked alienating regulars, nothing that felt punitive, no surprises. Three changes were made simultaneously and tested over 90 days.
Lever 1: Tiered Earning Rates
The flat 1-point-per-dollar structure was replaced with tiered earning based on visit recency and frequency. Customers visiting 8 or more times per quarter earned at the original 1x rate. Customers visiting 4-7 times earned at 1.25x. Customers visiting 3 or fewer times earned at 1.5x — a deliberate reach to reactivate dormant members.
The mechanic flipped the program's incentive: instead of rewarding customers who would have shopped anyway, it tilted the larger reward toward the behavioral change the dispensary actually wanted to buy.
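The tier rule is mechanical enough to express directly. A minimal sketch of the earn logic as described above (function names are illustrative, not from any particular POS system):

```python
def earn_multiplier(visits_last_quarter: int) -> float:
    """Tiered earn rate from the redesign: frequent shoppers earn at
    the base rate, lapsing ones earn faster."""
    if visits_last_quarter >= 8:
        return 1.0    # 8+ visits/quarter: original 1x rate
    if visits_last_quarter >= 4:
        return 1.25   # 4-7 visits/quarter
    return 1.5        # 3 or fewer: the reactivation reach

def points_earned(spend_dollars: float, visits_last_quarter: int) -> int:
    # Base program is 1 point per dollar, scaled by the recency tier.
    return round(spend_dollars * earn_multiplier(visits_last_quarter))
```

A lapsed customer spending $80 earns 120 points under this rule; a weekly regular spending the same earns 80.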
Lever 2: A Redemption Ceiling
The single biggest margin leak was high-frequency customers redeeming on every visit. The fix: no more than one redemption per 14-day window, capped at $10 off any transaction over $40. This preserved the perceived benefit, kept the program feeling generous to anyone who only redeemed occasionally, and capped the worst-case discount drain.
The team modeled this against historical data before flipping the switch. The redemption ceiling alone was projected to recover roughly $22,000-$26,000 in annual margin based on 14 months of trailing redemption frequency.
Lever 3: Point Expiration and Win-Back
Points were given a rolling 12-month expiration for the first time, with two clear communications to members: a notice 60 days before expiration, and a one-time double-points reactivation offer for any member who hadn't visited in 90 days.
The double-points offer was the only piece tested as a true reactivation lever. It was sent to roughly 1,400 dormant accumulators with a 14-day redemption window. The conversion rate — defined as members who completed at least one in-store visit during the window — came in at just under 9%.
That sounds modest, but the math worked: those 124 reactivated customers averaged a $72 first-return basket, and 41% of them came back again within the next 60 days. The cost of the doubled earn rate on those visits was about $1,800. The first-return revenue alone was roughly $8,900 in gross sales, and the margin contribution from the cohort that returned a second time pushed the program well into positive territory.
What Happened in 90 Days
Three months after launch, the team ran the same cuts they had used for the audit, comparing the post-redesign window against the trailing 90 days pre-redesign.
The headline numbers:
- Loyalty-driven discount as a percentage of revenue: down from 6.8% to 4.1%
- Average ticket among loyalty members: up roughly 7%, driven mostly by the redemption ceiling pushing customers to consolidate purchases rather than splitting them
- Visit frequency among Tier 2 customers: stable (no detectable churn signal)
- Reactivated dormant members: 124, with 41% returning within 60 days
- Estimated annualized margin recovery: approximately $38,000, with the redemption ceiling contributing the largest share
The follow-up survey — sent to a sample of 300 loyalty members four weeks after launch — returned a net sentiment score of +18, with the most common complaint being mild confusion about the new earning tiers. No one mentioned the redemption cap unprompted.
It's worth being honest about what this story is and isn't. It isn't a finding that loyalty programs don't work. The audit clearly showed Tier 2 customers were responding to the program in measurable ways. The story is that a loyalty program designed without segmented behavioral data tends to over-reward the people who least need the incentive, and the only way to know whether that's happening in your business is to actually look.
What This Means for Other Operators
The patterns in this case study show up regularly when operators run their first segmented loyalty audit. A few things to take away if you're sitting on a flat-rate program of your own:
- Segment before you redesign. A program that looks healthy at the aggregate level often hides three different stories at the cohort level. The cost of the analysis is one weekend of pulling and slicing POS data.
- Compare loyalty members to a control group of non-members. This is the single most useful test of whether your program is actually changing behavior or just labeling existing customers.
- Watch for the dormant accumulator share. If more than 20-25% of your enrolled members haven't redeemed in 12 months, your program is almost certainly accruing liability without producing lift.
- Cap, don't cut. Customers feel cuts. They rarely notice ceilings. A redemption frequency cap preserves the perceived benefit while limiting the worst-case drain.
- Run the reactivation experiment before assuming a member is gone. A targeted double-points window costs almost nothing relative to the value of a returning customer.
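The dormant-share check in particular takes a few lines against exported loyalty data. A sketch with hypothetical member records (IDs and counts are invented):

```python
# Hypothetical export: member id -> redemptions in the trailing 12 months.
redemptions_12mo = {
    "m01": 6, "m02": 0, "m03": 1, "m04": 3, "m05": 0,
    "m06": 2, "m07": 0, "m08": 5, "m09": 1, "m10": 0,
}

# Count members with zero or one redemption, matching the audit's
# definition of a dormant accumulator.
dormant = [m for m, n in redemptions_12mo.items() if n <= 1]
dormant_share = len(dormant) / len(redemptions_12mo)

# Rule of thumb from above: past roughly 20-25%, the program is
# accruing liability without producing lift.
needs_review = dormant_share > 0.25
```

If that flag comes back true on your data, the win-back experiment is the cheap next step.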
The Bottom Line
The loyalty program at this dispensary wasn't broken — it was just designed by intuition back in 2023 and never re-examined. A two-week audit, three structural changes, and a simple reactivation campaign recovered close to $38,000 in annual margin without sacrificing customer relationships.
The takeaways:
- A loyalty program's real cost is what it pays to customers who would have shopped anyway, not its line-item discount total.
- Segmented data — by spend tier, redemption frequency, and visit recency — turns a vague feeling that "something is off" into specific, fixable structural problems.
- Conservative redesigns built around behavioral evidence rarely cost retention and can pay back in a single quarter.
At Chapters Data, we help dispensary and small retail operators turn their existing POS data into the kind of segmented analysis that this audit ran on — cohort behavior, loyalty program contribution, redemption efficiency, and the discount-vs-incremental-revenue questions that conventional reporting hides. If your loyalty program hasn't been reviewed in over a year, it's worth a look. Your margin is probably telling you something the program isn't.



