
Role
UX Designer (Consulting Engagement)
Team
4-person consulting team from NTU's Critical Thinking and Problem Solving Lab, applying BCG-style methodology
Timeline
Feb 2021 - Jun 2021
At a Glance
Problem
Of 8,600 registered influencers on Influenxio, only 3.4% were active. SMBs couldn't tell which creators would perform.
My Role
Designer on a 4-person consulting team. Led interface redesign and A/B testing with 16 SMBs.
Outcome
A projected 4.5% lift in admissions of high-quality beginner influencers, presented as a recommendation to Influenxio's leadership.
Two sides, one matching problem
Influenxio's matching problem hurt both sides of the marketplace at once.

Quotes from interviews with SMBs and influencers during research.
Over time, this created a feedback loop. SMBs ran fewer campaigns because they couldn't find good matches. Influencers gave up because they weren't getting picked. The active pool kept shrinking.
Finding the underserved segment
To understand what was breaking the matching system, we ran cohort analysis on Influenxio's registered influencer base.

52%
of registered influencers were Nano-tier (1K to 5K followers)
52%
of those Nano-tier influencers were not being selected by SMBs
15%
of Nano-tier influencers had high engagement rates, yet were filtered out at nearly 5× the rate of their lower-engagement peers
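These figures came from a straightforward segmentation: tier the registered base by follower count, split the Nano tier by engagement, and compare how often each band is actually selected. The sketch below shows the shape of that cut, assuming a flat per-influencer export; the column names, tier bounds beyond Nano, and the engagement cutoff are illustrative, not Influenxio's actual schema.

```python
import pandas as pd

# Hypothetical per-influencer export: follower count, engagement rate,
# and whether an SMB ever selected the influencer for a campaign.
influencers = pd.read_csv("registered_influencers.csv")

# Tier by follower count (Nano = 1K to 5K, per the definition above).
influencers["tier"] = pd.cut(
    influencers["followers"],
    bins=[0, 1_000, 5_000, 50_000, float("inf")],
    labels=["Sub-1K", "Nano", "Micro", "Larger"],
)

nano = influencers[influencers["tier"] == "Nano"].copy()
nano["high_engagement"] = nano["engagement_rate"] >= 0.05  # assumed cutoff

# Share of the registered base that is Nano, and selection rate by engagement band:
# the comparison behind the "filtered out at nearly 5x the rate" finding.
nano_share = len(nano) / len(influencers)
selection_rate = nano.groupby("high_engagement")["selected_by_smb"].mean()
print(f"Nano share of registered base: {nano_share:.0%}")
print(selection_rate)
```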
The pattern was counterintuitive. Influencers most likely to deliver good campaign outcomes (high engagement, smaller but devoted audiences) were the ones SMBs were least likely to choose.
The platform was filtering out its best matches.
This gave us our target: high-engagement beginner influencers, the segment the existing platform was systematically overlooking.
The fix wasn't to attract more influencers. It was to make high-engagement beginners visible to SMBs at the moment of decision.
We made three specific changes to the influencer profile card, each backed by interview evidence about what SMBs actually used to make selection decisions.

Self-Recommendation prompt
The prompt was buried on page 2; most SMBs never saw it.
We surfaced the prompt to the influencer's primary profile card. This let influencers articulate their unique value (campaign style, audience focus, past collaborations) where SMBs were actually looking. For high-engagement beginners with no track record yet, this gave them a way to compete on context, not just numbers.
69% of SMBs in interviews said self-recommendations would influence their selection.
Unified Influencer Rating
The original rating showed "received cases," which beginners had none of.
We replaced the case-count rating with a unified, site-wide average rating. Detailed historical case data moved to each influencer's individual profile, accessible if SMBs wanted depth. This stopped penalizing new influencers for being new, while still letting SMBs verify track records when they wanted to.
75% of SMBs said "received cases" was not their primary consideration.
Expected Impressions metric
The original card showed "average likes," which SMBs found unreliable.
We replaced average likes with a forward-looking Expected Impressions estimate, calculated from each influencer's engagement rate, follower count, and category. This gave SMBs a predictive signal for campaign reach, which mattered more to them than past like counts.
56% of SMBs said impressions and reach were what they cared about most.
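As a rough illustration of how such an estimate can be built, the sketch below combines the three inputs named above into a single forward-looking number. The function name, category weights, and reach multiplier are placeholders, not Influenxio's production formula.

```python
# Placeholder category weights; the real model's weights were Influenxio's own.
CATEGORY_FACTOR = {"food": 1.2, "beauty": 1.0, "lifestyle": 0.9}

def expected_impressions(followers: int, engagement_rate: float, category: str) -> int:
    """Forward-looking reach estimate for the redesigned profile card (illustrative)."""
    engaged_audience = followers * engagement_rate        # people likely to interact with a post
    reach_multiplier = 8                                  # assumed impressions per engaged follower
    category_factor = CATEGORY_FACTOR.get(category, 1.0)  # category-level adjustment
    return round(engaged_audience * reach_multiplier * category_factor)

# A Nano creator with 3,000 followers and 6% engagement in the food category:
expected_impressions(3_000, 0.06, "food")  # ≈ 1,728
```

Whatever the exact formula, the design intent is the same: the number on the card should say what a campaign is likely to get, not what past posts happened to receive.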
All three changes shared one principle: replace metrics SMBs didn't trust with signals they did. The next question was whether the redesign actually changed how SMBs picked influencers in practice.
Testing the redesign with SMBs
To validate the redesign, we ran prototype-based A/B testing with 16 SMBs in moderated interviews.
16 SMBs
participated in moderated interviews
2
cohorts of 8 (control vs. redesign)
Figma prototypes
identical except for the three indicators that changed
Each SMB reviewed a hypothetical campaign brief and selected influencers from a curated pool. The control group used the existing platform; the redesign group used the test prototype. We measured which profiles each group selected and how confident they felt in their choices.
With 16 participants and prototype-based testing, results should be read as directional rather than statistically conclusive.
What the test showed
The test cohort consistently preferred the redesigned indicators over the original ones, and that preference translated into different selection patterns.

2×
as many high-engagement Nano profiles selected by the test cohort vs. control
All 3
redesigned indicators preferred by the test cohort
4.5%
projected lift in admissions of high-quality beginner influencers, when extrapolated from the cohort comparison
Each change contributed to the shift. The self-recommendation prompt let high-engagement beginners articulate value beyond their numbers; the unified rating stopped penalizing new influencers for having no past cases; and the Expected Impressions metric gave SMBs a forward-looking signal they trusted more than likes.
The 4.5% projection assumes the prototype behavior would translate to deployed traffic at similar rates. With a small sample and prototype context, the projection is directional, but the consistency of preference across the test cohort gave the team confidence to recommend deployment.
What I took from this project
Influenxio taught me that marketplaces don't always grow by serving the paying side first. SMBs were the revenue source, but the platform's growth depended on creators being engaged enough to keep showing up. Some of the most useful design moves I made were on creator-facing visibility, even though we were ostensibly designing for SMBs.


