Myna: Designing an AI Marketing Tool SMBs Actually Use

Myna began as a swipe-based content-approval app for restaurant marketing, but real-world testing showed owners didn’t want content; they wanted guidance and proactive intelligence.

This case study shows how research reshaped Myna into a task-driven AI partner, with measurable improvements in task completion, clarity, and trust.

Overview

Myna is an AI-powered marketing assistant for small and medium businesses. It simplifies marketing tasks, reduces cognitive load, and drives engagement by guiding owners through high-value actions.

Problem Space

Restaurant owners juggle content creation, daily operations, and customer acquisition, leaving little time for marketing.

Most owners relied on costly agencies or complex tools, with 70% reporting difficulty managing marketing consistently.

Initial Product State

• The swipe-based app gamified marketing tasks, but the experience felt confusing, gimmicky, and shallow.

• Users struggled to understand the value of actions.

• No clarity or guidance.

Initial Outcomes

• <10% of users returned; task completion was near zero.

• Low adoption and low willingness to pay.

Solution

Pivoted from a swipe-based approval app to an AI-assisted, task-focused experience combining chat, tasks, and insights.

My Role

Founding Designer: led research, UX/UI, brand, and product strategy.

Team

CEO, CTO, AI Engineer, 2 Developers, 2 Junior Designers

Project type & Timeline

Mobile app design · B2B SaaS · AI for restaurant tech · 12 weeks

Project Outcome

65%

Increase in task completion rate

70%

Decrease in user confusion

50%

Decrease in AI-related complaints

STAGE 1

Initial Concept

Context

The Vision


The founder envisioned a Tinder-like swipe UI: users swipe through cards to approve or dismiss suggested marketing actions. “One gesture, zero friction.”

The Challenge

1

Misalignment and unclear value

I joined after the swipe concept was defined but before any user validation.

2

No UX research

Early decisions were assumption-driven.

3

Heavy speed-to-market pressure

Investors expected fast results.

Usability Breakdown

Usability tests & UX audits to validate assumptions and identify friction points

User testing revealed major confusion and low perceived value:

“What am I looking at?”

“It would make things faster… but I wouldn’t pay for it.”

“I can just open ChatGPT and get all this done for free.”

Core issues identified (UX audit + tests):

1

No clear hierarchy

2

Weak guidance

3

Ambiguous cues

4

Users couldn’t identify the core action or navigate confidently

UX audit and user-test issues mapped on swipe cards

Business Impact

The experience failed to communicate value

1

<10% returning users


  • 60% felt the swipe UI was a gimmick

  • Owners wanted help running their business, not approving content

2

Near-zero task completion


  • No hierarchy or guidance

  • Users didn’t know what action mattered

3

Low willingness to pay


  • ChatGPT seen as a free substitute

Research

To validate assumptions and understand real needs, I led the research plan + analysis:

  • Interviews with 13 restaurant owners

  • Desk research on owner behavior, workflows, and pain points

  • Competitive benchmarking + SWOT to understand market gaps

  • Empathy & affinity mapping to synthesize themes

  • Quantitative & qualitative analysis of time spent, costs, and operational bottlenecks

Competitor Landscape Review

Market Positioning & SWOT Findings

Insight Synthesis: Affinity Mapping

Research Repository: Interviews & Analysis

Behavioral & Business Findings

What owners actually cared about:

Sample size:

13

“If you can bring me catering orders, I’ll pay you tomorrow.”

“If the app could make my everyday processes easy, I’d pay for it.”

“I want something that’s actively chasing opportunities for me.”

What Owners Valued


  • Operational + marketing support for daily tasks

  • Examples: catering leads, local events, competitor activity, weather impact, sales/labor metrics, trends, ingredient pricing, team motivation

  • Proactive guidance, not guesswork

Quantitative Insights


  • 6–10 hrs/week spent responding to reviews (~20% of admin time)

  • 70% struggled with content creation + trend monitoring

  • CAC rising 15–25% annually

Competitive Findings

The review revealed existing solutions and best practices, plus weaknesses and opportunities for Myna.


  • Existing tools required owners to pull insights

  • Dense dashboards that shifted the burden of analysis onto already time-constrained owners

  • Clear opportunity for proactive, context-aware intelligence

Takeaway

Restaurant owners don’t want to pull information. They want intelligence pushed to them. The swipe-card concept, built on assumptions rather than validation, could never deliver the proactive guidance owners actually needed.

Key Findings

1

Usability Issues

Confusion and weak visual hierarchy

2

Behavioural Disconnect

Owners want guidance

3

Business Misalignment

Low perceived value, low adoption, and overlap with existing tools like ChatGPT

STAGE 2

Pivot to Task-Focused Experience

Ideation

How can I make Myna less of a tool… and more of a partner?

Hypothesis

"If we reduce cognitive load and reframe marketing as small, meaningful weekly tasks, owners will take consistent action and feel more in control."

To validate this, I aligned the team on:

1

Clear user value derived from research

2

Engineering feasibility with the CTO

3

A flexible workflow that could evolve with the company

Information Architecture (IA) and System Design

I led the task-flow architecture and product IA, connecting tasks, analytics, AI, notifications, user actions, and outcomes to keep the team aligned and catch issues early.

Key questions to clarify:

  • How much structure owners wanted

  • When AI should take initiative vs. stay passive

  • Which tasks drove meaningful business outcomes (reviews, social, intelligence)

Task-focused process flow in FigJam
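
To make the model concrete, here is a minimal sketch of how a weekly task could tie triggers, data signals, and outcomes together. All names and fields below are hypothetical illustrations, not Myna’s production schema.

```typescript
// Hypothetical sketch of the task model: one unit connects the signals
// that triggered it, the guidance shown to the owner, and the outcome
// it should move. Names and fields are illustrative only.

type TaskCategory = "reviews" | "social" | "intelligence";

interface DataSignal {
  source: string;       // e.g. "google_reviews", "pos_sales"
  observedAt: Date;
  payload: unknown;
}

interface WeeklyTask {
  id: string;
  category: TaskCategory;
  title: string;             // plain-language action, e.g. "Reply to 3 new reviews"
  whyItMatters: string;      // surfaces the value owners said was missing
  triggeredBy: DataSignal[]; // signals that justified pushing this task
  suggestedSteps: string[];  // structured guidance instead of a blank canvas
  outcomeMetric: string;     // e.g. "review_response_rate"
  completedAt?: Date;
}
```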

Iteration

I explored three early concepts:

Internal reviews and critiques made the weekly task model the clear winner.

Enhanced swipe model

Still shallow and content-centric

Smart suggestions feed

Too noisy; didn’t reduce cognitive load

Weekly task system

Clearest structure and sense of control

Converting the Concept Into a System

I collaborated with the CTO and the AI engineer to translate the idea into a buildable framework (a rough sketch of the trigger logic follows this list):

  • AI capabilities and limits

  • Task-trigger logic

  • Required data signals

  • How task success should be measured
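
As a rough sketch of what that trigger logic could look like, a push-based rule stays silent until a signal is worth the owner’s attention. The threshold and names here are hypothetical, not the shipped rules.

```typescript
// Hypothetical push-based trigger: new review signals become a weekly
// task only once they cross a threshold worth interrupting the owner for.

interface ReviewSignal {
  source: "google_reviews";
  rating: number;
  observedAt: Date;
}

interface ReviewTask {
  title: string;
  whyItMatters: string;
  suggestedSteps: string[];
  outcomeMetric: string; // how success is measured, per the list above
}

function maybeCreateReviewTask(signals: ReviewSignal[]): ReviewTask | null {
  if (signals.length < 3) return null; // below threshold: stay quiet

  return {
    title: `Reply to ${signals.length} new reviews`,
    whyItMatters: "Responding quickly protects your rating and cuts admin time.",
    suggestedSteps: ["Open the drafted replies", "Adjust tone if needed", "Send"],
    outcomeMetric: "review_response_rate",
  };
}
```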

Solution

Designing a task-focused experience

Clickable prototype

Old vs New

Old Concept (Swipe)

  • Pull-based (user has to come look at content)

  • Gamified gestures

  • Shallow actions (“approve content”)

  • No clarity on value

  • No guidance

New Concept (Task-focused)

  • Push-based (app highlights what matters)

  • Structured tasks & flows

  • High-value actions (reviews, metrics, leads)

  • Direct business outcomes

  • Proactive intelligence

Unexpected Roadblock

We tested with 10 restaurants in cohorts over two weeks, tracking task completion and feedback.

Findings:

  • The flow felt rewarding for most users.

  • Owners naturally preferred chat, showing a clear path toward guided assistance.

  • Weekly tasks were easy to complete (finished in 1–2 days).

  • Some users found campaign activities overwhelming.

  • AI hallucinations and generic insights reduced trust.

See usability comments here

Quick Fix


  • Removed the chat input and simplified to tap-based actions

  • This helped reduce hallucinations and stabilize the agent ahead of launch.

Before: with chat input

After: tap-based flow replaces chat
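
The rationale behind this fix, sketched loosely below: replacing free-form chat with a closed set of tap actions means only vetted prompt templates ever reach the model, which shrinks the surface for hallucinations. The action names and the `callModel` stub are hypothetical.

```typescript
// Hypothetical sketch: tap actions map to a closed set of vetted prompts,
// so free-form user text never reaches the model.

const TAP_ACTIONS = {
  draftReviewReply: "Draft a short, friendly reply to this review.",
  summarizeWeek: "Summarize this week's sales and review activity.",
  suggestPost: "Suggest one social post based on this week's menu.",
} as const;

type TapAction = keyof typeof TAP_ACTIONS;

// Stand-in for whichever LLM client the team actually uses.
async function callModel(prompt: string): Promise<string> {
  return `model response for: ${prompt.slice(0, 40)}...`;
}

async function runTapAction(action: TapAction, context: string): Promise<string> {
  // Only a tested template plus structured context is sent to the agent.
  const prompt = `${TAP_ACTIONS[action]}\n\nContext:\n${context}`;
  return callModel(prompt);
}
```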

While detailed A/B testing and extended usability studies are still ongoing, early testing and direct user feedback informed rapid post-pivot iterations.

Impact

Outcomes (Weeks 1–4):

Early adoption metrics focus on user behavior and engagement rather than revenue, reflecting the product’s testing phase.

| Metric | Baseline (Old System) | After Redesign (Weeks 1–4) | Change | How we measured |
| --- | --- | --- | --- | --- |
| Task Completion (Adoption) | <20% of recommended actions completed (6 pilot users, usability test) | 85% completed (13 pilot users) | +65 points | Analytics and task logs |
| User Confusion (Clarity) | Avg. 3 navigation errors per session | <2 errors per session | Confusion ↓70% | Usability testing and session recordings |
| AI-Related Complaints (Trust) | Almost all users distrusted the AI insights | 3/10 users complained about hallucinations and generic insights | Complaints ↓50% post-iteration | Direct user feedback and internal testing |
| Continued Engagement (Retention) | N/A (new product pivot) | Drop-off after Week 1; users completed tasks within 1–2 days | – | Activity logs and follow-up interviews |
| Session Duration (Activity) | – | <3–4 min avg. | ↓ (needs engagement loops) | Analytics tracking session duration |
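
For the “How we measured” column, task completion came from analytics and task logs; a minimal sketch of that computation follows (the log shape is hypothetical):

```typescript
// Hypothetical sketch: task completion rate from analytics/task logs.

interface TaskLog {
  taskId: string;
  userId: string;
  status: "recommended" | "started" | "completed";
}

function completionRate(logs: TaskLog[]): number {
  const recommended = new Set(logs.map((l) => l.taskId));
  const completed = new Set(
    logs.filter((l) => l.status === "completed").map((l) => l.taskId)
  );
  return recommended.size === 0 ? 0 : completed.size / recommended.size;
}

// e.g. completionRate(weekFourLogs) === 0.85 → 85% of recommended tasks completed
```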

Execution

On a tight timeline, I focused on delivering high-quality key screens rapidly:


  • Delivered key screens + edge cases while guiding junior designers

  • Led junior designers through structured QA and accessibility tests, scoring 90%+

  • Developers leveraged Claude for vibe coding

  • 85% of final screens met quality standards

  • Detailed handoff in Figma and QA tracked in GitHub

Detailed dev handoff in Figma.

QA documented in Figma, issues tracked in GitHub.

Constraints & Tradeoffs

With a small engineering team and early-stage AI performance to stabilize, we focused on strengthening the core task-driven workflow before layering on complex features. This meant temporarily deprioritizing multi-channel automation, advanced analytics, and deeper engagement loops. These trade-offs allowed the product to mature in the right order, building a reliable, scalable experience while informing future design decisions.

Key Trade-offs:

Speed-to-Market vs. Research Validation

  • Pressure to ship early led to the assumption-driven swipe UI design.

  • Resulted in low adoption and confusion.

  • Insight: Validating assumptions early prevents costly pivots.

Core Features vs. Advanced AI Capabilities

  • Advanced AI features were delayed to stabilize the MVP.

  • Early hallucinations and generic insights had eroded trust.

  • Insight: Reliability and trust matter more than novelty, especially for SMBs.

Shallow Engagement vs. Measurable Business Outcomes

  • Gamification prioritized “fun” over real results.

  • Users engaged briefly but didn’t complete tasks or pay.

  • Insight: Design must solve actual problems; real value > engagement.

Investor “Wow Factor” vs. True User Needs

  • Demo-ready polish impressed investors but didn’t meet user needs.

  • Early adoption was low, highlighting misalignment.

  • Insight: Long-term adoption and trust outweigh initial “wow.”

Cohort Testing vs. Broad Rollout

  • Small test groups allowed rapid iteration but limited exposure.

  • Metrics were early indicators, not full-market validation.

  • Insight: Controlled cohorts enable faster learning with lower risk.

Wrapping it Up

Recap

  • Joined early, no UX research, high pressure to ship

  • Original swipe UI didn’t match real-world behavior

  • Conducted deep research → revealed need for proactive intelligence

  • Pivoted from swipe → task-focused experience (replaced product model, not just UI)

  • Early adoption metrics improved: task completion ↑65%, user confusion ↓70%, AI complaints ↓50%

My Learnings

This project humbled me in the best way possible.

01

Solve real problems first. Design polish cannot replace weak value.

02

Validate early. Observing actual behavior is more reliable than assumptions.

03

Build trust before engagement. Users return for usefulness, not novelty.

04

Prioritize reliability over flashy features. Stable, context-aware AI drives adoption.

05

Failure is a mirror, not a verdict. Every misstep taught more than success, improving the product and my user-centered approach.

06

Learning…

Next steps

Evolve from a task-based workflow to a multi-modal experience combining tasks, insights, content, and guidance

Expand qualitative validation once improvements roll out: new restaurant onboarding is temporarily paused while the team strengthens core features and high-value AI outputs like social content and video generation. The next step is to reopen pilots and gather richer qualitative feedback on how the redesigned workflow supports owners’ day-to-day business outcomes.

Future Metrics: Adoption, retention, revenue, and referrals will be tracked as the product matures.

  • Validate which workflow users will actually pay for.

  • Improve AI accuracy and trust signals.

  • Tighten IA across chat, tasks, insights, and notifications.

  • Add retention loops tied to real restaurant activity.

  • Double down on one hero workflow that drives value.

I'm glad you made it here.

I'm currently open for new and exciting opportunities.

Let's connect and create something nice.

V.2025


+1 (765) 767 0056
