From Concept to Classroom: Building an AI-Powered Instructional Coach to Eliminate Coaching Delays
Logan & Friends is an education company that partners with schools and communities to design equity-focused, real-world learning experiences. They support educators through coaching, co-design, and creative programs that help students build curiosity, agency, and future-ready skills.
MY ROLE & TEAM
Role & Duration
Founding Designer · Led end-to-end UX from research to prototyping and testing · 30 weeks
Team
4 Product Designers · 1 Client Designer · Subject Matter Experts
Focus
Graduate Capstone · Product Design · Strategy
Domain
EdTech · AI
PROJECT OUTCOME
Improved overall user satisfaction: 90% of teachers reported satisfaction with clarity and speed.
Reduced coaching feedback delays by 100%, enabling near real-time guidance.
Cut feedback navigation time by 40%, from 6 minutes to 3.5 minutes.
CLIENT VISION & REQUIREMENTS
The project started with ambiguity. The client envisioned an AI-powered coaching tool: a system that pairs AI with human expertise to provide personalized feedback.
How it works
1. The tool records real teaching sessions.
2. AI analyzes the audio, comparing it to teaching standards.
3. Based on that analysis, it generates feedback and suggestions.
4. A human coach reviews and refines that feedback.
5. Teachers receive clear, actionable steps to improve their practice.
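To make that flow concrete, here is a minimal TypeScript sketch of the pipeline as the client described it. The stage names, types, and function signatures (SessionRecording, DraftFeedback, runCoachingPipeline) are illustrative assumptions, not the client's specification or an existing codebase.

```typescript
// Illustrative sketch of the intended feedback pipeline.
// All names and types below are assumptions made for clarity.

interface SessionRecording {
  sessionId: string;
  audioUrl: string;   // recorded classroom audio
  teacherId: string;
}

interface DraftFeedback {
  sessionId: string;
  standard: string;    // teaching standard the observation maps to
  observation: string; // what the AI noticed in the audio
  suggestion: string;  // proposed next step
}

interface CoachReviewedFeedback extends DraftFeedback {
  coachNotes: string;  // human refinement before it reaches the teacher
  approved: boolean;
}

// 1. Record -> 2. AI analysis -> 3. Draft feedback -> 4. Coach review -> 5. Teacher
async function runCoachingPipeline(
  recording: SessionRecording,
  analyze: (r: SessionRecording) => Promise<DraftFeedback[]>,
  coachReview: (drafts: DraftFeedback[]) => Promise<CoachReviewedFeedback[]>,
  deliver: (feedback: CoachReviewedFeedback[]) => Promise<void>,
): Promise<void> {
  const drafts = await analyze(recording);           // AI compares audio to standards
  const reviewed = await coachReview(drafts);        // human coach refines and approves
  await deliver(reviewed.filter((f) => f.approved)); // teacher sees actionable steps
}
```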
But the scope, technical feasibility, and user needs were unclear.
THE CHALLENGE
The fundamental challenges in instructional coaching today:

Coaching isn’t easily accessible to all teachers.

Teaching environments are often stressful and demanding.

Observation time is limited, leaving little room for consistent feedback.

Progress tracking is inadequate, making it hard to measure growth over time.

Teachers depend on coaching to grow. But coaching today often comes too late, isn’t tailored, and rarely fits the pace of real classrooms. That gap became the starting point for the design journey.
RESPONSIBILITIES
My responsibilities included
User Research
5 interviews, 12 surveys, 20+ research papers, and 7 competitor analyses.
Insight Synthesis
Turned findings into personas, user stories, and key use cases.
Design
Created AI-assisted feedback flows focused on clarity, usability, and scalability.
Prototyping & Testing
Built interactive prototypes and ran multiple rounds of usability testing.
Collaboration
Worked closely with 4 designers, a PhD client, and subject matter experts to ensure the designs were both pedagogically sound and technically feasible.
RESEARCH AND EMPATHY
We began by deeply understanding teachers’ day-to-day realities. Through interviews and surveys, we uncovered both excitement and hesitation around AI.
Figure shows: Interview notes on Figjam.

Figure shows: Data collection and Analysis on Figjam.

Key insights from desk research, surveys, and conversations with educators
of teachers in the US currently receive consistent coaching support.
of teachers are open to AI-assisted feedback.
are comfortable being recorded in class.
believe technology should play a role in classrooms.
Empathy mapping revealed teachers want clear, bite-sized guidance, privacy-respecting observation, and tools that save time.
Figure shows: Empathy map on Figjam.

Figure shows: Affinity mapping on Figjam.

Affinity mapping helped us synthesize qualitative inputs into recurring themes, revealing patterns that guided persona creation and problem definition.
Competitor analysis surfaced best practices and highlighted a gap in AI integration, presenting an opportunity for personalized, standards-aligned feedback.
Figure shows: Competitive Analysis on Figjam.

CORE CHALLENGES
Our research surfaced four core challenges
Feedback is often delayed beyond 72 hours.
Existing guidance feels generic and impersonal.
Teachers struggle with time management while giving or receiving feedback.
There’s little to no access to AI tools that assist without adding cognitive load.
DEFINING THE PROBLEM
Synthesizing research led to clear personas, user journeys and use cases.
Figure shows: Personas, Use cases and user scenarios on Figjam.

We identified key personas representing teachers across grade levels, instructional coaches, and school administrators, each with unique goals and challenges in the feedback process.
OPPORTUNITY
Integrating AI into classrooms revealed both potential and friction. Teachers valued faster feedback but feared losing the human connection that makes coaching meaningful. Many lacked training and confidence with AI tools, while others felt uneasy about being recorded or judged by algorithms. In classrooms that move at the speed of life, feedback delayed by weeks loses impact. What teachers need isn’t more feedback — it’s timely, contextual insights that recognize their challenges and progress. The real design challenge is ensuring AI enhances coaching without replacing the empathy and trust that define great teaching.
That gap — between intention and impact — revealed a profound opportunity.
What if coaching could happen closer to the moment of teaching?
What if technology could listen, understand, and surface insights — not to replace the human coach, but to empower them?
What if AI could make feedback more immediate, precise, and scalable, while keeping the heart of coaching intact — empathy, nuance, and growth?
DESIGN & PROTOTYPING
I led an iterative design process for the feedback page
Figure shows: Feature list curation on Figjam.

We started by curating a list of features that addressed these challenges.
From there, we built out the application's information architecture.
Figure shows: Application Information architecture on Figjam.

Design Iteration 1
I designed the feedback screen to give teachers a clear snapshot of session insights. Each card highlights a teaching framework area, showing strengths and areas to improve.
Feedback is grouped into clear categories with simple, action-focused notes, making it feel like guidance rather than evaluation. Color-coded tags highlight patterns, and a “See more” link provides deeper context without cluttering the view, keeping feedback approachable and scannable.
Figure shows: Low-fidelity wireframe of feedback page (iteration 1)

Figure shows: Mid-fidelity wireframe of feedback page (iteration 1)

The design surfaced session insights right away, but it didn’t explain why those insights appeared or which parts of the lesson triggered them. Because of that, the feedback felt generic, something teachers had already told us was a major frustration.
Design Iteration 2
This time, I switched to a vertical layout and added timestamps so suggestions appear in the order they happened — from the earliest to the latest in the session. That way, teachers can instantly see what each note refers to and why it was flagged.
Figure shows: Low-fidelity wireframe of feedback page (iteration 2)

Figure shows: Mid-fidelity wireframe of feedback page (iteration 2)

After a few rounds of testing, it became clear that finding feedback was taking too long. The process felt like endless scrolling with no clear structure — users spent an average of 6 minutes just navigating and often struggled to find specific feedback again.
Design Iteration 3
My goal was to make the feedback experience feel more natural and intuitive for educators, something that fits seamlessly into their workflow rather than adding to it. Teachers already juggle a lot, so the design needed to feel effortless — quick to scan, easy to navigate, and meaningful at a glance. I wanted every interaction, from reviewing an audio snippet to exploring a suggestion, to feel purposeful and supportive, helping educators focus on growth instead of getting lost in the interface.
Figure shows: Low-fidelity wireframe of feedback page (iteration 3)

01
I added an audio player with segment markers on the timeline, so educators can easily jump between different pieces of feedback. The markers make it simple to find and listen to the most relevant sections.
02
I kept the feedback cards complete with timestamps, tags for the relevant framework, the feedback type, detailed notes, AI suggestions, and resource recommendations to help educators grow and improve (a possible data model is sketched after this list).
03
I also added a dynamic AI chat box that educators can use anytime they have questions about a specific piece of feedback. It’s like having a coach right beside you while you review.
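To illustrate how these pieces could hang together, here is a minimal TypeScript sketch of the data behind a single feedback card in this iteration. The field names, types, and the sortBySessionOrder helper are hypothetical, introduced only for illustration, not the product's actual schema.

```typescript
// Hypothetical data model for one feedback card in iteration 3.
// Field names and types are illustrative, not a shipped schema.

type FeedbackType = 'strength' | 'growth-area' | 'suggestion';

interface AudioSegment {
  startSec: number; // marker position on the session timeline
  endSec: number;
}

interface FeedbackCard {
  id: string;
  segment: AudioSegment;  // which part of the lesson the note refers to
  frameworkTag: string;   // teaching-framework area the note maps to
  type: FeedbackType;
  notes: string;          // detailed, action-focused feedback
  aiSuggestion: string;   // AI-generated next step, reviewed by a coach
  resources: string[];    // recommended readings or exercises
}

// Cards are sorted by timeline position so feedback reads in session order,
// and each card's segment doubles as a jump target in the audio player.
function sortBySessionOrder(cards: FeedbackCard[]): FeedbackCard[] {
  return [...cards].sort((a, b) => a.segment.startSec - b.segment.startSec);
}
```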
THE OUTCOME
The final prototype and usability tests delivered measurable impact
01
Navigation time reduced by 40% (from 6 min → 3.5 min).
Faster, actionable feedback for educators
02
90% of teachers reported satisfaction with clarity and speed.
Increased teacher efficiency, saving hours per week.
03
Feedback delays reduced by 100%, enabling near real-time guidance.
Validated AI-human collaboration, ensuring trust and adoption.
EXECUTION
This project was a deep dive into designing for complexity, blending human insight with AI to create something truly useful for educators. I worked on the end-to-end design, from shaping the product vision amid ambiguity to turning intricate workflows into clear, intuitive experiences. Collaborating closely with the founder, fellow designers, and subject matter experts, I relied on both data and empathy to guide every decision.
REFLECTION
This experience reaffirmed the value of empathy-driven, AI-assisted design. I learned how rapid iteration fuels clarity in emerging tech, and how strong communication keeps cross-functional teams aligned. Most importantly, I saw that balancing automation with human judgment isn’t just good design — it’s what builds trust. If scaled, this approach could meaningfully reshape how teachers and students connect, making feedback faster, more personal, and deeply human.