We set out to transform how educators receive coaching by designing an MVP for an AI-powered coach that addresses their everyday struggles with impersonal and delayed feedback. Our goal was to make coaching more accessible, timely, and tailored to individual needs, ultimately making it easier for educators to grow and succeed in their roles.
Through a collaborative and iterative design process spanning discovery to design handoff, we developed a platform that combines the strengths of AI with human expertise.
My Role: UX Designer, Researcher
As one of the designers on this project, I dove deep into understanding educators' needs. I reviewed over 20 research papers and teaching frameworks and spoke with 5 educators to uncover their challenges. Synthesizing over 50 insights into actionable UX artifacts, I led the development of the feedback page and conducted 5 user tests.
Project Type: Graduate Capstone
Duration: 30 Weeks
Team Size: 4 UX/UI Designers (including me)
Client: Logan & Friends (Led by Dr. Jocelyn Logan Friend)
Domain: Edutech
Tools: Figma, Miro, FigJam, Google Suite
PROJECT OUTCOMES

Overall user satisfaction rate of 90%

Eliminated coaching feedback delays

Cut down feedback navigation time by 40%
PROCESS
01 Understanding the project scope & discovery (Weeks 1-3)
02 Empathize with educators (Weeks 3-5)
03 Define and plan product requirements (Weeks 6-8)
04 Brainstorm and ideate (Weeks 9-14)
05 Design and prototype (Weeks 15-27)
06 Test prototype (Weeks 28-30)
We began with several initial meetings with the client and desk research to clarify the client's vision, define the project objectives, and understand the challenges of traditional instructional coaching.

The aim was to gain a foundational understanding, ensure the design process aligned with the client's expectations, and address key issues.
The client, Dr. Logan, had a vision to overcome these challenges:
Leverage AI with human expertise to analyze audio recordings of teaching sessions, provide personalized feedback, and align insights with existing teaching frameworks.

01
Build the experience around the client's vision for MVP1.
02
Enhance the instructional coaching experience.
03
Make instructional coaching accessible.

"How might we redefine the coaching experience for educators?"
Despite the clear vision from our client, the project posed significant challenges, especially since the team and I had no prior experience working with AI or machine learning models.
The primary hurdles were:
01 Translating the vision into a practical, user-centered tool. We had to address complex issues, such as how the AI would analyze classroom dynamics using only audio inputs and provide accurate, actionable feedback.
02 Technical uncertainties regarding the feasibility and scope of integrating AI into the feedback process.

To overcome these challenges, I delved into online resources and consulted subject matter experts, building a working understanding of acoustic feature extraction, voice/sentiment analysis, and natural language processing models.
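To ground those concepts, here is a minimal Python sketch of the kind of processing I was studying: extracting acoustic features from a recording and running sentiment analysis on a transcript snippet. The library choices, file name, and transcript line are illustrative assumptions, not the client's actual pipeline.

```python
# Illustrative only: librosa, transformers, and the inputs below are
# assumptions made for learning purposes, not the client's actual pipeline.
import librosa
from transformers import pipeline

# Load a hypothetical class-session recording and pull basic acoustic features.
audio, sr = librosa.load("class_session.wav", sr=16000)
mfccs = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)  # voice/timbre features
energy = librosa.feature.rms(y=audio)                    # loudness over time

# Run sentiment analysis on a transcript snippet (transcription step omitted).
sentiment = pipeline("sentiment-analysis")
print(sentiment("Great question! Can anyone build on what was just said?"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```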
Another profound challenge was:
01 Bridging the experience between human and AI interactions.

I specifically focused on designing the feedback section of the product. My goal was to make receiving feedback and putting it into action seamless, along with unhindered navigation between multiple feedback and chat channels with AI and human coaches.
To understand educators' challenges and their openness to AI, and to ensure our design was grounded in real needs, we conducted comprehensive research: desk research, 5 interviews with experienced teachers, novice teachers, and teacher coaches, surveys completed by 12 participants, a review of 20+ research papers, and an analysis of 7 competitors.


The key themes identified revealed that educators are excited about AI's potential to enhance teaching.
Concerns —
Ability of AI to understand human nuances
Availability of personalized support
AI replacing human coaches
To engage teachers, we must —
Address their motivation
Offer quality resources
Resolve issues with privacy, tech adoption, and feedback timing
The absence of comparable AI-powered products in the market gives us a competitive edge, and many educators are open to AI if privacy and effectiveness concerns are addressed.
Figure: Data analysis to find key themes using affinity mapping in FigJam.
We organized the research data into affinity maps and created 4 personas, 10 user stories, 24 use cases, and journey maps to represent different types of educators and provide a clear picture of users' needs, behaviors, and expectations.
Identifying the users' pressing challenges and needs.
Balancing technical feasibility with those needs.
Bridging human-AI interactions smoothly.

We focused on the personas, user stories, and use cases to capture maximum value without overwhelming or overcomplicating the design, aligning with the MVP.

This led to the development of wireframes and user flows.
WIREFRAMES
We then established the basic layout and structure of each screen, ensuring that all necessary elements were included and logically arranged.
Iteration 1
The initial feedback page layout. Using empathy as a guide, I designed the feedback page to focus on what teachers needed most: easy access to feedback and AI interaction. I organized it into three sections: class session audio recordings, AI-generated feedback, and a chat feature.
Test observations
During testing, we asked educators to locate specific feedback tied to a teaching framework. They struggled to quickly find and act on the feedback, spending nearly 6 minutes navigating through the content. The long search time caused frustration and self-doubt as they had to scroll through each piece of feedback to find what was relevant.
Long search and scroll times led to self-doubt.
Participants skimmed through every feedback item to identify the specific one they needed.
Challenges Identified
Considering the volume of feedback the AI coach might provide and the conversations educators would have with it:
Excessive scrolling overwhelmed educators, reducing their confidence.
Difficulty in identifying the most critical feedback.
Limited time for meaningful engagement with the AI coach due to navigation issues.
Iteration 2
Iteration 3
Time-Stamped Audio for Seamless Navigation: Focusing on usability and minimizing cognitive load, I added timestamps to the audio recordings. This allowed teachers to jump directly to the part of the session where specific feedback was given, cutting down the time and effort needed to find the most relevant information.
Test observations
We saw a significant improvement: educators were now able to navigate through feedback in under 3.5 minutes, 40% faster than before. The combination of time-stamped feedback and reduced scrolling made the experience smoother and more intuitive.

Dynamic chat feature: Reduced scrolling and delivered relevant, timely feedback based on user selection.
Time-stamped audio: Enabled quick, precise navigation through feedback.
Simplified and color-coded feedback: Helped educators interpret feedback more easily and confidently.
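To make the time-stamped feedback concrete, here is a minimal sketch of an assumed data model (not the production implementation) linking each AI feedback item to a position in the session recording, a framework domain for color-coding, and a flag for critical items.

```python
# Assumed data model for illustration; field names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class FeedbackItem:
    timestamp_sec: float   # position in the session recording
    domain: str            # teaching-framework domain, used for color-coding
    summary: str           # short, actionable feedback text
    flagged: bool = False  # marks critical items that need attention first

feedback = [
    FeedbackItem(312.0, "Questioning", "Try a longer wait time after open questions."),
    FeedbackItem(845.0, "Engagement", "Student talk dropped sharply here.", flagged=True),
]

def seek_position(item: FeedbackItem) -> str:
    """Return the mm:ss position the audio player should jump to."""
    minutes, seconds = divmod(int(item.timestamp_sec), 60)
    return f"{minutes:02d}:{seconds:02d}"

print(seek_position(feedback[1]))  # 14:05
```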
Other considerations to make the experience even better—
We focused on creating a user-friendly, intuitive, and supportive platform for teachers and coaches, addressing their specific pain points through design and functional elements that empower them to create engaging and effective learning experiences for their students.
Pain Point 1
Time-Consuming Process:
Traditional coaching requires scheduling meetings, which often conflicts with teachers' busy schedules.

Pain Point 2
Limited or Delayed Feedback:
Slow feedback hinders teachers' ability to make timely improvements in their teaching.

Pain Point 3
Lack of Personalization:
Generalized coaching feedback often fails to address specific teaching challenges.

Pain Point 4
Subjectivity in Feedback:
Feedback from human coaches can vary in objectivity based on their perspective and experience.

Pain Point 5
Limited Accessibility:
Geographic limitations and the availability of qualified coaches can restrict professional development opportunities for teachers.

We began by conducting usability tests with the same participants from earlier interviews, including both experienced and novice teachers, streamlining recruitment and ensuring continuity. This allowed us to observe how the design met the diverse needs of different users. Cognitive walkthroughs and internal design critiques helped us catch early usability issues while focusing on improving intuitive interactions.
We provided participants with both low- and high-fidelity prototypes, assigning tasks to observe their natural behavior. Think-aloud sessions offered real-time feedback on how teachers interacted with the platform, helping us understand their pain points in the moment. Expert reviews and heuristic evaluations, grounded in design principles like consistency and error prevention, provided valuable insights into improving the user flow.
To gain further clarity, we conducted user satisfaction questionnaires and System Usability Scale assessments, which resulted in a 90% satisfaction score.
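For reference, the System Usability Scale produces a 0-100 score from ten Likert-scale items. The sketch below shows the standard SUS scoring formula with made-up example responses, not our participants' actual data.

```python
# Standard SUS scoring; the example responses are invented for illustration.
def sus_score(responses):
    """responses: ten answers (1-5) in SUS question order."""
    odd = sum(r - 1 for r in responses[0::2])   # odd-numbered items: score - 1
    even = sum(5 - r for r in responses[1::2])  # even-numbered items: 5 - score
    return (odd + even) * 2.5                   # scale the 0-40 total to 0-100

print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))  # 90.0
```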
a. Scroll
Issue: The current scrolling behavior combined with the fixed layout on the feedback page reduces the scrollable area, hindering the user experience.

b. Flagged feedback
Issue: Users weren't able to identify flagged feedback items easily due to a lack of visual cues.

c. 'Learn more' option
Issue: The "Learn More" option followed by hover functionality for accessing feedback details seems redundant.

d. AI assistant icon
Issue: The AI Assistant icon looks like a comment bubble, potentially leading to confusion with the feedback section.

e. Performance page
Issue: The performance page lacks clarity regarding user scores.
Percentage indications are missing for each card.
Color-coding to represent different domains is absent.
Understanding subdomain scores is difficult.

FINAL DESIGNS
The project evolved non-linearly due to our early challenges with AI/ML, especially in audio analysis and feedback. As we gained insights from research, we refined the scope, separating the AI assistant from chat functions and improving the information architecture. Our iterative design process, fueled by user feedback, embraced flexibility and constant improvement. Using an agile approach, we focused on the MVP and adapted to changing needs, continuously refining our ideas based on new discoveries.
MY LEARNINGS
Navigating AI: I gained hands-on experience with AI, turning complex tech into user-friendly solutions.
User Empathy: Deeply understanding educators’ needs taught me the importance of designing with empathy.
Iterative Design: Iterating and refining designs based on user feedback proved essential for creating effective solutions.
Balancing Ambition and Practicality: I learned to balance big ideas with practical constraints, focusing on what was essential for the MVP.
Human-AI Interaction: Designing how humans interact with AI taught me to ensure it complements rather than replaces human input.
Prototyping Skills: Creating and refining prototypes improved my ability to design intuitive and user-friendly interfaces.
Problem-Solving: Tackling challenges with AI feedback helped me develop creative problem-solving skills.
Teamwork and Communication: Effective collaboration and clear communication with my team and client were crucial for success.