I was the sole designer on a ten-person mock startup team. We created Pitch.ai, a machine learning-fueled web app that helps people improve their public speaking skills.
This app was made for talkin’
Over eight weeks we crafted a mock startup business, from concept to functioning product demo.
I learned an immense amount about what it means to engage in design within a team, and how to communicate the values and processes that I bring as a designer. With no other designers to fall back on, I was pushed to present my work with
clarity and find ways to translate the jargon I’ve been accumulating in school.
Public speaking is hard. It’s not easy to stand subject to a collective gaze, and it’s even harder when we hear feedback or watch a video of our performance. Acting on that feedback is just as difficult: it takes determination, a ton of
reps, and sometimes an audience of patient friends.
Pitch.ai is a public speaking trainer that lets a user record themselves practicing, and in turn receive a set of actionable tips, data visualizations, and training course recommendations. It uses machine learning algorithms to compare
recorded videos to a database of speeches, providing users with objective insights into their performance.
It’s designed with working professionals in mind, with the goal of being offered as an employee benefit. Think of it like a Grammarly for public speaking.
🎉 Key Features
Get feedback and data viz on filler words, pacing, and pitch.
Pointed tips on becoming a more expressive speaker.
Headspace-esque exercise packs for targeted practice.
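The filler-word feature above can be illustrated with a simple transcript scan. This is a minimal sketch, not the product's actual model; the filler vocabulary here is purely hypothetical:

```python
import re
from collections import Counter

# Hypothetical filler vocabulary; the real product's word list
# isn't documented here, so this is purely illustrative.
FILLER_WORDS = {"um", "uh", "like", "basically", "actually"}

def count_fillers(transcript: str) -> Counter:
    """Count filler-word occurrences in a speech transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w in FILLER_WORDS)
    # Multi-word fillers need a phrase-level pass.
    counts["you know"] = len(re.findall(r"\byou know\b", transcript.lower()))
    return counts

def filler_rate(transcript: str) -> float:
    """Fillers per 100 words, a length-normalized metric for the data viz."""
    words = re.findall(r"[a-z']+", transcript.lower())
    return 100.0 * sum(count_fillers(transcript).values()) / max(len(words), 1)
```

In practice the counts would feed the data visualizations, and the rate could trigger a tip or a recommended course pack.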
Here’s how it works:
Set it up
Prepare a speech or presentation, and open up the Pitch.ai web app. Start a new presentation and link your slide deck if you have one.
You can start recording video + audio, or just audio. If you have a slide deck linked, it launches after a countdown.
Let it process
It takes a bit of time for the algorithms to work their magic after you finish. Enjoy some quotes from public speaking powerhouses in the meantime.
Ta-da! Pitch guides you through feedback based on the presentation you just gave. Wade through tips, data visualizations, and recommended courses to level up your speaking skills. Rinse & repeat.
This was a bit different from a typical design school project; I joined a team with an already-decided product concept for an AI speaking trainer. I defined my job as formulating the specifics of the product–what features would we include?
What user research methods should we use to learn about behaviors around public speaking practice?
As the sole designer, I proposed and carried out data collection methods, communicated with engineers about feasibility, and worked in tandem with MBAs who were developing a business plan.
Gauging Public Interest
Facebook ad campaign
Through market research, my MBA teammates identified 20- to 35-year-old tech workers as an optimal target audience because of their greater likelihood of having access to digital employee benefits and their propensity to take up new tech.
With an audience in mind, our first goal as a team was to gauge the desirability of our product on the market. We set up a series of Facebook ads targeting 20- to 35-year-old tech employees in large cities, and I helped design
and code a splash page to collect email addresses from interested customers.
Representing the product
I took a quick stab at what the product might look like for the sake of giving it a visual representation–though in hindsight I think the product concept could have been better represented through an abstraction.
16% of visitors signed up
A couple days of running ads garnered a 16% sign-up rate from 87 site visitors–we viewed this as a signal that there’s demand for our type of product. We were limited on ad placement by our shoestring student budgets, but I
also think we could have been more thoughtful with our ad copy and imagery by tailoring them to each particular market segment.
User and Expert Interviews
Reaching target users and industry professionals
I created a set of interview guides so that our team could collectively conduct interviews and contribute to data-gathering. Other teammates conducted expert interviews with HR professionals, a student resource program manager, and a
business leader currently at the University of Washington, while I conducted interviews with target users.
The interview guide was designed to learn about behaviors around public speaking: how do people practice for presentations? What tools do they use while practicing? What is their experience with receiving feedback on their performance?
I synthesized some actionable design insights from user interviews:
The methods and timing of presentation practice vary wildly, so our product should be supportive of diverse habits rather than prescriptive. This steered me away from integrating any sort of fixed practice schedule.
A lot of practice for presentations is done with laptops and slide-decks; therefore our MVP should be a desktop web app, with a goal of later expanding to mobile to accommodate more users.
People do not actively seek feedback on their public speaking skills because they would rather not hear what they did wrong–therefore, considering the delivery of feedback is integral to the design.
Carving Out an MVP
Triangulating business & research & tech
Determining MVP features was a function of a Kano survey, the capabilities of our developers, and user research. Kano results indicated high desirability for voice analysis features like filler word detection (“um”, “like”…) and vocal pitch,
as well as video analysis features like body language and eye contact.
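For context, a Kano survey pairs a “functional” question (how would you feel if the product had this feature?) with a “dysfunctional” one (how would you feel if it didn’t?), and each answer pair maps to a category via the standard Kano evaluation table. A minimal sketch of that classification, with majority voting across respondents (the scale is the standard one; the data shapes are my assumption):

```python
from collections import Counter

# Answers on the standard 5-point Kano scale:
# "like", "expect", "neutral", "tolerate", "dislike".
def kano_category(functional: str, dysfunctional: str) -> str:
    """Map one respondent's (functional, dysfunctional) answer pair
    to a category via the standard Kano evaluation table."""
    if functional == dysfunctional and functional in ("like", "dislike"):
        return "Questionable"
    if functional == "like":
        return "One-dimensional" if dysfunctional == "dislike" else "Attractive"
    if dysfunctional == "dislike":
        return "Must-be"
    if dysfunctional == "like" or functional == "dislike":
        return "Reverse"
    return "Indifferent"

def classify_feature(responses) -> str:
    """Majority-vote category for a feature across all respondents."""
    votes = Counter(kano_category(f, d) for f, d in responses)
    return votes.most_common(1)[0][0]
```

A feature landing in “One-dimensional” (more is better) or “Attractive” (delighter) is the kind of signal that put filler word detection high on our MVP list.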
From pricing model to feature
A business plan also started to crystallize. From competitive analysis of similar products like Orai and Headspace, we decided to include a set of course offerings to bring users back to the product. These could be targeted exercises (on
topics like reducing filler words, being more expressive, etc.) bundled into course packs. Due to time constraints we were unable to explore how the courses would be experienced.
Pricing models developed by the business team.
Narrowing from technical constraints
I thought that including live feedback during a presentation performance would be an interesting feature–multiple research participants said, unprompted, that it would have been helpful. After discussions with the developers, we concluded that
it would be too technically difficult, so we ended up tabling it.
A diagram of MVP functionality.
I wanted to create a Wizard of Oz (WOZ) prototype to address some of my uncertainties: how would people react to the AI’s feedback? What motivates or demotivates the use of certain features? Does the MVP demonstrate a complete and enjoyable experience?
Communicating the method
I had to make my case to set aside funds to pay participants. I used sketches to help explain the concept of a WOZ prototype and what I hoped to learn from it, shown here. My teammates bought into the idea, and a few of them helped me run the sessions.
Screenshot from a WOZ session.
Crafting the method
I wrote an in-depth article on how I crafted the WOZ prototype and some of my takeaways from the process.
TL;DR: I hooked up an external monitor to my laptop and used a wireless keyboard and mouse to manipulate an HTML doc running on a local server on the laptop. Participants interacted with the
HTML doc, believing it to be a real, functioning product–in reality, I was simply updating code on my side and saving the document to send content over to them.
WOZ: Interpreting & Responding
From the WOZ sessions, I learned:
A content feedback section–where Pitch could analyze the contents of the speech and suggest ways to make it more focused–was largely unwelcome to participants. I used example outputs of an API for reference.
The phrasing of feedback was more successful when it felt supportive and non-judgmental.
Feedback felt actionable (though it is difficult to say if it truly is actionable without a longer study). Participants generally expressed that feedback could have been made more useful if it contained more detail.
So I decided to:
Remove the content feedback section. The engineers had wanted to spend time developing the feature, but I was able to come back with evidence that it might not be useful in the experience.
Refine the phrasing of feedback to be consistently constructive and supportive.
Add additional detail to the feedback content where technical limits allowed. This included enumerating the actual filler words used in the presentation and adding visualizations for pacing and pitch.
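The pacing visualization, for instance, boils down to words-per-minute over a sliding window of speech-to-text timestamps. A rough sketch of that computation (the 10-second window is my choice, not necessarily the product’s):

```python
def pacing_series(word_times, window_s: float = 10.0):
    """Given per-word timestamps (seconds) from a transcript, return
    (time, words-per-minute) points suitable for a pacing chart."""
    series = []
    for t in word_times:
        # Count words spoken in the window (t - window_s, t].
        in_window = sum(1 for w in word_times if t - window_s < w <= t)
        series.append((t, in_window * 60.0 / window_s))
    return series
```

The resulting points chart directly; dips and spikes make rushed or dragging passages easy to spot.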
Moving Towards Production
This process ended up being messier than the ideal flow from napkin sketch to polished pixels. Here are some of the ways I tried to communicate my vision for the interaction design to the team.
I used annotated wireframes to communicate UI concepts and interactions to the team. This wireframe was for a pop-out control panel that could give live feedback and sit on top of a slide deck. It was nixed after we decided against live feedback.
Whiteboarding with devs
I also did my best to externalize early ideas with whiteboard sketches. I held a couple meetings where I put my ideas on a board and had developers go through and annotate, though it mostly resulted in "we don’t have the time/resources
to do x". For actual design crit I found it easier to approach friends outside of the project, but this meant I also had to justify new decisions back to my team.
The Struggle for Realness
Between the initial sketches and the polished direction, I made significant changes. I struggled constantly with translating the experience I had in my mind into something that resembled a real web app. At one point I learned from a
developer that I had been designing in the entirely wrong dimensions, which meant that the proportions of UI elements just felt weird. I got over this hurdle by digging into the visual rhythms and interactions of existing web apps,
namely Todoist, Otter, and GitHub, treating the exercise as a sort of competitive UI analysis.
Wireframe evolutions of the same element.
The homepage evolved alongside an evolving understanding of how people would actually use our product. Initial wireframes (left) placed too much emphasis on past presentations, while the next evolution (middle) lacked a CTA. The third
iteration (right) responds with a prominent card carousel as CTA to resume practice or explore course packs.
Incorporating feedback from the team
One element that changed in response to team feedback was the feedback section itself. On the second pass, the wireframe looked like this:
The general sentiment was that there was too much going on–and more content would likely be added with successive API integrations. I needed to make the content easier to read and able to accommodate more types of content.
Making feedback more interactive
I responded by shifting to a card carousel, navigable by button or arrow key, to make the feedback content more digestible and to allow content types to expand over time. With more time I would have loved to explore other ways of presenting feedback.
Detailed Design Spec
Section in progress.
Presenting the Work
Every two weeks, our group gave a short pitch to a rotating cast of CEOs, VCs, marketers, and professors. I gave a product demo of a mid-fidelity version of Pitch.ai, and a few weeks later our team presented at Pioneer Square Labs (PSL) in downtown Seattle.
Left: me giving a product demo. Right: The final presentation to a group of VCs at PSL.
Reflecting on the Experience
A rewarding challenge
This was undoubtedly the most challenging project I have worked on to date. At first I struggled to communicate both the value of design and the outcomes of my work on the project. Many of my teammates had not worked with a
designer before, so I had to find ways to explain myself; having some sort of visual material was usually the most successful way to do so.
Broadening my practice
Though challenging, it was a breath of fresh air after working with only designers on coursework. It reinforced my agreement with Donald Schön that design is a
reflective conversation with the materials of a situation (or more simply, design is a conversation). For most of my coursework, I’d been given a series of assignments meant to drive me through the design process–but design in the
real world is clearly a more complex activity.
Working with only my knowledge of the existing situation, I found clarity in first enumerating my unknowns and then setting a plan to uncover them. That said, I wish I had more fully
included my team in synthesizing and acting on insights–discussing the work with colleagues outside the project was not always the most productive way to move forward.
The facets of designing
Having to concoct my own frameworks for research and design gave me a deeper appreciation for the myriad complexities (in communication, in technical feasibility, in business...) that can surface while designing, and the recognition
that being a successful designer is premised on an ability to navigate these as facets of the design challenge itself.