Debbie Levitt, MBA, is the CXO of Delta CX, and since the mid-1990s she has been a CX and UX consultant focused on strategy, research, training, and Human-Centered Design/User-Centered Design. She’s a ‘change agent’ and business design consultant focused on helping companies transform towards customer-centricity while using principles of Agile and Lean.
She has worked in various CX and UX leadership and individual contributor roles at companies such as Wells Fargo, Macy’s, Sony, and Constant Contact. Clients have given her the nickname “Mary Poppins” because she flies in, improves everything she can, sings a few songs, and flies away to her next adventure. Levitt’s new book, “Customers Know You Suck,” is the customer-centricity how-to manual.
As Levitt says, “Start investigating what’s holding you back from improving customer-centricity. Learn how to be value-led: how much value we can frequently create for potential and current customers.”
I am often asked two questions:
1. How can we do user or market research without a hypothesis?
2. How can we do research when we don’t have an idea, MVP, or product to show someone? What evidence would this early research provide, and how is this better than just releasing that idea and learning from testing?
Research Without a Hypothesis
Let’s revisit the five-step Scientific Method.
● Step 1 is to research and observe; qualified Researchers learn about people, contexts, and systems by watching them perform tasks and asking well-crafted interview questions.
● Step 2 is to “ask a question” or write a problem statement based on research insights.
● Step 3 is to form a hypothesis that could potentially answer the question or solve the problem.
● Steps 4 and 5 are to test and experiment, then analyze the results and cycle around again if our hypothetical solution fails in small or large ways.
The hypothesis is not our starting point. If we create a hypothesis before our research, the Scientific Method is out of order. Our pre-selected point of view, solution, or possible answer is presented before we observe and research. It’s a solution cart before a research horse.
Have you ever used a product or service and frustratedly thought that the company didn’t understand you? “How could they think I needed that?” is a common customer sentiment when companies start with ideas, features, or hypotheses without doing deeper, qualitative research first.
Stay out of “the box”
Hypotheses can put us in a box and set us up for various mistakes in decision-making, prioritization, and solutions. We might be so confident that we skip research, or we design research aimed at finding the audience that likes our idea. It’s human nature; we would rather prove ourselves right than unearth the evidence proving us wrong.
Here are some examples:
● Hypothesis: Our users get confused and could benefit from more how-tos around our site. The hypothesis assumes the solution, which then influences our research and designs. We focus on learning where to put more how-tos, missing the opportunity to learn why users struggle and how our system could be better designed. Users rarely want more how-tos and are unlikely to read them.
● Hypothesis: We will have more active users if we send email newsletters more frequently and build a chatbot. How do we know this is what will inspire users to be more active? This hypothesis is an assumption that is likely to be invalidated when we learn more about target audiences’ tasks, decisions, and perspectives.
● Hypothesis: Candidates looking at our open jobs leave our site more often than they apply. We believe they need words of encouragement, so we’ll A/B test writing “Good luck” next to the apply button. How much do we really know about people’s tasks and why they don’t apply? How do we know they need words of encouragement? The hypothesis skews research and design towards “encouraging words” when, even if that shows a little lift, it might not be the right solution for a problem we haven’t correctly defined.
Hypothesis-first methods are trendy right now, but our high failure rates, non-utilized features, product-market fit struggles, and trouble finding customers who will stay tell us that our methods aren’t working well.
Research Without an Existing Product or Concept
Hypothesis testing, MVPs, and “just ship it and get feedback later” are popular approaches, often part of a “build, test, learn” cycle. We know that learning and feedback are important, but we reject thorough research that would set us up for better strategies, decisions, prioritization, products, and services. We worry that early research takes too much time and money, without considering the costs of poor quality and failures: time, money, Customer Support utilization, public embarrassment, customer attrition, and fixing what we released to the public.
Let’s start by understanding the two main types of CX/UX research:
● Evaluative research evaluates an existing solution, idea, concept, prototype, live site, etc. We observe and assess how people use it and where there might be room for improvement. We’re testing a hypothesis or solution to see if it solves our original problem statement.
● Generative research generates evidence, knowledge, and data about people, systems, and contexts. Qualified Researchers often use observational research methods to get the full picture of who, what, where, when, how, and especially why. Research documentation includes actionable diagrams and documents such as task analyses, journey maps, service blueprints, pain points, problem statements, and opportunities.
Generative research can help us learn and understand:
○ Habits
○ Motivations
○ Decision-making
○ Collaborators
○ Environment
○ Knowledge, understanding, and savvy, or the lack thereof
○ Tools and workarounds people add to their task
○ Perspectives and mental models
○ Priorities
○ Lots of why and how for all of the above.
Can I do this research myself?
Startup founders, entrepreneurs, and business leaders often believe that research is easy: find a few people, ask them what they want or need, and drive your products and services in that direction. Research seems like the type of thing anybody can do, but not everybody will do it well. Great research includes solid planning, recruiting the right number of the right people, an expert moderation style during sessions, excellent observational skills, analyzing the sessions, synthesizing the data into patterns, and landing on actionable, strategic insights and suggestions.
Ask the wrong questions, ask the wrong people or too few people, or make accidental or purposeful errors when drawing conclusions, and we end up with false “information” guiding strategies and decisions. The risks created by bad research are worse than the risks created by no research, since bad research leaves us falsely confident that our data is good and helpful.
Learning about problems before inventing solutions
We have a target audience and a topic area, but we recognize that we don’t fully understand problems and opportunities, so it’s too early to propose a solution. This is where generative research done by qualified Researchers shines. It’s designed to be solution-agnostic.
This is sometimes called “discovery research.” Before we plan a project, decide on a solution, design products or services, or code, we want to gather evidence and information. When done correctly, this generative research answers our team’s questions about people, contexts, and systems. We can move forward with facts versus working from opinions, guesses, and assumptions. When done incorrectly, our “discovery research” is actually evaluative testing of an idea, or trying to “discover” who might like the concept we’re considering building. Imagine we want to innovate or improve something for a target population. Let’s see some examples of how generative research would set our project up for success:
● Our company thinks we want to build an app for surfers to coordinate going surfing. Run generative research with 15–20 surfers: perhaps some frequent surfers and some who surf less often, or some with smaller surfing circles and some with larger ones. Watch how they plan surfing now. Look for any interaction or moment that’s frustrating, confusing, disappointing, or distracting (my “Four Horsemen of Bad UX®”). Look for anything that is manually intensive, has a high cognitive load, is error-prone, or is knowledge-dependent (Larry Marine’s “Task Dimensions”).
○ Research might show that surfers are just fine and don’t have a problem for us to solve. This important data can help us pivot early, and avoid the risks of common failures stemming from our guesses and assumptions.
● We wonder how Parkinson’s patients use fitness wearables, and if they have the data they need to manage their disease. We’re not testing a solution or the fitness wearables themselves. We research a population and their behaviors to find insights and possible opportunities.
● Our target audience has no bank account and can’t afford much mobile data. How do we help them learn more about saving money? Research people, behaviors, habits, beliefs, and the information and technology this audience has available. Don’t make assumptions or use stereotypes about people of a certain culture, location, or socioeconomic status. Don’t brainstorm what the target audience should do until you understand what they do now and why.
Hold Off on That Hypothesis
We will reduce our risks and costs if we start with generative research. Instead of “build, test, learn,” consider models and methods where we learn first. Remember your promises to target customers, who expect high quality and value in everything you offer. They don’t love your guesses, experiments, and endless tweaking. They want to be understood, and they want something that just works.
Let’s get out of the box and shake off any limits we put on ourselves because we thought we had to start with a hypothesis, a solution, or an idea.
Copyright © 2023 California Business Journal. All Rights Reserved.