QUALITATIVE RESEARCH 101, Part I: Introduction

My colleagues and friends typically think of me as a “quant” guy who keeps his head buried in numbers.  It is true: I love surveys and data.  However, I have a big heart and a lot of love for qualitative research, too!

I define qualitative research as direct interactions between a researcher and the audience.  These are typically focus groups and in-depth one-on-one interviews.

In a world moving increasingly to data science, it feels as though qualitative research has been relegated to the bottom of our professional toolbox.  “Where did I put that focus group guide? It is in there…  somewhere.”

So, I am going to reach into my own toolbox and share a series of thoughts and professional insights into qualitative research.

Great qualitative research requires developing and using a set of skills that many data scientists are hesitant to pursue: managing conversations with flesh-and-blood people.  This demands additional training and experience that many specialized data-oriented researchers have ignored or have not had opportunities to explore.  I can point to myself as a former member of this crowd – I worked for an incredibly bright and talented firm that conducted a successful U.S. presidential campaign without holding a single focus group.  However, I was also fortunate to later work with an amazing firm with a reputation for conducting top-notch qualitative research.  Each of the two forms of research – quantitative and qualitative – has its role in understanding opinions and how to influence them.

Many people I meet are also not aware of another secret: we have new technological tools in qualitative research for delivering cutting-edge analysis.

As researchers looking for insights, we first need a signal that “something is off” and there is something new to explain.  For example, in the burgeoning field of behavioral economics, scholars such as Dan Ariely, author of books such as Predictably Irrational, grab our attention by pointing out the quirks of human nature.  Why do people act against their self-interest?

I have had tremendous success for clients and personal satisfaction by talking to real people.  These interactions can generate new ideas and be a source for explaining unexpected phenomena.

The process of finding answers to these questions is one of many uses of qualitative research.  It is a key tool when researchers need to “know what we do not know.”

Since this is an introduction, here is a preview of some areas I will cover:

  • Some guiding principles of qualitative research.
  • Some anecdotes and case studies.
  • Reasons why saying you conducted “your own focus groups” deeply concerns me.
  • Classic and new ways to analyze qualitative research.
  • The real-world solution for many organizations: combine qualitative and quantitative research.
  • Ways I can help you conduct qualitative research, including with a tight budget.

As always, contact me directly at jstarr@starr-research.com to discuss opinion research and solving your organization’s challenges.

Reading Between the Lines at The Times

To start off 2016, The New York Times snuck in something new: a general population survey conducted mostly online.  Until then, the newspaper had conducted polls only with live interviewers calling landline and mobile telephone numbers.

Credit: Joshua Bright for The New York Times

John Broder, the newspaper’s editor of News Surveys, provided details about this change in a special Times Insider column.  Methodology junkies will enjoy it.

Was this a boring story for most readers? Probably.

But, here are some interesting things to consider:

Is this, in part, a trial run by The New York Times to improve the reliability of the political horserace question in time for the 2016 campaign?  American readers, and the journalists who rely on them, are obsessed with the horserace – who is “winning” or “losing,” and by how much?  Donald Trump did not create this obsession, but in some fashion his candidacy reflects the American fascination with these questions.

Unfortunately, it has been well-documented in recent years that the reliability of general population and voter surveys has deteriorated – at least, by the admission of The New York Times – due to, among other things, the increase in cell-phone-only households.  The resulting increase in the cost of conducting reliable polls has been staggering, and it has certainly taken a toll on newsroom budgets.  Side note: Would it also be fair to say that The New York Times just announced that its general public/voter polling is flawed?

We are lost in the political desert without reliable polls.  News organizations are desperate to find a solution that restores credibility to these polls.  They need to feed the public’s interest in the horserace.  Likewise, journalists need the polls to help them write their political stories and analyses.

So, with this one poll and accompanying article, The New York Times might be preparing the general public for a new methodology for their political polling.  I don’t think they can have it both ways for too long – some polling with an online component, and some without.  And, from my perspective, the sooner a change is made to make general public surveys with an online component the gold standard, the sooner survey costs can come back to earth.

Of course, only The New York Times knows in their testing whether this methodology is ready for their front pages.  But, why else would they publish the results and article?

If this methodology is sufficiently reliable, we will see the return of the horserace question across the front pages.  Phew.

Will there be an impact for communication professionals?  Will the standards for publishing your polling change?  News editors across the country are asked daily to publish polls conducted by a third party.  No matter how interesting the results, the first question is whether the survey was methodologically sound.  Journalists, often weak in math and statistics, have been trained over the years to sniff out bad polling.

Will The New York Times create a new standard for polling, whether good or bad, and will news editors across the country follow their lead?

Perhaps John Broder’s article will have broader implications than just being a dry review of survey methodologies.