Finger to the Wind: Improving Odds for Success in Communications Research


It is a simple mantra, capturing the tenor of the times: #Enough2016.

The hashtag punctuated posts across social media in shocked reaction to the unexpected deaths of cultural icons, the Brexit referendum vote and, in what many consider a stunning upset, the outcome of the U.S. presidential election. Those unable to stomach surprises have taken aim at pollsters and journalists, accusing them of arrogance and sloppiness in leading the public astray. Yes, public confidence in news-reported facts and figures is badly shaken. However, communicators should avoid indulging in the blame game and instead focus on improving their own evidence-based strategy planning for 2017.

The Night Data Died

It wasn’t as bad as lifting a wet finger to the air to figure out which way the wind was blowing. But it was close.

A New York Times article spoke volumes, both in its headline, “A ‘Dewey Defeats Truman’ Lesson for the Digital Age,” and in its reference to Republican political strategist Mike Murphy’s election-night admission of hubris in making the wrong call: “Tonight data died.”

In its meditation following the Trump win, “Should we give up on election polling?”, the BBC also reached back to the 2015 U.K. general election, when the Conservative party won a parliamentary majority despite having been seriously underestimated by British polls. What had caused opinion polls and journalists to badly misread American and British voters?

Rather than add to a bubbling stew already thick with ideological punditry, the Pew Research Center offered three possible underlying problems with research mechanics that may have caused U.S. poll readings to go awry.

Communicators should pay close attention. Audience measurement tools have been part of the communications kit for decades, helping to inform strategies or raise visibility for brands and businesses. Publicity surveys, stakeholder studies and customer analyses are typical research approaches that deliver data, which communicators can use to develop actionable insights.

However, before shaping narratives from numbers, it will be important to first sharpen scientific literacy. What lessons on research techniques can marketers and communications practitioners learn from the 2016 election polls?

Accidental Thinning of the Herd: Non-Response Bias

Errors occur through non-response bias when the group of survey takers differs from the intended sample because the survey fails to reach a significant share of its target demographics. Critics now believe election opinion polls missed key groups who cast their votes for Donald Trump, including voters in rural counties and other remote regions where online surveys are difficult to access.

Research Tips: Thoughtfully reconsider the ways surveys are fielded. For example, a surgical equipment manufacturer seeking the opinions of both hospital administrators and surgeons for a customer study can unintentionally filter out one group based on the chosen methodology. If a landline phone survey is used exclusively, the study runs the risk of underrepresenting surgeons who divide their time between hospital visits, operating facilities and their own offices. Researchers can improve response rates and collect sample data more accurately by using a mixed-mode methodology that offers respondents options for taking the survey (e.g., phone or face-to-face interviews, mobile or landline phone surveys, online or regular mail questionnaires). The sketch below shows the effect in miniature.
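To make non-response bias concrete, here is a minimal Python simulation of the hypothetical surgical equipment study above. Every figure (group sizes, satisfaction scores, response rates) is invented for illustration; the point is only that a single-mode survey reaching one group far more often than another skews the overall estimate, while a mixed-mode design narrows the gap.

    import random

    random.seed(42)

    # Hypothetical population: 500 hospital administrators and 500 surgeons.
    # Satisfaction scores (1-10 scale) are invented for illustration only.
    admins   = [random.gauss(7.5, 1.0) for _ in range(500)]
    surgeons = [random.gauss(5.5, 1.0) for _ in range(500)]

    def respondents(group, response_rate):
        """Return the subset of a group that actually completes the survey."""
        return [score for score in group if random.random() < response_rate]

    def mean(scores):
        return sum(scores) / len(scores)

    # The target the survey should recover: the full population's mean.
    print(f"True mean satisfaction: {mean(admins + surgeons):.2f}")

    # Landline-only mode: desk-bound administrators respond often (60%),
    # while surgeons moving between sites rarely do (10%).
    landline_only = respondents(admins, 0.60) + respondents(surgeons, 0.10)
    print(f"Landline-only estimate: {mean(landline_only):.2f}")  # skews high

    # Mixed-mode (landline + mobile + online): response rates converge.
    mixed_mode = respondents(admins, 0.60) + respondents(surgeons, 0.50)
    print(f"Mixed-mode estimate:    {mean(mixed_mode):.2f}")  # closer to truth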

You Like Me, You Really Like Me: Social Desirability Bias

Social scientists and researchers are familiar with social desirability bias, the tendency of survey takers to select responses they believe create more favorable impressions, rather than what reflects their true feelings or behaviors. Analysts now feel that Trump voters actually included many hidden advocates, who either concealed their preference or deliberately misled others to avoid criticism.

Such pretense can arise in stakeholder research when survey takers are asked about sensitive topics or personal habits. As an example, it would be difficult for an employer to avoid social desirability bias in an employee study about behavior considered unethical, illegal or likely to invite retaliation.

Research Tips: When the possibility of social desirability bias exists, one way to facilitate data collection is through question design. Restate questions to let respondents know their answers will remain confidential. Also, try prefacing the questions with reassurances that survey participants will not be judged about beliefs or activities others may consider unpopular or socially unacceptable (e.g., “There are several reasons why employees may sometimes miss work. Which of these reasons have you cited in the past year?”). Other good practices include asking touchy questions later in the poll, after trust has been established, and adopting more neutral language (e.g., replace “Are you overweight?” with “What is your weight?”).
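Question design can even build confidentiality into the math. One established approach from survey statistics (not among the tips above, and offered here only as an illustrative aside) is the randomized response technique: each respondent privately flips a coin before answering, so no individual answer reveals the truth, yet the aggregate still yields an estimate. A minimal Python sketch with invented numbers:

    import random

    random.seed(7)

    def randomized_response(has_trait):
        """One respondent's answer under the randomized response technique.

        Private first coin flip: heads -> answer the sensitive question
        truthfully; tails -> flip again and answer 'yes' on heads, 'no'
        on tails, regardless of the truth. No single answer is revealing.
        """
        if random.random() < 0.5:
            return has_trait           # truthful branch
        return random.random() < 0.5   # noise branch: 'yes' half the time

    # Invented scenario: 20% of 10,000 employees have engaged in the
    # sensitive behavior being studied.
    population = [random.random() < 0.20 for _ in range(10_000)]
    answers = [randomized_response(person) for person in population]

    # Observed 'yes' rate = 0.5 * true_rate + 0.25, so invert it:
    observed = sum(answers) / len(answers)
    print(f"Estimated prevalence: {2 * (observed - 0.25):.3f}")  # ~0.20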

What to Expect When You’re Expecting: Population Specification

In statistics, a population refers to the entire set of data to be studied, and a sample is a specific group within the population that should reflect the complete set’s characteristics. Population specification is especially tough in an election since the target individuals aren’t identified until the day the ballots are cast—for example, eligible Americans who actually voted. Pew researchers have suggested that population models may have been wrong because pollsters didn’t account for voters in rural communities and those who vehemently insisted they would not participate because of their disdain for all candidates, yet ended up voting anyway.

Similar mistakes can happen when using customer data to develop communications strategies. For instance, a software company that wants to understand customer perceptions of its brand’s value proposition may initially define its population as end-user consumers who directly buy one version of its products online. However, if the software company relies on a channel structure (e.g., IT distributors, value-added resellers, systems integrators) to generate sales, the survey population should be expanded to represent these intermediaries, too.

Research Tips: Keep in mind the business goal on which observations from the survey are meant to shed light. To avoid misrepresenting survey populations, use two techniques to introduce fresh perspectives. The first approach is to find the “negative space,” or the areas in between what is obviously defined. Based on the study’s objectives, investigate further to see if there may be groups that would complete the population. The second approach is to try reframing the target audience, either by broadening or narrowing it, for more relevant business insights. For example, when a study about service quality calls for a customer population, it may help either to expand the group or pinpoint a specific customer set, based on what will be done with the findings. If the goal is to understand general perceptions, it may be worthwhile to include both active customers and recent defectors. However, if the business wants to increase share of wallet by changing perceptions, it would be more meaningful to focus on customers who are portfolio buyers purchasing from various competing vendors.
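How much the population definition matters is easy to demonstrate. The following Python sketch uses invented perception scores for the hypothetical software company above; notice how the brand’s apparent standing shifts as the defined population expands from direct buyers to channel partners to recent defectors.

    # Hypothetical perception scores (1-10) for a software brand, invented
    # purely to show how the population definition drives the result.
    segments = {
        "direct_online_buyers": [8.1, 7.9, 8.4, 7.6, 8.0],  # self-serve users
        "channel_partners":     [5.2, 6.0, 4.8, 5.5],       # resellers, integrators
        "recent_defectors":     [3.9, 4.4, 3.1],            # churned customers
    }

    def mean(scores):
        return sum(scores) / len(scores)

    # Narrow population: only customers who buy directly online.
    narrow = segments["direct_online_buyers"]
    print(f"Direct buyers only:       {mean(narrow):.2f}")

    # Expanded population: add the channel that actually generates sales.
    expanded = narrow + segments["channel_partners"]
    print(f"Including the channel:    {mean(expanded):.2f}")

    # General-perception framing: include recent defectors as well.
    general = expanded + segments["recent_defectors"]
    print(f"Customers plus defectors: {mean(general):.2f}")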

Resolution

The worldwide political and cultural events of 2016 will be remembered as much for their individual merits as for their collective shock value. After all, there were positive surprises, too. Although many Las Vegas bookies had already favored the Chicago Cubs to win the World Series going into the 2016 season, the odds kept shifting during the championship until the final game.

If 2016 leaves people feeling a bit unsteady, then communicators can review the year’s events as part of an exercise to calibrate their own practice mechanics. Were these scenarios truly impossible to predict, or did critical missteps seal their fates? Taking a scenario planning approach will allow for more resilience in working toward a long-term resolution.

Did someone say resolution? #BringOn2017.

Mary curates the G&S brand experience to share the G&S mission, vision and values with audiences who influence the agency’s growth. Mary directs the agency’s marketing strategy that spans its digital and social media properties, branded research and live events, and news coverage. She is the co-author of the firm’s annual Sense & Sustainability® Study and the executive producer of its portfolio of business and media conferences for senior communicators. Before joining G&S, Mary was SVP, corporate communications, and managing officer at Medialink Worldwide, a multimedia content and technology provider whose Nasdaq-listed IPO she helped to launch and grow to $180 million in market capitalization. Mary is the B2B columnist and an Advisory Board member of PR News, and is a member of the Public Relations Society of America and its Silver Anvil Awards judging panel. Combining her academic training in art with her professional passion, Mary struck one item off her bucket list by leading a marketing panel at the Museum of Modern Art in New York.
