
The unreliable nature of digital surveys

  • Writer: The Left Chapter

Image via the PCC


By Emilia Reed, special to Granma, translated from the Spanish


Last night, while I was scrolling on my cell phone, my neighbor Marta stumbled upon a poll on Facebook: "Do you support So-and-so or What's-his-name?" Two options, a counter that climbs at full speed and, in under an hour, a striking result: "84% So-and-so." Soon after, that screenshot made its way around WhatsApp accompanied by a blunt comment: "The people have spoken."


Marta, who is not a specialist in statistics and does not pretend to be, feels that the number has weight. It sounds definitive, almost as if someone had measured the pulse of the country. What was actually measured was the reaction of a group of people who saw the publication at that moment and chose to check an option.


Quick internet surveys are already part of the digital landscape, sneaking into social networks, news portals, messaging channels and even applications that promise to "know what you think" with a click. They are attractive because of their simplicity: they ask something straightforward, generate an immediate percentage and give the impression that the debate has been settled. The problem begins when that percentage is interpreted as if it were representative of all of "society" or "the citizens".


A rigorous survey isn’t about who happens to respond, but about carefully selecting participants in a controlled way, using proven scientific methods.


It may sound technical, but the idea is simple. If you want to understand what a population truly thinks, it's not enough to ask out loud and rely on the opinions of those who raise their hands around you. On the internet, "raising your hand" is even more unpredictable: the people who participate are those who notice the poll, have the time, feel strongly for or against, or feel called by the community sharing it.


So, what usually happens is that these votes reflect, at best, just a portion of an account’s followers, regular readers of a publication, or members of a group. Not “the people,” but rather whoever happened to be there at the time and responded.


The most common trap is what experts call "self-selection bias." Imagine a portal launching a public poll: "Do you think the government should do this?" People who vote usually do so because the issue matters to them or sparks strong feelings, not because they were randomly selected. As a result, the outcome tends to reflect more extreme opinions.
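
To make the distortion concrete, here is a minimal simulation, a sketch in Python with invented numbers (the population size, the 50/50 split and the "enthusiasm gap" are all assumptions chosen for illustration, not data from any real poll). It compares a random sample with a self-selected online vote drawn from the same population.

```python
import random

random.seed(42)

# Hypothetical population of 100,000 people: true support is 50/50,
# but opinion *intensity* varies. These numbers are invented for
# illustration; they do not come from any real survey.
population = []
for _ in range(100_000):
    supports = random.random() < 0.50   # true opinion
    intensity = random.random()         # how strongly they feel
    population.append((supports, intensity))

# Scientific poll: a simple random sample of 1,000 people.
random_sample = random.sample(population, 1_000)
random_result = sum(s for s, _ in random_sample) / len(random_sample)

# Online poll: anyone may vote, but only people with strong feelings
# bother. Assume (arbitrarily) that supporters feel more strongly,
# so their chance of voting gets an extra boost.
def votes(supports, intensity):
    boost = 0.3 if supports else 0.0    # assumed enthusiasm gap
    return random.random() < intensity * 0.2 + boost

voters = [(s, i) for s, i in population if votes(s, i)]
online_result = sum(s for s, _ in voters) / len(voters)

print("True support:        50.0%")
print(f"Random sample:       {random_result:.1%}")  # lands near 50%
print(f"Self-selected poll:  {online_result:.1%}")  # badly skewed
```

With these made-up parameters, the random sample lands near the true 50%, while the self-selected poll reports roughly 80% support: the same kind of inflated number Marta saw on her screen.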


Something similar occurs on social media, where the "bubble effect" also plays a role. This means that a poll shared by an account with a like-minded audience is likely to yield a uniform result. If content mainly circulates among people who already share the same views, the percentage reflects not a social majority, but the tight-knit nature of that group.
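
The same arithmetic applies to the bubble: if a poll mostly reaches people who already agree, the result is decided before anyone votes. A tiny sketch, where the 90% share of like-minded followers is again an invented assumption:

```python
import random

random.seed(7)

# Toy "bubble": society as a whole is split 50/50, but the account's
# followers are drawn mostly from people who already share its view.
def follower_opinion():
    return random.random() < 0.90   # assumed: 90% of followers agree

poll_votes = [follower_opinion() for _ in range(2_000)]
print(f"Poll among followers: {sum(poll_votes) / len(poll_votes):.0%}")
# Prints roughly 90%: the percentage measures the tight-knit group,
# not the country.
```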


An online survey can be a way to engage, gauge the pulse of a community, or spark conversation. The problem arises when it is dressed up as science or used to manufacture consent.


A good rule of thumb can help you avoid being manipulated: if you don't know who was allowed to answer, how respondents were selected, whether the same person could vote more than once, or whether the question was worded neutrally, then that percentage doesn't really show "what people think"; at best, it reflects what a self-selected group happened to say.


You don’t need to be an expert in networks or statistics to understand the difference between data measured to scientific standards and a number that’s gone viral. It's just a matter of remembering that a click isn’t a representative sample of a population. On the internet, percentages can be loud, seductive, and often quite unreliable.



This work was translated and shared under a CC BY-NC license.


