By now, we all know what social media is about. Sure, sites like Facebook and Instagram help us stay connected and keep up with celebrity gossip, but at their core, they’re money-making businesses powered by advertising revenue.
For most of us, that’s a fair trade. A few targeted ads seem like a small price to pay for access to everything the internet offers. And if you end up buying a kitchen gadget you never knew you needed, what’s the harm?
But who’s really deciding what adverts you see? We’re long past humans making those decisions. Now, it’s all down to algorithms – opaque, automated systems built to maximise clicks and sales.
And herein lies the problem – at least according to a recent study published in the International Journal of Research in Marketing.
The study focused on a specific type of experimentation carried out by Facebook and Google on users – A/B testing.
“A/B testing is an experiment where you create two or more groups of participants and study their reactions to a stimulus,” said Dr Yann Cornil, an associate professor at the UBC Sauder School of Business and co-author of the new paper, speaking to BBC Science Focus.
In marketing, that stimulus is often different versions of an advertisement. Simply put, one group sees Ad A, another group sees Ad B and market researchers measure the response to determine which version is more effective.
According to study co-author Dr David Hardisty, every Facebook user is likely an unwitting participant in around 10 such experiments (Facebook did not confirm this claim to BBC Science Focus).
If you’re starting to feel like a lab rat, that’s understandable. But the real issue isn’t that we’re being experimented on without our knowledge – after all, marketers were running experiments like this long before social media came along.
The problem is how the experiments are being conducted.
A/B testing gone wrong
“For an A/B test to be useful, the groups shown different versions need to be randomly assigned,” Cornil said. “This is critical because without random assignment you can’t establish causality. You can’t say that version A works better than version B if the two groups are fundamentally different.”
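To make that concrete, here’s a minimal sketch of a properly randomised A/B test in Python – a toy simulation with invented click rates, not code or data from the study:

```python
import random

random.seed(0)

# Toy simulation, not data from the study: we invent a "true" click rate
# for each version of the ad
TRUE_RATE = {"A": 0.030, "B": 0.036}

def serve(user_id: int, version: str) -> bool:
    """Stand-in for serving an ad and logging whether the user clicked."""
    return random.random() < TRUE_RATE[version]

# Random assignment is the critical step: shuffle the users,
# then split the pool down the middle
users = list(range(10_000))
random.shuffle(users)
group_a, group_b = users[:5_000], users[5_000:]

rate_a = sum(serve(u, "A") for u in group_a) / len(group_a)
rate_b = sum(serve(u, "B") for u in group_b) / len(group_b)

# Because assignment was random, the two groups are statistically alike,
# so a persistent gap in click rate can be attributed to the ads themselves
print(f"Ad A: {rate_a:.1%}   Ad B: {rate_b:.1%}")
```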
But that’s not how Facebook and Google (and likely other sites, the authors claimed) run their tests. Instead of randomly assigning users, their algorithms actively select who sees what – choosing people based on who they predict will engage with the ad most.
That means advertisers aren’t getting real insights into what makes an ad effective. They’re just seeing what works best with the algorithm.

“If you have an ad that’s going crazy and getting a lot more clicks, it may just be that Facebook successfully identified a small, particular group of people that really like it,” Hardisty said in a statement. “And if you change your whole product line or campaign to match that, it might actually be alienating to most people. So you have to be very careful not to draw broader lessons from one Facebook study.”
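A toy simulation shows how that distortion plays out. In the sketch below – our construction, with invented numbers – the two ads are, on average, exactly as effective as each other; but because a caricature of engagement-maximising delivery shows each user the ad they’re predicted to prefer, both measured click rates come out inflated, and neither reveals which ad would win with a random audience:

```python
import random

random.seed(1)

# Toy population, invented for illustration: each user has a personal
# affinity (click probability) for each ad. Averaged over everyone,
# the two ads are exactly as effective as each other -- 3% either way.
users = [{"A": random.uniform(0, 0.06), "B": random.uniform(0, 0.06)}
         for _ in range(100_000)]

def platform_assign(user: dict) -> str:
    """A caricature of engagement-maximising delivery: show each user
    whichever ad the platform predicts they personally prefer."""
    return max(user, key=user.get)

shown = {"A": 0, "B": 0}
clicks = {"A": 0, "B": 0}
for user in users:
    version = platform_assign(user)
    shown[version] += 1
    clicks[version] += random.random() < user[version]

for v in ("A", "B"):
    print(f"Ad {v}: shown {shown[v]} times, click rate {clicks[v] / shown[v]:.1%}")

# Both measured rates land near 4%, well above the true 3% population
# average: the delivery rule cherry-picked audiences that already liked
# each ad, so neither number says which ad would win with everyone.
```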
This is all bad news for advertisers trying to figure out what works – but things don’t look much better for the rest of us either.
When algorithms decide who sees what
As researchers like Hardisty point out, few people fully understand how these algorithms work – and that opacity can lead to serious unintended consequences.
For example, one study from 2018 found that supposedly gender-neutral job adverts for STEM (Science, Technology, Engineering and Maths) roles were being shown to men more than to women.
Why? Because the algorithm prioritised cost-efficiency. Young women are a sought-after audience whom advertisers compete to reach, which makes them more expensive to show ads to – even though they’re statistically more likely to engage with those ads. To save money, the algorithm quietly sidelined them, unintentionally reinforcing gender bias.
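The mechanism is easy to caricature in a few lines of code. The numbers below are invented – loosely inspired by the 2018 finding, not drawn from it – but they show how a purely cost-minimising delivery rule can skew who sees a ‘neutral’ ad:

```python
# Hypothetical prices: impressions for one demographic cost more because
# advertisers compete harder for that audience's attention
COST_PER_IMPRESSION = {"men": 0.010, "women": 0.015}
BUDGET = 100.0  # hypothetical campaign budget

def spend_budget(cost_per_impression: dict, budget: float) -> dict:
    """A cost-minimising delivery rule: buy every impression from
    whichever demographic is cheapest until the budget runs out."""
    cheapest = min(cost_per_impression, key=cost_per_impression.get)
    impressions = {group: 0 for group in cost_per_impression}
    impressions[cheapest] = int(budget / cost_per_impression[cheapest])
    return impressions

print(spend_budget(COST_PER_IMPRESSION, BUDGET))
# {'men': 10000, 'women': 0} -- a "gender-neutral" ad delivered entirely
# to men, purely because men were cheaper to reach
```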
Then there’s political polarisation.
“Algorithms create these little cocoons where you’re only exposed to opinions that align with your own,” Cornil said. “There are things companies could do to address this – like allowing users to opt into a more diverse range of content – but they have no real incentive to make those changes.”
And the targeting may go deeper than simply showing one group of people a different ad from another. According to the researchers, the algorithms are now so complex and precise that they can micro-target individuals rather than groups.
“It’s selecting the best possible ads for a specific segment – and the segment isn’t even a group of people,” Cornil explained. “With all the data we have about consumers, the segment is one.”
In other words, the algorithms don’t just identify broad demographics like “women aged 18–25” or “middle-class professionals in London.” They can tailor ads down to individual users, analysing their behaviour in real time to predict what they’ll click on next.
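As a rough sketch of what a ‘segment of one’ could look like – our simplification, nothing like the platforms’ far more sophisticated machinery – imagine a scorer that ranks every candidate ad against a single user’s behavioural history:

```python
def pick_ad_for(user_history: list[str], candidate_ads: dict[str, list[str]]) -> str:
    """Score every candidate ad against this one user's recent behaviour
    and serve the single highest-scoring ad -- no demographic buckets."""
    def score(ad_topics: list[str]) -> int:
        return sum(topic in user_history for topic in ad_topics)
    return max(candidate_ads, key=lambda ad: score(candidate_ads[ad]))

# Hypothetical behavioural trace and ad inventory, invented for this sketch
history = ["running shoes", "marathon training", "protein powder"]
ads = {
    "trainers_promo": ["running shoes", "fitness"],
    "sofa_sale": ["furniture", "home"],
    "energy_gels": ["marathon training", "protein powder", "fitness"],
}
print(pick_ad_for(history, ads))  # -> energy_gels
```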
“It all happens in a black box,” Cornil said. “The advertiser doesn’t know, but the machine knows. AI knows.”
What does this mean for you?
“The first thing to understand,” Cornil said, “is that when you’re online, you’re constantly being experimented on.”
These experiments aren’t inherently bad in themselves – similar techniques have been used to study human behaviour for years, and data collected this way can be more reliable, since participants don’t know they’re in an experiment.
But the difference now is that these experiments are being run by opaque, ever-changing algorithms that decide what you see and when you see it.
“The second thing,” Cornil added, “is to be aware that these algorithms have decided you are going to be exposed to a specific message.”
And that message isn’t always random. It’s carefully selected, shaped by data, and tailored to influence what you think, what you buy and – perhaps most worryingly – what you believe.
Facebook and Google did not respond to BBC Science Focus’s request for comment.
About our expert
Yann Cornil is an associate professor in the Marketing and Behavioral Science Division at the UBC Sauder School of Business, Canada. His research has been published in the Journal of Consumer Research, the International Journal of Research in Marketing and Organizational Behavior and Human Decision Processes, among others.