How Facebook and Google fund global misinformation

The tech giants are paying millions of dollars to the operators of clickbait pages, bankrolling the deterioration of information ecosystems around the world.

By Karen Hao
November 20, 2021
Myanmar, March 2021.
A month after the fall of the democratic government.
A Facebook Live video showed hundreds of people protesting against the military coup on the streets of Myanmar.

It had nearly 50,000 shares and over 1.5 million views, in a country with a little over 54 million people.
Observers, unable to see the events on the ground, used the footage, along with hundreds of other live feeds, to track and document the unfolding situation. (MIT Technology Review blurred the names and images of the posters to avoid jeopardizing their safety.)
But less than a day later, the same video was rebroadcast multiple times, each time still claiming to be live.
In the middle of a massive political crisis, there was no longer a way to discern what was real and what wasn’t.
In 2015, six of the 10 websites in Myanmar getting the most engagement on Facebook were from legitimate media, according to data from CrowdTangle, a Facebook-run tool. A year later, Facebook (which recently rebranded to Meta) offered global access to Instant Articles, a program publishers could use to monetize their content.
One year after that rollout, legitimate publishers accounted for only two of the top 10 publishers on Facebook in Myanmar. By 2018, they accounted for zero. All the engagement had instead gone to fake news and clickbait websites. In a country where Facebook is synonymous with the internet, the low-grade content overwhelmed other information sources.
It was during this rapid degradation of Myanmar’s digital environment that, in August 2017, a militant group of Rohingya—a predominantly Muslim ethnic minority—attacked and killed a dozen members of the security forces. As police and military began to crack down on the Rohingya and push out anti-Muslim propaganda, fake news articles capitalizing on the sentiment went viral. They claimed that Muslims were armed, that they were gathering in mobs 1,000 strong, that they were around the corner coming to kill you.
It’s still not clear today whether the fake news came primarily from political actors or from financially motivated ones. But either way, the sheer volume of fake news and clickbait acted like fuel on the flames of already dangerously high ethnic and religious tensions. It shifted public opinion and escalated the conflict, which ultimately led to the death of 10,000 Rohingya, by conservative estimates, and the displacement of 700,000 more.
In 2018, a United Nations investigation determined that the violence against the Rohingya constituted a genocide and that Facebook had played a “determining role” in the atrocities. Months later, Facebook admitted it hadn’t done enough “to help prevent our platform from being used to foment division and incite offline violence.”
Over the last few weeks, the revelations from the Facebook Papers, a collection of internal documents provided to Congress and a consortium of news organizations by whistleblower Frances Haugen, have reaffirmed what civil society groups have been saying for years: Facebook’s algorithmic amplification of inflammatory content, combined with its failure to prioritize content moderation outside the US and Europe, has fueled the spread of hate speech and misinformation, dangerously destabilizing countries around the world.
But there’s a crucial piece missing from the story. Facebook isn’t just amplifying misinformation.
The company is also funding it.
An MIT Technology Review investigation, based on expert interviews, data analyses, and documents that were not included in the Facebook Papers, has found that Facebook and Google are paying millions of ad dollars to bankroll clickbait actors, fueling the deterioration of information ecosystems around the world.
The anatomy of a clickbait farm

Fictional examples of "chumbox"-style adverts, employing common clickbait tactics: using an information gap to encourage reader curiosity, and promising easy-to-read numbered lists.

Facebook launched its Instant Articles program in 2015 with a handful of US and European publishers. The company billed the program as a way to improve article load times and create a slicker user experience.
That was the public sell. But the move also conveniently captured advertising dollars from Google. Before Instant Articles, articles posted on Facebook would redirect to a browser, where they’d open up on the publisher’s own website. The ad provider, usually Google, would then cash in on any ad views or clicks. With the new scheme, articles would open up directly within the Facebook app, and Facebook would own the ad space. If a participating publisher had also opted in to monetizing with Facebook’s advertising network, called Audience Network, Facebook could insert ads into the publisher’s stories and take a 30% cut of the revenue.
Instant Articles quickly fell out of favor with its original cohort of big mainstream publishers. For them, the payouts weren’t high enough compared with other available forms of monetization. But that was not true for publishers in the Global South, which Facebook began accepting into the program in 2016. In 2018, the company reported paying out $1.5 billion to publishers and app developers (who can also participate in Audience Network). By 2019, that figure had reached multiple billions.
Early on, Facebook performed little quality control on the types of publishers joining the program. The platform’s design also didn’t sufficiently penalize users for posting identical content across Facebook pages—in fact, it rewarded the behavior. Posting the same article on multiple pages could as much as double the number of users who clicked on it and generated ad revenue.
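To put rough numbers on that mechanic, here is a minimal sketch in Python. The 30% cut is the figure cited above; every other number is invented purely for illustration.

```python
# Minimal sketch of the payout mechanics described above. The 30% platform
# cut is the figure cited in this article; all other numbers are invented.

def publisher_payout(ad_revenue: float, platform_share: float = 0.30) -> float:
    """Return the publisher's cut after Facebook takes its revenue share."""
    return ad_revenue * (1 - platform_share)

base = 100.0                       # hypothetical ad revenue from one page
print(publisher_payout(base))      # about $70 to the publisher
print(publisher_payout(2 * base))  # about $140 if duplicate posts double clicks
```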
Clickbait farms around the world seized on this flaw as a strategy—one they still use today.
A farm will create one or more websites for publishing predominantly plagiarized content. It registers them with Instant Articles and Audience Network, which insert ads into their articles. Then it posts those articles across a cluster of as many as dozens of Facebook pages at a time.
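For readers who want to reason about that structure programmatically, it can be captured in a few lines. This is a hypothetical model for analysis purposes only; the names and fields are ours, not Facebook's.

```python
# A hypothetical data model of the farm structure described above,
# for analysis purposes only; names and fields are ours, not Facebook's.
from dataclasses import dataclass, field

@dataclass
class Website:
    domain: str
    instant_articles: bool = False   # enrolled in Instant Articles
    audience_network: bool = False   # enrolled in Audience Network

@dataclass
class ClickbaitFarm:
    websites: list[Website] = field(default_factory=list)
    page_ids: list[str] = field(default_factory=list)  # Facebook pages

    def can_monetize(self) -> bool:
        # The farm earns ad revenue if any of its sites is enrolled
        # in a monetization program.
        return any(w.instant_articles or w.audience_network
                   for w in self.websites)
```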
Clickbait actors cropped up in Myanmar overnight. With the right recipe for producing engaging and evocative content, they could generate thousands of US dollars a month in ad revenue, or 10 times the average monthly salary—paid to them directly by Facebook.
🔴Scammers used to make their $$ from naive people. Now they get their payments straight from some of the world's biggest tech companies. Sorry David -- but this is NOT equivalent. https://t.co/mhMZMTNi6e pic.twitter.com/hgqYBcHw8U
— Victoire Rio (@riovictoire) September 19, 2021
If this is wild - let's check out @google, who officially says it does not enable monetisation in Myanmar (too risky) but in practice turns a blind eye and finds no issue with making payments into Myanmar bank accounts 🤦‍♀️
The rules 👉https://t.co/vCdrTpkGbf https://t.co/gX2FbeIa00 pic.twitter.com/dYV3eJp2eH
— Victoire Rio (@riovictoire) September 20, 2021
An internal company document, first reported by MIT Technology Review in October, shows that Facebook was aware of the problem as early as 2019. The author, former Facebook data scientist Jeff Allen, found that these exact tactics had allowed clickbait farms in Macedonia and Kosovo to reach nearly half a million Americans a year before the 2020 election. The farms had also made their way into Instant Articles and Ad Breaks, a similar monetization program for inserting ads into Facebook videos. At one point, as many as 60% of the domains enrolled in Instant Articles were using the spammy writing tactics employed by clickbait farms, the report said. Allen, bound by a nondisclosure agreement with Facebook, did not comment on the report.
Despite pressure from both internal and external researchers, Facebook struggled to stem the abuse. Meanwhile, the company was rolling out more monetization programs to open up new streams of revenue. Besides Ad Breaks for videos, there was IGTV Monetization for Instagram and In-Stream Ads for Live videos. “That reckless push for user growth we saw—now we are seeing a reckless push for publisher growth,” says Victoire Rio, a digital rights researcher fighting platform-induced harms in Myanmar and other countries in the Global South.
MIT Technology Review has found that the problem is now happening on a global scale. Thousands of clickbait operations have sprung up, primarily in countries where Facebook’s payouts provide a larger and steadier source of income than other forms of available work. Some are teams of people while others are individuals, abetted by cheap automated tools that help them create and distribute articles at mass scale. They’re no longer limited to publishing articles, either. They push out Live videos and run Instagram accounts, which they monetize directly or use to drive more traffic to their sites.
Google is also culpable. Its AdSense program fueled the Macedonia- and Kosovo-based farms that targeted American audiences in the lead-up to the 2016 presidential election. And it’s AdSense that is incentivizing new clickbait actors on YouTube to post outrageous content and viral misinformation.
Many clickbait farms today now monetize with both Instant Articles and AdSense, receiving payouts from both companies. And because Facebook’s and YouTube’s algorithms boost whatever is engaging to users, they’ve created an information ecosystem where content that goes viral on one platform will often be recycled on the other to maximize distribution and revenue.
“These actors wouldn’t exist if it wasn’t for the platforms,” Rio says.
In response to the detailed evidence of this behavior that we provided to each company, Meta spokesperson Joe Osborne disputed our core findings, saying we’d misunderstood the issue. “Regardless, we’ve invested in building new expert-driven and scalable solutions to these complex issues for many years, and will continue doing so,” he said.
Google confirmed that the behavior violated its policies and terminated all of the YouTube channels MIT Technology Review identified as spreading misinformation. “We work hard to protect viewers from clickbait or misleading content across our platforms and have invested heavily in systems that are designed to elevate authoritative information,” YouTube spokesperson Ivy Choi said.
Clickbait farms are not just targeting their home countries. Following the example of actors from Macedonia and Kosovo, the newest operators have realized they need to understand neither a country’s local context nor its language to turn political outrage into income.
MIT Technology Review partnered with Allen, who now leads a nonprofit called the Integrity Institute that conducts research on platform abuse, to identify possible clickbait actors on Facebook. We focused on pages run out of Cambodia and Vietnam—two of the countries where clickbait operations are now cashing in on the situation in Myanmar.
We obtained data from CrowdTangle, whose development team Facebook broke up earlier this year, and from Facebook’s Publisher Lists, which record which publishers are registered in monetization programs. Allen wrote a custom clustering algorithm to find pages posting content in a highly coordinated manner and targeting speakers of languages used primarily outside the countries where the operations are based. We then analyzed which clusters had at least one page registered in a monetization program or were heavily promoting content from a page registered with a program.
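Allen's clustering code is custom, but the core signal is simple to sketch. What follows is an illustrative Python reimplementation of one common coordination heuristic, grouping pages that repeatedly share the same link within minutes of one another. It is not his actual algorithm, and the post fields used here are assumptions.

```python
# Simplified sketch of one coordination signal: pages that repeatedly
# share the same URL within minutes of one another. Illustrative only;
# the post fields ('page_id', 'url', 'timestamp') are assumptions.
from collections import defaultdict
from itertools import combinations

WINDOW_SECONDS = 600   # co-posts of the same link within 10 minutes
MIN_SHARED_LINKS = 5   # co-posts required before two pages are linked

def coordinated_clusters(posts):
    """posts: iterable of dicts with 'page_id', 'url', 'timestamp' (epoch seconds)."""
    by_url = defaultdict(list)
    for p in posts:
        by_url[p["url"]].append((p["timestamp"], p["page_id"]))

    # Count near-simultaneous co-posts for every pair of pages.
    pair_counts = defaultdict(int)
    for shares in by_url.values():
        shares.sort()
        for (t1, a), (t2, b) in combinations(shares, 2):
            if a != b and t2 - t1 <= WINDOW_SECONDS:
                pair_counts[tuple(sorted((a, b)))] += 1

    # Union-find: pages joined by enough co-posts fall into one cluster.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (a, b), n in pair_counts.items():
        if n >= MIN_SHARED_LINKS:
            parent[find(a)] = find(b)

    clusters = defaultdict(set)
    for page in parent:
        clusters[find(page)].add(page)
    return [c for c in clusters.values() if len(c) > 1]
```

Each resulting cluster is a candidate coordinated operation, which a real pipeline would then cross-reference against the Publisher Lists, as described above.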
We found over 2,000 pages in both countries engaged in this clickbait-like behavior. (That could be an undercount, because not all Facebook pages are tracked by CrowdTangle.) Many have millions of followers and likely reach even more users. In his 2019 report, Allen found that 75% of users who were exposed to clickbait content from farms run in Macedonia and Kosovo had never followed any of the pages. Facebook’s content-recommendation system had instead pushed it into their news feeds.
When MIT Technology Review sent Facebook a list of these pages and a detailed explanation of our methodology, Osborne called the analysis “flawed.” “While some Pages here may have been on our publisher lists, many of them didn’t actually monetize on Facebook,” he said.
Indeed, these numbers do not indicate that all of these pages generated ad revenue. Instead, they are an estimate, based on data Facebook has made publicly available, of the number of pages associated with clickbait actors in Cambodia and Vietnam that Facebook has made eligible to monetize on the platform.
Osborne also confirmed that more of the Cambodia-run clickbait-like pages we found had directly registered with one of Facebook’s monetization programs than we previously believed. In our analysis, we found 35% of the pages in our clusters had done so in the last two years. The other 65% would have indirectly generated ad revenue by heavily promoting content from the registered page to a wider audience. Osborne said that in fact about half of the pages we found, or roughly 150 more pages, had directly registered at one point with a monetization program, primarily Instant Articles.
Shortly after we approached Facebook, operators of clickbait pages in Myanmar began complaining in online forums that their pages had been booted out of Instant Articles. Osborne declined to respond to our questions about the latest enforcement actions the company has taken.
Facebook has continuously sought to weed these actors out of its programs. For example, only 30 of the Cambodia-run pages are still monetizing, Osborne said. But our data from Facebook’s publisher lists shows enforcement is often delayed and incomplete—clickbait pages can stay within monetization programs for hundreds of days before they are taken down. The same actors will also spin up new pages once their old ones have been demonetized.
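That delay can be measured. The publisher lists amount to a series of dated snapshots, so a page's minimum time in a program falls out of simple bookkeeping. This sketch assumes a snapshot format of our own devising, not Facebook's.

```python
# Sketch of measuring enforcement delay from dated publisher-list
# snapshots. The snapshot format here is assumed, not Facebook's.
from datetime import date

def days_enrolled(snapshots, page_id):
    """snapshots: mapping of snapshot date -> set of enrolled page IDs.
    Returns days between the first and last snapshot listing the page."""
    seen = sorted(d for d, pages in snapshots.items() if page_id in pages)
    if not seen:
        return 0
    return (seen[-1] - seen[0]).days

snapshots = {
    date(2021, 1, 1): {"page_a", "page_b"},
    date(2021, 6, 1): {"page_a"},          # page_b demonetized by June
    date(2021, 11, 1): {"page_a"},         # page_a still enrolled
}
print(days_enrolled(snapshots, "page_a"))  # 304 days and counting
```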
Allen is now open-sourcing the code we used to encourage other independent researchers to refine and build on our work.
