
How Social Media Abdicated Responsibility for the News

Illustration by Nicholas Konrad / The New Yorker

X, formerly known as Twitter, has, under the ownership of Elon Musk, dismantled its content-moderation staff, throttled the reach of news publications, and allowed any user to buy blue-check verification, turning what was once considered a badge of trustworthiness on the platform into a signal of support for Musk’s regime. Meta’s Facebook has minimized the number of news articles in users’ feeds, following years of controversy over the company’s role in spreading misinformation. And TikTok, under increased scrutiny in the United States for its parent company’s relationship with the Chinese government, is distancing itself from news content. A little over a decade ago, social media was heralded as a tool of transparency on a global scale for its ability to distribute on-the-ground documentation during the uprisings that became known as the Arab Spring. Now the same platforms appear to be making conflicts hazier rather than clearer. In the days since Hamas’s attacks, we’ve seen with fresh urgency the perils of relying on our feeds for news updates.

An “algorithmically driven fog of war” is how one journalist described the deluge of disinformation and mislabeled footage on X. Videos from a paragliding accident in South Korea in June of this year, the Syrian civil war in 2014, and a combat video game called Arma 3 have all been falsely labeled as scenes from Israel or Gaza. (Inquiries I sent to X were met with an e-mail reading, “Busy now, please check back later.”) On October 8th, Musk posted a tweet recommending two accounts to follow for information on the conflict, @WarMonitors and @sentdefender; neither is a formal media company, but both are paid X subscribers. Later that day, after users pointed out that both accounts regularly post falsehoods, Musk deleted the recommendation. Where Twitter was once one of the better-moderated digital platforms, X is now most trustworthy as a source for finding out what its owner wants you to see.

Related fact check: https://www.factcheck.org/2023/10/posts-use-fabricated-audio-to-misrepresent-cnn-report-during-rocket-attack-in-israel/

Facebook used to aggregate content in a “News Feed” and pay media companies to publish stories on its platform. But after years of complicity in disseminating Trumpian lies—about the 2016 election, the COVID pandemic, and the January 6th riots—the company has performed an about-face. Whether because of negative public opinion or the threat of regulation, it’s clear that promoting news is no longer the goal of any of Meta’s platforms. In recent days, my Facebook feed has been overrun with the same spammy entertainment-industry memes that have long proliferated there, as if nothing noteworthy were happening in the world beyond. On Instagram, some pro-Palestine users complained of being “shadowbanned”—seemingly cut off without warning from algorithmic promotion—and shared tips for getting around it. (Meta attributed the problem to a “bug.”)

Our feeds continue to create a feeling of transparency and granularity, while providing fewer of the signposts that we need to chart a clear path through the thicket of content. What remains, perhaps aptly, is an atmosphere of chaos and uncertainty as war unfolds daily on our screens.

In July, Meta launched its newest social network, Threads, in an attempt to draw users away from Musk’s embattled X. But, unlike X, Threads has shied away from serving as a real-time news aggregator. Last week, Adam Mosseri, the head of Instagram and overseer of Threads, announced that the platform was “not going to get in the way of” news content but was “not going go [sic] to amplify” it, either. He continued, “To do so would be too risky given the maturity of the platform, the downsides of over-promising, and the stakes.” I’ve found Threads more useful than X as a source for news about the Israel-Hamas war. The mood is calmer and more deliberate, and my feed tends to highlight posts that have already drawn engagement from authoritative voices. But I’ve also seen plenty of journalists on Threads griping that they were getting too many algorithmic recommendations and insufficient real-time posts. Users of Threads now have the option to switch to a chronologically organized feed. But on the default setting that most people use, there is no guarantee that the platform is showing you the latest information at any given time.

Source:

How Social Media Abdicated Responsibility for the News

Did Facebook enable global political manipulation?

When Sophie Zhang went public with explosive revelations detailing the political manipulation she’d uncovered during her time as a data scientist at Facebook, she supplied concrete evidence to support what critics had long been saying on the outside: that Facebook makes election interference easy, and that unless such activity hurts the company’s business interests, it can’t be bothered to fix the problem.

On her last day, hours after she posted her memo internally, Facebook deleted it (though the company later restored an edited version after widespread employee anger). A few hours later, an HR person called her, asking her to also remove a password-protected copy she had posted on her personal website. She tried to bargain: she would do so if Facebook restored the internal version. The next day, instead, she received a notice from her hosting provider that it had taken down her entire website after a complaint from Facebook. A few days after that, it took down her domain as well.

These are some of the biggest revelations in Zhang’s memo:

  • It took Facebook’s leaders nine months to act on a coordinated campaign “that used thousands of inauthentic assets to boost President Juan Orlando Hernandez of Honduras on a massive scale to mislead the Honduran people.” Two weeks after Facebook took action against the perpetrators in July, they returned, leading to a game of “whack-a-mole” between Zhang and the operatives behind the fake accounts, which are still active.
  • In Azerbaijan, Zhang discovered the ruling political party “utilized thousands of inauthentic assets… to harass the opposition en masse.” Facebook began looking into the issue a year after Zhang reported it. The investigation is ongoing.
  • Zhang and her colleagues removed “10.5 million fake reactions and fans from high-profile politicians in Brazil and the US in the 2018 elections.”
  • In February 2019, a NATO researcher informed Facebook that “he’d obtained Russian inauthentic activity on a high-profile U.S. political figure that we didn’t catch.” Zhang removed the activity, “dousing the immediate fire,” she wrote.
  • In Ukraine, Zhang “found inauthentic scripted activity” supporting both former prime minister Yulia Tymoshenko, a pro–European Union politician and former presidential candidate, and Volodymyr Groysman, a former prime minister and ally of former president Petro Poroshenko. “Volodymyr Zelensky and his faction was the only major group not affected,” Zhang said of the current Ukrainian president.
  • Zhang discovered inauthentic activity — a Facebook term for engagement from bot accounts and coordinated manual accounts — in Bolivia and Ecuador but chose “not to prioritize it” due to her workload. The amount of power she had as a mid-level employee to make decisions about a country’s political outcomes took a toll on her health.
  • After becoming aware of coordinated manipulation on the Spanish Health Ministry’s Facebook page during the COVID-19 pandemic, Zhang helped find and remove 672,000 fake accounts “acting on similar targets globally,” including in the US.
  • In India, she worked to remove “a politically sophisticated network of more than a thousand actors working to influence” the local elections taking place in Delhi in February. Facebook never publicly disclosed this network or that it had taken it down.

By speaking out and eschewing anonymity, Zhang risked legal action from the company, harm to her future career prospects, and perhaps even reprisals. Her story reveals that it is pure luck that we now know so much about how Facebook enables election interference globally. To regulators around the world considering how to rein in the company, this should be a wake-up call.

 

Source:

She risked everything to expose Facebook. Now she’s telling her story.

Social Media is a (Largely) Lawless Cesspool

Governments from Pakistan to Mexico to Washington are woefully unequipped to combat disinformation warfare. Eastern European countries living in Russia’s shadow can teach us how to start fighting back, but only if our politicians decide to stop profiting from these tactics and fight them instead.

By: Suzanne Smalley

A screenshot from a video shared widely on social media purporting to show a Hamas fighter downing a helicopter, which is actually pulled from the video game Arma 3.

Video game clips purporting to be footage of a Hamas fighter shooting down an Israeli helicopter. Phony X accounts spreading fake news through fictitious BBC and Jerusalem Post “journalists.” An Algerian fireworks celebration described as Israeli strikes.

These are just a few examples of the disinformation swirling around the conflict between Hamas and Israel, much of which has been enabled by X, formerly known as Twitter, as well as by Meta’s platforms and Telegram.

The platforms have also been used to terrorize. In one instance, a girl found out that a militant had killed her grandmother after he broadcast it on a Facebook livestream. Meta did not immediately reply to a request for comment.

X owner Elon Musk promoted two particularly virulent accounts spreading disinformation in a post that was viewed 11 million times before Musk deleted the tweet a few hours later.

One of those accounts, @sentdefender, was described by Digital Forensic Research Lab (DFRLab) expert Emerson Brooking as “absolutely poisonous” and often retweeted “uncritically.”

Read More: Hacktivists take sides in the Israel-Palestinian war

X removed some of the most blatantly fake tweets, often hours after they were posted, but purveyors of disinformation like @sentdefender still operate freely.

A spokesperson for X replied to a request for comment by saying to “check back later.”

The platform announced changes to its public interest policy over the weekend, according to a post on its safety channel. The post said X has seen an increase in “daily active users” based in the conflict area in the past few days and that more than 50 million posts worldwide have discussed the attack.

“We’re laser-focused and dedicated to protecting the conversation on X and enforcing our rules as we continue to assess the situation on the platform,” the post said.

The post said X will remove newly created Hamas-affiliated accounts. It also said it is coordinating with industry peers and the Global Internet Forum to Counter Terrorism (GIFCT) “to try and prevent terrorist content from being distributed online.”

X said it is “proactively monitoring” for antisemitic accounts and has “actioned” tens of thousands of posts sharing graphic media and violent and hateful speech.

On Tuesday, European Commissioner Thierry Breton sent a letter to Musk, cautioning that X is spreading “illegal content and disinformation.” The EU’s Digital Services Act (DSA) mandates that large online platforms such as X remove illegal content and take steps to quickly address their impact on the public.

“Given the urgency, I also expect you to be in contact with the relevant law enforcement authorities and Europol, and ensure that you respond promptly to their requests,” Breton wrote. He advised Musk that he would be following up on matters related to X’s compliance with DSA.

“I urge you to ensure a prompt, accurate, and complete response to this request within the next 24 hours,” Breton said.

Rooting out disinformation is made even harder by the growing trend of using video game and recycled news footage to promote falsehoods about the conflict, said Dina Sadek, a Middle East research fellow with DFRLab. Telegram has been a major vehicle for disinformation, she added, likely because it doesn’t restrict how often users can post and because content is delivered like a text message.


“The second that you think something happened you can give them a boost and give them pictures from the incident,” she told The Record. “There’s just the speed of how things happen when on messaging applications and some of those have large numbers of subscribers.”

Sadek said it is too soon to detect patterns in the disinformation being disseminated — both in terms of the amount and which side’s supporters are most active — but she said she has seen it emanate from all sides of the conflict.

Stanford disinformation scholar Herb Lin told Recorded Future News that he predicts the propaganda war will intensify significantly in the coming weeks, citing Russia’s likely support for Hamas, given its friendly relationship with Iran.

“They have a quick reaction disinformation force,” he said. “They have the ability to react promptly to this sort of stuff and the first people to get on the air tend to dominate the messages for a while.”

Learn more: https://guides.stlcc.edu/fakenews/spotfakenews

The New Cold War is Being Fought on Social Media

Russia is not just at war with Ukraine; it is also in a cold war with us. And last week Putin got a significant victory in that war, which is now being fought on the battleground of social media and the Internet.

Representative Matt Gaetz and Senator Rand Paul helped deliver Putin that victory by stripping aid for Ukraine out of the continuing resolution to keep our government funded for the next 45 days.

It was a clear signal from Republicans in Congress to Putin that if he can just hang on long enough, his propaganda efforts will eventually lead America to drop out and hand Ukraine over to Russia.

Today’s propaganda battle is primarily being fought on the Internet, principally on social media.

That’s where Russia’s now well-documented targeted efforts in six swing states (using secret, insider information from the 2016 Trump campaign, given to them by Paul Manafort) succeeded in pulling out a squeaker Electoral College victory for Donald Trump. It’s where they hope to repeat that in 2024.

It was also a signal to China, Japan, Australia, South and North Korea, and Taiwan that America can’t be trusted to defend allied democracies when they’re physically attacked by larger authoritarian states. By increasing the chances of an aggressor’s victory, the GOP’s continuing resolution encourages authoritarian states like Russia and China and, thus, makes the world less safe.

The Putin Republicans are being aided in this by social media companies owned by right-wing billionaire oligarchs — and their fossil fuel oligarch buddies funding the GOP in every state and federally — who are each richer than any king or pharaoh in history.

Given the media power these oligarchs and their monopolies have, it’s hard to offer any easy solutions to this threat now facing our democracy.

The Biden administration is awake to the threat: President Biden’s speech in Arizona last week explicitly called out the MAGA extremists in the GOP, and Democrats in Congress and in regulatory agencies are going after their monopolies.

Those efforts, though, will take years to reach fruition; after all, it was exactly 40 years ago this year that Reagan instructed his SEC, FTC, and DOJ to functionally stop enforcing our nation’s antitrust laws, so these monopolies have had four decades to reach astronomical levels of consolidation and wealth.

Any effort to take on the media giants is complicated by five corrupt Republicans on the Supreme Court having legalized political bribery in 2010 with their Citizens United decision.

So now it’s largely up to us to carry the message forward. You and me. People who value democracy and want to see a world safe from tyrants and wannabe tyrants like Putin, Xi, MBS, and Trump.

Source:

Is the New Warfare Battleground on Social Media and the Internet?

Creating an Alternate, Autocratic Reality

Musk, Thiel, Zuckerberg, and Andreessen are American oligarchs, controlling online access for billions of users on Facebook, Twitter, Threads, Instagram, and WhatsApp, including 80 percent of the US population. Moreover, from the outside, they appear to be more interested in replacing our current reality—and our economic system, imperfect as it is—with something far more opaque, concentrated, and unaccountable, which, if it comes to pass, they will control.

Their plan for your future involves nothing less than confronting the nihilism of a looming dystopia. And four of the projects they are pursuing to address their visions will need tens of trillions of dollars of (mostly public) investment capital over the next two decades.

These Technocrats make up a kind of interlocking directorate of Silicon Valley, each investing in or sitting on the boards of the others’ companies. Their vast digital domain controls your personal information; affects how billions of people live, work, and love; and sows online chaos, inciting mob violence and sparking runs on stocks. These four men have long been regarded as technologically progressive heroes, but they are actually part of a broader antidemocratic, authoritarian turn within the tech world, deeply invested in preserving the status quo and in keeping their market-leadership positions or near-monopolies—and their multi-billion-dollar fortunes—secure from higher taxes. (“Competition is for suckers,” Thiel once posited.)

Excerpted from The End of Reality: How Four Billionaires are Selling a Fantasy Future of the Metaverse, Mars, and Crypto by Jonathan Taplin. Copyright © 2023 by Jonathan Taplin. Printed with permission of Public Affairs, an imprint of Perseus Books LLC, a division of Hachette Book Group, Inc., New York, N.Y. All rights reserved.

Source:

How Musk, Thiel, Zuckerberg, and Andreessen—Four Billionaire Techno-Oligarchs—Are Creating an Alternate, Autocratic Reality

Help Us Investigate Surveillance Marketing Using Facebook Data

Surveillance marketers are upping their game. Instead of relying on tracking pixels, companies are now sending tracking data directly to one another.
"Companies may now be tracking you in a way that’s completely undetectable by users and their devices."The Markup has done extensive reporting on the Meta Pixel (previously the Facebook Pixel) and other tracking pixels in the last year, revealing that organizations—from hospitals to crisis hotlines to tax filing companies to the U.S. Department of Education—have sent sensitive data to Facebook. We’ve spurred congressional investigations, data breach notifications, and class action lawsuits. Dozens of organizations have removed the Meta Pixel from their websites as a result. We were able to do all of this because members of the public shared their data with us, through our “Facebook Pixel Hunt” study in partnership with Mozilla Rally. Those donations let us see how real people’s information ended up in Facebook’s hands as they surfed the web.
 
Now, we need your help again. Instead of relying on tracking pixels—which is web traffic that The Markup, Consumer Reports, and others can detect using tools in the browser—companies may now be tracking you in a way that’s completely undetectable by users and their devices…
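By way of illustration, here is a minimal sketch of what that server-to-server flow can look like, modeled on Meta's Conversions API; the pixel ID, access token, and visitor details below are placeholders, and the field names follow Meta's published examples rather than any specific company's integration. Because the request originates from the website's own server rather than the visitor's browser, no tool running on the visitor's device can observe it:

```python
import hashlib
import time

import requests  # third-party HTTP client; `pip install requests`

# Placeholder values: a real integration would use the site's own
# pixel ID and an access token issued by Meta.
PIXEL_ID = "1234567890"
ACCESS_TOKEN = "EAAB-placeholder-token"

def sha256(value: str) -> str:
    """Normalize and hash an identifier, as Meta's documentation describes."""
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

# One tracked event, assembled on the website's server.
payload = {
    "data": [
        {
            "event_name": "Lead",              # e.g., a submitted contact form
            "event_time": int(time.time()),
            "action_source": "website",
            "user_data": {
                "em": [sha256("visitor@example.com")],  # hashed email
                "external_id": [sha256("user-4711")],   # site's own user ID
            },
        }
    ]
}

# Sent server-to-server: nothing in the visitor's browser ever sees this.
resp = requests.post(
    f"https://graph.facebook.com/v18.0/{PIXEL_ID}/events",
    params={"access_token": ACCESS_TOKEN},
    json=payload,
    timeout=10,
)
print(resp.status_code, resp.text)
```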

Source:

Help Us Investigate Surveillance Marketing Using Facebook Data – The Markup

Suicide Hotlines Promise Anonymity. Dozens of Their Websites Send Sensitive Data to Facebook

The Markup found that many sites tied to the national mental health crisis hotline transmitted information on visitors through the Meta Pixel

By: Colin Lecher and Jon Keegan

Originally published on themarkup.org

This article was copublished with STAT, a national publication that delivers trusted and authoritative journalism about health, medicine, and the life sciences.

Websites for mental health crisis resources across the country—which promise anonymity for visitors, many of whom are at a desperate moment in their lives—have been quietly sending sensitive visitor data to Facebook, The Markup has found.

Dozens of websites tied to the national mental health crisis 988 hotline, which launched last summer, transmit the data through a tool called the Meta Pixel, according to testing conducted by The Markup. That data often included signals to Facebook when visitors attempted to dial for mental health emergencies by tapping on dedicated call buttons on the websites.

In some cases, filling out contact forms on the sites transmitted hashed but easily unscrambled names and email addresses to Facebook.

The Markup tested 186 local crisis center websites under the umbrella of the national 988 Suicide and Crisis Lifeline. Calls to the national 988 line are routed to these centers based on the area code of the caller. The organizations often also operate their own crisis lines and provide other social services to their communities.

The Markup’s testing revealed that more than 30 crisis center websites employed the Meta Pixel, formerly called the Facebook Pixel. The pixel, a short snippet of code included on a webpage that enables advertising on Facebook, is a free and widely used tool. A 2020 Markup investigation found that 30 percent of the web’s most popular sites use it.

The pixels The Markup found tracked visitor behavior to different degrees. All of the sites recorded that a visitor had viewed the homepage, while some captured more potentially sensitive information.

Many of the sites included buttons that allowed users to directly call either 988 or a local line for mental health help. But clicking on those buttons often triggered a signal to be sent to Facebook that shared information about what a visitor clicked on. A pixel on one site sent data to Facebook on visitors who clicked a button labeled “24-Hour Crisis Line” that called local crisis services.

Clicking a button or filling out a form also sometimes sent personally identifiable data, such as names or unique ID numbers, to Facebook.

The website for the Volunteers of America Western Washington is a good example. The social services nonprofit says it responds to more than 300,000 requests for assistance each year. When a web user visited the organization’s website, a pixel on the homepage noted the visit.

If the visitor then tried to call the national 988 crisis hotline through the website by clicking on a button labeled “call or text 988,” that click—including the text on the button—was sent to Facebook. The click also transmitted an “external ID,” a code that Facebook uses to attempt to match web users to their Facebook accounts.
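As an illustration of what such a signal can contain, here is a sketch with placeholder values, its parameter names modeled on how Meta Pixel requests typically appear in a browser's network log rather than on any exact specification:

```python
from urllib.parse import urlencode

# Placeholder values; parameter names are modeled on what Meta Pixel
# requests typically look like in a browser's network log.
params = {
    "id": "1234567890",                          # the site's pixel ID
    "ev": "SubscribedButtonClick",               # automatic click-tracking event
    "dl": "https://example-crisis-center.org/",  # the page the visitor was on
    "cd[buttonText]": "call or text 988",        # text of the button clicked
    "eid": "a1b2c3d4",                           # an "external ID" used for matching
}

# The pixel script fires this as an ordinary image/script request,
# so it travels with the rest of the page's web traffic.
print("https://www.facebook.com/tr/?" + urlencode(params))
```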

If a visitor filled out a contact form on the Volunteers of America Western Washington’s homepage, even more private information was transmitted to Facebook. After filling out and sending the form, a pixel transmitted hashed, or scrambled, versions of the person’s first and last name, as well as their email address. Volunteers of America Western Washington did not respond to requests for comment.

The Markup found similar activity on other sites.

The Contra Costa Crisis Center, an organization providing social services in Northern California, noted to Facebook when a user clicked on a button to call or text for crisis services. About 3,000 miles away, in Rhode Island, an organization called BH Link used a pixel that also pinged Facebook when a visitor clicked a button to call crisis services from its homepage.

Facebook can use data collected by the pixel to link website visitors to their Facebook accounts, but the data is collected whether or not the visitor has a Facebook account. Although the names and email addresses sent to Facebook were hashed, they can be easily unscrambled with free and widely available web services.
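The reason hashing offers so little protection here is that standard hash functions are deterministic: the same email always produces the same hash, so "unscrambling" is simply a matter of hashing candidate addresses and comparing. A minimal sketch, using made-up addresses:

```python
import hashlib

def sha256(value: str) -> str:
    # Trackers typically normalize (trim, lowercase) before hashing.
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

# The hashed value a tracker received (hypothetical example).
observed = sha256("jane.doe@example.com")

# "Unscrambling" is a dictionary lookup: hash a list of candidate
# addresses (leaked email lists are abundant) and compare.
candidates = ["john.smith@example.com", "jane.doe@example.com"]
matches = [c for c in candidates if sha256(c) == observed]
print(matches)  # -> ['jane.doe@example.com']
```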

After The Markup contacted the 33 crisis centers about their practices, some said they were unaware that the code was on their sites and that they’d take steps to remove it.

“This was not intentional and thank you for making us aware of the potential issue,” Leo Pellerin, chief information officer for the United Way of Connecticut, a partner in the national 988 network, said in an emailed statement. Pellerin said they had removed the code, which they attributed to a plug-in on their website.

Lee Flinn, director of the Idaho Crisis and Suicide Hotline, said in an email that she had “never heard of Meta Pixel” and was asking the outside vendor who had worked on the organization’s site to remove the code. “We value the privacy of individuals who reach out to us, and any tracking devices are not intentional on our part, nor did we ask any developer to install them,” she said. “Anything regarding tracking that is found will be immediately removed.”

Ken Gibson, a spokesperson for the Crisis Center of Tampa Bay, said the organization had recently placed the pixel on its site to advertise for staff but would now reduce the information the pixel gathers to only careers pages on the site.

In follow-up tests, four organizations appeared to have completely removed the code. The majority of the centers we contacted did not respond to requests for comment.

“Advertisers should not send sensitive information about people through our Business Tools,” Meta spokesperson Emil Vazquez told The Markup in an emailed statement that mirrored those the company has previously provided in response to reporting on the Meta Pixel. “Doing so is against our policies and we educate advertisers on properly setting up Business tools to prevent this from occurring. Our system is designed to filter out potentially sensitive data it is able to detect.”

Vazquez did not respond to a question about whether or how Meta could determine if this specific data was filtered.

There is no evidence that either Facebook or any of the crisis centers themselves attempted to identify visitors or callers, or that an actual human ever identified someone who attempted to call for help through a website. Some organizations explicitly said in response to The Markup’s requests for comment that they valued the anonymity promised by the 988 line.

Mary Claire Givelber, executive director of New Jersey–based Caring Contact, said in an email that the organization had briefly used the pixel to recruit volunteers on Facebook but would now remove it.

“For the avoidance of all doubt, Caring Contact has not used the Meta Pixel to identify, target, or advertise to any potential or actual callers or texters of the Caring Contact crisis hotline,” Givelber said.

Meta can use information gathered from its tools for its own purposes, however, and data sent to the company through the pixels scattered across the web enters a black box that can catalog and organize data with little oversight.

Divendra Jaffar, a spokesperson for Vibrant Emotional Health, the nonprofit responsible for administering the national 988 crisis line, pointed out in an emailed statement that data transmitted through the pixel is encrypted.

“While Vibrant Emotional Health does not require our 988 Lifeline network of crisis centers to provide updates on their marketing and advertising practices, we do provide best practices guidelines to our centers, counselors, and staff and hold them to rigorous operating standards, which are reviewed and approved by our government partners,” Jaffar said.

The organization did not respond to a request to provide any relevant best practices.

Jen King, the privacy and data policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, said in an interview that, regardless of the reasons, Meta is gathering far too much data through its tools.

“Even if this is accidental still on the part of the developers, you shouldn’t still be able to fall into this trap,” she said. “The time has long passed when you can use that excuse.”

The Pixel and Sensitive Data 

Meta, Facebook’s parent company, offers the pixel as a way to track visitors on the web and to more precisely target ads to those visitors on Facebook. For businesses and other organizations, it’s a valuable tool: A small company can advertise on Facebook directly to people who purchased a certain product, for example, or a nonprofit could follow up on Facebook with users who donated on their last visit to a website.

One organization, the Minnesota-based Greater Twin Cities United Way, said it did not use its website to reach out to potential 988 callers but instead focused on “donors and other organizational stakeholders.” Sam Daub, integrated marketing manager of the organization, said in an emailed statement that the organization uses tools like the pixel “to facilitate conversion-tracking and content retargeting toward users who visit our website” to reach those people but did not track the specific activity of 988 callers.

Apart from encouraging users to buy ads, this sort of data is also potentially valuable to Meta, which, in accordance with its terms of service, can use the information to power its algorithms. The company reserves the right to use data transmitted through the pixel to, for instance, “personalize the features and content (including ads and recommendations) that we show people on and off our Meta Products.” (This is one of the reasons an online shopper might look at a pair of pants online and suddenly see the same pair follow them in advertisements across social media.)

The pixel has proved massively popular. The company told Congress in 2018 that there were more than two million pixels collecting data across the web, a number that has likely increased in the time since. There is no federal privacy legislation in the United States that regulates how most of that data can be used.

Meta’s policies prohibit organizations from using the pixel to send sensitive information about children under 13, or generally any data related to sensitive financial or health matters. The company says it has an automated system “designed to filter out potentially sensitive data that it detects” but that it is advertisers’ responsibility to “ensure that their integrations do not send sensitive information to Meta.”

In practice, however, The Markup has found several major services have sent sensitive information to Facebook. As part of a project in partnership with Mozilla Rally called the Pixel Hunt, The Markup found pixels transmitting information from sources including the Department of Education, prominent hospitals, and major tax preparation companies. Many of those organizations have since changed how or whether they use the pixel, while lawmakers have questioned the companies involved about their practices. Meta is now facing several lawsuits over the incidents.

The types of sensitive health information Meta specifically prohibits being sent include information on “mental health and psychological states” as well as “physical locations that identify a health condition, or places of treatment/counseling.” Vazquez did not directly respond to a question about whether the data sent from the crisis centers violated Meta’s policies.

There is evidence that even Meta itself can’t always say where that data ends up. In a leaked document obtained and published by Vice’s Motherboard, company engineers said they did not “have an adequate level of control and explainability over how our systems use data.” The document compared user data to a bottle of ink spilled into a body of water that then becomes unrecoverable.

“The original use cases [for the pixel] perhaps weren’t quite so invasive, or people weren’t using it so widely,” King said but added that, at this point, Meta is “clearly grabbing way too much data.”

988 History and Controversy

The national 988 crisis line is the result of a years-long effort by the Federal Communications Commission to provide a simple, easy-to-remember, three-digit number for people experiencing a mental health crisis.

Crisis lines are an enormously important social service—one that research has found can deter people from suicide. The new national line, largely a better-funded, more accessible version of the long-running National Suicide Prevention Lifeline, answered more than 300,000 calls, chats, and texts between its launch in the summer of last year and January.

But the launch of 988 has been accompanied by questions about privacy and anonymity, mostly around how or whether callers to the line can ever be tracked by emergency services. The national line is advertised as an anonymous service, but in the past callers have said they’ve been tracked without their consent when calling crisis lines. Police have sometimes responded directly, leading to harrowing encounters.

The current 988 line doesn’t track users through geolocation technology, according to the service, although counselors are required to provide information to emergency services like 911 in certain situations. That requirement has been the source of controversy, and groups like Trans Lifeline, a nonprofit crisis hotline serving the trans community, have stepped away from the network.

The organization has launched a campaign to bring the issue more prominence. Yana Calou, the director of advocacy at Trans Lifeline, told The Markup in an interview that there are some lines that “really explicitly don’t” track, and the campaign is meant to direct people to those lines instead. (Trans Lifeline, which is not involved in the national 988 network, also uses the Meta Pixel on its site. After being alerted by The Markup, a Trans Lifeline spokesperson, Nemu HJ, said they would remove the code from the site.)

Data-sharing practices have landed other service providers in controversy as well. Last year, Politico reported that the nonprofit Crisis Text Line, a popular mental health service, was partnering with a for-profit spinoff that used data gleaned from text conversations to market customer-service software. The organization quickly ended the partnership after it was publicly revealed.

Having a space where there’s a sense of trust between a caller and an organization can make all the difference in an intervention, Calou said. “Actually being able to have people tell us the truth about what’s going on lets people feel like they can get support,” they said.

This article was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.

© 2024 CounterPoint