Thanks to Facebook, Your Cellphone Company Is Watching You More Closely Than Ever
WHISTLEBLOWING - SURVEILLANCE, 27 May 2019
20 May 2019 – Among the mega-corporations that surveil you, your cellphone carrier has always been one of the keenest monitors, in constant contact with the one small device you keep on you at almost every moment. A confidential Facebook document reviewed by The Intercept shows that the social network courts carriers, along with phone makers — some 100 different companies in 50 countries — by offering the use of even more surveillance data, pulled straight from your smartphone by Facebook itself.
Offered to select Facebook partners, the data includes not just technical information about Facebook members’ devices and use of Wi-Fi and cellular networks, but also their past locations, interests, and even their social groups. This data is sourced not just from the company’s main iOS and Android apps, but from Instagram and Messenger as well. The data has been used by Facebook partners to assess their standing against competitors, including customers lost to and won from them, but also for more controversial uses like racially targeted ads.
Some experts are particularly alarmed that Facebook has marketed the use of the information — and appears to have helped directly facilitate its use, along with other Facebook data — for the purpose of screening customers on the basis of likely creditworthiness. Such use could run afoul of federal law, which tightly governs credit assessments.
Facebook said it does not provide creditworthiness services and that the data it provides to cellphone carriers and makers does not go beyond what it was already collecting for other uses.
Facebook’s cellphone partnerships are particularly worrisome because of the extensive surveillance powers already enjoyed by carriers like AT&T and T-Mobile: Just as your internet service provider is capable of watching the data that bounces between your home and the wider world, telecommunications companies have a privileged vantage point from which they can glean a great deal of information about how, when, and where you’re using your phone. AT&T, for example, states plainly in its privacy policy that it collects and stores information “about the websites you visit and the mobile applications you use on our networks.” Paired with carriers’ calling and texting oversight, that accounts for just about everything you’d do on your smartphone.
An Inside Look at “Actionable Insights”
You’d think that degree of continuous monitoring would be more than sufficient for a communications mammoth to operate its business — and perhaps for a while it was. But Facebook’s “Actionable Insights,” a corporate data-sharing program, suggests that even the incredible visibility telecoms have into your daily life isn’t enough — and Zuckerberg et al. can do them one better. Actionable Insights was announced last year in an innocuous, easy-to-miss post on Facebook’s engineering blog. The article, titled “Announcing tools to help partners improve connectivity,” strongly suggested that the program was primarily aimed at solving weak cellular data connections around the world. “To address this problem,” the post began, “we are building a diverse set of technologies, products, and partnerships designed to expand the boundaries of existing connectivity quality and performance, catalyze new market segments, and bring better access to the unconnected.” What sort of monster would stand against better access for the unconnected?
The blog post makes only a brief mention of Actionable Insights’ second, less altruistic purpose: “enabling better business decisions” through “analytics tools.” According to materials reviewed by The Intercept and a source directly familiar with the program, the real boon of Actionable Insights lies not in its ability to fix spotty connections, but to help chosen corporations use your personal data to buy more tightly targeted advertising.
The source, who discussed Actionable Insights on the condition of anonymity because they were not permitted to speak to the press, explained that Facebook has offered the service to carriers and phone makers ostensibly free of charge, with access to Actionable Insights granted as a sweetener for advertising relationships. According to the source, the underlying value of granting such gratis access to Actionable Insights in these cases isn’t simply to better serve cell customers with weak signals, but also to ensure that telecoms and phone makers keep buying more and more carefully targeted Facebook ads. It’s exactly this sort of quasi-transactional data access that’s become a hallmark of Facebook’s business, allowing the company to plausibly deny that it ever sells your data while still leveraging it for revenue. Facebook may not be “selling” data through Actionable Insights in the most baldly literal sense of the word — there’s no briefcase filled with hard drives being swapped for one containing cash — but the relationship based on spending and monetization certainly fits the spirit of a sale. A Facebook spokesperson declined to answer whether the company charges for Actionable Insights access.
The confidential Facebook document provides an overview of Actionable Insights and espouses its benefits to potential corporate users. It shows how the program, ostensibly created to help improve service for underserved cellular customers, is pulling in far more data than how many bars you’re getting. According to one portion of the presentation, the Facebook mobile app harvests and packages eight different categories of information for use by over 100 different telecom companies in over 50 different countries around the world, including usage data from the phones of children as young as 13. These categories include use of video, demographics, location, use of Wi-Fi and cellular networks, personal interests, device information, and friend homophily, an academic term of art. A 2017 article on social media friendship from the journal of the Society of Multivariate Experimental Psychology defined “homophily” in this context as “the tendency of nodes to form relations with those who are similar to themselves.” In other words, Facebook is using your phone to provide behavioral data to cellphone carriers not only about you, but about your friends as well.
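As a purely illustrative sketch — not anything taken from the Facebook document, and not Facebook’s actual method — one common way researchers quantify “edge homophily” is the share of friendships that connect people who share an attribute. The friend graph, interests, and metric choice in the short Python example below are hypothetical assumptions:

# Hypothetical sketch: "edge homophily" as the fraction of friendships
# linking people who share an attribute. Names and interests are made up.
friendships = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("carol", "dave"), ("dave", "erin"), ("erin", "frank"),
]
interest = {
    "alice": "hiking", "bob": "hiking", "carol": "hiking",
    "dave": "gaming", "erin": "gaming", "frank": "hiking",
}
same_interest = sum(1 for a, b in friendships if interest[a] == interest[b])
homophily = same_interest / len(friendships)
print(f"{homophily:.2f} of friendships connect people with the same interest")

A share well above what random mixing would produce means a user’s friends are strongly predictive of the user, which is what makes “friend homophily” valuable to an advertiser even when any individual’s data is nominally aggregated.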
From these eight categories alone, a third party could learn an extraordinary amount about the patterns of users’ daily lives, and although the document claims that the data collected through the program is “aggregated and anonymized,” academic studies have found time and again that so-called anonymized user data can be easily de-anonymized. Today, such claims of anonymization and aggregation are essentially boilerplate from companies that wager you’ll be comfortable with them possessing a mammoth trove of personal observations and behavioral predictions about your past and future if the underlying data is sufficiently neutered and grouped with your neighbor’s.
A Facebook spokesperson told The Intercept that Actionable Insights doesn’t collect any data from user devices that wasn’t already being collected anyway. Rather, this spokesperson said Actionable Insights repackages the data in novel ways useful to third-party advertisers in the telecom and smartphone industries.
Materials reviewed by The Intercept show demographic information presented in a dashboard-style view, with maps showing customer locations at the county and city level. A Facebook spokesperson said they “didn’t think it goes more specific than zip code.” But armed with location data beamed straight from your phone, Facebook could technically provide customer locations accurate to within a few meters, indoors or out.
Targeting By Race and Likely Creditworthiness
Facebook repeatedly assures users that their information is completely anonymized and aggregated, but the Actionable Insights materials undermine this claim. One Actionable Insights case study from the overview document touts how an unnamed North American cellular carrier had previously used its Actionable Insights access to target a specific, unnamed racial group. Facebook’s targeting of “multicultural affinity groups,” as the company formerly referred to race, was discontinued in 2017 after the practice was widely criticized as potentially discriminatory.
Another case study described how Actionable Insights can be used to single out individual customers on the basis of creditworthiness. In this example, Facebook explained how one of its advertising clients, based outside the U.S., wanted to exclude individuals from future promotional offers on the basis of their credit. Using data provided through Actionable Insights, a Data Science Strategist, a role for which Facebook continues to hire, was able to generate profiles of customers with desirable and undesirable credit standings. The advertising client then used these profiles to target or exclude Facebook users who resembled these profiles.
The use of so-called lookalike audiences is common in digital advertising, allowing marketers to take a list of existing customers and let Facebook match them to users that resemble the original list based on factors like demographics and stated interests. As Facebook puts it in an online guide for advertisers, “a Lookalike Audience is a way to reach new people who are likely to be interested in your business because they’re similar to your best existing customers.”
But these lookalike audiences aren’t just potential new customers — they can also be used to exclude unwanted customers in the future, creating a sort of ad targeting demographic blacklist.
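To make the mechanism concrete, here is a rough, hypothetical Python sketch of lookalike-style matching. It is not Facebook’s actual algorithm, whose features, similarity measure, and thresholds are not public; every vector and number below is invented:

import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def lookalike(seed, users, threshold=0.95):
    # Return ids of users whose average similarity to the seed list clears
    # the threshold; the caller decides whether to target or exclude them.
    matches = []
    for uid, vec in users.items():
        score = sum(cosine(vec, s) for s in seed) / len(seed)
        if score >= threshold:
            matches.append(uid)
    return matches

# Invented feature vectors, e.g. [video hours, late-night usage, travel frequency].
users = {"u1": [0.9, 0.1, 0.4], "u2": [0.2, 0.8, 0.1], "u3": [0.85, 0.15, 0.5]}
best_customers = [[0.9, 0.1, 0.45]]     # seed: the advertiser's best customers
risky_profiles = [[0.2, 0.9, 0.1]]      # seed: customers the advertiser deems credit risks

audience = lookalike(best_customers, users)     # show these users the ad
blacklist = lookalike(risky_profiles, users)    # never show these users the offer
print("target:", audience, "exclude:", blacklist)

The same scoring step serves both purposes: seeded with an advertiser’s best customers it produces an audience to court, and seeded with profiles of customers deemed credit risks it produces a list of people who will simply never see the offer.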
By promoting this technique in its confidential document, Facebook markets to future corporate clients the ability to target individuals on the basis of likely creditworthiness, drawing at least in part on behavioral data pulled from their phones, and it appears to have worked with the advertising client to enable exactly that. In other words, advertisers get to decide who deserves to view an ad based on an invisible and entirely inscrutable mechanism.
There’s no indication of how exactly Facebook’s data could be used by a third party to determine who is creditworthy, nor has the company ever indicated that how you use its products influences whether you’ll be singled out and excluded from certain offers in the future. Perhaps it’s as simple as Facebook enabling companies to say “people with bad credit look and act like this on social networks,” a kind of correlational profiling quite different from the commonsense personal finance hygiene we’re told keeps our credit scores polished. How consumers are expected to navigate this invisible, unofficial credit-scoring process, given that they’re never informed of its existence, remains an open question.
This mechanism is also reminiscent of so-called redlining, the historical (and now illegal) practice of denying mortgages and other loans to marginalized groups on the basis of their demographics, according to Ashkan Soltani, a privacy researcher and former chief technologist of the Federal Trade Commission.
The thought of seeing fewer ads from Facebook might strike some as an unalloyed good — it certainly seems to beat the alternative. But credit reporting, dull as it might sound, is an enormously sensitive practice with profound economic consequences, determining who can and can’t, say, own or rent a home, or get easy financial access to a new cellphone. Facebook here seems to be allowing companies to reach you on the basis of a sort of unofficial credit score, a gray-market determination of whether you’re a good consumer based on how much you and your habits resemble a vast pool of strangers.
In an initial conversation, a Facebook spokesperson stated that the company does “not provide creditworthiness services, nor is that a feature of Actionable Insights.” When asked if Actionable Insights facilitates the targeting of ads on the basis of creditworthiness, the spokesperson replied, “No, there isn’t an instance where this is used.” It’s difficult to reconcile this claim with the fact that Facebook’s own promotional materials tout how Actionable Insights can enable a company to do exactly this. Asked about this apparent inconsistency between what Facebook tells advertising partners and what it told The Intercept, the company declined to discuss the matter on the record, but provided the following statement: “We do not, nor have we ever, rated people’s credit worthiness for Actionable Insights or across ads, and Facebook does not use people’s credit information in how we show ads.” Crucially, this statement doesn’t contradict the practice of Facebook enabling others to do this kind of credit-based targeting using the data it provides. The fact that Facebook promoted this use of its data as a marketing success story certainly undermines the idea that it does not serve ads targeted on the basis of credit information.
A Facebook spokesperson declined to answer whether the company condones or endorses advertising partners using Facebook user data for this purpose, or whether it audits how Actionable Insights is used by third parties, but noted its partners are only permitted to use Actionable Insights for “internal” purposes and agree not to share the data further. The spokesperson did not answer whether the company believes that this application of Actionable Insights data is compliant with the Fair Credit Reporting Act.
According to Joel Reidenberg, a professor and director of Fordham’s Center on Law and Information Policy, Facebook’s credit-screening business seems to inhabit a fuzzy nether zone with regard to the FCRA, neither matching the legal definition of a credit agency nor falling outside the activities the law was meant to regulate. “It sure smells like the prescreening provisions of the FCRA,” Reidenberg told The Intercept. “From a functional point of view, what they’re doing is filtering Facebook users on creditworthiness criteria and potentially escaping the application of the FCRA.” Reidenberg questioned the potential for Facebook to invisibly incorporate data on race, gender, or marital status in its screening process, exactly the sort of practice that made legislation like the FCRA necessary in the first place. Reidenberg explained that there are “all sorts of discrimination laws in terms of granting credit,” and that Facebook “may also be in a gray area with respect to those laws because they’re not offering credit, they’re offering an advertising space,” a distinction he described as “a very slippery slope.” An academic study published in April found that Facebook’s ad display algorithms were inherently biased with regard to gender and race.
Reidenberg also doubted whether Facebook would be exempt from regulatory scrutiny if it’s providing data to a third party that’s later indirectly used to exclude people based on their credit, rather than doing the credit score crunching itself, à la Equifax or Experian. “If Facebook is providing a consumer’s data to be used for the purposes of credit screening by the third party, Facebook would be a credit reporting agency,” Reidenberg explained. “The [FCRA] statute applies when the data ‘is used or expected to be used or collected in whole or in part for the purpose of serving as a factor in establishing the consumer’s eligibility for … credit.’” If Facebook is providing data about you and your friends that eventually ends up in a corporate credit screening operation, “It’s no different from Equifax providing the data to Chase to determine whether or not to issue a credit card to the consumer,” according to Reidenberg.
An FTC spokesperson declined to comment.
Chris Hoofnagle, a privacy scholar at the University of California, Berkeley School of Law, told The Intercept that this sort of consumer rating scheme has worrying implications for matters far wider than whether T-Mobile et al. will sell you a discounted phone. For those concerned with their credit score, the path to virtue has always been a matter of commonsense personal finance savvy. The jump from conventional wisdom like “pay your bills on time” to completely inscrutable calculations based on Facebook’s observation of your smartphone usage and “friend homophily” isn’t exactly intuitive. “We’re going to move to a world where you won’t know how to act,” said Hoofnagle. “If we think about the world as rational consumers engaged in utility maximization in the world, what we’re up against is this, this shadow system. How do you compete?”
________________________________________________
Sam Biddle – sam.biddle@theintercept.com
Go to Original – theintercept.com
Tags: Big Brother, Big tech, Capitalism, Economics, Facebook, Justice, Media, Power, Surveillance