

Facebook has paused election reminders in Europe after data watchdog raises transparency concerns

19:35 | 27 February

Big tech’s lead privacy regulator in Europe has intervened to flag transparency concerns about a Facebook election reminder feature — asking the tech giant to provide it with information about what data it collects from users who interact with the notification and how their personal data is used, including whether it’s used for targeting them with ads.

Facebook confirmed to TechCrunch it has paused use of the election reminder feature in the European Union while it works on addressing the Irish Data Protection Commission (DPC)’s concerns.

Facebook’s Election Day Reminder (EDR) feature is a notification the platform can display to users on the day of an election — ostensibly to encourage voter participation. However, as ever with the data-driven ad business, there’s a whole wrapper of associated questions about what information Facebook’s platform might be harvesting when it chooses to deploy the nudge (and how the ad business is making use of the data).

In an FAQ on its website about the election reminder, Facebook writes vaguely that users “may see reminders and posts about elections and voting”.

Facebook does not explain what criteria it uses to determine whether to target (or not to target) a particular user with an election reminder.

Yet a study carried out by Facebook in 2012, working with academics from the University of California at San Diego, found an election day reminder sent via its platform on the day of the 2010 US congressional elections boosted voter turnout by about 340,000 people — which has led to concern that selective deployment of election reminders by Facebook could have the potential to influence poll outcomes.

Facebook could, for example, choose to target an election reminder at certain types of users who it knows, via its profiling of them, are likely to lean towards voting a particular way; or the reminder could be targeted at key regions where a poll result could be swung with a small shift in voter turnout. So the lack of transparency around how the tool is deployed by Facebook is also concerning.

Under EU law, entities processing personal data that reveals political opinions must also meet a higher standard of regulatory compliance for this so-called “special category data” — including around transparency and consent. (If relying on user consent to collect this type of data it would need to be explicit — requiring a clear, purpose-specific statement that the user affirms, for instance.)

In a statement today the DPC writes that it notified Facebook of a number of “data protection concerns” related to the EDR ahead of the recent Irish General Election — which took place February 8 — raising particular concerns about “transparency to users about how personal data is collected when interacting with the feature and subsequently used by Facebook”.

The DPC said it asked Facebook to make some changes to the feature but because these “remedial actions” could not be implemented in advance of the Irish election it says Facebook decided not to activate the EDR during that poll.

We understand the main issue for the regulator centers on the provision of in-context transparency for users on how their personal data would be collected and used when they engaged with the feature — such as the types of data being collected and the purposes the data is used for, including whether it’s used for advertising purposes.

In its statement, the DPC says that following its intervention Facebook has paused use of the EDR across the EU, writing: “Facebook has confirmed that the Election Day Reminder feature will not be activated during any EU elections pending a response to the DPC addressing the concerns raised.”

It’s not clear how long this intervention-triggered pause will last — neither the DPC nor Facebook have given a timeframe for when the transparency problems might be resolved.

We reached out to Facebook with questions on the DPC’s intervention.

The company sent this statement, attributed to a spokesperson:

We are committed to processing people’s information lawfully, fairly, and in a transparent manner. However, following concerns raised by the Irish Data Protection Commission around whether we give users enough information about how the feature works, we have paused this feature in the EU for the time being. We will continue working with the DPC to address their concerns.

“We believe that the Election Day reminder is a positive feature which reminds people to vote and helps them find their polling place,” Facebook added.

Forthcoming elections in Europe include Slovak parliamentary elections this month; North Macedonian and Serbian parliamentary elections, which are due to take place in April; and UK local elections in early May.

The intervention by the Irish DPC against Facebook is the second such public event in around a fortnight — after the regulator also published a statement revealing it had raised concerns about Facebook’s planned launch of a dating feature in the EU.

That launch was also put on ice following its intervention, although Facebook claimed it chose to postpone the rollout to get the launch “right”, while the DPC said it is waiting for adequate responses and expects the feature won’t be launched before it gets them.

It looks like public statements of concern could be a new tactic by the regulator to try to address the sticky challenge of reining in big tech.

The DPC is certainly under huge pressure to deliver key decisions to prove that the EU’s flagship General Data Protection Regulation (GDPR) is functioning as intended. Critics say it’s taking too long, even as its case load continues to pile up.

No GDPR decisions on major cases involving tech giants including Facebook and Google have yet been handed down in Dublin — despite the GDPR fast approaching its second birthday.

At the same time it’s clear tech giants have no shortage of money, resources and lawyers to inject friction into the regulatory process — with the aim of slowing down any enforcement.

So it’s likely the DPC is looking for avenues to bag some quick wins — by making more of its interventions public, and thereby putting pressure on a major player like Facebook to respond to the publicity generated when it goes public with its “concerns”.

 



Facebook’s latest ‘transparency’ tool doesn’t offer much — so we went digging

21:24 | 25 February

Just under a month ago Facebook switched on global availability of a tool which affords users a glimpse into the murky world of tracking that its business relies upon to profile users of the wider web for ad targeting purposes.

Facebook is not going boldly into transparent daylight — but rather offering what privacy rights advocacy group Privacy International has dubbed “a tiny sticking plaster on a much wider problem”.

The problem it’s referring to is the lack of active and informed consent for mass surveillance of Internet users via background tracking technologies embedded into apps and websites, including as people browse outside Facebook’s own content garden.

The dominant social platform is also only offering this feature in the wake of the 2018 Cambridge Analytica data misuse scandal, when Mark Zuckerberg faced awkward questions in Congress about the extent of Facebook’s general web tracking. Since then policymakers around the world have dialled up scrutiny of how its business operates — and realized there’s a troubling lack of transparency in and around adtech generally and Facebook specifically.

Facebook’s tracking pixels and social plugins — aka the share/like buttons that pepper the mainstream web — have created a vast tracking infrastructure which silently informs the tech giant of Internet users’ activity, even when a person hasn’t interacted with any Facebook-branded buttons.
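To make the mechanism concrete, here is a minimal, purely illustrative sketch of how pixel-style tracking fires when a page loads. The endpoint and parameter names are placeholders, not Facebook's actual pixel implementation.

```ts
// Illustrative sketch of pixel-style tracking on page load. The endpoint and
// parameter names are placeholders, not Facebook's actual implementation.
function firePixel(trackerOrigin: string, eventName: string): void {
  const params = new URLSearchParams({
    ev: eventName,             // e.g. "PageView"
    dl: window.location.href,  // the page the visitor is reading
    rl: document.referrer,     // where they arrived from
    ts: Date.now().toString(),
  });
  // Requesting a 1x1 image is enough: the browser attaches any cookies it
  // already holds for trackerOrigin, letting the tracker tie this page view
  // to an existing profile without the visitor clicking anything.
  new Image().src = `${trackerOrigin}/tr?${params.toString()}`;
}

firePixel("https://tracker.example", "PageView");
```

Social plugins work similarly: merely loading the widget triggers a request to the provider's servers carrying the page URL and any cookies the browser already holds for that provider, whether or not the button is ever clicked.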

Facebook claims this is just ‘how the web works’. And other tech giants are similarly engaged in tracking Internet users (notably Google). But as a platform with 2.2BN+ users Facebook has stolen a march on the lion’s share of rivals when it comes to harvesting people’s data and building out a global database of person profiles.

It’s also positioned as a dominant player in an adtech ecosystem which means it’s the one being fed with intel by data brokers and publishers who deploy tracking tech to try to survive in such a skewed system.

Meanwhile the opacity of online tracking means the average Internet user is none the wiser that Facebook can be following what they’re browsing all over the Internet. Questions of consent loom very large indeed.

Facebook is also able to track people’s usage of third party apps if a person chooses a Facebook login option, which the company encourages developers to implement in their apps — the carrot being a lower friction sign-in choice versus requiring users to create yet another login credential.

The price for this ‘convenience’ is data and user privacy, as the Facebook login gives the tech giant a window into third party app usage.

The company has also used a VPN app it bought and badged as a security tool to glean data on third party app usage — though it recently stepped back from the Onavo app after a public backlash (which did not stop it running a similar tracking program targeted at teens).

Background tracking is how Facebook’s creepy ads function (it prefers to call such behaviorally targeted ads ‘relevant’) — and how they have functioned for years.

Yet it’s only in recent months that it’s offered users a glimpse into this network of online informers — by providing limited information about the entities that are passing tracking data to Facebook, as well as some limited controls.

From ‘Clear History’ to ‘Off-Facebook Activity’

Originally briefed in May 2018, at the height of the Cambridge Analytica scandal, as a ‘Clear History’ option, the feature has since been renamed ‘Off-Facebook Activity’ — a label so bloodless and devoid of ‘call to action’ that the average Facebook user, should they stumble upon it buried deep in unlovely settings menus, would more likely move along than feel moved to carry out a privacy purge.

(For the record you can access the setting here — but you do need to be logged into Facebook to do so.)

The other problem is that Facebook’s tool doesn’t actually let you purge your browsing history; it just delinks it from being associated with your Facebook ID. There is no option to actually clear your browsing history via its button — another reason for the name switch. So, no, Facebook hasn’t built a clear history ‘button’.

“While we welcome the effort to offer more transparency to users by showing the companies from which Facebook is receiving personal data, the tool offers little way for users to take any action,” said Privacy International this week, criticizing Facebook for “not telling you everything”.

As the saying goes, a little knowledge can be a dangerous thing. So a little transparency implies — well — anything but clarity. And Privacy International sums up the Off-Facebook Activity tool with an apt oxymoron — describing it as “a new window to the opacity”.

“This tool illustrates just how impossible it is for users to prevent external data from being shared with Facebook,” it writes, warning with emphasis: “Without meaningful information about what data is collected and shared, and what are the ways for the user to opt-out from such collection, Off-Facebook activity is just another incomplete glimpse into Facebook’s opaque practices when it comes to tracking users and consolidating their profiles.”

It points out, for instance, that the information provided here is limited to a “simple name” — thereby preventing the user from “exercising their right to seek more information about how this data was collected”, which EU users at least are entitled to.

“As users we are entitled to know the name/contact details of companies that claim to have interacted with us. If the only thing we see, for example, is the random name of an artist we’ve never heard before (true story), how are we supposed to know whether it is their record label, agent, marketing company or even them personally targeting us with ads?” it adds.

Another criticism is that Facebook is only providing limited information about each data transfer — with Privacy International noting some events are marked under a cryptic “CUSTOM” label; and that Facebook provides “no information regarding how the data was collected by the advertiser (Facebook SDK, tracking pixel, like button…) and on what device, leaving users in the dark regarding the circumstances under which this data collection took place”.

“Does Facebook really display everything they process/store about those events in the log/export?” queries privacy researcher Wolfie Christl, who tracks the adtech industry’s tracking techniques. “They have to, because otherwise they don’t fulfil their SAR [Subject Access Request] obligations [under EU law].”

Christl notes Facebook makes users jump through an additional “download” hoop in order to view data on tracked events — and even then, as Privacy International points out, it gives up only a limited view of what has actually been tracked…

“For example, why doesn’t Facebook list the specific sites/URLs visited? Do they infer data from the domains e.g. categories? If yes, why is this not in the logs?” Christl asks.

We reached out to Facebook with a number of questions, including why it doesn’t provide more detail by default. It responded with this statement attributed to a spokesperson:

We offer a variety of tools to help people access their Facebook information, and we’ve designed these tools to comply with relevant laws, including GDPR. We disagree with this [Privacy International] article’s claims and would welcome the chance to discuss them with Privacy International.

Facebook also said it’s continuing to develop which information it surfaces through the Off-Facebook Activity tool — and said it welcomes feedback on this.

We also asked it about the legal bases it uses to process people’s information that’s been obtained via its tracking pixels and social plug-ins. It did not provide a response to those questions.

Six names, many questions…

When the company launched the Off-Facebook Activity tool a snap poll of available TechCrunch colleagues showed very diverse results for our respective tallies (which also may not show the most recent activity, per other Facebook caveats) — ranging from one colleague who had an eye-watering 1,117 entities (likely down to doing a lot of app testing), to several with a few hundred apiece, to a couple in the mid tens.

In my case I had just six. But from my point of view — as an EU citizen with a suite of rights related to privacy and data protection; and as someone who aims to practice good online privacy hygiene, including having a very locked down approach to using Facebook (never using its mobile app for instance) — it was still six too many. I wanted to find out how these entities had circumvented my attempts not to be tracked.

And, in the case of the first one in the list, to find out who on earth it was…

It turns out cloudfront.net is an Amazon Web Services content delivery network domain, and the listed entry was a subdomain of it. But I had to go searching online myself to figure out that the owner of that particular subdomain is (now) a company called Nativo.

Facebook’s list provided only very bare bones information. I also clicked to delink the first entity, since it immediately looked so weird, and found that by doing that Facebook wiped all the entries — which meant I was unable to retain access to what little additional info it had provided about the respective data transfers.

Undeterred I set out to contact each of the six companies directly with questions — asking what data of mine they had transferred to Facebook and what legal basis they thought they had for processing my information.

(On a practical level six names looked like a sample size I could at least try to follow up manually — but remember I was the TechCrunch exception; imagine trying to request data from 1,117 companies, or 450 or even 57, which were the lengths of lists of some of my colleagues.)

This process took about a month and a lot of back and forth/chasing up. It likely only yielded as much info as it did because I was asking as a journalist; an average Internet user may have had a tougher time getting attention on their questions — though, under EU law, citizens have a right to request a copy of personal data held on them.

Eventually, I was able to obtain confirmation that tracking pixels and Facebook share buttons had been involved in my data being passed to Facebook in certain instances. Even so I remain in the dark on many things. Such as exactly what personal data Facebook received.

In one case I was told by a listed company that it doesn’t know itself what data was shared — only Facebook knows because it’s implemented the company’s “proprietary code”. (Insert your own ‘WTAF’ there.)

The legal side of these transfers also remains highly opaque. From my point of view I would not intentionally consent to any of this tracking — but in some instances the entities involved claim that (my) consent was (somehow) obtained (or implied).

In other cases they said they are relying on a legal basis in EU law that’s referred to as ‘legitimate interests’. However this requires a balancing test to be carried out to ensure a business use does not have a disproportionate impact on individual rights.

I wasn’t able to ascertain whether such tests had ever been carried out.

Meanwhile, since Facebook is also making use of the tracking information from its pixels and social plug-ins (and seemingly in more granular form, given some entities claimed they only get aggregate rather than individual data), Christl suggests such a balancing test would be hard to pass, for that tiny little ‘platform giant’ reason.

Notably he points out Facebook’s Business Tool terms state that it makes use of so called “event data” to “personalize features and content and to improve and secure the Facebook products” — including for “ads and recommendations”; for R&D purposes; and “to maintain the integrity of and to improve the Facebook Company Products”.

In a section of its legal terms covering the use of its pixels and SDKs Facebook also puts the onus on the entities implementing its tracking technologies to gain consent from users prior to doing so in relevant jurisdictions that “require informed consent” for tracking cookies and similar — giving the example of the EU.

“You must ensure, in a verifiable manner, that an end user provides the necessary consent before you use Facebook Business Tools to enable us to store and access cookies or other information on the end user’s device,” Facebook writes, pointing users of its tools to its Cookie Consent Guide for Sites and Apps for “suggestions on implementing consent mechanisms”.
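As a rough illustration of what that requirement implies in practice, here is a minimal sketch of consent-gated tag loading. The onConsentGranted hook stands in for whatever consent management platform a site uses and is hypothetical; the fbq() calls follow the publicly documented pixel pattern (simplified — the official snippet also installs a queueing stub so fbq can be called before the script finishes loading), and the pixel ID is a placeholder.

```ts
// Sketch: only load the tracking script after the visitor has actively agreed.
function loadFacebookPixel(pixelId: string): void {
  const s = document.createElement("script");
  s.async = true;
  s.src = "https://connect.facebook.net/en_US/fbevents.js";
  s.onload = () => {
    const fbq = (window as any).fbq; // defined globally once fbevents.js has loaded
    fbq("init", pixelId);
    fbq("track", "PageView");
  };
  document.head.appendChild(s);
}

// Hypothetical CMP hook; real consent platforms expose their own callbacks.
declare function onConsentGranted(category: string, cb: () => void): void;
onConsentGranted("marketing", () => loadFacebookPixel("PIXEL_ID_PLACEHOLDER"));
```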

Christl flags the contradiction between Facebook saying users of its tracking tech need to gain prior consent and the claims I was given by some of these entities that they don’t need to, because they’re relying on ‘legitimate interests’.

“Using LI as a legal basis is even controversial if you use a data analytics company that reliably processes personal data strictly on behalf of you,” he argues. “I guess, industry lawyers try to argue for a broader applicability of LI, but in the case of FB business tools I don’t believe that the balancing test (a businesses legitimate interests vs. the impact on the rights and freedoms of data subjects) will work in favor of LI.”

Those entities relying on legitimate interests as a legal base for tracking would still need to offer a mechanism where users can object to the processing — and I couldn’t immediately see such a mechanism in the cases in question.

One thing is crystal clear: Facebook itself does not provide a mechanism for users to object to its processing of tracking data nor opt out of targeted ads. That remains a long-standing complaint against its business in the EU which data protection regulators are still investigating.

One more thing: Non-Facebook users continue to have no way of learning what data of theirs is being tracked and transferred to Facebook. Only Facebook users have access to the Off-Facebook Activity tool, for example. Non-users can’t even access a list.

Facebook has defended its practice of tracking non-users around the Internet as necessary for unspecified ‘security purposes’. It’s an inherently disproportionate argument of course. The practice also remains under legal challenge in the EU.

Tracking the trackers

SimpleReach (aka d8rk54i4mohrb.cloudfront.net)

What is it? A California-based analytics platform (now owned by Nativo) used by publishers and content marketers to measure how well their content/native ads performs on social media. The product began life in the early noughties as a simple tool for publishers to recommend similar content at the bottom of articles before the startup pivoted — aiming to become ‘the PageRank of social’ — offering analytics tools for publishers to track engagement around content in real-time across the social web (plugging into platform APIs). It also built statistical models to predict which pieces of content will be the most social and where, generating a proprietary per article score. SimpleReach was acquired by Nativo last year to complement analytics tools the latter already offered for tracking content on the publisher/brand’s own site.

Why did it appear in your Off-Facebook Activity list? Given it’s a b2b product it does not have a visible consumer brand of its own. And, to my knowledge, I have never visited its own website prior to investigating why it appeared in my Off-Facebook Activity list. Clearly, though, I must have visited a site (or sites) that are using its tracking/analytics tools. Of course an Internet user has no obvious way to know this — unless they’re actively using tools to monitor which trackers are tracking them.

In a further quirk, neither the SimpleReach (nor Nativo) brand names appeared in my Off-Facebook Activity list. Rather a domain name was listed — d8rk54i4mohrb.cloudfront.net — which looked at first glance weird/alarming.

I found this is owned by SimpleReach by using a tracker analytics service.

Once I knew the name I was able to connect the entry to Nativo — via news reports of the acquisition — which led me to an entity I could direct questions to.  

What happened when you asked them about this? There was a bit of back and forth and then they sent a detailed response to my questions in which they claim they do not share any data with Facebook — “or perform ‘off site activity’ as described on Facebook’s activity tool”.

They also suggested that their domain had appeared as a result of their tracking code being implemented on a website I had visited which had also implemented Facebook’s own trackers.

“Our technology allows our Data Controllers to insert other tracking pixels or tags, using us as a tag manager that delivers code to the page. It is possible that one of our customers added a Facebook pixel to an article you visited using our technology. This could lead Facebook to attribute this pixel to our domain, though our domain was merely a ‘carrier’ of the code,” they told me.

In terms of the data they collect, they said this: “The only Personal Data that is collected by the SimpleReach Analytics tag is your IP Address and a randomly generated id.  Both of these values are processed, anonymized, and aggregated in the SimpleReach platform and not made available to anyone other than our sub-processors that are bound to process such data only on our behalf. Such values are permanently deleted from our system after 3 months. These values are used to give our customers a general idea of the number of users that visited the articles tracked.”

So, again, they suggested the reason why their domain appeared in my Off-Facebook Activity list is a combination of Nativo/SimpleReach’s tracking technologies being implemented on a site where Facebook’s retargeting pixel is also embedded — which then resulted in data about my online activity being shared with Facebook (which Facebook then attributes as coming from SimpleReach’s domain).

Commenting on this, Christl agreed it sounds as if publishers “somehow attach Facebook pixel events to SimpleReach’s cloudfront domain”.

“SimpleReach probably doesn’t get data from this. But the question is 1) is SimpleReach perhaps actually responsible (if it happens in the context of their domain); 2) The Off-Facebook activity is a mess (if it contains events related to domains whose owners are not web or app publishers).”
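For what it's worth, here is a hypothetical sketch of the "carrier" pattern Nativo describes: a tag served from an analytics vendor's CDN injects whatever additional tags the publisher has configured, so a Facebook pixel can end up firing from code delivered via the analytics domain even though the analytics vendor never sees the resulting data. All domains and names below are illustrative.

```ts
// Hypothetical "carrier" tag manager: code served from an analytics CDN injects
// publisher-configured third-party tags, possibly including a Facebook pixel.
type ThirdPartyTag = { name: string; src: string };

function deliverConfiguredTags(tags: ThirdPartyTag[]): void {
  for (const tag of tags) {
    // The injecting code was delivered from the analytics vendor's domain, so
    // the resulting events can end up attributed to that domain rather than to
    // the publisher or advertiser that actually configured the tag.
    const s = document.createElement("script");
    s.async = true;
    s.src = tag.src;
    document.head.appendChild(s);
  }
}

deliverConfiguredTags([
  { name: "facebook-pixel", src: "https://analytics-cdn.example/tags/fb-pixel-loader.js" },
]);
```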

Nativo offered to determine whether they hold any personal information associated with the unique identifier they have assigned to my browser if I could send them this ID. However I was unable to locate such an ID (see below).

In terms of legal base to process my information the company told me: “We have the right to process data in accordance with provisions set forth in the various Data Processor agreements we have in place with Data Controllers.”

Nativo also suggested that the Offsite Activity in question might have predated its purchase of the SimpleReach technology — which occurred on March 20, 2019 — saying any activity prior to this would mean my query would need to be addressed directly with SimpleReach, Inc. which Nativo did not acquire. (However in this case the activity registered on the list was dated later than that.)

Here’s what they said on all that in full:

Thank you for submitting your data access request.  We understand that you are a resident of the European Union and are submitting this request pursuant to Article 15(1) of the GDPR.  Article 15(1) requires “data controllers” to respond to individuals’ requests for information about the processing of their personal data.  Although Article 15(1) does not apply to Nativo because we are not a data controller with respect to your data, we have provided information below that will help us in determining the appropriate Data Controllers, which you can contact directly.

First, for details about our role in processing personal data in connection with our SimpleReach product, please see the SimpleReach Privacy Policy.  As the policy explains in more detail, we provide marketing analytics services to other businesses – our customers.  To take advantage of our services, our customers install our technology on their websites, which enables us to collect certain information regarding individuals’ visits to our customers’ websites. We analyze the personal information that we obtain only at the direction of our customer, and only on that customer’s behalf.

SimpleReach is an analytics tracker tool (Similar to Google Analytics) implemented by our customers to inform them of the performance of their content published around the web.  “d8rk54i4mohrb.cloudfront.net” is the domain name of the servers that collect these metrics.  We do not share data with Facebook or perform “off site activity” as described on Facebook’s activity tool.  Our technology allows our Data Controllers to insert other tracking pixels or tags, using us as a tag manager that delivers code to the page.  It is possible that one of our customers added a Facebook pixel to an article you visited using our technology.  This could lead Facebook to attribute this pixel to our domain, though our domain was merely a “carrier” of the code.

The SimpleReach tool is implemented on articles posted by our customers and partners of our customers.  It is possible you visited a URL that has contained our tracking code.  It is also possible that the Offsite Activity you are referencing is activity by SimpleReach, Inc. before Nativo purchased the SimpleReach technology. Nativo, Inc. purchased certain technology from SimpleReach, Inc. on March 20, 2019, but we did not purchase the SimpleReach, Inc. entity itself, which remains a separate entity unaffiliated with Nativo, Inc. Accordingly, any activity that occurred before March 20, 2019 pre-dates Nativo’s use of the SimpleReach technology and should be addressed directly with SimpleReach, Inc. If, for example, TechCrunch was a publisher partner of SimpleReach, Inc. and had SimpleReach tracking code implemented on TechCrunch articles or across the TechCrunch website prior to March 20, 2019, any resulting data collection would have been conducted by SimpleReach, Inc., not by Nativo, Inc.

As mentioned above, our tracking script collects and sends information to our servers based on the articles it is implemented on. The only Personal Data that is collected by the SimpleReach Analytics tag is your IP Address and a randomly generated id.  Both of these values are processed, anonymized, and aggregated in the SimpleReach platform and not made available to anyone other than our sub-processors that are bound to process such data only on our behalf. Such values are permanently deleted from our system after 3 months.  These values are used to give our customers a general idea of the number of users that visited the articles tracked.

We do not, nor have we ever, shared ANY information with Facebook with regards to the information we collect from the SimpleReach Analytics tag, be it Personal Data or otherwise. However, as mentioned above, it is possible that one of our customers added a Facebook retargeting pixel to an article you visited using our technology. If that is the case, we would not have received any information collected from such pixel or have knowledge of whether, and to what extent, the customer shared information with Facebook. Without more information, we are unable to determine the specific customer (if any) on behalf of which we may have processed your personal information. However, if you send us the unique identifier we have assigned to your browser… we can determine whether we have any personal information associated with such browser on behalf of a customer controller, and, if we have, we can forward your request on to the controller to respond directly to your request.

As a Data Processor we have the right to process data in accordance with provisions set forth in the various Data Processor agreements we have in place with Data Controllers.  This type of agreement is designed to protect Data Subjects and ensure that Data Processors are held to the same standards that both the GDPR and the Data Controller have put forth.  This is the same type of agreement used by all other analytics tracking tools (as well as many other types of tools) such as Google Analytics, Adobe Analytics, Chartbeat, and many others.

I also asked Nativo to confirm whether Insider.com (see below) is a customer of Nativo/SimpleReach.

The company told me it could not disclose this “due to confidentiality restrictions” and would only reveal the identity of customers if “required by applicable law”.

Again, it said that if I provided the “unique identifier” assigned to my browser it would be “happy to pull a list of personal information the SimpleReach/Nativo systems currently have stored for your unique identifier (if any), including the appropriate Data Controllers”. (“If we have any personal data collected from you on behalf of Insider.com, it would come up in the list of DataControllers,” it suggested.)

I checked multiple browsers that I use on multiple devices but was unable to locate an ID attached to a SimpleReach cookie. So I also asked whether this might appear attached to any other cookie.

Their response:

Because our data is either pseudonymized or anonymized, and we do not record of any other pieces of Personal Data about you, it will not be possible for us to locate this data without the cookie value.  The SimpleReach user cookie is, and has always been, in the “__srui” cookie under the “.simplereach.com” domain or any of its sub-domains. If you are unable to locate a SimpleReach user cookie by this name on your browser, it may be because you are using a different device or because you have cleared your cookies (in which case we would no longer have the ability to map any personal data we have previously collected from you to your browser or device). We do have other cookies (under the domains postrelease.com, admin.nativo.com, and cloud.nativo.com) but those cookies would not be related to the appearance of SimpleReach in the list of Off Site Activity on your Facebook account, per your original inquiry.
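Their answer also explains, in passing, why hunting for the ID was fruitless: cookies are scoped to the domain that set them, so a cookie under .simplereach.com is not visible from the publisher sites where the tracking actually happened. A quick, purely illustrative console check:

```ts
// Run in the browser console: document.cookie only exposes cookies for the
// current site, so the "__srui" cookie (scoped to .simplereach.com) will not
// appear while browsing a publisher's pages.
const srui = document.cookie
  .split("; ")
  .find((entry) => entry.startsWith("__srui="));
console.log(srui ?? "no __srui cookie visible in this context");
```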

What did you learn from their inclusion in the Off-Facebook Activity list? There appeared to be a correlation between this domain and a publisher, Insider.com, which also appeared in my Off-Facebook Activity list — as both logged events bear the same date; plus Insider.com is a publisher so would fall into the right customer category for using Nativo’s tool.

Given those correlations I was able to guess Insider.com is a customer of Nativo (and I confirmed this when I spoke to Insider.com) — so Facebook’s tool is able to leak relational inferences about the tracking industry by surfacing/mapping business connections that might not otherwise be evident.

Insider.com

What is it? A New York-based business media company which owns brands such as Business Insider and Markets Insider

Why did it appear in your Off-Facebook Activity list? I imagine I clicked on a technology article that appeared in my Facebook News Feed or elsewhere while I was logged into Facebook

What happened when you asked them about this? After about a week of radio silence an employee in Insider.com’s legal department got in touch to say they could discuss the issue on background.

This person told me the information in the Off-Facebook Activity tool came from the Facebook share button which is embedded on all articles it runs on its media websites. They confirmed that the share button can share data with Facebook regardless of whether the site visitor interacts with the button or not.

In my case I certainly would not have interacted with the Facebook share button. Nonetheless data was passed, simply by merit of loading the article page itself.

Insider.com said the Facebook share button widget is integrated into its sites using a standard set-up that Facebook intends publishers to use. If the share button is clicked information related to that action would be shared with Facebook and would also be received by Insider.com (though, in this scenario, it said it doesn’t get any personalized information — but rather gets aggregate data).

Facebook can also automatically collect other information when a user visits a webpage which incorporates its social plug-ins.

Asked whether Insider.com knows what information Facebook receives via this passive route the company told me it does not — noting the plug-in runs proprietary Facebook code. 

Asked how it’s collecting consent from users for their data to be shared passively with Facebook, Insider.com said its Privacy Policy stipulates users consent to sharing their information with Facebook and other social media sites. It also said it uses the legal ground known as legitimate interests to provide functionality and derive analytics on articles.

In the active case (of a user clicking to share an article) Insider.com said it interprets the user’s action as consent.

Insider.com confirmed it uses SimpleReach/Nativo analytics tools, meaning site visitor data is also being passed to Nativo when a user lands on an article. It said consent for this data-sharing is included within its consent management platform (it uses a CMP made by Forcepoint) which asks site visitors to specify their cookie choices.

Here site visitors can choose for their data not to be shared for analytics purposes (which Insider.com said would prevent data being passed).

I usually apply all cookie consent opt outs, where available, so I’m a little surprised Nativo/SimpleReach was passed my data from an Insider.com webpage. Either I failed to click the opt out one time or failed to respond to the cookie notice and data was passed by default.

It’s also possible I did opt out but data was passed anyway — as there has been research which has found a proportion of cookie notifications ignore choices and pass data anyway (unintentionally or otherwise).

Follow up questions I sent to Insider.com after we talked:

1) Can you confirm whether Insider has performed a legitimate interests assessment?
2) Does Insider have a site mechanism where users can object to the passive data transfer to Facebook from the share buttons?

Insider.com did not respond to my additional questions.

What did you learn from their inclusion in the Off-Facebook Activity list? That Insider.com is a customer of Nativo/SimpleReach.

Rei.com

What is it? A California-based ecommerce website selling outdoor gear

Why did it appear in your Off-Facebook Activity list? I don’t recall ever visiting their site prior to looking into why it appeared in the list so I’m really not sure

What happened when you asked them about this? After saying it would investigate it followed up with a statement, rather than detailed responses to my questions, in which it claims it does not hold any personal data associated with — presumably — my TechCrunch email, since it did not ask me what data to check against.

It also appeared to be claiming that it uses Facebook tracking pixels/tags on its website, without explicitly saying as much, writing that: “Facebook may collect information about your interactions with our websites and mobile apps and reflect that information to you through their Off-Facebook Activity tool.”

It claims it has no access to this information — which it says is “pseudonymous to us” — but suggested that if I have a Facebook account, Facebook could link any browsing on Rei’s site to my Facebook identity and therefore track my activity.

The company also pointed me to a Facebook Help Center post which names some of the activities that might have resulted in Rei’s website sending activity data on me to Facebook (which it could then link to my Facebook ID) — although Facebook’s list is not exhaustive (included are: “viewing content”, “searching for an item”, “adding an item to a shopping cart” and “making a donation”, among other activities the company tracks by having its code embedded on third parties’ sites).

Here’s Rei’s statement in full:

Thank you for your patience as we looked into your questions.  We have checked our systems and determined that REI does not maintain any personal data associated with you based on the information you provided.  Note, however, that Facebook may collect information about your interactions with our websites and mobile apps and reflect that information to you through their Off-Facebook Activity tool. The information that Facebook collects in this manner is pseudonymous to us — meaning we cannot identify you using the information and we do not maintain the information in a manner that is linked to your name or other identifying information. However, if you have a Facebook account, Facebook may be able to match this activity to your Facebook account via a unique identifier unavailable to REI. (Funnily enough, while researching this I found TechCrunch in MY list of Off-Facebook activity!)

For a complete list of activities that could have resulted in REI sharing pseudonymous information about you with Facebook, this Facebook Help Center article may be useful.  For a detailed description of the ways in which we may collect and share customer information, the purposes for which we may process your data, and rights available to EEA residents, please refer to our Privacy Policy.  For information about how REI uses cookies, please refer to our Cookie Policy.

As a follow up question I asked Rei to tell me which Facebook tools it uses, pointing out that: “Given that, just because you aren’t (as I understand it) directly using my data yourself that does not mean you are not responsible for my data being transferred to Facebook.”

The company did not respond to that point.

I also previously asked Rei.com to confirm whether it has any data sharing arrangements with the publisher of Rock & Ice magazine (see below). And, if so, to confirm the processes involved in data being shared. Again, I got no response to that.

What did you learn from their inclusion in the Off-Facebook Activity list? Given that Rei.com appeared alongside Rock & Ice on the list — both displaying the same date and just one activity apiece — I surmised they have some kind of data-sharing arrangement. They are also both outdoors brands so there would be obvious commercial ‘synergies’ to underpin such an arrangement.

That said, neither would confirm a business relationship to me. But Facebook’s list heavily implies there is some background data-sharing going on.

Rock & Ice magazine 

What is it? A climbing magazine produced by a California-based publisher, Big Stone Publishing

Why did it appear in your Off-Facebook Activity list? I imagine I clicked on a link to a climbing-related article in my Facebook feed or else visited Rock & Ice’s website while I was logged into Facebook in the same browser session

What happened when you asked them about this? The publisher ignored my initial email query, but after I followed up I received a brief response — which read:

The Rock and Ice website is opt in, where you have to agree to terms of use to access the website. I don’t know what private data you are saying Rock and Ice shared, so I can’t speak to that. The site terms are here. As stated in the terms you can opt out.

Following up, I asked about the provision in the Rock & Ice website’s cookie notice which states: “By continuing to use our site, you agree to our cookies” — asking whether it’s passing data without waiting for the user to signal their consent.

(Relevant: In October Europe’s top court issued a ruling that active consent is necessary for tracking cookies, so you can’t drop cookies prior to a user giving consent for you to do so.)

The publisher responded:

You have to opt in and agree to the terms to use the website. You may opt out of cookies, which is covered in the terms. If you do not want the benefits of these advertising cookies, you may be able to opt-out by visiting: http://www.networkadvertising.org/optout_nonppii.asp.

If you don’t want any cookies, you can find extensions such as Ghostery or the browser itself to stop and refuse cookies. By doing so though some websites might not work properly.

I followed up again to point out that I’m not asking about the options to opt in or opt out but, rather, the behavior of the website if the visitor does not provide a consent response yet continues browsing — asking for confirmation Rock & Ice’s site interprets this state as consent and therefore sends data.

The publisher stopped responding at that point.

Earlier I had asked it to confirm whether its website shares visitor data with Rei.com. (As noted above, the two appeared with the same date on the list, which suggests data may be being passed between them.) I did not get a response to that question either.

What did you learn from their inclusion in the Off-Facebook Activity list? That the magazine appears to have a data-sharing arrangement with outdoor retailer Rei.com, given how the pair appeared at the same point in my list. However, neither would confirm this when I asked.

MatterHackers

What is it? A California-based retailer focused on 3D printing and digital manufacturing

Why did it appear in your Off-Facebook Activity list? I honestly have no idea. I have never to my knowledge visited their site prior to investigating why they should appear on my Off Site Activity list.

I remain pretty interested to know how/why they managed to track me. I can only surmise I clicked on some technology-related content in my Facebook feed, either intentionally or by accident.

What happened when you asked them about this? They first asked me for confirmation that they were on my list. After I had sent a screenshot, they followed up to say they would investigate. I pushed again after hearing nothing for several weeks. At this point they asked for additional information from the Off-Facebook Activity tool — namely more granular metrics, such as a time and date per event and some label information — to help with tracking down this particular data-exchange.

I had previously provided them with the date (as it appears in the screenshot) but it’s possible to download an additional level of information about data transfers which includes per-event time/date-stamps and labels/tags, such as “VIEW_CONTENT”.

However, as noted above, I had previously selected and deleted one item off of my Off-Facebook Activity list, after which Facebook’s platform had immediately erased all entries and associated metrics. There was no obvious way I could recover access to that information.

“Without this information I would speculate that you viewed an article or product on our site — we publish a lot of ‘How To’ content related to 3D printing and other digital manufacturing technologies — this information could have then been captured by Facebook via Adroll for ad retargeting purposes,” a MatterHackers spokesman told me. “Operationally, we have no other data sharing mechanism with Facebook.”

Subsequently, the company confirmed it implements Facebook’s tracking pixel on every page of its website.

Of the pixel Facebook writes that it enables website owners to track “conversions” (i.e. website actions); create custom audiences which segment site visitors by criteria that Facebook can identify and match across its user-base, allowing for the site owner to target ads via Facebook’s platform at non-customers with a similar profile/criteria to existing customers that are browsing its site; and for creating dynamic ads where a template ad gets populated with product content based on tracking data for that particular visitor.
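For context, the "conversions" the pixel reports are standard events fired from the page; the "VIEW_CONTENT" label mentioned earlier appears to correspond to one of them. Below is a generic sketch using the publicly documented standard-event calls and parameters; it is not MatterHackers' actual configuration, and the product values are invented.

```ts
// Generic examples of documented pixel standard events (illustrative values only).
declare function fbq(...args: unknown[]): void; // global installed by Facebook's pixel script

fbq("track", "ViewContent", { content_ids: ["printer-123"], content_type: "product" });
fbq("track", "AddToCart", { value: 49.99, currency: "USD" });
fbq("track", "Purchase", { value: 49.99, currency: "USD" });
```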

Regarding the legal base for the data sharing, MatterHackers had this to say: “MatterHackers is not an EU entity, nor do we conduct business in the EU and so have not undertaken GDPR compliance measures. CCPA [California’s Consumer Privacy Act] will likely apply to our business as of 2021 and we have begun the process of ensuring that our website will be in compliance with those regulations as of January 1st.”

I pointed out that GDPR is extraterritorial in scope — and can apply to non-EU based entities, such as if they’re monitoring individuals in the EU (as in this case).

Also likely relevant: A ruling last year by Europe’s top court found sites that embed third party plug-ins such as Facebook’s like button are jointly responsible for the initial data processing — and must either obtain informed consent from site visitors prior to data being transferred to Facebook, or be able to demonstrate a legitimate interest legal basis for processing this data.

Nonetheless it’s still not clear what legal base the company is relying on for implementing the tracking pixel and passing data on EU Facebook users.

When asked about this, MatterHackers COO Kevin Pope told me:

While we appreciate the sentiment of GDPR, in this case the EU lacks the legal standing to pursue an enforcement action. I’m sure you can appreciate the potential negative consequences if any arbitrary country (or jurisdiction) were able to enforce legal penalties against any website simply for having visitors from that country. Techcrunch would have been fined to oblivion many times over by China or even Thailand (for covering the King in a negative light). In this way, the attempted overreach of the GDPR’s language sets a dangerous precedent.
To provide a little more detail – MatterHackers, at the time of your visit, wouldn’t have known that you were from the EU until we cross-referenced your session with  Facebook, who does know. At that point you would have been filtered from any advertising by us. MatterHackers makes money when our (U.S.) customers buy 3D printers or materials and then succeed at using them (hence the how-to articles), we don’t make any money selling advertising or data.
Given that Facebook does legally exist in the EU and does have direct revenues from EU advertisers, it’s entirely appropriate that Facebook should comply with EU regulations. As a global solution, I believe more privacy settings options should be available to its users. However, given Facebook’s business model, I wouldn’t expect anything other than continued deflection (note the careful wording on their tool) and avoidance from them on this issue.

What did you learn from their inclusion in the Off-Facebook Activity List? I found out that an ecommerce company I had never heard of had been tracking me.

Wallapop

What is it? A Barcelona-based peer-to-peer marketplace app that lets people list secondhand stuff for sale and/or to search for things to buy in their proximity. Users can meet in person to carry out a transaction paying in cash or there can be an option to pay via the platform and have an item posted

Why did it appear in your Off-Facebook Activity list? This was the only digital activity that appeared in the list that was something I could explain — figuring out I must have used a Facebook sign-in option when using the Wallapop app to buy/sell. I wouldn’t normally use Facebook sign-in but for trust-based marketplaces there may be user benefits to leveraging network effects.

What happened when you asked them about this? After my query was booted around a bit a PR company that works with Wallapop responded asking to talk through what information I was trying to ascertain.

After we chatted they sent this response — attributed to sources from Wallapop:

Same as it happens with other apps, wallapop can appear on our users’ Facebook Off Site Activity page if they have interacted in any way with the platform while they were logged in their Facebook accounts. Some interaction examples include logging in via Facebook, visiting our website or having both apps opened and logged.

As other apps do, wallapop only shares activity events with Facebook to optimize users’ ad experience. This includes if a user is registered in wallapop, if they have uploaded an item or if they have started a conversation. Under no circumstance wallapop shares with Facebook our users’ personal data (including sex, name, email address or telephone number).

At wallapop, we are thoroughly committed with the security of our community and we do a safe treatment of the data they choose to share with us, in compliance with EU’s General Data Protection Regulation. Under no circumstance these data are shared with third parties without explicit authorization.

I followed up to ask for further details about these “activity events” — asking whether, for instance, Wallapop shares messaging content with Facebook as well as letting the social network know which items a user is chatting about.

“Under no circumstance the content of our users’ messages is shared with Facebook,” the spokesperson told me. “What is shared is limited to the fact that a conversation has been initiated with another user in relation to a specific item, this is, activity events. Under no circumstance we would share our users’ personal information either.”

Of course the point is Facebook is able to link all app activity with the user ID it already has — so every piece of activity data being shared is personal data.
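A purely hypothetical sketch of that point: an "activity event" that contains no name, email or message text is still personal data the moment the recipient can key it to a stable user ID it already holds. All field names below are invented.

```ts
// What the app says it shares: behaviour, not identity.
type ActivityEvent = { event: "registered" | "item_listed" | "chat_started"; itemId?: string; ts: number };

const shared: ActivityEvent = { event: "chat_started", itemId: "bike-4711", ts: Date.now() };

// What the recipient can do on arrival: join the event to the identity it already holds.
function attributeToUser(fbUserId: string, e: ActivityEvent) {
  return { fbUserId, ...e }; // the event now describes a known, identifiable person
}
```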

I also asked what legal base Wallapop relies on to share activity data with Facebook. They said the legal basis is “explicit consent given by users” at the point of signing up to use the app.

“Wallapop collects explicit consent from our users and at any time they can exercise their rights to their data, which include the modification of consent given in the first place,” they said.

“Users give their explicit consent by clicking in the corresponding box when they register in the app, where they also get the chance to opt out and not do it. If later on they want to change the consent they gave in first instance, they also have that option through the app. All the information is clearly available on our Privacy Policy, which is GDPR compliant.”

“At wallapop we take our community’s privacy and security very seriously and we follow recommendations from the Spanish Data Protection Agency,” it added.

What did you learn from their inclusion in the Off-Facebook Activity list? Not much more than I would have already guessed — i.e. that using a Facebook sign-in option in a third party app grants the social media giant a high degree of visibility into your activity within another service.

In this case the Wallapop app registered the most activity events of all six of the listed apps, displaying 13 vs only one apiece for the others — so it gave a bit of a suggestive glimpse into the volume of third party app data that can be passed if you opt to open a Facebook login wormhole into a separate service.

 



Lack of big tech GDPR decisions looms large in EU watchdog’s annual report

02:01 | 20 February

The lead European Union privacy regulator for most of big tech has put out its annual report which shows another major bump in complaints filed under the bloc’s updated data protection framework, underlining the ongoing appetite EU citizens have for applying their rights.

But what the report doesn’t show is any firm enforcement of EU data protection rules vis-a-vis big tech.

The report leans heavily on stats to illustrate the volume of work piling up on desks in Dublin. But it’s light on decisions on highly anticipated cross-border cases involving tech giants including Apple, Facebook, Google, LinkedIn and Twitter.

The General Data Protection Regulation (GDPR) began being applied across the EU in May 2018 — so is fast approaching its second birthday. Yet its file of enforcements where tech giants are concerned remains very light — even for companies with a global reputation for ripping away people’s privacy.

This despite Ireland having a large number of open cross-border investigations into the data practices of platform and adtech giants — some of which originated from complaints filed right at the moment GDPR came into force.

In the report the Irish Data Protection Commission (DPC) notes it opened a further six statutory inquiries in relation to “multinational technology companies’ compliance with the GDPR” — bringing the total number of major probes to 21. So its ‘big case’ file continues to stack up. (It’s added at least two more since then, with a probe of Tinder and another into Google’s location tracking opened just this month.)

The report is a lot less keen to trumpet the fact that decisions on cross-border cases to date remains a big fat zero.

Though, just last week, the DPC made a point of publicly raising “concerns” about Facebook’s approach to assessing the data protection impacts of a forthcoming product in light of GDPR requirements to do so — an intervention that resulted in a delay to the regional launch of Facebook’s Dating product.

This discrepancy (cross-border cases: 21; Irish DPC decisions: 0), plus rising anger from civil rights groups, privacy experts, consumer protection organizations and ordinary EU citizens over the paucity of flagship enforcement around key privacy complaints, is clearly piling pressure on the regulator. (Examples of big tech GDPR enforcement do exist elsewhere; France’s CNIL has provided one.)

In its defence the DPC does have a horrifying case load, as illustrated by other stats it’s keen to spotlight — such as receiving a total of 7,215 complaints in 2019, a 75% increase on the total number (4,113) received in 2018. A full 6,904 of those were dealt with under the GDPR (while 311 complaints were filed under the Data Protection Acts 1988 and 2003).

There were also 6,069 data security breaches notified to it, per the report — representing a 71% increase on the total number (3,542) recorded the previous year.

Meanwhile a full 457 cross-border processing complaints were received in Dublin via the GDPR’s One-Stop-Shop mechanism. (This is the device the Commission came up with for the ‘lead regulator’ approach that’s baked into GDPR and which has landed Ireland in the regulatory hot seat. tl;dr other data protection agencies are passing Dublin A LOT of paperwork.)

The DPC necessarily has to do back and forth on cross-border cases, as it liaises with other interested regulators. All of which, you can imagine, creates a rich opportunity for lawyered-up tech giants to inject extra friction into the oversight process — by asking to review and query everything. [Insert the sound of a can being hoofed down the road]

Meanwhile the agency that’s supposed to regulate most of big tech (and plenty else) — which writes in the annual report that it increased its full-time staff from 110 to 140 last year — did not get all the funding it asked for from the Irish government.

So it also has the hard cap of its own budget to reckon with (just €15.3M in 2019) vs — for example — the $46.1BN in revenue Google’s parent Alphabet reported for the fourth quarter of 2019 alone. So, er, do the math.

Nonetheless the pressure is now firmly on Ireland for major GDPR enforcements to flow.

One year of major enforcement inaction could be filed under ‘bedding in’; but two years in without any major decisions would not be a good look. (It has previously said the first decisions will come early this year — so seems to be hoping to have something to show for GDPR’s 2nd birthday.)

Some of the high profile complaints crying out for regulatory action include behavioral ads served via real-time bidding programmatic advertising (which the UK data watchdog admitted more than half a year ago is rampantly unlawful); cookie consent banners (which remain a Swiss cheese of non-compliance); and adtech platforms cynically forcing consent from users by requiring they agree to being microtargeted with ads in order to access the (‘free’) service. (Thing is, GDPR stipulates that consent as a legal basis must be freely given and can’t be bundled with other stuff, so… )

Full disclosure: TechCrunch’s parent company, Verizon Media (née Oath), is also under ongoing investigation by the DPC — which is looking at whether it meets GDPR’s transparency requirements under Articles 12-14 of the regulation.

Seeking to put a positive spin on 2019’s total lack of a big tech privacy reckoning, commissioner Helen Dixon writes in the report: “2020 is going to be an important year. We await the judgment of the CJEU in the SCCs data transfer case; the first draft decisions on big tech investigations will be brought by the DPC through the consultation process with other EU data protection authorities, and academics and the media will continue the outstanding work they are doing in shining a spotlight on poor personal data practices.”

In further remarks to the media Dixon said: “At the Data Protection Commission, we have been busy during 2019 issuing guidance to organisations, resolving individuals’ complaints, progressing larger-scale investigations, reviewing data breaches, exercising our corrective powers, cooperating with our EU and global counterparts and engaging in litigation to ensure a definitive approach to the application of the law in certain areas.

“Much more remains to be done in terms of both guiding on proportionate and correct application of this principles-based law and enforcing the law as appropriate. But a good start is half the battle and the DPC is pleased at the foundations that have been laid in 2019. We are already expanding our team of 140 to meet the demands of 2020 and beyond.”

One other notable date this year falls just as GDPR turns two: a Commission review of how the regulation is functioning is due in May.

That’s one deadline that may help to concentrate minds on issuing decisions.

Per the DPC report, the largest category of complaints it received last year fell under ‘access request’ issues — whereby data controllers are failing to give up (all) people’s data when asked — which amounted to 29% of the total; followed by disclosure (19%); fair processing (16%); e-marketing complaints (8%); and right to erasure (5%).

On the security front, the vast bulk of notifications received by the DPC related to unauthorised disclosure of data (aka breaches) — with a total across the private and public sector of 5,188 vs just 108 for hacking (though the second largest category was actually lost or stolen paper, with 345).

There were also 161 notifications of phishing; 131 notifications of unauthorized access; 24 notifications of malware; and 17 of ransomware.

 



Facebook Dating launch blocked in Europe after it fails to show privacy workings

11:59 | 13 February

Facebook has been left red-faced after being forced to call off the launch date of its dating service in Europe because it failed to give its lead EU data regulator enough advance warning — including failing to demonstrate it had performed a legally required assessment of privacy risks.

Late yesterday Ireland’s Independent.ie newspaper reported that the Irish Data Protection Commission (DPC) had sent agents to Facebook’s Dublin office seeking documentation that Facebook had failed to provide — using inspection and document seizure powers set out in Section 130 of the country’s Data Protection Act.

In a statement on its website the DPC said Facebook first contacted it about the rollout of the dating feature in the EU on February 3.

“We were very concerned that this was the first that we’d heard from Facebook Ireland about this new feature, considering that it was their intention to roll it out tomorrow, 13 February,” the regulator writes. “Our concerns were further compounded by the fact that no information/documentation was provided to us on 3 February in relation to the Data Protection Impact Assessment [DPIA] or the decision-making processes that were undertaken by Facebook Ireland.”

Facebook announced its plan to get into the dating game all the way back in May 2018, trailing its Tinder-encroaching idea to bake a dating feature for non-friends into its social network at its F8 developer conference.

It went on to test launch the product in Colombia a few months later. And since then it’s been gradually adding more countries in South America and Asia. It also launched in the US last fall — soon after it was fined $5BN by the FTC for historical privacy lapses.

At the time of its US launch Facebook said dating would arrive in Europe by early 2020. It just didn’t think to keep its lead EU privacy regulator in the loop — despite the DPC having multiple (ongoing) investigations into other Facebook-owned products at this stage.

Which is either extremely careless or, well, an intentional fuck you to privacy oversight of its data-mining activities. (Among multiple probes being carried out under Europe’s General Data Protection Regulation, the DPC is looking into Facebook’s claimed legal basis for processing people’s data under the Facebook T&Cs, for example.)

The DPC’s statement confirms that its agents visited Facebook’s Dublin office on February 10 to carry out an inspection — in order to “expedite the procurement of the relevant documentation”.

Which is a nice way of the DPC saying Facebook spent a whole week still not sending it the required information.

“Facebook Ireland informed us last night that they have postponed the roll-out of this feature,” the DPC’s statement goes on.

Which is a nice way of saying Facebook fucked up and is being made to put a product rollout it’s been planning for at least half a year on ice.

The DPC’s head of communications, Graham Doyle, confirmed the enforcement action, telling us: “We’re currently reviewing all the documentation that we gathered as part of the inspection on Monday and we have posed further questions to Facebook and are awaiting the reply.”

“Contained in the documentation we gathered on Monday was a DPIA,” he added.

This raises the question of why Facebook didn’t send the DPIA to the DPC on February 3 — unless of course this document did not actually exist on that date…

We’ve reached out to Facebook for comment and to ask when it carried out the DPIA.

We’ve also asked the DPC to confirm its next steps. The regulator could ask Facebook to make changes to how the product functions in Europe if it’s not satisfied it complies with EU laws.

Under GDPR there’s a requirement for data controllers to bake privacy by design and default into products which are handling people’s information. And a dating product clearly qualifies.

Meanwhile a DPIA — a process whereby planned processing of personal data is assessed to consider the impact on the rights and freedoms of individuals — is a requirement under the GDPR when, for example, individual profiling is taking place or there’s processing of sensitive data on a large scale.

Again, the launch of a dating product on a platform such as Facebook — which has hundreds of millions of regional users — would be a clear-cut case for such an assessment to be carried out ahead of any launch.

 



UK names its pick for social media ‘harms’ watchdog

15:05 | 12 February

The UK government has taken the next step in its grand policymaking challenge to tame the worst excesses of social media by regulating a broad range of online harms — naming the existing communications watchdog, Ofcom, as its preferred pick for enforcing rules around ‘harmful speech’ on platforms such as Facebook, Snapchat and TikTok in future.

Last April the previous Conservative-led government laid out populist but controversial proposals to legislate to lay a duty of care on Internet platforms — responding to growing public concern about the types of content kids are being exposed to online.

Its white paper covers a broad range of online content — from terrorism, violence and hate speech, to child exploitation, self-harm/suicide, cyber bullying, disinformation and age-inappropriate material — with the government setting out a plan to require platforms to take “reasonable” steps to protect their users from a range of harms.

However digital and civil rights campaigners warn the plan will have a huge impact on online speech and privacy, arguing it will put a legal requirement on platforms to closely monitor all users and apply speech-chilling filtering technologies on uploads in order to comply with very broadly defined concepts of harm — dubbing it state censorship. Legal experts are similarly critical.

The (now) Conservative majority government has nonetheless said it remains committed to the legislation.

Today it responded to some of the concerns being raised about the plan’s impact on freedom of expression, publishing a partial response to the public consultation on the Online Harms White Paper, although a draft bill remains pending, with no timeline confirmed.

“Safeguards for freedom of expression have been built in throughout the framework,” the government writes in an executive summary. “Rather than requiring the removal of specific pieces of legal content, regulation will focus on the wider systems and processes that platforms have in place to deal with online harms, while maintaining a proportionate and risk-based approach.”

It says it’s planning to set a different bar for content deemed illegal vs content that has “potential to cause harm”, with the heaviest content removal requirements being planned for terrorist and child sexual exploitation content. Whereas companies will not be forced to remove “specific pieces of legal content”, as the government puts it.

Ofcom, as the online harms regulator, will also not be investigating or adjudicating on “individual complaints”.

“The new regulatory framework will instead require companies, where relevant, to explicitly state what content and behaviour they deem to be acceptable on their sites and enforce this consistently and transparently. All companies in scope will need to ensure a higher level of protection for children, and take reasonable steps to protect them from inappropriate or harmful content,” it writes.

“Companies will be able to decide what type of legal content or behaviour is acceptable on their services, but must take reasonable steps to protect children from harm. They will need to set this out in clear and accessible terms and conditions and enforce these effectively, consistently and transparently. The proposed approach will improve transparency for users about which content is and is not acceptable on different platforms, and will enhance users’ ability to challenge removal of content where this occurs.”

Another requirement will be that companies have “effective and proportionate user redress mechanisms” — enabling users to report harmful content and challenge content takedown “where necessary”.

“This will give users clearer, more effective and more accessible avenues to question content takedown, which is an important safeguard for the right to freedom of expression,” the government suggests, adding that: “These processes will need to be transparent, in line with terms and conditions, and consistently applied.”

Ministers say they have not yet made a decision on what kind of liability senior management of covered businesses may face under the planned law, nor on additional business disruption measures — with the government saying it will set out its final policy position in the Spring.

“We recognise the importance of the regulator having a range of enforcement powers that it uses in a fair, proportionate and transparent way. It is equally essential that company executives are sufficiently incentivised to take online safety seriously and that the regulator can take action when they fail to do so,” it writes.

It’s also not clear how businesses will be assessed as being in (or out of) scope of the regulation.

“Just because a business has a social media page that does not bring it in scope of regulation,” the government response notes. “To be in scope, a business would have to operate its own website with the functionality to enable sharing of user-generated content, or user interactions. We will introduce this legislation proportionately, minimising the regulatory burden on small businesses. Most small businesses where there is a lower risk of harm occurring will not have to make disproportionately burdensome changes to their service to be compliant with the proposed regulation.”

The government is clear in the response that the Online Harms legislation remains “a key legislative priority”.

“We have a comprehensive programme of work planned to ensure that we keep momentum until legislation is introduced as soon as parliamentary time allows,” it writes, describing today’s response report as “an iterative step as we consider how best to approach this complex and important issue” — and adding: “We will continue to engage closely with industry and civil society as we finalise the remaining policy.”

In the meantime the government says it’s working on a package of measures “to ensure progress now on online safety” — including interim codes of practice with guidance for companies on tackling terrorist and child sexual abuse and exploitation content online; an annual government transparency report, which it says it will publish “in the next few months”; and a media literacy strategy, to support public awareness of online security and privacy.

It adds that it expects social media platforms to “take action now to tackle harmful content or activity on their services” — ahead of the more formal requirements coming in.

Facebook-owned Instagram has come in for high level pressure from ministers over how it handles content promoting self-harm and suicide after the media picked up on a campaign by the family of a schoolgirl who killed herself after being exposed to Instagram content encouraging self-harm.

Instagram subsequently announced changes to its policies for handling content that encourages or depicts self harm/suicide — saying it would limit how it could be accessed. This later morphed into a ban on some of this content.

The government said today that companies offering online services that involve user generated content or user interactions are expected to make use of what it dubs “a proportionate range of tools” — including age assurance, and age verification technologies — to prevent kids from accessing age-inappropriate content and “protect them from other harms”.

This is also the piece of the planned legislation intended to pick up the baton of the Digital Economy Act’s porn block proposals — which the government dropped last year, saying it would bake equivalent measures into the forthcoming Online Harms legislation.

The Home Office has been consulting with social media companies on devising robust age verification technologies for many months.

In its own response statement today, Ofcom — which would be responsible for policy detail under the current proposals — said it will work with the government to ensure “any regulation provides effective protection for people online”, and, pending appointment, “consider what we can do before legislation is passed”.

The Online Harms plan is not the only Internet-related work ongoing in Whitehall, with ministers noting that: “Work on electoral integrity and related online transparency issues is being taken forward as part of the Defending Democracy programme together with the Cabinet Office.”

Back in 2018 a UK parliamentary committee called for a levy on social media platforms to fund digital literacy programs to combat online disinformation and defend democratic processes, during an enquiry into the use of social media for digital campaigning. However the UK government has been slower to act on this front.

The former chair of the DCMS committee, Damian Collins, called today for any future social media regulator to have “real powers in law” — including the ability to “investigate and apply sanctions to companies which fail to meet their obligations”.

In the DCMS committee’s final report parliamentarians called for Facebook’s business to be investigated, raising competition and privacy concerns.

 



Facebook will pay $550 million to settle class action lawsuit over privacy violations

03:34 | 30 January

Facebook will pay over half a billion dollars to settle a class action lawsuit that alleged systematic violation of an Illinois consumer privacy law. The settlement amount is large indeed, but a small fraction of the $35 billion maximum the company could have faced.

Class members — basically Illinois Facebook users from mid-2011 to mid-2015 — may expect as much as $200 each, but that depends on several factors. If you’re one of them you should receive some notification once the settlement is approved by the court and the formalities are worked out.

The proposed settlement would require Facebook to obtain consent in the future from Illinois users for such purposes as face analysis for automatic tagging.

This is the second major settlement from Facebook in six months; a seemingly enormous $5 billion settlement with the FTC over privacy violations was announced over the summer, but it’s actually a bit of a joke.

The Illinois suit was filed in 2015, alleging that Facebook collected facial recognition data on images of users in the state without disclosure, in contravention of the state’s 2008 Biometric Information Privacy Act (BIPA). Similar suits were filed against Shutterfly, Snapchat, and Google.

Facebook pushed back in 2016, saying that facial recognition processing didn’t count as biometric data, and that anyway Illinois law didn’t apply to it, a California company. The judge rejected these arguments with flair, saying the definition of biometric was “cramped” and the assertion of Facebook’s immunity would be “a complete negation” of Illinois law in this context.

Facebook was also suspected at the time of heavy lobbying efforts towards defanging BIPA. One state senator proposed an amendment after the lawsuit was filed that would exclude digital images from BIPA coverage, which would of course have completely destroyed the case. It’s hard to imagine such a ridiculous proposal was the suggestion of anyone but the industry, which tends to regard the strong protections of the law in Illinois as quite superfluous.

As I noted in 2018, the Illinois Chamber of Commerce proposed the amendment, and a tech council there was chaired by Facebook’s own Manager of State Policy at the time. Facebook told me then that it had not taken any position on the amendment or spoken to any legislators about it.

In 2019 the case went to the 9th U.S. Circuit Court of Appeals, where Facebook was again rebuffed; the court concluded that “the development of face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests. Similar conduct is actionable at common law.”

Facebook’s request for a rehearing en banc, which is to say with the full complement of judges there present, was unanimously denied two months later.

At last, after some 5 years of this, Facebook decided to settle, a representative told TechCrunch, “as it was in the best interest of our community and our shareholders to move past this matter.” Obviously it admits to no wrongdoing.

The $550 million amount negotiated is “the largest all-cash privacy class action settlement to date,” according to law firm Edelson PC, one of three that represented the plaintiffs in the suit.

“Biometrics is one of the two primary battlegrounds, along with geolocation, that will define our privacy rights for the next generation,” said Edelson PC founder and CEO Jay Edelson in a press release. “We are proud of the strong team we had in place that had the resolve to fight this critically important case over the last five years. We hope and expect that other companies will follow Facebook’s lead and pay significant attention to the importance of our biometric information.”

 



Facebook’s dodgy defaults face more scrutiny in Europe

18:34 | 24 January

Italy’s Competition and Markets Authority has launched proceedings against Facebook for failing to fully inform users about the commercial uses it makes of their data.

At the same time a German court has today upheld a consumer group’s right to challenge the tech giant over data and privacy issues in the national courts.

Lack of transparency

The Italian authority’s action, which could result in a fine of €5 million for Facebook, follows an earlier decision by the regulator, in November 2018 — when it found the company had not been dealing plainly with users about the underlying value exchange involved in signing up to the ‘free’ service, and fined Facebook €5M for failing to properly inform users how their information would be used commercially.

In a press notice about its latest action, the watchdog notes Facebook has removed a claim from its homepage — which had stated that the service ‘is free and always will be’ — but finds users are still not being informed, “with clarity and immediacy”, about how the tech giant monetizes their data.

The Authority had prohibited Facebook from continuing what it dubs “deceptive practice” and ordered it to publish an amending declaration on its homepage in Italy, as well as on the Facebook app and on the personal page of each registered Italian user.

In a statement responding to the watchdog’s latest action, a Facebook spokesperson told us:

We are reviewing the Authority decision. We made changes last year — including to our Terms of Service — to further clarify how Facebook makes money. These changes were part of our ongoing commitment to give people more transparency and control over their information.

Last year Italy’s data protection agency also fined Facebook $1.1M — in that case for privacy violations attached to the Cambridge Analytica data misuse scandal.

Dodgy defaults

In separate but related news, a ruling by a German court today found that Facebook can continue to use the advertising slogan that its service is ‘free and always will be’ — on the grounds that it does not require users to hand over monetary payments in exchange for using the service.

A local consumer rights group, vzbv, had sought to challenge Facebook’s use of the slogan — arguing it’s misleading, given the platform’s harvesting of user data for targeted ads. But the court disagreed.

However that was only one of a number of data protection complaints filed by the group — 26 in all. And the Berlin court found in its favor on a number of other fronts.

Significantly vzbv has won the right to bring data protection related legal challenges within Germany even with the pan-EU General Data Protection Regulation in force — opening the door to strategic litigation by consumer advocacy bodies and privacy rights groups in what is a very pro-privacy market. 

This looks interesting because one of Facebook’s favored legal arguments in a bid to derail privacy challenges at an EU Member State level has been to argue those courts lack jurisdiction — given that its European HQ is sited in Ireland (and GDPR includes provision for a one-stop shop mechanism that pushes cross-border complaints to a lead regulator).

But this ruling looks like it will make it tougher for Facebook to funnel all data and privacy complaints via the heavily backlogged Irish regulator — which has, for example, been sitting on a GDPR complaint over forced consent by adtech giants (including Facebook) since May 2018.

The Berlin court also agreed with vzbv’s argument that Facebook’s privacy settings and T&Cs violate laws around consent — such as a location service being already activated in the Facebook mobile app; and a pre-ticked setting that made users’ profiles indexable by search engines by default.

The court also agreed that certain pre-formulated conditions in Facebook’s T&C do not meet the required legal standard — such as a requirement that users agree to their name and profile picture being used “for commercial, sponsored or related content”, and another stipulation that users agree in advance to all future changes to the policy.

Commenting in a statement, Heiko Dünkel from the law enforcement team at vzbv, said: “It is not the first time that Facebook has been convicted of careless handling of its users’ data. The Chamber of Justice has made it clear that consumer advice centers can take action against violations of the GDPR.”

We’ve reached out to Facebook for a response.

 



Dating and fertility apps among those snitching to “out of control” adtech, report finds

16:35 | 14 January

The latest report to warn that surveillance capitalism is out of control — and ‘free’ digital services can in fact be very costly to people’s privacy and rights — comes courtesy of the Norwegian Consumer Council which has published an analysis of how popular apps are sharing user data with the behavioral ad industry.

It suggests smartphone users have little hope of escaping adtech’s pervasive profiling machinery — short of not using a smartphone at all.

A majority of the apps that were tested for the report were found to transmit data to “unexpected third parties” — with users not being clearly informed about who was getting their information and what they were doing with it. Most of the apps also did not provide any meaningful options or on-board settings for users to prevent or reduce the sharing of data with third parties.

“The evidence keeps mounting against the commercial surveillance systems at the heart of online advertising,” the Council writes, dubbing the current situation “completely out of control, harming consumers, societies, and businesses”, and calling for curbs to prevalent practices in which app users’ personal data is broadcast and spread “with few restraints”. 

“The multitude of violations of fundamental rights are happening at a rate of billions of times per second, all in the name of profiling and targeting advertising. It is time for a serious debate about whether the surveillance-driven advertising systems that have taken over the internet, and which are economic drivers of misinformation online, is a fair trade-off for the possibility of showing slightly more relevant ads.

“The comprehensive digital surveillance happening across the adtech industry may lead to harm to both individuals, to trust in the digital economy, and to democratic institutions,” it also warns.

In the report app users’ data is documented being shared with tech giants such as Facebook, Google and Twitter — which operate their own mobile ad platforms and/or other key infrastructure related to the collection and sharing of smartphone users’ data for ad targeting purposes — but also with scores of other faceless entities that the average consumer is unlikely to have heard of.

The Council commissioned a data flow analysis of ten popular apps running on Google’s Android smartphone platform — generating a snapshot of the privacy black hole that mobile users inexorably tumble into when they try to go about their digital business, despite the existence (in Europe) of a legal framework that’s supposed to protect people by giving citizens a swathe of rights over their personal data.

Among the findings are a make-up filter app sharing the precise GPS coordinates of its users; ovulation-, period- and mood-tracking apps sharing users’ intimate personal data with Facebook and Google (among others); dating apps exchanging user data with each other, and also sharing with third parties sensitive user info like individuals’ sexual preferences (and real-time device specific tells such as sensor data from the gyroscope… ); and a games app for young children that was found to contain 25 embedded SDKs and which shared the Android Advertising ID of a test device with eight third parties.

The ten apps whose data flows were analyzed for the report are the dating apps Grindr, Happn, OkCupid, and Tinder; fertility/period tracker apps Clue and MyDays; makeup app Perfect365; religious app Muslim: Qibla Finder; children’s app My Talking Tom 2; and the keyboard app Wave Keyboard.

“Altogether, Mnemonic [the company which the Council commissioned to conduct the technical analysis] observed data transmissions from the apps to 216 different domains belonging to a large number of companies. Based on their analysis of the apps and data transmissions, they have identified at least 135 companies related to advertising. One app, Perfect365, was observed communicating with at least 72 different such companies,” the report notes.

“Because of the scope of tests, size of the third parties that were observed receiving data, and popularity of the apps, we regard the findings from these tests to be representative of widespread practices in the adtech industry,” it adds.

Aside from the usual suspect (ad)tech giants, less well-known entities seen receiving user data include location data brokers Fysical, Fluxloop, Placer, Places/Foursquare, Safegraph and Unacast; behavioral ad targeting players like Receptiv/Verve, Neura, Braze and LeanPlum; mobile app marketing analytics firms like AppsFlyer; and ad platforms and exchanges like AdColony, AT&T’s AppNexus, Bucksense, OpenX, PubNative, Smaato and Vungle.

In the report the Forbrukerrådet concludes that the pervasive tracking of smartphone users which underpins the behavioral ad industry is all but impossible for smartphone users to escape — even if they are able to locate an on-device setting to opt out of behavioral ads.

This is because multiple identifiers are being attached to them and their devices, and also because of frequent sharing/syncing of identifiers by adtech players across the industry. (It also points out that on the Android platform a setting where users can opt-out of behavioral ads does not actually obscure the identifier — meaning users have to take it on trust that adtech entities won’t just ignore their request and track them anyway.)
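
To make that last point concrete, here is a minimal Kotlin sketch (assuming the Google Play Services ads identifier library is available to the app) of how any SDK baked into an app can read both the Android Advertising ID and the user's 'limit ad tracking' preference. At the time of the report the ID was still returned even when the user had opted out; respecting the flag was left to the goodwill of the adtech vendor, and behaviour on newer Android versions may differ.

    import android.content.Context
    import com.google.android.gms.ads.identifier.AdvertisingIdClient

    // Minimal sketch: reading the Android Advertising ID plus the opt-out flag.
    // Must be called off the main thread (getAdvertisingIdInfo does blocking work).
    fun readAdIdentifier(context: Context) {
        val info = AdvertisingIdClient.getAdvertisingIdInfo(context)
        println("Advertising ID: ${info.id}")
        println("User opted out of ad personalisation: ${info.isLimitAdTrackingEnabled}")
    }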

The Council argues its findings suggest widespread breaches of Europe’s General Data Protection Regulation (GDPR), given that key principles of that pan-EU framework — such as data protection by design and default — are in stark conflict with the systematic, pervasive background profiling of app users it found (apps were, for instance, found sharing personal data by default, requiring users to actively seek out an obscure device setting to try to prevent being profiled).

“The extent of tracking and complexity of the adtech industry is incomprehensible to consumers, meaning that individuals cannot make informed choices about how their personal data is collected, shared and used. Consequently, the massive commercial surveillance going on throughout the adtech industry is systematically at odds with our fundamental rights and freedoms,” it also argues.

Where (user) consent is being relied upon as a legal basis to process personal data, the standard required by GDPR is that it must be informed, freely given and specific.

But the Council’s analysis of the apps found them sorely lacking on that front.

“In the cases described in this report, none of the apps or third parties appear to fulfil the legal conditions for collecting valid consent,” it writes. “Data subjects are not informed of how their personal data is shared and used in a clear and understandable way, and there are no granular choices regarding use of data that is not necessary for the functionality of the consumer-facing services.”

It also dismisses another possible legal base — known as legitimate interests — arguing app users “cannot have a reasonable expectation for the amount of data sharing and the variety of purposes their personal data is used for in these cases”.

The report points out that other forms of digital advertising (such as contextual advertising) which do not rely on third parties processing personal data are available — arguing that further undermines any adtech industry claims of ‘legitimate interests’ as a valid base for helping themselves to smartphone users’ data.

“The large amount of personal data being sent to a variety of third parties, who all have their own purposes and policies for data processing, constitutes a widespread violation of data subjects’ privacy,” the Council argues. “Even if advertising is necessary to provide services free of charge, these violations of privacy are not strictly necessary in order to provide digital ads. Consequently, it seems unlikely that the legitimate interests that these companies may claim to have can be demonstrated to override the fundamental rights and freedoms of the data subject.”

The suggestion, therefore, is that “a large number of third parties that collect consumer data for purposes such as behavioural profiling, targeted advertising and real-time bidding, are in breach of the General Data Protection Regulation”.

The report also discusses the harms attached to such widespread violation of privacy — pointing out risks such as discrimination and manipulation of vulnerable individuals, as well as chilling effects on speech, added fuel for ad fraud and the torching of trust in the digital economy, among other society-afflicting ills being fuelled by adtech’s obsession with profiling everyone…

Some of the harm of this data exploitation stems from significant knowledge and power asymmetries that render consumers powerless. The overarching lack of transparency of the system makes consumers vulnerable to manipulation, particularly when unknown companies know almost everything about the individual consumer. However, even if regular consumers had comprehensive knowledge of the technologies and systems driving the adtech industry, there would still be very limited ways to stop or control the data exploitation.

Since the number and complexity of actors involved in digital marketing is staggering, consumers have no meaningful ways to resist or otherwise protect themselves from the effects of profiling. These effects include different forms of discrimination and exclusion, data being used for new and unknowable purposes, widespread fraud, and the chilling effects of massive commercial surveillance systems. In the long run, these issues are also contributing to the erosion of trust in the digital industry, which may have serious consequences for the digital economy.

To shift what it dubs the “significant power imbalance between consumers and third party companies”, the Council calls for an end to the current practices of “extensive tracking and profiling” — either by companies changing their practices to “respect consumers’ rights”, or — where they won’t — urging national regulators and enforcement authorities to “take active enforcement measures, to establish legal precedent to protect consumers against the illegal exploitation of personal data”.

It’s fair to say that enforcement of GDPR remains a work in progress at this stage, some 20 months after the regulation came into force back in May 2018, with scores of cross-border complaints yet to culminate in a decision (though there have been a couple of interesting adtech- and consent-related enforcements in France).

We reached out to Ireland’s Data Protection Commission (DPC) and the UK’s Information Commissioner’s Office (ICO) for comment on the Council’s report. The Irish regulator has multiple investigations ongoing into various aspects of adtech and tech giants’ handling of online privacy, including a probe related to security concerns attached to Google’s ad exchange and the real-time bidding process which features in some programmatic advertising. It has previously suggested the first decisions from its hefty backlog of GDPR complaints will be coming early this year. But at the time of writing the DPC had not responded to our request for comment on the report.

A spokeswoman for the ICO — which last year put out its own warnings to the behavioral advertising industry, urging it to change its practices — sent us this statement, attributed to Simon McDougall, its executive director for technology and innovation, in which he says the regulator has been prioritizing engaging with the adtech industry over its use of personal data and has called for change itself — but which does not once mention the word ‘enforcement’…

Over the past year we have prioritised engagement with the adtech industry on the use of personal data in programmatic advertising and real-time bidding.

Along the way we have seen increased debate and discussion, including reports like these, which factor into our approach where appropriate. We have also seen a general acknowledgment that things can’t continue as they have been.

Our 2019 update report into adtech highlights our concerns, and our revised guidance on the use of cookies gives greater clarity over what good looks like in this area.

Whilst industry has welcomed our report and recognises change is needed, there remains much more to be done to address the issues. Our engagement has substantiated many of the concerns we raised and, at the same time, we have also made some real progress.

Throughout the last year we have been clear that if change does not happen we would consider taking action. We will be saying more about our next steps soon – but as is the case with all of our powers, any future action will be proportionate and risk-based.

 



At CES, companies slowly start to realize that privacy matters

19:12 | 11 January

Every year, Consumer Electronics Show attendees receive a branded backpack, but this year’s edition was special: made out of transparent plastic, the bag let its contents be seen without the wearer needing to unzip it. It wasn’t just a fashion decision. Over the years, security has become more intense and cumbersome, but attendees with transparent backpacks didn’t have to open their bags when entering.

That cheap backpack is a metaphor for an ongoing debate — how many of us are willing to exchange privacy for convenience?

Privacy was on everyone’s mind at this year’s CES in Las Vegas, from CEOs to policymakers, PR agencies and the people in charge of programming the panels. For the first time in decades, Apple had a formal presence at the event; Senior Director of Global Privacy Jane Horvath spoke on a privacy-focused panel alongside other industry privacy leads.

 



Facebook won’t ban political ads, prefers to keep screwing democracy

17:48 | 9 January

It’s 2020 — a key election year in the US — and Facebook is doubling down on its policy of letting people pay it to fuck around with democracy.

Despite trenchant criticism — including from US lawmakers accusing Facebook’s CEO to his face of damaging American democracy — the company is digging in, announcing as much today by reiterating its defence of continuing to accept money to run microtargeted political ads.

Instead of banning political ads Facebook is trumpeting a few tweaks to the information it lets users see about political ads — claiming it’s boosting “transparency” and “controls” while leaving its users vulnerable to default settings that offer neither.  

Political ads running on Facebook are able to be targeted at individuals’ preferences as a result of the company’s pervasive tracking and profiling of Internet users. And ethical concerns about microtargeting led the UK’s data protection watchdog to call in 2018 for a pause on the use of digital ad tools like Facebook by political campaigns — warning of grave risks to democracy.

Facebook isn’t for pausing political microtargeting, though, even as various elements of its data-gathering activities are subject to privacy and consent complaints, regulatory scrutiny and legal challenge in Europe under regional data protection legislation.

Instead, the company made it clear last fall that it won’t fact-check political ads, nor block political messages that violate its speech policies — thereby giving politicians carte blanche to run hateful lies, if they so choose.

Facebook’s algorithms also demonstrably select for maximum eyeball engagement, making it simply the ‘smart choice’ for the modern digitally campaigning politician to run outrageous BS on Facebook — as long time Facebook exec Andrew Bosworth recently pointed out in an internal posting that leaked in full to the NYT.

Facebook founder Mark Zuckerberg’s defence of his social network’s political ads policy boils down to repeatedly claiming ‘it’s all free speech man’ (we paraphrase).

This is an entirely nuance-free argument that comedian Sacha Baron Cohen expertly demolished last year, pointing out that: “Under this twisted logic if Facebook were around in the 1930s it would have allowed Hitler to post 30-second ads on his solution to the ‘Jewish problem.’”

Facebook responded to the take-down with a denial that hate speech exists on its platform since it has a policy against it — per its typical crisis PR playbook. And it’s more of the same selectively self-serving arguments being dispensed by Facebook today.

In a blog post attributed to its director of product management, Rob Leathern, it expends more than 1,000 words on why it’s still not banning political ads (it would be bad for advertisers wanting to reach “key audiences”, is the non-specific claim) — including making a diversionary call for regulators to set ad standards, thereby passing the buck on ‘democratic accountability’ to lawmakers (whose electability might very well depend on how many Facebook ads they run…), while spinning cosmetic, made-for-PR tweaks to its ad settings and what’s displayed in an ad archive that most Facebook users will never have heard of as “expanded transparency” and “more control”. 

In fact these tweaks do nothing to reform the fundamental problem of damaging defaults.

The onus remains on Facebook users to do the leg work on understanding what its platform is pushing at their eyeballs and why.

Even as the ‘extra’ info now being drip-fed to the Ad Library is still highly fuzzy (“We are adding ranges for Potential Reach, which is the estimated target audience size for each political, electoral or social issue ad so you can see how many people an advertiser wanted to reach with every ad,” as Facebook writes of one tweak.)

The new controls similarly require users to delve into complex settings menus in order to avail themselves of inherently incremental limits — such as an option that will let people opt into seeing “fewer” political and social issue ads. (Fewer is naturally relative, ergo the scale of the reduction remains entirely within Facebook’s control — so it’s more meaningless ‘control theatre’ from the lord of dark pattern design. Why can’t people switch off political and issue ads entirely?)

Another incremental setting lets users “stop seeing ads based on an advertiser’s Custom Audience from a list”.

But just imagine trying to explain WTF that means to your parents or grandparents — let alone an average Internet user actually being able to track down the ‘control’ and exercise any meaningful agency over the political junk ads they’re being exposed to on Facebook.

It is, to quote Baron Cohen, “bullshit”.

Nor are outsiders the only ones calling out Zuckerberg on his BS and “twisted logic”: A number of Facebook’s own employees warned in an open letter last year that allowing politicians to lie in Facebook ads essentially weaponizes the platform.

They also argued that the platform’s advanced targeting and behavioral tracking tools make it “hard for people in the electorate to participate in the public scrutiny that we’re saying comes along with political speech” — accusing the company’s leadership of making disingenuous arguments in defence of a toxic, anti-democratic policy. 

Nothing in what Facebook has announced today resets the anti-democratic asymmetry inherent in the platform’s relationship to its users.

Facebook users — and democratic societies — remain, by default, prey to self-interested political actors, thanks to policies that dress up the misappropriation of ‘free speech’ as a cloak for Facebook’s unfettered exploitation of individual attention as fuel for a propaganda-as-a-service business.

Yet other policy positions are available.

Twitter announced a total ban on political ads last year — and while the move doesn’t resolve wider disinformation issues attached to its platform, the decision to bar political ads has been widely lauded as a positive, standard-setting example.

Google also followed suit by announcing a ban on “demonstrably false claims” in political ads. It also put limits on the targeting terms that can be used for political advertising buys that appear in search, on display ads and on YouTube.

Still Facebook prefers to exploit “the absence of regulation”, as its blog post puts it, to not do the right thing and keep sticking two fingers up at democratic accountability — because not applying limits on behavioral advertising best serves its business interests. Screw democracy.

“We have based [our policies] on the principle that people should be able to hear from those who wish to lead them, warts and all, and that what they say should be scrutinized and debated in public,” Facebook writes, ignoring the fact that some of its own staff already pointed out the sketchy hypocrisy of trying to claim that complex ad targeting tools and techniques are open to public scrutiny.

 


