Blog of the website «TechCrunch»

Main article: European Union


Facebook’s dodgy defaults face more scrutiny in Europe

18:34 | 24 January

Italy’s Competition and Markets Authority has launched proceedings against Facebook for failing to fully inform users about the commercial uses it makes of their data.

At the same time a German court has today upheld a consumer group’s right to challenge the tech giant over data and privacy issues in the national courts.

Lack of transparency

The Italian authority’s action, which could result in a fine of €5 million for Facebook, follows an earlier decision by the regulator, in November 2018 — when it found the company had not been dealing plainly with users about the underlying value exchange involved in signing up to the ‘free’ service, and fined Facebook €5M for failing to properly inform users how their information would be used commercially.

In a press notice about its latest action, the watchdog notes Facebook has removed a claim from its homepage — which had stated that the service ‘is free and always will be’ — but finds users are still not being informed, “with clarity and immediacy”, about how the tech giant monetizes their data.

The Authority had prohibited Facebook from continuing what it dubs “deceptive practice” and ordered it to publish an amending declaration on its homepage in Italy, as well as on the Facebook app and on the personal page of each registered Italian user.

In a statement responding to the watchdog’s latest action, a Facebook spokesperson told us:

We are reviewing the Authority decision. We made changes last year — including to our Terms of Service — to further clarify how Facebook makes money. These changes were part of our ongoing commitment to give people more transparency and control over their information.

Last year Italy’s data protection agency also fined Facebook $1.1M — in that case for privacy violations attached to the Cambridge Analytica data misuse scandal.

Dodgy defaults

In separate but related news, a ruling by a German court today found that Facebook can continue to use the advertising slogan that its service is ‘free and always will be’ — on the grounds that it does not require users to hand over monetary payments in exchange for using the service.

A local consumer rights group, vzbv, had sought to challenge Facebook’s use of the slogan — arguing it’s misleading, given the platform’s harvesting of user data for targeted ads. But the court disagreed.

However that was only one of a number of data protection complaints filed by the group — 26 in all. And the Berlin court found in its favor on a number of other fronts.

Significantly, vzbv has won the right to bring data protection related legal challenges within Germany even with the pan-EU General Data Protection Regulation in force — opening the door to strategic litigation by consumer advocacy bodies and privacy rights groups in what is a very pro-privacy market.

This looks interesting because one of Facebook’s favored legal arguments in a bid to derail privacy challenges at an EU Member State level has been to argue those courts lack jurisdiction — given that its European HQ is sited in Ireland (and GDPR includes provision for a one-stop shop mechanism that pushes cross-border complaints to a lead regulator).

But this ruling looks like it will make it tougher for Facebook to funnel all data and privacy complaints via the heavily backlogged Irish regulator — which has, for example, been sitting on a GDPR complaint over forced consent by adtech giants (including Facebook) since May 2018.

The Berlin court also agreed with vzbv’s argument that Facebook’s privacy settings and T&Cs violate laws around consent — such as a location service being already activated in the Facebook mobile app; and a pre-ticked setting that made users’ profiles indexable by search engines by default.

The court also agreed that certain pre-formulated conditions in Facebook’s T&C do not meet the required legal standard — such as a requirement that users agree to their name and profile picture being used “for commercial, sponsored or related content”, and another stipulation that users agree in advance to all future changes to the policy.

Commenting in a statement, Heiko Dünkel from the law enforcement team at vzbv, said: “It is not the first time that Facebook has been convicted of careless handling of its users’ data. The Chamber of Justice has made it clear that consumer advice centers can take action against violations of the GDPR.”

We’ve reached out to Facebook for a response.

 



London’s Met Police switches on live facial recognition, flying in face of human rights concerns

16:07 | 24 January

While EU lawmakers are mulling a temporary ban on the use of facial recognition to safeguard individuals’ rights, as part of a risk-focused plan to regulate AI, London’s Met Police has today forged ahead with deploying the privacy-hostile technology — flipping the switch on operational use of live facial recognition in the UK capital.

The deployment comes after a multi-year period of trials by the Met and police in South Wales.

The Met says its use of the controversial technology will be targeted to “specific locations… where intelligence suggests we are most likely to locate serious offenders”.

“Each deployment will have a bespoke ‘watch list’, made up of images of wanted individuals, predominantly those wanted for serious and violent offences,” it adds.

It also claims cameras will be “clearly signposted”, adding that officers “deployed to the operation will hand out leaflets about the activity”.

“At a deployment, cameras will be focused on a small, targeted area to scan passers-by,” it writes. “The technology, which is a standalone system, is not linked to any other imaging system, such as CCTV, body worn video or ANPR.”

The biometric system is being provided to the Met by Japanese IT and electronics giant, NEC.

In a press statement, assistant commissioner Nick Ephgrave claimed the force is taking a balanced approach to using the controversial tech.

“We all want to live and work in a city which is safe: the public rightly expect us to use widely available technology to stop criminals. Equally I have to be sure that we have the right safeguards and transparency in place to ensure that we protect people’s privacy and human rights. I believe our careful and considered deployment of live facial recognition strikes that balance,” he said.

London has seen a rise in violent crime in recent years, with murder rates hitting a ten-year peak last year.

The surge in violent crime has been linked to cuts to policing services — although the new Conservative government has pledged to reverse cuts enacted by earlier Tory administrations.

The Met says its hope is that the AI-powered tech will help it tackle serious crime, including serious violence, gun and knife crime and child sexual exploitation, and “help protect the vulnerable”.

However its phrasing is not a little ironic, given that facial recognition systems can be prone to racial bias, for example, owing to factors such as bias in data-sets used to train AI algorithms.

So in fact there’s a risk that police-use of facial recognition could further harm vulnerable groups who already face a disproportionate risk of inequality and discrimination.

Yet the Met’s PR doesn’t mention the risk of the AI tech automating bias.

Instead it takes pains to couch the technology as an “additional tool” to assist its officers.

“This is not a case of technology taking over from traditional policing; this is a system which simply gives police officers a ‘prompt’, suggesting “that person over there may be the person you’re looking for”, it is always the decision of an officer whether or not to engage with someone,” it adds.

While the use of a new tech tool may start with small deployments, as is being touted here, the history of software development underlines how potential to scale is readily baked in.

A ‘targeted’ small-scale launch also prepares the ground for London’s police force to push for wider public acceptance of a highly controversial and rights-hostile technology via a gradual building out process. Aka surveillance creep.

On the flip side, the text of the draft of an EU proposal for regulating AI which leaked last week — floating the idea of a temporary ban on facial recognition in public places — noted that a ban would “safeguard the rights of individuals”. Although it’s not yet clear whether the Commission will favor such a blanket measure, even temporarily.

UK rights groups have reacted with alarm to the Met’s decision to ignore concerns about facial recognition.

Liberty accused the force of ignoring the conclusion of a report it commissioned during an earlier trial of the tech — which it says concluded the Met had failed to consider human rights impacts.

It also suggested such use would not meet key legal requirements.

“Human rights law requires that any interference with individuals’ rights be in accordance with the law, pursue a legitimate aim, and be ‘necessary in a democratic society’,” the report notes, suggesting the Met’s earlier trials of facial recognition tech “would be held unlawful if challenged before the courts”.

A petition set up by Liberty to demand a stop to facial recognition in public places has passed 21,000 signatures.

Discussing the legal framework around facial recognition and law enforcement last week, Dr Michael Veale, a lecturer in digital rights and regulation at UCL, told us that in his view the EU’s data protection framework, GDPR, forbids facial recognition by private companies “in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate”.

A UK man who challenged a Welsh police force’s trial of facial recognition has a pending appeal after losing the first round of a human rights challenge. Although in that case the challenge pertains to police use of the tech — rather than, as in the Met’s case, a private company (NEC) providing the service to the police.

 



Google’s Sundar Pichai doesn’t want you to be clear-eyed about AI’s dangers

17:11 | 20 January

Alphabet and Google CEO, Sundar Pichai, is the latest tech giant kingpin to make a public call for AI to be regulated while simultaneously encouraging lawmakers towards a dilute enabling framework that does not put any hard limits on what can be done with AI technologies.

In an op-ed published in today’s Financial Times, Pichai makes a headline-grabbing call for artificial intelligence to be regulated. But his pitch injects a suggestive undercurrent that puffs up the risk for humanity of not letting technologists get on with business as usual and apply AI at population-scale — with the Google chief claiming: “AI has the potential to improve billions of lives, and the biggest risk may be failing to do so” — thereby seeking to frame ‘no hard limits’ as actually the safest option for humanity.

Simultaneously the pitch downplays any negatives that might cloud the greater good that Pichai implies AI will unlock — presenting “potential negative consequences” as simply the inevitable and necessary price of technological progress.

It’s all about managing the level of risk, is the leading suggestion, rather than questioning outright whether the use of a hugely risk-laden technology such as facial recognition should actually be viable in a democratic society.

“Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents,” Pichai writes, raiding history for a self-serving example while ignoring the vast climate costs of combustion engines (and the resulting threat now posed to the survival of countless species on Earth).

“The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread,” he goes on. “These lessons teach us that we need to be clear-eyed about what could go wrong.”

For “clear-eyed” read: Accepting of the technology-industry’s interpretation of ‘collateral damage’. (Which, in the case of misinformation and Facebook, appears to run to feeding democracy itself into the ad-targeting meat-grinder.)

Meanwhile, not at all mentioned in Pichai’s discussion of AI risks: The concentration of monopoly power that artificial intelligence appears to be very good at supercharging.

Funny that.

Of course it’s hardly surprising a tech giant that, in recent years, rebranded an entire research division to ‘Google AI’ — and has previously been called out by some of its own workforce over a project involving applying AI to military weapons technology — should be lobbying lawmakers to set AI ‘limits’ that are as dilute and abstract as possible.

The only thing that’s better than zero regulation is laws made by useful idiots who’ve fallen hook, line and sinker for industry-expounded false dichotomies — such as those claiming it’s ‘innovation or privacy’.

Pichai’s intervention also comes at a strategic moment, with US lawmakers eyeing AI regulation and the White House seemingly throwing itself into alignment with tech giants’ desires for ‘innovation-friendly’ rules which make their business easier. (To wit: This month White House CTO Michael Kratsios warned in a Bloomberg op-ed against “preemptive, burdensome or duplicative rules that would needlessly hamper AI innovation and growth”.)

The new European Commission, meanwhile, has been sounding a firmer line on both AI and big tech.

It has made tech-driven change a key policy priority, with president Ursula von der Leyen making public noises about reining in tech giants. She has also committed to publish “a coordinated European approach on the human and ethical implications of Artificial Intelligence” within her first 100 days in office. (She took up the post on December 1, 2019 so the clock is ticking.)

Last week a leaked draft of the Commission proposals for pan-EU AI regulation suggested it’s leaning towards a relatively light-touch approach (albeit, the European version of light touch is considerably more involved and interventionist than anything born in a Trump White House, clearly) — although the paper does float the idea of a temporary ban on the use of facial recognition technology in public places.

The paper notes that such a ban would “safeguard the rights of individuals, in particular against any possible abuse of the technology” — before arguing against such a “far-reaching measure that might hamper the development and uptake of this technology”, in favor of relying on provisions in existing EU law (such as the EU data protection framework, GDPR), in addition to relevant tweaks to current product safety and liability laws.

While it’s not yet clear which way the Commission will jump on regulating AI, even the lightish-touch version it’s considering would likely be a lot more onerous than Pichai would like.

In the op-ed he calls for what he couches as “sensible regulation” — aka taking a “proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities”.

For “social opportunities” read: The plentiful ‘business opportunities’ Google is spying — assuming the hoped-for vast additional revenue scale it can get by supercharging expansion of AI-powered services into all sorts of industries and sectors (from health to transportation to everywhere else in between) isn’t derailed by hard legal limits on where AI can actually be applied.

“Regulation can provide broad guidance while allowing for tailored implementation in different sectors,” Pichai urges, setting out a preference for enabling “principles” and post-application “reviews”, to keep the AI spice flowing.

The op-ed only touches very briefly on facial recognition — despite the FT editors choosing to illustrate it with an image of the tech. Here Pichai again seeks to reframe the debate around what is, by nature, an extremely rights-hostile technology — talking only in passing of “nefarious uses” of facial recognition.

Of course this wilfully obfuscates the inherent risks of letting blackbox machines make algorithmic guesses at identity every time a face happens to pass through a public space.

You can’t hope to protect people’s privacy in such a scenario. Many other rights are also at risk, depending on what else the technology is being used for. So, really, any use of facial recognition is laden with individual and societal risk.

But Pichai is seeking to put blinkers on lawmakers. He doesn’t want them to see inherent risks baked into such a potent and powerful technology — pushing them towards only a narrow, ill-intended subset of “nefarious” and “negative” AI uses and “consequences” as being worthy of “real concerns”. 

And so he returns to banging the drum for “a principled and regulated approach to applying AI” [emphasis ours] — putting the emphasis on regulation that, above all, gives the green light for AI to be applied.

What technologists fear most here is rules that tell them when artificial intelligence absolutely cannot apply.

Ethics and principles are, to a degree, mutable concepts — and ones which the tech giants have become very practiced at claiming as their own, for PR purposes, including by attaching self-styled ‘guard-rails’ to their own AI operations. (But of course there are no actual legal binds there.)

At the same time data-mining giants like Google are very smooth operators when it comes to gaming existing EU rules around data protection, such as by infesting their user-interfaces with confusing dark patterns that push people to click or swipe their rights away.

But a ban on applying certain types of AI would change the rules of the game. Because it would put society in the driving seat.

Laws that contained at least a moratorium on certain “dangerous” applications of AI — such as facial recognition technology, or autonomous weapons like the drone-based system Google was previously working on — have been called for by some far-sighted regulators.

And a ban would be far harder for platform giants to simply bend to their will.

 



EU lawmakers are eyeing risk-based rules for AI, per leaked white paper

22:12 | 17 January

The European Commission is considering a temporary ban on the use of facial recognition technology, according to a draft proposal for regulating artificial intelligence obtained by Euractiv.

Creating rules to ensure AI is ‘trustworthy and human’ has been an early flagship policy promise of the new Commission, led by president Ursula von der Leyen.

But the leaked proposal suggests the EU’s executive body is in fact leaning towards tweaks of existing rules and sector/app specific risk-assessments and requirements, rather than anything as firm as blanket sectoral requirements or bans.

The leaked Commission white paper floats the idea of a three-to-five-year period in which the use of facial recognition technology could be prohibited in public places — to give EU lawmakers time to devise ways to assess and manage risks around the use of the technology, such as to people’s privacy rights or the risk of discriminatory impacts from biased algorithms.

“This would safeguard the rights of individuals, in particular against any possible abuse of the technology,” the Commission writes, adding that: “It would be necessary to foresee some exceptions, notably for activities in the context of research and development and for security purposes.”

However the text raises immediate concerns about imposing even a time-limited ban — which is described as “a far-reaching measure that might hamper the development and uptake of this technology” — and the Commission goes on to state that its preference “at this stage” is to rely on existing EU data protection rules, aka the General Data Protection Regulation (GDPR).

The white paper contains a number of options the Commission is still considering for regulating the use of artificial intelligence more generally.

These range from voluntary labelling; to imposing sectorial requirements for the public sector (including on the use of facial recognition tech); to mandatory risk-based requirements for “high-risk” applications (such as within risky sectors like healthcare, transport, policing and the judiciary, as well as for applications which can “produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage”); to targeted amendments to existing EU product safety and liability legislation.

The proposal also emphasizes the need for an oversight governance regime to ensure rules are followed — though the Commission suggests leaving it open to Member States to choose whether to rely on existing governance bodies for this task or create new ones dedicated to regulating AI.

Per the draft white paper, the Commission says its preference for regulating AI is option 3 combined with options 4 & 5: Aka mandatory risk-based requirements on developers (of whatever sub-set of AI apps are deemed “high-risk”) that could result in some “mandatory criteria”, combined with relevant tweaks to existing product safety and liability legislation, and an overarching governance framework.

Hence it appears to be leaning towards a relatively light-touch approach, focused on “building on existing EU legislation” and creating app-specific rules for a sub-set of “high-risk” AI apps/uses — and which likely won’t stretch to even a temporary ban on facial recognition technology.

Much of the white paper is also taken up with discussion of strategies about “supporting the development and uptake of AI” and “facilitating access to data”.

“This risk-based approach would focus on areas where the public is at risk or an important legal interest is at stake,” the Commission writes. “This strictly targeted approach would not add any new additional administrative burden on applications that are deemed ‘low-risk’.”

EU commissioner Thierry Breton, who oversees the internal market portfolio, expressed resistance to creating rules for artificial intelligence last year — telling the EU parliament then that he “won’t be the voice of regulating AI”.

For “low-risk” AI apps, the white paper notes that provisions in the GDPR which give individuals the right to receive information about automated processing and profiling, and set a requirement to carry out a data protection impact assessment, would apply.

Albeit the regulation only defines limited rights and restrictions over automated processing — in instances where there’s a legal or similarly significant effect on the people involved. So it’s not clear how extensively it would in fact apply to “low-risk” apps.

If it’s the Commission’s intention to also rely on GDPR to regulate higher risk stuff — such as, for example, police forces’ use of facial recognition tech — instead of creating a more explicit sectoral framework to restrict their use of highly privacy-hostile AI technologies, it could exacerbate an already confusing legislative picture where law enforcement is concerned, according to Dr Michael Veale, a lecturer in digital rights and regulation at UCL.

“The situation is extremely unclear in the area of law enforcement, and particularly the use of public private partnerships in law enforcement. I would argue the GDPR in practice forbids facial recognition by private companies in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate. However, the merchants of doubt at facial recognition firms wish to sow heavy uncertainty into that area of law to legitimise their businesses,” he told TechCrunch.

“As a result, extra clarity would be extremely welcome,” Veale added. “The issue isn’t restricted to facial recognition however: Any type of biometric monitoring, such as voice or gait recognition, should be covered by any ban, because in practice they have the same effect on individuals.”

An advisory body set up to advise the Commission on AI policy set out a number of recommendations in a report last year — including suggesting a ban on the use of AI for mass surveillance and social credit scoring systems of citizens.

But its recommendations were criticized by privacy and rights experts for falling short by failing to grasp wider societal power imbalances and structural inequality issues which AI risks exacerbating — including by supercharging existing rights-eroding business models.

In a paper last year Veale dubbed the advisory body’s work a “missed opportunity” — writing that the group “largely ignore infrastructure and power, which should be one of, if not the most, central concern around the regulation and governance of data, optimisation and ‘artificial intelligence’ in Europe going forwards”.

 



Privacy experts slam UK’s “disastrous” failure to tackle unlawful adtech

17:08 | 17 January

The UK’s data protection regulator has been slammed by privacy experts for once again failing to take enforcement action over systematic breaches of the law linked to behaviorally targeted ads — despite warning last summer that the adtech industry is out of control.

The Information Commissioner’s Office (ICO) has also previously admitted it suspects the real-time bidding (RTB) system involved in some programmatic online advertising to be unlawfully processing people’s sensitive information. But rather than take any enforcement action against companies it suspects of breaking the law, it has today issued another mildly worded blog post — in which it frames what it admits is a “systemic problem” as fixable via (yet more) industry-led “reform”.

Yet it’s exactly such industry-led self-regulation that’s created the unlawful adtech mess in the first place, data protection experts warn.

The pervasive profiling of Internet users by the adtech ‘data industrial complex’ has been coming under wider scrutiny by lawmakers and civic society in recent years — with sweeping concerns being raised in parliaments around the world that individually targeted ads provide a conduit for discrimination, exploit the vulnerable, accelerate misinformation and undermine democratic processes as a consequence of platform asymmetries and the lack of transparency around how ads are targeted.

In Europe, which has a comprehensive framework of data protection rights, the core privacy complaint is that these creepy individually targeted ads rely on a systemic violation of people’s privacy from what amounts to industry-wide, Internet-enabled mass surveillance — which also risks the security of people’s data at vast scale.

It’s now almost a year and a half since the ICO was the recipient of a major complaint into RTB — filed by Dr Johnny Ryan of private browser Brave; Jim Killock, director of the Open Rights Group; and Dr Michael Veale, a data and policy lecturer at University College London — laying out what the complainants described then as “wide-scale and systemic” breaches of Europe’s data protection regime.

The complaint — which has also been filed with other EU data protection agencies — argues that the systematic broadcasting of people’s personal data to bidders in the adtech chain is inherently insecure and thereby contravenes Europe’s General Data Protection Regulation (GDPR), which stipulates that personal data be processed “in a manner that ensures appropriate security of the personal data”.

The regulation also requires data processors to have a valid legal basis for processing people’s information in the first place — and RTB fails that test, per privacy experts — either if ‘consent’ is claimed (given the sheer number of entities and volumes of data being passed around, which means it’s not credible to achieve GDPR’s ‘informed, specific and freely given’ threshold for consent to be valid); or ‘legitimate interests’ — which requires data processors carry out a number of balancing assessment tests to demonstrate it does actually apply.

“We have reviewed a number of justifications for the use of legitimate interests as the lawful basis for the processing of personal data in RTB. Our current view is that the justification offered by organisations is insufficient,” writes Simon McDougall, the ICO’s executive director of technology and innovation, developing a warning over the industry’s rampant misuse of legitimate interests to try to pass off RTB’s unlawful data processing as legit.

The ICO also isn’t exactly happy about what it’s found adtech doing on the Data Protection Impact Assessment front — saying, in so many words, that it’s come across widespread industry failure to actually, er, assess impacts.

“The Data Protection Impact Assessments we have seen have been generally immature, lack appropriate detail, and do not follow the ICO’s recommended steps to assess the risk to the rights and freedoms of the individual,” writes McDougall.

“We have also seen examples of basic data protection controls around security, data retention and data sharing being insufficient,” he adds.

Yet — again — despite fresh admissions of adtech’s lawfulness problem the regulator is choosing more stale inaction.

In the blog post McDougall does not rule out taking “formal” action at some point — but there’s only a vague suggestion of such activity being possible, and zero timeline for “develop[ing] an appropriate regulatory response”, as he puts it. (His preferred ‘E’ word in the blog is ‘engagement’; you’ll only find the word ‘enforcement’ in the footer link on the ICO’s website.)

“We will continue to investigate RTB. While it is too soon to speculate on the outcome of that investigation, given our understanding of the lack of maturity in some parts of this industry we anticipate it may be necessary to take formal regulatory action and will continue to progress our work on that basis,” he adds.

McDougall also trumpets some incremental industry fiddling — such as trade bodies agreeing to update their guidance — as somehow relevant to turning the tanker in a fundamentally broken system.

(Trade body, the Internet Advertising Bureau’s UK branch, has responded to developments with an upbeat note from its head of policy and regulatory affairs, Christie Dennehy-Neil, who lauds the ICO’s engagement as “a constructive process”, claiming: “We have made good progress” — before going on to urge its members and the wider industry to implement “the actions outlined in our response to the ICO” and “deliver meaningful change”. The statement climaxes with: “We look forward to continuing to engage with the ICO as this process develops.”)

McDougall also points to Google removing content categories from its RTB platform from next month (a move it announced months back, in November) as an important development; and seizes on the tech giant’s recent announcement of a proposal to phase out support for third party cookies within the next two years as ‘encouraging’.

Privacy experts have responded with facepalmed outrage to yet another can-kicking exercise by the UK regulator — warning that cosmetic tweaks to adtech won’t fix a system that’s designed to feast off unlawful and insecure high velocity background trading of Internet users’ personal data.

“When an industry is premised and profiting from clear and entrenched illegality that breach individuals’ fundamental rights, engagement is not a suitable remedy,” said UCL’s Veale. “The ICO cannot continue to look back at its past precedents for enforcement action, because it is exactly that timid approach that has led us to where we are now.”

The trio behind the RTB complaints (which includes Veale) have also issued a scathing collective response to more “regulatory ambivalence” — denouncing the lack of any “substantive action to end the largest data breach ever recorded in the UK”.

“The ‘Real-Time Bidding’ data breach at the heart of RTB market exposes every person in the UK to mass profiling, and the attendant risks of manipulation and discrimination,” they warn. “Regulatory ambivalence cannot continue. The longer this data breach festers, the deeper the rot sets in and the further our data gets exploited. This must end. We are considering all options to put an end to the systemic breach, including direct challenges to the controllers and judicial oversight of the ICO.”

Wolfie Christl, a privacy researcher who focuses on adtech and a contributor to a recent study looking at how extensively popular apps are sharing user data with advertisers, dubbed the ICO’s response “disastrous”.

“Last summer the ICO stated in their report that millions of people were affected by thousands of companies’ GDPR violations. I was sceptical when they announced they would give the industry six more months without enforcing the law. My impression is they are trying to find a way to impose cosmetic changes and keep the data industry happy rather than acting on their own findings and putting an end to the ubiquitous data misuse in today’s digital marketing, which should have happened years ago. The ICO seems to prioritize appeasing the industry over the rights of data subjects, and this is disastrous,” he told us.

“The way data-driven online marketing currently works is illegal at scale and it needs to be stopped from happening,” Christl added. “Each day EU data protection authorities allow these practices to continue further violates people’s rights and freedoms and perpetuates a toxic digital economy.

“This undermines the GDPR and generally trust in tech, perpetuates legal uncertainty for businesses, and punishes companies who comply and create privacy-respecting services and business models. 20 months after the GDPR came into full force, it is still not enforced in major areas. We still see large-scale misuse of personal information all over the digital world. There is no GDPR enforcement against the tech giants and there is no enforcement against thousands of data companies beyond the large platforms. It seems that data protection authorities across the EU are either not able — or not willing — to stop many kinds of GDPR violations conducted for business purposes. We won’t see any change without massive fines and data processing bans. EU member states and the EU commission must act.”

 



Bolt raises €50M in venture debt from the EU to expand its ride-hailing business

14:30 | 16 January

Bolt, the billion-dollar startup out of Estonia that’s building a ride-hailing, scooter and food delivery business across Europe and Africa, has picked up a tranche of funding in its bid to take on Uber and the rest in the world of on-demand transportation.

The company has picked up €50 million (about $56 million) from the European Investment Bank to continue developing its technology and safety features, as well as to expand newer areas of its business such as food delivery and personal transport like e-scooters.

With this latest money, Bolt has raised over €250 million in funding since opening for business in 2013. As of its last equity round in July 2019 (when it raised $67 million), it was valued at over $1 billion, which Bolt has confirmed to me remains the valuation here.

Bolt further said that its service now has over 30 million users in 150 cities and 35 countries and is profitable in two-thirds of its markets.

“Bolt is a good example of European excellence in tech and innovation. As you say, to stand still is to go backwards, and Bolt is never standing still,” said The EIB’s Vice President Alexander Stubb in a statement. “The Bank is very happy to support the company in improving its services, as well as allowing it to branch out into new service fields. In other words, we’re fully on board!”

The EIB is the non-profit, long-term lending arm of the European Union, and this financing comes in the form of a quasi-equity facility.

Also known as venture debt, the financing is structured as a loan, where repayment terms are based on a percentage of future revenue streams, and ownership is not diluted. The funding is backed in turn by the European Fund for Strategic Investments, as part of a bigger strategy to boost promising companies, and specifically riskier startups, in the tech industry. It expects to make and spur some €458.8 billion in investments across 1 million startups and SMEs.

Opting for a “quasi-equity” loan instead of a straight equity or debt investment is attractive to Bolt for a couple of reasons. One is the fact that the funding comes without ownership dilution. Two is the endorsement and support of the EU itself, in a market category where tech disruptors have been known to run afoul of regulators and lawmakers, in part because of the ubiquity and nature of the transportation/mobility industry.

“Mobility is one of the areas where Europe will really benefit from a local champion who shares the values of European consumers and regulators,” said Martin Villig, the co-founder of Bolt, in a statement. “Therefore, we are thrilled to have the European Investment Bank join the ranks of Bolt’s backers as this enables us to move faster towards serving many more people in Europe.”

(Butting heads with authorities is something that Bolt is no stranger to: it tried to enter the lucrative London taxi market through a backdoor to bypass the waiting time to get a license. It really didn’t work, and the company had to wait another 21 months to come to London doing it by the book. In its first six months of operation in London, the company has picked up 1.5 million customers.)

While private VCs account for the majority of startup funding, backing from government groups is an interesting and strategic route for tech companies that are making waves in large industries that sit adjacent to technology. Before it was acquired by PayPal, iZettle also picked up a round of funding from the EIB, specifically to invest in its AI R&D. Navya, the self-driving bus and shuttle startup, has also raised money from the EIB in the past.

One of the big issues with on-demand transportation companies has been their safety record — indeed, this is at the center of Uber’s latest scuffle in Europe, where London’s transport regulator has rejected a license renewal for the company over concerns about Uber’s safety record. (Uber is appealing and, while it does, it’s business as usual.)

So it’s no surprise that with this funding, Bolt says that it will be specifically using the money to develop technology to “improve the safety, reliability and sustainability of its services while maintaining the high efficiency of the company’s operations.”

Bolt is one of a group of companies that have been hatched out of Estonia, which has worked to position itself as a leader in Europe’s tech industry as part of its own economic regeneration in the decades after existing as part of the Soviet Union (it formally left in 1990). The EIB has invested around €830 million in Estonian projects in the last five years.

“Estonia is at the forefront of digital transformation in Europe,” said Paolo Gentiloni, European Commissioner for the Economy, in a statement. “I am proud that Europe, through the Investment Plan, supports Estonian platform Bolt’s research and development strategy to create innovative and safe services that will enhance urban mobility.”

 



Mass surveillance for national security does conflict with EU privacy rights, court advisor suggests

18:39 | 15 January

Mass surveillance regimes in the UK, Belgium and France which require bulk collection of digital data for a national security purpose may be at least partially in breach of fundamental privacy rights of European Union citizens, per the opinion of an influential advisor to Europe’s top court issued today.

Advocate general Campos Sánchez-Bordona’s (non-legally binding) opinion, which pertains to four references to the Court of Justice of the European Union (CJEU), takes the view that EU law covering the privacy of electronic communications applies in principle when providers of digital services are required by national laws to retain subscriber data for national security purposes.

A number of cases related to EU states’ surveillance powers and citizens’ privacy rights are dealt with in the opinion, including legal challenges brought by rights advocacy group Privacy International to bulk collection powers enshrined in the UK’s Investigatory Powers Act; and a La Quadrature du Net (and others’) challenge to a 2015 French decree related to specialized intelligence services.

At stake is a now familiar argument: Privacy groups contend that states’ bulk data collection and retention regimes have overreached the law, becoming so indiscriminately intrusive as to breach fundamental EU privacy rights — while states counter-claim they must collect and retain citizens’ data in bulk in order to fight national security threats such as terrorism.

Hence, in recent years, we’ve seen attempts by certain EU Member States to create national frameworks which effectively rubberstamp swingeing surveillance powers — that then, in turn, invite legal challenge under EU law.

The AG opinion holds with previous case law from the CJEU — specifically the Tele2 Sverige and Watson judgments — that “general and indiscriminate retention of all traffic and location data of all subscribers and registered users is disproportionate”, as the press release puts it.

Instead the recommendation is for “limited and discriminate retention” — with also “limited access to that data”.

“The Advocate General maintains that the fight against terrorism must not be considered solely in terms of practical effectiveness, but in terms of legal effectiveness, so that its means and methods should be compatible with the requirements of the rule of law, under which power and strength are subject to the limits of the law and, in particular, to a legal order that finds in the defence of fundamental rights the reason and purpose of its existence,” runs the PR in a particularly elegant passage summarizing the opinion.

The French legislation is deemed to fail on a number of fronts, including for imposing “general and indiscriminate” data retention obligations, and for failing to include provisions to notify data subjects that their information is being processed by a state authority where such notifications are possible without jeopardizing its action.

Belgian legislation also falls foul of EU law, per the opinion, for imposing a “general and indiscriminate” obligation on digital service providers to retain data — with the AG also flagging that its objectives are problematically broad (“not only the fight against terrorism and serious crime, but also defence of the territory, public security, the investigation, detection and prosecution of less serious offences”).

The UK’s bulk surveillance regime is similarly seen by the AG to fail the core “general and indiscriminate collection” test.

There’s a slight carve out for national legislation that’s incompatible with EU law being, in Sánchez-Bordona’s view, permitted to maintain its effects “on an exceptional and temporary basis”. But only if such a situation is justified by what is described as “overriding considerations relating to threats to public security or national security that cannot be addressed by other means or other alternatives, but only for as long as is strictly necessary to correct the incompatibility with EU law”.

If the court follows the opinion it’s possible states might seek to interpret such an exceptional provision as a degree of wiggle room to keep unlawful regimes running further past their legal sell-by-date.

Similarly, there could be questions over what exactly constitutes “limited” and “discriminate” data collection and retention — which could encourage states to push a ‘maximal’ interpretation of where the legal line lies.

Nonetheless, privacy advocates are viewing the opinion as a positive sign for the defence of fundamental rights.

In a statement welcoming the opinion, Privacy International dubbed it “a win for privacy”. “We all benefit when robust rights schemes, like the EU Charter of Fundamental Rights, are applied and followed,” said legal director, Caroline Wilson Palow. “If the Court agrees with the AG’s opinion, then unlawful bulk surveillance schemes, including one operated by the UK, will be reined in.”

The CJEU will issue its ruling at a later date — typically three to six months after an AG opinion.

The opinion comes at a key time given European Commission lawmakers are set to rethink a plan to update the ePrivacy Directive, which deals with the privacy of electronic communications, after Member States failed to reach agreement last year over an earlier proposal for an ePrivacy Regulation — so the AG’s view will likely feed into that process.

The opinion may also have an impact on other legislative processes — such as the talks on the EU e-evidence package and negotiations on various international agreements on cross-border access to e-evidence — according to Luca Tosoni, a research fellow at the Norwegian Research Center for Computers and Law at the University of Oslo.

“It is worth noting that, under Article 4(2) of the Treaty on the European Union, “national security remains the sole responsibility of each Member State”. Yet, the advocate general’s opinion suggests that this provision does not exclude that EU data protection rules may have direct implications for national security,” Tosoni also pointed out. 

“Should the Court decide to follow the opinion… ‘metadata’ such as traffic and location data will remain subject to a high level of protection in the European Union, even when they are accessed for national security purposes.  This would require several Member States — including Belgium, France, the UK and others — to amend their domestic legislation.”

 



Dating and fertility apps among those snitching to “out of control” adtech, report finds

16:35 | 14 January

The latest report to warn that surveillance capitalism is out of control — and ‘free’ digital services can in fact be very costly to people’s privacy and rights — comes courtesy of the Norwegian Consumer Council which has published an analysis of how popular apps are sharing user data with the behavioral ad industry.

It suggests smartphone users have little hope of escaping adtech’s pervasive profiling machinery — short of not using a smartphone at all.

A majority of the apps that were tested for the report were found to transmit data to “unexpected third parties” — with users not being clearly informed about who was getting their information and what they were doing with it. Most of the apps also did not provide any meaningful options or on-board settings for users to prevent or reduce the sharing of data with third parties.

“The evidence keeps mounting against the commercial surveillance systems at the heart of online advertising,” the Council writes, dubbing the current situation “completely out of control, harming consumers, societies, and businesses”, and calling for curbs to prevalent practices in which app users’ personal data is broadcast and spread “with few restraints”. 

“The multitude of violations of fundamental rights are happening at a rate of billions of times per second, all in the name of profiling and targeting advertising. It is time for a serious debate about whether the surveillance-driven advertising systems that have taken over the internet, and which are economic drivers of misinformation online, is a fair trade-off for the possibility of showing slightly more relevant ads.

“The comprehensive digital surveillance happening across the adtech industry may lead to harm to both individuals, to trust in the digital economy, and to democratic institutions,” it also warns.

In the report app users’ data is documented being shared with tech giants such as Facebook, Google and Twitter — which operate their own mobile ad platforms and/or other key infrastructure related to the collection and sharing of smartphone users’ data for ad targeting purposes — but also with scores of other faceless entities that the average consumer is unlikely to have heard of.

The Council commissioned a data flow analysis of ten popular apps running on Google’s Android smartphone platform — generating a snapshot of the privacy blackhole that mobile users inexorably tumble into when they try to go about their digital business, despite the existence (in Europe) of a legal framework that’s supposed to protect people by giving citizens a swathe of rights over their personal data.

Among the findings are a make-up filter app sharing the precise GPS coordinates of its users; ovulation-, period- and mood-tracking apps sharing users’ intimate personal data with Facebook and Google (among others); dating apps exchanging user data with each other, and also sharing with third parties sensitive user info like individuals’ sexual preferences (and real-time device-specific tells such as sensor data from the gyroscope…); and a games app for young children that was found to contain 25 embedded SDKs and which shared the Android Advertising ID of a test device with eight third parties.

The ten apps whose data flows were analyzed for the report are the dating apps Grindr, Happn, OkCupid, and Tinder; fertility/period tracker apps Clue and MyDays; makeup app Perfect365; religious app Muslim: Qibla Finder; children’s app My Talking Tom 2; and the keyboard app Wave Keyboard.

“Altogether, Mnemonic [the company which the Council commissioned to conduct the technical analysis] observed data transmissions from the apps to 216 different domains belonging to a large number of companies. Based on their analysis of the apps and data transmissions, they have identified at least 135 companies related to advertising. One app, Perfect365, was observed communicating with at least 72 different such companies,” the report notes.

“Because of the scope of tests, size of the third parties that were observed receiving data, and popularity of the apps, we regard the findings from these tests to be representative of widespread practices in the adtech industry,” it adds.

Aside from the usual suspect (ad)tech giants, less well-known entities seen receiving user data include location data brokers Fysical, Fluxloop, Placer, Places/Foursquare, Safegraph and Unacast; behavioral ad targeting players like Receptiv/Verve, Neura, Braze and LeanPlum; mobile app marketing analytics firms like AppsFlyer; and ad platforms and exchanges like AdColony, AT&T’s AppNexus, Bucksense, OpenX, PubNative, Smaato and Vungle.

In the report the Forbrukerrådet concludes that the pervasive tracking of smartphone users which underpins the behavioral ad industry is all but impossible for smartphone users to escape — even if they are able to locate an on-device setting to opt out of behavioral ads.

This is because multiple identifiers are being attached to them and their devices, and also because of frequent sharing/syncing of identifiers by adtech players across the industry. (It also points out that on the Android platform a setting where users can opt-out of behavioral ads does not actually obscure the identifier — meaning users have to take it on trust that adtech entities won’t just ignore their request and track them anyway.)
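
To make the report’s point concrete, here is a minimal illustrative sketch (ours, not the report’s) of how an embedded SDK can read the identifier in question on Android, assuming Google Play Services’ ads-identifier client library is on the classpath. At the time of the report, opting out of ad personalization merely set a flag that travelled alongside the ID; the identifier itself was still returned to the caller.

// Illustrative sketch only, not code from the report. Assumes the
// play-services-ads-identifier dependency; must run off the main thread.
import android.content.Context
import com.google.android.gms.ads.identifier.AdvertisingIdClient

fun readAdvertisingId(context: Context): String? {
    val info = AdvertisingIdClient.getAdvertisingIdInfo(context)
    val optedOut = info.isLimitAdTrackingEnabled // the opt-out arrives as a flag...
    val adId = info.id                           // ...while the identifier is still returned
    if (optedOut) {
        // A well-behaved SDK should stop profiling here; nothing technically enforces it.
        return null
    }
    return adId
}

In other words, honouring the opt-out is left to the goodwill of each third party observed receiving data, which is exactly the trust problem the Council describes.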

The Council argues its findings suggest widespread breaches of Europe’s General Data Protection Regulation (GDPR), given that key principles of that pan-EU framework — such as data protection by design and default — are in stark conflict with the systematic, pervasive background profiling of app users it found (apps were, for instance, found sharing personal data by default, requiring users to actively seek out an obscure device setting to try to prevent being profiled).

“The extent of tracking and complexity of the adtech industry is incomprehensible to consumers, meaning that individuals cannot make informed choices about how their personal data is collected, shared and used. Consequently, the massive commercial surveillance going on throughout the adtech industry is systematically at odds with our fundamental rights and freedoms,” it also argues.

Where (user) consent is being relied upon as a legal basis to process personal data the standard required by GDPR states it must be informed, freely given and specific.

But the Council’s analysis of the apps found them sorely lacking on that front.

“In the cases described in this report, none of the apps or third parties appear to fulfil the legal conditions for collecting valid consent,” it writes. “Data subjects are not informed of how their personal data is shared and used in a clear and understandable way, and there are no granular choices regarding use of data that is not necessary for the functionality of the consumer-facing services.”

It also dismisses another possible legal base — known as legitimate interests — arguing app users “cannot have a reasonable expectation for the amount of data sharing and the variety of purposes their personal data is used for in these cases”.

The report points out that other forms of digital advertising (such as contextual advertising) which do not rely on third parties processing personal data are available — arguing that further undermines any adtech industry claims of ‘legitimate interests’ as a valid base for helping themselves to smartphone users’ data.

“The large amount of personal data being sent to a variety of third parties, who all have their own purposes and policies for data processing, constitutes a widespread violation of data subjects’ privacy,” the Council argues. “Even if advertising is necessary to provide services free of charge, these violations of privacy are not strictly necessary in order to provide digital ads. Consequently, it seems unlikely that the legitimate interests that these companies may claim to have can be demonstrated to override the fundamental rights and freedoms of the data subject.”

The suggestion, therefore, is that “a large number of third parties that collect consumer data for purposes such as behavioural profiling, targeted advertising and real-time bidding, are in breach of the General Data Protection Regulation”.

The report also discusses the harms attached to such widespread violation of privacy — pointing out risks such as discrimination and manipulation of vulnerable individuals, as well as chilling effects on speech, added fuel for ad fraud and the torching of trust in the digital economy, among other society-afflicting ills being fuelled by adtech’s obsession with profiling everyone…

Some of the harm of this data exploitation stems from significant knowledge and power asymmetries that render consumers powerless. The overarching lack of transparency of the system makes consumers vulnerable to manipulation, particularly when unknown companies know almost everything about the individual consumer. However, even if regular consumers had comprehensive knowledge of the technologies and systems driving the adtech industry, there would still be very limited ways to stop or control the data exploitation.

Since the number and complexity of actors involved in digital marketing is staggering, consumers have no meaningful ways to resist or otherwise protect themselves from the effects of profiling. These effects include different forms of discrimination and exclusion, data being used for new and unknowable purposes, widespread fraud, and the chilling effects of massive commercial surveillance systems. In the long run, these issues are also contributing to the erosion of trust in the digital industry, which may have serious consequences for the digital economy.

To shift what it dubs the “significant power imbalance between consumers and third party companies”, the Council calls for an end to the current practices of “extensive tracking and profiling” — either by companies changing their practices to “respect consumers’ rights”, or — where they won’t — urging national regulators and enforcement authorities to “take active enforcement measures, to establish legal precedent to protect consumers against the illegal exploitation of personal data”.

It’s fair to say that enforcement of the GDPR remains a work in progress at this stage, some 20 months after the regulation came into force in May 2018, with scores of cross-border complaints yet to culminate in a decision (though there have been a couple of interesting adtech- and consent-related enforcement actions in France).

We reached out to Ireland’s Data Protection Commission (DPC) and the UK’s Information Commissioner’s Office (ICO) for comment on the Council’s report. The Irish regulator has multiple investigations ongoing into various aspects of adtech and tech giants’ handling of online privacy, including a probe related to security concerns attached to Google’s ad exchange and the real-time bidding process which features in some programmatic advertising. It has previously suggested the first decisions from its hefty backlog of GDPR complaints will be coming early this year. But at the time of writing the DPC had not responded to our request for comment on the report.

A spokeswoman for the ICO — which last year put out its own warnings to the behavioral advertising industry, urging it to change its practices — sent us this statement, attributed to Simon McDougall, its executive director for technology and innovation, in which he says the regulator has been prioritizing engaging with the adtech industry over its use of personal data and has called for change itself — but which does not once mention the word ‘enforcement’…

Over the past year we have prioritised engagement with the adtech industry on the use of personal data in programmatic advertising and real-time bidding.

Along the way we have seen increased debate and discussion, including reports like these, which factor into our approach where appropriate. We have also seen a general acknowledgment that things can’t continue as they have been.

Our 2019 update report into adtech highlights our concerns, and our revised guidance on the use of cookies gives greater clarity over what good looks like in this area.

Whilst industry has welcomed our report and recognises change is needed, there remains much more to be done to address the issues. Our engagement has substantiated many of the concerns we raised and, at the same time, we have also made some real progress.

Throughout the last year we have been clear that if change does not happen we would consider taking action. We will be saying more about our next steps soon – but as is the case with all of our powers, any future action will be proportionate and risk-based.

 



Cross-border investments aren’t dead, they’re getting smarter

00:15 | 11 January

Yohei Nakajima Contributor
Yohei Nakajima is an investor at Scrum Ventures, an early-stage venture capital fund based in San Francisco, and Senior Vice President at Scrum Studio, helping global corporations connect and work with innovative startups.

In recent years, the venture capital and startup worlds have seen a significant shift towards globalization. More and more startups are going global and breaking borders, such as payments giant Stripe and its recent expansion to Latin America, e-scooter startup Bird’s massive European expansion, or fashion subscription service Le Tote (an investment in our portfolio) and its entrance into China.

Likewise, more VC funds are spanning geographies, both in investment focus and in the limited partners, or LPs, who fuel those investments. While Silicon Valley is still very much seen as an epicenter for tech, it is no longer the sole home of innovation, with new technology hubs rising across the world from Israel to the UK to Latin America and beyond.

Yet, many have commented on a shift or slowdown of globalization, or “slobalization,” in recent months. Whether due to the current political climate or other factors, it’s been said that there’s been a marked decrease in cross-border investments of late, leading to the question: Is the world still interested in U.S. startups?

To answer this, and to better understand the appetite among foreign investors for participating in U.S. funding rounds from both a geographic and stage perspective, I looked at Crunchbase data on U.S. seed and VC rounds between 2009 and 2018. The data shows that cross-border investments are far from dead; they are getting smarter, and perhaps even more global with the rise of investments from Asia.

 



Cookie consent tools are being used to undermine EU privacy rules, study suggests

21:08 | 10 January

Most cookie consent pop-ups served to Internet users in the European Union — ostensibly seeking permission to track people’s web activity — are likely to be flouting regional privacy laws, a new study by researchers at MIT, UCL and Aarhus University suggests.

“The results of our empirical survey of CMPs [consent management platforms] today illustrates the extent to which illegal practices prevail, with vendors of CMPs turning a blind eye to — or worse, incentivising — clearly illegal configurations of their systems,” the researchers argue, adding that: “Enforcement in this area is sorely lacking.”

Their findings, published in a paper entitled Dark Patterns after the GDPR: Scraping Consent Pop-ups and Demonstrating their Influence, chime with another piece of research we covered back in August — which also concluded a majority of the current implementations of cookie notices offer no meaningful choice to Europe’s Internet users — even though EU law requires one.

When consent is being relied upon as the legal basis for processing web users’ personal data, the bar for valid (i.e. legal) consent that’s set by the EU’s General Data Protection Regulation (GDPR) is clear: It must be informed, specific and freely given.

Recent jurisprudence from the Court of Justice of the European Union has further crystallized the law around cookies, making it clear that consent must be actively signalled, meaning a digital service cannot infer consent to tracking from indirect actions (such as the user closing the pop-up without a response or ignoring it in favor of interacting with the service).

Many websites use a so-called CMP to solicit consent to tracking cookies. But if it’s configured to contain pre-ticked boxes that opt users into sharing data by default — requiring an affirmative user action to opt out — any gathered ‘consent’ also isn’t legal.

Consent to tracking must also be obtained prior to a digital service dropping or accessing a cookie; only service-essential cookies can be deployed without asking first.

All of which means — per EU law — it should be equally easy for website visitors to choose not to be tracked as to agree to their personal data being processed.

However the Dark Patterns after the GDPR study found that’s very far from the case right now.

“We found that dark patterns and implied consent are ubiquitous,” the researchers write in summary, saying that only slightly more than one in ten (11.8%) of the CMPs they looked at “meet the minimal requirements that we set based on European law” — which they define as being “if it has no optional boxes pre-ticked, if rejection is as easy as acceptance, and if consent is explicit”.
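
To make that three-part bar concrete, here is a minimal sketch, in TypeScript, of a check a consent pop-up would have to pass. The type and field names are hypothetical illustrations, not anything taken from the paper or from any CMP’s actual code.

// Hypothetical model of a consent pop-up's state, used only to illustrate
// the study's minimal requirements. All names are illustrative, not real APIs.
interface ConsentBanner {
  purposes: { id: string; optional: boolean; preTicked: boolean }[];
  clicksToAcceptAll: number;                       // e.g. 1
  clicksToRejectAll: number;                       // Infinity if no reject control exists
  consentInferredFromScrollingOrClosing: boolean;  // "implied consent"
}

// True only if: no optional boxes are pre-ticked, rejection is as easy as
// acceptance, and consent must be explicitly signalled rather than inferred.
function meetsMinimalRequirements(b: ConsentBanner): boolean {
  const noPreTickedOptions = b.purposes.every(p => !p.optional || !p.preTicked);
  const rejectAsEasyAsAccept = b.clicksToRejectAll <= b.clicksToAcceptAll;
  const consentIsExplicit = !b.consentInferredFromScrollingOrClosing;
  return noPreTickedOptions && rejectAsEasyAsAccept && consentIsExplicit;
}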

For the study, the researchers scraped the top 10,000 UK websites, as ranked by Alexa, to gather data on the most prevalent CMPs in the market — which are made by five companies: QuantCast, OneTrust, TrustArc, Cookiebot, and Crownpeak — and analyzed how the design and configurations of these tools affected Internet users’ choices. (They obtained a data set of 680 CMP instances via their method — a sample they calculate is representative of at least 57% of the total population of the top 10k sites that run a CMP, given prior research found only around a fifth do so.)
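
For a rough sense of how such a scrape can work, here is a sketch in TypeScript (Node 18+, using the built-in fetch) that fetches a site’s static HTML and looks for vendor-specific script references. The hostname and keyword patterns are illustrative guesses at each vendor’s loader script, not the signatures the researchers actually used.

// Sketch of a CMP-detection pass over a list of sites, in the spirit of the
// study's crawl (not its actual code). Patterns below are illustrative guesses.
const CMP_SIGNATURES: Record<string, RegExp> = {
  QuantCast: /quantcast/i,
  OneTrust: /onetrust|cookielaw/i,
  TrustArc: /trustarc/i,
  Cookiebot: /cookiebot/i,
  Crownpeak: /crownpeak|evidon/i,
};

async function detectCMP(url: string): Promise<string | null> {
  try {
    const html = await (await fetch(url)).text(); // static HTML only, no JS execution
    for (const [vendor, pattern] of Object.entries(CMP_SIGNATURES)) {
      if (pattern.test(html)) return vendor;
    }
  } catch {
    // unreachable site, TLS failure, timeout, etc.
  }
  return null; // no known CMP found in the static markup
}

// Tally CMP prevalence across a site list (e.g. a top-sites ranking).
async function tallyCMPs(sites: string[]): Promise<Map<string, number>> {
  const counts = new Map<string, number>();
  for (const site of sites) {
    const cmp = await detectCMP(`https://${site}`);
    if (cmp) counts.set(cmp, (counts.get(cmp) ?? 0) + 1);
  }
  return counts;
}

Note that a real crawl would likely need a headless browser rather than a static fetch, since many CMPs only inject their banner markup via JavaScript after the page loads.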

Implicit consent — aka (illegally) inferring consent via non-affirmative user actions (such as the user visiting or scrolling on the website or a failure to respond to a consent pop-up or closing it without a response) — was found to be common (32.5%) among the studied sites.

“Popular CMP implementation wizards still allow their clients to choose implied consent, even when they have already indicated the CMP should check whether the visitor’s IP is within the geographical scope of the EU, which should be mutually exclusive,” they note, arguing that: “This raises significant questions over adherence with the concept of data protection by design in the GDPR.”

They also found that the vast majority of CMPs make rejecting all tracking “substantially more difficult than accepting it”: a majority (50.1%) of studied sites had no ‘reject all’ button at all, while only a tiny minority (12.6%) of sites had a ‘reject all’ button accessible with the same or fewer number of clicks as an ‘accept all’ button.

Or, to put it another way, ‘Ohhai dark pattern design‘…

“An ‘accept all’ button was never buried in a second layer,” the researchers go on to point out, also finding that “74.3% of reject all buttons were one layer deep, requiring two clicks to press; 0.9% of them were two layers away, requiring at minimum three.”

Pre-ticked boxes were found to be widely deployed in the studied CMPs as well — despite such a setting not being legally valid. (On this they found: “56.2% of sites pre-ticked optional vendors or purposes/categories, with 54.1% of sites pre-ticking optional purposes, 32.3% pre-ticking optional categories, and 30.3% pre-ticking both”.)

They also point out that the high number of third-party trackers routinely being used by sites poses a major problem for the EU consent model — given it requires a “prohibitively long time” for users to become clearly informed enough to be able to legally consent.

The exact number of third-party trackers they found packed like sardines into CMPs varied, ranging from tens to several hundred depending on the site.

Fifty-eight was the lowest number they encountered, while the highest was 542 vendors, on an implementation of QuantCast’s CMP. (And, well, just imagine the ‘friction’ involved in manually unticking all those, assuming that was one of the sites that also lacked a ‘reject all’ button… )

Sites relied on a large number of third party trackers, which would take a prohibitively long time for users to inform themselves about clearly. Out of the 85.4% of sites that did list vendors (e.g. third party trackers) within the CMP, there was a median number of 315 vendors (low. quartile 58, upp. quartile 542). Different CMP vendors have different average numbers of vendors, with the highest being QuantCast at 542… 75% of sites had over 58 vendors. 76.47% of sites provide some descriptions of their vendors. The mean total length of these descriptions per site is 7,985 words: roughly 31.9 minutes of reading for the average 250 words-per-minute reader, not counting interaction time to e.g. unfold collapsed boxes or navigating to and reading specific privacy policies of a vendor.

A second part of the research was a field experiment with 40 participants, investigating how the eight most common CMP designs affect Internet users’ consent choices.

“We found that notification style (banner or barrier) has no effect [on consent choice]; removing the opt-out button from the first page increases consent by 22–23 percentage points; and providing more granular controls on the first page decreases consent by 8–20 percentage points,” they write in summary on that.

They argue this portion of the study supports the notion that two of the most common consent interface designs – “not showing a ‘reject all’ button on the first page; and showing bulk options before showing granular control” – make it more likely for users to provide consent, thereby “violating the [GDPR] principle of “freely given””.

They also make reference to “qualitative reflections” of the participants in the paper, which were obtained via a survey after individuals’ consent choices had been registered during the field study, suggesting these responses “put into question the entire notice-and-consent model not because of specific design decisions but merely because an action is required before the user can accomplish their main task and because they appear too frequently if they are shown on a website-by-website basis”.

So, in other words, just the fact of interrupting a web user to ask them to make a choice may itself apply substantial enough pressure that it might render any resulting ‘consent’ invalid.

The study’s finding of the prevalence of manipulative designs and configurations intended to nudge or even force consent suggests Internet users in Europe are not actually benefiting from a legal framework that’s supposed to protect their digital data from unwanted exploitation, and are instead being subjected to a lot of noisy, distracting and disingenuous ‘consent theatre’.

Cookie notices not only generate friction and frustration for the average Internet user, as they try to go about their daily business online, but the current situation is creating a faux veneer of compliance — atop what is actually a massive trampling of rights via what amounts to digital daylight robbery of people’s data at scale.

The problem here is that EU regulators have for years looked the other way where online tracking is concerned, failing entirely to enforce the on-paper standard.

Enforcement is indeed sorely lacking, as the researchers note. (Industry lobbying/political pressure, limited resources, risk aversion and regulatory capture, and a legacy of inaction around digital rights are all likely to blame.)

And while the GDPR only started being applied in May 2018, Europe has had regulations on data-gathering mechanisms like cookies for approaching two decades — with the paper pointing out that an amendment to the ePrivacy Directive all the way back in 2002 made it a requirement that “storing or accessing information on a user’s device not ‘strictly necessary’ for providing an explicitly requested service requires both clear and comprehensive information and opt-in consent”.

Asked about the research findings, lead author Midas Nouwens questioned why CMP vendors are selling so-called ‘compliance’ tools that allow for non-compliant configurations in the first place.

“It’s sad, but I don’t think anyone is surprised anymore by how few pop-ups comply with the GDPR,” he told TechCrunch. “What is shocking is how non-compliant interface designs are allowed by the companies that provide consent pop-ups. Why do they let their clients count scrolling as consent or bury the decline button somewhere on the third page?”

“Enforcement is really the next big challenge if we don’t want the GDPR to go down the same path as the ePrivacy directive,” he added. “Since enforcement agencies have limited resources, focusing on the popular consent pop-up providers could be a much more effective strategy than targeting individual websites.

“Unfortunately, while we wait for enforcement, the dark patterns in these pop-ups are still manipulating people into being tracked.”

Another of the researchers behind the paper, Michael Veale, a lecturer in digital rights and regulation at UCL, also expressed shock that CMP vendors are allowing their tools to be configured in ways which are clearly intended to manipulate Internet users — thereby flouting the law.

In the paper the researchers urge regulators to take a smarter approach to tackling such widespread violation, such as by making use of automated tools “to expedite discovery and enforcement” of non-compliant cookie notices, and suggest they work “further upstream” — such as by placing requirements on the vendors of CMPs “to only allow compliant designs to be placed on the market”.

“It’s shocking to see how many of the large providers of consent pop-ups allow their systems to be misconfigured, such as through implicit consent, in ways that clearly infringe data protection law,” Veale told us, adding: “I suspect data protection authorities see this widespread illegality and are not sure exactly where to start. Yet if they do not start enforcing these guidelines, it’s unclear when this widespread illegality will start to stop.”

“This study even overestimates compliance, as we don’t focus on what actually happens to the tracking when you click on these buttons, which other recent studies have emphasised in many cases mislead individuals and do nothing at all,” he also pointed out.

We reached out to the UK’s data protection watchdog, the ICO, for a response to the research — and a spokeswoman pointed us to this cookie advice blog post it published last year, saying the advice it contains “still stands”.

In the blog Ali Shah, the ICO’s head of technology policy, suggests there could be some (albeit limited) action from the regulator this year to clean up cookie consent, with Shah writing that: “Cookie compliance will be an increasing regulatory priority for the ICO in the future. However, as is the case with all our powers, any future action would be proportionate and risk-based.”

While European citizens wait for data protection regulators to take meaningful action over systematic breaches of the GDPR — including those attached to consent-less tracking of web users — there is one step European web users can take to shrink the pain of cookie consent pop-ups: The researchers behind the study have built an open source browser extension that can automatically answer pop-ups based on user-customizable preferences.

It’s called Consent-o-Matic — and there are versions available for Firefox and Chrome.

At release the tool can automatically respond to cookie banners built by the five big CMP suppliers (QuantCast, OneTrust, TrustArc, Cookiebot, and Crownpeak).
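
To give a flavour of the mechanism, here is a sketch of a Consent-o-Matic-style auto-responder written as a browser extension content script in TypeScript. The selectors and rule format are hypothetical placeholders rather than the extension’s actual rule syntax, which ships as hand-maintained, per-CMP rule sets.

// Sketch only: watch the page for a known consent banner and, if the user has
// not opted into tracking, click its decline control on their behalf.
type Rule = {
  bannerSelector: string;  // element indicating a particular CMP's pop-up is showing
  rejectSelector: string;  // control that declines all optional purposes
};

// Placeholder selectors; a real rule set would target each CMP's actual markup.
const RULES: Rule[] = [
  { bannerSelector: '.example-cmp-banner', rejectSelector: '.example-cmp-reject-all' },
];

function autoRespond(prefs: { allowTracking: boolean }): void {
  if (prefs.allowTracking) return; // the user opted in, so leave the banner alone
  for (const rule of RULES) {
    const banner = document.querySelector(rule.bannerSelector);
    const reject = banner?.querySelector<HTMLElement>(rule.rejectSelector);
    if (reject) {
      reject.click(); // decline optional purposes on the user's behalf
      return;
    }
  }
}

// Re-check as the DOM mutates, since many CMPs inject their banner after load.
new MutationObserver(() => autoRespond({ allowTracking: false }))
  .observe(document.documentElement, { childList: true, subtree: true });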

But since it’s open source, the hope is that others will build on it to expand the types of pop-ups it’s able to auto-respond to. In the absence of a legally enforced ‘Do Not Track’ browser standard, this is about as good as it gets for Internet users desperately seeking easier agency over the online tracking industry.

Announcing the tool last month, Nouwens described the project as making use of “adversarial interoperability” as a pro-privacy tactic.

“Automating consent and privacy preferences is not new (DNT and P3P), but this project uses adversarial interoperability, rather than rely on industry self-regulation or buy-in from fundamentally opposed stakeholders (browsers, advertisers, publishers),” he observed.

However he added one caveat, reminding users to be on their guard for further non-compliance from the data suckers — pointing to the earlier research paper also flagged by Veale which found a small portion of sites (~7%) entirely ignore responses to cookie pop-ups and track users regardless of response.

So sometimes even a seamlessly automated ‘no’ to tracking might still sum to being tracked…

 


