Blog of the website «TechCrunch»




Dutch court orders Facebook to ban celebrity crypto scam ads after another lawsuit

15:04 | 12 November

A Dutch court has ruled that Facebook can be required to use filter technologies to identify and pre-emptively take down fake ads linked to cryptocurrency scams that carry the image of a media personality, John de Mol.

The Dutch celebrity filed a lawsuit against Facebook in April over the misappropriation of his likeness to shill Bitcoin scams via fake ads run on its platform.

In an immediately enforceable preliminary judgement today the court has ordered Facebook to remove all offending ads within five days, and provide de Mol with data on the accounts running them within a week.

Per the judgement, victims of the crypto scams had reported a total of €1.7 million (~$1.8M) in damages to the Dutch government at the time of the court summons.

The case is similar to a legal action instigated by UK consumer advice personality, Martin Lewis, last year, when he announced defamation proceedings against Facebook — also for misuse of his image in fake ads for crypto scams.

Lewis withdrew the suit at the start of this year after Facebook agreed to apply new measures to tackle the problem — namely, a scam ads report button. It also agreed to provide funding to a UK consumer advice organization to set up a scam advice service.

In the de Mol case the lawsuit was allowed to run its course — resulting in today’s preliminary judgement against Facebook. It’s not yet clear whether the company will appeal but in the wake of the ruling Facebook has said it will bring the scam ads report button to the Dutch market early next month.

In court, the platform giant sought to argue that it could not more proactively remove the Bitcoin scam ads containing de Mol’s image on the grounds that doing so would breach EU law against general monitoring conditions being placed on Internet platforms.

However the court rejected that argument, citing a recent ruling by Europe’s top court related to platform obligations to remove hate speech, also concluding that the specificity of the requested measures could not be classified as ‘general obligations of supervision’.

It also rejected arguments by Facebook’s lawyers that restricting the fake scam ads would be restricting the freedom of expression of a natural person, or the right to be freely informed — pointing out that the ‘expressions’ involved are aimed at commercial gain, as well as including fraudulent practices.

Facebook also sought to argue it is already doing all it can to identify and take down the fake scam ads — saying, too, that its screening processes are not perfect. But the court said there’s no requirement for 100% effectiveness for additional proactive measures to be ordered. Its ruling further notes a striking reduction in fake scam ads using de Mol’s image since the lawsuit was announced.

Facebook’s argument that it’s just a neutral platform was also rejected, with the court pointing out that its core business is advertising.

It also took the view that requiring Facebook to apply technically complicated measures and extra effort, including in terms of manpower and costs, to more effectively remove offending scam ads is not unreasonable in this context.

The judgement orders Facebook to remove fake scam ads containing de Mol’s likeness from Facebook and Instagram within five days of the order — with a penalty of €10k for each day it fails to comply, up to a maximum of €1M (~$1.1M).

The court order also requires Facebook to provide de Mol with data on the accounts that had been misusing his image within seven days of the judgement, with a further penalty of €1k per day for failure to comply, up to a maximum of €100k.

Facebook has also been ordered to pay the case costs.

Responding to the judgement in a statement, a Facebook spokesperson told us:

We have just received the ruling and will now look at its implications. We will consider all legal actions, including appeal. Importantly, this ruling does not change our commitment to fighting these types of ads. We cannot stress enough that these types of ads have absolutely no place on Facebook and we remove them when we find them. We take this very seriously and will therefore make our scam ads reporting form available in the Netherlands in early December. This is an additional way to get feedback from people, which in turn helps train our machine learning models. It is in our interest to protect our users from fraudsters and when we find violators we will take action to stop their activity, up to and including taking legal action against them in court.

Law professor Mireille Hildebrandt told us the judgement provides an alternative legal route for Facebook users to litigate and pursue collective enforcement of European personal data rights — rather than suing for damages, which entails a high burden of proof.

Injunctions are faster and more effective, Hildebrandt added.

The judgement also raises questions around the burden of proof for demonstrating Facebook has removed scam ads with sufficient (increased) accuracy, and around what specific additional measures it might deploy to improve its takedown rate.

The introduction of the ‘report scam ad button’ does, though, provide one clear avenue for measuring takedown performance.

The button was finally rolled out to the UK market in July. And while Facebook has talked since the start of this year about ‘envisaging’ introducing it in other markets it hasn’t exactly been proactive in doing so — up until now, with this court order.

 



Facebook agrees to pay UK data watchdog’s Cambridge Analytica fine but settles without admitting liability

18:09 | 30 October

Facebook has reached a settlement with the UK’s data protection watchdog, the ICO, agreeing to pay in full a £500,000 (~$643k) fine following the latter’s investigation into the Cambridge Analytica data misuse scandal.

As part of the arrangement Facebook has agreed to drop its legal appeal against the penalty. But under the terms of the settlement it has not admitted any liability in relation to paying the fine, which is the maximum possible monetary penalty under the applicable UK data protection law. (The Cambridge Analytica scandal predates Europe’s GDPR framework coming into force.)

Facebook’s appeal against the ICO’s penalty was focused on a claim that there was no evidence that U.K. Facebook users’ data had been misused by Cambridge Analytica.

But there’s a further twist here in that the company had secured a win from a first-tier legal tribunal — which held in June that “procedural fairness and allegations of bias” on the part of the ICO should be considered as part of its appeal.

The decision required the ICO to disclose materials relating to its decision-making process regarding the Facebook fine. The ICO, evidently less than keen for its emails to be trawled through, appealed last month. It’s now withdrawing the action as part of the settlement, Facebook having dropped its legal action.

In a statement laying out the bare bones of the settlement reached, the ICO writes: “The Commissioner considers that this agreement best serves the interests of all UK data subjects who are Facebook users. Both Facebook and the ICO are committed to continuing to work to ensure compliance with applicable data protection laws.”

An ICO spokeswoman did not respond to additional questions — telling us it does not have anything further to add than its public statement.

As part of the settlement, the ICO writes that Facebook is being allowed to retain some (unspecified) “documents” that the ICO had disclosed during the appeal process — to use for “other purposes”, including for furthering its own investigation into issues around Cambridge Analytica.

“Parts of this investigation had previously been put on hold at the ICO’s direction and can now resume,” the ICO adds.

Under the terms of the settlement the ICO and Facebook each pay their own legal costs, while the £500k fine is not kept by the ICO but paid to HM Treasury’s consolidated fund.

Commenting in a statement, deputy commissioner, James Dipple-Johnstone, said:

The ICO welcomes the agreement reached with Facebook for the withdrawal of their appeal against our Monetary Penalty Notice and agreement to pay the fine. The ICO’s main concern was that UK citizen data was exposed to a serious risk of harm. Protection of personal information and personal privacy is of fundamental importance, not only for the rights of individuals, but also as we now know, for the preservation of a strong democracy. We are pleased to hear that Facebook has taken, and will continue to take, significant steps to comply with the fundamental principles of data protection. With this strong commitment to protecting people’s personal information and privacy, we expect that Facebook will be able to move forward and learn from the events of this case.

In its own supporting statement, attached to the ICO’s remarks, Harry Kinmonth, director and associate general counsel at Facebook, added:

We are pleased to have reached a settlement with the ICO. As we have said before, we wish we had done more to investigate claims about Cambridge Analytica in 2015. We made major changes to our platform back then, significantly restricting the information which app developers could access. Protecting people’s information and privacy is a top priority for Facebook, and we are continuing to build new controls to help people protect and manage their information. The ICO has stated that it has not discovered evidence that the data of Facebook users in the EU was transferred to Cambridge Analytica by Dr Kogan. However, we look forward to continuing to cooperate with the ICO’s wider and ongoing investigation into the use of data analytics for political purposes.

A charitable interpretation of what’s gone on here is that both Facebook and the ICO have reached a stalemate where their interests are better served by taking a quick win that puts the issue to bed, rather than dragging on with legal appeals that might also have raised fresh embarrassments. 

That’s quick wins in terms of PR (a paid fine for the ICO; and drawing a line under the issue for Facebook), as well as (potentially) useful data to further Facebook’s internal investigation of the Cambridge Analytica scandal.

We don’t know exactly what Facebook is getting from the ICO’s document stash. But we do know it’s facing a number of lawsuits and legal challenges over the scandal in the US.

The ICO announced its intention to fine Facebook over the Cambridge Analytica scandal just over a year ago.

In March 2018 it had raided the UK offices of the now defunct data company, after obtaining a warrant, taking away hard drives and computers for analysis. It had also earlier ordered Facebook to withdraw its own investigators from the company’s offices.

Speaking to a UK parliamentary committee a year ago the information commissioner, Elizabeth Denham, and deputy Dipple-Johnstone, discussed their (then) ongoing investigation of data seized from Cambridge Analytica — saying they believed the Facebook user data-set the company had misappropriated could have been passed to more entities than were publicly known.

The ICO said at that point it was looking into “about half a dozen” entities.

It also told the committee it had evidence that, even as recently as early 2018, Cambridge Analytica might have retained some of the Facebook data — despite having claimed it had deleted everything.

“The follow up was less than robust. And that’s one of the reasons that we fined Facebook £500,000,” Denham also said at the time. 

Some of this evidence will likely be very useful for Facebook as it prepares to defend itself in legal challenges related to Cambridge Analytica — as well as aiding its claimed platform audit: in the wake of the scandal, Facebook said it would run a historical app audit and challenge all developers who it determined had downloaded large amounts of user data.

The audit, which it announced in March 2018, apparently remains ongoing.

 



US search market needs a ‘choice screen’ remedy now, says DuckDuckGo

15:48 | 30 October

US regulators shouldn’t be sitting on their hands while the 50+ state, federal and congressional antitrust investigations of Google grind along, search rival DuckDuckGo argues.

It’s put out a piece of research today that suggests choice screens which let smartphone users choose from a number of search engines to be their device default — aka “preference menus” as DuckDuckGo founder Gabe Weinberg prefers to call them — offer an easy and quick win for regulators to reboot competition in the search space by rebalancing markets right now.

“If designed properly we think [preference menus] are a quick and effective key piece in the puzzle for a good remedy,” Weinberg tells TechCrunch. “And that’s because it finally enables people to change the search defaults across the entire device which has been difficult in the past… It’s at a point, during device set-up, where you can prompt users to take a moment to think about whether they want to try out an alternative search engine.”

Google is already offering such a choice to Android users in Europe, following an EU antitrust decision against Android last year.

[Image: Google Android choice screen]

DuckDuckGo is concerned US regulators aren’t thinking pro-actively enough about remedies for competition in the US search market — and is hoping to encourage more of a lean-in approach to support boosting diversity so that rivals aren’t left waiting years for the courts to issue judgements before any relief is possible.

In a survey of Internet users which it commissioned, polling more than 3,400 adults in the US, UK, Germany and Australia, people were asked to respond to a 4-choice screen design, based on an initial Google Android remedy proposal, as well as an 8-choice variant.

“We found that in each surveyed country, people select the Google alternatives at a rate that could increase their collective mobile market share by 300%-800%, with overall mobile search market share immediately changing by over 10%,” it writes [emphasis its].

Survey takers were also asked about factors that motivate them to switch search engines — with the number one reason given being a better quality of search results, and the next reason being if a search engine doesn’t track their searches or data.

Of course DuckDuckGo stands to gain from any pro-privacy switching, having built an alternative search business by offering non-tracked searches supported by contextual ads. Its model directly contrasts with Google’s, which relies on pervasive tracking of Internet users to determine which ads to serve.

But there’s plenty of evidence consumers hate being tracked. Not least the rise in use of tracker blockers.

“Using the original design puzzle [i.e. that Google devised] we saw a lot of people selecting alternative search engines and we think it would go up from there,” says Weinberg. “But even initially a 10% market share change is really significant.”

He points to regulatory efforts in Europe and also Russia which have resulted in antitrust decisions and enforcements against Google — and where choice screens are already in use promoting alternative search engine choices to Android users.

He also notes that regulators in Australia and the UK are pursuing choice screens — as actual or potential remedies for rebalancing the search market.

Russia has the lead here, with its regulator — the FAS — slapping Google with an order against bundling its services with Android all the way back in 2015, a few months after local search giant Yandex filed a complaint. A choice screen was implemented in 2017 and Russia’s homegrown Internet giant has increased its search market share on Android devices as a result. Google continues to do well in Russia. But the result is greater diversity in the local search market, as a direct result of implementing a choice screen mechanism.

“We think that all regulatory agencies that are now considering search market competition should really implement this remedy immediately,” says Weinberg. “They should do other things… as well but I don’t see any reason why one should wait on not implementing this because it would take a while to roll out and it’s a good start.”

Of course US regulators have yet to issue any antitrust findings against Google — despite there now being tens of investigations into “potential monopolistic behavior”. And Weinberg concedes that US regulators haven’t yet reached the stage of discussing remedies.

“It feels at a very investigatory stage,” he agrees. “But we would like to accelerate that… As well as bigger remedial changes — similar to privacy and how we’re pushing Do Not Track legislation — as something you can do right now as kind of low hanging fruit. I view this preference menu in the same way.”

“It’s a very high leverage thing that you can do immediately to move market share and increase search competition and so one should do it faster and then take the things that need to be slower slower,” he adds, referring to more radical possible competition interventions — such as breaking a business up.

There is certainly growing concern among policymakers around the world that the current modus operandi of enforcing competition law has failed to keep pace with increasingly powerful technology-driven businesses and platforms — hence ‘winner takes all’ skews which exist in certain markets and marketplaces, reducing choice for consumers and shrinking opportunities for startups to compete.

This concern was raised as a question for Europe’s competition chief, Margrethe Vestager, during her hearing in front of the EU parliament earlier this month. She pointed to the Commission’s use of interim measures in an ongoing case against chipmaker Broadcom as an example of how the EU is trying to speed up its regulatory response, noting it’s the first time such an application has been made for two decades.

In a press conference shortly afterwards, to confirm the application of EU interim measures against Broadcom, Vestager added: “Interim measures are one way to tackle the challenge of enforcing our competition rules in a fast and effective manner. This is why they are important. And especially that in fast moving markets. Whenever necessary I’m therefore committed to making the best possible use of this important tool.”

Weinberg is critical of Google’s latest proposals around search engine choice in Europe — after it released details of its idea to ‘evolve’ the search choice screen — by applying an auction model, starting early next year. Other rivals, such as French pro-privacy engine Qwant, have also blasted the proposal.

Clearly, how choice screens are implemented is key to their market impact.

“The way the current design is my read is smaller search engines, including us and including European search engines will not be on the screen long term the way it’s set up,” says Weinberg. “There will need to be additional changes to get the effects that we were seeing in our studies we made.

“There’s many reasons why us and others would not be those highest bidders,” he says of the proposed auction. “But needless to say the bigger companies can outweigh the smaller ones and so there are alternative ways to set this up.”

 



Github removes Tsunami Democràtic’s APK after a takedown order from Spain

13:30 | 30 October

Microsoft-owned Github has removed the APK of an app for organizing political protests in the autonomous community of Catalonia — acting on a takedown request from Spain’s military police (aka the Guardia Civil).

As we reported earlier this month supporters of independence for Catalonia have regrouped under a new banner — calling itself Tsunami Democràtic — with the aim of rebooting the political movement and campaigning for self-determination by mobilizing street protests and peaceful civil disobedience.

The group has also been developing bespoke technology tools to coordinate protest action. It’s one of these tools, the Tsunami Democràtic app, which was being hosted as an APK on Github and has now been taken down.

The app registers supporters of independence by asking them to communicate their availability and resources for taking part in local protest actions across Catalonia. Users are also asked to register for protest actions and check-in when they get there — at which point the app asks them to abide by a promise of non-violence (see point 3 in this sample screengrab):

[Image: sample screengrab of the Tsunami Democràtic app]

Users of the app see only upcoming protests relevant to their location and availability — making it different to the one-to-many broadcasts that Tsunami Democràtic also puts out via its channel on the Telegram messaging app.

Essentially, it’s a decentralized tool for mobilizing smaller, localized protest actions vs the largest demos which continue to be organized via Telegram broadcasts (such as a mass blockade of Barcelona airport, earlier this month).

A source with knowledge of Tsunami Democràtic previously told us the sorts of protests intended to be coordinated via the app could include actions such as go-slows to disrupt traffic on local roads and fake shopping sprees in supermarkets, with protestors abandoning carts filled with products in the store.

In a section of Github’s site detailing government takedowns the request from the Spanish state to remove the Tsunami Democràtic app sits alongside folders containing historical takedown requests from China and Russia.

“There is an ongoing investigation being carried out by the National High Court where the movement Tsunami Democràtic has been confirmed as a criminal organization driving people to commit terrorist attacks. Tsunami Democràtic’s main goal is coordinating these riots and terrorist actions by using any possible mean,” Spain’s military police write in the letter sent to Github.

We’ve reached out to Microsoft for comment on Github’s decision to remove the app APK.

In a note about government takedowns on Github’s website it writes:

From time to time, GitHub receives requests from governments to remove content that has been declared unlawful in their local jurisdiction. Although we may not always agree with those laws, we may need to block content if we receive a valid request from a government official so that our users in that jurisdiction may continue to have access to GitHub to collaborate and build software.

“GitHub does not endorse or adopt any assertion contained in the following notices,” it adds in a further caveat on the page.

The trigger for the latest wave of street demonstrations in Catalonia was the lengthy jail sentences handed down to a number of Catalan political and cultural leaders by Spain’s Supreme Court earlier this month.

These were people involved in organizing an illegal independence referendum two years ago. The majority of these Catalan leaders were convicted for sedition. None were found guilty of the more serious charge of rebellion — but sentences ran as long as 13 years nonetheless.

This month Spanish judges also reissued a European arrest warrant seeking to extradite the former leader of the Catalan government, Carles Puigdemont, from Brussels to Spain to face trial.  Last year a court in Germany refused his extradition to Spain on charges of rebellion or sedition — only allowing it on lesser grounds of misuse of public funds. A charge which Spain did not pursue.

Puigdemont fled Catalonia in the wake of the failed 2017 independence bid and has remained living in exile in Brussels. He has also since been elected as an MEP but has been unable to take up his seat in the EU parliament after the Spanish state moved to block him from being recognized as a parliamentarian.

Shortly after the latest wave of pro-independence demonstrations took off in Catalonia the Tsunami Democràtic movement’s website was taken offline — also as a result of a takedown request by the Spanish state.

The website remains offline at the time of writing.

While the Tsunami Democràtic app could be accused of encouraging disruption, the charge of “terrorism” is clearly overblown. Unless your definition of terrorism extends to harnessing the power of peaceful civil resistance to generate momentum for political change. 

And while there has been unrest on the streets of Barcelona and other Catalan towns and cities this month, with fires being lit and projectiles thrown at police, there are conflicting reports about what has triggered these clashes between police and protestors — including criticism of the police response as overly aggressive vs what has been, in the main, large but peaceful crowds of pro-democracy demonstrators.

The police response on the day of the 2017 referendum was also widely condemned as violently disproportionate, with scenes of riot gear clad police officers beating up people as they tried to cast a vote.

Local press in Catalonia has reported the European Commission response to Spain’s takedown of the Tsunami Democràtic website — saying the pan-EU body said Spain has a responsibility to find “the right balance between guaranteeing freedom of expression and upholding public order and ensuring security, as well as protecting [citizens] from illegal content”.

Asked what impact the Github takedown of the Tsunami Democràtic app’s APK will have on the app, a source with knowledge of the movement suggested very little — pointing out that the APK is now being hosted on Telegram.

Similarly, the content that was available on the movement’s website is being posted to its 380,000+ subscribers on Telegram — a messaging platform that’s itself been targeted for blocks by authoritarian states in various locations around the world. (Though not, so far, in Spain.)

Another protest support tool that’s been in the works in Catalonia — a live-map for crowdsourcing information about street protests which looks similar to the HKlive.maps app used by pro-democracy campaigners in Hong Kong — is still in testing but expected to launch soon, per the source.

 



Tech giants still not doing enough to fight fakes, says European Commission

21:48 | 29 October

It’s a year since the European Commission got a bunch of adtech giants together to spill ink on a voluntary Code of Practice to do something — albeit, nothing very quantifiable — as a first step to stop the spread of disinformation online.

Its latest report card on this voluntary effort sums to: the platforms could do better.

The Commission said the same in January. And will doubtless say it again. Unless or until regulators grasp the nettle of online business models that profit by maximizing engagement. As the saying goes, lies fly while the truth comes stumbling after. So attempts to shrink disinformation without fixing the economic incentives to spread BS in the first place are mostly dealing in cosmetic tweaks and optics.

Signatories to the Commission’s EU Code of Practice on Disinformation are: Facebook, Google, Twitter, Mozilla, Microsoft and several trade associations representing online platforms, the advertising industry, and advertisers — including the Internet Advertising Bureau (IAB) and World Federation of Advertisers (WFA).

In a press release assessing today’s annual reports, compiled by signatories, the Commission expresses disappointment that no other Internet platforms or advertising companies have signed up since Microsoft joined as a late addition to the Code this year.

“We commend the commitment of the online platforms to become more transparent about their policies and to establish closer cooperation with researchers, fact-checkers and Member States. However, progress varies a lot between signatories and the reports provide little insight on the actual impact of the self-regulatory measures taken over the past year as well as mechanisms for independent scrutiny,” wrote commissioners Věra Jourová, Julian King and Mariya Gabriel in a joint statement. [emphasis ours]

“While the 2019 European Parliament elections in May were clearly not free from disinformation, the actions and the monthly reporting ahead of the elections contributed to limiting the space for interference and improving the integrity of services, to disrupting economic incentives for disinformation, and to ensuring greater transparency of political and issue-based advertising. Still, large-scale automated propaganda and disinformation persist and there is more work to be done under all areas of the Code. We cannot accept this as a new normal,” they add.

The risk, of course, is that the Commission’s limp-wristed code risks rapidly cementing a milky jelly of self-regulation in the fuzzy zone of disinformation as the new normal, as we warned when the Code launched last year.

The Commission continues to leave the door open (a crack) to doing something platforms can’t (mostly) ignore — i.e. actual regulation — saying its assessment of the effectiveness of the Code remains ongoing.

But that’s just a dangled stick. At this transitionary point between outgoing and incoming Commissions, it seems content to stay in a ‘must do better’ holding pattern. (Or: “It’s what the Commission says when it has other priorities,” as one source inside the institution put it.)

A comprehensive assessment of how the Code is working is slated as coming in early 2020 — i.e. after the new Commission has taken up its mandate. So, yes, that’s the sound of the can being kicked a few more months on.

Summing up its main findings from signatories’ self-marked ‘progress’ reports, the outgoing Commission says they have reported improved transparency between themselves vs a year ago on discussing their respective policies against disinformation. 

But it flags poor progress on implementing commitments to empower consumers and the research community.

“The provision of data and search tools is still episodic and arbitrary and does not respond to the needs of researchers for independent scrutiny,” it warns. 

This is ironically an issue that one of the signatories, Mozilla, has been an active critic of others over — including Facebook, whose political ad API it reviewed damningly this year, finding it not fit for purpose and “designed in ways that hinders the important work of researchers, who inform the public and policymakers about the nature and consequences of misinformation”. So, er, ouch.

The Commission is also critical of what it says are “significant” variations in the scope of actions undertaken by platforms to implement “commitments” under the Code, noting too that differences in implementation of platform policy, cooperation with stakeholders and sensitivity to electoral contexts persist across Member States, as do differences in the EU-specific metrics provided.

But given the Code only ever asked for fairly vague action in some pretty broad areas, without prescribing exactly what platforms were committing themselves to doing, nor setting benchmarks for action to be measured against, inconsistency and variety is really what you’d expect. That and the can being kicked down the road. 

The Code did extract one quasi-firm commitment from signatories — on the issue of bot detection and identification — by getting platforms to promise to “establish clear marking systems and rules for bots to ensure their activities cannot be confused with human interactions”.

A year later it’s hard to see clear sign of progress on that goal. Although platforms might argue that what they claim is increased effort toward catching and killing malicious bot accounts before they have a chance to spread any fakes is where most of their sweat is going on that front.

Twitter’s annual report, for instance, talks about what it’s doing to fight “spam and malicious automation strategically and at scale” on its platform — saying its focus is “increasingly on proactively identifying problematic accounts and behaviour rather than waiting until we receive a report”; after which it says it aims to “challenge… accounts engaging in spammy or manipulative behavior before users are ​exposed to ​misleading, inauthentic, or distracting content”.

So, in other words, if Twitter does this perfectly — and catches every malicious bot before it has a chance to tweet — it might plausibly argue that bot labels are redundant. Though it’s clearly not in a position to claim it’s won the spam/malicious bot war yet. Ergo, its users remain at risk of consuming inauthentic tweets that aren’t clearly labeled as such (or even as ‘potentially suspect’ by Twitter). Presumably because these are the accounts that continue slipping under its bot-detection radar.

There’s also nothing in Twitter’s report about it labelling even (non-malicious) bot accounts as bots — for the purpose of preventing accidental confusion (after all satire misinterpreted as truth can also result in disinformation). And this despite the company suggesting a year ago that it was toying with adding contextual labels to bot accounts, at least where it could detect them.

In the event it’s resisted adding any more badges to accounts. While an internal reform of its verification policy for verified account badges was put on pause last year.

Facebook’s report also only makes a passing mention of bots, under a section sub-headed “spam” — where it writes circularly: “Content actioned for spam has increased considerably, since we found and took action on more content that goes against our standards.”

It includes some data-points to back up this claim of more spam squashed — citing a May 2019 Community Standards Enforcement report — where it states that in Q4 2018 and Q1 2019 it acted on 1.8 billion pieces of spam in each of the quarters vs 737 million in Q4 2017; 836 million in Q1 2018; 957 million in Q2 2018; and 1.2 billion in Q3 2018. 

Though it’s lagging on publishing more up-to-date spam data now, noting in the report submitted to the EC that: “Updated spam metrics are expected to be available in November 2019 for Q2 and Q3 2019” — i.e. conveniently late for inclusion in this report.

Facebook’s report notes ongoing efforts to put contextual labels on certain types of suspect/partisan content, such as labelling photos and videos which have been independently fact-checked as misleading; labelling state-controlled media; and labelling political ads.

Labelling bots is not discussed in the report — presumably because Facebook prefers to focus attention on self-defined spam-removal metrics vs muddying the water with discussion of how much suspect activity it continues to host on its platform, either through incompetence, lack of resources or because it’s politically expedient for its business to do so.

Labelling all these bots would mean Facebook signposting inconsistencies in how it applies its own policies — in a way that might foreground its own political bias. And there’s no self-regulatory mechanism under the sun that will make Facebook fess up to such double-standards.

For now, the Code’s requirement for signatories to publish an annual report on what they’re doing to tackle disinformation looks to be the biggest win so far. Albeit, it’s very loosely bound self-reporting. While some of these ‘reports’ don’t even run to a full page of A4 text — so set your expectations accordingly.

The Commission has published all the reports here. It has also produced its own summary and assessment of them (here).

“Overall, the reporting would benefit from more detailed and qualitative insights in some areas and from further big-picture context, such as trends,” it writes. “In addition, the metrics provided so far are mainly output indicators rather than impact indicators.”

Of the Code generally — as a “self-regulatory standard” — the Commission argues it has “provided an opportunity for greater transparency into the platforms’ policies on disinformation as well as a framework for structured dialogue to monitor, improve and effectively implement those policies”, adding: “This represents progress over the situation prevailing before the Code’s entry into force, while further serious steps by individual signatories and the community as a whole are still necessary.”

 



Alexa, where are the legal limits on what Amazon can do with my health data?

13:46 | 24 October

The contract between the UK’s National Health Service (NHS) and ecommerce giant Amazon — for a health information licensing partnership involving its Alexa voice AI — has been released following a Freedom of Information request.

The government announced the partnership this summer. But the date on the contract, which was published on the gov.uk contracts finder site months after the FOI was filed, shows the open-ended arrangement to funnel nipped-and-tucked health advice from the NHS’ website to Alexa users in audio form was inked back in December 2018.

The contract is between the UK government and Amazon US (Amazon Digital Services, Delaware) — rather than Amazon UK. 

Nor is it a standard NHS Choices content syndication contract. A spokeswoman for the Department of Health and Social Care (DHSC) confirmed the legal agreement uses an Amazon contract template. She told us the department had worked jointly with Amazon to adapt the template to fit the intended use — i.e. access to publicly funded healthcare information from the NHS’ website.

The NHS does make the same information freely available on its website, of course. As well as via API — to some 1,500 organizations. But Amazon is not just any organization; it’s a powerful US platform giant with a massive ecommerce business.

The contract reflects that power imbalance: it is not a standard NHS content syndication agreement but rather Amazon’s standard terms tweaked by DHSC.

“It was drawn up between both Amazon UK and the Department for Health and Social Care,” a department spokeswoman told us. “Given that Amazon is in the business of holding standard agreements with content providers they provided the template that was used as the starting point for the discussions but it was drawn up in negotiation with the Department for Health and Social Care, and obviously it was altered to apply to UK law rather than US law.”

In July, when the government officially announced the Alexa-NHS partnership, its PR provided a few sample queries of how Amazon’s voice AI might respond to what it dubbed “NHS-verified” information — such as: “Alexa, how do I treat a migraine?”; “Alexa, what are the symptoms of flu?”; “Alexa, what are the symptoms of chickenpox?”.

But of course as anyone who’s ever googled a health symptom could tell you, the types of stuff people are actually likely to ask Alexa — once they realize they can treat it as an NHS-verified info-dispensing robot, and go down the symptom-querying rabbit hole — is likely to range very far beyond the common cold.

At the official launch of what the government couched as a ‘collaboration’ with Amazon, it explained its decision to allow NHS content to be freely piped through Alexa by suggesting that voice technology has “the potential to reduce the pressure on the NHS and GPs by providing information for common illnesses”.

Its PR cited an unattributed claim that “by 2020, half of all searches are expected to be made through voice-assisted technology”.

This prediction is frequently attributed to ComScore, a media measurement firm that was last month charged with fraud by the SEC. However it actually appears to originate with computer scientist Andrew Ng, from when he was chief scientist at Chinese tech giant Baidu.

Econsultancy noted last year that Mary Meeker included Ng’s claim on a slide in her 2016 Internet Trends report — which is likely how the prediction got so widely amplified.

But on Meeker’s slide you can see that the prediction is in fact “images or speech”, not voice alone…

[Image: slide from Mary Meeker’s 2016 Internet Trends report]

So it turns out the UK government incorrectly cited a tech giant prediction to push a claim that “voice search has been increasing rapidly” — in turn its justification for funnelling NHS users towards Amazon.

“We want to empower every patient to take better control of their healthcare and technology like this is a great example of how people can access reliable, world-leading NHS advice from the comfort of their home, reducing the pressure on our hardworking GPs and pharmacists,” said health secretary Matt Hancock in a July statement.

Since landing at the health department, the app-loving former digital minister has been pushing a tech-first agenda for transforming the NHS — promising to plug in “healthtech” apps and services, and touting “preventative, predictive and personalised care”. He’s also announced an AI lab housed within a new unit that’s intended to oversee the digitization of the NHS.

Compared with all that, plugging the NHS’ website into Alexa probably seems like an easy ‘on-message’ win. But immediately the collaboration was announced concerns were raised that the government is recklessly mixing the streams of critical (and sensitive) national healthcare infrastructure with the rapacious data-appetite of a foreign tech giant with both an advertising and ecommerce business, plus major ambitions of its own in the healthcare space.

On the latter front, just yesterday news broke of Amazon’s second health-related acquisition: Health Navigator, a startup with an API platform for integrating with health services, such as telemedicine and medical call centers, which offers natural language processing tools for documenting health complaints and care recommendations.

Last year Amazon also picked up online pharmacy PillPack — for just under $1BN. While last month it launched a pilot of a healthcare service offering to its own employees in and around Seattle, called Amazon Care. That looks intended to be a road-test for addressing the broader U.S. market down the line. So the company’s commercial designs on healthcare are becoming increasingly clear.

Returning to the UK, in response to early critical feedback on the Alexa-NHS arrangement, the IT delivery arm of the service, NHS Digital, published a blog post going into more detail about the arrangement — following what it couched as “interesting discussion about the challenges for the NHS of working with large commercial organisations like Amazon”.

A core critical “discussion” point is the question of what Amazon will do with people’s medical voice query data, given the partnership is clearly encouraging people to get used to asking Alexa for health advice.

“We have stuck to the fundamental principle of not agreeing a way of working with Amazon that we would not be willing to consider with any single partner – large or small. We have been careful about data, commercialisation, privacy and liability, and we have spent months working with knowledgeable colleagues to get it right,” NHS Digital claimed in July.

In another section of the blog post, responding to questions about what Amazon will do with the data and “what about privacy”, it further asserted there would be no health profiling of customers — writing:

We have worked with the Amazon team to ensure that we can be totally confident that Amazon is not sharing any of this information with third parties. Amazon has been very clear that it is not selling products or making product recommendations based on this health information, nor is it building a health profile on customers. All information is treated with high confidentiality. Amazon restrict access through multi-factor authentication, services are all encrypted, and regular audits run on their control environment to protect it.

Yet it turns out the contract DHSC signed with Amazon is just a content licensing agreement. There are no terms contained in it concerning what can or can’t be done with the medical voice query data Alexa is collecting with the help of “NHS-verified” information.

Per the contract terms, Amazon is required to attribute content to the NHS when Alexa responds to a query with information from the service’s website. (Though the company says Alexa also makes use of medical content from the Mayo Clinic and Wikipedia.) So, from the user’s point of view, they will at times feel like they’re talking to an NHS-branded service.

But without any legally binding confidentiality clauses around what can be done with their medical voice queries it’s not clear how NHS Digital can confidently assert that Amazon isn’t creating health profiles.

The situation seems to sum to, er, trust Amazon. (NHS Digital wouldn’t comment; saying it’s only responsible for delivery not policy setting, and referring us to the DHSC.)

Asked what it does with medical voice query data generated as a result of the NHS collaboration an Amazon spokesperson told us: “We do not build customer health profiles based on interactions with nhs.uk content or use such requests for marketing purposes.”

But the spokesperson could not point to any legally binding contract clauses in the licensing agreement that restrict what Amazon can do with people’s medical queries.

We’ve also asked the company to confirm whether medical voice queries that return NHS content are being processed in the US.

“This collaboration only provides content already available on the NHS.UK website, and absolutely no personal data is being shared by NHS to Amazon or vice versa,” Amazon also told us, eliding the key point that it’s not NHS data being shared with Amazon but NHS users, reassured by the presence of a trusted public brand, being encouraged to feed Alexa sensitive personal data by asking about their ailments and health concerns.

Bizarrely, the Department of Health and Social Care went further. Its spokeswoman claimed in an email that “there will be no data shared, collected or processed by Amazon and this is just an alternative way of providing readily available information from NHS.UK.”

When we spoke to DHSC on the phone prior to this, to raise the issue of medical voice query data generated via the partnership and fed to Amazon — also asking where in the contract are clauses to protect people’s data — the spokeswoman said she would have to get back to us.

All of which suggests the government has a very vague idea (to put it generously) of how cloud-powered voice AIs function.

Presumably no one at DHSC bothered to read the information on Amazon’s own Alexa privacy page — although the department spokeswoman was at least aware this page existed (because she knew Amazon had pointed us to what she called its “privacy notice”, which she said “sets out how customers are in control of their data and utterances”).

If you do read the page you’ll find Amazon offers some broad-brush explanation there which tells you that after an Alexa device has been woken by its wake word, the AI will “begin recording and sending your request to Amazon’s secure cloud”.

Ergo data is collected and processed. And indeed stored on Amazon’s servers. So, yes, data is ‘shared’.

The more detailed Alexa Internet Privacy Notice, meanwhile, sets out broad-brush parameters to enable Amazon’s reuse of Alexa user data — stating that “the information we learn from users helps us personalize and continually improve your Alexa experience and provide information about Internet trends, website popularity and traffic, and related content”. [emphasis ours]

The DHSC sees the matter very differently, though.

With no contractual binds covering the health-related queries UK users of Alexa are being encouraged to whisper into Amazon’s robotic ears — data that’s naturally linked to Alexa and Amazon account IDs (and which the Alexa Internet Privacy Notice also specifies can be accessed by “a limited number of employees”) — the government is accepting the tech giant’s standard data processing terms for a commercial, consumer product that is deeply integrated into its increasingly sprawling business empire.

Terms such as indefinite retention of audio recordings — unless users pro-actively request that they are deleted. And even then Amazon admitted this summer it doesn’t always delete the text transcripts of recordings. So even if you keep deleting all your audio snippets, traces of medical queries may well remain on Amazon’s servers.

Earlier this year it also emerged the company employs contractors around the world to listen in to Alexa recordings as part of internal efforts to improve the performance of the AI.

A number of tech giants recently admitted to the presence of such ‘speech grading’ programs, as they’re sometimes called — though none had been up front and transparent about the fact their shiny AIs needed an army of external human eavesdroppers to pull off a show of faux intelligence.

It’s been journalists highlighting the privacy risks for users of AI assistants, and media exposure leading to public pressure on tech giants, that have forced changes to concealed internal processes which, by default, treated people’s information as an owned commodity that exists to serve and reserve their own corporate interests.

Data protection? Only if you interpret the term as meaning your personal data is theirs to capture and that they’ll aggressively defend the IP they generate from it.

So, in other words, actual humans — both employed by Amazon directly and not — may be listening to the medical stuff you’re telling Alexa. Unless the user finds and activates a recently added ‘no human review’ option buried in Alexa settings.

Many of these arrangements remain under regulatory scrutiny in Europe. Amazon’s lead data protection regulator in Europe confirmed in August that it’s in discussions with the company over concerns related to its manual reviews of Alexa recordings. So UK citizens — whose taxes fund the NHS — might be forgiven for expecting more care from their own government around such a ‘collaboration’.

More care, that is, than a wholesale swallowing of tech giant T&Cs in exchange for free access to the NHS brand and “NHS-verified” information — which helps Amazon burnish Alexa’s utility and credibility, allowing it to gather valuable insights for its commercial healthcare ambitions.

To date there has been no recognition from DHSC that the government has a duty of care towards NHS users as regards the potential risks its content partnership might generate, as Alexa harvests their voice queries via a commercial conduit that affords users only very partial controls over what happens to their personal data.

Nor is DHSC considering the value being generously gifted by the state to Amazon — in exchange for a vague supposition that a few citizens might go to the doctor a bit less if a robot tells them what flu symptoms look like.

“The NHS logo is supposed to mean something,” says Sam Smith, coordinator at patient data privacy advocacy group, MedConfidential — one of the organizations that makes use of the NHS’ free APIs for health content (but which he points out did not write its own contract for the government to sign).

“When DHSC signed Amazon’s template contract to put the NHS logo on anything Amazon chooses to do, it left patients to fend for themselves against the business model of Amazon in America.”

In a related development this week, Europe’s data protection supervisor has warned of serious data protection concerns related to standard contracts EU institutions have inked with another tech giant, Microsoft, to use its software and services.

The watchdog recently created a strategic forum that’s intended to bring together the region’s public administrations to work on drawing up standard contracts with fairer terms for the public sector — to shrink the risk of institutions feeling outgunned and pressured into accepting T&Cs written by the same few powerful tech providers.

Such an effort is sorely needed — though it comes too late to hand-hold the UK government into striking more patient-sensitive terms with Amazon US.

 



EU-US Privacy Shield passes third Commission ‘health check’ — but litigation looms

15:43 | 23 October

The third annual review of the EU-US Privacy Shield data transfer mechanism has once again been nodded through by Europe’s executive.

This despite the EU parliament calling last year for the mechanism to be suspended.

The European Commission also issued US counterparts with a compliance deadline last December — saying the US must appoint a permanent ombudsperson to handle EU citizens’ complaints, as required by the arrangement, and do so by February.

This summer the US senate finally confirmed Keith Krach — under secretary of state for economic growth, energy, and the environment — in the ombudsperson role.

The Privacy Shield arrangement was struck between EU and US negotiators back in 2016 — as a rushed replacement for the prior Safe Harbor data transfer pact which in fall 2015 was struck down by Europe’s top court following a legal challenge after NSA whistleblower Edward Snowden revealed US government agencies were liberally helping themselves to digital data from Internet companies.

At heart is a fundamental legal clash between EU privacy rights and US national security priorities.

The intent for the Privacy Shield framework is to paper over those cracks by devising enough checks and balances that the Commission can claim it offers adequate protection for EU citizens’ personal data when taken to the US for processing, despite the lack of a commensurate, comprehensive data protection regime. But critics have argued from the start that the mechanism is flawed.

Even so around 5,000 companies are now signed up to use Privacy Shield to certify transfers of personal data. So there would be major disruption to businesses were it to go the way of its predecessor — as has looked likely in recent years, since Donald Trump took office as US president.

The Commission remains a staunch defender of Privacy Shield, warts and all, preferring to support data-sharing business as usual than offer a pro-active defence of EU citizens’ privacy rights.

To date it has offered little in the way of objection about how the US has implemented Privacy Shield in these annual reviews, despite some glaring flaws and failures (for example the disgraced political data firm, Cambridge Analytica, was a signatory of the framework, even after the data misuse scandal blew up).

The Commission did lay down one deadline late last year, regarding the ongoing lack of a permanent ombudsperson. So it can now check that box.

It also notes approvingly today that the final two vacancies on the US’ Privacy and Civil Liberties Oversight Board have been filled, meaning it’s fully-staffed for the first time since 2016.

Commenting in a statement, commissioner for justice, consumers and gender equality, Věra Jourová, added: “With around 5,000 participating companies, the Privacy Shield has become a success story. The annual review is an important health check for its functioning. We will continue the digital diplomacy dialogue with our U.S. counterparts to make the Shield stronger, including when it comes to oversight, enforcement and, in a longer-term, to increase convergence of our systems.”

Its press release characterizes US enforcement action related to the Privacy Shield as having “improved” — citing the Federal Trade Commission taking enforcement action in a grand total of seven cases.

It also says vaguely that “an increasing number” of EU individuals are making use of their rights under the Privacy Shield, claiming the relevant redress mechanisms are “functioning well”. (Critics have long suggested the opposite.)

The Commission is also recommending further improvements, including that the US expand compliance checks, such as those concerning false claims of participation in the framework.

So presumably there’s a bunch of entirely fake compliance claims going unchecked, as well as actual compliance going under-checked…

“The Commission also expects the Federal Trade Commission to further step up its investigations into compliance with substantive requirements of the Privacy Shield and provide the Commission and the EU data protection authorities with information on ongoing investigations,” the EC adds.

All these annual Commission reviews are just fiddling around the edges, though. The real substantive test for Privacy Shield — the one that will determine its long-term survival — is looming on the horizon: a judgement expected from Europe’s top court next year.

In July a hearing took place in a key case that’s been dubbed Schrems II. This legal challenge initially targeted Facebook’s use of another EU data transfer mechanism but has since been broadened to include a series of legal questions over Privacy Shield, and is now before the Court of Justice of the European Union.

There is also a separate litigation directly targeting Privacy Shield that was brought by a French digital rights group which argues it’s incompatible with EU law on account of US government mass surveillance practices.

The Commission’s PR notes the pending litigation — writing that this “may also have an impact on the Privacy Shield”. “A hearing took place in July 2019 in case C-311/18 (Schrems II) and, once the Court’s judgement is issued, the Commission will assess its consequences for the Privacy Shield,” it adds.

So, tl;dr, today’s third annual review doesn’t mean Privacy Shield is out of the legal woods.

 



Google’s Play Store is giving an age-rating finger to Fleksy, a Gboard rival 🖕

14:11 | 23 October

Platform power is a helluva drug. Do a search on Google’s Play Store in Europe and you’ll find the company’s own Gboard app has an age rating of PEGI 3 — the lowest tier of the Pan European Game Information (PEGI) labelling system, signifying content suitable for all age groups.

PEGI 3 means it may still contain a little cartoon violence. Say, for example, an emoji fist or middle finger.

Now do a search on Play for the rival Fleksy keyboard app and you’ll find it has a PEGI 12 age rating. This label signifies the rated content can contain slightly more graphic fantasy violence and mild bad language.

The discrepancy in labelling suggests there’s a material difference between Gboard and Fleksy in terms of the content you might encounter. Yet both are pretty similar keyboard apps, with features like predictive emoji and baked-in GIFs. Gboard also lets you create custom emoji, while Fleksy puts mini apps at your fingertips.

A bigger difference is that Gboard is made by Play Store owner and platform controller Google, whereas Fleksy is an indie keyboard that has been developed since 2017 by ThingThing, a startup based in Spain.

Fleksy’s keyboard didn’t always carry a 12+ age rating — this is a new development, based not on its content changing but on Google enforcing its Play Store policies differently.

The Fleksy app, which has been on the Play Store for around eight years at this point — and per Play Store install stats has had more than 5M downloads to date — carried a PEGI 3 rating until earlier this month. But then Google stepped in and forced the team to raise the rating to 12, which means the Play Store description for Fleksy in Europe now rates it PEGI 12 and specifies it contains “Mild Swearing”.


The Play Store’s system for age ratings requires developers to fill in a content ratings form, responding to a series of questions about their app’s content, in order to obtain a suggested rating.

Fleksy’s team have done so over the years — and come up with the PEGI 3 rating without issue. But this month they found themselves being issued the questionnaire multiple times, and then their latest app update was blocked without explanation — meaning they had to reach out to Play Developer Support to ask what was going wrong.

After some email back and forth with support staff, they were told that the app contained age-inappropriate emoji content. Here’s what Google wrote:

During review, we found that the content rating is not accurate for your app… Content ratings are used to inform consumers, especially parents, of potentially objectionable content that exists within an app.

For example, we found that your app contains content (e.g. emoji) that is not appropriate for all ages. Please refer to the attached screenshot.

In the attached screenshot Google’s staff fingered the middle finger emoji as the reason for blocking the update:

[Screenshot: the Play review notice flagging Fleksy’s middle finger emoji]

“We never thought a simple emoji is meant to be 12+,” ThingThing CEO Olivier Plante tells us.

With their update rejected the team was forced to raise the rating of Fleksy to PEGI 12 — just to get their update unblocked so they could push out a round of bug fixes for the app.

That’s not the end of the saga, though. Google’s Play Store team is still not happy with the regional age rating for Fleksy — and wants to push the rating even higher — claiming, in a subsequent email, that “your app contains mature content (e.g. emoji) and should have higher rating”.

Now, to be crystal clear, Google’s own Gboard app also contains the middle finger emoji. We are 100% sure of this because we double-checked…


Emojis available on Google’s Gboard keyboard, including the ‘screw you’ middle finger. Photo credit: Romain Dillet/TechCrunch

This is not surprising. Pretty much any smartphone keyboard — native or add-on — would contain this symbol because it’s a totally standard emoji.
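
To underline how standard it is: the symbol is a single Unicode codepoint, U+1F595, which has been part of the Unicode standard since version 7.0 in 2014. A minimal Python sketch (ours, purely for illustration) shows it’s just a regular character that any standard emoji set includes:

import unicodedata

# U+1F595 (“Reversed Hand With Middle Finger Extended”) was added in
# Unicode 7.0 (2014); it ships with the standard emoji set rather than
# being something individual keyboard apps invent.
ch = "\U0001F595"
print(ch, hex(ord(ch)), unicodedata.name(ch))
# Prints: 🖕 0x1f595 REVERSED HAND WITH MIDDLE FINGER EXTENDED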

But when Plante pointed out to Google that the middle finger emoji can be found in both Fleksy’s and Gboard’s keyboards — and asked them to drop Fleksy’s rating back to PEGI 3 like Gboard — the Play team did not respond.

The next tier up from PEGI 12 is PEGI 16 — a rating which means the depiction of violence (or sexual activity) “reaches a stage that looks the same as would be expected in real life”, per official guidance on the labels, while the use of bad language can be “more extreme”, and content may include the use of tobacco, alcohol or illegal drugs.

And remember Google is objecting to “mature” emoji. So perhaps its app reviewers have been clutching at their pearls after finding other standard emojis which depict stuff like glasses of beer, martinis and wine…

Over on the US Play Store, meanwhile, the Fleksy app is rated “Teen”.

While Gboard is — yup, you guessed it! — ‘E for Everyone’…


Plante says the double standard Google is imposing on its own app vs third-party keyboards is infuriating, and he accuses the platform giant of anti-competitive behavior.

“We’re all-in for competition, it’s healthy… but incumbent players like Google playing it unfair, making their keyboard 3+ with identical emojis, is another showcase of abuse of power,” he tells TechCrunch.

A quick search of the Play Store for other third-party keyboard apps unearths a mixture of ratings — most rated PEGI 3 (such as Microsoft-owned SwiftKey and Grammarly Keyboard); some PEGI 12 (such as Facemoji Emoji Keyboard which, per the Play Store’s summary, contains “violence”).

Only one that we could find among the top listed keyboard apps has a PEGI 16 rating.

This is an app called Classic Big Keyboard — whose listing specifies it contains “Strong Language” (and what keyboard might not, frankly!?). Though, judging by the Play Store screenshots, it appears to be a fairly bog-standard keyboard that simply offers adjustable key sizes — as well as, yes, standard emoji.

“It came as a surprise,” says Plante describing how the trouble with Play started. “At first, in the past weeks, we started to fill in the rating reviews and I got constant emails the rating form needed to be filled with no details as why we needed to revise it so often (6 times) and then this last week we got rejected for the same reason. This emoji was in our product since day 1 of its existence.”

Asked whether he can think of any trigger for Fleksy to come under scrutiny by Play store reviewers now, he says: “We don’t know why but for sure we’re progressing nicely in the penetration of our keyboard. We’re growing fast for sure but unsure this is the reason.”

“I suspect someone is doubling down on competitive keyboards over there as they lost quite some grip of their search business via the alternative browsers in Europe…. Perhaps there is a correlation?” he adds, referring to the European Commission’s antitrust decision against Google Android last year — when the tech giant was hit with a $5BN fine for various breaches of EU competition law, a fine which it is appealing.

“I’ll continue to fight for a fair market and am glad that Europe is leading the way in this,” adds Plante.

Following the EU antitrust ruling against Android, which Google is legally compelled to comply with during any appeals process, it now displays choice screens to Android users in Europe — offering alternative search engines and browsers for download, alongside Google’s own dominant search and browser (Chrome) apps.

However the company still retains plenty of levers it can pull to influence the presentation of content within its dominant Play Store — shaping how rival apps are perceived by Android users, and so whether or not they choose to download them.

So requiring that a keyboard rival gets badged with a much higher age rating than Google’s own keyboard app isn’t a good look, to say the least.

We reached out to Google for an explanation about the discrepancy in age ratings between Fleksy and Gboard and will update this report with any further response. At first glance a spokesman agreed with us that the situation looks odd.

 



EU contracts with Microsoft raising “serious” data concerns, says watchdog

17:38 | 21 October

Europe’s chief data protection watchdog has raised concerns over contractual arrangements between Microsoft and the European Union institutions which are making use of its software products and services.

The European Data Protection Supervisor (EDPS) opened an enquiry into the contractual arrangements between EU institutions and the tech giant this April, following changes to rules governing EU outsourcing.

Today it writes [emphasis in original]: “Though the investigation is still ongoing, preliminary results reveal serious concerns over the compliance of the relevant contractual terms with data protection rules and the role of Microsoft as a processor for EU institutions using its products and services.”

We’ve reached out to Microsoft for comment.

A spokesperson for the company told Reuters: “We are committed to helping our customers comply with GDPR [General Data Protection Regulation], Regulation 2018/1725 and other applicable laws. We are in discussions with our customers in the EU institutions and will soon announce contractual changes that will address concerns such as those raised by the EDPS.”

The preliminary finding follows risk assessments carried out by the Dutch Ministry of Justice and Security, published this summer, which identified similar issues, per the EDPS.

At issue is whether contractual terms are compatible with EU data protection laws intended to protect individual rights across the region.

“Amended contractual terms, technical safeguards and settings agreed between the Dutch Ministry of Justice and Security and Microsoft to better protect the rights of individuals shows that there is significant scope for improvement in the development of contracts between public administration and the most powerful software developers and online service outsourcers,” the watchdog writes today.

“The EDPS is of the opinion that such solutions should be extended not only to all public and private bodies in the EU, which is our short-term expectation, but also to individuals.”

A conference, jointly organized by the EDPS and the Dutch Ministry, which was held in August, brought together EU customers of cloud giants to work on a joint response to tackle regulatory risks related to cloud software provision. The event agenda included a debate on what was billed as “Strategic Vendor Management with respect to hyperscalers such as Microsoft, Amazon Web Services and Google”.

The EDPS says the idea for The Hague Forum — as it’s been named — is to develop a common strategy to “take back control” over IT services and products sold to the public sector by cloud giants.

That could mean, for example, creating standard contracts with fair terms for public administrations, instead of the EU’s various public bodies feeling forced to accept T&Cs as written by the same few powerful providers.

Commenting in a statement today, assistant EDPS, Wojciech Wiewiórowski, said: “We expect that the creation of The Hague Forum and the results of our investigation will help improve the data protection compliance of all EU institutions, but we are also committed to driving positive change outside the EU institutions, in order to ensure maximum benefit for as many people as possible. The agreement reached between the Dutch Ministry of Justice and Security and Microsoft on appropriate contractual and technical safeguards and measures to mitigate risks to individuals is a positive step forward. Through The Hague Forum and by reinforcing regulatory cooperation, we aim to ensure that these safeguards and measures apply to all consumers and public authorities living and operating in the EEA.”

EU data protection law means data controllers who make use of third parties to process personal data on their behalf remain accountable for what’s done with the data — meaning EU public institutions have a responsibility to assess risks around cloud provision, and have appropriate contractual and technical safeguards in place to mitigate risks. So there’s a legal imperative to dial up scrutiny of cloud contracts.

In parallel, the EDPS has been pushing for greater transparency in consumer agreements too.

On the latter front Microsoft’s arrangements with consumers using its desktop OS remain under scrutiny in the EU. Earlier this year the Dutch data protection agency referred privacy concerns about how Windows 10 gathers user data to the company’s lead regulator in Europe.

And this summer the company made changes to its privacy policy for its VoIP product Skype and AI assistant Cortana, after media reports revealed it employed contractors who could listen in to audio snippets in order to improve automated translation and inferences.

The French government, meanwhile, has been loudly pursuing a strategy of digital sovereignty to reduce the state’s reliance on foreign tech providers. Though kicking the cloud giant habit may prove harder than ditching Google search.

 



Europe issues interim antitrust order against Broadcom as probe continues

14:23 | 16 October

Europe has ordered chipmaker Broadcom to stop applying exclusivity clauses in agreements with six of its major customers — imposing so-called ‘interim measures’ based on preliminary findings from an ongoing antitrust investigation.

The move follows a formal statement of objections issued by the Competition Commission in June. At the time the regulator said it would seek to order Broadcom to halt its behaviour while the investigation proceeds — “to avoid any risk of serious and irreparable harm to competition”.

Today Broadcom has been ordered to unilaterally stop applying “anticompetitive provisions” in agreements with six customers, and to inform them it will no longer apply such measures.

It is also barred from agreeing provisions with the same or similar effect, and from engaging in any retaliatory practices with an equivalent effect that are intended to punish customers.

Commenting in a statement, antitrust chief Margrethe Vestager, said: “We have strong indications that Broadcom, the world’s leading supplier of chipsets used for TV set-top boxes and modems, is engaging in anticompetitive practices. Broadcom’s behaviour is likely, in the absence of intervention, to create serious and irreversible harm to competition. We cannot let this happen, or else European customers and consumers would face higher prices and less choice and innovation. We therefore ordered Broadcom to immediately stop its conduct.”

We’ve reached out to Broadcom for comment.

The chipmaker has 30 days to comply with the interim measures, though it can choose to challenge the order in court.

Should the order stand, it will apply for up to three years or until the date of adoption of a final competition decision on the case — whichever is earlier.

The Commission began investigations into Broadcom a year ago.

“We have reached the conclusion that in first sight — or in legal lingo, prima facie — Broadcom is currently infringing competition rules by abusing its dominant position in the system on a chip market in TV set-top boxes, fiber modems and xDSL modems,” said Vestager today, speaking during a press conference setting out the interim measures decision.

In June, when the Commission issued formal objections, it said it believes the chipmaker holds a dominant position in markets for the supply of systems-on-a-chip for TV set-top boxes and modems — identifying clauses in agreements with manufacturers that it suspected could harm competition.

At the time it flagged seven agreements. That’s now been reduced to six as the scope of the investigation has been limited to three markets, following submissions from Broadcom after the Statement of Objections.

Vestager said the slight reduction in scope is “a reflection of a process having heard Broadcom’s arguments” over the past few months.

The use of interim measures is noteworthy as a sign of how the EU regulator is seeking to evolve competition enforcement to keep up with market activity. It’s the first time in 18 years the Commission has sought to use the tool.

“Interim measures are one way to tackle the challenge of enforcing our competition rules in a fast and effective manner,” said Vestager. “This is why they are important. And especially that in fast moving markets. Whenever necessary I’m therefore committed to making the best possible use of this important tool.”

During a recent hearing in front of the EU parliament — as the commissioner heads towards another five years as Europe’s competition chief combined with an expanded role as an EVP setting digital policy — she suggested she will seek to make greater use of interim orders as an enforcement tool.

Asked today whether she has already identified other cases where interim measures could be applied, she said she hasn’t but added: “The tool is on the table. And if we find cases that live up to the two things that have to be fulfilled at the same time, yes we will indeed use interim measures more often.

“We don’t have a line up of cases [where interim measures might be applied],” she added. “Two quite substantial conditions will have to be met. One we have to prove that it’s likely there will be serious and irreparable harm to competition, and second we’ll have to find that there is an infringement at first sight.

“[It’s] an instrument, a tool, where we still will have to be careful and precise,” she went on, noting that the Broadcom investigation has taken a full year of work to get to this point. “We are careful and we will not compromise on the right for the company in question to defend themself.”

Responding to a question about whether interim measures might be more difficult to apply in digital vs traditional markets, she said the regulator will need to be able to identify harm.

“The thing is for an interim measures case to work obviously you will have to be able to identify the harm. And that of course when markets are fast moving — that is the first sort of port of call. Can we identify harm in this market?” she said. “But… we do a lot of different things to fully grasp how competition works in fast moving, platform-driven, network-driven markets in order to be able to do that. And to be able to use the instrument if we find a case where this would be the thing to do in order to prevent irreparable and serious harm to competition.”

 

