Blog of the website «TechCrunch»


Main article: European Commission


Google’s new T&Cs include a Brexit ‘easter egg’ for UK users

19:12 | 20 February

Google has buried a major change in legal jurisdiction for its UK users as part of a wider update to its terms and conditions that’s been announced today and which it says is intended to make its conditions of use clearer for all users.

It says the update to its T&Cs is the first major revision since 2012 — with Google saying it wanted to ensure the policy reflects its current products and applicable laws.

Google says it undertook a major review of the terms, similar to the revision of its privacy policy in 2018, when the EU’s General Data Protection Regulation started being applied. But while it claims the new T&Cs are easier for users to understand — rewritten using simpler language and a clearer structure — there are no other changes involved, such as to how it handles people’s data.

“We’ve updated our Terms of Service to make them easier for people around the world to read and understand — with clearer language, improved organization, and greater transparency about changes we make to our services and products. We’re not changing the way our products work, or how we collect or process data,” Google spokesperson Shannon Newberry said in a statement.

Users of Google products are being asked to review and accept the new terms before March 31 when they are due to take effect.

Reuters reported on the move late yesterday — citing sources familiar with the update who suggested the change of jurisdiction for UK users will weaken legal protections around their data.

However Google disputes that there will be any change in privacy standards for UK users as a result of the shift. It told us there will be no change to how it processes UK users’ data; no change to their privacy settings; and no change to the way it treats their information as a result of the move.

We asked the company for further comment on this — including why it chose not to make a UK subsidiary the legal base for UK users — and a spokesperson told us it is making the change as part of its preparations for the UK to leave the European Union (aka Brexit).

“Like many companies, we have to prepare for Brexit,” Google said. “Nothing about our services or our approach to privacy will change, including how we collect or process data, and how we respond to law enforcement demands for users’ information. The protections of the UK GDPR will still apply to these users.”

Heather Burns, a tech policy specialist based in Glasgow, Scotland — who runs a website dedicated to tracking UK policy shifts around the Brexit process — also believes Google has essentially been forced to make the move because the UK government has recently signalled its intent to diverge from European Union standards in future, including on data protection.

“What has changed since January 31 has been [UK prime minister] Boris Johnson making a unilateral statement that the UK will go its own way on data protection, in direct contrast to everything the UK’s data protection regulator and government has said since the referendum,” she told us. “These bombastic, off-the-cuff statements play to his anti-EU base but businesses act on them. They have to.”

“Google’s transfer of UK accounts from the EU to the US is an indication that they do not believe the UK will either seek or receive a data protection adequacy agreement at the end of the transition period. They are choosing to deal with that headache now rather than later. We shouldn’t underestimate how strong a statement this is from the tech sector regarding its confidence in the Johnson premiership,” she added.

Asked whether she believes there will be a reduction in protections for UK users in future as a result of the shift, Burns suggested that will largely depend on Google.

So — in other words — Brexit means, er, trust Google to look after your data.

“The European data protection framework is based around a set of fundamental user rights and controls over the uses of personal data — the everyday data flows to and from all of our accounts. Those fundamental rights have been transposed into UK domestic law through the Data Protection Act 2018, and they will stay, for now. But with the Johnson premiership clearly ready to jettison the European-derived system of user rights for the US-style anything goes model,” Burns suggested.

“Google saying there is no change to the way we process users’ data, no change to their privacy settings and no change to the way we treat their information can be taken as an indication that they stand willing to continue providing UK users with European-style rights over their data — albeit from a different jurisdiction — regardless of any government intention to erode the domestic legal basis for those rights.”

Reuters’ report also raises concerns about the impact of the Cloud Act agreement between the UK and the US — which is due to come into effect this summer — suggesting it will pose a threat to the safety of UK Google users’ data once it’s moved out of an EU jurisdiction (in this case Ireland) to the US where the Act will apply.

The Cloud Act is intended to make it quicker and easier for law enforcement to obtain data stored in the cloud by companies based in the other legal jurisdiction.

So in future, it might be easier for UK authorities to obtain UK Google users’ data using this legal instrument applied to Google US.

It certainly seems clear that as the UK moves away from EU standards as a result of Brexit it is opening up the possibility of the country replacing long-standing data protection rights for citizens with a regime of supercharged mass surveillance. (The UK government has already legislated to give its intelligence agencies unprecedented powers to snoop on ordinary citizens’ digital comms — so it has a proven appetite for bulk data.)

Again, Google told us the shift of legal base for its UK users will make no difference to how it handles law enforcement requests — a process it talks about here — and further claimed this will be true even when the Cloud Act applies. Which is a weasely way of saying it will do exactly what the law requires.

Google confirmed that GDPR will continue to apply for UK users during the transition period between the old and new terms. After that it said UK data protection law will continue to apply — emphasizing that this is modelled after the GDPR. But of course in the post-Brexit future the UK government might choose to model it after something very different.

Asked to confirm whether it’s committing to maintain current data standards for UK users in perpetuity, the company told us it cannot speculate as to what privacy laws the UK will adopt in the future…

We also asked why it hasn’t chosen to elect a UK subsidiary as the legal base for UK users. To which it gave a nonsensical response — saying this is because the UK is no longer in the EU. Which begs the question when did the UK suddenly become the 51st American State?

Returning to the wider T&Cs revision, Google said it’s making the changes in a response to litigation in the European Union targeted at its terms.

This includes a case in Germany where consumer rights groups successfully sued the tech giant over its use of overly broad terms which the court agreed last year were largely illegal.

In another case a year ago in France a court ordered Google to pay €30,000 for unfair terms — and ordered it to obtain valid consent from users for tracking their location and online activity.

Since at least 2016 the European Commission has also been pressuring tech giants, including Google, to fix consumer rights issues buried in their T&Cs — including unfair terms. A variety of EU laws apply in this area.

In another change being bundled with the new T&Cs Google has added a description about how its business works to the About Google page — where it explains its business model and how it makes money.

Here, among the usual ‘dead cat’ claims about not ‘selling your information’ (tl;dr adtech giants rent attention; they don’t need to sell actual surveillance dossiers), Google writes that it doesn’t use “your emails, documents, photos or confidential information (such as race, religion or sexual orientation) to personalize the ads we show you”.

Though it could be using all that personal stuff to help it build new products it can serve ads alongside.

Even further towards the end of its business model screed it includes the claim that “if you don’t want to see personalized ads of any kind, you can deactivate them at any time”. So, yes, buried somewhere in Google’s labyrinthine settings an opt-out exists.

The change in how Google articulates its business model comes in response to growing political and regulatory scrutiny of adtech business models such as Google’s — including on data protection and antitrust grounds.

 



Google gobbling Fitbit is a major privacy risk, warns EU data protection advisor

16:15 | 20 February

The European Data Protection Board (EDPB) has intervened to raise concerns about Google’s plan to scoop up the health and activity data of millions of Fitbit users — at a time when the company is under intense scrutiny over how extensively it tracks people online and for antitrust concerns.

Google confirmed its plan to acquire Fitbit last November, saying it would pay $7.35 per share for the wearable maker in an all-cash deal that valued Fitbit, and therefore the activity, health, sleep and location data it can hold on its more than 28M active users, at ~$2.1 billion.
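As a rough sanity check on those deal figures: the per-share price and the ~$2.1 billion valuation are from the article, but the implied Fitbit share count below is derived from them, not a reported number.

```python
# Back-of-the-envelope check of the Fitbit deal figures quoted above.
# $7.35/share and the ~$2.1B valuation come from the article; the
# implied share count is derived, not reported.
price_per_share = 7.35   # USD, all-cash offer
deal_value = 2.1e9       # USD, approximate valuation

implied_shares = deal_value / price_per_share
print(f"Implied Fitbit shares outstanding: ~{implied_shares / 1e6:.0f}M")
```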

Regulators are in the process of considering whether to allow the tech giant to gobble up all this data.

Google, meanwhile, is in the process of dialling up its designs on the health space.

In a statement issued after a plenary meeting this week the body that advises the European Commission on the application of EU data protection law highlights the privacy implications of the planned merger, writing: “There are concerns that the possible further combination and accumulation of sensitive personal data regarding people in Europe by a major tech company could entail a high level of risk to the fundamental rights to privacy and to the protection of personal data.”

Just this month the Irish Data Protection Commission (DPC) opened a formal investigation into Google’s processing of people’s location data — finally acting on GDPR complaints filed by consumer rights groups as early as November 2018, which argue the tech giant uses deceptive tactics to manipulate users in order to keep tracking them for ad-targeting purposes.

We’ve reached out to the Irish DPC — which is the lead privacy regulator for Google in the EU — to ask if it shares the EDPB’s concerns.

The latter’s statement goes on to reiterate the importance for EU regulators to assess what it describes as the “longer-term implications for the protection of economic, data protection and consumer rights whenever a significant merger is proposed”.

It also says it intends to remain “vigilant in this and similar cases in the future”.

The EDPB includes a reminder that Google and Fitbit have obligations under Europe’s General Data Protection Regulation to conduct a “full assessment of the data protection requirements and privacy implications of the merger” — and do so in a transparent way, under the regulation’s principle of accountability.

“The EDPB urges the parties to mitigate the possible risks of the merger to the rights to privacy and data protection before notifying the merger to the European Commission,” it also writes.

We reached out to Google for comment but at the time of writing it had not provided a response nor responded to a question asking what commitments it will be making to Fitbit users regarding the privacy of their data.

Fitbit has previously claimed that users’ “health and wellness data will not be used for Google ads”.

However big tech has a history of subsequently steamrollering founder claims that ‘nothing will change’. (See, for e.g.: Facebook’s WhatsApp U-turn on data-linking.)

“The EDPB will consider the implications that this merger may have for the protection of personal data in the European Economic Area and stands ready to contribute its advice on the proposed merger to the Commission if so requested,” the advisory body adds.

We’ve also reached out to the European Commission’s competition unit for a response to the EDPB’s statement.

 



Lack of big tech GDPR decisions looms large in EU watchdog’s annual report

02:01 | 20 February

The lead European Union privacy regulator for most of big tech has put out its annual report which shows another major bump in complaints filed under the bloc’s updated data protection framework, underlining the ongoing appetite EU citizens have for applying their rights.

But what the report doesn’t show is any firm enforcement of EU data protection rules vis-a-vis big tech.

The report leans heavily on stats to illustrate the volume of work piling up on desks in Dublin. But it’s light on decisions on highly anticipated cross-border cases involving tech giants including Apple, Facebook, Google, LinkedIn and Twitter.

The General Data Protection Regulation (GDPR) began being applied across the EU in May 2018 — so is fast approaching its second birthday. Yet its file of enforcements where tech giants are concerned remains very light — even for companies with a global reputation for ripping away people’s privacy.

This despite Ireland having a large number of open cross-border investigations into the data practices of platform and adtech giants — some of which originated from complaints filed right at the moment GDPR came into force.

In the report the Irish Data Protection Commission (DPC) notes it opened a further six statutory inquiries in relation to “multinational technology companies’ compliance with the GDPR” — bringing the total number of major probes to 21. So its ‘big case’ file continues to stack up. (It’s added at least two more since then, with a probe of Tinder and another into Google’s location tracking opened just this month.)

The report is a lot less keen to trumpet the fact that decisions on cross-border cases to date remains a big fat zero.

Though, just last week, the DPC made a point of publicly raising “concerns” about Facebook’s approach to assessing the data protection impacts of a forthcoming product in light of GDPR requirements to do so — an intervention that resulted in a delay to the regional launch of Facebook’s Dating product.

This discrepancy (cross-border cases: 21 – Irish DPC decisions: 0), plus rising anger from civil rights groups, privacy experts, consumer protection organizations and ordinary EU citizens over the paucity of flagship enforcement around key privacy complaints is clearly piling pressure on the regulator. (Other examples of big tech GDPR enforcement do exist. Well, France’s CNIL is one.)

In its defence the DPC does have a horrifying case load. As illustrated by other stats it’s keen to spotlight — such as saying it received a total of 7,215 complaints in 2019; a 75% increase on the total number (4,113) received in 2018. A full 6,904 of which were dealt with under the GDPR (while 311 complaints were filed under the Data Protection Acts 1988 and 2003).

There were also 6,069 data security breaches notified to it, per the report — representing a 71% increase on the total number (3,542) recorded in 2018.
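Those year-on-year percentages can be checked directly against the raw totals the report gives:

```python
# Verify that the percentage increases quoted in the DPC report are
# consistent with the raw 2018 vs 2019 totals it reports.
complaints_2018, complaints_2019 = 4113, 7215
breaches_2018, breaches_2019 = 3542, 6069

def pct_increase(old, new):
    """Percentage growth from old to new, rounded to a whole percent."""
    return round((new - old) / old * 100)

print(pct_increase(complaints_2018, complaints_2019))  # ~75%, as reported
print(pct_increase(breaches_2018, breaches_2019))      # ~71%, as reported
```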

While a full 457 cross-border processing complaints were received in Dublin via the GDPR’s One-Stop-Shop mechanism. (This is the device the Commission came up with for the ‘lead regulator’ approach that’s baked into GDPR and which has landed Ireland in the regulatory hot seat. tl;dr other data protection agencies are passing Dublin A LOT of paperwork.)

The DPC necessarily has to do back and forth on cross border cases, as it liaises with other interested regulators. All of which, you can imagine, creates a rich opportunity for lawyered up tech giants to inject extra friction into the oversight process — by asking to review and query everything. [Insert the sound of a can being hoofed down the road]

Meanwhile the agency that’s supposed to regulate most of big tech (and plenty else) — which writes in the annual report that it increased its full time staff from 110 to 140 last year — did not get all the funding it asked for from the Irish government.

So it also has the hard cap of its own budget to reckon with (just €15.3M in 2019) vs — for example — Google’s parent Alphabet’s $46.1BN in full year 2019 revenue. So, er, do the math.
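Doing that math requires converting the DPC’s euro budget to dollars; the exchange rate below (~1.12 USD per EUR, a rough 2019 average) is an assumption, not a figure from the article.

```python
# "Do the math": DPC's 2019 budget vs Alphabet's full-year 2019 revenue.
# The EUR->USD rate (~1.12, a rough 2019 average) is an assumption,
# not a figure from the article.
dpc_budget_eur = 15.3e6
alphabet_revenue_usd = 46.1e9
usd_per_eur = 1.12  # assumed FX rate

dpc_budget_usd = dpc_budget_eur * usd_per_eur
ratio = alphabet_revenue_usd / dpc_budget_usd
print(f"Alphabet's revenue is roughly {ratio:,.0f}x the DPC's budget")
```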

Nonetheless the pressure is firmly now on Ireland for major GDPR enforcements to flow.

One year of major enforcement inaction could be filed under ‘bedding in’; but two years in without any major decisions would not be a good look. (It has previously said the first decisions will come early this year — so seems to be hoping to have something to show for GDPR’s 2nd birthday.)

Some of the high profile complaints crying out for regulatory action include behavioral ads serviced via real-time bidding programmatic advertising (which the UK data watchdog has admitted for half a year is rampantly unlawful); cookie consent banners (which remain a Swiss Cheese of non-compliance); and adtech platforms cynically forcing consent from users by requiring they agree to being microtargeted with ads to access the (‘free’) service. (Thing is GDPR stipulates that consent as a legal basis must be freely given and can’t be bundled with other stuff, so… )

Full disclosure: TechCrunch’s parent company, Verizon Media (née Oath), is also under ongoing investigation by the DPC — which is looking at whether it meets GDPR’s transparency requirements under Articles 12-14 of the regulation.

Seeking to put a positive spin on 2019’s total lack of a big tech privacy reckoning, commissioner Helen Dixon writes in the report: “2020 is going to be an important year. We await the judgment of the CJEU in the SCCs data transfer case; the first draft decisions on big tech investigations will be brought by the DPC through the consultation process with other EU data protection authorities, and academics and the media will continue the outstanding work they are doing in shining a spotlight on poor personal data practices.”

In further remarks to the media Dixon said: “At the Data Protection Commission, we have been busy during 2019 issuing guidance to organisations, resolving individuals’ complaints, progressing larger-scale investigations, reviewing data breaches, exercising our corrective powers, cooperating with our EU and global counterparts and engaging in litigation to ensure a definitive approach to the application of the law in certain areas.

“Much more remains to be done in terms of both guiding on proportionate and correct application of this principles-based law and enforcing the law as appropriate. But a good start is half the battle and the DPC is pleased at the foundations that have been laid in 2019. We are already expanding our team of 140 to meet the demands of 2020 and beyond.”

One notable date this year also falls when GDPR turns two — because a Commission review of how the regulation is functioning is looming in May.

That’s one deadline that may help to concentrate minds on issuing decisions.

Per the DPC report, the largest category of complaints it received last year fell under ‘access request’ issues — whereby data controllers are failing to give up (all) people’s data when asked — which amounted to 29% of the total; followed by disclosure (19%); fair processing (16%); e-marketing complaints (8%); and right to erasure (5%).
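Applying those category shares to the 7,215 complaint total gives rough absolute counts — note these are estimates derived from the quoted percentages, not figures the report itself breaks out.

```python
# Rough per-category complaint counts, derived by applying the quoted
# percentage shares to the 7,215 total. These are estimates, not
# figures reported by the DPC.
total_complaints = 7215
category_shares = {
    "access request": 0.29,
    "disclosure": 0.19,
    "fair processing": 0.16,
    "e-marketing": 0.08,
    "right to erasure": 0.05,
}

for category, share in category_shares.items():
    print(f"{category}: ~{round(total_complaints * share)}")
```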

On the security front, the vast bulk of notifications received by the DPC related to unauthorised disclosure of data (aka breaches) — with a total across the private and public sector of 5,188 vs just 108 for hacking (though the second largest category was actually lost or stolen paper, with 345).

There were also 161 notifications of phishing; 131 notifications of unauthorized access; 24 notifications of malware; and 17 of ransomware.

 



Europe sets out plan to boost data reuse and regulate “high risk” AIs

17:20 | 19 February

European Union lawmakers have set out a first bundle of proposals for a new digital strategy for the bloc, one that’s intended to drive digitalization across all industries and sectors — and enable what Commission president Ursula von der Leyen has described as ‘A Europe fit for the Digital Age‘.

It could also be summed up as a ‘scramble for AI’, with the Commission keen to rub out barriers to the pooling of massive European data sets in order to power a new generation of data-driven services as a strategy to boost regional competitiveness vs China and the U.S.

Pushing for the EU to achieve technological sovereignty is a key plank of von der Leyen’s digital policy plan for the 27-Member State bloc.

Presenting the latest on her digital strategy to press in Brussels today, she said: “We want the digital transformation to power our economy and we want to find European solutions in the digital age.”

The top-line proposals are:

AI

  • Rules for “high risk” AI systems, such as in health, policing, or transport, requiring that such systems be “transparent, traceable and guarantee human oversight”
  • A requirement that unbiased data is used to train high-risk systems so that they “perform properly, and to ensure respect of fundamental rights, in particular non-discrimination”
  • Consumer protection rules so authorities can “test and certify” data used by algorithms in a similar way to existing rules that allow for checks to be made on products such as cosmetics, cars or toys
  • A “broad debate” on the circumstances where remote use of biometric identification could be justified
  • A voluntary labelling scheme for lower risk AI applications
  • Proposing the creation of an EU governance structure to ensure a framework for compliance with the rules and avoid fragmentation across the bloc

Data

  • A regulatory framework covering data governance, access and reuse between businesses, between businesses and government, and within administrations to create incentives for data sharing, which the Commission says will establish “practical, fair and clear rules on data access and use, which comply with European values and rights such as personal data protection, consumer protection and competition rules” 
  • A push to make public sector data more widely available by opening up “high-value datasets” to enable their reuse to foster innovation
  • Support for cloud infrastructure platforms and systems to support the data reuse goals. The Commission says it will contribute to investments in European High Impact projects on European data spaces and trustworthy and energy efficient cloud infrastructures
  • Sectoral specific actions to build European data spaces that focus on specific areas such as industrial manufacturing, the green deal, mobility or health

The full data strategy proposal can be found here.

While the Commission’s white paper on AI “excellence and trust” is here.

Next steps will see the Commission taking feedback on the plan — as it kicks off public consultation on both proposals.

A final draft is slated by the end of the year after which the various EU institutions will have their chance to chip into (or chip away at) the plan. So how much policy survives for the long haul remains to be seen.

Tech for good

At a press conference following von der Leyen’s statement Margrethe Vestager, the Commission EVP who heads up digital policy, and Thierry Breton, commissioner for the internal market, went into some of the detail around the Commission’s grand plan for “shaping Europe’s digital future”.

The digital policy package is meant to define how we shape Europe’s digital future “in a way that serves us all”, said Vestager.

The strategy aims to unlock access to “more data and good quality data” to fuel innovation and underpin better public services, she added.

The Commission’s digital EVP Margrethe Vestager discussing the AI whitepaper

Collectively, the package is about embracing the possibilities AI creates while managing the risks, she also said, adding: “The point obviously is to create trust, rather than fear.”

She noted that the two policy pieces being unveiled by the Commission today, on AI and data, form part of a more wide-ranging digital and industrial strategy whole with additional proposals still to be set out.

“The picture that will come when we have assembled the puzzle should illustrate three objectives,” she said. “First that technology should work for people and not the other way round; it is first and foremost about purpose. The development, the deployment, the uptake of technology must work in the same direction to make a real positive difference in our daily lives.

“Second that we want a fair and competitive economy — a full Single Market where companies of all sizes can compete on equal terms, where the road from garage to scale up is as short as possible. But it also means an economy where the market power held by a few incumbents cannot be used to block competition. It also means an economy where consumers can take it for granted that their rights are being respected and profits are being taxed where they are made.”

Thirdly, she said the Commission plan would support “an open, democratic and sustainable society”.

“This means a society where citizens can control the data that they provide, where digital platforms are accountable for the contents that they feature… This is a fundamental thing — that while we use new digital tools, use AI as a tool, that we build a society based on our fundamental rights,” she added, trailing a forthcoming democracy action plan.

Digital technologies must also actively enable the green transition, said Vestager — pointing to the Commission’s pledge to achieve carbon neutrality by 2050. Digital, satellite, GPS and sensor data would be crucial to this goal, she suggested.

“More than ever a green transition and digital transition goes hand in hand.”

On the data package Breton said the Commission will launch a European and industrial cloud platform alliance to drive interest in building the next gen platforms he said would be needed to enable massive big data sharing across the EU — tapping into 5G and edge computing.

“We want to mobilize up to €2BN in order to create and mobilize this alliance,” he said. “In order to run this data you need to have specific platforms… Most of this data will be created locally and processed locally — thanks to 5G critical network deployments but also locally to edge devices. By 2030 we expect on the planet to have 500BN connected devices… and of course all the devices will exchange information extremely quickly. And here of course we need to have specific mini cloud or edge devices to store this data and to interact locally with the AI applications embedded on top of this.

“And believe me the requirement for these platforms are not at all the requirements that you see on the personal b2c platform… And then we need of course security and cyber security everywhere. You need of course latencies. You need to react in terms of millisecond — not tenths of a second. And that’s a totally different infrastructure.”

“We have everything in Europe to win this battle,” he added. “Because no one has more expertise of this battle and the foundation — industrial base — than us. And that’s why we say that maybe the winner of tomorrow will not be the winner of today or yesterday.”

Trustworthy artificial intelligence

On AI Vestager said the major point of the plan is “to build trust” — by using a dual push to create what she called “an ecosystem of excellence” and another focused on trust.

The first piece includes a push by the Commission to stimulate funding, including in R&D and support for research such as by bolstering skills. “We need a lot of people to be able to work with AI,” she noted, saying it would be essential for small and medium sized businesses to be “invited in”.

On trust the plan aims to use risk to determine how much regulation is involved, with the most stringent rules being placed on what it dubs “high risk” AI systems. “That could be when AI tackles fundamental values, it could be life or death situation, any situation that could cause material or immaterial harm or expose us to discrimination,” said Vestager.

To scope this the Commission approach will focus on sectors where such risks might apply — such as energy and recruitment.

If an AI product or service is identified as posing a risk then the proposal is for an enforcement mechanism to test that the product is safe before it is put into use. These proposed “conformity assessments” for high risk AI systems include a number of obligations Vestager said are based on suggestions by the EU’s High Level Expert Group on AI — which put out a slate of AI policy recommendations last year.

The four requirements attached to this bit of the proposals are: 1) that AI systems should be trained using data that “respects European values and rules” and that a record of such data is kept; 2) that an AI system should provide “clear information to users about its purpose, its capabilities but also its limits” and that it be clear to users when they are interacting with an AI rather than a human; 3) AI systems must be “technically robust and accurate in order to be trustworthy”; and 4) they should always ensure “an appropriate level of human involvement and oversight”.

Obviously there are big questions about how such broad-brush requirements will be measured and stood up (as well as actively enforced) in practice.

If an AI product or service is not identified as high risk Vestager noted there would still be regulatory requirements in play — such as the need for developers to comply with existing EU data protection rules.

In her press statement, Commission president von der Leyen highlighted a number of examples of how AI might power a range of benefits for society — from “better and earlier” diagnosis of diseases like cancer to helping with her parallel push for the bloc to be carbon neutral by 2050, such as by enabling precision farming and smart heating — emphasizing that such applications rely on access to big data.

“Artificial intelligence is about big data,” she said. “Data, data and again data. And we all know that the more data we have the smarter our algorithms. This is a very simple equation. Therefore it is so important to have access to data that are out there. This is why we want to give our businesses but also the researchers and the public services better access to data.”

“The majority of data we collect today are never ever used even once. And this is not at all sustainable,” she added. “In these data we collect that are out there lies an enormous amount of precious ideas, potential innovation, untapped potential we have to unleash — and therefore we follow the principle that in Europe we have to offer data spaces where you can not only store your data but also share with others. And therefore we want to create European data spaces where businesses, governments and researchers can not only store their data but also have access to other data they need for their innovation.”

She too stressed the need for AI regulation, including to guard against the risk of biased algorithms — saying “we want citizens to trust the new technology”. “We want the application of these new technologies to deserve the trust of our citizens. This is why we are promoting a responsible, human centric approach to artificial intelligence,” she added.

She said the planned restrictions on high risk AI would apply in fields such as healthcare, recruitment, transportation, policing and law enforcement — and potentially others.

“We will be particularly careful with sectors where essential human interests and rights are at stake,” she said. “Artificial intelligence must serve people. And therefore artificial intelligence must always comply with people’s rights. This is why a person must always be in control of critical decisions and so called ‘high risk AI’ — this is AI that potentially interferes with people’s rights — have to be tested and certified before they reach our single market.”

“Today’s message is that artificial intelligence is a huge opportunity in Europe, for Europe. We do have a lot but we have to unleash this potential that is out there. We want this innovation in Europe,” von der Leyen added. “We want to encourage our businesses, our researchers, the innovators, the entrepreneurs, to develop artificial intelligence and we want to encourage our citizens to feel confident to use it in Europe.”

Towards a rights-respecting common data space

The European Commission has been working on building what it dubs a “data economy” for several years at this point, plugging into its existing Digital Single Market strategy for boosting regional competitiveness.

Its aim is to remove barriers to the sharing of non-personal data within the single market. The Commission has previously worked on regulation to ban most data localization, as well as setting out measures to encourage the reuse of public sector data and open up access to scientific data.

Healthcare data sharing has also been in its sights, with policies to foster interoperability around electronic health records, and it’s been pushing for more private sector data sharing — both b2b and business-to-government.

“Every organisation should be able to store and process data anywhere in the European Union,” it wrote in 2018. It has also called the plan a “common European data space”. Aka “a seamless digital area with the scale that will enable the development of new products and services based on data”.

The focus on freeing up the flow of non-personal data is intended to complement the bloc’s long-standing rules on protecting personal data. The General Data Protection Regulation (GDPR), which came into force in 2018, has reinforced EU citizens’ rights around the processing of their personal information — updating and bolstering prior data protection rules.

The Commission views GDPR as a major success story by virtue of how it’s exported conversations about EU digital standards to a global audience.

But it’s fair to say that back home enforcement of the GDPR remains a work in progress, some 21 months in — with many major cross-border complaints attached to how tech and adtech giants are processing people’s data still sitting on the desk of the Irish Data Protection Commission, where multinationals tend to locate their EU HQ as a result of favorable corporate tax arrangements.

The Commission’s simultaneous push to encourage the development of AI arguably risks heaping further pressure on the GDPR — as both private and public sectors have been quick to see model-making value locked up in citizens’ data.

Already across Europe there are multiple examples of companies and/or state authorities working on building personal data-fuelled diagnostic AIs for healthcare; using machine learning for risk scoring of benefits claimants; and applying facial recognition as a security aid for law enforcement, to give three examples.

Controversy has fast followed such developments — including around issues such as proportionality and the question of consent to legally process people’s data, both under the GDPR and in light of EU fundamental privacy rights, as well as those set out in the European Convention on Human Rights.

Only this month a Dutch court ordered the state to cease use of a black-box algorithm for assessing the fraud risk of benefits claimants on human rights grounds — objecting to a lack of transparency around how the system functions and therefore also “insufficient” controllability.

The von der Leyen Commission, which took up its five-year mandate in December, is alive to rights concerns about how AI is being applied, even as it has made it clear it intends to supercharge the bloc’s ability to leverage data and machine learning technologies — eyeing economic gains.

Commission president, Ursula von der Leyen, visiting the AI Intelligence Center in Brussels (via the EC’s EbS Live AudioVisual Service)

The Commission president committed to publishing proposals to regulate AI within the first 100 days — saying she wants a European framework to steer application to ensure powerful learning technologies are used ethically and for the public good.

But a leaked draft of the plan to regulate AI last month suggested it would step back from imposing even a temporary ban on the use of facial recognition technology — leaning instead towards tweaks to existing rules and sector/app specific risk-assessments and requirements.

It’s clear there are competing views at the top of the Commission on how much policy intervention is needed on the tech sector.

Breton has previously voiced opposition to regulating AI — telling the EU parliament just before he was confirmed in post that he “won’t be the voice of regulating AI”.

Vestager, meanwhile, has been steady in her public backing for a framework to govern how AI is applied — talking at her hearing before the EU parliament of the importance of people’s trust and Europe having its own flavor of AI that must “serve humans” and have “a purpose”.

“I don’t think that we can be world leaders without ethical guidelines,” she said then. “I think we will lose it if we just say no let’s do as they do in the rest of the world — let’s pool all the data from everyone, no matter where it comes from, and let’s just invest all our money.”

At the same time Vestager signalled a willingness to be pragmatic in the scope of the rules and how they would be devised — emphasizing the need for speed and agreeing the Commission would need to be “very careful not to over-regulate”, suggesting she’d accept a core minimum to get rules up and running.

Today’s proposal steers away from more stringent AI rules — such as a ban on facial recognition in public places. On biometric AI technologies Vestager described some existing uses as “harmless” during today’s press conference — such as unlocking a phone or for automatic border gates — whereas she stressed the difference in terms of rights risks related to the use of remote biometric identification tech such as facial recognition.

“With this white paper the Commission is launching a debate on the specific circumstance — if any — which might justify the use of such technologies in public space,” she said, putting some emphasis on the word ‘any’.

The Commission is encouraging EU citizens to put questions about the digital strategy for Vestager to answer tomorrow, in a live Q&A at 17.45 CET on Facebook, Twitter and LinkedIn — using the hashtag #DigitalEU.

Platform liability

There is more to come from the Commission on the digital policy front — with a Digital Services Act in the works to update pan-EU liability rules around Internet platforms.

That proposal is slated to be presented later this year and both commissioners said today that details remain to be worked out. The possibility that the Commission will propose rules to more tightly regulate online content platforms already has content farming adtech giants like Facebook cranking up their spin cycles.

During today’s press conference Breton said he would always push for what he dubbed “shared governance” but he warned several times that if platforms don’t agree an acceptable way forward “we will have to regulate” — saying it’s not for European society to adapt to the platforms but for them to adapt to the EU.

“We will do this within the next eight months. It’s for sure. And everybody knows the rules,” he said. “Of course we’re entering here into dialogues with these platforms and like with any dialogue we don’t know exactly yet what will be the outcome. We may find at the end of the day a good coherent joint strategy which will fulfil our requirements… regarding the responsibilities of the platform. And by the way this is why personally when I meet with them I will always prefer a shared governance. But we have been extremely clear if it doesn’t work then we will have to regulate.”

Internal market commissioner, Thierry Breton

 



Facebook pushes EU for dilute and fuzzy Internet content rules

18:28 | 17 February

Facebook founder Mark Zuckerberg is in Europe this week — attending a security conference in Germany over the weekend where he spoke about the kind of regulation he’d like applied to his platform ahead of a slate of planned meetings with digital heavyweights at the European Commission.

“I do think that there should be regulation on harmful content,” said Zuckerberg during a Q&A session at the Munich Security Conference, per Reuters, making a pitch for bespoke regulation.

He went on to suggest “there’s a question about which framework you use”, telling delegates: “Right now there are two frameworks that I think people have for existing industries — there’s like newspapers and existing media, and then there’s the telco-type model, which is ‘the data just flows through you’, but you’re not going to hold a telco responsible if someone says something harmful on a phone line.”

“I actually think where we should be is somewhere in between,” he added, making his plea for Internet platforms to be a special case.

At the conference he also said Facebook now employs 35,000 people to review content on its platform and implement security measures — including suspending around 1 million fake accounts per day, a stat he professed himself “proud” of.

The Facebook chief is due to meet with key commissioners covering the digital sphere this week, including competition chief and digital EVP Margrethe Vestager, internal market commissioner Thierry Breton and Věra Jourová, who is leading policymaking around online disinformation.

The timing of his trip is clearly linked to digital policymaking in Brussels — with the Commission due to set out its thinking around the regulation of artificial intelligence this week. (A leaked draft last month suggested policymakers are eyeing risk-based rules to wrap around AI.)

More widely, the Commission is wrestling with how to respond to a range of problematic online content — from terrorism to disinformation and election interference — which also puts Facebook’s 2BN+ social media empire squarely in regulators’ sights.

Another policymaking plan — a forthcoming Digital Service Act (DSA) — is slated to upgrade liability rules around Internet platforms.

The detail of the DSA has yet to be publicly laid out but any move to rethink platform liabilities could present a disruptive risk for a content distributing giant such as Facebook.

Going into meetings with key commissioners Zuckerberg made his preference for being considered a ‘special’ case clear — saying he wants his platform to be regulated not like the media businesses which his empire has financially disrupted; nor like a dumb-pipe telco.

On the latter it’s clear — even to Facebook — that the days of Zuckerberg being able to trot out his erstwhile mantra that ‘we’re just a technology platform’, and wash his hands of tricky content stuff, are long gone.

Russia’s 2016 foray into digital campaigning in the US elections and sundry content horrors/scandals before and since have put paid to that — from nation-state backed fake news campaigns to livestreamed suicides and mass murder.

Facebook has been forced to increase its investment in content moderation. Meanwhile it announced a News section launch last year — saying it would hand-pick publishers’ content to show in a dedicated tab.

The ‘we’re just a platform’ line hasn’t been working for years. And EU policymakers are preparing to do something about that.

With regulation looming Facebook is now directing its lobbying energies onto trying to shape a policymaking debate — calling for what it dubs “the ‘right’ regulation”.

Here the Facebook chief looks to be applying a similar playbook to Google CEO Sundar Pichai — who recently tripped to Brussels to push for AI rules so dilute they’d act as a tech enabler.

In a blog post published today Facebook pulls its latest policy lever: Putting out a white paper which poses a series of questions intended to frame the debate at a key moment of public discussion around digital policymaking.

Top of this list is a push to foreground focus on free speech, with Facebook questioning “how can content regulation best achieve the goal of reducing harmful speech while preserving free expression?” — before suggesting more of the same: (Free, to its business) user-generated policing of its platform.

Another suggestion it sets out which aligns with existing Facebook moves to steer regulation in a direction it’s comfortable with is for an appeals channel to be created for users to appeal content removal or non-removal. Which of course entirely aligns with a content decision review body Facebook is in the process of setting up — but which is not in fact independent of Facebook.

Facebook is also lobbying in the white paper to be able to throw platform levers to meet a threshold of ‘acceptable vileness’ — i.e. it wants a proportion of law-violating content to be sanctioned by regulators — with the tech giant suggesting: “Companies could be incentivized to meet specific targets such as keeping the prevalence of violating content below some agreed threshold.”

It’s also pushing for the fuzziest and most dilute definition of “harmful content” possible. On this Facebook argues that existing (national) speech laws — such as, presumably, Germany’s Network Enforcement Act (aka the NetzDG law) which already covers online hate speech in that market — should not apply to Internet content platforms, as it claims moderating this type of content is “fundamentally different”.

“Governments should create rules to address this complexity — that recognize user preferences and the variation among internet services, can be enforced at scale, and allow for flexibility across language, trends and context,” it writes — lobbying for maximum possible leeway to be baked into the coming rules.

“The development of regulatory solutions should involve not just lawmakers, private companies and civil society, but also those who use online platforms,” Facebook’s VP of content policy, Monika Bickert, also writes in the blog.

“If designed well, new frameworks for regulating harmful content can contribute to the internet’s continued success by articulating clear ways for government, companies, and civil society to share responsibilities and work together. Designed poorly, these efforts risk unintended consequences that might make people less safe online, stifle expression and slow innovation,” she adds, ticking off more of the tech giant’s usual talking points at the point policymakers start discussing putting hard limits on its ad business.

 



Facebook Dating launch blocked in Europe after it fails to show privacy workings

11:59 | 13 February

Facebook has been left red-faced after being forced to call off the launch date of its dating service in Europe because it failed to give its lead EU data regulator enough advanced warning — including failing to demonstrate it had performed a legally required assessment of privacy risks.

Late yesterday Ireland’s Independent.ie newspaper reported that the Irish Data Protection Commission (DPC) had sent agents to Facebook’s Dublin office seeking documentation that Facebook had failed to provide — using inspection and document seizure powers set out in Section 130 of the country’s Data Protection Act.

In a statement on its website the DPC said Facebook first contacted it about the rollout of the dating feature in the EU on February 3.

“We were very concerned that this was the first that we’d heard from Facebook Ireland about this new feature, considering that it was their intention to roll it out tomorrow, 13 February,” the regulator writes. “Our concerns were further compounded by the fact that no information/documentation was provided to us on 3 February in relation to the Data Protection Impact Assessment [DPIA] or the decision-making processes that were undertaken by Facebook Ireland.”

Facebook announced its plan to get into the dating game all the way back in May 2018, trailing its Tinder-encroaching idea to bake a dating feature for non-friends into its social network at its F8 developer conference.

It went on to test launch the product in Colombia a few months later. And since then it’s been gradually adding more countries in South America and Asia. It also launched in the US last fall — soon after it was fined $5BN by the FTC for historical privacy lapses.

At the time of its US launch Facebook said dating would arrive in Europe by early 2020. It just didn’t think to keep its lead EU privacy regulator in the loop — despite the DPC having multiple (ongoing) investigations into other Facebook-owned products at this stage.

Which is either extremely careless or, well, an intentional fuck you to privacy oversight of its data-mining activities. (Among multiple probes being carried out under Europe’s General Data Protection Regulation, the DPC is looking into Facebook’s claimed legal basis for processing people’s data under the Facebook T&Cs, for example.)

The DPC’s statement confirms that its agents visited Facebook’s Dublin office on February 10 to carry out an inspection — in order to “expedite the procurement of the relevant documentation”.

Which is a nice way of the DPC saying Facebook spent a whole week still not sending it the required information.

“Facebook Ireland informed us last night that they have postponed the roll-out of this feature,” the DPC’s statement goes on.

Which is a nice way of saying Facebook fucked up and is being made to put a product rollout it’s been planning for at least half a year on ice.

The DPC’s head of communications, Graham Doyle, confirmed the enforcement action, telling us: “We’re currently reviewing all the documentation that we gathered as part of the inspection on Monday and we have posed further questions to Facebook and are awaiting the reply.”

“Contained in the documentation we gathered on Monday was a DPIA,” he added.

This raises the question of why Facebook didn’t send the DPIA to the DPC on February 3 — unless of course this document did not actually exist on that date…

We’ve reached out to Facebook for comment and to ask when it carried out the DPIA.

We’ve also asked the DPC to confirm its next steps. The regulator could ask Facebook to make changes to how the product functions in Europe if it’s not satisfied it complies with EU laws.

Under GDPR there’s a requirement for data controllers to bake privacy by design and default into products which are handling people’s information. And a dating product clearly is.

A DPIA — a process whereby planned processing of personal data is assessed to consider the impact on the rights and freedoms of individuals — is a requirement under the GDPR when, for example, individual profiling is taking place or there’s processing of sensitive data on a large scale.

Again, the launch of a dating product on a platform such as Facebook — which has hundreds of millions of regional users — would be a clear-cut case for such an assessment to be carried out ahead of any launch.

 



Facebook’s use of Onavo spyware faces questions in EU antitrust probe — report

18:24 | 6 February

Facebook’s use of the Onavo spyware VPN app it acquired in 2013 — and used to inform its 2014 purchase of the then rival WhatsApp messaging platform — is on the radar of Europe’s antitrust regulator, per a report in the Wall Street Journal.

The newspaper reports that the Commission has requested a large volume of internal documents as part of a preliminary investigation into Facebook’s data practices which was announced in December.

The WSJ cites people familiar with the matter who told it the regulator’s enquiry is focused on allegations Facebook sought to identify and crush potential rivals and thereby stifle competition by leveraging its access to user data.

Facebook announced it was shutting down Onavo a year ago — in the face of rising controversy over its use of the VPN tool as a data-gathering business intelligence dragnet that’s both hostile to user privacy and raises major questions about anti-competitive practices.

As recently as 2018 Facebook was still actively pushing Onavo at users of its main social networking app — marketing it under a ‘Protect’ banner intended to convince users that the tool would help them protect their information.

In fact the VPN allowed Facebook to monitor their activity across third party apps — enabling the tech giant to spot emerging trends across the larger mobile ecosystem. (So, as we’ve said before, ‘Protect Facebook’s business’ would have been a more accurate label for the tool.)

By the end of 2018 further details about how Facebook had used Onavo as a key intelligence lever in major acquisitions emerged when a UK parliamentary committee obtained a cache of internal documents related to a US court case brought by a third party developer which filed suit alleging unfair treatment on its app platform.

UK parliamentarians concluded that Facebook used Onavo to conduct global surveys of the usage of mobile apps by customers, apparently without their knowledge — using the intel to assess not just how many people had downloaded apps but how often they used them, which in turn helped the tech giant to decide which companies to acquire and which to treat as a threat.

The parliamentary committee went on to call for competition and data protection authorities to investigate Facebook’s business practices.

So it’s not surprising that Europe’s competition commission should also be digging into how Facebook used Onavo. The Commission has also been reviewing changes Facebook made to its developer APIs which affected what information it made available, per the WSJ’s sources.

Internal documents published by the UK parliament also highlighted developer access issues — such as Facebook’s practice of whitelisting certain favored developers’ access to user data, raising questions about user consent to the sharing of their data — as well as fairness vis-a-vis non-whitelisted developers.

According to the newspaper’s report the regulator has requested a wide array of internal Facebook documents as part of its preliminary investigation, including emails, chat logs and presentations. It says Facebook’s lawyers have pushed back — seeking to narrow the discovery process by arguing that the request for info is so broad it would produce millions of documents and could reveal Facebook employees’ personal data.

Some of the WSJ’s sources also told it the Commission has withdrawn the original order and intends to issue a narrower request.

We’ve reached out to Facebook and the competition regulator for comment.

Back in 2017 the European Commission fined Facebook $122M for providing incorrect or misleading information at the time of the WhatsApp acquisition. Facebook had given the regulator assurances that user accounts could not be linked across the two services — which cleared the way for it to be allowed to acquire WhatsApp — only for the company to u-turn in 2016 by saying it would be linking user data.

In addition to investigating Facebook’s data practices over potential antitrust concerns, the EU’s competition regulator is also looking into Google’s data practices — announcing a preliminary probe in December.

 



Qualcomm faces fresh competition scrutiny in Europe over RFFE chips for 5G

13:35 | 6 February

Qualcomm is facing fresh antitrust scrutiny from the European Commission, with the regulator raising questions about radio frequency front-end (RFFE) chips which can be used in 5G devices.

The chipmaker has been expanding into selling RFFE chips for 5G devices, per Reuters, encouraging buyers of its 5G modems to also buy its radio frequency front-end chips, rather than buying from other vendors and integrating their hardware with its 5G modem chips.

A European Commission spokeswoman confirmed the action, telling us: “We can confirm that the Commission has sent out questionnaires, as part of a preliminary investigation into the market for radio frequency front end.”

We’ve reached out to Qualcomm for comment.

The chipmaker disclosed the activity in its 10-Q investor filing — where it writes that the regulator wrote to request information in early December: “notifying us that it is investigating whether we engaged in anti-competitive behavior in the European Union (EU)/European Economic Area (EEA) by leveraging our market position in 5G baseband processors in the RFFE space”.

Qualcomm says it’s in the process of responding to the request for information.

It’s not yet clear whether the investigation will move to a formal footing in future. “Our preliminary investigation is ongoing. We cannot comment on or predict its timing or outcome,” the EC spokeswoman told us.

“It is difficult to predict the outcome of this matter or what remedies, if any, may be imposed by the EC,” Qualcomm also writes in the investor filing, adding: “We believe that our business practices do not violate the EU competition rules.”

It also warns investors that, if a violation is found, the EC has the power to impose a fine of up to 10% of its annual revenues, and could also issue injunctive relief that prohibits or restricts certain business practices.

The preliminary probe of Qualcomm’s 5G modem business is by no means the first antitrust action the chip giant has faced in Europe.

Last summer Europe’s competition commission fined Qualcomm close to $270M — following a long-running antitrust investigation into whether it used predatory pricing when selling UMTS baseband chips, with the regulator concluding Qualcomm had used predatory pricing to force a competitor out of the market.

Two years ago the Commission also fined the chipmaker a full $1.23BN in another antitrust case related to its dominance in LTE chipsets for smartphones, and specifically related to its relationship with iPhone maker, Apple.

In both cases Qualcomm is appealing the decisions.

It is also battling a major competition case on its home turf: In 2017 the U.S. Federal Trade Commission (FTC) filed charges against Qualcomm — accusing it of using anticompetitive tactics in an attempt to maintain a monopoly in its chip business.

Last year a US court sided with the FTC, agreeing the chip giant had violated antitrust law — and warning that such behavior would likely continue, given Qualcomm’s key role in making modems for next-gen 5G cellular tech. But, again, Qualcomm has appealed — and the legal process is continuing, with a decision on the appeal possible this year.

Its investor filing notes it was granted a motion to expedite the appeal against the FTC in July — with a hearing scheduled for February 13, 2020.

Most recently, in August, the chipmaker won a partial stay against an earlier court decision that had required it to grant patent licenses to rivals and end its practice of requiring its chip customers sign a patent license before purchasing chips.

“We will continue to vigorously defend ourself in the foregoing matters. However, litigation and investigations are inherently uncertain, and we face difficulties in evaluating or estimating likely outcomes or ranges of possible loss in antitrust and trade regulation investigations in particular,” Qualcomm adds.

 



UK Council websites are letting citizens be profiled for ads, study shows

15:32 | 4 February

On the same day that a data ethics advisor to the UK government has urged action to regulate online targeting, a study conducted by pro-privacy browser Brave has highlighted how Brits are being profiled by the behavioral ad industry when they visit their local Council’s website — perhaps seeking info on local services or guidance about benefits, including potentially sensitive information related to addiction services or disabilities.

Brave found that nearly all UK Councils permit at least one company to learn about the behavior of people visiting their sites, finding that a full 409 Councils exposed some visitor data to private companies.

Many large councils (serving 300,000+ people) were found exposing site visitors to what Brave describes as “extensive tracking and data collection by private companies” — with the worst offenders, London’s Enfield and Sheffield City Councils, exposing visitors to 25 data collectors apiece.

Brave argues the findings represent a conservative illustration of how much commercial tracking and profiling of visitors is going on on public sector websites — a floor, rather than a ceiling — given it was only studying landing pages of Council sites without any user interaction, and could only pick up known trackers (nor could the study look at how data is passed between tracking and data brokering companies).

Nor is it the first such study to warn that public sector websites are infested with for-profit adtech. A report last year by Cookiebot found users of public sector and government websites in the EU being tracked when they performed health-related searches — including queries related to HIV, mental health, pregnancy, alcoholism and cancer.

Brave’s study — which was carried out using the webxray tool — found that almost all (98%) of the Councils used Google systems, with the report noting that the tech giant owns all five of the top embedded elements loaded by Council websites, which it suggests gives the company a god-like view of how UK citizens are interacting with their local authorities online.

The analysis also found 198 of the Council websites use the real-time bidding (RTB) form of programmatic online advertising. This is notable because RTB is the subject of a number of data protection complaints across the European Union — including in the UK, where the Information Commissioner’s Office (ICO) itself has been warning the adtech industry for more than half a year that its current processes are in breach of data protection laws.

However the UK watchdog has preferred to bark softly in the industry’s general direction over its RTB problem, instead of taking any enforcement action — a response that’s been dubbed “disastrous” by privacy campaigners.

One of the smaller RTB players the report highlights — which calls itself the Council Advertising Network (CAN) — was found sharing people’s data from 34 Council websites with 22 companies, which could then be insecurely broadcasting it on to hundreds or more entities in the bid chain.

Slides from a CAN media pack refer to “budget conscious” direct marketing opportunities via the ability to target visitors to Council websites accessing pages about benefits, child care and free local activities; “disability” marketing opportunities via the ability to target visitors to Council websites accessing pages such as home care, blue badges and community and social services; and “key life stages” marketing opportunities via the ability to target visitors to Council websites accessing pages related to moving home, having a baby, getting married or losing a loved one.

Brave’s report — while a clearly stated promotion for its own anti-tracking browser (given it’s a commercial player too) — should be seen in the context of the ICO’s ongoing failure to take enforcement action against RTB abuses. It’s therefore an attempt to increase pressure on the regulator to act by further illuminating a complex industry which has used a lack of transparency to shield massive rights abuses and continues to benefit from a lack of enforcement of Europe’s General Data Protection Regulation.

A low level of public understanding of how the pieces of the adtech chain fit together, summing to a dysfunctional whole in which public services are turned against the citizens whose taxes fund them to track and target people for exploitative ads, likely also discourages sharper regulatory action.

But, as the saying goes, sunlight disinfects.

Asked what steps he would like the regulator to take, Brave’s chief policy officer, Dr Johnny Ryan, told TechCrunch: “I want the ICO to use its powers of enforcement to end the UK’s largest data breach. That data breach continues, and two years to the day after I first blew the whistle about RTB, Simon McDougall wrote a blog post accepting Google and the IAB’s empty gestures as acts of substance. It is time for the ICO to move this over to its enforcement team, and stop wasting time.”

We’ve reached out to the ICO for a response to the report’s findings.


No pan-EU Huawei ban as Commission endorses 5G risk mitigation plan

18:57 | 29 January

The European Commission has endorsed a risk mitigation approach to managing 5G rollouts across the bloc — meaning there will be no pan-EU ban on Huawei. Rather, it’s calling for Member States to coordinate on implementing a package of “mitigating measures” set out in a 5G toolbox it announced last October and has endorsed today.

“Through the toolbox, the Member States are committing to move forward in a joint manner based on an objective assessment of identified risks and proportionate mitigating measures,” it writes in a press release.

It adds that Member States have agreed to “strengthen security requirements, to assess the risk profiles of suppliers, to apply relevant restrictions for suppliers considered to be high risk including necessary exclusions for key assets considered as critical and sensitive (such as the core network functions), and to have strategies in place to ensure the diversification of vendors”.

The move is another blow for the Trump administration — after the UK government announced yesterday that it would not be banning so-called “high risk” providers from supplying 5G networks.

Instead the UK said it will place restrictions on such suppliers — barring their kit from the sensitive “core” of 5G networks, as well as from certain strategic sites (such as military locations), and capping such kit at 35% of the access network.

However the US has been amping up pressure on the international community to shut the door entirely on the Chinese tech giant, claiming there’s inherent strategic risk in allowing Huawei to be involved in supplying such critical infrastructure — with the Trump administration seeking to demolish trust in Chinese-made technology.

Next-gen 5G is expected to support a new breed of responsive applications — such as self-driving cars and personalized telemedicine — where risks, should there be any network failure, are likely to scale too.

But the Commission takes the view that such risks can be collectively managed.

The approach to 5G security continues to leave decisions on “specific security” measures as the responsibility of Member States. So there’s a possibility of individual countries making their own decisions to shut out Huawei. But in Europe the momentum appears to be against such moves.

“The collective work on the toolbox demonstrates a strong determination to jointly respond to the security challenges of 5G networks,” the EU writes. “This is essential for a successful and credible EU approach to 5G security and to ensure the continued openness of the internal market provided risk-based EU security requirements are respected.”

The next deadline for the 5G toolbox is April 2020, when the Commission expects Member States to have implemented the recommended measures. A joint report on their implementation will follow later this year.

Key actions being endorsed in the toolbox include:

  •     Strengthen security requirements for mobile network operators (e.g. strict access controls, rules on secure operation and monitoring, limitations on outsourcing of specific functions, etc.);
  •     Assess the risk profile of suppliers and, as a consequence, apply relevant restrictions for suppliers considered to be high risk – including necessary exclusions to effectively mitigate risks – for key assets defined as critical and sensitive in the EU-wide coordinated risk assessment (e.g. core network functions, network management and orchestration functions, and access network functions);
  •     Ensure that each operator has an appropriate multi-vendor strategy to avoid or limit any major dependency on a single supplier (or suppliers with a similar risk profile), ensure an adequate balance of suppliers at national level and avoid dependency on suppliers considered to be high risk; this also requires avoiding any situations of lock-in with a single supplier, including by promoting greater interoperability of equipment.

The Commission also recommends that Member States should contribute towards increasing diversification and sustainability in the 5G supply chain and co-ordinate on standardization around security objectives and on developing EU-wide certification schemes.

