Main article: Law enforcement

Divesting from one facial recognition startup, Microsoft ends outside investments in the tech

00:28 | 30 March

Microsoft is pulling out of an investment in an Israeli facial recognition technology developer as part of a broader policy shift to halt any minority investments in facial recognition startups, the company announced late last week.

The decision to withdraw its investment from AnyVision, an Israeli company developing facial recognition software, came as a result of an investigation into reports that AnyVision’s technology was being used by the Israeli government to surveil residents in the West Bank.

The investigation, conducted by former U.S. Attorney General Eric Holder and his team at Covington & Burling, confirmed that AnyVision’s technology was used to monitor border crossings between the West Bank and Israel, but did not “power a mass surveillance program in the West Bank.”

Microsoft’s venture capital arm, M12 Ventures, backed AnyVision as part of the company’s $74 million financing round, which closed in June 2019. Investors who continue to back the company include DFJ Growth, OG Technology Partners, LightSpeed Venture Partners, Robert Bosch GmbH, Qualcomm Ventures, and Eldridge Industries.

Microsoft first staked out its position on how the company would approach facial recognition technologies in 2018, when its president, Brad Smith, issued a statement calling on government to come up with clear regulations around facial recognition in the U.S.

Smith’s calls for more regulation and oversight became more strident by the end of the year, when Microsoft issued a statement on its approach to facial recognition.

Smith wrote:

We and other tech companies need to start creating safeguards to address facial recognition technology. We believe this technology can serve our customers in important and broad ways, and increasingly we’re not just encouraged, but inspired by many of the facial recognition applications our customers are deploying. But more than with many other technologies, this technology needs to be developed and used carefully. After substantial discussion and review, we have decided to adopt six principles to manage these issues at Microsoft. We are sharing these principles now, with a commitment and plans to implement them by the end of the first quarter in 2019.

The principles that Microsoft laid out included privileging: fairness, transparency, accountability, non-discrimination, notice and consent, and lawful surveillance.

Critics took the company to task for its investment in AnyVision, saying that the decision to back a company working with the Israeli government on wide-scale surveillance ran counter to the principles it had set out for itself.

Now, after determining that controlling how facial recognition technologies are deployed by its minority investments is too difficult, the company is suspending its outside investments in the technology.

“For Microsoft, the audit process reinforced the challenges of being a minority investor in a company that sells sensitive technology, since such investments do not generally allow for the level of oversight or control that Microsoft exercises over the use of its own technology,” the company wrote in a statement on its M12 Ventures website. “Microsoft’s focus has shifted to commercial relationships that afford Microsoft greater oversight and control over the use of sensitive technologies.”


Google’s new T&Cs include a Brexit ‘easter egg’ for UK users

19:12 | 20 February

Google has buried a major change in legal jurisdiction for its UK users as part of a wider update to its terms and conditions that’s been announced today and which it says is intended to make its conditions of use clearer for all users.

It says the update to its T&Cs is the first major revision since 2012 — with Google saying it wanted to ensure the policy reflects its current products and applicable laws.

Google says it undertook a major review of the terms, similar to the revision of its privacy policy in 2018, when the EU’s General Data Protection Regulation started being applied. But while it claims the new T&Cs are easier for users to understand — rewritten using simpler language and a clearer structure — there are no other changes involved, such as to how it handles people’s data.

“We’ve updated our Terms of Service to make them easier for people around the world to read and understand — with clearer language, improved organization, and greater transparency about changes we make to our services and products. We’re not changing the way our products work, or how we collect or process data,” Google spokesperson Shannon Newberry said in a statement.

Users of Google products are being asked to review and accept the new terms before March 31 when they are due to take effect.

Reuters reported on the move late yesterday — citing sources familiar with the update who suggested the change of jurisdiction for UK users will weaken legal protections around their data.

However, Google disputes that there will be any change in privacy standards for UK users as a result of the shift. It told us there will be no change to how it processes UK users’ data, no change to their privacy settings, and no change to the way it treats their information as a result of the move.

We asked the company for further comment on this — including why it chose not to make a UK subsidiary the legal base for UK users — and a spokesperson told us it is making the change as part of its preparations for the UK to leave the European Union (aka Brexit).

“Like many companies, we have to prepare for Brexit,” Google said. “Nothing about our services or our approach to privacy will change, including how we collect or process data, and how we respond to law enforcement demands for users’ information. The protections of the UK GDPR will still apply to these users.”

Heather Burns, a tech policy specialist based in Glasgow, Scotland — who runs a website dedicated to tracking UK policy shifts around the Brexit process — also believes Google has essentially been forced to make the move because the UK government has recently signalled its intent to diverge from European Union standards in future, including on data protection.

“What has changed since January 31 has been [UK prime minister] Boris Johnson making a unilateral statement that the UK will go its own way on data protection, in direct contrast to everything the UK’s data protection regulator and government has said since the referendum,” she told us. “These bombastic, off-the-cuff statements play to his anti-EU base but businesses act on them. They have to.”

“Google’s transfer of UK accounts from the EU to the US is an indication that they do not believe the UK will either seek or receive a data protection adequacy agreement at the end of the transition period. They are choosing to deal with that headache now rather than later. We shouldn’t underestimate how strong a statement this is from the tech sector regarding its confidence in the Johnson premiership,” she added.

Asked whether she believes there will be a reduction in protections for UK users in future as a result of the shift, Burns suggested that will largely depend on Google.

So — in other words — Brexit means, er, trust Google to look after your data.

“The European data protection framework is based around a set of fundamental user rights and controls over the uses of personal data — the everyday data flows to and from all of our accounts. Those fundamental rights have been transposed into UK domestic law through the Data Protection Act 2018, and they will stay, for now. But with the Johnson premiership clearly ready to jettison the European-derived system of user rights for the US-style anything goes model,” Burns suggested.

“Google saying there is no change to the way we process users’ data, no change to their privacy settings and no change to the way we treat their information can be taken as an indication that they stand willing to continue providing UK users with European-style rights over their data — albeit from a different jurisdiction — regardless of any government intention to erode the domestic legal basis for those rights.”

Reuters’ report also raises concerns about the impact of the Cloud Act agreement between the UK and the US — which is due to come into effect this summer — suggesting it will pose a threat to the safety of UK Google users’ data once it’s moved out of an EU jurisdiction (in this case Ireland) to the US where the Act will apply.

The Cloud Act is intended to make it quicker and easier for law enforcement to obtain data stored in the cloud by companies based in the other legal jurisdiction.

So in future, it might be easier for UK authorities to obtain UK Google users’ data using this legal instrument applied to Google US.

It certainly seems clear that as the UK moves away from EU standards as a result of Brexit it is opening up the possibility of the country replacing long-standing data protection rights for citizens with a regime of supercharged mass surveillance. (The UK government has already legislated to give its intelligence agencies unprecedented powers to snoop on ordinary citizens’ digital comms — so it has a proven appetite for bulk data.)

Again, Google told us the shift of legal base for its UK users will make no difference to how it handles law enforcement requests — a process it talks about here — and further claimed this will be true even when the Cloud Act applies. Which is a weasely way of saying it will do exactly what the law requires.

Google confirmed that GDPR will continue to apply for UK users during the transition period between the old and new terms. After that it said UK data protection law will continue to apply — emphasizing that this is modelled after the GDPR. But of course in the post-Brexit future the UK government might choose to model it after something very different.

Asked to confirm whether it’s committing to maintain current data standards for UK users in perpetuity, the company told us it cannot speculate as to what privacy laws the UK will adopt in the future…

We also asked why it hasn’t chosen to elect a UK subsidiary as the legal base for UK users. To which it gave a nonsensical response — saying this is because the UK is no longer in the EU. Which begs the question: when did the UK suddenly become the 51st American state?

Returning to the wider T&Cs revision, Google said it’s making the changes in a response to litigation in the European Union targeted at its terms.

This includes a case in Germany where consumer rights groups successfully sued the tech giant over its use of overly broad terms which the court agreed last year were largely illegal.

In another case a year ago in France a court ordered Google to pay €30,000 for unfair terms — and ordered it to obtain valid consent from users for tracking their location and online activity.

Since at least 2016 the European Commission has also been pressuring tech giants, including Google, to fix consumer rights issues buried in their T&Cs — including unfair terms. A variety of EU laws apply in this area.

In another change being bundled with the new T&Cs Google has added a description about how its business works to the About Google page — where it explains its business model and how it makes money.

Here, among the usual ‘dead cat’ claims about not ‘selling your information’ (tl;dr adtech giants rent attention; they don’t need to sell actual surveillance dossiers), Google writes that it doesn’t use “your emails, documents, photos or confidential information (such as race, religion or sexual orientation) to personalize the ads we show you”.

Though it could be using all that personal stuff to help it build new products it can serve ads alongside.

Even further towards the end of its business model screed it includes the claim that “if you don’t want to see personalized ads of any kind, you can deactivate them at any time”. So, yes, buried somewhere in Google’s labyrinthine settings exists an opt-out.

The change in how Google articulates its business model comes in response to growing political and regulatory scrutiny of adtech business models such as Google’s — including on data protection and antitrust grounds.


Europe sets out plan to boost data reuse and regulate “high risk” AIs

17:20 | 19 February

European Union lawmakers have set out a first bundle of proposals for a new digital strategy for the bloc, one that’s intended to drive digitalization across all industries and sectors — and enable what Commission president Ursula von der Leyen has described as ‘A Europe fit for the Digital Age‘.

It could also be summed up as a ‘scramble for AI’, with the Commission keen to rub out barriers to the pooling of massive European data sets in order to power a new generation of data-driven services as a strategy to boost regional competitiveness vs China and the U.S.

Pushing for the EU to achieve technological sovereignty is a key plank of von der Leyen’s digital policy plan for the 27-Member State bloc.

Presenting the latest on her digital strategy to press in Brussels today, she said: “We want the digital transformation to power our economy and we want to find European solutions in the digital age.”

The top-line proposals are:

AI

  • Rules for “high risk” AI systems, such as those used in health, policing, or transport, requiring that such systems be “transparent, traceable and guarantee human oversight”
  • A requirement that unbiased data is used to train high-risk systems so that they “perform properly, and to ensure respect of fundamental rights, in particular non-discrimination”
  • Consumer protection rules so authorities can “test and certify” data used by algorithms in a similar way to existing rules that allow for checks to be made on products such as cosmetics, cars or toys
  • A “broad debate” on the circumstances in which the use of remote biometric identification could be justified
  • A voluntary labelling scheme for lower risk AI applications
  • The creation of an EU governance structure to ensure a framework for compliance with the rules and avoid fragmentation across the bloc

Data

  • A regulatory framework covering data governance, access and reuse between businesses, between businesses and government, and within administrations to create incentives for data sharing, which the Commission says will establish “practical, fair and clear rules on data access and use, which comply with European values and rights such as personal data protection, consumer protection and competition rules” 
  • A push to make public sector data more widely available by opening up “high-value datasets” to enable their reuse to foster innovation
  • Support for cloud infrastructure platforms and systems to support the data reuse goals. The Commission says it will contribute to investments in European High Impact projects on European data spaces and trustworthy and energy efficient cloud infrastructures
  • Sector-specific actions to build European data spaces that focus on specific areas such as industrial manufacturing, the green deal, mobility or health

The full data strategy proposal can be found here, while the Commission’s white paper on AI “excellence and trust” is here.

Next steps will see the Commission taking feedback on the plan — as it kicks off public consultation on both proposals.

A final draft is slated by the end of the year after which the various EU institutions will have their chance to chip into (or chip away at) the plan. So how much policy survives for the long haul remains to be seen.

Tech for good

At a press conference following von der Leyen’s statement Margrethe Vestager, the Commission EVP who heads up digital policy, and Thierry Breton, commissioner for the internal market, went into some of the detail around the Commission’s grand plan for “shaping Europe’s digital future”.

The digital policy package is meant to define how we shape Europe’s digital future “in a way that serves us all”, said Vestager.

The strategy aims to unlock access to “more data and good quality data” to fuel innovation and underpin better public services, she added.

The Commission’s digital EVP Margrethe Vestager discussing the AI whitepaper

Collectively, the package is about embracing the possibilities AI creates while managing the risks, she also said, adding that: “The point obviously is to create trust, rather than fear.”

She noted that the two policy pieces being unveiled by the Commission today, on AI and data, form part of a more wide-ranging digital and industrial strategy whole with additional proposals still to be set out.

“The picture that will come when we have assembled the puzzle should illustrate three objectives,” she said. “First, that technology should work for people and not the other way round; it is first and foremost about purpose. The development, the deployment, the uptake of technology must work in the same direction to make a real positive difference in our daily lives.

“Second, that we want a fair and competitive economy — a full Single Market where companies of all sizes can compete on equal terms, where the road from garage to scale up is as short as possible. But it also means an economy where the market power held by a few incumbents cannot be used to block competition. It also means an economy where consumers can take it for granted that their rights are being respected and profits are being taxed where they are made.”

Thirdly, she said the Commission plan would support “an open, democratic and sustainable society”.

“This means a society where citizens can control the data that they provide, where digital platforms are accountable for the contents that they feature… This is a fundamental thing — that while we use new digital tools, use AI as a tool, that we build a society based on our fundamental rights,” she added, trailing a forthcoming democracy action plan.

Digital technologies must also actively enable the green transition, said Vestager — pointing to the Commission’s pledge to achieve carbon neutrality by 2050. Digital, satellite, GPS and sensor data would be crucial to this goal, she suggested.

“More than ever a green transition and a digital transition go hand in hand.”

On the data package Breton said the Commission will launch a European industrial cloud platform alliance to drive interest in building the next-gen platforms he said would be needed to enable massive big data sharing across the EU — tapping into 5G and edge computing.

“We want to mobilize up to €2BN in order to create and mobilize this alliance,” he said. “In order to run this data you need to have specific platforms… Most of this data will be created locally and processed locally — thanks to 5G critical network deployments but also locally to edge devices. By 2030 we expect on the planet to have 500BN connected devices… and of course all the devices will exchange information extremely quickly. And here of course we need to have specific mini cloud or edge devices to store this data and to interact locally with the AI applications embedded on top of this.

“And believe me the requirement for these platforms are not at all the requirements that you see on the personal b2c platform… And then we need of course security and cyber security everywhere. You need of course latencies. You need to react in terms of millisecond — not tenths of a second. And that’s a totally different infrastructure.”

“We have everything in Europe to win this battle,” he added. “Because no one has more expertise of this battle and the foundation — industrial base — than us. And that’s why we say that maybe the winner of tomorrow will not be the winner of today or yesterday.”

Trustworthy artificial intelligence

On AI Vestager said the major point of the plan is “to build trust” — by using a dual push to create what she called “an ecosystem of excellence” and another focused on trust.

The first piece includes a push by the Commission to stimulate funding, including in R&D and support for research such as by bolstering skills. “We need a lot of people to be able to work with AI,” she noted, saying it would be essential for small and medium sized businesses to be “invited in”.

On trust the plan aims to use risk to determine how much regulation is involved, with the most stringent rules being placed on what it dubs “high risk” AI systems. “That could be when AI tackles fundamental values, it could be life or death situation, any situation that could cause material or immaterial harm or expose us to discrimination,” said Vestager.

To scope this the Commission approach will focus on sectors where such risks might apply — such as energy and recruitment.

If an AI product or service is identified as posing a risk then the proposal is for an enforcement mechanism to test that the product is safe before it is put into use. These proposed “conformity assessments” for high risk AI systems include a number of obligations Vestager said are based on suggestions by the EU’s High Level Expert Group on AI — which put out a slate of AI policy recommendations last year.

The four requirements attached to this bit of the proposals are: 1) that AI systems should be trained using data that “respects European values and rules” and that a record of such data is kept; 2) that an AI system should provide “clear information to users about its purpose, its capabilities but also its limits” and that it be clear to users when they are interacting with an AI rather than a human; 3) AI systems must be “technically robust and accurate in order to be trustworthy”; and 4) they should always ensure “an appropriate level of human involvement and oversight”.

Obviously there are big questions about how such broad-brush requirements will be measured and stood up (as well as actively enforced) in practice.

If an AI product or service is not identified as high risk Vestager noted there would still be regulatory requirements in play — such as the need for developers to comply with existing EU data protection rules.

In her press statement, Commission president von der Leyen highlighted a number of examples of how AI might power a range of benefits for society — from “better and earlier” diagnosis of diseases like cancer to helping with her parallel push for the bloc to be carbon neutral by 2050, such as by enabling precision farming and smart heating — emphasizing that such applications rely on access to big data.

“Artificial intelligence is about big data,” she said. “Data, data and again data. And we all know that the more data we have the smarter our algorithms. This is a very simple equation. Therefore it is so important to have access to data that are out there. This is why we want to give our businesses but also the researchers and the public services better access to data.”

“The majority of data we collect today are never ever used even once. And this is not at all sustainable,” she added. “In these data we collect that are out there lies an enormous amount of precious ideas, potential innovation, untapped potential we have to unleash — and therefore we follow the principle that in Europe we have to offer data spaces where you can not only store your data but also share with others. And therefore we want to create European data spaces where businesses, governments and researchers can not only store their data but also have access to other data they need for their innovation.”

She too impressed the need for AI regulation, including to guard against the risk of biased algorithms — saying “we want citizens to trust the new technology”. “We want the application of these new technologies to deserve the trust of our citizens. This is why we are promoting a responsible, human centric approach to artificial intelligence,” she added.

She said the planned restrictions on high risk AI would apply in fields such as healthcare, recruitment, transportation, policing and law enforcement — and potentially others.

“We will be particularly careful with sectors where essential human interests and rights are at stake,” she said. “Artificial intelligence must serve people. And therefore artificial intelligence must always comply with people’s rights. This is why a person must always be in control of critical decisions and so called ‘high risk AI’ — this is AI that potentially interferes with people’s rights — have to be tested and certified before they reach our single market.”

“Today’s message is that artificial intelligence is a huge opportunity in Europe, for Europe. We do have a lot but we have to unleash this potential that is out there. We want this innovation in Europe,” von der Leyen added. “We want to encourage our businesses, our researchers, the innovators, the entrepreneurs, to develop artificial intelligence and we want to encourage our citizens to feel confident to use it in Europe.”

Towards a rights-respecting common data space

The European Commission has been working on building what it dubs a “data economy” for several years at this point, plugging into its existing Digital Single Market strategy for boosting regional competitiveness.

Its aim is to remove barriers to the sharing of non-personal data within the single market. The Commission has previously worked on regulation to ban most data localization, as well as setting out measures to encourage the reuse of public sector data and open up access to scientific data.

Healthcare data sharing has also been in its sights, with policies to foster interoperability around electronic health records, and it’s been pushing for more private sector data sharing — both b2b and business-to-government.

“Every organisation should be able to store and process data anywhere in the European Union,” it wrote in 2018. It has also called the plan a “common European data space“. Aka “a seamless digital area with the scale that will enable the development of new products and services based on data”.

The focus on freeing up the flow of non-personal data is intended to complement the bloc’s long-standing rules on protecting personal data. The General Data Protection Regulation (GDPR), which came into force in 2018, has reinforced EU citizens’ rights around the processing of their personal information — updating and bolstering prior data protection rules.

The Commission views GDPR as a major success story by merit of how it’s exported conversations about EU digital standards to a global audience.

But it’s fair to say that back home enforcement of the GDPR remains a work in progress, some 21 months in — with many major cross-border complaints attached to how tech and adtech giants are processing people’s data still sitting on the desk of the Irish Data Protection Commission where multinationals tend to locate their EU HQ as a result of favorable corporate tax arrangements.

The Commission’s simultaneous push to encourage the development of AI arguably risks heaping further pressure on the GDPR — as both private and public sectors have been quick to see model-making value locked up in citizens’ data.

Already across Europe there are multiple examples of companies and/or state authorities working on building personal data-fuelled diagnostic AIs for healthcare; using machine learning for risk scoring of benefits claimants; and applying facial recognition as a security aid for law enforcement, to give three examples.

There has also been controversy fast following such developments. Including around issues such as proportionality and the question of consent to legally process people’s data — both under GDPR and in light of EU fundamental privacy rights as well as those set out in the European Convention on Human Rights.

Only this month a Dutch court ordered the state to cease use of a black-box algorithm for assessing the fraud risk of benefits claimants on human rights grounds — objecting to a lack of transparency around how the system functions and therefore also “insufficient” controllability.

The von der Leyen Commission, which took up its five-year mandate in December, is alive to rights concerns about how AI is being applied, even as it has made it clear it intends to supercharge the bloc’s ability to leverage data and machine learning technologies — eyeing economic gains.

Commission president, Ursula von der Leyen, visiting the AI Intelligence Center in Brussels (via the EC’s EbS Live AudioVisual Service)

The Commission president committed to publishing proposals to regulate AI within the first 100 days — saying she wants a European framework to steer application to ensure powerful learning technologies are used ethically and for the public good.

But a leaked draft of the plan to regulate AI last month suggested it would step back from imposing even a temporary ban on the use of facial recognition technology — leaning instead towards tweaks to existing rules and sector/app specific risk-assessments and requirements.

It’s clear there are competing views at the top of the Commission on how much policy intervention is needed on the tech sector.

Breton has previously voiced opposition to regulating AI — telling the EU parliament just before he was confirmed in post that he “won’t be the voice of regulating AI“.

While Vestager has been steady in her public backing for a framework to govern how AI is applied, talking at her hearing before the EU parliament of the importance of people’s trust and Europe having its own flavor of AI that must “serve humans” and have “a purpose”.

“I don’t think that we can be world leaders without ethical guidelines,” she said then. “I think we will lose it if we just say no let’s do as they do in the rest of the world — let’s pool all the data from everyone, no matter where it comes from, and let’s just invest all our money.”

At the same time Vestager signalled a willingness to be pragmatic in the scope of the rules and how they would be devised — emphasizing the need for speed and agreeing the Commission would need to be “very careful not to over-regulate”, suggesting she’d accept a core minimum to get rules up and running.

Today’s proposal steers away from more stringent AI rules — such as a ban on facial recognition in public places. On biometric AI technologies Vestager described some existing uses as “harmless” during today’s press conference — such as unlocking a phone or for automatic border gates — whereas she stressed the difference in terms of rights risks related to the use of remote biometric identification tech such as facial recognition.

“With this white paper the Commission is launching a debate on the specific circumstance — if any — which might justify the use of such technologies in public space,” she said, putting some emphasis on the word ‘any’.

The Commission is encouraging EU citizens to put questions about the digital strategy for Vestager to answer tomorrow, in a live Q&A at 17.45 CET on Facebook, Twitter and LinkedIn — using the hashtag #DigitalEU.

Platform liability

There is more to come from the Commission on the digital policy front — with a Digital Services Act in the works to update pan-EU liability rules around Internet platforms.

That proposal is slated to be presented later this year and both commissioners said today that details remain to be worked out. The possibility that the Commission will propose rules to more tightly regulate online content platforms already has content farming adtech giants like Facebook cranking up their spin cycles.

During today’s press conference Breton said he would always push for what he dubbed “shared governance” but he warned several times that if platforms don’t agree an acceptable way forward “we will have to regulate” — saying it’s not for European society to adapt to the platforms but for the platforms to adapt to the EU.

“We will do this within the next eight months. It’s for sure. And everybody knows the rules,” he said. “Of course we’re entering here into dialogues with these platforms and like with any dialogue we don’t know exactly yet what will be the outcome. We may find at the end of the day a good coherent joint strategy which will fulfil our requirements… regarding the responsibilities of the platform. And by the way this is why personally when I meet with them I will always prefer a shared governance. But we have been extremely clear if it doesn’t work then we will have to regulate.”

Internal market commissioner, Thierry Breton


Sixgill raises $15M to expand its dark web intelligence platform

19:00 | 11 February

Sixgill, an Israeli cyber threat intelligence company that specializes in monitoring the deep and dark web, today announced that it has raised a $15 million funding round led by Sonae IM, a fund based in Portugal, and London-based REV Venture Partners. Crowdfunding platform OurCrowd also participated in the round, as did previous investors Elron and Terra Venture Partners.

According to Crunchbase, this brings the company’s total funding to $21 million to date.

Sixgill, which was founded in 2014, plans to use the new funding to expand its efforts in North America, EMEA and APAC. In addition to expanding its geographic focus, Sixgill also plans to expand its product’s capabilities, including its Dynamic CVE Rating.

Its current customer base mostly includes large enterprises, law enforcement and other government agencies, as well as other security providers.

Given its focus, that client list doesn’t come as a surprise. The company uses its technology to automatically monitor dark web forums and marketplaces for potential threats and then find those that could affect its clients. Users can either access Sixgill’s service through its SaaS platform or install it on-premises. For enterprises and agencies that don’t have their own staff to run the service, Sixgill also offers access to its internal analysts.
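
Sixgill doesn’t disclose how its engine works, but the matching step described above (scanning collected posts for terms tied to a specific client) can be sketched roughly as follows; the post fields, client names and watchlist terms here are all hypothetical:

```python
# Rough sketch of watchlist matching over scraped forum posts.
# Not Sixgill's actual pipeline; all names and terms are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    forum: str
    author: str
    text: str

# Terms each (hypothetical) client wants flagged: domains, product
# names, markers of leaked credentials, and so on.
WATCHLISTS = {
    "acme-corp": ["acme.example", "acme vpn", "acme employee db"],
    "bank-co": ["bankco.example", "bankco dump"],
}

def match_post(post: Post) -> list[tuple[str, str]]:
    """Return (client, matched_term) pairs for one scraped post."""
    text = post.text.lower()
    return [(client, term)
            for client, terms in WATCHLISTS.items()
            for term in terms
            if term in text]

post = Post("darkmarket", "seller42", "Selling fresh acme employee db, 50k rows")
for client, term in match_post(post):
    print(f"alert {client}: '{term}' seen on {post.forum} by {post.author}")
```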

“Sixgill uses advanced automation and artificial intelligence technologies to provide accurate, contextual intelligence to customers. The solution integrates seamlessly into the platforms that security teams use to orchestrate, automate, and manage security events,” said Sharon Wagner, CEO of Sixgill. “The market has made it clear that Sixgill has built a powerful real-time engine for more effective handling of the rapidly expanding threat landscape; this investment will position us for significant growth and expansion in 2020.”


ACLU says it’ll fight DHS efforts to use app locations for deportations

22:09 | 7 February

The American Civil Liberties Union plans to fight newly revealed practices by the Department of Homeland Security which used commercially available cell phone location data to track suspected illegal immigrants.

“DHS should not be accessing our location information without a warrant, regardless whether they obtain it by paying or for free. The failure to get a warrant undermines Supreme Court precedent establishing that the government must demonstrate probable cause to a judge before getting some of our most sensitive information, especially our cell phone location history,” said Nathan Freed Wessler, a staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

Earlier today, The Wall Street Journal reported that the Department of Homeland Security, through its Immigration and Customs Enforcement (ICE) and Customs & Border Protection (CBP) agencies, has been buying geolocation data from commercial entities to investigate suspects of alleged immigration violations.

The location data, which aggregators acquire from cellphone apps including games, weather, shopping, and search services, is being used by Homeland Security to detect undocumented immigrants and others entering the U.S. unlawfully, the Journal reported.

According to privacy experts interviewed by the Journal, since the data is publicly available for purchase, the government practices don’t appear to violate the law — despite being what may be the largest dragnet ever conducted by the U.S. government using the aggregated data of its citizens.

It’s also an example of how the commercial surveillance apparatus put in place by private corporations in democratic societies can be legally accessed by state agencies to create the same kind of surveillance networks used in more authoritarian countries like China, India, and Russia.

“This is a classic situation where creeping commercial surveillance in the private sector is now bleeding directly over into government,” Alan Butler, general counsel of the Electronic Privacy Information Center, a think tank that pushes for stronger privacy laws, told the newspaper.

Behind the government’s use of commercial data is a company called Venntel. Based in Herndon, Va., the company acts as a government contractor and shares a number of its executive staff with Gravy Analytics, a mobile-advertising marketing analytics company. In all, ICE and the CBP have spent nearly $1.3 million on licenses for software that can provide location data for cell phones. Homeland Security says that the data from these commercially available records is used to generate leads about border crossing and detecting human traffickers.

The ACLU’s Wessler has won these kinds of cases in the past. He successfully argued before the Supreme Court in the case of Carpenter v. United States that geographic location data from cellphones was a protected class of information and couldn’t be obtained by law enforcement without a warrant.

CBP explicitly excludes cell tower data from the information it collects from Venntel, a spokesperson for the agency told the Journal — in part because it has to under the law. The agency also said that it only accesses limited location data and that the data is anonymized.

However, anonymized data can be linked back to specific individuals by correlating the anonymous cell phone information with people’s real-world movements, which can be deduced or tracked through other types of public records and publicly available social media.
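
A toy example of why that correlation works: a device’s most common nighttime location is usually its owner’s home, and property or voter records can put a name to an address. The pings below are invented for illustration:

```python
# Toy illustration: "anonymized" pings keyed only by an ad ID can be
# re-identified, because the modal nighttime cell is usually home.
from collections import Counter
from datetime import datetime

# Hypothetical pings for a single advertising ID: (timestamp, lat, lon).
pings = [
    (datetime(2020, 2, 1, 23, 40), 38.931, -77.254),
    (datetime(2020, 2, 2, 2, 15), 38.931, -77.254),
    (datetime(2020, 2, 2, 13, 5), 38.897, -77.036),  # daytime: workplace
    (datetime(2020, 2, 3, 0, 30), 38.931, -77.254),
]

def likely_home(pings, night_start=22, night_end=6, grid=0.01):
    """Snap nighttime pings to a coarse grid; return the modal cell."""
    cells = Counter()
    for ts, lat, lon in pings:
        if ts.hour >= night_start or ts.hour < night_end:
            cells[(round(lat / grid) * grid, round(lon / grid) * grid)] += 1
    return cells.most_common(1)[0][0] if cells else None

print(likely_home(pings))  # cross-reference this cell with public records
```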

ICE is already being sued by the ACLU for another potential privacy violation. Late last year the ACLU said that it was taking the government to court over the DHS service’s use of so-called “stingray” technology that spoofs a cell phone tower to determine someone’s location.

At the time, the ACLU cited a government oversight report in 2016 which indicated that both CBP and ICE collectively spent $13 million on buying dozens of stingrays, which the agencies used to “locate people for arrest and prosecution.”


Ancestry.com rejected a police warrant to access user DNA records on a technicality

18:55 | 4 February

DNA profiling company Ancestry.com has narrowly avoided complying with a search warrant in Pennsylvania after the warrant was rejected on technical grounds, a move that is likely to help law enforcement refine its efforts to obtain user information despite the company’s efforts to keep the data private.

Little is known about the demands of the search warrant, only that a court in Pennsylvania authorized law enforcement to “seek access” to Utah-based Ancestry.com’s database of more than 15 million DNA profiles.

TechCrunch was not able to identify the search warrant or its associated court case, which was first reported by BuzzFeed News on Monday. But it’s not uncommon for criminal cases still in the early stages of gathering evidence to remain under seal and hidden from public records until a suspect is apprehended.

DNA profiling companies like Ancestry.com are increasingly popular with customers hoping to build up family trees by discovering new family members and better understanding their cultural and ethnic backgrounds. But these companies are also ripe for the picking by law enforcement, which wants access to genetic databases to try to solve crimes from DNA left at crime scenes.

In an email to TechCrunch, the company confirmed that the warrant was “improperly served” on the company and was flatly rejected.

“We did not provide any access or customer data in response,” said spokesperson Gina Spatafore. “Ancestry has not received any follow-up from law enforcement on this matter.”

Ancestry.com, the largest of the DNA profiling companies, would not go into specifics, but the company’s transparency report said it rejected the warrant on “jurisdictional grounds.”

“I would guess it was just an out of state warrant that has no legal effect on Ancestry.com in its home state,” said Orin S. Kerr, law professor at the University of California, Berkeley, in an email to TechCrunch. “Warrants normally are only binding within the state in which they are issued, so a warrant for Ancestry.com issued in a different state has no legal effect,” he added.

But the rejection is likely to only stir tensions between police and the DNA profiling services over access to the data they store.

Ancestry.com’s Spatafore said it would “always advocate for our customers’ privacy and seek to narrow the scope of any compelled disclosure, or even eliminate it entirely.” It’s a sentiment shared by 23andMe, another DNA profiling company, which last year said that it had “successfully challenged” all of its seven legal demands, and as a result has “never turned over any customer data to law enforcement.”

The statements were in response to criticism that rival GEDmatch had controversially allowed law enforcement to search its database of more than a million records. The decision to allow in law enforcement was later revealed as crucial in helping to catch the notorious Golden State Killer, one of the most prolific murderers in U.S. history.

But GEDmatch was widely panned by privacy advocates for accepting a warrant to search its database without exhausting its legal options.

It’s not uncommon for companies to receive law enforcement demands for user data. Most tech giants, like Apple, Facebook, Google and Microsoft, publish transparency reports detailing the number of legal demands and orders they receive for user data each year or half-year.

Although both Ancestry.com and 23andMe provide transparency reports detailing the number of law enforcement demands for user data they receive, not all companies are as forthcoming. GEDmatch still does not publish its data demand figures, nor does MyHeritage, which said it “does not cooperate” with law enforcement. FamilyTreeDNA said it was “working” on publishing a transparency report.

But as police continue to demand data from DNA profiling and genealogy companies, they risk turning customers away — a lose-lose for both police and the companies.

Vera Eidelman, staff attorney with the ACLU’s Speech, Privacy, and Technology Project, said it would be “alarming” if law enforcement were able to get access to these databases containing millions of people’s information.

“Ancestry did the right thing in pushing back against the government request, and other companies should follow suit,” said Eidelman.


Amazon quietly publishes its latest transparency report

03:13 | 31 January

Just as Amazon was basking in the news of a massive earnings win, the tech giant quietly published — as it always does — its latest transparency report, revealing a slight dip in the number of government demands for user data.

It’s a rarely seen decline in the number of demands received by a tech company, coming in a year when almost every other tech giant — including Facebook, Google, Microsoft and Twitter — saw an increase in the number of demands they received. Apple was the only other company to report a decline.

Amazon said it received 1,841 subpoenas, 440 search warrants and 114 other court orders for user data — such as data from its Echo and Fire devices — during the six-month period ending December 2019.

That’s about a 4% decline on the first six months of the year.

The company’s cloud unit, Amazon Web Services, also saw a decline in the number of demands for data stored by customers, down by about 10%.

Amazon also said it received between 0 and 249 national security requests for both its consumer and cloud services (rules set out by the Justice Department only allow tech and telecom companies to report in ranges).

At the time of writing, Amazon has not yet updated its law enforcement requests page to list the latest report.

Amazon’s biannual transparency report is one of the lightest reads of any company’s figures across the tech industry. We previously reported on how Amazon’s transparency reports have purposefully become more vague over the years rather than clearer — bucking the industry trend. At just three pages, the report spends most of its length explaining how the company responds to each kind of legal demand rather than expanding on the numbers themselves.

The company’s Ring smart camera division, which has faced heavy criticism for its poor security practices and its cozy relationship with law enforcement, still hasn’t released its own data demand figures.


Ring’s new security ‘control center’ isn’t nearly enough

23:17 | 30 January

On the same day that a Mississippi family is suing Amazon-owned smart camera maker Ring for not doing enough to prevent hackers from spying on their kids, the company has rolled out its previously announced “control center,” which it hopes will make you forget about its verifiably “awful” security practices.

In a blog post out Thursday, Ring said the new “control center,” “empowers” customers to manage their security and privacy settings.

Ring users can check to see if they’ve enabled two-factor authentication, add and remove users from the account, see which third-party services can access their Ring cameras, and opt out of allowing police to access their video recordings without the user’s consent.

But dig deeper and Ring’s latest changes still do practically nothing to change some of its most basic, yet highly criticized security practices.

Questions were raised over these practices months ago after hackers were caught breaking into Ring cameras and remotely watching and speaking to small children. The hackers were using previously compromised email addresses and passwords — a technique known as credential stuffing — to break into the accounts. Some of those credentials, many of which were simple and easy to guess, were later published on the dark web.

Yet, Ring still has not done anything to mitigate this most basic security problem.

TechCrunch ran several passwords through Ring’s sign-up page and found we could enter any easy-to-guess password, like “12345678” and “password” — which have consistently ranked as some of the most common passwords for several years running.
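
That kind of hole is cheap to close at sign-up. Below is a minimal sketch of the missing check; the deny list is a tiny hypothetical sample, where a real service would test against a large corpus of known-breached passwords:

```python
# Minimal sketch of a sign-up password check. The deny list is an
# abbreviated sample; a real service would use a large breached-password
# corpus rather than five entries.
COMMON_PASSWORDS = {"123456", "12345678", "password", "qwerty", "111111"}

def password_acceptable(password: str, min_length: int = 10) -> bool:
    """Reject short passwords and anything on the common-password list."""
    return len(password) >= min_length and password.lower() not in COMMON_PASSWORDS

assert not password_acceptable("12345678")    # the case TechCrunch tested
assert not password_acceptable("password")
assert password_acceptable("correct horse battery staple")
```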

To combat the problem, Ring said at the time users should enable two-factor authentication, a security feature that adds an additional check to prevent account breaches like password spraying, where hackers use a list of common passwords in an effort to brute force their way into accounts.

But Ring still uses a weak form of two-factor, sending you a code by text message. Text messages are not secure and can be compromised through interception and SIM swapping attacks. Even NIST, the government’s technology standards body, has deprecated support for text message-based two-factor. Experts say although text-based two-factor is better than not using it at all, it’s far less secure than app-based two-factor, where codes are delivered over an encrypted connection to an app on your phone.
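
For comparison, most authenticator apps implement TOTP (RFC 6238): once a shared secret is provisioned, the phone and the server each derive the same short-lived code locally, so nothing crosses the phone network to be intercepted. A minimal sketch, using a demo secret:

```python
# Minimal TOTP (RFC 6238) sketch: phone and server share a secret and
# each derive the same 6-digit code from the current 30-second window,
# so no code is ever sent by SMS. Demo secret only.
import base64, hmac, struct, time

def totp(secret_b32, at=None, digits=6, step=30):
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # the server recomputes and compares
```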

Ring said it’ll make its two-factor authentication feature mandatory later this year, but has yet to say if it will ever support app-based two-factor authentication.

The smart camera maker has also faced criticism for its cozy relationship with law enforcement, which has lawmakers concerned and demanding answers.

Ring allows police access to users’ videos without a subpoena or a warrant. (Unlike its parent company Amazon, Ring still does not publish the number of times police demand access to customer videos, with or without a legal request.)

Ring now says its control center will allow users to decide if police can access their videos or not.

But don’t be fooled by Ring’s promise that police “cannot see your video recordings unless you explicitly choose to share them by responding to a specific video request.” Police can still get a search warrant or a court order to obtain your videos, which isn’t particularly difficult if police can show there’s reasonable grounds that it may contain evidence — such as video footage — of a crime.

There’s nothing stopping Ring, or any other smart home maker, from offering a zero-knowledge approach to customer data, where only the user has the encryption keys to access their data. Ring cutting itself (and everyone else) out of the loop would be the only meaningful thing it could do if it truly cares about its users’ security and privacy. The company would have to decide if the trade-off is worth it — true privacy for its users versus losing out on access to user data, which would effectively kill its ongoing cooperation with police departments.
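
As a rough illustration of that zero-knowledge approach (emphatically not Ring’s actual design; it uses the third-party Python cryptography package and a passphrase-derived key):

```python
# Sketch of client-side ("zero-knowledge") encryption: footage is
# encrypted on the device with a key derived from a secret only the
# user holds, so the cloud stores only ciphertext. Illustrative only;
# requires the third-party `cryptography` package.
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

salt = os.urandom(16)                      # stored alongside the ciphertext
key = derive_key(b"user-only passphrase", salt)
clip = b"...raw video bytes..."
ciphertext = Fernet(key).encrypt(clip)     # only this blob is uploaded

# Only someone holding the passphrase can decrypt.
assert Fernet(derive_key(b"user-only passphrase", salt)).decrypt(ciphertext) == clip
```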

Ring says that security and privacy has “always been our top priority.” But if it’s not willing to work on the basics, its words are little more than empty promises.


London’s Met Police switches on live facial recognition, flying in face of human rights concerns

16:07 | 24 January

While EU lawmakers are mulling a temporary ban on the use of facial recognition to safeguard individuals’ rights, as part of a risk-focused plan to regulate AI, London’s Met Police has today forged ahead with deploying the privacy-hostile technology — flipping the switch on operational use of live facial recognition in the UK capital.

The deployment comes after a multi-year period of trials by the Met and police in South Wales.

The Met says its use of the controversial technology will be targeted to “specific locations… where intelligence suggests we are most likely to locate serious offenders”.

“Each deployment will have a bespoke ‘watch list’, made up of images of wanted individuals, predominantly those wanted for serious and violent offences,” it adds.

It also claims cameras will be “clearly signposted”, adding that officers “deployed to the operation will hand out leaflets about the activity”.

“At a deployment, cameras will be focused on a small, targeted area to scan passers-by,” it writes. “The technology, which is a standalone system, is not linked to any other imaging system, such as CCTV, body worn video or ANPR.”

The biometric system is being provided to the Met by Japanese IT and electronics giant, NEC.
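
NEC hasn’t published the system’s internals, but live facial recognition watchlist checks generally reduce to comparing face embeddings from the camera feed against embeddings of the watch-list images. A rough sketch, with random stand-in vectors where a face model’s output would go:

```python
# Rough sketch of a watchlist check (not NEC's actual system). A face
# model turns each face into an embedding vector; a passer-by's vector
# is compared to watch-list vectors, and a high score prompts an officer.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
watchlist = {"wanted_001": rng.normal(size=128)}  # stand-in embeddings
passerby = rng.normal(size=128)                   # from the camera feed

THRESHOLD = 0.6  # the operating point trades false matches against misses
for identity, ref in watchlist.items():
    score = cosine(passerby, ref)
    if score > THRESHOLD:
        print(f"prompt officer: possible match {identity} ({score:.2f})")
```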

In a press statement, assistant commissioner Nick Ephgrave claimed the force is taking a balanced approach to using the controversial tech.

“We all want to live and work in a city which is safe: the public rightly expect us to use widely available technology to stop criminals. Equally I have to be sure that we have the right safeguards and transparency in place to ensure that we protect people’s privacy and human rights. I believe our careful and considered deployment of live facial recognition strikes that balance,” he said.

London has seen a rise in violent crime in recent years, with murder rates hitting a ten-year peak last year.

The surge in violent crime has been linked to cuts to policing services — although the new Conservative government has pledged to reverse cuts enacted by earlier Tory administrations.

The Met says its hope for the AI-powered tech is that it will help it tackle serious crime, including serious violence, gun and knife crime, child sexual exploitation, and “help protect the vulnerable”.

However, its phrasing is not a little ironic, given that facial recognition systems can be prone to racial bias, owing, for example, to factors such as bias in the datasets used to train AI algorithms.

So in fact there’s a risk that police use of facial recognition could further harm vulnerable groups who already face a disproportionate risk of inequality and discrimination.
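
That risk is also measurable: auditors compare false-match rates across demographic groups. A toy version of the computation, with invented scan records:

```python
# Toy bias check: compare false-match rates across groups. The records
# (group, flagged_by_system, actually_wanted) are invented.
from collections import defaultdict

records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

stats = defaultdict(lambda: [0, 0])  # group -> [false matches, innocent scans]
for group, flagged, wanted in records:
    if not wanted:
        stats[group][1] += 1
        stats[group][0] += flagged

for group, (fp, n) in stats.items():
    print(f"{group}: false-match rate {fp / n:.0%} across {n} innocent scans")
```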

Yet the Met’s PR doesn’t mention the risk of the AI tech automating bias.

Instead it takes pains to couch the technology as an “additional tool” to assist its officers.

“This is not a case of technology taking over from traditional policing; this is a system which simply gives police officers a ‘prompt’, suggesting “that person over there may be the person you’re looking for”, it is always the decision of an officer whether or not to engage with someone,” it adds.

While the use of a new tech tool may start with small deployments, as is being touted here, the history of software development underlines how the potential to scale is readily baked in.

A ‘targeted’ small-scale launch also prepares the ground for London’s police force to push for wider public acceptance of a highly controversial and rights-hostile technology via a gradual building out process. Aka surveillance creep.

On the flip side, the text of the draft of an EU proposal for regulating AI which leaked last week — floating the idea of a temporary ban on facial recognition in public places — noted that a ban would “safeguard the rights of individuals”. Although it’s not yet clear whether the Commission will favor such a blanket measure, even temporarily.

UK rights groups have reacted with alarm to the Met’s decision to ignore concerns about facial recognition.

Liberty accused the force of ignoring the conclusion of a report it commissioned during an earlier trial of the tech — which it says concluded the Met had failed to consider human rights impacts.

It also suggested such use would not meet key legal requirements.

“Human rights law requires that any interference with individuals’ rights be in accordance with the law, pursue a legitimate aim, and be ‘necessary in a democratic society’,” the report notes, suggesting the Met’s earlier trials of facial recognition tech “would be held unlawful if challenged before the courts”.

A petition set up by Liberty to demand a stop to facial recognition in public places has passed 21,000 signatures.

Discussing the legal framework around facial recognition and law enforcement last week, Dr Michael Veale, a lecturer in digital rights and regulation at UCL, told us that in his view the EU’s data protection framework, GDPR, forbids facial recognition by private companies “in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate”.

A UK man who challenged a Welsh police force’s trial of facial recognition has a pending appeal after losing the first round of a human rights challenge. Although in that case the challenge pertains to police use of the tech — rather than, as in the Met’s case, a private company (NEC) providing the service to the police.


EU lawmakers are eyeing risk-based rules for AI, per leaked white paper

22:12 | 17 January

The European Commission is considering a temporary ban on the use of facial recognition technology, according to a draft proposal for regulating artificial intelligence obtained by Euractiv.

Creating rules to ensure AI is ‘trustworthy and human’ has been an early flagship policy promise of the new Commission, led by president Ursula von der Leyen.

But the leaked proposal suggests the EU’s executive body is in fact leaning towards tweaks of existing rules and sector/app specific risk-assessments and requirements, rather than anything as firm as blanket sectoral requirements or bans.

The leaked Commission white paper floats the idea of a three-to-five-year period in which the use of facial recognition technology could be prohibited in public places — to give EU lawmakers time to devise ways to assess and manage risks around the use of the technology, such as to people’s privacy rights or the risk of discriminatory impacts from biased algorithms.

“This would safeguard the rights of individuals, in particular against any possible abuse of the technology,” the Commission writes, adding that: “It would be necessary to foresee some exceptions, notably for activities in the context of research and development and for security purposes.”

However, the text raises immediate concerns about imposing even a time-limited ban — which is described as “a far-reaching measure that might hamper the development and uptake of this technology” — and the Commission goes on to state that its preference “at this stage” is to rely on existing EU data protection rules, aka the General Data Protection Regulation (GDPR).

The white paper contains a number of options the Commission is still considering for regulating the use of artificial intelligence more generally.

These range from:

1. voluntary labelling;
2. sectoral requirements for the public sector (including on the use of facial recognition tech);
3. mandatory risk-based requirements for “high-risk” applications (such as within risky sectors like healthcare, transport, policing and the judiciary, as well as for applications which can “produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage”);
4. targeted amendments to existing EU product safety and liability legislation.

The proposal also emphasizes, as a fifth option, the need for an oversight governance regime to ensure rules are followed — though the Commission suggests leaving it open to Member States to choose whether to rely on existing governance bodies for this task or create new ones dedicated to regulating AI.

Per the draft white paper, the Commission says its preference for regulating AI is option 3 combined with options 4 & 5: Aka mandatory risk-based requirements on developers (of whatever sub-set of AI apps is deemed “high-risk”) that could result in some “mandatory criteria”, combined with relevant tweaks to existing product safety and liability legislation, and an overarching governance framework.

Hence it appears to be leaning towards a relatively light-touch approach, focused on “building on existing EU legislation” and creating app-specific rules for a sub-set of “high-risk” AI apps/uses — one which likely won’t stretch to even a temporary ban on facial recognition technology.

Much of the white paper is also taken up with discussion of strategies for “supporting the development and uptake of AI” and “facilitating access to data”.

“This risk-based approach would focus on areas where the public is at risk or an important legal interest is at stake,” the Commission writes. “This strictly targeted approach would not add any new additional administrative burden on applications that are deemed ‘low-risk’.”

EU commissioner Thierry Breton, who oversees the internal market portfolio, expressed resistance to creating rules for artificial intelligence last year — telling the EU parliament then that he “won’t be the voice of regulating AI”.

For “low-risk” AI apps, the white paper notes that provisions in the GDPR which give individuals the right to receive information about automated processing and profiling, and set a requirement to carry out a data protection impact assessment, would apply.

Albeit the regulation only defines limited rights and restrictions over automated processing — in instances where there’s a legal or similarly significant effect on the people involved. So it’s not clear how extensively it would in fact apply to “low-risk” apps.

If it’s the Commission’s intention to also rely on GDPR to regulate higher risk stuff — such as, for example, police forces’ use of facial recognition tech — instead of creating a more explicit sectoral framework to restrict their use of highly privacy-hostile AI technologies, it could exacerbate an already confusing legislative picture where law enforcement is concerned, according to Dr Michael Veale, a lecturer in digital rights and regulation at UCL.

“The situation is extremely unclear in the area of law enforcement, and particularly the use of public private partnerships in law enforcement. I would argue the GDPR in practice forbids facial recognition by private companies in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate. However, the merchants of doubt at facial recognition firms wish to sow heavy uncertainty into that area of law to legitimise their businesses,” he told TechCrunch.

“As a result, extra clarity would be extremely welcome,” Veale added. “The issue isn’t restricted to facial recognition however: Any type of biometric monitoring, such as voice or gait recognition, should be covered by any ban, because in practice they have the same effect on individuals.”

An advisory body set up to counsel the Commission on AI policy set out a number of recommendations in a report last year — including suggesting a ban on the use of AI for mass surveillance and social credit scoring of citizens.

But its recommendations were criticized by privacy and rights experts for falling short by failing to grasp wider societal power imbalances and structural inequality issues which AI risks exacerbating — including by supercharging existing rights-eroding business models.

In a paper last year Veale dubbed the advisory body’s work a “missed opportunity” — writing that the group “largely ignore infrastructure and power, which should be one of, if not the most, central concern around the regulation and governance of data, optimisation and ‘artificial intelligence’ in Europe going forwards”.

 

