Blog of the website «TechCrunch»




Main article: United Kingdom


Google’s new T&Cs include a Brexit ‘easter egg’ for UK users

19:12 | 20 February

Google has buried a major change in legal jurisdiction for its UK users as part of a wider update to its terms and conditions that’s been announced today and which it says is intended to make its conditions of use clearer for all users.

The update to its T&Cs is the first major revision since 2012, with Google saying it wanted to ensure the policy reflects its current products and applicable laws.

Google says it undertook a major review of the terms, similar to the revision of its privacy policy in 2018, when the EU’s General Data Protection Regulation started being applied. But while it claims the new T&Cs are easier for users to understand — rewritten using simpler language and a clearer structure — there are no other changes involved, such as to how it handles people’s data.

“We’ve updated our Terms of Service to make them easier for people around the world to read and understand — with clearer language, improved organization, and greater transparency about changes we make to our services and products. We’re not changing the way our products work, or how we collect or process data,” Google spokesperson Shannon Newberry said in a statement.

Users of Google products are being asked to review and accept the new terms before March 31, when they are due to take effect.

Reuters reported on the move late yesterday — citing sources familiar with the update who suggested the change of jurisdiction for UK users will weaken legal protections around their data.

However Google disputes there will be any change in privacy standards for UK users as a result of the shift. It told us there will be no change to how it processes UK users’ data; no change to their privacy settings; and no change to the way it treats their information as a result of the move.

We asked the company for further comment on this — including why it chose not to make a UK subsidiary the legal base for UK users — and a spokesperson told us it is making the change as part of its preparations for the UK to leave the European Union (aka Brexit).

“Like many companies, we have to prepare for Brexit,” Google said. “Nothing about our services or our approach to privacy will change, including how we collect or process data, and how we respond to law enforcement demands for users’ information. The protections of the UK GDPR will still apply to these users.”

Heather Burns, a tech policy specialist based in Glasgow, Scotland — who runs a website dedicated to tracking UK policy shifts around the Brexit process — also believes Google has essentially been forced to make the move because the UK government has recently signalled its intent to diverge from European Union standards in future, including on data protection.

“What has changed since January 31 has been [UK prime minister] Boris Johnson making a unilateral statement that the UK will go its own way on data protection, in direct contrast to everything the UK’s data protection regulator and government has said since the referendum,” she told us. “These bombastic, off-the-cuff statements play to his anti-EU base but businesses act on them. They have to.”

“Google’s transfer of UK accounts from the EU to the US is an indication that they do not believe the UK will either seek or receive a data protection adequacy agreement at the end of the transition period. They are choosing to deal with that headache now rather than later. We shouldn’t underestimate how strong a statement this is from the tech sector regarding its confidence in the Johnson premiership,” she added.

Asked whether she believes there will be a reduction in protections for UK users in future as a result of the shift, Burns suggested that will largely depend on Google.

So — in other words — Brexit means, er, trust Google to look after your data.

“The European data protection framework is based around a set of fundamental user rights and controls over the uses of personal data — the everyday data flows to and from all of our accounts. Those fundamental rights have been transposed into UK domestic law through the Data Protection Act 2018, and they will stay, for now. But with the Johnson premiership clearly ready to jettison the European-derived system of user rights for the US-style anything goes model,” Burns suggested.

“Google saying there is no change to the way we process users’ data, no change to their privacy settings and no change to the way we treat their information can be taken as an indication that they stand willing to continue providing UK users with European-style rights over their data — albeit from a different jurisdiction — regardless of any government intention to erode the domestic legal basis for those rights.”

Reuters’ report also raises concerns about the impact of the Cloud Act agreement between the UK and the US — which is due to come into effect this summer — suggesting it will pose a threat to the safety of UK Google users’ data once it’s moved out of an EU jurisdiction (in this case Ireland) to the US where the Act will apply.

The Cloud Act is intended to make it quicker and easier for law enforcement to obtain data stored in the cloud by companies based in the other legal jurisdiction.

So in future, it might be easier for UK authorities to obtain UK Google users’ data using this legal instrument applied to Google US.

It certainly seems clear that as the UK moves away from EU standards as a result of Brexit it is opening up the possibility of the country replacing long-standing data protection rights for citizens with a regime of supercharged mass surveillance. (The UK government has already legislated to give its intelligence agencies unprecedented powers to snoop on ordinary citizens’ digital comms — so it has a proven appetite for bulk data.)

Again, Google told us the shift of legal base for its UK users will make no difference to how it handles law enforcement requests — a process it talks about here — and further claimed this will be true even when the Cloud Act applies. Which is a weasely way of saying it will do exactly what the law requires.

Google confirmed that GDPR will continue to apply for UK users during the transition period between the old and new terms. After that it said UK data protection law will continue to apply — emphasizing that this is modelled after the GDPR. But of course in the post-Brexit future the UK government might choose to model it after something very different.

Asked to confirm whether it’s committing to maintain current data standards for UK users in perpetuity, the company told us it cannot speculate as to what privacy laws the UK will adopt in the future…

We also asked why it hasn’t chosen to elect a UK subsidiary as the legal base for UK users. To which it gave a nonsensical response — saying this is because the UK is no longer in the EU. Which begs the question: when did the UK suddenly become the 51st American state?

Returning to the wider T&Cs revision, Google said it’s making the changes in response to litigation in the European Union targeted at its terms.

This includes a case in Germany where consumer rights groups successfully sued the tech giant over its use of overly broad terms which the court agreed last year were largely illegal.

In another case a year ago in France a court ordered Google to pay €30,000 for unfair terms — and ordered it to obtain valid consent from users for tracking their location and online activity.

Since at least 2016 the European Commission has also been pressuring tech giants, including Google, to fix consumer rights issues buried in their T&Cs — including unfair terms. A variety of EU laws apply in this area.

In another change being bundled with the new T&Cs Google has added a description about how its business works to the About Google page — where it explains its business model and how it makes money.

Here, among the usual ‘dead cat’ claims about not ‘selling your information’ (tl;dr adtech giants rent attention; they don’t need to sell actual surveillance dossiers), Google writes that it doesn’t use “your emails, documents, photos or confidential information (such as race, religion or sexual orientation) to personalize the ads we show you”.

Though it could be using all that personal stuff to help it build new products it can serve ads alongside.

Even further towards the end of its business model screed it includes the claim that “if you don’t want to see personalized ads of any kind, you can deactivate them at any time”. So, yes, buried somewhere in Google’s labyrinthine settings an opt-out exists.

The change in how Google articulates its business model comes in response to growing political and regulatory scrutiny of adtech business models such as Google’s — including on data protection and antitrust grounds.

 



HungryPanda, a food delivery app for Chinese communities, raises $20 million

16:00 | 20 February

HungryPanda, a food delivery service for Chinese communities in cities around the world, announced today it has raised $20 million in funding. The round was led by investors 83North and Felix Capital and will be used on hiring, product development and global expansion, particularly in the United States. The startup, which did not disclose its current valuation, said its goal is to reach an annual run rate of $200 million by May.

Founded in the United Kingdom, where its service first launched in Nottingham, HungryPanda is now available in 31 cities in the U.K., Italy, France, Australia, New Zealand and the U.S.

Food delivery is a competitive space with tight margins, but HungryPanda is carving out its own niche, and differentiating from competitors like UberEats, Deliveroo and FoodPanda, by tailoring its platform for Chinese-language users, including business owners, and focusing on Chinese food and grocery deliveries. It also accepts payment services like Alipay and WeChat Pay, and uses WeChat for marketing.

Chinese communities around the world present a major market opportunity and HungryPanda says its operations in the United Kingdom and New York City are already profitable. According to a U.S. Census Bureau report published last year, the Chinese diaspora around the world ranges from about 10 million, when counting people born in China, to about 45 million under a wider definition that also includes second-generation immigrants and other groups.

In a press statement, HungryPanda CEO Eric Liu said “we are delighted to secure the backing of 83North and Felix Capital to bring our unique service to more people in more places. Their unrivaled industry investment experience, coupled with our ability to focus on the precise needs of our customers and launch in every new city within a two-week window, means we are in an ideal position to significantly scale the business to meet the huge level of demand created by Chinese cuisine.”

Both 83North and Felix Capital already have other food delivery startups in their portfolios. 83North is an investor in Just Eat and Helsinki-based Wolt, while Felix Capital has backed Deliveroo and Frichti, a French startup that makes all its meals in-house.

 



Lack of big tech GDPR decisions looms large in EU watchdog’s annual report

02:01 | 20 February

The lead European Union privacy regulator for most of big tech has put out its annual report which shows another major bump in complaints filed under the bloc’s updated data protection framework, underlining the ongoing appetite EU citizens have for applying their rights.

But what the report doesn’t show is any firm enforcement of EU data protection rules vis-a-vis big tech.

The report leans heavily on stats to illustrate the volume of work piling up on desks in Dublin. But it’s light on decisions on highly anticipated cross-border cases involving tech giants including Apple, Facebook, Google, LinkedIn and Twitter.

The General Data Protection Regulation (GDPR) began being applied across the EU in May 2018 — so is fast approaching its second birthday. Yet its file of enforcements where tech giants are concerned remains very light — even for companies with a global reputation for ripping away people’s privacy.

This despite Ireland having a large number of open cross-border investigations into the data practices of platform and adtech giants — some of which originated from complaints filed right at the moment GDPR came into force.

In the report the Irish Data Protection Commission (DPC) notes it opened a further six statutory inquiries in relation to “multinational technology companies’ compliance with the GDPR” — bringing the total number of major probes to 21. So its ‘big case’ file continues to stack up. (It’s added at least two more since then, with a probe of Tinder and another into Google’s location tracking opened just this month.)

The report is a lot less keen to trumpet the fact that decisions on cross-border cases to date remains a big fat zero.

Though, just last week, the DPC made a point of publicly raising “concerns” about Facebook’s approach to assessing the data protection impacts of a forthcoming product in light of GDPR requirements to do so — an intervention that resulted in a delay to the regional launch of Facebook’s Dating product.

This discrepancy (cross-border cases: 21; Irish DPC decisions: 0), plus rising anger from civil rights groups, privacy experts, consumer protection organizations and ordinary EU citizens over the paucity of flagship enforcement around key privacy complaints, is clearly piling pressure on the regulator. (Examples of big tech GDPR enforcement do exist elsewhere; France’s CNIL is one.)

In its defence the DPC does have a horrifying case load, as illustrated by other stats it’s keen to spotlight: it received a total of 7,215 complaints in 2019, a 75% increase on the total number (4,113) received in 2018. A full 6,904 of these were dealt with under the GDPR (while 311 complaints were filed under the Data Protection Acts 1988 and 2003).

There were also 6,069 data security breaches notified to it, per the report — representing a 71% increase on the total number (3,542) recorded in 2018.

Meanwhile a full 457 cross-border processing complaints were received in Dublin via the GDPR’s One-Stop-Shop mechanism. (This is the device the Commission came up with for the ‘lead regulator’ approach that’s baked into GDPR and which has landed Ireland in the regulatory hot seat. tl;dr other data protection agencies are passing Dublin A LOT of paperwork.)
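
For the record, the year-over-year percentages check out. Here is a minimal Python sketch of the arithmetic (the raw figures are the report’s; the rounding is ours):

    # Year-over-year growth in the Irish DPC's workload (figures as cited above).
    figures = {
        "complaints": (4_113, 7_215),            # 2018 -> 2019
        "breach notifications": (3_542, 6_069),  # 2018 -> 2019
    }

    for name, (prev, curr) in figures.items():
        growth = (curr - prev) / prev * 100
        print(f"{name}: {prev:,} -> {curr:,} (+{growth:.0f}%)")

    # complaints: 4,113 -> 7,215 (+75%)
    # breach notifications: 3,542 -> 6,069 (+71%)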

The DPC necessarily has to go back and forth on cross-border cases as it liaises with other interested regulators. All of which, you can imagine, creates a rich opportunity for lawyered-up tech giants to inject extra friction into the oversight process — by asking to review and query everything. [Insert the sound of a can being hoofed down the road]

Meanwhile the agency that’s supposed to regulate most of big tech (and plenty else) — which writes in the annual report that it increased its full-time staff from 110 to 140 last year — did not get all the funding it asked for from the Irish government.

So it also has the hard cap of its own budget to reckon with (just €15.3M in 2019) vs — for example — Google’s parent Alphabet’s $46.1BN in Q4 2019 revenue. So, er, do the math.
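
Doing that math, and assuming roughly $1.12 to the euro (the approximate 2019 average; the exact rate doesn’t change the order of magnitude), looks something like this:

    # Scale comparison: the DPC's annual budget vs a single Alphabet quarter.
    # Assumes ~1.12 USD per EUR (approximate 2019 average rate; our assumption).
    dpc_budget_usd = 15.3e6 * 1.12      # ~$17.1M for the whole year
    alphabet_q4_2019_revenue_usd = 46.1e9

    ratio = alphabet_q4_2019_revenue_usd / dpc_budget_usd
    print(f"One Alphabet quarter is ~{ratio:,.0f}x the regulator's annual budget")
    # -> roughly 2,700x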

Nonetheless the pressure is firmly now on Ireland for major GDPR enforcements to flow.

One year of major enforcement inaction could be filed under ‘bedding in’; but two years in without any major decisions would not be a good look. (It has previously said the first decisions will come early this year — so seems to be hoping to have something to show for GDPR’s 2nd birthday.)

Some of the high profile complaints crying out for regulatory action include behavioral ads served via real-time bidding programmatic advertising (which the UK data watchdog admitted half a year ago is rampantly unlawful); cookie consent banners (which remain a Swiss Cheese of non-compliance); and adtech platforms cynically forcing consent from users by requiring they agree to being microtargeted with ads to access the (‘free’) service. (Thing is, GDPR stipulates that consent as a legal basis must be freely given and can’t be bundled with other stuff, so…)

Full disclosure: TechCrunch’s parent company, Verizon Media (née Oath), is also under ongoing investigation by the DPC — which is looking at whether it meets GDPR’s transparency requirements under Articles 12-14 of the regulation.

Seeking to put a positive spin on 2019’s total lack of a big tech privacy reckoning, commissioner Helen Dixon writes in the report: “2020 is going to be an important year. We await the judgment of the CJEU in the SCCs data transfer case; the first draft decisions on big tech investigations will be brought by the DPC through the consultation process with other EU data protection authorities, and academics and the media will continue the outstanding work they are doing in shining a spotlight on poor personal data practices.”

In further remarks to the media Dixon said: “At the Data Protection Commission, we have been busy during 2019 issuing guidance to organisations, resolving individuals’ complaints, progressing larger-scale investigations, reviewing data breaches, exercising our corrective powers, cooperating with our EU and global counterparts and engaging in litigation to ensure a definitive approach to the application of the law in certain areas.

“Much more remains to be done in terms of both guiding on proportionate and correct application of this principles-based law and enforcing the law as appropriate. But a good start is half the battle and the DPC is pleased at the foundations that have been laid in 2019. We are already expanding our team of 140 to meet the demands of 2020 and beyond.”

One notable date this year falls when GDPR turns two: a Commission review of how the regulation is functioning is looming in May.

That’s one deadline that may help to concentrate minds on issuing decisions.

Per the DPC report, the largest category of complaints it received last year fell under ‘access request’ issues — whereby data controllers are failing to give up (all) people’s data when asked — which amounted to 29% of the total; followed by disclosure (19%); fair processing (16%); e-marketing complaints (8%); and right to erasure (5%).

On the security front, the vast bulk of notifications received by the DPC related to unauthorised disclosure of data (aka breaches) — with a total across the private and public sector of 5,188 vs just 108 for hacking (though the second largest category was actually lost or stolen paper, with 345).

There were also 161 notifications of phishing; 131 notifications of unauthorized access; 24 notifications of malware; and 17 of ransomware.

 



Noom competitor OurPath rebrands as Second Nature, raises $10M Series A

22:27 | 18 February

Back in 2018 OurPath emerged as a startup in the UK tackling the problem of diabetes. The company helped customers tackle the disease by combining advice from health experts with tracking technology via a smartphone app, helping people build healthy habits and lose weight, and raised a $3m round of funding.

Now rebranded as Second Nature, it’s raised a fresh $10m in Series A funding.

New investors include Uniqa Ventures, the venture capital fund of European insurance group Uniqa, and the founders of mySugr, the digital diabetes management platform which was acquired by health giant Roche.

The round also secured the backing of existing investors including Connect and Speedinvest, two European seed funds, and Bethnal Green Ventures, the early-stage impact investor, as well as angels including Taavet Hinrikus, founder of Transferwise.

This new injection takes the total investment in the company to $13m.

Competitors to the company include Weight Watchers and Noom, which provides a similar program and has raised $114.7M.

Second Nature claims to have a different, more intensive and personalized approach to creating habit change. The startup says data from 10,000 of its participants revealed an average weight loss of 5.9kg at the 12-week mark. Separate peer-reviewed scientific data published by the company showed that much of this weight loss is sustained at the 6-month and 12-month marks.

Under its former guise as OurPath, the startup was the first ‘lifestyle change program’ to be commissioned by the NHS for diabetes management.

Second Nature was founded in 2015 by Chris Edson and Mike Gibbs, former healthcare strategy consultants, who designed the program to provide people with personalized support in order to make lifestyle changes.

Participants receive a set of ‘smart’ scales and an activity tracker that links with the app, allowing them to track their weight loss progress and daily step count. They are placed in a peer support group of 15 people starting simultaneously. Each group is coached by a qualified dietitian or nutritionist, who provides participants with daily 1:1 advice, support and motivation via the app. Throughout the 12-week program, people have access to healthy recipes and daily articles covering topics like meal planning, how to sleep better, and overcoming emotional eating.

Gibbs said: “Our goal as Second Nature is to solve obesity. We need to rise above the confusing health misinformation to provide clarity about what’s really important: changing habits. Our new brand and investment will help us realize that.”

Philip Edmondson-Jones, Investment Manager at Beringea, who led the investment and joins the Board of Directors of Second Nature, said: “Healthcare systems are struggling to cope with spiraling rates of obesity and associated illnesses, which are projected to cost the global economy $1.2tn annually by 2025. Second Nature’s pioneering approach to lifestyle change empowers people to address these conditions.”

 



UK names its pick for social media ‘harms’ watchdog

15:05 | 12 February

The UK government has taken the next step in its grand policymaking challenge to tame the worst excesses of social media by regulating a broad range of online harms — naming the existing communications watchdog, Ofcom, as its preferred pick for enforcing rules around ‘harmful speech’ on platforms such as Facebook, Snapchat and TikTok in future.

Last April the previous Conservative-led government laid out populist but controversial proposals to legislate for a duty of care on Internet platforms — responding to growing public concern about the types of content kids are being exposed to online.

Its white paper covers a broad range of online content — from terrorism, violence and hate speech, to child exploitation, self-harm/suicide, cyber bullying, disinformation and age-inappropriate material — with the government setting out a plan to require platforms to take “reasonable” steps to protect their users from a range of harms.

However digital and civil rights campaigners warn the plan will have a huge impact on online speech and privacy, arguing it will put a legal requirement on platforms to closely monitor all users and apply speech-chilling filtering technologies on uploads in order to comply with very broadly defined concepts of harm — dubbing it state censorship.

The (now) Conservative majority government has nonetheless said it remains committed to the legislation.

Today it responded to some of the concerns being raised about the plan’s impact on freedom of expression, publishing a partial response to the public consultation on the Online Harms White Paper, although a draft bill remains pending, with no timeline confirmed.

“Safeguards for freedom of expression have been built in throughout the framework,” the government writes in an executive summary. “Rather than requiring the removal of specific pieces of legal content, regulation will focus on the wider systems and processes that platforms have in place to deal with online harms, while maintaining a proportionate and risk-based approach.”

It says it’s planning to set a different bar for content deemed illegal vs content that has “potential to cause harm”, with the heaviest content removal requirements being planned for terrorist and child sexual exploitation content. Whereas companies will not be forced to remove “specific pieces of legal content”, as the government puts it.

Ofcom, as the online harms regulator, will also not be investigating or adjudicating on “individual complaints”.

“The new regulatory framework will instead require companies, where relevant, to explicitly state what content and behaviour they deem to be acceptable on their sites and enforce this consistently and transparently. All companies in scope will need to ensure a higher level of protection for children, and take reasonable steps to protect them from inappropriate or harmful content,” it writes.

“Companies will be able to decide what type of legal content or behaviour is acceptable on their services, but must take reasonable steps to protect children from harm. They will need to set this out in clear and accessible terms and conditions and enforce these effectively, consistently and transparently. The proposed approach will improve transparency for users about which content is and is not acceptable on different platforms, and will enhance users’ ability to challenge removal of content where this occurs.”

Another requirement will be that companies have “effective and proportionate user redress mechanisms” — enabling users to report harmful content and challenge content takedown “where necessary”.

“This will give users clearer, more effective and more accessible avenues to question content takedown, which is an important safeguard for the right to freedom of expression,” the government suggests, adding that: “These processes will need to be transparent, in line with terms and conditions, and consistently applied.”

Ministers say they have not yet made a decision on what kind of liability senior management of covered businesses may face under the planned law, nor on additional business disruption measures — with the government saying it will set out its final policy position in the Spring.

“We recognise the importance of the regulator having a range of enforcement powers that it uses in a fair, proportionate and transparent way. It is equally essential that company executives are sufficiently incentivised to take online safety seriously and that the regulator can take action when they fail to do so,” it writes.

It’s also not clear how businesses will be assessed as being in (or out of) scope of the regulation.

“Just because a business has a social media page that does not bring it in scope of regulation,” the government response notes. “To be in scope, a business would have to operate its own website with the functionality to enable sharing of user-generated content, or user interactions. We will introduce this legislation proportionately, minimising the regulatory burden on small businesses. Most small businesses where there is a lower risk of harm occurring will not have to make disproportionately burdensome changes to their service to be compliant with the proposed regulation.”

The government is clear in the response that tackling online harms remains “a key legislative priority”.

“We have a comprehensive programme of work planned to ensure that we keep momentum until legislation is introduced as soon as parliamentary time allows,” it writes, describing today’s response report as “an iterative step as we consider how best to approach this complex and important issue” — and adding: “We will continue to engage closely with industry and civil society as we finalise the remaining policy.”

In the meantime the government says it’s working on a package of measures “to ensure progress now on online safety” — including interim codes of practice, with guidance for companies on tackling terrorist and child sexual abuse and exploitation content online; an annual government transparency report, which it says it will publish “in the next few months”; and a media literacy strategy, to support public awareness of online security and privacy.

It adds that it expects social media platforms to “take action now to tackle harmful content or activity on their services” — ahead of the more formal requirements coming in.

Facebook-owned Instagram has come in for high-level pressure from ministers over how it handles content promoting self-harm and suicide after the media picked up on a campaign by the family of a schoolgirl who killed herself after being exposed to Instagram content encouraging self-harm.

Instagram subsequently announced changes to its policies for handling content that encourages or depicts self harm/suicide — saying it would limit how it could be accessed. This later morphed into a ban on some of this content.

The government said today that companies offering online services that involve user generated content or user interactions are expected to make use of what it dubs “a proportionate range of tools” — including age assurance, and age verification technologies — to prevent kids from accessing age-inappropriate content and “protect them from other harms”.

This is also the piece of the planned legislation intended to pick up the baton of the Digital Economy Act’s porn block proposals — which the government dropped last year, saying it would bake equivalent measures into the forthcoming Online Harms legislation.

The Home Office has been consulting with social media companies on devising robust age verification technologies for many months.

In its own response statement today, Ofcom — which would be responsible for policy detail under the current proposals — said it will work with the government to ensure “any regulation provides effective protection for people online”, and, pending appointment, “consider what we can do before legislation is passed”.

The Online Harms plan is not the only Internet-related work ongoing in Whitehall, with ministers noting that: “Work on electoral integrity and related online transparency issues is being taken forward as part of the Defending Democracy programme together with the Cabinet Office.”

Back in 2018 a UK parliamentary committee called for a levy on social media platforms to fund digital literacy programs to combat online disinformation and defend democratic processes, during an enquiry into the use of social media for digital campaigning. However the UK government has been slower to act on this front.

The former chair of the DCMS committee, Damian Collins, called today for any future social media regulator to have “real powers in law” — including the ability to “investigate and apply sanctions to companies which fail to meet their obligations”.

In the DCMS committee’s final report parliamentarians called for Facebook’s business to be investigated, raising competition and privacy concerns.

 



Hyundai taps EV startup Canoo to develop electric vehicles

01:49 | 12 February

Hyundai Motor Group said it will jointly develop an electric vehicle platform with Los Angeles-based startup Canoo, the latest startup tapped by the automaker as part of an $87 billion push to invest in electrification and other future technologies.

The electric vehicle platform will be based on Canoo’s proprietary skateboard design, according to the agreement that was announced Tuesday. The platform will be used for future Hyundai and Kia electric vehicles as well as the automaker group’s so-called “purpose built vehicles.” The PBV, which Hyundai showcased last month at CES 2020, is a pod-like vehicle that the company says can be used for various functions in transit, such as a restaurant or clinic. The concept is similar to Toyota’s e-Palette vehicle, which can theoretically be customized to serve as a retail shop, restaurant or shuttle for people.

The partnership with Canoo is the latest example of Hyundai Motor ramping up efforts and investments into electrification, autonomous technology and other futuristic mobility trends, including flying cars. Earlier this month, Hyundai said it would invest $110 million in UK startup Arrival and jointly develop electric commercial vehicles.

Hyundai Motor Group has committed to invest $87 billion over the next five years. Of this total group commitment, Hyundai will invest $52 billion into “future technologies” and Kia will put $25 billion towards electrification and future mobility technologies. The company says its goal is for “eco-friendly vehicles” to comprise 25% of its total sales by 2025.

Canoo said it will provide engineering services to develop the electric platform.

[Image: Canoo engineering “skateboard”]

Canoo was founded as Evelozcity in 2017 by two former Faraday Future executives, Stefan Krause and Ulrich Kranz. The company rebranded as Canoo in spring 2019 and debuted its first vehicle last September. The first Canoo vehicles are expected to appear on the road by 2021 and will be offered only as a subscription. The company recently opened the waitlist for its first vehicle.

The heart of Canoo’s first vehicle, which looks more like a microbus than a traditional electric SUV, is the “skateboard” architecture that houses the batteries and the electric drivetrain in a chassis underneath the vehicle’s cabin. It’s this Canoo architecture that Hyundai Motor Group is interested in.

Hyundai Motor Group is counting on this underlying architecture to help the company reduce the cost and complexity of production and allow for it to respond quickly to changing market demands and customer preferences.

“We were highly impressed by the speed and efficiency in which Canoo developed their innovative EV architecture, making them the perfect engineering partner for us as we transition to become a frontrunner in the future mobility industry,” Albert Biermann, head of R&D at Hyundai Motor Group, said in a statement. “We will collaborate with Canoo engineers to develop a cost-effective Hyundai platform concept that is autonomous ready and suitable for mass adoption.”

 



UK public sector failing to be open about its use of AI, review finds

17:56 | 10 February

A report into the use of artificial intelligence by the UK’s public sector has warned that the government is failing to be open about automated decision-making technologies which have the potential to significantly impact citizens’ lives.

Ministers have been especially bullish on injecting new technologies into the delivery of taxpayer funded healthcare — with health minister Matt Hancock setting out a tech-fuelled vision of “preventative, predictive and personalised care” in 2018, calling for a root and branch digital transformation of the National Health Service (NHS) to support piping patient data to a new generation of “healthtech” apps and services.

He has also personally championed a chatbot startup, Babylon Health, that’s using AI for healthcare triage — and which is now selling a service in to the NHS.

Policing is another area where AI is being accelerated into UK public service delivery, with a number of police forces trialing facial recognition technology — and London’s Met Police switching over to a live deployment of the AI technology just last month.

However the rush by cash-strapped public services to tap AI ‘efficiencies’ risks glossing over a range of ethical concerns about the design and implementation of such automated systems: fears about embedding bias and discrimination into service delivery and scaling harmful outcomes; questions of consent around access to the data-sets being used to build AI models; and questions of human agency over automated outcomes, to name a few. All of these concerns demand transparency into AIs if there is to be accountability over automated outcomes.

The role of commercial companies in providing AI services to the public sector also raises additional ethical and legal questions.

Only last week, a court in the Netherlands highlighted the risks for governments of rushing to bake AI into legislation after it ruled an algorithmic risk-scoring system implemented by the Dutch government to assess the likelihood that social security claimants will commit benefits or tax fraud breached their human rights.

The court objected to a lack of transparency about how the system functions, as well as the associated lack of controllability — ordering an immediate halt to its use.

The UK’s Committee on Standards in Public Life, which reviews standards among holders of public office, has today sounded a similar warning — publishing a series of recommendations for public sector use of AI and warning that the technology challenges three key principles of service delivery: openness, accountability and objectivity.

“Under the principle of openness, a current lack of information about government use of AI risks undermining transparency,” it writes in an executive summary.

“Under the principle of accountability, there are three risks: AI may obscure the chain of organisational accountability; undermine the attribution of responsibility for key decisions made by public officials; and inhibit public officials from providing meaningful explanations for decisions reached by AI. Under the principle of objectivity, the prevalence of data bias risks embedding and amplifying discrimination in everyday public sector practice.”

“This review found that the government is failing on openness,” it goes on, asserting that: “Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.”

In 2018 the UN’s special rapporteur on extreme poverty and human rights raised concerns about the UK’s rush to apply digital technologies and data tools to socially re-engineer the delivery of public services at scale — warning then that the impact of a digital welfare state on vulnerable people would be “immense”, and calling for stronger laws and enforcement of a rights-based legal framework to ensure the use of technologies like AI for public service provision does not end up harming people.

Per the committee’s assessment it is “too early to judge if public sector bodies are successfully upholding accountability”.

Parliamentarians also suggest that “fears over ‘black box’ AI… may be overstated” — and rather dub “explainable AI” a “realistic goal for the public sector”.

On objectivity, they write that data bias is “an issue of serious concern, and further work is needed on measuring and mitigating the impact of bias”.

The use of AI in the UK public sector remains limited at this stage, according to the committee’s review, with healthcare and policing currently having the most developed AI programmes — where the tech is being used to identify eye disease and predict reoffending rates, for example.

“Most examples the Committee saw of AI in the public sector were still under development or at a proof-of-concept stage,” the committee writes, further noting that the Judiciary, the Department for Transport and the Home Office are “examining how AI can increase efficiency in service delivery”.

It also heard evidence that local government is working on incorporating AI systems in areas such as education, welfare and social care — noting the example of Hampshire County Council trialling the use of Amazon Echo smart speakers in the homes of adults receiving social care as a tool to bridge the gap between visits from professional carers. And points to a Guardian article which reported that one-third of UK councils use algorithmic systems to make welfare decisions.

But the committee suggests there are still “significant” obstacles to what they describe as “widespread and successful” adoption of AI systems by the UK public sector.

“Public policy experts frequently told this review that access to the right quantity of clean, good-quality data is limited, and that trial systems are not yet ready to be put into operation,” it writes. “It is our impression that many public bodies are still focusing on early-stage digitalisation of services, rather than more ambitious AI projects.”

The report also suggests that the lack of a clear standards framework means many organisations may not feel confident in deploying AI yet.

“While standards and regulation are often seen as barriers to innovation, the Committee believes that implementing clear ethical standards around AI may accelerate rather than delay adoption, by building trust in new technologies among public officials and service users,” it suggests.

Among 15 recommendations set out in the report is a call for a clear legal basis to be articulated for the use of AI by the public sector. “All public sector organisations should publish a statement on how their use of AI complies with relevant laws and regulations before they are deployed in public service delivery,” the committee writes.

Another recommendation is for clarity over which ethical principles and guidance apply to public sector use of AI — with the committee noting there are three sets of principles that could apply to the public sector, which is generating confusion.

“The public needs to understand the high level ethical principles that govern the use of AI in the public sector. The government should identify, endorse and promote these principles and outline the purpose, scope of application and respective standing of each of the three sets currently in use,” it recommends.

It also wants the Equality and Human Rights Commission to develop guidance on data bias and anti-discrimination to ensure public sector bodies’ use of AI complies with the UK Equality Act 2010.

The committee is not recommending a new regulator should be created to oversee AI — but does call on existing oversight bodies to act swiftly to keep up with the pace of change being driven by automation.

It also advocates for a regulatory assurance body to identify gaps in the regulatory landscape and provide advice to individual regulators and government on the issues associated with AI — supporting the government’s intention for the Centre for Data Ethics and Innovation (CDEI), which was announced in 2017, to perform this role. (A recent report by the CDEI recommended tighter controls on how platform giants can use ad targeting and content personalization.)

Another recommendation is around procurement, with the committee urging the government to use its purchasing power to set requirements that “ensure that private companies developing AI solutions for the public sector appropriately address public standards”.

“This should be achieved by ensuring provisions for ethical standards are considered early in the procurement process and explicitly written into tenders and contractual arrangements,” it suggests.

Responding to the report in a statement, shadow digital minister Chi Onwurah MP accused the government of “driving blind, with no control over who is in the AI driving seat”.

“This serious report sadly confirms what we know to be the case — that the Conservative Government is failing on openness and transparency when it comes to the use of AI in the public sector,” she said. “The Government is driving blind, with no control over who is in the AI driving seat. The Government urgently needs to get a grip before the potential for unintended consequences gets out of control.

“Last year, I argued in parliament that Government should not accept further AI algorithms in decision making processes without introducing further regulation. I will continue to push the Government to go further in sharing information on how AI is currently being used at all levels of Government. As this report shows, there is an urgent need for practical guidance and enforceable regulation that works. It’s time for action.”

 



Index Fund’s portfolio is driving long-overdue innovation in femcare

20:32 | 9 February

U.K. startup Daye is rethinking female intimate care from a woman’s perspective, starting with a tampon infused with cannabidiol that tackles period pain.

It’s also quietly demolishing the retrograde approach to product design that women are still subjected to in the mass market “femcare” space — an anti-philosophy that not only peddles stale and sexist stereotypes, but also can harm women’s bodies.

Those perfumed sanitary pads stinking out the supermarket shelf? Whoever came up with that idea has obviously never experienced thrush or bacterial vaginosis, nor spoken to a health professional who could have told them vaginal infections can be triggered by perfumed products.

The missing link: There are few people with a vagina in positions leading product strategy. And that’s the disruptive opportunity female-led femcare businesses like Daye are closing in on.

The Index Ventures-backed startup is shaking up a tired category by selling the flip-side: thoughtfully designed products for period care that first do no harm and second take aim at actual problems women have — starting with dysmenorrhea. The overarching strand is building community — to help women better understand what’s going on with their bodies and reinforce shifting product expectations in the process.

We chatted with Index principal Hannah Seal about the fund’s investment in Daye, and to get her thoughts more broadly on a new generation of female-focused startups that are driving long-overdue innovation.

The interview has been edited for length and clarity.

 



Facebook’s use of Onavo spyware faces questions in EU antitrust probe — report

18:24 | 6 February

Facebook’s use of the Onavo spyware VPN app it acquired in 2013 — and used to inform its 2014 purchase of the then rival WhatsApp messaging platform — is on the radar of Europe’s antitrust regulator, per a report in the Wall Street Journal.

The newspaper reports that the Commission has requested a large volume of internal documents as part of a preliminary investigation into Facebook’s data practices which was announced in December.

The WSJ cites people familiar with the matter who told it the regulator’s enquiry is focused on allegations Facebook sought to identify and crush potential rivals and thereby stifle competition by leveraging its access to user data.

Facebook announced it was shutting down Onavo a year ago — in the face of rising controversy about its use of the VPN tool as a data-gathering business intelligence dragnet that’s both hostile to user privacy and raises major questions about anti-competitive practices.

As recently as 2018 Facebook was still actively pushing Onavo at users of its main social networking app — marketing it under a ‘Protect’ banner intended to convince users that the tool would help them protect their information.

In fact the VPN allowed Facebook to monitor their activity across third party apps — enabling the tech giant to spot emerging trends across the larger mobile ecosystem. (So, as we’ve said before, ‘Protect Facebook’s business’ would have been a more accurate label for the tool.)

By the end of 2018 further details about how Facebook had used Onavo as a key intelligence lever in major acquisitions emerged when a UK parliamentary committee obtained a cache of internal documents related to a US court case brought by a third party developer which filed suit alleging unfair treatment on its app platform.

UK parliamentarians concluded that Facebook used Onavo to conduct global surveys of the usage of mobile apps by customers, apparently without their knowledge — using the intel to assess not just how many people had downloaded apps but how often they used them, which in turn helped the tech giant to decide which companies to acquire and which to treat as a threat.
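
To illustrate the kind of signal involved (a toy sketch in Python, not Facebook’s actual pipeline; the log format, app names and numbers are all invented), aggregating per-app session events separates raw reach from engagement:

    from collections import defaultdict

    # Invented per-session log entries: (user_id, app_name).
    sessions = [
        (1, "messaging_app"), (1, "messaging_app"), (2, "messaging_app"),
        (2, "messaging_app"), (3, "messaging_app"), (1, "photo_app"), (2, "photo_app"),
    ]

    reach = defaultdict(set)   # distinct users observed per app
    usage = defaultdict(int)   # total sessions observed per app

    for user, app in sessions:
        reach[app].add(user)
        usage[app] += 1

    for app in reach:
        print(app, "reach:", len(reach[app]),
              "sessions per user:", round(usage[app] / len(reach[app]), 2))

An app with modest reach but unusually high sessions per user is exactly the kind of fast-growing rival this sort of telemetry would surface.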

The parliamentary committee went on to call for competition and data protection authorities to investigate Facebook’s business practices.

So it’s not surprising that Europe’s competition commission should also be digging into how Facebook used Onavo. The Commission has also been reviewing changes Facebook made to its developer APIs which affected what information it made available, per the WSJ’s sources.

Internal documents published by the UK parliament also highlighted developer access issues — such as Facebook’s practice of whitelisting certain favored developers’ access to user data, raising questions about user consent to the sharing of their data — as well as fairness vis-a-vis non-whitelisted developers.

According to the newspaper’s report the regulator has requested a wide array of internal Facebook documents as part of its preliminary investigation, including emails, chat logs and presentations. It says Facebook’s lawyers have pushed back — seeking to narrow the discovery process by arguing that the request for info is so broad it would produce millions of documents and could reveal Facebook employees’ personal data.

Some of the WSJ’s sources also told it the Commission has withdrawn the original order and intends to issue a narrower request.

We’ve reached out to Facebook and the competition regulator for comment.

Back in 2017 the European Commission fined Facebook $122M for providing incorrect or misleading information at the time of the WhatsApp acquisition. Facebook had given the regulator assurances that user accounts could not be linked across the two services — which cleared the way for it to be allowed to acquire WhatsApp — only for the company to u-turn in 2016 by saying it would be linking user data.

In addition to investigating Facebook’s data practices over potential antitrust concerns, the EU’s competition regulator is also looking into Google’s data practices — announcing a preliminary probe in December.

 



Blackbox welfare fraud detection system breaches human rights, Dutch court rules

16:03 | 6 February

An algorithmic risk scoring system deployed by the Dutch state to try to predict the likelihood that social security claimants will commit benefits or tax fraud breaches human rights law, a court in the Netherlands has ruled.

The Dutch government’s System Risk Indication (SyRI) legislation uses a non-disclosed algorithmic risk model to profile citizens and has been exclusively targeted at neighborhoods with mostly low-income and minority residents. Human rights campaigners have dubbed it a ‘welfare surveillance state’.
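
For intuition only, an opaque risk model of this kind might look something like the toy sketch below. Every feature, weight and threshold here is invented, not SyRI’s actual model; the court’s point was precisely that the real equivalents were never disclosed:

    # Toy illustration of an opaque risk-scoring model. Every feature, weight
    # and threshold below is hypothetical; none of this is SyRI's actual model.
    HIDDEN_WEIGHTS = {"benefits_history": 0.40, "neighborhood": 0.35, "income": 0.25}
    HIDDEN_THRESHOLD = 0.6

    def risk_score(citizen):
        # Weighted sum over features normalized to the 0..1 range.
        return sum(w * citizen[k] for k, w in HIDDEN_WEIGHTS.items())

    def flag_for_investigation(citizen):
        # The person flagged can see neither the weights nor the threshold,
        # which is the transparency problem at the heart of the case.
        return risk_score(citizen) >= HIDDEN_THRESHOLD

    print(flag_for_investigation({"benefits_history": 0.9, "neighborhood": 0.8, "income": 0.3}))  # True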

A number of civil society organizations in the Netherlands and two citizens instigated the legal action against SyRI — seeking to block its use. The court has today ordered an immediate halt to the use of the system.

The ruling is being hailed as a landmark judgement by human rights campaigners, with the court basing its reasoning on European human rights law — specifically the right to a private life that’s set out by Article 8 of the European Convention on Human Rights (ECHR) — rather than a dedicated provision in the EU’s data protection framework (GDPR) which relates to automated processing.

GDPR’s Article 22 includes the right for individuals not to be subject to solely automated individual decision-making where such decisions produce legal or similarly significant effects. But there can be some fuzziness around whether this applies if there’s a human somewhere in the loop, such as to review a decision on objection.
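
To make that fuzziness concrete, here is a minimal, illustrative sketch of the Article 22(1) applicability test as commonly read; the predicate names are ours, not the regulation’s:

    from dataclasses import dataclass

    @dataclass
    class Decision:
        automated: bool            # was the decision produced by an algorithm?
        human_review: bool         # did a human meaningfully review it?
        legal_effects: bool        # does it produce legal effects for the person...
        significant_effects: bool  # ...or similarly significantly affect them?

    def article_22_applies(d):
        # Article 22(1) covers decisions based *solely* on automated processing;
        # whether a token human review defeats "solely" is the contested part.
        solely_automated = d.automated and not d.human_review
        return solely_automated and (d.legal_effects or d.significant_effects)

    # Automated fraud score triggers action with no human in the loop:
    print(article_22_applies(Decision(True, False, True, True)))   # True
    # Same score, but a caseworker signs off before any action:
    print(article_22_applies(Decision(True, True, True, True)))    # False (contested)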

In this instance the court has sidestepped such questions by finding SyRI directly interferes with rights set out in the ECHR.

Specifically, the court found that the SyRI legislation fails a balancing test in Article 8 of the ECHR, which requires that any social interest be weighed against the violation of individuals’ private life — with a fair and reasonable balance being required.

In its current form the automated risk assessment system failed this test, in the court’s view.

Legal experts suggest the decision sets some clear limits on how the public sector in the UK can make use of AI tools — with the court objecting in particular to the lack of transparency about how the algorithmic risk scoring system functioned.

In a press release about the judgement (translated to English using Google Translate) the court writes that the use of SyRI is “insufficiently clear and controllable”. While, per Human Rights Watch, the Dutch government refused during the hearing to disclose “meaningful information” about how SyRI uses personal data to draw inferences about possible fraud.

The court clearly took a dim view of the state trying to circumvent scrutiny of human rights risk by pointing to an algorithmic ‘blackbox’ and shrugging.

The UN special rapporteur on extreme poverty and human rights, Philip Alston — who intervened in the case by providing the court with a human rights analysis — welcomed the judgement, describing it as “a clear victory for all those who are justifiably concerned about the serious threats digital welfare systems pose for human rights”.

“This decision sets a strong legal precedent for other courts to follow. This is one of the first times a court anywhere has stopped the use of digital technologies and abundant digital information by welfare authorities on human rights grounds,” he added in a press statement.

Back in 2018 Alston warned that the UK government’s rush to apply digital technologies and data tools to socially re-engineer the delivery of public services at scale risked having an immense impact on the human rights of the most vulnerable.

So the decision by the Dutch court could have some near-term implications for UK policy in this area.

The judgement does not shut the door on the use by states of automated profiling systems entirely — but does make it clear that in Europe human rights law must be central to the design and implementation of rights-risking tools.

It also comes at a key time when EU policymakers are working on a framework to regulate artificial intelligence — with the Commission pledging to devise rules that ensure AI technologies are applied ethically and in a human-centric way.

It remains to be seen whether the Commission will push for pan-EU limits on specific public sector uses of AI — such as for social security assessments. A recent leaked draft of a white paper on AI regulation suggests it’s leaning towards risk-assessments and a patchwork of risk-based rules. 

 


