Blog of the website «TechCrunch»


Main article: Europe


HPE Growth backs WeTransfer in €35M secondary funding round

11:00 | 19 August

WeTransfer, the Amsterdam-headquartered company that is best known for its file-sharing service, is disclosing a €35 million secondary funding round.

The investment is led by European growth equity firm, HPE Growth, with “significant” participation from existing investor Highland Europe. Being secondary funding — meaning that a number of shareholders have sold all or a portion of their holding — no new money has entered WeTransfer’s balance sheet.

We are also told that Jonne de Leeuw, of HPE, will replace WeTransfer co-founder Nalden on the company’s Supervisory Board. He joins Bas Beerens (founder of WeTransfer), Irena Goldenberg (Highland Europe) and Tony Zappalà (Highland Europe).

The exact financial terms of the secondary funding, including valuation, aren’t being disclosed. It is noteworthy, however, that WeTransfer says it has been profitable for six years.

“The valuation of the company is not public, but what I can tell you is that it’s definitely up significantly since the Series A in 2015,” WeTransfer CEO Gordon Willoughby tells me. “WeTransfer has become a trusted brand in its space with significant scale. Our transfer service has 50 million users a month across 195 countries, sharing over 1.5 billion files each month”.

In addition to the wildly popular WeTransfer file-sharing service, the company operates a number of other apps and services, some it built in-house and others it has acquired. They include content sharing app Collect (claiming 4 million monthly users), sketching tool Paper (which has had 25 million downloads) and collaborative presentation tool Paste (which claims 40,000 active teams).

“We want to help people work more effectively and deliver more impactful results, with tools that collectively remove friction from every stage of the creative process — from sparking ideas, capturing content, developing and aligning, to delivery,” says Willoughby.

“Over the past two years, we’ve been investing heavily in our product development and have grown tremendously following the acquisition of the apps Paper and Paste. This strengthened our product set. Our overarching mission is to become the go-to source for beautiful, intuitive tools that facilitate creativity, rather than distract from it. Of course, our transfer service is still a big piece of that — it’s a brilliantly simple tool that more than 50 million people a month love to use”.

Meanwhile, Willoughby describes WeTransfer’s dual revenue model as “pretty unique”. The company offers a premium subscription service called WeTransfer Plus, and sells advertising in the form of “beautiful” full-screen ads called wallpapers on Wetransfer.com.

“Each piece of creative is fully produced in-house by our creative studio with an uncompromising focus on design and user experience,” explains the WeTransfer CEO. “With full-screen advertising, we find that our users don’t feel they’re simply being sold to. This approach to advertising has been incredibly effective, and our ad performance has far outpaced IAB standards. Our advertising inventory is sought out by brands like Apple, Nike, Balenciaga, Adobe, Squarespace, and Saint Laurent”.

Alongside this, WeTransfer says it allocates up to 30% of its advertising inventory and “billions of impressions” to support and spotlight up-and-coming creatives, and causes, such as spearheading campaigns for social issues.

The company has 185 employees in total, with about 150 in Amsterdam and the rest across its U.S. offices in L.A. and New York.

 



Privacy researchers devise a noise-exploitation attack that defeats dynamic anonymity

16:00 | 17 August

Privacy researchers in Europe believe they have the first proof that a long-theorised vulnerability in systems designed to protect privacy by aggregating and adding noise to data to mask individual identities is no longer just a theory.

The research has implications for the immediate field of differential privacy and beyond — raising wide-ranging questions about how privacy is regulated if anonymization only works until a determined attacker figures out how to reverse the method that’s being used to dynamically fuzz the data.

Current EU law doesn’t recognise anonymous data as personal data, although it does treat pseudonymized data as personal data because of the risk of re-identification.

Yet a growing body of research suggests the risk of de-anonymization on high dimension data sets is persistent. Even — per this latest research — when a database system has been very carefully designed with privacy protection in mind.

It suggests the entire business of protecting privacy needs to get a whole lot more dynamic to respond to the risk of perpetually evolving attacks.

Academics from Imperial College London and Université Catholique de Louvain are behind the new research.

This week, at the 28th USENIX Security Symposium, they presented a paper detailing a new class of noise-exploitation attacks on a query-based database that uses aggregation and noise injection to dynamically mask personal data.

The product they were looking at is a database querying framework, called Diffix — jointly developed by a German startup called Aircloak and the Max Planck Institute for Software Systems.

On its website Aircloak bills the technology as “the first GDPR-grade anonymization” — aka Europe’s General Data Protection Regulation, which began being applied last year, raising the bar for privacy compliance by introducing a data protection regime that includes fines that can scale up to 4% of a data processor’s global annual turnover.

What Aircloak is essentially offering is to manage GDPR risk by providing anonymity as a commercial service — allowing queries to be run on a data-set that let analysts gain valuable insights without accessing the data itself. The promise being it’s privacy (and GDPR) ‘safe’ because it’s designed to mask individual identities by returning anonymized results.

The problem is personal data that’s re-identifiable isn’t anonymous data. And the researchers were able to craft attacks that undo Diffix’s dynamic anonymity.

“What we did here is we studied the system and we showed that actually there is a vulnerability that exists in their system that allows us to use their system and to send carefully created queries that allow us to extract — to exfiltrate — information from the data-set that the system is supposed to protect,” explains Imperial College’s Yves-Alexandre de Montjoye, one of five co-authors of the paper.

“Differential privacy really shows that every time you answer one of my questions you’re giving me information and at some point — to the extreme — if you keep answering every single one of my questions I will ask you so many questions that at some point I will have figured out every single thing that exists in the database because every time you give me a bit more information,” he says of the premise behind the attack. “Something didn’t feel right… It was a bit too good to be true. That’s where we started.”
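
The premise de Montjoye describes, that every noisy answer leaks a little information, can be seen in a toy sketch (illustrative only, not Diffix's actual mechanism): if fresh, independent noise is added to each answer, simply repeating the same query averages the noise away.

```python
import random

# Toy database: which users have the sensitive attribute? (1 = yes)
secret = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def noisy_count(db):
    """Answer 'how many users have the attribute?' with fresh Gaussian noise."""
    return sum(db) + random.gauss(0, 2)

# A single answer is fuzzy, but every answer leaks a little information:
# averaging many answers cancels independent noise almost entirely.
estimate = sum(noisy_count(secret) for _ in range(10_000)) / 10_000
print(round(estimate))  # recovers the true count, 6
```

Differential-privacy deployments counter this by budgeting how much total information queries may leak; Diffix instead ties the noise to the data being queried, and that data-dependence is the property the researchers turned against it.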

The researchers chose to focus on Diffix as they were responding to a bug bounty challenge put out by Aircloak.

“We start from one query and then we do a variation of it and by studying the differences between the queries we know that some of the noise will disappear, some of the noise will not disappear and by studying noise that does not disappear basically we figure out the sensitive information,” he explains.

“What a lot of people will do is try to cancel out the noise and recover the piece of information. What we’re doing with this attack is we’re taking it the other way round and we’re studying the noise… and by studying the noise we manage to infer the information that the noise was meant to protect.

“So instead of removing the noise we study statistically the noise sent back that we receive when we send carefully crafted queries — that’s how we attack the system.”

A vulnerability exists because the dynamically injected noise is data-dependent. Meaning it remains linked to the underlying information — and the researchers were able to show that carefully crafted queries can be devised to cross-reference responses that enable an attacker to reveal information the noise is intended to protect.

Or, to put it another way, a well designed attack can accurately infer personal data from fuzzy (‘anonymized’) responses.

This despite the system in question being “quite good,” as de Montjoye puts it of Diffix. “It’s well designed — they really put a lot of thought into this and what they do is they add quite a bit of noise to every answer that they send back to you to prevent attacks”.

“It’s what’s supposed to be protecting the system but it does leak information because the noise depends on the data that they’re trying to protect. And that’s really the property that we use to attack the system.”
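
A minimal caricature of that data-dependent weakness (a hypothetical scheme, not Diffix's real algorithm): if the noise added to an answer is derived deterministically from the set of users the query touches, then two query variants that match the same set return identical noise, and whether the noise "disappears" between variants reveals whether an excluded user was in the data.

```python
import hashlib
import struct

def sticky_noise(matching_users):
    """Pseudo-random noise in [-3, 3], seeded by the data the query touches.
    The data-dependence is the property being caricatured here."""
    digest = hashlib.sha256(repr(sorted(matching_users)).encode()).digest()
    return struct.unpack(">Q", digest[:8])[0] % 7 - 3

def noisy_answer(matching_users):
    return len(matching_users) + sticky_noise(matching_users)

city_x = {7, 99}  # suppose user 42 does NOT live in city X

# Query pair: "users in city X" vs "users in city X, excluding user 42".
a1 = noisy_answer(city_x)
a2 = noisy_answer(city_x - {42})

# Excluding user 42 changed nothing, so set and noise are identical -- and
# that very fact tells the attacker user 42 was not in the data. Had 42
# been present, both the count and the noise would have shifted.
print(a1 == a2)  # True
```

Studying which noise cancels between carefully paired queries, rather than trying to remove the noise, is the inversion the researchers describe above.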

The researchers were able to demonstrate the attack working with very high accuracy across four real-world data-sets. “We tried US census data, we tried credit card data, we tried location,” he says. “What we showed for different data-sets is that this attack works very well.

“What we showed is our attack identified 93% of the people in the data-set to be at risk. And I think more importantly the method actually is very high accuracy — between 93% and 97% accuracy on a binary variable. So if it’s a true or false we would guess correctly between 93-97% of the time.”

They were also able to optimise the attack method so they could exfiltrate information with a relatively low number of queries per user — as few as 32.

“Our goal was how low can we get that number so it would not look like abnormal behaviour,” he says. “We managed to decrease it in some cases up to 32 queries — which is very very little compared to what an analyst would do.”

After disclosing the attack to Aircloak, de Montjoye says it has developed a patch — and is describing the vulnerability as very low risk — but he points out it has yet to publish details of the patch so it’s not been possible to independently assess its effectiveness. 

“It’s a bit unfortunate,” he adds. “Basically they acknowledge the vulnerability [but] they don’t say it’s an issue. On the website they classify it as low risk. It’s a bit disappointing on that front. I think they felt attacked and that was really not our goal.”

For the researchers the key takeaway from the work is that a change of mindset is needed around privacy protection akin to the shift the security industry underwent in moving from sitting behind a firewall waiting to be attacked to adopting a pro-active, adversarial approach that’s intended to out-smart hackers.

“As a community to really move to something closer to adversarial privacy,” he tells TechCrunch. “We need to start adopting the red team, blue team penetration testing that has become standard in security.

“At this point it’s unlikely that we’ll ever find like a perfect system so I think what we need to do is how do we find ways to see those vulnerabilities, patch those systems and really try to test those systems that are being deployed — and how do we ensure that those systems are truly secure?”

“What we take from this is really — it’s on the one hand we need the security, what can we learn from security including open systems, verification mechanism, we need a lot of pen testing that happens in security — how do we bring some of that to privacy?”

“If your system releases aggregated data and you added some noise this is not sufficient to make it anonymous and attacks probably exist,” he adds.

“This is much better than what people are doing when you take the dataset and you try to add noise directly to the data. You can see why intuitively it’s already much better. But even these systems are still likely to have vulnerabilities. So the question is how do we find a balance, what is the role of the regulator, how do we move forward, and really how do we learn from the security community?

“We need more than some ad hoc solutions and only limiting queries. Again limiting queries would be what differential privacy would do — but then in a practical setting it’s quite difficult.

“The last bit — again in security — is defence in depth. It’s basically a layered approach — it’s like we know the system is not perfect so on top of this we will add other protection.”

The research raises questions about the role of data protection authorities too.

During Diffix’s development, Aircloak writes on its website that it worked with France’s DPA, the CNIL, and a private company that certifies data protection products and services — saying: “In both cases we were successful in so far as we received essentially the strongest endorsement that each organization offers.”

Although it also says that experience “convinced us that no certification organization or DPA is really in a position to assert with high confidence that Diffix, or for that matter any complex anonymization technology, is anonymous”, adding: “These organizations either don’t have the expertise, or they don’t have the time and resources to devote to the problem.”

The researchers’ noise exploitation attack demonstrates how even a level of regulatory “endorsement” can look problematic. Even well designed, complex privacy systems can contain vulnerabilities and cannot offer perfect protection. 

“It raises a tonne of questions,” says de Montjoye. “It is difficult. It fundamentally asks even the question of what is the role of the regulator here?

“When you look at security my feeling is it’s kind of the regulator is setting standards and then really the role of the company is to ensure that you meet those standards. That’s kind of what happens in data breaches.

“At some point it’s really a question of — when something [bad] happens — whether or not this was sufficient or not as a [privacy] defence, what is the industry standard? It is a very difficult one.”

“Anonymization is baked in the law — it is not personal data anymore so there are really a lot of implications,” he adds. “Again from security we learn a lot of things on transparency. Good security and good encryption relies on open protocol and mechanisms that everyone can go and look and try to attack so there’s really a lot at this moment we need to learn from security.

“There’s not going to be any perfect system. Vulnerabilities will keep being discovered so the question is how do we make sure things are still ok moving forward and really learning from security — how do we quickly patch them, how do we make sure there is a lot of research around the system to limit the risk, to make sure vulnerabilities are discovered by the good guys, these are patched and really [what is] the role of the regulator?

“Data can have bad applications and a lot of really good applications so I think to me it’s really about how to try to get as much of the good while limiting as much as possible the privacy risk.”

 



WebKit’s new anti-tracking policy puts privacy on a par with security

13:17 | 15 August

WebKit, the open source engine that underpins Internet browsers including Apple’s Safari browser, has announced a new tracking prevention policy that takes the strictest line yet on the background and cross-site tracking practices and technologies which are used to creep on Internet users as they go about their business online.

Trackers are technologies that are invisible to the average web user, yet which are designed to keep tabs on where they go and what they look at online — typically for ad targeting but web user profiling can have much broader implications than just creepy ads, potentially impacting the services people can access or the prices they see, and so on. Trackers can also be a conduit for hackers to inject actual malware, not just adtech.

This translates to stuff like tracking pixels; browser and device fingerprinting; and navigational tracking to name just a few of the myriad methods that have sprouted like weeds from an unregulated digital adtech industry that’s poured vast resource into ‘innovations’ intended to strip web users of their privacy.

WebKit’s new policy is essentially saying enough: Stop the creeping.

But — and here’s the shift — it’s also saying it’s going to treat attempts to circumvent its policy as akin to malicious hack attacks to be responded to in kind; i.e. with privacy patches and fresh technical measures to prevent tracking.

“WebKit will do its best to prevent all covert tracking, and all cross-site tracking (even when it’s not covert),” the organization writes (emphasis its), adding that these goals will apply to all types of tracking listed in the policy — as well as “tracking techniques currently unknown to us”.

“If we discover additional tracking techniques, we may expand this policy to include the new techniques and we may implement technical measures to prevent those techniques,” it adds.

“We will review WebKit patches in accordance with this policy. We will review new and existing web standards in light of this policy. And we will create new web technologies to re-enable specific non-harmful practices without reintroducing tracking capabilities.”

Spelling out its approach to circumvention, it states in no uncertain terms: “We treat circumvention of shipping anti-tracking measures with the same seriousness as exploitation of security vulnerabilities,” adding: “If a party attempts to circumvent our tracking prevention methods, we may add additional restrictions without prior notice. These restrictions may apply universally; to algorithmically classified targets; or to specific parties engaging in circumvention.”

It also says that if a certain tracking technique cannot be completely prevented without causing knock-on effects with webpage functions the user does intend to interact with, it will “limit the capability” of using the technique — giving examples such as “limiting the time window for tracking” and “reducing the available bits of entropy” (i.e. limiting how many unique data points are available to be used to identify a user or their behavior).
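
"Bits of entropy" here measures how identifying a set of observable signals is. A quick back-of-the-envelope sketch, with made-up signal counts rather than real browser telemetry, shows why coarsening each signal shrinks the fingerprinting surface:

```python
import math

# Hypothetical fingerprinting signals: how many distinct values a tracker
# could observe for each (illustrative numbers only).
full      = {"timezone": 24, "screen_resolution": 5000, "font_list": 1000}
coarsened = {"timezone": 24, "screen_resolution": 8,    "font_list": 1}

def total_bits(signals):
    """Entropy in bits; 2**bits is roughly how many users can be told apart."""
    return sum(math.log2(n) for n in signals.values())

for name, signals in (("full", full), ("coarsened", coarsened)):
    bits = total_bits(signals)
    print(f"{name}: {bits:.1f} bits -> distinguishes ~{2 ** bits:,.0f} users")
```

Because bits from independent signals add, trimming even one high-cardinality signal (a fixed font list contributes zero bits) removes most of the identifying power.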

If even that’s not possible “without undue user harm” it says it will “ask for the user’s informed consent to potential tracking”.

“We consider certain user actions, such as logging in to multiple first party websites or apps using the same account, to be implied consent to identifying the user as having the same identity in these multiple places. However, such logins should require a user action and be noticeable by the user, not be invisible or hidden,” it further warns.

WebKit credits Mozilla’s anti-tracking policy as inspiring and underpinning its new approach.

Commenting on the new policy, Dr Lukasz Olejnik, an independent cybersecurity advisor and research associate at the Center for Technology and Global Affairs at Oxford University, says it marks a milestone in the evolution of how user privacy is treated in the browser — setting it on the same footing as security.

“Treating privacy protection circumventions on par with security exploitation is a first of its kind and unprecedented move,” he tells TechCrunch. “This sends a clear warning to the potential abusers but also to the users… This is much more valuable than the still typical approach of ‘we treat the privacy of our users very seriously’ that some still think is enough when it comes to user expectation.”

Asked how he sees the policy impacting pervasive tracking, Olejnik does not predict an instant, overnight purge of unethical tracking of users of WebKit-based browsers but argues there will be less room for consent-less data-grabbers to manoeuvre.

“Some level of tracking, including with unethical technologies, will probably remain in use for the time being. But covert tracking is less and less tolerated,” he says. “It’s also interesting if any decisions will follow, such as for example the expansion of bug bounties to reported privacy vulnerabilities.”

“How this policy will be enforced in practice will be carefully observed,” he adds.

As you’d expect, he credits not just regulation but the role played by active privacy researchers in helping to draw attention and change attitudes towards privacy protection — and thus to drive change in the industry.

There’s certainly no doubt that privacy research is a vital ingredient for regulation to function in such a complex area — feeding complaints that trigger scrutiny that can in turn unlock enforcement and force a change of practice.

Although that’s also a process that takes time.

“The quality of cybersecurity and privacy technology policy, including its communication, still leaves much to be desired, at least at most organisations. This will not change fast,” says Olejnik. “Even if privacy is treated at the ‘C-level’, this then still tends to be purely about the risk of compliance. Fortunately, some important industry players with good understanding of both technology policy and the actual technology, even the emerging ones still under active research, treat it increasingly seriously.

“We owe it to the natural flow of the privacy research output, the talent inflows, and the slowly moving strategic shifts as well to a minor degree to the regulatory pressure and public heat. This process is naturally slow and we are far from the end.”

For its part, WebKit has been taking aim at trackers for several years now, adding features intended to reduce pervasive tracking — such as, back in 2017, Intelligent Tracking Prevention (ITP), which uses machine learning to squeeze cross-site tracking by putting more limits on cookies and other website data.

Apple immediately applied ITP to its desktop Safari browser — drawing predictable fast-fire from the Internet Advertising Bureau whose membership is comprised of every type of tracker deploying entity on the Internet.

But it’s the creepy trackers that are looking increasingly out of step with public opinion. And, indeed, with the direction of travel of the industry.

In Europe, regulation can be credited with actively steering developments too — following last year’s application of a major update to the region’s comprehensive privacy framework (which finally brought the threat of enforcement that actually bites). The General Data Protection Regulation (GDPR) has also increased transparency around security breaches and data practices. And, as always, sunlight disinfects.

Although there remains the issue of abuse of consent for EU regulators to tackle — with research suggesting many regional cookie consent pop-ups currently offer users no meaningful privacy choices despite GDPR requiring consent to be specific, informed and freely given.

It also remains to be seen how the adtech industry will respond to background tracking being squeezed at the browser level. Continued aggressive lobbying to try to water down privacy protections seems inevitable — if ultimately futile. And perhaps, in Europe in the short term, there will be attempts by the adtech industry to funnel more tracking via cookie ‘consent’ notices that nudge or force users to accept.

As the security space underlines, humans are always the weakest link. So privacy-hostile social engineering might be the easiest way for adtech interests to keep overriding user agency and grabbing their data anyway. Stopping that will likely need regulators to step in and intervene.

Another question thrown up by WebKit’s new policy is which way Chromium, the browser engine that underpins Google’s hugely popular Chrome browser, will jump.

Of course Google is an ad giant, and parent company Alphabet still makes the vast majority of its revenue from digital advertising — so it maintains a massive interest in tracking Internet users to serve targeted ads.

Yet Chromium developers did pay early attention to the problem of unethical tracking. Here, for example, are two discussing potential future work to combat tracking techniques designed to override privacy settings in a blog post from nearly five years ago.

There have also been much more recent signs of Google paying attention to Chrome users’ privacy, such as changes to how it handles cookies, which it announced earlier this year.

But with WebKit now raising the stakes — by treating privacy as seriously as security — that puts pressure on Google to respond in kind. Or risk being seen as using its grip on browser marketshare to foot-drag on baked-in privacy standards, rather than proactively working to prevent Internet users from being creeped on.

 



Flatfair, the ‘deposit-free’ renting platform, raises $11M led by Index Ventures

01:00 | 15 August

Flatfair, a London-based fintech that lets landlords offer “deposit-free” renting to tenants, has raised $11 million in funding.

The Series A round is led by Index Ventures, with participation from Revolt Ventures, Adevinta, Greg Marsh (founder of Onefinestay), Jeremy Helbsy (former Savills CEO), and Taavet Hinrikus (TransferWise co-founder).

With the new capital, Flatfair says it plans to hire a “significant” number of product engineers, data scientists and business development specialists.

The startup will also invest in building out new features as it looks to expand its platform with “a focus on making renting fairer and more transparent for landlords and tenants”.

“With the average deposit of £1,110 across England and Wales being just shy of the national living wage, tenants struggle to pay expensive deposits when moving into their new home, often paying double deposits in between tenancies,” Flatfair co-founder and CEO Franz Doerr tells me when asked to frame the problem the startup has set out to solve.

“This creates cash flow issues for tenants, in particular for those with families. Some tenants end up financing the deposit through friends and family or even accrue expensive credit card debt. The latter can have a negative impact on the tenant’s credit rating, further restricting important access to credit for things that really matter in a tenant’s life”.

To remedy this, Flatfair’s “insurance-backed” payment technology provides tenants with the option to pay a per-tenancy membership fee instead of a full deposit. They do this by authorising their bank account via debit card with Flatfair, and when it is time to move out, any end-of-tenancy charges are handled via the Flatfair portal, including dispute resolution.

So, for example, rather than having to find a rental deposit equivalent to a month’s rent, which in theory you would get back once you move out sans any end-of-tenancy charges, with Flatfair you pay about a quarter of that as a non-refundable fee.
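
In numbers, using the £1,110 average deposit quoted earlier (the exact fee schedule isn't given, so the 25% rate is an assumption based on "about a quarter"):

```python
deposit = 1110.00     # average deposit across England and Wales, per the article
fee = deposit * 0.25  # "about a quarter" -- assumed rate, non-refundable

print(f"upfront cash: £{fee:.2f} instead of £{deposit:.2f}")
print(f"cash freed up at move-in: £{deposit - fee:.2f}")
# Trade-off: a traditional deposit is refundable (minus any end-of-tenancy
# charges); the membership fee is not.
```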

Of course, there are pros and cons to both, but for tenants that are cashflow restricted, the startup’s model at least offers an alternative financing option.

In addition, tenants registered with Flatfair are given a “trust score” that can go up over time, helping them move tenancy more easily in the future. The company is also trialing the use of Open Banking to help with credit checks by analysing transaction history to verify that you have paid rent regularly and on time in the past.
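
The Open Banking check described, verifying regular on-time rent payments from transaction history, might look something like this sketch (the transaction format and the `paid_rent_regularly` helper are hypothetical, not Flatfair's actual API):

```python
from datetime import date

# Hypothetical transactions pulled via an Open Banking API:
# (booking_date, description, amount); negative amounts are outgoing.
transactions = [(date(2019, m, 1), "ACME LETTINGS RENT", -950.00)
                for m in range(1, 7)]

def paid_rent_regularly(txns, rent, months_required=6):
    """True if an outgoing payment matching the rent appears in enough months."""
    rent_months = {(t.year, t.month) for t, desc, amount in txns
                   if "RENT" in desc and abs(amount + rent) < 0.01}
    return len(rent_months) >= months_required

print(paid_rent_regularly(transactions, 950.00))  # True for this toy history
```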

Landlords are said to like the model. Current Flatfair clients include major property owners and agents, such as Greystar, Places for People, and CBRE. “Before Flatfair, deposits were the only form of tenancy security that landlords trusted,” claims Doerr.

In the event of a dispute over end-of-tenancy charges, both landlords and tenants are asked to upload evidence to the Flatfair platform and to try to settle the disagreement amicably. If they can’t, the case is referred by Flatfair to an independent adjudicator via mydeposits, a U.K. government-backed deposit scheme the company is partnering with.

“In such a case, all the evidence is submitted to mydeposits and they come back with a decision within 24 hours,” explains Doerr. “[If] the adjudicator says that the tenant owes money, we invoice the tenant, who then has 5 days to pay. If the tenant doesn’t pay, we charge her bank account… What’s key here is having the evidence. People are generally happy to pay if the costs are fair and where clear evidence exists, there’s less to argue about”.

More broadly, Doerr says there’s significant scope for digitisation across the buy-to-let sector and that the big vision for Flatfair is to create an “operating system” for rentals.

“The fundamental idea is to streamline processes around the tenancy to create revenue and savings opportunities for landlords and agents, whilst promoting a better customer experience, affordability, and fairness for tenants,” he says.

“We’re working on a host of exciting new features that we’ll be able to talk about in the coming months, but we see opportunities to automate more functions within the lifecycle of a tenancy and think there are a number of big efficiency savings to be made by unifying old systems, dumping old paper systems and streamlining cumbersome admin. Offering a scoring system for tenants is a great way of encouraging better behaviour and given housing represents most people’s biggest expense, it’s only right renters should be able to build up their credit score and benefit from paying on time”.

 



Oyo to invest $335M in vacation rental business in Europe push

18:08 | 14 August

Indian budget hotel booking startup Oyo will invest 300 million euros ($335 million) in its vacation home rental business, it said Wednesday, as it looks to expand its footprint in Europe and potentially compete more closely with global giant Airbnb, one of its investors.

The Gurgaon-based startup, which acquired Amsterdam-based holiday rental company @Leisure in May this year and rebranded it Oyo Vacation Homes, said it aims to turn Oyo Vacation Homes into the destination for “top-notch” holiday experiences and the partner of choice for homeowners.

The new capital will go into “strengthening the relationship with homeowners and enabling them with the resources required to deliver chic hospitality experiences,” and building the largest vacation rental management service business in Europe, managed under OYO Home, Belvilla, Danland, and Dancenter brands.

“We are focusing on enhancing our customer proposition to not just families but new age millennials and young executives, traveling for business or leisure, including consumers from newer geographies who travel to Europe from across the world including US, Asia, China and the Middle East,” Tobias Wann, CEO of OYO Vacation Homes, said in a prepared statement.

OYO, which claims to be the world’s third-biggest and fastest-growing hotel chain, operates more than 23,000 hotels and 125,000 vacation homes, with over 1 million rooms in more than 80 countries, the company said. It claimed that Oyo Vacation Homes has “doubled its growth” since the acquisition of Leisure in May.

@Leisure sees traffic and business from some 2.8 million travelers annually from across 118 countries. Its European footprint covers some 115,000 homes, and some 300,000 rooms globally. Europe’s vacation rental market will be worth some $18.6 billion this year, according to estimates, growing at between four and eight percent annually.
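For readers who want to put those cited figures together, here is a quick, purely illustrative back-of-the-envelope projection (the $18.6 billion base and the four-to-eight percent range are simply the estimates quoted above, compounded forward):

```python
# Illustrative projection of Europe's vacation rental market, using the
# article's cited estimates: ~$18.6B this year, growing 4-8% annually.
def project_market(base_usd_bn: float, annual_rate: float, years: int) -> float:
    """Compound a base market size (in $bn) at a fixed annual growth rate."""
    return base_usd_bn * (1 + annual_rate) ** years

base = 18.6  # $bn, the estimate cited for this year
for rate in (0.04, 0.08):  # low and high ends of the cited growth range
    print(f"{rate:.0%} growth -> ${project_market(base, rate, 5):.1f}B in five years")
```

Even at the low end of that range, the cited figures imply a market well above $22 billion within five years.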

Oyo, which is increasingly expanding its business and recently entered the co-working space, said earlier it will invest $300 million in expanding its footprint in the U.S.

The announcement today comes weeks after Ritesh Agarwal, the founder and CEO of Oyo, raised his stake in the startup with a $2 billion buyback. The move was widely praised by entrepreneurs in India.


Business management startup vCita acquires email marketing tool WiseStamp

15:00 | 14 August

Just a couple of months after disclosing a $15 million round of funding, vCita, the business management SaaS for SMEs, has made an acquisition: It’s acquiring WiseStamp, a veteran of the Israeli startup scene that launched its email marketing tool a decade ago.

Unsurprisingly, terms of the deal remain undisclosed. However, I understand that vCita has acquired WiseStamp as a company, including all assets, employees, customer base, technology and other IP. In addition, WiseStamp’s two remaining founders will join vCita along with the rest of the 20-person team.

WiseStamp hadn’t taken much capital in its relatively long history, having raised around $400,000 from angel investors.

Founded in 2009 by Orly Izhaki, Tom Piamenta, Tzvika Avnery and Sasha Gimelshtein, WiseStamp offers an email signature solution for self-employed professionals. The company claims over 50,000 paying customers, who it says use the platform to increase social media engagement, expand business reach, and generate more sales.

Meanwhile, the much younger vCita says it has over 100,000 paying users worldwide who use its SaaS to manage their schedule, track invoices, collect payments, and organize client data via the vCita app.

“We’re thrilled to have WiseStamp join our team. Both companies share the same vision: Empowering small business owners to deliver their services at a level comparable to that of a large company, at a fraction of the cost,” says vCita co-founder and CEO Itzik Levy in a statement.

Adds Orly Izhaki, WiseStamp’s CEO and co-founder: “Over the years, WiseStamp created advanced solutions that enable hundreds of thousands of small enterprises to grow their business online. We are excited about the merger, which will establish us as one of the most dominant players in the SMB market.”


Facebook’s human-AI blend for audio transcription is now facing privacy scrutiny in Europe

13:42 | 14 August

Facebook’s lead privacy regulator in Europe is now asking the company for detailed information about the operation of a voice-to-text feature in Facebook’s Messenger app and how it complies with EU law.

Yesterday Bloomberg reported that Facebook uses human contractors to transcribe app users’ audio messages — yet its privacy policy makes no clear mention of the fact that actual people might listen to your recordings.

A page on Facebook’s help center also includes a “note” saying “Voice to Text uses machine learning” — but does not say the feature is also powered by people working for Facebook listening in.

A spokesperson for the Irish Data Protection Commission told us: “Further to our ongoing engagement with Google, Apple and Microsoft in relation to the processing of personal data in the context of the manual transcription of audio recordings, we are now seeking detailed information from Facebook on the processing in question and how Facebook believes that such processing of data is compliant with their GDPR obligations.”

Bloomberg’s report follows similar revelations about AI assistant technologies offered by other tech giants, including Apple, Amazon, Google and Microsoft — which have also attracted attention from European privacy regulators in recent weeks.

What this tells us is that the hype around AI voice assistants still glosses over a far less high-tech backend, even as lashings of machine learning marketing guff are used to cloak the ‘mechanical turk’ components (i.e. humans) required for the tech to live up to the claims.

This is a very old story indeed. To wit: A full decade ago, a UK startup called SpinVox, which had claimed to have advanced voice recognition technology for converting voicemails to text messages, was reported to be leaning very heavily on call centers in South Africa and the Philippines… staffed by, yep, actual humans.

Returning to present day ‘cutting-edge’ tech, following Bloomberg’s report Facebook said it suspended human transcriptions earlier this month — joining Apple and Google in halting manual reviews of audio snippets for their respective voice AIs. (Amazon has since added an opt out to the Alexa app’s settings.)

We asked Facebook where in the Messenger app it had been informing users that human contractors might be used to transcribe their voice chats/audio messages; and how it collected Messenger users’ consent to this form of data processing — prior to suspending human reviews.

The company did not respond to our questions. Instead a spokesperson provided us with the following statement: “Much like Apple and Google, we paused human review of audio more than a week ago.”

Facebook also described the audio snippets that it sent to contractors as masked and de-identified; said they were only collected when users had opted in to transcription on Messenger; and were only used for improving the transcription performance of the AI.

It also reiterated a long-standing rebuttal by the company to user concerns about general eavesdropping by Facebook, saying it never listens to people’s microphones without device permission nor without explicit activation by users.

How Facebook gathers permission to process data is a key question, though.

The company has recently, for example, used a manipulative consent flow in order to nudge users in Europe to switch on facial recognition technology — rolling back its previous stance, adopted in response to earlier regulatory intervention, of switching the tech off across the bloc.

So a lot rests on how exactly Facebook has described the data processing at any point it is asking users to consent to their voice messages being reviewed by humans (assuming it’s relying on consent as its legal basis for processing this data).

Bundling consent into general T&Cs for using the product is also unlikely to be compliant under EU privacy law, given that the bloc’s General Data Protection Regulation requires consent to be purpose limited, as well as fully informed and freely given.

If Facebook is relying on legitimate interests to process Messenger users’ audio snippets in order to enhance its AI’s performance it would need to balance its own interests against any risk to people’s privacy.

Voice AIs are especially problematic in this respect because audio recordings may capture the personal data of non-users too — given that people in the vicinity of a device (or indeed a person on the other end of the phone line who’s leaving you a message) could have their personal data captured without ever having had the chance to consent to Facebook contractors getting to hear it.

Leaks of Google Assistant snippets to the Belgian press recently highlighted both the sensitive nature of recordings and the risk of reidentification posed by such recordings — with journalists able to identify some of the people in the recordings.

Multiple press reports have also suggested contractors employed by tech giants are routinely overhearing intimate details captured via a range of products that include the ability to record audio and stream this personal data to the cloud for processing.


Dostavista, the ‘crowdsourced’ same-day delivery service, raises $15M Series B

14:30 | 13 August

Dostavista, the “crowdsourced” same-day delivery service founded in Russia but operating in several countries, has raised a $15 million Series B. Leading the round is Vostok New Ventures, with participation from existing investors Flashpoint and Addventure.

Founded in 2012 by Mike Alexandrovski, after he initially considered creating a game where players would be asked to deliver virtual items before pivoting to the real thing, Dostavista promises same-day delivery powered by its network of “trusted couriers”.

Via the Dostavista mobile app, a gig economy-styled courier can be booked to pick up and deliver the requested item in a claimed “less than 90 minutes”.

The company currently operates in 11 countries, including Brazil, India, Indonesia, Korea, Malaysia, Mexico, the Philippines, Russia, Thailand, Turkey and Vietnam. It employs almost 400 people and says it is on track to hit a $100 million GMV run rate.

Perhaps most noteworthy in the highly competitive and usually cash intensive market of delivery, Dostavista says it is already profitable.

“These days it’s possible to order food to your house in a half-hour, taxi in minutes, but unless you’re an Amazon Prime member, your options for affordable, same-day delivery of goods are very limited,” says Alexandrovski in a statement. “That’s a problem and one we intend to solve.”

Meanwhile, today’s injection of capital will be used to invest further in its product, try out new “bold” product experiments, and for more aggressive marketing and sales. This will include strengthening the global team through additional hires.


Facebook denies making contradictory claims on Cambridge Analytica and other ‘sketchy’ apps

12:16 | 13 August

Facebook has denied contradicting itself in evidence to the UK parliament and a US public prosecutor.

Last month the Department for Digital, Culture, Media and Sport (DCMS) committee wrote to the company to raise what it said were discrepancies in evidence Facebook has given to international parliamentarians vs evidence submitted in response to the Washington, DC Attorney General — which is suing Facebook on its home turf, over the Cambridge Analytica data misuse scandal.

Yesterday Bloomberg obtained Facebook’s response to the committee.

In the letter Rebecca Stimson, the company’s head of U.K. public policy, denies any inconsistency in evidence submitted on both sides of the Atlantic, writing:

The evidence given to the Committees by Mike Schroepfer (Chief Technology Officer), Lord Allan (Vice President for Policy Solutions), and other Facebook representatives is entirely consistent with the allegations in the SEC Complaint filed 24 July 2019. In their evidence, Facebook representatives truthfully answered questions about when the company first learned of Aleksandr Kogan / GSR’s improper transfer of data to Cambridge Analytica, which was in December 2015 through The Guardian’s reporting. We are aware of no evidence to suggest that Facebook learned any earlier of that improper transfer.

As we have told regulators, and many media stories have since reported, we heard speculation about data scraping by Cambridge Analytica in September 2015. We have also testified publicly that we first learned Kogan sold data to Cambridge Analytica in December 2015. These are two different things and this is not new information.

Stimson goes on to claim that Facebook merely heard “rumours in September 2015 that Cambridge Analytica was promoting its ability to scrape user data from public Facebook pages”. (In statements made earlier this year to the press on this same point Facebook has also used the word “speculation” to refer to the internal concerns raised by its staff, writing that “employees heard speculation that Cambridge Analytica was scraping data”.)

In the latest letter, Stimson repeats Facebook’s earlier line about data scraping being common for public pages (which may be true, but plenty of Facebook users’ pages aren’t public to anyone other than their hand-picked friends, so…), before claiming it’s not the same as the process by which Cambridge Analytica obtained Facebook data (i.e. by paying a developer on Facebook’s platform to build an app that harvested users’ and users’ friends’ data).

“The scraping of data from public pages (which is unfortunately common for any internet service) is different from, and has no relationship to, the illicit transfer to third parties of data obtained by an app developer (which was the subject of the December 2015 Guardian article and of Facebook representatives’ evidence),” she writes, suggesting a ‘sketchy’ data modeling company with deep Facebook platform penetration looked like ‘business as usual’ for Facebook management back in 2015.

As we’ve reported before, it has emerged this year — via submissions to other US legal proceedings against Facebook — that staff working for its political advertising division raised internal concerns about what Cambridge Analytica was up to in September 2015, months prior to The Guardian article which Facebook founder Mark Zuckerberg has claimed is the point when he personally learned what Cambridge Analytica was doing on his platform.

These Facebook staff described Cambridge Analytica as a “sketchy (to say the least) data modeling company that has penetrated our market deeply” — months before the newspaper published its scoop on the story, per an SEC complaint which netted Facebook a $100M fine, in addition to the FTC’s $5BN privacy penalty.

Nonetheless, Facebook is once again claiming there’s nothing but ‘rumors’ to see here.

The DCMS committee also queried Facebook’s flat denial to the Washington, DC Attorney General that the company knew of any other apps misusing user data; failed to take proper measures to secure user data by failing to enforce its own platform policy; and failed to disclose to users when their data was misused — pointing out that Facebook reps told it on multiple occasions that Facebook knew of other apps violating its policies and had taken action against them.

Again, Facebook denies any contradiction whatsoever here.

“The particular allegation you cite asserts that Facebook knew of third party applications that violated its policies and failed to take reasonable measures to enforce against them,” writes Stimson. “As we have consistently stated to the Committee and elsewhere, we regularly take action against apps and developers who violate our policies. We therefore appropriately, and consistently with what we told the Committee, denied the allegation.”

So, turns out, Facebook was only flat denying some of the allegations in para 43 of the Washington, DC Attorney General’s complaint. But the company doesn’t see bundling responses to multiple allegations under one blanket denial as in any way misleading…

In a tweet responding to Facebook’s latest denial, DCMS committee chair Damian Collins dubbed the company’s response “typically disingenuous” — before pointing out: “They didn’t previously disclose to us concerns about Cambridge Analytica prior to Dec 2015, or say what they did about it & haven’t shared results of investigations into other Apps.”

On the app audit issue, Stimson’s letter justifies Facebook’s failure to provide the DCMS committee with the requested information on other ‘sketchy’ apps it’s investigating, writing this is because the investigation — which CEO Mark Zuckerberg announced in a Facebook blog post on March 21, 2018; saying then that it would “investigate all apps that had access to large amounts of information”; “conduct a full audit of any app with suspicious activity”; “ban any developer from our platform that does not agree to a thorough audit”; and ban any developers found to have misused user data; and “tell everyone affected by those apps” — is, er, “ongoing”.

More than a year ago Facebook did reveal that it had suspended around 200 suspicious apps out of “thousands” reviewed. However updates on Zuckerberg’s great app audit have been thin on the ground since then, to say the least.

“We will update the Committee as we publicly share additional information about that extensive effort,” says Stimson now.


London edtech startup Pi-Top sees layoffs after major contract loss

21:30 | 12 August

London-based edtech startup Pi-Top has cut a number of staff, TechCrunch has learned.

According to our sources the company has reduced its headcount in recent weeks, with staff being told cuts are a result of restructuring as it seeks to implement a new strategy.

One source told us Pi-Top recently lost out on a large education contract.

Another source said sales at Pi-Top have been much lower than predicted — with all major bids being lost.

Pi-Top confirmed to TechCrunch that it has let staff go, saying it has reduced headcount from 72 to 60 people across its offices in London, Austin and Shenzhen.

Our sources suggest the total number of layoffs could be up to a third. 

In a statement, Pi-Top told us:

pi-top has become one of the fastest growing ed-tech companies in the market in 4.5 years.  We have a unique vision to increase access to coding and technical education through project based learning to inspire a new generation of makers.

As part of this vision we built up our global team with a view to winning a particularly exciting national project in a developing nation, where we had a previous large scale successful implementation. We were disappointed this tender ultimately fell through due to economic factors in the region and have subsequently made the unfortunate but unavoidable decision to reduce our team size from 72 to 60 people across our offices in London, Austin and Shenzhen.

Moving forward we are focusing on our growth within the USA where we continue to enjoy widespread success. We are rolling out our new learning platform pi-top Further which will enable schools everywhere to access a world of content enhanced by practical hands-on project based learning outcomes. We have recently completed a successful Kickstarter campaign and we look forward to releasing our newest product pi-top [4].

We are also proud to have appointed Stanley Buchesky as our new Executive Chairman. Stanley brings a wealth of experience in the ed-tech sector and will be a great asset to our strategy going forward.

Pi-Top sells hardware and software designed for educational use in schools. It’s one of a large number of edtech startups that have sought to tap into the popularity of the ‘learn to code’ movement by piggybacking atop the (also British) low-cost Raspberry Pi single-board computer — which provides the computing power for all Pi-Top’s products.

Pi-Top adds its own OS and additional education-focused software to the Pi, as well as proprietary cases — including a bright green laptop housing with a built in rail for breadboarding electronics.

Its most recent product, the Pi-Top 4, which was announced back in January, appears intended to move the company away from its initial focus on educational desktop computing toward more modular and embeddable hardware hacking, which could be used by schools to power a wider variety of robotics and electronics projects.

Despite raising $16M in VC funding just over a year ago, Pi-Top opted to run a crowdfunding campaign for the Pi-Top 4 — going on to raise almost $200,000 on Kickstarter from 521 backers.

Pi-Top 4 backers have been told to expect the device to ship in November.
