Blog of the website «TechCrunch»

People

John Smith

John Smith, 48

Joined: 28 January 2014

Interests: No data

Jonnathan Coleman

Jonnathan Coleman, 32

Joined: 18 June 2014

About myself: You may say I'm a dreamer

Interests: Snowboarding, Cycling, Beer

Andrey II

Andrey II, 41

Joined: 08 January 2014

Interests: No data

David

David

Joined: 05 August 2014

Interests: No data

David Markham

David Markham, 65

Joined: 13 November 2014

Interests: No data

Michelle Li

Michelle Li, 41

Joined: 13 August 2014

Interests: No data

Max Almenas

Max Almenas, 53

Joined: 10 August 2014

Interests: No data

29Jan

29Jan, 31

Joined: 29 January 2014

Interests: No data

s82 s82

s82 s82, 26

Joined: 16 April 2014

Interests: No data

Wicca

Wicca, 36

Joined: 18 June 2014

Interests: No data

Phebe Paul

Phebe Paul, 26

Joined: 08 September 2014

Interests: No data

Артем Ступаков

Артем Ступаков, 93

Joined: 29 January 2014

About myself: Enjoying life!

Interests: No data

sergei jkovlev

sergei jkovlev, 59

Joined: 03 November 2019

Interests: music, movies, cars

Алексей Гено

Алексей Гено, 8

Joined: 25 June 2015

About myself: Hi

Interests: Interest1daasdfasf, http://apple.com

technetonlines

technetonlines

Joined: 24 January 2019

Interests: No data



Main article: Social

Topics from 1 to 10 | in all: 1532

Scammers peddling Islamophobic clickbait is business as usual at Facebook

02:32 | 6 December

A network of scammers used a ring of established right-wing Facebook pages to stoke Islamophobia and make a quick buck in the process, a new report from the Guardian reveals. But it’s less a vast international conspiracy and more simply that Facebook is unable to police its platform to prevent even the most elementary scams — with serious consequences.

The Guardian’s multi-part report depicts the events as a scheme of grand proportions executed for the express purpose of harassing Representatives Ilhan Omar (D-MN), Rashida Tlaib (D-MI) and other prominent Muslims. But the facts it uncovered point towards this being a run-of-the-mill money-making operation that used tawdry, hateful clickbait and evaded Facebook’s apparently negligible protections against this kind of thing.

The scam basically went like this: an administrator of a popular right-wing Facebook page would get a message from a person claiming to share their values that asked if they could be made an editor. Once granted access, this person would publish clickbait stories — frequently targeting Muslims, and often Rep. Omar, since they reliably led to high engagement. The stories appeared on a handful of ad-saturated websites that were presumably owned by the scammers.

That appears to be the extent of the vast conspiracy, or at least its operations — duping credulous conservatives into clicking through to an ad farm.

Its human cost, however, whether incidental or deliberate, is something else entirely. Rep. Omar is already the target of many coordinated attacks, some from self-proclaimed patriots within this country; just last month, an Islamophobic Trump supporter pleaded guilty in federal court to making death threats against her.

Social media is asymmetric warfare in that a single person can be the focal point for the firepower — figurative but often with the threat of literal — of thousands or millions. That a Member of Congress can be the target of such continuous abuse makes one question the utility of the platform on which that abuse is enabled.

In a searing statement offered to the Guardian, Rep. Omar took Facebook to task:

I’ve said it before and I’ll say it again: Facebook’s complacency is a threat to our democracy. It has become clear that they do not take seriously the degree to which they provide a platform for white nationalist hate and dangerous misinformation in this country and around the world. And there is a clear reason for this: they profit off it. I believe their inaction is a grave threat to people’s lives, to our democracy and to democracy around the world.

Despite the scale of its effect on Rep. Omar and other targets, it’s possible and even likely that this entire thing was carried out by a handful of people. The operation was based in Israel, the report repeatedly mentions, but it isn’t a room of state-sponsored hackers feverishly tapping their keyboards — the guy they tracked down is a jewelry retailer and amateur SEO hustler living in a suburb of Tel Aviv who answered the door in sweatpants and nonchalantly denied all involvement.

The funny thing is that, in a way, this does amount to a vast international conspiracy. On one hand, it’s a guy in sweatpants worming his way into some trashy Facebook pages and mass-posting links to his bunk news sites. But on the other, it’s a coordinated effort to promote Islamophobic, right-wing content that produced millions of interactions and doubtless further fanned the flames of hatred.

Why not both? After all, they represent different ways that Facebook fails as a platform to protect its users. “We don’t allow people to misrepresent themselves on Facebook,” the company wrote in a statement to the Guardian. Obviously, that isn’t true. Or rather, perhaps it’s true in the way that running at the pool isn’t allowed. People just do it anyway, because the lifeguards and Facebook don’t do their job.

 



This browser extension unhides Instagram Likes

21:51 | 3 December

Instagram is hiding Like counts to make people feel better. But what if you’re curious, competitive, or just petty? Now you can re-embrace the popularity contest by installing the Socialinsider Chrome extension that reveals Instagram Like and comment counts. “The Return Of The Likes” extension overlays the numbers of Likes and comments on the top right corner of posts on Instagram’s website. If you don’t want Instagram’s overprotective helicopter parenting, now you can download the extension here to put an end to it.
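For readers curious how an overlay like that can work, here is a minimal, hypothetical sketch of a Chrome content script in the same spirit. It is not Socialinsider’s code: the selectors, the badge styling and the getLikeCount helper are all assumptions for illustration, since the extension doesn’t document how it reads the hidden counts.

```typescript
// content-script.ts: illustrative only; selectors and the count source are assumptions.

// Hypothetical helper: a real extension would read the count from data Instagram's
// web client already loads for each post. Here it is just a placeholder.
async function getLikeCount(post: HTMLElement): Promise<number | null> {
  void post; // a real implementation would inspect the post's underlying data
  return null;
}

// Inject a small badge in the top-right corner of a post element.
function addBadge(post: HTMLElement, count: number): void {
  if (post.querySelector(".like-count-badge")) return; // avoid duplicate badges
  const badge = document.createElement("span");
  badge.className = "like-count-badge";
  badge.textContent = `${count} likes`;
  badge.style.cssText =
    "position:absolute;top:8px;right:8px;background:#000;color:#fff;" +
    "padding:2px 6px;border-radius:4px;font-size:12px;z-index:9999;";
  post.style.position = "relative";
  post.appendChild(badge);
}

// Watch the feed and annotate posts as they appear in the DOM.
const observer = new MutationObserver(() => {
  document.querySelectorAll<HTMLElement>("article").forEach(async (post) => {
    const count = await getLikeCount(post);
    if (count !== null) addBadge(post, count);
  });
});
observer.observe(document.body, { childList: true, subtree: true });
```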

Obviously, it’s not as useful as showing Like counts right in the Instagram mobile app. You probably aren’t going to switch to browsing Insta just on the web, but if you see a post you want to know the Like count of, you can easily send yourself the permalink and open it on a computer.

Instagram is currently testing hiding Like counts with a percentage of users in every country worldwide. It started the experiment in Canada in April before adding six more countries in July and then the U.S. last month. Facebook launched a similar hidden likes experiment in Australia in September.

TechCrunch tested Return Of The Likes and verified that it works. It comes from social media analytics company Socialinsider, which offers software for measuring engagement and benchmarking performance against competitors. The company insists that “No data is sent to Socialinsider servers.” We asked Instagram if the Chrome extension was in compliance with the app’s rules, and will update if we hear back.

As social media evolves, the emerging trend is for platforms to step in to protect users. In many cases, it’s warranted. Like counts can hurt people’s well-being by leading them into envy spirals comparing themselves against peers, or coercing them to self-censor to avoid an embarrassingly low Like count. Still, the question remains whether users deserve control over their own experience. Should we be able to opt back in to seeing Like counts, the way we have controls over block lists of offensive words?

After the platforms step up to ensure safety, we’ll have to decide when we want to step in and demand to see what’s been covered up.

Additional reporting by Lucas Matney

 



Facebook expands its efforts against ad discrimination

18:01 | 3 December

Under the terms of a settlement with the ACLU and other civil rights groups earlier this year, Facebook has been taking steps to prevent discriminatory ad targeting.

Specifically, the company says ads in the United States that involve housing, employment or credit can no longer be targeted based on age, gender, ZIP code or multicultural affinity. Nor can the ads use more detailed targeting that connects to these categories.

Today, Facebook is announcing what VP of Ads Product Marketing Graham Mudd described as the next “milestone in our effort to reduce and eliminate discrimination.”

First, it’s expanding the enforcement of these rules beyond Facebook Ad Manager to encompass every other place where someone might buy ads on Facebook: the Ads Manager app, Instagram Promote, the ad creation tools on Facebook Pages and the Facebook Marketing API (which connects with third-party ad-buying tools).

Second, it’s expanding its searchable ad library — first created in response to concerns about political misinformation — to include housing ads targeted at a U.S. audience.

So moving forward, if a regulatory agency, civil rights group, journalist or anyone else wants to check on how businesses are actually using Facebook to advertise housing, they can check the archive. This portion of the library will start archiving ads from tomorrow (December 4) onward, and Facebook says it will eventually include employment and credit ads as well.

Mudd said that Facebook has also been helping advertisers understand how to work within the new rules. While he described this as “the right tradeoff” to combat discrimination, he also suggested that “there are and have always been very reasonable and legal non-discriminatory advertising practices” that use age- and gender-based targeting.

Now, he said, advertisers are having to “relearn how to use the platform given these restrictions.”

 



Facebook launches a photo portability tool, starting in Ireland

14:14 | 2 December

It’s not friend portability, but Facebook has announced the launch today of a photo transfer tool to enable users of its social network to port their photos directly to Google’s photo storage service, via encrypted transfer.

The photo portability feature is initially being offered to Facebook users in Ireland, where the company’s international HQ is based. Facebook says it is still testing and tweaking the feature based on feedback but slates “worldwide availability” as coming in the first half of 2020.

It also suggests porting to other photo storage services will be supported in the future, in addition to Google Photos — without specifying which services it may seek to add.

Facebook says the tool is based on code developed via its participation in the Data Transfer Project — a collaborative effort started last year that’s currently backed by five tech giants (Apple, Facebook, Google, Microsoft and Twitter) who have committed to build “a common framework with open-source code that can connect any two online service providers, enabling a seamless, direct, user initiated portability of data between the two platforms”.

Facebook also points to a white paper it published in September — where it advocates for “clear rules” to govern the types of data that should be portable and “who is responsible for protecting that data as it moves to different providers”.

Behind all these moves is of course the looming threat of antitrust regulation, with legislators and agencies on both sides of the Atlantic now closely eyeing platforms’ grip on markets, eyeballs and data.

Hence Facebook’s white paper couching portability tools as “helping keep competition vibrant among online services”. (Albeit, if the ‘choice’ being offered is to pick another tech giant to get your data that’s not exactly going to reboot the competitive landscape.)

It’s certainly true that portability of user uploaded data can be helpful in encouraging people to feel they can move from a dominant service.

However it is also something of a smokescreen — especially when A) the platform in question is a social network like Facebook (because it’s people who keep other people stuck to these types of services); and B) the value derived from the data is retained by the platform regardless of whether the photos themselves travel elsewhere.

Facebook processes user uploaded data such as photos to gain personal insights to profile users for ad targeting purposes. So even if you send your photos elsewhere that doesn’t diminish what Facebook has already learned about you, having processed your selfies, groupies, baby photos, pet shots and so on. (It has also designed the portability tool to send a copy of the data; ergo, Facebook still retains your photos unless you take additional action — such as deleting your account.)

The company does not offer users any controls (portability tools or access rights) over the inferences it makes based on personal data such as photos.

Or indeed control over insights it derives from its analysis of usage of its platform or wider browsing of the Internet (Facebook tracks both users and non-users across the web via tools like social plug-ins and tracking pixels).

Given its targeted ads business is powered by a vast outgrowth of tracking (aka personal data processing), there’s little risk to Facebook to offer a portability feature buried in a sub-menu somewhere that lets a few in-the-know users click to send a copy of their photos to another tech giant.

Indeed, it may hope to benefit from similar incoming ports from other platforms in future.

“We hope this product can help advance conversations on the privacy questions we identified in our white paper,” Facebook writes. “We know we can’t do this alone, so we encourage other companies to join the Data Transfer Project to expand options for people and continue to push data portability innovation forward.”

Competition regulators looking to reboot digital markets will need to dig beneath the surface of such self-serving initiatives if they are to alight on a meaningful method of reining in platform power.

 



Facebook bowed to a Singapore government order to brand a news post as false

21:05 | 30 November

Facebook added a correction notice to a post by a fringe news site that Singapore’s government said contained false information. It’s the first time the government has tried to enforce a new law against ‘fake news’ outside its borders.

The post by fringe news site States Times Review (STR) contained “scurrilous accusations”, according to the Singapore government.

The States Times Review post contained accusations about the arrest of an alleged whistleblower and election-rigging.

Singapore authorities had previously ordered STR editor Alex Tan to correct the post but the Australian citizen said he would “not comply with any order from a foreign government”.

Mr Tan, who was born in Singapore, said he was an Australian citizen living in Australia and was not subject to the law. In a follow-up post, he said he would “defy and resist every unjust law”. He also posted the article on Twitter, LinkedIn and Google Docs and challenged the government to order corrections there as well.

In the note, Facebook said it “is legally required to tell you that the Singapore government says this post has false information”. The note was embedded at the bottom of the original post, which was not altered. Only social media users in Singapore could see the note.

In a statement Facebook said it had applied the label as required under the “fake news” law. The law, known as the Protection from Online Falsehoods and Manipulation bill, came into effect in October.

According to Facebook’s “transparency report”, it often blocks content that governments allege violates local laws, with nearly 18,000 cases globally in the year to June.

Facebook — which has its Asia headquarters in Singapore — said it hoped assurances that the law would not impact on free expression “will lead to a measured and transparent approach to implementation”.

Anyone who breaks the law could be fined heavily and face a prison sentence of up to five years. The law also bans the use of fake accounts or bots to spread fake news, with penalties of up to S$1m (£563,000, $733,700) and a jail term of up to 10 years.

Critics say the law’s reach could allow Singapore’s government to jeopardize freedom of expression both in the city-state and outside its borders.

 



European parliament’s NationBuilder contract under investigation by data regulator

19:28 | 28 November

Europe’s lead data regulator has issued its first ever sanction of an EU institution — taking enforcement action against the European parliament over its use of US-based digital campaign company, NationBuilder, to process citizens’ voter data ahead of the spring elections.

NationBuilder is a veteran of the digital campaign space — indeed, we first covered the company back in 2011 — and has become nearly ubiquitous for digital campaigns in some markets.

But in recent years European privacy regulators have raised questions over whether all its data processing activities comply with regional data protection rules, responding to growing concern around election integrity and data-fuelled online manipulation of voters.

The European parliament had used NationBuilder as a data processor for a public engagement campaign to promote voting in the spring election, which was run via a website called thistimeimvoting.eu.

The website collected personal data from more than 329,000 people interested in the EU election campaign — data that was processed on behalf of the parliament by NationBuilder.

The European Data Protection Supervisor (EDPS), which started an investigation in February 2019, acting on its own initiative — and “taking into account previous controversy surrounding this company” as its press release puts it — found the parliament had contravened regulations governing how EU institutions can use personal data related to the selection and approval of sub-processors used by NationBuilder.

The sub-processors in question are not named. (We’ve asked for more details.)

The parliament received a second reprimand from the EDPS after it failed to publish a compliant Privacy Policy for the thistimeimvoting website within the deadline set by the EDPS, although the regulator says the parliament acted in line with its recommendations in the case of both sanctions.

The EDPS also has an ongoing investigation into whether the Parliament’s use of the voter mobilization website, and related processing operations of personal data, were in accordance with rules applicable to EU institutions (as set out in Regulation (EU) 2018/1725).

The enforcement actions had not been made public until a hearing earlier this week — when assistant data protection supervisor, Wojciech Wiewiórowski, mentioned the matter during a Q&A session in front of MEPs.

He referred to the investigation as “one of the most important cases we did this year”, without naming the data processor. “Parliament was not able to create the real auditing actions at the processor,” he told MEPs. “Neither control the way the contract has been done.”

“Fortunately nothing bad happened with the data but we had to make this contract terminated the data being erased,” he added.

When TechCrunch asked the EDPS for more details about this case on Tuesday a spokesperson told us the matter is “still ongoing” and “being finalized” and that it would communicate about it soon.

Today’s press release looks to be the upshot.

In canned commentary provided in the release, Wiewiórowski writes:

The EU parliamentary elections came in the wake of a series of electoral controversies, both within the EU Member States and abroad, which centred on the threat posed by online manipulation. Strong data protection rules are essential for democracy, especially in the digital age. They help to foster trust in our institutions and the democratic process, through promoting the responsible use of personal data and respect for individual rights. With this in mind, starting in February 2019, the EDPS acted proactively and decisively in the interest of all individuals in the EU to ensure that the European Parliament upholds the highest of standards when collecting and using personal data. It has been encouraging to see a good level of cooperation developing between the EDPS and the European Parliament over the course of this investigation.

One question that arises is why no firmer sanction has been issued to the European parliament — beyond a (now public) reprimand, some nine months after the investigation began.

Another question is why the matter was not more transparently communicated to EU citizens.

The EDPS’ PR emphasizes that its actions “are not limited to reprimands”, without explaining why the two enforcements thus far didn’t merit tougher action. (At the time of writing the EDPS had not responded to questions about why no fines have so far been issued.)

There may be more to come, though.

The regulator says it will “continue to check the parliament’s data protection processes” — revealing that the European Parliament has finished informing individuals of a revised intention to retain personal data collected by the thistimeimvoting website until 2024.

“The outcome of these checks could lead to additional findings,” it warns, adding that it intends to finalise the investigation by the end of this year.

Asked about the case, a spokeswoman for the European parliament told us that the thistimeimvoting campaign had been intended to motivate EU citizens to participate in the democratic process, and that it used a mix of digital tools and traditional campaigning techniques in order to try to reach as many potential voters as possible. 

She said NationBuilder had been used as a customer relations management platform to support staying in touch with potential voters — via an offer to interested citizens to sign up to receive information from the parliament about the elections (including events and general info).

Subscribers were also asked about their interests — which allowed the parliament to send personalized information to people who had signed up.

Some of the regulatory concerns around NationBuilder have centered on how it allows campaigns to match data held in their databases (from people who have signed up) with social media data that’s publicly available, such as an unlocked Twitter account or public Facebook profile.

In 2017 in France, after an intervention by the national data watchdog, NationBuilder suspended this data matching tool in the market.

The same feature has attracted attention from the UK’s Information Commissioner — which warned last year that political parties should be providing a privacy notice to individuals whose data is collected from public sources such as social media and matched, yet aren’t doing so.

“The ICO is concerned about political parties using this functionality without adequate information being provided to the people affected,” the ICO said in the report, while stopping short of ordering a ban on the use of the matching feature.

Its investigation confirmed that up to 200 political parties or campaign groups used NationBuilder during the 2017 UK general election.

 



Only a few 2020 US presidential candidates are using a basic email security feature

19:59 | 27 November

Just one-third of the 2020 U.S. presidential candidates are using an email security feature that could prevent an attack similar to the one that hobbled the Democrats during the 2016 election.

Out of the 21 presidential candidates in the race according to Reuters, seven Democrats and one Republican candidate are using and enforcing DMARC, an email security protocol that verifies the authenticity of a sender’s email and rejects spoofed emails, which hackers often use to try to trick victims into opening malicious links from seemingly known individuals.

It’s a marked increase from April, when only Elizabeth Warren’s campaign had employed the technology. Now the Democratic campaigns of Joe Biden, Kamala Harris, Michael Bloomberg, Amy Klobuchar, Cory Booker, Tulsi Gabbard and Steve Bullock have all improved their email security.

The remaining candidates, including presidential incumbent Donald Trump, are not rejecting spoofed emails. Another seven candidates are not using DMARC at all.

That, experts say, puts their campaigns at risk from foreign influence campaigns and cyberattacks.

“When a campaign doesn’t have the basics in place, they are leaving their front door unlocked,” said Armen Najarian, chief identity officer at Agari, an email security company. “Campaigns have to have both email authentication set at an enforcement policy of reject and advanced email security in place to be protected against socially-engineered covert attacks,” he said.

DMARC, which is free and fairly easy to implement, can not only prevent attackers from impersonating a candidate’s campaign but also help prevent the kind of targeted phishing attack against the candidate’s network that resulted in the breach and theft of thousands of emails from the Democrats.
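To make the mechanics concrete: a DMARC policy is simply a DNS TXT record published at _dmarc.<domain>, and “enforcing” means its p= tag is set to reject (or quarantine) rather than the report-only none. Below is a small sketch, with a made-up domain and record, of how one might parse such a record and check whether it enforces rejection; it is illustrative only, not a tool any campaign uses.

```typescript
// dmarc-check.ts: illustrative sketch; the domain and record below are made up.

// What a DMARC TXT record looks like, as published in DNS at _dmarc.example-campaign.org:
const exampleRecord =
  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example-campaign.org; pct=100";

// Parse the semicolon-separated tag=value pairs of a DMARC record.
function parseDmarc(record: string): Map<string, string> {
  const tags = new Map<string, string>();
  for (const part of record.split(";")) {
    const [key, ...rest] = part.trim().split("=");
    if (key && rest.length > 0) tags.set(key.toLowerCase(), rest.join("="));
  }
  return tags;
}

// A policy only blocks spoofed mail when it is set to quarantine or reject;
// "p=none" merely asks receivers to send reports, which is why enforcement matters.
function isEnforcing(record: string): boolean {
  const policy = parseDmarc(record).get("p");
  return policy === "reject" || policy === "quarantine";
}

console.log(isEnforcing(exampleRecord)); // true: this record tells receivers to reject spoofed mail
```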

In the run-up to the 2016 presidential election, Russian hackers sent an email to Hillary Clinton campaign manager John Podesta, posing as a Google security warning. The phishing email, which was published by WikiLeaks along with the rest of the email cache, tricked Podesta into clicking a link that took over his account, allowing hackers to steal tens of thousands of private emails.

A properly enforced DMARC policy would have kept the phishing email out of Podesta’s inbox altogether, though DMARC does not protect against every kind of highly sophisticated cyberattack. The breach was bruising for the Democrats, one that led to high-profile resignations and harmed public perceptions of the Clinton presidential campaign — one she ultimately lost.

“It’s perplexing that the campaigns are not aggressively jumping on this issue,” said Najarian.

 



Brexit ad blitz data firm paid by Vote Leave broke privacy laws, watchdogs find

14:08 | 27 November

A joint investigation by watchdogs in Canada and British Columbia has found that Cambridge Analytica-linked data firm AggregateIQ broke privacy laws in Facebook ad-targeting work it undertook for the official Vote Leave Brexit campaign in the UK’s 2016 EU referendum.

A quick reminder: Vote Leave was the official leave campaign in the referendum on the UK’s membership of the European Union. While Cambridge Analytica is the (now defunct) firm at the center of a massive Facebook data misuse scandal which has dented the company’s fortunes and continues to tarnish its reputation.

Vote Leave’s campaign director, Dominic Cummings — now a special advisor to the UK prime minister — wrote in 2017 that the winning recipe for the leave campaign was data science. And, more specifically, spending 98% of its marketing budget on “nearly a billion targeted digital adverts”.

Targeted at Facebook users.

The problem is, per the Canadian watchdogs’ conclusions, AIQ did not have proper legal consents from UK voters for disclosing their personal information to Facebook for the Brexit ad blitz which Cummings ordered.

Either for “the purpose of advertising to those individuals (via ‘custom audiences’) or for the purpose of analyzing their traits and characteristics in order to locate and target others like them (via ‘lookalike audiences’)”.

Oops.

Last year the UK’s Electoral Commission also concluded that Vote Leave breached election campaign spending limits by channeling money to AIQ to run the targeted political ads on Facebook’s platform, via undeclared joint working with another Brexit campaign, BeLeave. So there’s a full sandwich of legal wrongdoing stuck to the Brexit mess that UK society remains mired in, more than three years later.

Meanwhile, the current UK General Election is now a digital petri dish for data scientists and democracy hackers to run wild experiments in microtargeted manipulation — given election laws haven’t been updated to take account of the outgrowth of the adtech industry’s tracking and targeting infrastructure, despite multiple warnings from watchdogs and parliamentarians.

Data really is a helluva drug.

The Canadian investigation cleared AIQ of any wrongdoing in its use of phone numbers to send SMS messages for another pro-Brexit campaign, BeLeave; a purpose the watchdogs found had been authorized by the consent provided by individuals who gave their information to that youth-focused campaign.

But they did find consent problems with work AIQ undertook for various US campaigns on behalf of Cambridge Analytica affiliate, SCL Elections — including for a political action committee, a presidential primary campaign and various campaigns in the 2014 midterm elections.

And, again — as we know — Facebook is squarely in the frame here too.

“The investigation finds that the personal information provided to and used by AIQ comes from disparate sources. This includes psychographic profiles derived from personal information Facebook disclosed to Dr. Aleksandr Kogan, and onward to Cambridge Analytica,” the watchdogs write.

“In the case of their work for US campaigns… AIQ did not attempt to determine whether there was consent it could rely on for its use and disclosure of personal information.”

The investigation also looked at AIQ’s work for multiple Canadian campaigns — finding fewer issues related to consent. Though the report states that in “certain cases, the purposes for which individuals are informed, or could reasonably assume their personal information is being collected, do not extend to social media advertising and analytics”.

AIQ also gets told off for failing to properly secure the data it misused.

This element of the probe resulted from a data breach reported by UpGuard after it found AIQ running an unsecured GitLab repository — holding what the report dubs “substantial personal information”, as well as encryption keys and login credentials which it says put the personal information of 35 million+ people at risk.

Double oops.

“The investigation determined that AIQ failed to take reasonable security measures to ensure that personal information under its control was secure from unauthorized access or disclosure,” is the inexorable conclusion.

Turns out if an entity doesn’t have a proper legal right to people’s information in the first place it may not be majorly concerned about where else the data might end up.

The report flows from an investigation into allegations of unauthorized access and use of Facebook user profiles which was started by the Office of the Information and Privacy Commissioner for BC in late 2017. A separate probe was opened by the Office of the Privacy Commissioner of Canada last year. The two watchdogs subsequently combined their efforts.

The upshot for AIQ from the joint investigation’s finding of multiple privacy and security violations is a series of, er, “recommendations”.

On the data use front it is suggested the company take “reasonable measures” to ensure any third-party consent it relies on for collection, use or disclosure of personal information on behalf of clients is “adequate” under the relevant Canadian and BC privacy laws.

“These measures should include both contractual measures and other measures, such as reviewing the consent language used by the client,” the watchdogs suggest. “Where the information is sensitive, as with political opinions, AIQ should ensure there is express consent, rather than implied.”

On security, the recommendations are similarly for it to “adopt and maintain reasonable security measures to protect personal information, and that it delete personal information that is no longer necessary for business or legal purposes”.

“During the investigation, AIQ took steps to remedy its security breach. AIQ has agreed to implement the Offices’ recommendations,” the report adds.

The upshot of political ‘data science’ for Western democracies? That’s still tbc. Buckle up.

 



Instagram founders join $30M raise for Loom work video messenger

19:39 | 26 November

Why are we all trapped in enterprise chat apps if we talk 6X faster than we type, and our brain processes visual info 60,000X faster than text? Thanks to Instagram, we’re not as camera-shy anymore. And everyone’s trying to remain in flow instead of being distracted by multi-tasking.

That’s why now is the time for Loom. It’s an enterprise collaboration video messaging service that lets you send quick clips of yourself so you can get your point across and get back to work. Talk through a problem, explain your solution, or narrate a screenshare. Some engineering hocus pocus sees videos start uploading before you finish recording so you can share instantly viewable links as soon as you’re done.
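Loom hasn’t published exactly how that pipeline works (and, as noted later, it holds a patent on its streaming, transcoding and storage technology), but the general trick of uploading while you record can be sketched with standard browser APIs. The upload endpoint and one-second chunk interval below are assumptions for illustration, not Loom’s implementation.

```typescript
// record-and-upload.ts: a generic sketch of "upload while recording".
// The uploadUrl endpoint is hypothetical; a real service would add resumable
// uploads, server-side assembly and transcoding on top of this idea.

async function recordAndStream(uploadUrl: string): Promise<void> {
  // Capture camera and microphone (a screen-recording tool would capture the display instead).
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });

  let chunkIndex = 0;
  recorder.ondataavailable = async (event: BlobEvent) => {
    if (event.data.size === 0) return;
    // Ship each chunk as soon as the recorder hands it over, so the server can start
    // assembling (and serving) the video before recording has even finished.
    await fetch(`${uploadUrl}?chunk=${chunkIndex++}`, {
      method: "POST",
      headers: { "Content-Type": "video/webm" },
      body: event.data,
    });
  };

  recorder.onstop = () => stream.getTracks().forEach((track) => track.stop());

  // Emit a data chunk roughly every second while recording is in progress.
  recorder.start(1000);
}
```

In practice the hard engineering lives in the server-side assembly and transcoding, which is where a product like Loom differentiates itself.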

“What we felt was that more visual communication could be translated into the workplace and deliver disproportionate value” co-founder and CEO Joe Thomas tells me. He actually conducted our whole interview over Loom, responding to emailed questions with video clips.

Launched in 2016, Loom is finally hitting its growth spurt. It’s up from 1.1 million users and 18,000 companies in February to 1.8 million people at 50,000 businesses sharing 15 million minutes of Loom videos per month. Remote workers are especially keen on Loom since it gives them face-to-face time with colleagues without the annoyance of scheduling synchronous video calls. “80% of our professional power users had primarily said that they were communicating with people that they didn’t share office space with” Thomas notes.

A smart product, swift traction, and a shot at riding the consumerization of enterprise trend has secured Loom a $30 million Series B. The round that’s being announced later today was led by prestigious SaaS investor Sequoia and joined by Kleiner Perkins, Figma CEO Dylan Field, Front CEO Mathilde Collin, and Instagram co-founders Kevin Systrom and Mike Krieger.

“At Instagram, one of the biggest things we did was focus on extreme performance and extreme ease of use and that meant optimizing every screen, doing really creative things about when we started uploading, optimizing everything from video codec to networking” Krieger says. “Since then I feel like some products have managed to try to capture some of that but few as much as Loom did. When I first used Loom I turned to Kevin who was my Instagram co-founder and said, ‘oh my god, how did they do that? This feels impossibly fast.'”

Systrom concurs about the similarities, saying “I’m most excited because I see how they’re tackling the problem of visual communication in the same way that we tried to tackle that at Instagram.” Loom is looking to double-down there, potentially adding the ability to Like and follow videos from your favorite productivity gurus or sharpest co-workers.

Loom is also prepping some of its most requested features. The startup is launching an iOS app next month with Android coming the first half of 2020, improving its video editor with blurring for hiding your bad hair day and stitching to connect multiple takes. New branding options will help external sales pitches and presentations look right. What I’m most excited for is transcription, which is also slated for the first half of next year through a partnership with another provider, so you can skim or search a Loom. Sometimes even watching at 2X speed is too slow.

But the point of raising a massive $30 million Series B just a year after Loom’s $11 million Kleiner-led Series A is to nail the enterprise product and sales process. To date, Loom has focused on a bottom-up distribution strategy similar to Dropbox. It tries to get so many individual employees to use Loom that it becomes a team’s default collaboration software. Now it needs to grow up so it can offer the security and permissions features IT managers demand. Loom for teams is rolling out in beta access this year before officially launching in early 2020.

Loom’s bid to become essential to the enterprise, though, is its team video library. This will let employees organize their Looms into folders of a knowledge base so they can explain something once on camera, and everyone else can watch whenever they need to learn that skill. No more redundant one-off messages begging for a team’s best employees to stop and re-teach something. The Loom dashboard offers analytics on who’s actually watching your videos. And integration directly into popular enterprise software suites will let recipients watch without stopping what they’re doing.

To build out these features Loom has already grown to a headcount of 45. It’s also hired away former head of growth at Dropbox Nicole Obst, head of design for Slack Joshua Goldenberg, and VP of commercial product strategy for Intercom Matt Hodges.

Still, the elephants in the room remain Slack and Microsoft Teams. Right now, they’re mainly focused on text messaging with some additional screensharing and video chat integrations. They’re not building Loom-style asynchronous video messaging…yet. “We want to be clear about the fact that we don’t think we’re in competition with Slack or Microsoft Teams at all. We are a complementary tool to chat” Thomas insists. But given the similar productivity and communication ethos, those incumbents could certainly opt to compete.

Loom co-founder and CEO Joe Thomas

Hodges, Loom’s head of marketing, tells me “I agree Slack and Microsoft could choose to get into this territory, but what’s the opportunity cost for them in doing so? It’s the classic build vs. buy vs. integrate argument.” Slack bought screensharing tool Screenhero, but partners with Zoom and Google for video chat. Loom will focus on being easily integratable so it can plug into would-be competitors. And Hodges notes that “Delivering asynchronous video recording and sharing at scale is non-trivial. Loom holds a patent on its streaming, transcoding, and storage technology, which has proven to provide a competitive advantage to this day.”

The tea leaves point to video invading more and more of our communication, so I expect rival startups and features to Loom will crop up. As long as it has the head start, it needs to move as fast as it can. “It’s really hard to maintain focus to deliver on the core product experience that we set out to deliver versus spreading ourselves too thin. And this is absolutely critical” Thomas tells me.

One thing that could set Loom apart? A commitment to financial fundamentals. “When you grow really fast, you can sometimes lose sight of what is the core reason for a business entity to exist, which is to become profitable. . . Even in a really bold market where cash can be cheap, we’re trying to keep profitability at the top of our minds.”

 



Facebook prototypes Favorites for close friends microsharing

00:30 | 23 November

Facebook is building its own version of Instagram Close Friends, the company confirms to TechCrunch. There’s a lot people don’t share on Facebook because it can feel risky or awkward, since its definition of “friends” has swelled to include family, work colleagues, and distant acquaintances. No one wants their boss or grandma seeing their weekend partying or edgy memes. There are whole types of sharing, like Snapchat’s Snap Map-style live location tracking, that feel creepy to expose to such a wide audience.

The social network needs to get a handle on microsharing. Yet Facebook has tried and failed over the years to get people to build Friend Lists for posting to different subsets of their network.

Back in 2011 Facebook said that 95 percent of users hadn’t made a single list. So it tried auto-grouping people into Smart Lists like High School Friends and Co-Workers, and offered manual always-see-in-feed Close Friends and only-see-important-updates Acquaintances lists. But they too saw little traction and few product updates in the past 8 years. Facebook ended up shutting down Friend Lists Feeds last year for viewing what certain sets of friends shared.

Then a year ago, Instagram made a breakthrough. Instead of making a complicated array of Friend Lists you could never remember who was on, it made a single Close Friends list with a dedicated button for sharing to them from Stories. Instagram’s research had found 85% of a user’s Direct messages go to the same 3 people, so why not make that easier for Stories without pulling everyone into a group thread? Last month I wrote that “I’m surprised Facebook doesn’t already have its own Close Friends feature, and it’d be smart to build one.”

How Facebook Favorites Works

Now Facebook is in fact prototyping its version of Instagram Close Friends called Favorites. It lets users designate certain friends as Favorites, and then instantly post their Story from Facebook or Messenger to just those people instead of all their friends as is the default.

The feature was first spotted inside Messenger by reverse engineering master and frequent TechCrunch tipster Jane Manchun Wong. Buried in the Android app is the code that let Wong generate the screenshots above of this unreleased feature. They show how, when users go to share a Story from Messenger, Facebook offers to let them post it to Favorites, and edit who’s on that list or add to it from algorithmic suggestions. Users in that Favorites list would then be the only recipients of that post within Stories, like with Instagram Close Friends.

 

A Facebook spokesperson confirmed to me that this feature is a prototype that the Messenger team created. It’s an early exploration of the microsharing opportunity, and the feature isn’t officially testing internally with employees or publicly in the wild. The spokesperson describes the Favorites feature as a type of shortcut for sharing to a specific set of people. They tell me that Facebook is always exploring new ways to share, and as discussed at its F8 conference this year, Facebook is focused on improving the experience of sharing with and staying more connected to your closest friends.

Unlocking Creepier Sharing

There are a ton of benefits Facebook could get from a Favorites feature if it ever launches. First, users might share more often if they can make content visible to just their best pals since those people wouldn’t get annoyed by over-posting. Second, Facebook could get new, more intimate types of content shared, from the heartfelt and vulnerable to the silly and spontaneous to the racy and shocking — stuff people don’t want every single person they’ve ever accepted a friend request from to see. Favorites could reduce self-censorship.

“No one has ever mastered a close friends graph and made it easy for people to understand . . . People get friend requests and they feel pressure to accept” Instagram director of product Robby Stein told me when it launched Close Friends last year. “The curve is actually that your sharing goes up and as you add more people initially, as more people can respond to you. But then there’s a point where it reduces sharing over time.” Google+, Path, and other apps have died chasing this purposefully selective microsharing behavior.

Facebook Favorites could stimulate lots of sharing of content unique to its network, thereby driving usage and ad views. After all, Facebook said in April that it had 500 million daily Stories users across Facebook and Messenger, the same number as Instagram Stories and WhatsApp Status.

Before Instagram launched Close Friends, it actually tested the feature under the name Favorites and allowed you to share feed posts as well as Stories to just that subset of people. And last month Instagram launched the Close Friends-only messaging app Threads that lets you share your Auto-Status about where or what you’re up to.

Facebook Favorites could similarly unlock whole new ways to connect. Facebook can’t follow some apps like Snapchat down more privacy-centric product paths because it knows users are already uneasy about it after 15 years of privacy scandals. Apps built for sharing to different graphs than Facebook have been some of the few social products that have succeeded outside its empire, from Twitter’s interest graph, to TikTok’s fandoms of public entertainment, to Snapchat’s messaging threads with besties.

Instagram Threads

A competent and popular Facebook Favorites could let it try products in location, memes, performances, Q&A, messaging, livestreaming, and more. It could build its own take on Instagram Threads, let people share exact location just with Favorites instead of just what neighborhood they’re in with Nearby Friends, or create a dedicated meme resharing hub like the LOL experiment for teens it shut down. At the very least, it could integrate with Instagram Close Friends so you could syndicate posts from Instagram to your Facebook Favorites.

The whole concept of Favorites aligns with Facebook CEO Mark Zuckerberg’s privacy-focused vision for social networking. “Many people prefer the intimacy of communicating one-on-one or with just a few friends” he writes. Facebook can’t just be the general purpose catch-all social network we occasionally check for acquaintances’ broadcasted life updates. To survive another 15 years, it must be where people come back each day to get real with their dearest friends. Less can be more.

 


