Blog of the website «TechCrunch»

People

John Smith

John Smith, 48

Joined: 28 January 2014

Interests: No data

Jonnathan Coleman

Jonnathan Coleman, 32

Joined: 18 June 2014

About myself: You may say I'm a dreamer

Interests: Snowboarding, Cycling, Beer

Andrey II

Andrey II, 41

Joined: 08 January 2014

Interests: No data

David

David

Joined: 05 August 2014

Interests: No data

David Markham

David Markham, 65

Joined: 13 November 2014

Interests: No data

Michelle Li

Michelle Li, 41

Joined: 13 August 2014

Interests: No data

Max Almenas

Max Almenas, 53

Joined: 10 August 2014

Interests: No data

29Jan

29Jan, 32

Joined: 29 January 2014

Interests: No data

s82 s82

s82 s82, 26

Joined: 16 April 2014

Interests: No data

Wicca

Wicca, 37

Joined: 18 June 2014

Interests: No data

Phebe Paul

Phebe Paul, 27

Joined: 08 September 2014

Interests: No data

Артем Ступаков

Артем Ступаков, 93

Joined: 29 January 2014

About myself: Enjoying life!

Interests: No data

sergei jkovlev

sergei jkovlev, 59

Joined: 03 November 2019

Interests: music, cinema, cars

Алексей Гено

Алексей Гено, 8

Joined: 25 June 2015

About myself: Hi

Interests: Interest1daasdfasf, http://apple.com

technetonlines

technetonlines

Joined: 24 January 2019

Interests: No data



Main article: Artificial intelligence


Study of YouTube comments finds evidence of radicalization effect

01:21 | 29 January

Research presented at the ACM FAT 2020 conference in Barcelona today supports the notion that YouTube’s platform is playing a role in radicalizing users via exposure to far-right ideologies.

The study, carried out by researchers at Switzerland’s Ecole polytechnique fédérale de Lausanne and the Federal University of Minas Gerais in Brazil, found evidence that users who engaged with a middle ground of extreme right-wing content migrated to commenting on the most fringe far-right content.

A March 2018 New York Times article by sociologist Zeynep Tufekci set out the now widely reported thesis that YouTube is a radicalization engine, while follow-up reporting by journalist Kevin Roose told a compelling tale of the personal experience of an individual, Caleb Cain, who described falling down an “alt right rabbit hole” on YouTube. But researcher Manoel Horta Ribeiro, who was presenting the paper today, said the team wanted to see if they could find auditable evidence to support such anecdotes.

Their paper, called Auditing radicalization pathways on YouTube, details a large-scale study of YouTube looking for traces of evidence — in likes, comments and views — that certain right-leaning YouTube communities are acting as gateways to fringe far-right ideologies.

Per the paper, they analyzed 330,925 videos posted on 349 channels — broadly classifying the videos into four types: Media, the Alt-lite, the Intellectual Dark Web (I.D.W.), and the Alt-right — and using user comments as a “good enough” proxy for radicalization (their data-set included 72 million comments).

The findings suggest a pipeline effect over a number of years where users who started out commenting on alt-lite/IDW YouTube content shifted to commenting on extreme far-right content on the platform over time.
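
At its core the analysis is a cohort comparison across years: take the users who in an early year commented only on the milder (Alt-lite / I.D.W.) channels, and measure what fraction of them show up commenting on Alt-right channels later. A minimal sketch of that migration measure, using made-up records and illustrative channel labels rather than the paper's actual data pipeline, might look like this:

```python
from collections import defaultdict

# Illustrative comment records: (user_id, year, channel_class), where
# channel_class is one of "media", "alt_lite", "idw", "alt_right".
comments = [
    ("u1", 2015, "alt_lite"), ("u1", 2017, "alt_right"),
    ("u2", 2015, "idw"),      ("u2", 2017, "idw"),
    ("u3", 2015, "media"),    ("u3", 2017, "media"),
]

def classes_by_year(records):
    """Map (user, year) -> set of channel classes the user commented on."""
    seen = defaultdict(set)
    for user, year, cls in records:
        seen[(user, year)].add(cls)
    return seen

def migration_rate(records, start_year, end_year, from_classes, to_class):
    """Fraction of users who commented only on `from_classes` channels in
    start_year and went on to comment on `to_class` channels in end_year."""
    seen = classes_by_year(records)
    cohort = {user for (user, year), classes in seen.items()
              if year == start_year and classes and classes <= from_classes}
    if not cohort:
        return 0.0
    migrated = {u for u in cohort if to_class in seen.get((u, end_year), set())}
    return len(migrated) / len(cohort)

# Of the users who commented exclusively on Alt-lite/I.D.W. content in 2015,
# what share had commented on Alt-right content by 2017?
print(migration_rate(comments, 2015, 2017, {"alt_lite", "idw"}, "alt_right"))
```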

The rate of overlap between consumers of Media content and the alt-right, by contrast, was found to be far lower.

“A significant amount of commenting users systematically migrates from commenting exclusively on milder content to commenting on more extreme content,” they write in the paper. “We argue that this finding provides significant evidence that there has been, and there continues to be, user radicalization on YouTube, and our analyses of the activity of these communities… is consistent with the theory that more extreme content ‘piggybacked’ on the surge in popularity of I.D.W. and Alt-lite content… We show that this migration phenomenon is not only consistent throughout the years, but also that it is significant in its absolute quantity.”

The researchers were unable to determine the exact mechanism involved in migrating YouTube users from consuming ‘alt-lite’ politics to engaging with the most fringe and extreme far-right ideologies, citing a couple of key challenges on that front: limited access to recommendation data, and the fact that the study did not take personalization (which can affect a user’s recommendations on YouTube) into account.

But even without personalization they say they were “still able to find a path in which users could find extreme content from large media channels”.

During a conference Q&A after presenting the paper, Horta Ribeiro was asked what evidence they had that the radicalization effect the study identifies had occurred through YouTube, rather than via some external site — or because the people in question were more radicalized to begin with (and therefore more attracted to extreme ideologies) vs the notion of YouTube itself being an active radicalization pipeline.

He agreed it’s difficult to make an absolute claim that YouTube is to blame, but he also argued that, as host to these communities, the platform is responsible.

“We do find evident traces of user radicalization, and I guess the question asks why is YouTube responsible for this? And I guess the answer would be because many of these communities they live on YouTube and they have a lot of their content on YouTube and that’s why YouTube is so deeply associated with it,” he said.

“In a sense I do agree that it’s very hard to make the claim that the radicalization is due to YouTube or due to some recommender system or that the platform is responsible for that. It could be that something else is leading to this radicalization and in that sense I think that the analysis that we make it shows there is this process of users going from milder channels to more extreme ones. And this solid evidence towards radicalization because people that were not exposed to this radical content become exposed. But it’s hard to make strong causal claims — like YouTube is responsible for that.”

We reached out to YouTube for a response to the research but the company did not reply to our questions.

The company has tightened its approach towards certain far right and extremist content in recent years, in the face of growing political and public pressure over hate speech, targeted harassment and radicalization risks.

It has also been experimenting with reducing algorithmic amplification of certain types of potentially damaging nonsense content that falls outside its general content guidelines — such as malicious conspiracy theories and junk science.

 



Five reasons you (really) don’t want to miss TechCrunch’s AI and Robotics show on March 3

00:00 | 29 January

TechCrunch’s fourth Robotics and AI show is coming up on March 3 at UC Berkeley’s Zellerbach Hall. If past experience is any guide, the show is sure to draw a big crowd (cheap student rates here!) but there’s still time to grab a pass. If you’re wondering why you want to take a day out to catch a full day of interviews and audience Q&A with the world’s top robotics and AI experts, read on.  

It’s the software / AI, stupid. So said (in so many words) the legendary surgical robotics founder Dr. Frederic Moll at Disrupt SF last year. And this year’s agenda captures that reality from many angles. UC Berkeley’s Stuart Russell will discuss his provocative book on AI – Human Compatible – and the deeply important topic of AI ‘explainability’ will be front and center with SRI’s Karen Myers, Fiddler Labs’ Krishna Gade and UC Berkeley’s Trevor Darrell. Then there is the business of developing and sustaining robots, whether at startups, which is where Freedom Robotics’ Joshua Wilson comes in, or at large enterprises, with Vicarious’ D. Scott Phoenix.

Robotics founders have more fun. That’s why we have a panel of the three top founders in agricultural robotics as well as another three on construction robotics and two on human assistive robotics, plus a pitch competition featuring five additional founders, each carefully chosen from a large pool of applicants. We’ll also bring a few of those founders back for a separate audience Q&A. Meet tomorrow’s big names in robotics today!

Big companies do robots too. No one knows that better than Amazon’s top roboticist, Tye Brady, who already presides over 100,000 warehouse robots. The editors are eager to hear what’s next in Amazon’s ambitious automation plans. Toyota’s robotics focus is mobility, and Toyota Research Institute’s TRI-AD CEO James Kuffner and TRI VP of Robotics Max Bajracharya will discuss projects they plan to roll out at the Tokyo Olympics. And if that’s not enough, Maxar Technologies’ Lucy Condakchian will show off Maxar’s robotic arm that will travel to Mars aboard the fifth Mars Rover mission later this year.

Robotics VCs are chill (once you get to know them). We will have three check writers on stage for the big talk about where they see the best investments emerging – Eric Migicovsky (Y Combinator), Kelly Chen (DCVC) and Dror Berman (Innovation Endeavors) – plus two separate audience Q&A sessions: one with notable robotics / AI VCs Rob Coneybeer (Shasta) and Aaron Jacobson (NEA), and a second with corporate VCs Quinn Li (Qualcomm) and Kass Dawson (SoftBank).

Network, recruit, repeat. Last year there were 1500 attendees at this show, and they were the cream of the robotics world – founders, investors, technologists, executives and engineering students. Expect nothing less this year. TechCrunch’s CrunchMatch mobile app makes meeting folks super easy, plus the event is in UC Berkeley’s Zellerbach Hall – a sunny happy place that naturally spins up great conversations. Don’t miss out.

Our Early Bird Ticket sale ends this Friday – book your tickets today and save $150 before prices increase. Students can book a super-discounted ticket for just $50 right here.

 



Modified HoloLens helps teach kids with vision impairment to navigate the social world

22:59 | 28 January

Growing up with blindness or low vision can be difficult for kids, not just because they can’t read the same books or play the same games as their sighted peers; vision is also a big part of social interaction and conversation. This Microsoft research project uses augmented reality to help kids with vision impairment “see” the people they’re talking with.

The challenge people with vision impairment encounter is, of course, that they can’t see the other people around them. This can prevent them from detecting and using many of the nonverbal cues sighted people use in conversation, especially if those behaviors aren’t learned at an early age.

Project Tokyo is a new effort from Microsoft in which its researchers are looking into how technologies like AI and AR can be useful to all people, including those with disabilities. That’s not always the case, though it must be said that voice-powered virtual assistants are a boon to many who can’t as easily use a touchscreen or mouse and keyboard.

The team, which started as an informal challenge to improve accessibility a few years ago, began by observing people traveling to the Special Olympics, then followed that up with workshops involving the blind and low vision community. Their primary realization was of the subtle context sight gives in nearly all situations.

“We, as humans, have this very, very nuanced and elaborate sense of social understanding of how to interact with people — getting a sense of who is in the room, what are they doing, what is their relationship to me, how do I understand if they are relevant for me or not,” said Microsoft researcher Ed Cutrell. “And for blind people a lot of the cues that we take for granted just go away.”

In children this can be especially pronounced, as having perhaps never learned the relevant cues and behaviors, they can themselves exhibit antisocial tendencies like resting their head on a table while conversing, or not facing a person when speaking to them.

To be clear, these behaviors aren’t “problematic” in themselves, as they are just the person doing what works best for them, but they can inhibit everyday relations with sighted people, and it’s a worthwhile goal to consider how those relations can be made easier and more natural for everyone.

The experimental solution Project Tokyo has been pursuing involves a modified HoloLens — minus the lens, of course. Even without its display, the device is a highly sophisticated imaging rig that can identify objects and people if provided with the right code.

The user wears the device like a high-tech headband, and a custom software stack provides them with a set of contextual cues (a rough sketch of the cue logic follows the list):

  • When a person is detected, say four feet away on the right, the headset will emit a click that sounds like it is coming from that location.
  • If the face of the person is known, a second “bump” sound is made and the person’s name announced (again, audible only to the user).
  • If the face is not known or can’t be seen well, a “stretching” sound is played that modulates as the user directs their head towards the other person, ending in a click when the face is centered on the camera (which also means the user is facing them directly).
  • For those nearby, an LED strip shows a white light in the direction of a person who has been detected, and a green light if they have been identified.
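
As a rough illustration of how cues like these could be driven by the headset's face detection, here is a minimal sketch; the sound names, data fields and "centered" logic are assumptions made for illustration, not Project Tokyo's actual code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetectedPerson:
    angle_deg: float         # bearing relative to the wearer, 0 = straight ahead
    name: Optional[str]      # None if the face is unknown or not clearly visible
    face_centered: bool      # True when the face is centered in the camera view

def audio_cue(person: DetectedPerson) -> str:
    """Describe the spatialized audio cue the headset would play (illustrative)."""
    if person.name is not None:
        # Known face: a click from the person's direction, then a 'bump'
        # and the person's name, audible only to the wearer.
        return f"click@{person.angle_deg:.0f}deg, bump, announce '{person.name}'"
    if person.face_centered:
        # Unknown face now centered on the camera: the stretching sound
        # resolves into a click, telling the wearer they are facing the person.
        return f"click@{person.angle_deg:.0f}deg"
    # Unknown or unclear face: a 'stretching' sound that modulates as the
    # wearer turns their head toward the person.
    return f"stretch@{person.angle_deg:.0f}deg (modulating)"

def led_colour(person: DetectedPerson) -> str:
    """LED strip for bystanders: white when detected, green once identified."""
    return "green" if person.name is not None else "white"

print(audio_cue(DetectedPerson(angle_deg=40, name="Theo", face_centered=True)))
print(led_colour(DetectedPerson(angle_deg=-15, name=None, face_centered=False)))
```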

Other tools are being evaluated, but this set is a start, and based on a case study with a game 12-year-old named Theo, they could be extremely helpful.

Microsoft’s post describing the system and the team’s work with Theo and others is worth reading for the details, but essentially Theo began to learn the ins and outs of the system and in turn began to manage social situations using cues mainly used by sighted people. For instance, he learned that he can deliberately direct his attention at someone by turning his head towards them, and developed his own method of scanning the room to keep tabs on those nearby — neither one possible when one’s head is on the table.

That kind of empowerment is a good start, but this is definitely a work in progress. The bulky, expensive hardware isn’t exactly something you’d want to wear all day, and naturally different users will have different needs. What about expressions and gestures? What about signs and menus? Ultimately the future of Project Tokyo will be determined, as before, by the needs of the communities who are seldom consulted when it comes to building AI systems and other modern conveniences.

 



RealityEngines launches its autonomous AI service

20:00 | 28 January

RealityEngines.AI, an AI and machine learning startup founded by a number of former Google executives and engineers, is coming out of stealth today and announcing its first set of products.

When the company first announced its $5.25 million seed round last year, CEO Bindu Reddy wasn’t quite ready to disclose RealityEngines’ mission beyond saying that it planned to make machine learning easier for enterprises. With today’s launch, the team is putting this into practice by launching a set of tools that specifically tackle a number of standard enterprise use cases for ML, including user churn predictions, fraud detection, sales lead forecasting, security threat detection and cloud spend optimization. For use cases that don’t fit neatly into these buckets, the service also offers a more general predictive modeling service.

Before co-founding RealityEngines, Reddy was the head of product for Google Apps and general manager for AI verticals at AWS. Her co-founders are Arvind Sundararajan (formerly at Google and Uber) and Siddartha Naidu (who founded BigQuery at Google). Investors in the company include Eric Schmidt, Ram Shriram, Khosla Ventures and Paul Buchheit.

As Reddy noted, the idea behind this first set of products from RealityEngines is to give businesses an easy entry into machine learning, even if they don’t have data scientists on staff.

Besides talent, another issue that businesses often face is that they don’t always have massive amounts of data to train their networks effectively. That has long been a roadblock for many companies that want to see what AI can do for them but that didn’t have the right resources to do so. RealityEngines overcomes this by creating realistic synthetic data that it can then use to augment a company’s existing data. In its tests, this creates models that are up to 15 percent more accurate than models that were trained without the synthetic data.

“The most prominent use of generative adversarial networks — GANs — has been to create deep fakes,” said Reddy. “Deepfakes have captured the public’s imagination by highlighting how easy it is to spread misinformation with these doctored videos and images. However, GANs can also be applied to productive and good use. They can be used to create synthetic datasets which can then be combined with the original data to produce robust AI models even when a business doesn’t have much training data.”
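
RealityEngines hasn't published its implementation, but the general pattern Reddy describes (fit a generative model on the limited real data, sample synthetic rows, and train the downstream model on the combined set) can be sketched as follows. A Gaussian mixture stands in for the GAN purely to keep the example self-contained; the helper names are illustrative assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

def augment_and_train(X, y, n_synthetic=500):
    """Fit a simple generative model per class, sample synthetic rows, and
    train a classifier on real + synthetic data. A production system might
    use a GAN here; a Gaussian mixture keeps the sketch self-contained."""
    X_parts, y_parts = [X], [y]
    for label in np.unique(y):
        generator = GaussianMixture(n_components=2).fit(X[y == label])
        synthetic, _ = generator.sample(n_synthetic)
        X_parts.append(synthetic)
        y_parts.append(np.full(len(synthetic), label))
    X_all = np.vstack(X_parts)
    y_all = np.concatenate(y_parts)
    return LogisticRegression(max_iter=1000).fit(X_all, y_all)

# Toy usage: 100 real rows with four features and a binary churn-style label.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = (X[:, 0] + rng.normal(scale=0.5, size=100) > 0).astype(int)
model = augment_and_train(X, y)
print(model.score(X, y))
```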

RealityEngines currently has about 20 employees, most of whom have a deep background in ML/AI, both as researchers and practitioners.

 

 



72 hours left to save $150 on tickets to TC Sessions: Robotics + AI 2020

19:45 | 28 January

We’re counting the days (35 to be precise) until TC Sessions: Robotics + AI 2020 takes place on March 3 in Berkeley, Calif. But we’re also counting the days that you can save on the price of admission. The early-bird pricing ends in just three days on January 31. Buy your ticket right here before that bird flies south, and you’ll save $150.

This single-day conference features interviews, panel discussions, Q&As and demos with the leaders, founders and investors focused on the future of robotics and AI. TechCrunch editors will interview the people making it happen, explore the promise, expose the hype and address the challenges of these revolutionary industries.

The lineup, as impressive as ever, also includes workshops and demos because who doesn’t want to see robots in action? From autonomous cars and assistive robotics to advances in agriculture and outer space, our conference agenda covers the leading edges of the complex and exciting world of robots and AI.

Here’s a taste of what we’re serving.

Engineering for the Red Planet: Maxar Technologies has been involved with U.S. space efforts for decades and is about to send its sixth robotic arm to Mars aboard NASA’s Mars 2020 rover. Lucy Condakchian, general manager of robotics at Maxar, will speak to the difficulty and exhilaration of designing robotics for use in the harsh environments of space and other planets.

Investing in Robotics and AI — Lessons from the Industry’s VCs: Leading investors will discuss the rising tide of venture capital funding in robotics and AI. Dror Berman, founding partner at Innovation Endeavors; Kelly Chen, partner at Data Collective DCVC; and Eric Migicovsky, general partner at Y Combinator, bring a combination of early stage investing and corporate venture capital expertise, sharing a fondness for the wild world of robotics and AI investing.

We’ve added a new, exciting element this year. It’s Pitch Night, a sort of mini Startup Battlefield. The night before the conference, 10 teams will pitch to an audience of VCs and other influencers at a private event. Judges will choose five finalists, and those teams will pitch again from the Main Stage at the conference. We’re taking applications until February 1, so apply right here. It’s free, and a great way to showcase your startup to the people who can supercharge your startup dreams.

Don’t miss your chance to learn from, share with and pitch to the brightest minds, makers, investors and researchers in robotics and AI. And don’t miss out on serious savings. Buy an early-bird ticket to TC Sessions: Robotics + AI 2020 — before prices go up on January 31 — and you’ll keep $150 in your wallet.

Is your company interested in sponsoring or exhibiting at TC Sessions: Robotics & AI 2020? Contact our sponsorship sales team by filling out this form.

 



ServiceNow acquires conversational AI startup Passage AI

17:24 | 28 January

ServiceNow announced this morning that it has acquired Passage AI, a startup that helps customers build chatbots, something that should come in handy as ServiceNow continues to modernize its digital service platform. The companies did not share terms of the deal.

With Passage AI, ServiceNow gets a bushel of AI talent, which in itself has value, but it also gets AI technology, which should fit in nicely with ServiceNow’s mission. For starters, the company’s chatbot solutions give ServiceNow an automated way to respond to customer/user inquiries.

Even more interesting for ServiceNow, Passage includes an IT automation component that uses “a conversational interface to submit tickets, handle queries, and take direct action through APIs,” according to the company website. It also gets an HR automation piece, giving the company an intelligent tool it could incorporate across its Now Platform in tools like ServiceNow Virtual Agent, Service Portal and Workspaces, in multiple languages.

The multi-lingual support was an aspect of the deal that appeals to Debu Chatterjee, senior director of AI Engineering at ServiceNow. “Building deep learning, conversational AI capabilities into the Now Platform will enable a work request initiated in German or a customer inquiry initiated in Japanese to be solved by Virtual Agent,” he said in a statement.

Companies are increasingly looking for ways to solve common customer problems using chatbots, while only bringing humans into the loop when the bot can’t answer the query. Passage AI gives ServiceNow much deeper knowledge in this growing area.

Passage AI, which launched in 2016, has raised $10.3 million, according to Crunchbase data. The company website lists a variety of large customers including MasterCard, Shell, Mercedes Benz and SoftBank. The acquisition comes less than a week after ServiceNow purchased another AI-focused startup, Loom Systems, which concentrates on automating operations data.

The deal is expected to close this quarter. ServiceNow will be announcing earnings on Wednesday afternoon.

 



Pecan.ai launches with $11M Series A to automate machine learning

16:24 | 28 January

Pecan.ai, a startup that wants to help business analysts build machine learning models in an automated fashion, emerged from stealth today and announced an $11 million Series A.

The round was led by Dell Technologies Capital and S Capital. Along with a previously unannounced $4 million seed round, the company has raised a total of $15 million.

CEO Zohar Bronfman says he and co-founder Noam Brezis, whom he has known for more than a decade, started the company with the goal of building an automated machine learning platform. They observed that much of the work involved in building machine learning models is about getting data in a form that the algorithm can consume, something they’ve automated in Pecan.

“The innovative thing about Pecan is that we do all of the data preparation, data engineering, and data processing, and [complete the] various technical steps [for you],” Bronfman explained.

The target user is a business analyst using business intelligence and analytics tools, who wants to bring the power of machine learning to their data analysis, but lacks the skills to do it. “The business analyst knows the data very well, knows the business problem very well and speaks directly to the business owner of the problem — and they are currently conducting basic analytics,” he said.

Pecan includes a series of templates designed to answer common business questions. They divide these into two main categories. The first is customer questions like how much churn do we have, and the second is business operations questions related to things like risk or fraud. If the question doesn’t fall into one of these categories, it is possible to build your own template, but Bronfman says that is really for more advanced users.

After you select the template and point to a data source such as a database, data lake or CRM repository, Pecan does the work of connecting to the source and pulling data into a dashboard. You can also export the algorithm for use in an external service or application, or Pecan can automatically update a data repository with data the algorithm is measuring such as churn rate.
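
Pecan's own interfaces aren't public, so the flow described above can only be illustrated generically. The sketch below shows the same sequence (pick a question, point at a data source, train, write predictions back) using a hypothetical SQLite table and scikit-learn; none of the names come from Pecan's actual product:

```python
import sqlite3
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical churn-template flow: connect to a source, pull a labeled
# table, fit a model, and write risk scores back to the repository.
# None of these table or column names come from Pecan's actual product.

def run_churn_template(db_path: str, table: str = "customers"):
    con = sqlite3.connect(db_path)
    df = pd.read_sql(f"SELECT * FROM {table}", con)

    features = df.drop(columns=["customer_id", "churned"])
    model = GradientBoostingClassifier().fit(features, df["churned"])

    # Update the repository with the metric the model is measuring,
    # as the article describes for things like churn rate.
    df["churn_risk"] = model.predict_proba(features)[:, 1]
    df[["customer_id", "churn_risk"]].to_sql(
        "churn_scores", con, if_exists="replace", index=False)
    con.close()
    return model
```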

The founders have been building this platform since 2016 when they founded the company, and have been working with beta customers for the last 18 months or so. Today, they emerge from stealth and bring Pecan to market in earnest.

Bronfman plans to move to New York City and open a sales and marketing office in the US, while Brezis will remain in Tel Aviv and oversee engineering. It’s early days for this startup, but with $11 million in capital, it has a chance to take the product to market and see what happens.

 



Persona raises $17.5M for an identity verification platform that goes beyond user IDs and passwords

13:19 | 28 January

The rising number of data breaches based on leaked passwords has revealed the holes in simple password and memorable-information-based verification systems. Today a startup called Persona, which has built a platform to make it easier for organisations to implement more watertight verification methods based on third-party documentation and more, is announcing a round, speaking to the shift in the market.

The startup has raised $17.5 million in a Series A round of funding from a list of investors that includes Coatue and First Round Capital, money that it plans to use to double down on its core product: a platform that businesses and organisations can access by way of an API, which lets them use a variety of documents, from government-issued IDs through to biometrics, to verify that customers are who they say they are.

Current customers include Rippling, Petal, UrbanSitter, Branch, Brex, Postmates, Outdoorsy, Rently, SimpleHealth and Hipcamp, and the list extends to any company involved in any kind of online financial transaction that needs to verify users for regulatory compliance, fraud prevention, and trust and safety.

(The company is young and is not disclosing valuation. Previously, the company had raised an undisclosed amount of funding from Kleiner Perkins and FirstRound, according to data from PitchBook. Angels in the company have included Zach Perret and William Hockey (co-founders of Plaid), Dylan Field (founded Figma), Scott Belsky (Behance) and Tony Xu (DoorDash).)

Founded by Rick Song and Charles Yeh, respectively former engineers from Square and Dropbox (companies that will have had their own struggles and concerns with identity verification), Persona’s main premise is that most companies are not security companies and therefore lack the people, skills, time and money to build strong authentication and verification services, much less to keep up with the latest developments on what is best practice.

And on top of that, there have been too many breaches that have laid bare the problem with companies holding too much information on users, collected for identification purposes but then sitting there waiting to be hacked.

The name of the game for Persona is to provide services that are easy to use for customers — for those who can’t or don’t want to touch the code of their apps or websites for registration flows, Persona can even verify users by way of email-based links.

“Digital identity is one of the most important things to get right, but there is no silver bullet,” Song, who is the CEO, said in an interview. “I believe longer term we’ll see that it’s not a one-size-fits-all approach.” Not least because malicious hackers have an ever-increasing array of tools to get around every system that gets put into place. (The latest is the rise of deep-fakes to mimic people, putting into question how to get around that in, say, a video verification system.)

At Persona, the company currently gives customers the option to ask for social security numbers, biometric verification such as fingerprints or pictures, or government ID uploads and phone lookups, some of which (like biometrics) is built by Persona itself and some of which is accessed via third-party partnerships. Added to that are other tools like quizzes and video-based interactions. Song said the list is expanding, and the company is looking at ways of using the AI engine that it’s building — which actually performs the matching — to also potentially suggest the best tools for each and every transaction.

The key point is that in every case, information is accessed from other databases, not kept by the customer itself.
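
From the integrating business's side, a flow like this usually boils down to one server-side API call per verification. The endpoint, field names and check types in the sketch below are hypothetical, not Persona's real API; it is only meant to show the shape of handing documents to a third-party verifier and keeping just the outcome:

```python
import requests

# Hypothetical endpoint and payload: a generic sketch of a document-plus-
# selfie verification call, not Persona's real API.
VERIFY_URL = "https://api.example-verification.com/v1/verifications"

def verify_user(api_key: str, user_id: str, id_image_path: str, selfie_path: str) -> bool:
    with open(id_image_path, "rb") as gov_id, open(selfie_path, "rb") as selfie:
        resp = requests.post(
            VERIFY_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            data={"user_id": user_id, "checks": "government_id,selfie_match"},
            files={"government_id": gov_id, "selfie": selfie},
            timeout=30,
        )
    resp.raise_for_status()
    # The business stores only the pass/fail outcome, not the documents
    # themselves, so the sensitive data stays with the verification provider.
    return resp.json().get("status") == "approved"
```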

This is a moving target, and one that is becoming increasingly hard to focus on, given not just the rise in malicious hacking, but also regulation that limits how and when data can be accessed and used by online businesses. Persona notes a McKinsey forecast that the personal identity and verification market will be worth some $20 billion by 2022, which is not a surprising figure when you consider the nearly $9 billion that Google has been fined so far for GDPR violations, or the $700 million Equifax paid out, or the $50 million Yahoo (a sister company now) paid out for its own user-data breach.

 



4 days left to save $150 on tickets to TC Sessions: Robotics + AI 2020

18:30 | 27 January

The countdown to savings continues, and you have just four days left to score the best price on tickets to TC Sessions: Robotics + AI 2020. Join 1,500 of the brightest minds and innovators in robotics and machine learning — technologists, founders, investors, engineers and researchers. Buy an early-bird ticket now before prices go up on January 31, and you’ll keep $150 in your pocket. Why spend more when you don’t have to?

Get ready for a full day focused on the future of two technologies with the potential to change everything about the way we live. We have an outstanding line up of speakers, interviews and panel discussions covering a range of topics. And of course, plenty of demos, too.

We won’t just parrot the hype, either. Our editors will ask the hard questions, and the conference agenda includes discussions about the ethics and ramifications inherent with these potent technologies.

Here’s just a sample of what’s on tap.

  • Saving Humanity from AI: Stuart Russell, a UC Berkeley professor and AI authority, argues in his acclaimed new book, “Human Compatible,” that AI will doom humanity unless technologists fundamentally reform how they build AI algorithms.
  • Bringing Robots to Life: This summer’s Tokyo Olympics will be a huge proving ground for TRI-AD (Toyota Research Institute – Advanced Development). TRI-AD’s CEO James Kuffner and its VP of Robotics, Max Bajracharya, will join us to discuss the department’s plans for assistive robots and self-driving cars.

There’s plenty more waiting for you, including the finalists of our first Pitch Night. This group of intrepid robotics and AI startup founders made the cut (10 teams will pitch the night before the conference at a private event). The finalists will pitch again at the conference from the Main Stage. Think your startup has what it takes to throw down in a pitch-off? We’re accepting applications until February 1. Talk about a once-in-a-lifetime opportunity for focused exposure — apply right here today!

TC Sessions: Robotics + AI 2020 draws the top people in the industry, which makes it prime networking territory. Whether you’re looking for funding, hunting for the perfect startup to add to your portfolio or searching for the next generation of engineers, this is where you need to be. Come work it to your advantage.

TC Sessions: Robotics + AI 2020 takes place in Berkeley on March 3, and we’ve packed a lot of value and opportunity into one day. Make the most of it and remember, you’ll save $150 if you buy an early bird ticket before prices go up on January 31.

Is your company interested in sponsoring or exhibiting at TC Sessions: Robotics & AI 2020? Contact our sponsorship sales team by filling out this form.

 

 



A.I.-powered voice transcription app Otter raises $10M, including from new strategic investor NTT DOCOMO

18:18 | 27 January

Otter.ai, an A.I.-powered transcription app and note-takers’ best friend, has received a strategic investment from Japan’s leading mobile operator and new Otter partner, NTT DOCOMO Inc. The two companies are teaming up to support Otter’s expansion into the Japanese market where DOCOMO will be integrating Otter with its own A.I.-based translation service subsidiary, Mirai Translation, in order to provide accurate English transcripts which are then translated into Japanese.

The investment was made by DOCOMO’s wholly-owned subsidiary, NTT DOCOMO Ventures, Inc., but the size was undisclosed. However, the new round was $10 million in total, we’re told. To date, Otter has raised $23 million in funding from NTT DOCOMO Ventures, Fusion Fund, GGV, Draper Dragon, Duke University Innovation Fund, Harris Barton Asset Management, Slow Ventures, and others.

Otter launched its service in 2018, offering a way for users to search voice conversations as easily as they can today search their email or their texts. Otter CEO and founder Sam Liang, along with a team hailing from Google, Facebook, Nuance and Yahoo, as well as Stanford, Duke, M.I.T. and Cambridge, developed a technology specifically designed to capture conversations — like meetings, interviews, presentations, lectures and more. This is a different sort of technology than what’s used in today’s voice assistants, like Google Assistant, Siri and Alexa, as it’s focused on transcribing longer, human-to-human conversations, which are spoken naturally.

The product itself creates automated transcriptions in real-time, as speakers are talking. The resulting transcript is searchable, and identifies the different speakers and key phrases. You can also upload photos alongside the recording.
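
Otter's pipeline isn't public, but the shape of its output, a time-stamped, speaker-labeled transcript that can be searched like text, is easy to illustrate. The data layout below is an assumption for the sake of the example, not Otter's actual format:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start_s: float   # offset into the recording, in seconds
    speaker: str     # label assigned by speaker diarization, e.g. "Speaker 1"
    text: str

# A few illustrative segments of a diarized meeting transcript.
transcript = [
    Segment(0.0, "Speaker 1", "Let's review the launch timeline for March."),
    Segment(6.4, "Speaker 2", "Engineering needs two more weeks for QA."),
    Segment(12.1, "Speaker 1", "Then we move the launch to the 17th."),
]

def search(segments, query):
    """Return every segment containing the query, which is what makes a long
    recording searchable the way email or text messages are."""
    q = query.lower()
    return [(s.start_s, s.speaker, s.text) for s in segments if q in s.text.lower()]

for start, speaker, text in search(transcript, "launch"):
    print(f"{start:6.1f}s  {speaker}: {text}")
```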

Since launch, Otter has expanded its product to millions of users and now offers both Otter for Teams and an enterprise tier.

With the new NTT DOCOMO partnership, the goal is to bring the Otter enterprise collaboration services to the Japanese market, explains Liang, the former Google architect who later sold his location startup Alohar Mobile to Alibaba.

“DOCOMO and other large companies have a large international workforce who communicate in English for their international conference calls,” says Liang. “They will use Otter to take automatic meeting notes, and improve meeting and communication effectiveness…The goal is to further enhance communication and collaboration on top of Otter‘s automatic English meeting note services,” he adds.

Otter.ai has similar partnerships with U.S. businesses, including Zoom Video Communications and Dropbox.

As a result of the new partnership, Otter’s Voice Meeting Notes application is being used on a trial basis in Berlitz Corporation’s English language classes in Japan. Students are using Otter to transcribe and review their lessons, click on sections of text, and initiate voice playback. DOCOMO, Otter.ai and Berlitz are also expanding their collaboration in language education to verify Otter’s effectiveness in the study of English, the company says.

“The Japanese market values high-quality, detailed meeting notes, and Otter’s highly accurate A.I.-powered note-taker overcomes language barriers and improves the operating efficiency of Japanese companies with global operations,” said Tomoyoshi Oono, Senior Vice President and General Manager of the Innovation Management Department in the R&D Innovation Division at DOCOMO, in a statement about the deal. “There is a large business market opportunity for Otter.ai and DOCOMO’s translation service.”

DOCOMO is also featuring Otter during demonstrations at DOCOMO Open House 2020, taking place in the Tokyo Big Sight exhibition complex January 23 and 24, 2020. Here, Otter will transcribe the English-language presentations in real time, and the transcripts will then be translated into Japanese using DOCOMO’s machine translation technology. Both the English transcription and Japanese translation will be projected on a large screen for attendees to read.

While Otter’s transcriptions aren’t perfect in real-world scenarios, like where there’s background noise or muffled speaking, it does better when it can be connected directly to the audio source, like at big events. (TechCrunch, for example, used Otter’s service to transcribe audio at TechCrunch Disrupt in the past).

Otter’s new funding will also be used to hire more engineers and further enhance its A.I. technologies in speech recognition, diarization, speaker identification and automatic summarization, Liang tells TechCrunch. And the team will work to accelerate Otter’s adoption by enterprise customers in professional services, media and education.

 

