Blog of the website «TechCrunch»




Main article: Virtual assistant


Compound’s Mike Dempsey on virtual influencers and AI characters

19:27 | 18 January

In films, TV shows and books — and even in video games where characters are designed to respond to user behavior — we don’t perceive characters as beings with whom we can establish two-way relationships. But that’s poised to change, at least in some use cases.

Interactive characters — fictional, virtual personas capable of personalized interactions — are defining new territory in entertainment. In my guide to the concept of “virtual beings,” I outlined two categories of these characters:

  • virtual influencers: fictional characters with real-world social media accounts who build and engage with a mass following of fans.
  • virtual companions: AIs oriented toward one-to-one relationships, much like the tech depicted in the films “Her” and “Ex Machina.” They are personalized enough to engage us in entertaining discussions and respond to our behavior (in the physical world or within games) like a human would.

Part 1 of 3: the investor perspective

In a series of three interviews, I’m exploring the startup opportunities in both of these spaces in greater depth. First, Michael Dempsey, a partner at VC firm Compound who has blogged extensively about digital characters, avatars and animation, offers his perspective as an investor hunting for startup opportunities within these spaces.

 



Sonos acquires voice assistant startup Snips, potentially to build out on-device voice control

00:36 | 21 November

Sonos revealed during its quarterly earnings report that it has acquired voice assistant startup Snips in a $37 million cash deal, Variety reported on Wednesday. Snips, which had been developing dedicated smart device assistants that can operate primarily locally, instead of relying on consistently round-tripping voice data to the cloud, could help Sonos set up a voice control option for its customers that has “privacy in mind” and is focused more narrowly on music control than on being a general-purpose smart assistant.

Sonos has worked with both Amazon and Google and their voice assistants, providing support for either on its more recent products, including the Sonos Beam and Sonos One smart speakers. Both of these require an active cloud connection to work, however, and have received scrutiny from consumers and consumer protection groups recently for how they handle the data they collect from users. They’ve introduced additional controls to help users navigate their own data sharing, but Sonos CEO Patrick Spence noted in an interview with Variety that one of the things the company can do in building its own voice features is develop them “with privacy in mind.”

Notably, Sonos has introduced a version of its Sonos One that leaves out the microphone hardware altogether – the Sonos One SL, introduced earlier this fall. The fact that Sonos saw opportunity in a mic-less version of the Sonos One suggests there are a decent number of customers who like the option of a product that’s not round-tripping any information with a remote server. Spence also seemed quick to point out that Sonos wouldn’t seek to compete with its voice assistant partners, however, since anything it builds will be focused much more specifically on music.

You can imagine how local machine learning would be able to handle commands like skipping, pausing playback and adjusting volume (and maybe even more advanced features like playing back a saved playlist) without having to connect to any kind of cloud service. What Spence envisions seems to be exactly that: a system that provides basic controls, while still allowing customers the option of enabling one of the more full-featured voice assistants if they prefer.
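Neither Sonos nor Snips has published what such an integration would look like, but the architecture described above is easy to sketch: a small on-device recognizer maps a locally produced transcript to a fixed set of playback intents, and anything it can't handle falls through to an optional cloud assistant. The intent table, function and test phrases below are illustrative placeholders, not the Snips SDK.

```python
# A minimal sketch of fully on-device intent matching for music
# control. This is NOT the Snips SDK; it just illustrates why commands
# like "skip" or "pause" never need to leave the speaker.
import re
from typing import Optional

INTENTS = {
    "pause":       re.compile(r"\b(pause|stop)\b"),
    "resume":      re.compile(r"\b(play|resume)\b"),
    "skip":        re.compile(r"\b(skip|next)\b"),
    "volume_up":   re.compile(r"\bvolume up\b|\bturn (it|the volume) up\b"),
    "volume_down": re.compile(r"\bvolume down\b|\bturn (it|the volume) down\b"),
}

def resolve(transcript: str) -> Optional[str]:
    """Map a locally transcribed utterance to a playback intent."""
    text = transcript.lower()
    for intent, pattern in INTENTS.items():
        if pattern.search(text):
            return intent
    return None  # unknown request: hand off to a cloud assistant, if enabled

assert resolve("skip this track") == "skip"
assert resolve("turn the volume up") == "volume_up"
```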

Meanwhile, partnerships continue to prove lucrative for Sonos: Its team-up with Ikea resulted in 30,000 speakers sold on launch day, the company also shared alongside its earnings. That’s a lot to move in one day, especially in this category.

 



Salesforce wants to bring voice to the workplace

16:00 | 19 November

At its annual Dreamforce mega-conference in San Francisco, Salesforce today introduced the next steps in its Einstein Voice project, which it first announced last year. Einstein Voice is the company’s AI voice assistant. You can think of it as Salesforce’s Alexa or Google Assistant, but with a more focused mission.

During a briefing ahead of the event, Salesforce Chief Product Officer Bret Taylor showed off an Alexa-enabled Einstein speaker (Salesforce chairman and co-CEO Marc Benioff was supposed to be at the meeting, too, but for unknown reasons he didn’t show) — and yes, it looked like Salesforce’s Einstein cartoon figure, and its voluminous white hair lit up when it responded to queries. The company isn’t planning on making these devices available to the public, but it does show off the work the company has done with Amazon to integrate the service (though this is by no means an Amazon exclusive, since the company is also working to bring Einstein to Google devices).

The theory here, as Taylor explained, is that having access to Salesforce data through voice will enable salespeople to quickly enter data into Salesforce when they are on the go and to ask the system questions about their data. The company argues that while voice assistants have found a place in the home, there are a lot of upsides to bringing them to businesses as well. That means a system has to account for the security needs of enterprises, as well as for the wide range of different user personas it will serve.

“We’re really excited about the idea of voice in businesses — the idea that every business can have an AI guide to their business decisions,” Taylor said. “I view it as part of this progression of technology. Computers and software started in the terminal with a keyboard, moved to a mouse and graphical user interface thanks to Xerox PARC, and then, thanks to Steve Jobs, moved to a touchscreen, which I think is probably the dominant form factor for computers nowadays. And voice is really that next step.”

This next step, Taylor argues, will allow companies to rethink how people interact with software and data. With voice, Einstein, which is Salesforce’s catch-all name for its AI products, has a “seat at the table,” he noted, because you can simply ask the system a question if you need additional data during a conversation. But the real mission here is to bring these tools to every business — not just to Salesforce’s executive meetings.

To enable this, Salesforce is launching a tool that will allow anybody within a company to quickly build basic Einstein skills to pull up data from Salesforce. These skills focus on data input and relatively basic queries, for now. During a demo ahead of the event, the team showed off how easy it would be to enable a manager to ask about the current sales performance of their team, for example. By no means, though, is this tool as rich as products like Google’s Dialogflow or Microsoft’s Azure Bot Service. It’s nowhere near as flexible yet, but the team notes that it’s still early days and that it is working on enabling more complex dialogs with Einstein in the future.
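Salesforce hasn't documented what these generated skills look like under the hood, but the basic shape — a spoken question resolved via a CRM query — can be sketched. The example below is purely illustrative: it uses the third-party simple_salesforce client rather than any Einstein Voice API, and the handler, credentials and query are made up for the "team sales performance" question from the demo.

```python
# Hypothetical sketch of a "basic query" voice skill: a spoken question
# mapped to a SOQL query. Not Einstein Voice's actual skill format.
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com",  # placeholder credentials
                password="...",
                security_token="...")

def team_sales_this_quarter(owner_ids):
    """Handle: 'How is my team's sales performance this quarter?'"""
    ids = ",".join(f"'{i}'" for i in owner_ids)
    result = sf.query(
        "SELECT SUM(Amount) total FROM Opportunity "
        "WHERE IsWon = true AND CloseDate = THIS_QUARTER "
        f"AND OwnerId IN ({ids})"
    )
    total = result["records"][0]["total"] or 0
    # The string below is what the assistant would speak back.
    return f"Your team has closed ${total:,.0f} so far this quarter."
```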

To be honest, it’s hard not to look at this as a bit of a gimmick. There are probably real use cases here, though every company will have to define them for itself. Maybe there are salespeople who indeed want to use a voice interface to update their CRM system after a customer meeting, for example. Or they may want to ask about the value of an account while they are in the car. In many ways, though, this feels like a technology looking for a problem, despite Salesforce’s protestations that customers are asking for this.

Some of the other use cases here, which the company didn’t really highlight all that much in its briefing, seem far more compelling. It’s using Einstein Voice to coach call center agents by analyzing calls to pull out insights and trends from sales call transcripts. It’s also launching Service Cloud Voice, which integrates telephony inside the company’s Service Cloud. Using a built-in transcription service, Einstein can listen to the call in real time and proactively provide sales teams and call center agents with relevant information. Those use cases may not be quite as exciting, but in the end, they may generate far more value for companies than having yet another voice assistant for which they have to build their own skills, using what is, at least for the time being, a rather limited tool.

 



The time is right for Apple to buy Sonos

19:20 | 26 September

It’s been a busy couple of months for smart speakers – Amazon released a bunch just this week, including updated versions of its existing Echo hardware and a new Echo Studio with premium sound. Sonos also introduced its first portable speaker with Bluetooth support, the Sonos Move, and in August launched its collaboration collection with Ikea. Meanwhile, Apple didn’t say anything about the HomePod at its latest big product event – an omission that makes it all the more obvious the smart move would be for Apple to acquire someone who knows what they’re doing in this category: Sonos.

Highly aligned

From an outsider perspective, it’s hard to find two companies who seem more philosophically aligned than Sonos and Apple when it comes to product design and business model. Both are clearly focused on delivering premium hardware (at a price point that’s generally at the higher end of the mass market) and both use services to augment and complement the appeal of their hardware, even if Apple’s been shifting that mix a bit with a fast-growing services business.

Sonos, like Apple, clearly has a strong focus and deep investment in industrial design, and puts a lot of effort into a truly distinctive product look and feel that stands out from the crowd and is instantly identifiable once you know what to look for. Even the company’s preference for a mostly black and white palette feels distinctly Apple – at least the Apple that preceded the recent renaissance of multicolour palettes for some of its most popular devices, including the iPhone.

Then from a technical perspective, Apple and Sonos seem keen to work together – and the results of their collaboration have been great for consumers who use both ecosystems. AirPlay 2 support is effectively standard on all modern Sonos hardware, and Sonos is essentially the default choice already for anyone looking to do AirPlay 2-based multi-room audio, thanks to the wide range of options available in different form factors and at different price points. Sonos and Apple also offer an Apple Music integration for Sonos’ controller app, and now you can use voice control via Alexa to play Apple Music, too.

Competitive moves

The main reason an Apple-owned Sonos hasn’t made much sense before now, at least from Sonos’ perspective, is that the speaker maker has reaped the benefits of being a platform that plays nice with all the major streaming service providers and virtual assistants. Recent Sonos speakers offer both Amazon Alexa and Google Assistant support, for instance, and Sonos’ software has connections with virtually every major music and audio streaming service available.

What’s changed, especially in light of Amazon’s slew of announcements this week, is that competitors like Amazon are looking more like they want to own more of the business that currently falls within Sonos’ domain. Amazon’s Echo Studio is a new premium speaker that directly competes with Sonos in a way that previous Echos really haven’t, and the company has consistently been releasing better-sounding versions of its other, more affordable Echos. It’s also been rolling out more feature-rich multi-room audio features, including wireless surround support for home theater use – all things squarely in the Sonos wheelhouse.


For now, Sonos and Amazon seem to be comfortably in ‘frenemy’ territory, but increasingly, it doesn’t seem like Amazon is content to leave them their higher-end market segment when it comes to the speaker hardware category. Amazon still probably will do whatever it can to maximize use of Alexa, on both its own and third-party devices, but it also seems to be intent on strengthening and expanding its own first-party device lineup, with speakers as low-hanging fruit.

Other competitors, including Google and Apple, don’t seem to have had as much success with their products that line up as direct competitors to Sonos, but the speaker-maker also faces perennial challenges from hi-fi and audio industry stalwarts, and also seems likely to go up against newer device makers with audio ambitions and clear cost advantages like Anker, too.

Missing ingredients/work to be done

Of course, there are some big challenges and potential red flags that stand in the way of Apple ever buying Sonos, or of that resulting union working out well for consumers. Sonos works so well because it’s service-agnostic, for instance, and the key to its success with recent products seems to also be integration with the smart home assistants that people seem to actually want to use most – namely Alexa and Google Assistant.

Under Apple ownership, it’s highly possible that Apple Music would at least get preferential treatment, if not become the lone streaming service on offer. It’s probable that Siri would replace Alexa and Assistant as the only virtual voice service available, and almost unthinkable that Apple would continue to support competing services if it did make this buy.

That said, there’s probably significant overlap between Apple and Sonos customers already, and as long as there was some service flexibility (in the same way there is for streaming competitors on iOS devices, including Spotify) then being locked into Siri probably wouldn’t sting as much. And it would serve to give Siri the foothold at home that the HomePod hasn’t managed to provide. Apple would also be better incentivized to work on improving Siri’s performance as a general home-based assistant, which would ultimately be good for Apple ecosystem customers.

Another smart adjacency

Apple’s bigger acquisitions are few and far between, but the ones it does make are typically obviously adjacent to its core business. A Sonos acquisition has a pretty strong precedent in the Beats purchase Apple made in 2014, albeit without the strong motivator of providing the underlying product and relationship basis for launching a streaming service.

What Sonos is, however, is an inversion of the historical Apple model of using great services to sell hardware. The Sonos ecosystem is a great, easy to use, premium-feel means of making the most of Apple’s music and video streaming services (and brand new games subscription offering), all of which are more important than ever to the company as it diversifies from its monolithic iPhone business.

I’m hardly the first to suggest an Apple-Sonos deal makes sense: J.P. Morgan analyst Samik Chatterjee suggested it earlier this year, in fact. From my perspective, however, the timing has never been better for this acquisition to take place, and the motivations never stronger for either party involved.

Disclosure: I worked briefly for Apple in its communications department in 2015-2016, but the above analysis is based entirely on publicly available information, and I hold no stock in either company.

 



Is Amazon’s Alexa ready to leave home and become a wearable voice assistant?

23:52 | 25 September

Amazon’s device event today played host to a dizzying number of product announcements, of all stripes – but notably, there are three brand new ways to wear Alexa on your body. Amazon clearly wants to give you plenty of options to take Alexa with you when you leave the house, the only place it’s really held sway so far – but can Amazon actually convince people that it’s the voice interface for everywhere, and not just for home?

Among the products Amazon announced at its Seattle event, Echo Frames, Echo Loop and Echo Buds all provide ways to take Alexa with you wherever you go. What’s super interesting – and telling – about this is that Amazon went with three different vectors to try to convince people to wear Alexa, instead of focusing its efforts on just one. That indicates a stronger than ever desire to break Alexa out of its home environment.


The company has tried to get this done in different ways before. Alexa has appeared in Bluetooth speakers and headphones, in some cars (including now GM, as of today) and via Amazon’s own car accessory – and though the timing didn’t line up, it would’ve been a lock for Amazon’s failed Fire Phone.

Notice that none of these existing examples have helped Amazon gain any apparent significant market share when it comes to Alexa use on the go. While we don’t have great stats on how well-adopted Alexa is in cars, for instance, it stands to reason that we’d be hearing a lot more about its success if it were indeed massively successful – in the same way we hear often about Alexa’s prevalence in the home.

Amazon lacks a key vector that other voice assistants got for free: Being the default option on a smartphone. Google Assistant manages this through both Google’s own, and third-party Android phones. Apple’s Siri isn’t often celebrated for its skill and performance, but there’s no question that it benefits from just being the only really viable option on iOS when it comes to voice assistant software.

Amazon had to effectively invent a product category to get Alexa any traction at all – the Echo basically created the smart speaker category, at least in terms of significant mass market uptake. Its success with its existing Echo devices proves that this category served a market need, and Amazon has reaped significant reward as a result.

But for Amazon, a virtual assistant that only operates in the confines of the home covers only a tiny part of the picture when it comes to building more intelligent and nuanced customer profiles, which is the whole point of the endeavour to begin with. While Americans seem to be spending more time at home than ever before, a big percentage of people’s days is still spent outside, and this is largely invisible to Alexa.

The thing is, the only reliable and proven way to ensure you’re with someone throughout their entire day is to be on their smartphone. Alexa is, via Amazon’s own app, but that’s a far cry from being a native feature of the device, and just a single tap or voice command away. Amazon’s own smartphone ambitions deflated pretty quickly, so now it’s casting around for alternatives – and Loop, Frames and Buds all represent its most aggressive attempts yet.


A smart spread of bets, each with their own smaller pool of penetration among users vs. a general staple like a smartphone, might be Amazon’s best way to actually drive adoption – especially if they’re not concerned with the overall economics of the individual hardware businesses attached to each.

The big question will be whether A) these products can offer enough value on their own to justify their continued use while Alexa catches up to out-of-home use cases from a software perspective, or B) Amazon’s Alexa team can iterate the assistant’s feature set quickly enough to make it as useful on the go as it is at home, which hasn’t seemed like something it’s been able to do to date (not having direct access to smartphone functions like texting and calling is probably a big part of that).

Specifically for these new products, I’d put the Buds at the top of the list as the most likely to make Alexa a boon companion for a much greater number of people. The buds themselves offer a very compelling price point for their feature set, and Alexa coming along for the ride is likely just a bonus for a large percentage of their addressable market. Both the Frames and the Loop seem a lot more experimental, but Amazon’s limited-release go-to-market strategy suggests it’s planned for that as well.

In the end, these products are interesting and highly indicative of Amazon’s direction and ambition with Alexa overall, but I don’t think this is the watershed moment for the digital assistant beyond the home. Still, it’s probably among the most interesting spaces in tech to watch, because of how much is at stake for both winners and losers.

 



Apple is turning Siri audio clip review off by default and bringing it in house

18:01 | 28 August

The top line news is that Apple is making changes to the way that Siri audio review, or ‘grading’ works across all of its devices. First, it is making audio review an explicitly opt-in process in an upcoming software update. This will be applicable for every current and future user of Siri.

Second, only Apple employees, not contractors, will review any of this opt-in audio in an effort to bring any process that uses private data closer to the company’s core processes.

Apple has released a blog post outlining some Siri privacy details that may not have been common knowledge, as they were previously described only in security white papers.

Apple apologizes for the issue.

“As a result of our review, we realize we haven’t been fully living up to our high ideals, and for that we apologize. As we previously announced, we halted the Siri grading program. We plan to resume later this fall when software updates are released to our users — but only after making the following changes…”

It then outlines three changes being made to the way Siri grading works.

  • First, by default, we will no longer retain audio recordings of Siri interactions. We will continue to use computer-generated transcripts to help Siri improve.
  • Second, users will be able to opt in to help Siri improve by learning from the audio samples of their requests. We hope that many people will choose to help Siri get better, knowing that Apple respects their data and has strong privacy controls in place. Those who choose to participate will be able to opt out at any time.
  • Third, when customers opt in, only Apple employees will be allowed to listen to audio samples of the Siri interactions. Our team will work to delete any recording which is determined to be an inadvertent trigger of Siri.

Apple is not implementing any of these changes, nor lifting the suspension of the Siri grading program it halted, until the software update that allows users to opt in becomes available for its operating systems. Once people update to the new versions of its OS, they will have the chance to say yes to the grading process that uses audio recordings to help verify requests that users make of Siri. This effectively means that every user of Siri will be opted out of this process once the update goes live and is installed.

Apple says that it will continue using anonymized computer generated written transcripts of your request to feed its machine learning engines with data, in a fashion similar to other voice assistants. These transcripts may be subject to Apple employee review.

Amazon and Google have faced similar revelations that their assistants were being helped along by human review of audio, and they have begun putting opt-ins in place as well.

Apple is making changes to the grading process itself as well, noting that, for example, “the names of the devices and rooms you setup in the Home app will only be accessible by the reviewer if the request being graded involves controlling devices in the home.”

A story in The Guardian in early August outlined how Siri audio samples were sent to contractors Apple had hired to evaluate the quality of responses and transcription that Siri produced for its machine learning engines to work on. The practice is not unprecedented, but it certainly was not made as clear as it should have been in Apple’s privacy policies that humans were involved in the process. There was also the matter that contractors, rather than employees, were being used to evaluate these samples. One contractor described the samples as containing sensitive and private information that, in some cases, may have been able to be tied to a user, even with Apple’s anonymizing processes in place.

In response, Apple halted the grading process worldwide while it reviewed the process. This post and updates to its process are the result of that review.

Apple says that around 0.2% of all Siri requests got this audio treatment in the first place, but given that there are roughly 15B requests per month, the quick maths tell us that while the sampled share is tiny, the raw numbers are substantial: on the order of 30 million clips a month.
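Taking Apple's own figures at face value, the back-of-envelope arithmetic works out as follows:

```python
# Back-of-envelope from the figures above: ~0.2% of ~15B monthly
# Siri requests were sampled for human grading.
monthly_requests = 15_000_000_000
graded_fraction = 0.002
print(f"{monthly_requests * graded_fraction:,.0f} clips per month")
# -> 30,000,000 clips per month
```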

The move away from contractors was signaled by Apple releasing contractors in Europe, as noted by Alex Hern earlier on Wednesday.

Apple is also publishing an FAQ on how Siri’s privacy controls fit in with its grading process; you can read that in full here.

The blog post from Apple and the FAQ provide some details to consumers about how Apple handles the grading process, how it is minimizing the data given to data reviewers in the grading process and how Siri privacy is preserved.

 



The BBC is developing a voice assistant, code named ‘Beeb’

16:41 | 27 August

The BBC — aka, the British Broadcasting Corporation, aka the Beeb, aka Auntie — is getting into the voice assistant game.

The Guardian reports the plan to launch an Alexa rival, which has been given the working title ‘Beeb’, and will apparently be light on features given the corporation’s relatively slender developer resources versus those of the major global tech giants.

The BBC’s own news site says the digital voice assistant will launch next year without any proprietary hardware to house it. Instead the corporation is designing the software to work on “all smart speakers, TVs and mobiles”.

Why is a publicly funded broadcaster ploughing money into developing an AI when the market is replete with commercial offerings — from Amazon’s Alexa to Google’s Assistant, Apple’s Siri and Samsung’s Bixby to name a few? The intent is to “experiment with new programmes, features and experiences without someone else’s permission to build it in a certain way”, a BBC spokesperson told BBC news.

The corporation is apparently asking its own staff to contribute voice data to help train the AI to understand the country’s smorgasbord of regional accents.

“Much like we did with BBC iPlayer, we want to make sure everyone can benefit from this new technology, and bring people exciting new content, programmes and services — in a trusted, easy-to-use way,” the spokesperson added. “This marks another step in ensuring public service values can be protected in a voice-enabled future.”

While at first glance the move looks reactionary and defensive, set against the years of dev already ploughed into cutting edge commercial voice AIs, the BBC has something those tech giant rivals lack: Not just regional British accents on tap — but easy access to a massive news and entertainment archive to draw on to design voice assistants that could serve up beloved personalities as a service.

Imagine being able to summon the voice of Tom Baker, aka Doctor Who, to tell you what the (cosmic) weather’s like — or have the Dad’s Army cast of characters chip in to read out your to-do list. Or get a summary of the last episode of The Archers from a familiar Ambridge resident.

Or what about being able to instruct ‘Beeb’ to play some suitably soothing or dramatic sound effects to entertain your kids?

On one level a voice AI is just a novel delivery mechanism. The BBC looks to have spotted that — and certainly does not lack for rich audio content that could be repackaged to reach its audience on verbal command and extend its power to entertain and delight.

When it comes to rich content, the same cannot be said of the tech giants who have pioneered voice AIs.

There have been some attempts to force humor (AIs that crack bad jokes) and/or shoehorn in character — largely flat-footed. As well as some ethically dubious attempts to pass off robot voices as real. All of which is to be expected, given they’re tech companies not entertainers. Dev not media is their DNA.

The BBC is coming at the voice assistant concept from the other way round: Viewing it as a modern mouthpiece for piping out more of its programming.

So while Beeb can’t hope to compete at the same technology feature level as Alexa and all the rest, the BBC could nonetheless show the tech giants a trick or two about how to win friends and influence people.

At the very least it should give their robotic voices some much needed creative competition.

It’s just a shame the Beeb didn’t tickle us further by christening its proto AI ‘Auntie’. A crisper two syllable trigger word would be hard to utter…

 



Facebook’s human-AI blend for audio transcription is now facing privacy scrutiny in Europe

13:42 | 14 August

Facebook’s lead privacy regulator in Europe is now asking the company for detailed information about the operation of a voice-to-text feature in Facebook’s Messenger app and how it complies with EU law.

Yesterday Bloomberg reported that Facebook uses human contractors to transcribe app users’ audio messages — yet its privacy policy makes no clear mention of the fact that actual people might listen to your recordings.

A page on Facebook’s help center also includes a “note” saying “Voice to Text uses machine learning” — but does not say the feature is also powered by people working for Facebook listening in.

A spokesperson for the Irish Data Protection Commission told us: “Further to our ongoing engagement with Google, Apple and Microsoft in relation to the processing of personal data in the context of the manual transcription of audio recordings, we are now seeking detailed information from Facebook on the processing in question and how Facebook believes that such processing of data is compliant with their GDPR obligations.”

Bloomberg’s report follows similar revelations about AI assistant technologies offered by other tech giants, including Apple, Amazon, Google and Microsoft — which have also attracted attention from European privacy regulators in recent weeks.

What this tells us is that the hype around AI voice assistants is still glossing over a far less high-tech backend, even as lashings of machine learning marketing guff are used to cloak the ‘mechanical turk’ components (i.e., humans) required for the tech to live up to the claims.

This is a very old story indeed. To wit: A full decade ago, a UK startup called Spinvox, which had claimed to have advanced voice recognition technology for converting voicemails to text messages, was reported to be leaning very heavily on call centers in South Africa and the Philippines… staffed by, yep, actual humans.

Returning to present day ‘cutting-edge’ tech, following Bloomberg’s report Facebook said it suspended human transcriptions earlier this month — joining Apple and Google in halting manual reviews of audio snippets for their respective voice AIs. (Amazon has since added an opt out to the Alexa app’s settings.)

We asked Facebook where in the Messenger app it had been informing users that human contractors might be used to transcribe their voice chats/audio messages; and how it collected Messenger users’ consent to this form of data processing — prior to suspending human reviews.

The company did not respond to our questions. Instead a spokesperson provided us with the following statement: “Much like Apple and Google, we paused human review of audio more than a week ago.”

Facebook also described the audio snippets that it sent to contractors as masked and de-identified; said they were only collected when users had opted in to transcription on Messenger; and were only used for improving the transcription performance of the AI.

It also reiterated a long-standing rebuttal by the company to user concerns about general eavesdropping by Facebook, saying it never listens to people’s microphones without device permission nor without explicit activation by users.

How Facebook gathers permission to process data is a key question, though.

The company has recently, for example, used a manipulative consent flow in order to nudge users in Europe to switch on facial recognition technology — rolling back its previous stance, adopted in response to earlier regulatory intervention, of switching the tech off across the bloc.

So a lot rests on how exactly Facebook has described the data processing at any point it is asking users to consent to their voice messages being reviewed by humans (assuming it’s relying on consent as its legal basis for processing this data).

Bundling consent into general T&Cs for using the product is also unlikely to be compliant under EU privacy law, given that the bloc’s General Data Protection Regulation requires consent to be purpose limited, as well as fully informed and freely given.

If Facebook is relying on legitimate interests to process Messenger users’ audio snippets in order to enhance its AI’s performance it would need to balance its own interests against any risk to people’s privacy.

Voice AIs are especially problematic in this respect because audio recordings may capture the personal data of non-users too — given that people in the vicinity of a device (or indeed a person on the other end of the phone line who’s leaving you a message) could have their personal data captured without ever having had the chance to consent to Facebook contractors getting to hear it.

Leaks of Google Assistant snippets to the Belgian press recently highlighted both the sensitive nature of recordings and the risk of reidentification posed by such recordings — with journalists able to identify some of the people in the recordings.

Multiple press reports have also suggested contractors employed by tech giants are routinely overhearing intimate details captured via a range of products that include the ability to record audio and stream this personal data to the cloud for processing.

 



Amazon’s lead EU data regulator is asking questions about Alexa privacy

17:17 | 9 August

Amazon’s lead data regulator in Europe, Luxembourg’s National Commission for Data Protection, has raised privacy concerns about the company’s use of manual human reviews of Alexa AI voice assistant recordings.

A spokesman for the regulator confirmed in an email to TechCrunch it is discussing the matter with Amazon, adding: “At this stage, we cannot comment further about this case as we are bound by the obligation of professional secrecy.” The development was reported earlier by Reuters.

We’ve reached out to Amazon for comment.

Amazon’s Alexa voice AI, which is embedded in a wide array of hardware — from the company’s own brand Echo smart speaker line to an assortment of third party devices (such as this talkative refrigerator or this oddball table lamp) — listens pervasively for a trigger word which activates a recording function, enabling it to stream audio data to the cloud for processing and storage.

However, trigger-word activated voice AIs have been shown to be prone to accidental activation, and a device may be in use in a multi-person household. So there’s always a risk of these devices recording any audio in their vicinity, not just intentional voice queries…

In a nutshell, the AIs’ inability to distinguish between intentional interactions and stuff they overhear means they are natively prone to eavesdropping — hence the major privacy concerns.
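None of the vendors publish their wake-word pipelines, but the mechanism described in the last few paragraphs can be sketched generically. Everything below is an assumption for illustration, not Amazon's implementation: the threshold, the frame handling and the function names are placeholders.

```python
# Generic sketch of a trigger-word loop. A small always-on model
# scores audio frames locally; only when a score crosses the threshold
# does audio start streaming to the cloud. Accidental activations are
# precisely the frames where similar-sounding speech also crosses it.
import collections

THRESHOLD = 0.85  # illustrative confidence cut-off

def listen(mic_frames, wake_word_score, stream_to_cloud):
    """mic_frames: iterator of short raw-audio frames (stays local).
    wake_word_score: on-device model returning a probability in [0, 1].
    stream_to_cloud: invoked only after a (possibly false) trigger."""
    ring = collections.deque(maxlen=4)  # last few seconds of pre-trigger audio
    for frame in mic_frames:
        ring.append(frame)
        if wake_word_score(frame) >= THRESHOLD:
            # From here on, audio leaves the device, including any
            # speech that merely resembled the trigger word.
            stream_to_cloud(list(ring))
```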

These concerns have been dialled up by recent revelations that tech giants — including Amazon, Apple and Google — use human workers to manually review a proportion of audio snippets captured by their voice AIs, typically for quality purposes, such as trying to improve the performance of voice recognition across different accents or environments. But that means actual humans are listening to what might be highly sensitive personal data.

Earlier this week Amazon quietly added an option to the settings of the Alexa smartphone app to allow users to opt out of their audio snippets being added to a pool that may be manually reviewed by people doing quality control work for Amazon — having not previously informed Alexa users of its human review program.

The policy shift followed rising attention on the privacy of voice AI users — especially in Europe.

Last month thousands of recordings of users of Google’s AI assistant were leaked to the Belgian media which was able to identify some of the people in the clips.

A data protection watchdog in Germany subsequently ordered Google to halt manual reviews of audio snippets.

Google responded by suspending human reviews across Europe. Meanwhile, its lead data watchdog in Europe, the Irish DPC, told us it’s “examining” the issue.

Separately, in recent days, Apple has also suspended human reviews of Siri snippets — doing so globally, in its case — after a contractor raised privacy concerns in the UK press over what Apple contractors are privy to when reviewing Siri audio.

The Hamburg data protection agency which intervened to halt human reviews of Google Assistant snippets urged its fellow EU privacy watchdogs to prioritize checks on other providers of language assistance systems — and “implement appropriate measures” — naming both Apple and Amazon.

In the case of Amazon, scrutiny from European watchdogs looks to be fast dialling up.

At the time of writing it is the only one of the three tech giants not to have suspended human reviews of voice AI snippets, either regionally or globally.

In a statement provided to the press at the time it changed Alexa settings to offer users an opt-out from the chance of their audio being manually reviewed, Amazon said:

We take customer privacy seriously and continuously review our practices and procedures. For Alexa, we already offer customers the ability to opt-out of having their voice recordings used to help develop new Alexa features. The voice recordings from customers who use this opt-out are also excluded from our supervised learning workflows that involve manual review of an extremely small sample of Alexa requests. We’ll also be updating information we provide to customers to make our practices more clear.

 



Amazon develops a new way to help Alexa answer complex questions

21:54 | 31 July

Amazon’s Alexa AI team has developed a new training method for the virtual assistant that could greatly improve its ability to handle tricky questions. In a blog post, team lead Abdalghani Abujabal details the new method, which combines both text-based search and a custom-built knowledge graph, two methods that normally compete.

Abujabal suggests the following scenario: You ask Alexa, “Which Nolan films won an Oscar but missed a Golden Globe?” Answering this asks a lot of the system — it needs to identify that the ‘Nolan’ referred to is director Christopher Nolan, figure out which movies he directed (even his role as ‘director’ needs to be inferred), then cross-reference the list of those that won an Oscar with the list of those that also won a Golden Globe, and identify the films that are present on list A but not on list B.

Amazon’s method for providing a better answer to this difficult question opts for first gathering the most complete data set possible, and then automatically building a curated knowledge graph out of an initially high-volume and very noisy (i.e., filled with unnecessary data) data set, using algorithms the research team custom-built to cut the chaff and arrive at mostly meaningful results.

The system devised by Amazon is actually relatively simple on its face — or rather, it combines two relatively simple methods. The first is a basic web search that essentially just crawls the web for results using the full text of the question asked, just as if you’d typed “Which Nolan films won an Oscar but missed a Golden Globe?” into Google, for instance (in reality, the researchers used multiple search engines). The system then grabs the top ten ranked pages and breaks them down into identified names and grammar units.

On top of that resulting data set, Alexa AI’s approach then looks for clues in the structure of sentences to flag and weight significant sentences in the top texts, like “Nolan directed Inception,” and discounts the rest. This builds the ad-hoc knowledge graph, which it then assesses to identify “cornerstones” within. Cornerstones are basically dead ringers for words in the original search string (i.e., “Which Nolan films won an Oscar but missed a Golden Globe?”); the system takes those out, focusing instead on the information in between as the source of the actual answers to the question.

With some final weighting and sorting of the remaining data, the algorithm correctly returns “Inception” as the answer, and Amazon’s team found that this method actually beat out state-of-the-art approaches that were much more involved but focused on just text search, or on just building a curated knowledge graph in isolation. Still, they think they can tweak their approach to be even better, which is good news for Alexa users hoping their smart speakers will be able to settle heated debates about advanced Trivial Pursuit questions.
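Amazon's post describes the pipeline in prose rather than code, but the steps above can be sketched roughly as follows. Every function name and the co-occurrence scoring heuristic here are placeholders, not the team's actual algorithms; the search, sentence-splitting and entity-extraction components are passed in as assumptions.

```python
# Rough sketch of the described approach: web search -> noisy ad-hoc
# knowledge graph -> "cornerstone" matching. All names and the scoring
# heuristic are placeholders, not Amazon's implementation.

def answer(question, web_search, split_sentences, extract_entities):
    # 1. Plain full-text web search; keep the top ten ranked pages.
    pages = web_search(question)[:10]

    # 2. Break pages into sentences tagged with named entities; keep
    #    sentences relating at least two entities ("Nolan directed
    #    Inception"). This is the noisy, ad-hoc knowledge graph.
    facts = []
    for page in pages:
        for sentence in split_sentences(page):
            entities = extract_entities(sentence)
            if len(entities) >= 2:
                facts.append(entities)

    # 3. "Cornerstones" are near-verbatim matches for the question's
    #    own words ("Nolan", "Oscar", "Golden Globe"). Candidate
    #    answers are the entities that co-occur with cornerstones
    #    without being cornerstones themselves.
    cornerstones = set(question.lower().split())
    scores = {}
    for entities in facts:
        anchors = [e for e in entities if e.lower() in cornerstones]
        for e in entities:
            if e.lower() not in cornerstones and anchors:
                scores[e] = scores.get(e, 0) + len(anchors)

    # 4. Final weighting and sorting: the top-scoring candidate
    #    ("Inception" in the worked example) is returned.
    return max(scores, key=scores.get)
```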

 

