Main article: Emotions


Evolve Foundation launches a $100 million fund to find startups working to relieve human suffering

01:58 | 4 November

It seems there’s nothing but bad news out there lately, but here’s some good news — the nonprofit Evolve Foundation has raised $100 million for a new fund called the Conscious Accelerator to combat the loneliness, purposelessness, fear and anger spreading throughout the world through technology.

Co-founder of Matrix Partners China Bo Shao will lead the fund and will be looking for entrepreneurs focusing on tech that can help people become more present and aware.

“I know a lot of very wealthy people and many are very anxious or depressed,” he told TechCrunch. Much of this he attributes to the way we use technology, especially social media networks.

“It becomes this anxiety-inducing activity where I have to think about what’s the next post I should write to get most people to like me and comment and forward,” he said. “It seems that post has you trapped. Within 10 minutes, you are wondering how many people liked this, how many commented. It was so addicting.”

Teens are especially prone to this anxiety, he points out. It turns out it’s a real mental health condition known as Social Media Anxiety Disorder (SMAD).

“Social media is the new sugar or the new smoking of this generation,” Shao told TechCrunch.

He quit social media in September of 2013, but tells TechCrunch he’s been on a journey for the last 10 years to find ways to improve his life and others’.

His new fund, as laid out in a recent Medium post announcement, seeks to maximize social good by finding technological solutions to the issues now facing us, not merely to invest in something with good returns.

Shao plans to use his background as a prominent VC in a multi-billion-dollar firm to find those working on the type of technology to make us less anxious and more centered.

The Conscious Accelerator has already funded a meditation app called Insight Timer. It’s also going to launch a parenting app to help parents raise their children to be resilient in an often confusing world.

He’s also not opposed to funding projects like the one two UC Berkeley students put together to identify Russian and politically toxic Twitter bots — something Twitter has been criticized for not getting a handle on internally.

“The hope is we will attract entrepreneurs who are conscious,” Shao said.

Featured Image: Yann Cœuru/Flickr UNDER A CC BY 2.0 LICENSE

New Affectiva cloud API helps machines understand emotions in human speech

16:00 | 13 September

Affectiva, the startup that spun out of the MIT Media Lab several years ago with tools designed to understand facial emotions, announced a new cloud API today that can detect a range of emotion in human speech.

When we speak, our voices offer subtle and not-so-subtle cues about our emotions. Whether a voice is tight, loud or soft can give valuable clues to the speaker’s feelings. Humans can sometimes (although not always) detect those emotions, but traditionally computers have not been very good at it.

Alexa isn’t terribly funny because the technology doesn’t understand humor or tone, and can’t understand when you’re joking versus asking a genuine question. Using Affectiva’s new tech, voice assistants, bots and other devices that operate using artificial intelligence might soon be able to hear and understand our emotions — and be able to derive more meaning from our requests, company CEO and co-founder Dr. Rana el Kaliouby told TechCrunch.

“Amazon [and other companies] knows if it wants it to be persuasive to try a product or route, it needs to have a relationship [with you]. To have a relationship, it needs to understand your emotional state, which is what humans do, have a real-time understanding of an emotional state. Are you annoyed, frustrated, confused?” Kaliouby explained.

Amazon isn’t alone. Car makers are interested in knowing your emotional state behind the wheel, and that of your passengers. These factors could have an impact on your safety in the car. Any company could use a better understanding of customers calling into their call centers or dealing with a customer service bot (they would find me often annoyed).

About a year ago, the company decided to begin studying how a machine might be able to detect an emotion based on the quality of the spoken voice. This is no easy task. There are different languages and a variety of cultural cues, which aren’t necessarily standard from country to country and culture to culture.

The company has been collecting data in the public domain and from its own data sets related to its emotional facial-recognition research around the world. It has teams of people listening to each test subject and identifying the emotion. To avoid bias, each labeler goes through a training program, and for each item in the test set, at least three of five labelers have to agree on the emotional state, she said.
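That three-of-five agreement rule is straightforward to express in code. Below is a minimal sketch of majority-vote label aggregation of the kind described here; the function name and data layout are illustrative, not Affectiva’s actual pipeline.

```python
from collections import Counter

def consensus_label(labels, min_agreement=3):
    """Return the emotion label that at least `min_agreement` labelers
    agreed on for one audio clip, or None if no label reaches the bar."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= min_agreement else None

# A clip enters the data set only if 3 of the 5 labelers agree.
print(consensus_label(["angry", "angry", "neutral", "angry", "sad"]))  # angry
print(consensus_label(["angry", "sad", "neutral", "happy", "calm"]))   # None
```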

Affectiva understands that the data it has gathered to this point is only the beginning. Today’s announcement around the API is also about getting partners to help push this work further along. “We are starting with a cloud-based API because we are looking for data partners interested in partnering around data and emotion classifiers,” she said.

All of this research is great in theory, but there are also many ethical questions related to machines detecting our emotions in our faces and our speech, and Kaliouby understands this. They have strict guidelines about gathering information and how they use it.

They are also running a one-day Emotion AI Summit today in Cambridge at the MIT Media Lab where a variety of speakers will discuss the implications for this kind of technology on society.

Featured Image: Flashpop/Getty Images

Curious to crack the neural code? Listen carefully

22:30 | 10 July

Perhaps for the first time ever, our noggins have stolen the spotlight from our hips. The media, popular culture, science and business are paying more attention to the brain than ever. Billionaires want to plug into it.

Scientists view neurodegenerative disease as the next frontier after cancer and diabetes, hence increased efforts to understand the physiology of the brain. Engineers are building computer programs that leverage “deep learning,” which mimics our limited understanding of the human brain. Students are flocking to universities to learn how to build artificial intelligence for everything from search to finance to robotics to self-driving cars.

Public markets are putting premiums on companies doing machine learning of any kind. Investors are throwing bricks of cash at startups building or leveraging artificial intelligence in any way.

Silicon Valley’s obsession with the brain has just begun. Futurists and engineers are anticipating artificial intelligence surpassing human intelligence. Facebook is talking about connecting to users through their thoughts. Serial entrepreneurs Bryan Johnson and Elon Musk have set out to augment human intelligence with AI by pouring tens of millions of their own money into Kernel and Neuralink, respectively.

Bryan and Elon have surrounded themselves with top scientists, including Ted Berger and Philip Sabes, to start by cracking the neural code. Traditionally, scientists seeking to control machines through our thoughts have done so by interfacing with the brain directly, which means surgically implanting probes into the brain.

Scientists Miguel Nicolelis and John Donoghue were among the first to control machines directly with thoughts through implanted probes, also known as electrode arrays. Starting decades ago, they implanted arrays of approximately 100 electrodes into living monkeys, which would translate individual neuron activity to the movements of a nearby robotic arm.

The monkeys learned to control their thoughts to achieve desired arm movement; namely, to grab treats. These arrays were later implanted in quadriplegics to enable them to control cursors on screens and robotic arms. The original motivation, and funding, for this research was intended for treating neurological diseases such as epilepsy, Parkinson’s and Alzheimer’s.

Curing brain disease requires an intimate understanding of the brain at the brain-cell level. Controlling machines was a byproduct of the technology being built, and of access to patients with severe epilepsy who already had their skulls opened.

Science-fiction writers imagine a future of faster-than-light travel and human batteries, but assume that you need to go through the skull to get to the brain. Today, music is a powerful tool for conveying emotion, expression and catalyzing our imagination. Powerful machine learning tools can predict responses from auditory stimulation, and create rich, Total-Recall-like experiences through our ears.

If the goal is to upgrade the screen and keyboard (human<->machine), and become more telepathic (human<->human), do we really need to know what’s going on with our neurons? Let’s use computer engineering as an analogy: physicists are still discovering elementary particles, but we know how to manipulate charge well enough to add numbers, trigger pixels on a screen, capture images from a camera and read inputs from a keyboard or touchscreen. Do we really need to understand quarks? I’m sure there are hundreds of analogies where we have pretty good control over things without knowing all the underlying physics, which raises the question: do we really have to crack the neural code to better interface with our brains?

In fact, I would argue that humans have been pseudo-telepathic since the beginning of civilization, and that has been through the art of expression. Artists communicate emotions and experiences by stimulating our senses, which already have a rich, wide-band connection to our brains. Artists stimulate our brains with photographs, paintings, motion pictures, song, dance and stories. The greatest artists have been the most effective at evoking certain emotions: love, fear, anger and lust, to name a few. Great authors empower readers to live through experiences by turning the page, creating richer experiences than what’s delivered through IMAX 3D.

Creative media is a powerful tool of expression to invoke emotion and share experiences. It is rich, expensive to produce and distribute, and disseminated through screens and speakers. Although virtual reality promises an even more interactive and immersive experience, I envision a future where AI can be trained to invoke powerful, personal experiences through sound.

Unfortunately, those great artists are few, and heavy investment goes into a production that needs to be amortized over many consumers (1->n). Can content be personalized for EVERY viewer as opposed to a large audience (1->1)? For example, can scenes, sounds and smells be customized for individuals? To go a step further: can individuals, in real time, stimulate each other to communicate emotions, ideas and experiences (1<->1)?

We live our lives constantly absorbing from our senses. We experience many fundamental emotions such as desire, fear, love, anger and laughter early in our lifetimes. Furthermore, our lives are becoming more instrumented by cheap sensors and disseminated through social media. Could a neural net trained on our experiences, and our emotional responses to them, create audio stimulation that would invoke a desired sensation? This “sounds” far more practical than encoding that experience in neural code and physically imparting it to our neurons.

Powerful emotions have been inspired by one of the oldest modes of expression: music. Humanity has invented countless instruments to produce many sounds, and, most recently, digitally synthesized sounds that would be difficult or impractical to generate with a physical instrument. We consume music individually through our headphones, and experience energy, calm, melancholy and many other emotions. It is socially acceptable to wear headphones; in fact Apple and Beats by Dre made it fashionable.

If I want to communicate a desired emotion with a counterpart, I can invoke a deep net to generate sounds that, as judged by a deep net sitting on my partner’s device, invoke the desired emotion. There are a handful of startups using AI to create music with desired features that fit in a certain genre. How about taking it a step further to create a desired emotion on behalf of the listener?

The same concept can be applied for human-to-machine interfaces, and vice-versa. Touch screens have already almost obviated keyboards for consuming content; voice and NLP will eventually replace keyboards for generating content.

Deep nets will infer what we “mean” from what we “say.” We will “read” through a combination of hearing through a headset and “seeing” on a display. Simple experiences can be communicated through sound, and richer experiences through a combination of sight and sound that invokes our other senses, just as the sight of a stadium invokes the roar of a crowd and the taste of beer and hot dogs.

Behold our future link to each other and our machines: the humble microphone-equipped earbud. Initially, it will tether to our phones for processing, connectivity and charge. Eventually, advanced compute and energy harvesting will obviate the need for phones, and those earpieces will be our bridge to every human being on the planet, and our access to the corpus of human knowledge and expression.

The timing of this prediction may be ironic given Jawbone’s expected shutdown. I believe that there is a magical company to be built around what Jawbone should have been: an AI-powered telepathic interface between people, the present, past and what’s expected to be the future.

Featured Image: Ky/Flickr UNDER A CC BY 2.0 LICENSE

Accenture, can I ask you a few questions?

21:30 | 12 April

Hey Accenture, are you aware that your PR firm is pitching your latest corporate beta for a creepy face and emotion-monitoring algorithm as a party trick?

Do you know what a party is? Have you read the definition?

Would you call requiring people to download an app before they can get into an event a party-time kind of thing to do? Would you say that demanding people scan their faces so they can be recognized by an algorithm is fun times?

Does having that same algorithm watch and record every interaction that happens in the Austin bar or event space you’ve filled with some light snacks and a bunch of free liquor count as rockin’ like Dokken?

Do good hosts require people to become lab rats in the latest attempt to develop HAL?

Is monitoring patients’ faces in hospitals really the best way to apply the technology outside of your PartyBOT’s par-tays? Or is that also a creepy and intrusive use of technology, when other solutions exist to track actual vital signs?

What do your own office parties look like? Does Sam from the front desk have to try out the newest accounting software to get a drink from the punch bowl? Does Diane have to swear allegiance to Watson before grabbing that tuna roll? Do you throw them at Guy’s American Kitchen & Bar?

Does your soul die a little when you turn people into test subjects for the AI apocalypse?

Maybe after reading this press release, it should?

With the rise of AI and voice recognition, customer experiences can be curated to the next level. Imagine Amazon’s Alexa, but with more emotion, depth and distinction. Accenture Interactive’s PartyBOT is not simply a chatbot – it is equipped with the latest technologies in facial recognition that allow the bot to recognize user feelings through facial expressions, and words resulting in more meaningful conversations.

Featured at this year’s SXSW, the PartyBOT delivered an unparalleled party experience for our guests – detecting their preferences from favorite music, beverages and more. The PartyBOT went so far as to check in on every attending guest at the party, curating tailored activities based on their preferences. Link to video:

(pw: awesome)

But the PartyBOT goes much further than facilitating great party experiences. Its machine learning applications can apply to range of industries from business to healthcare – acting as an agent to support patient recognition and diagnosis in hospitals that can recognize patient distress and seek the appropriate help from doctors.

If you would like to learn more about the PartyBOT, I’m happy to put you in touch with our executives to discuss the applications of our technology and potentially schedule time to see this in our studios.

Featured Image: Rosenfeld Media/Flickr UNDER A CC BY 2.0 LICENSE

Immigrant eyes

18:10 | 19 March

Over the past few years we’ve seen a lot of anger. We’ve seen a seemingly sane country cut itself off from mainland Europe and we’ve seen a seemingly beneficent border police force turn angry. We’ve heard that immigrants steal our jobs, kill our people, and bring in drugs and terror.

This is wrong.

On the pro side, we know that immigrants, as a whole, commit less crime than US-born citizens. On the anti-immigration side, we get a few vignettes of terror that pretend to paint a whole picture. Further, we must understand the plight of the worker in America and Britain. Immigrants do take jobs, and the white-collar world can’t see the effects. Antonio Garcia-Martinez, author of Chaos Monkeys, said it best when he wrote:

Blue Staters ridicule working-class Red Staters’ support for Trump, and his rhetoric around building a wall. Of course, white collar workers already have their wall: it’s called the H1 visa, and it means they don’t have to compete with every graduate of IIT or Tsinghua for a job. Imagine for a moment if there were a scrum of diligent and capable Chinese and Indian engineers in front of Facebook and Google (as there is a scrum of Mexicans outside every Home Depot in the US), each ready to take the job for less than the coddled American inside stuffing his face with the free ribs. What would that engineer’s opinion on illegal immigration be then?

What the Red Staters are asking for is essentially the same guarantees around a limited flow of outside labor that the Blue Staters already enjoy.

As always, it’s not enlightened ideals that define political opinions, it’s power and self-interest. White collar workers have the views they do because they’ve already gotten theirs, and they don’t care about the high-school-educated plumber or construction worker in Louisville or Des Moines, who sees that Mexican outside Home Depot (rightly) as a threat to his livelihood.

Everyone is right and no one is. High tech is blind to the plight of the migrant worker and depends on the largesse of governments for programmers and outsourced data centers. Everyone, from the guy in a truck at Home Depot to the woman in an Uber on the Golden Gate, must take stock and try to help.

Ultimately immigration is a change agent and a necessity. Populations age. Cultures shift. New technologies will fix old failures. And throughout it all the sane and accepted migration of the skilled and unskilled, refugees and expats, will get us through. To blame our ills on those different from us is a failure of humanity and this failure has repeated itself over and over again since the Dark Ages. We fear – but need – the Other. That fear must be excised.

In the end I can only recommend one or two things to fix this in the short term. First, understand your own roots and your own path and try to help others in your same course. My grandparents were Polish and Hungarian. I’ve tried, in my own way, to boost those countries as much as I can, knowing full well that their economies and ecosystems are in full bloom and they don’t need much help. But what they need is investment and attention. I can give them that.

We can also help spread the vision of entrepreneurship throughout the world. A few weeks ago I met a group of Cuban entrepreneurs who were visiting Denver, and they stopped in at Boomtown to see what the accelerator experience was like. These were men and women who, despite the odds and political machinations, were building their country’s Internet infrastructure. They loved seeing how the accelerator helped young companies grow, and I think their eyes were opened to new possibilities. They’ll be attending Disrupt in May, as well. It was the least I could do for them.

But the thing that struck me most was how similar they were to every entrepreneur I meet from Zagreb to Alameda. All had a clarity of vision, all were ready to work and change. All were ready to move forward despite the odds. They aren’t here to steal jobs, they’re here to make jobs. They’re not here to break the body politic but to strengthen the good and cleave away the bad. They are here to fix things at home and abroad.

They are not immigrants. They are people. They move from place to place, bringing some bad but more good. They will never stop. They will never give up. It’s better to harness that energy than to stifle it.

I was at a wedding in Pittsburgh a few years ago when I heard a story from an older friend who heard about my Polish roots. Three girls, eight, ten, and fourteen, came over from Warsaw by way of Gdansk in about 1900. They sailed with their father, a blacksmith and drinker, and arrived in New York shaken and sick. They took an overland route to Coal Country and settled outside of Pittsburgh. Their father went to work in the mines and the girls went to school. One afternoon, when they came home, their father was gone.

He had gone back to Poland, leaving the girls alone. They didn’t know why he left — he had a half-baked plan to bring his wife over — and he didn’t leave much money or even a note. The girls took jobs sewing and cleaning, and the oldest took care of the middle child, who took care of the youngest. All went to school when they could. The neighborhood, made up mostly of immigrants, tried to help, and the girls made a life for themselves. Unbeknownst to them, their parents died back in Poland, which is why their letters home went unanswered. It is a tale of penury, perdition, and fear, of babes in the woods and, potentially, failure.

But it didn’t end in a nightmare.

The girls grew and married. They built lives in a new world into which they were thrust, like babies dipped howling into the baptismal font in Pittsburgh’s Kościół Matki Boskiej, the shock of the future coming over them in a rush. And they made it. Their father, for all his faults, knew they would be safe. The world where his daughters now lived was not nearly as dangerous as the world he left and the potential for survival was far greater. He trusted his girls to the tide of immigration and he was not wrong.

“That was my grandmother, the youngest one,” said my friend. He is now an engineer in Pittsburgh, an American.

We forget that we all came from afar, with nothing. And we cannot fear those who follow behind us.

Featured Image: jvoves/Flickr UNDER A CC BY 2.0 LICENSE

The Emotion Journal performs real-time sentiment analysis on your most personal stories

16:06 | 4 December

Andrew Greenstein, an app developer from San Francisco, started journaling a few months ago. He tries to write for five minutes every day, but it’s challenging to set aside the time. Still, he’s read that journaling reduces stress and can help with goal-setting, so he’s trying to make it a habit.

At the Disrupt London Hackathon, Greenstein and his team built The Emotion Journal, a voice journaling app that performs real-time emotional analysis to detect the user’s feelings and chart their emotional state over time. (A small aside here: when you check out The Emotion Journal’s website, you might get a security warning — this is because the site is built to have a secure https connection but the developers haven’t paid for a certificate yet.)

By day, Greenstein is the CEO of SF AppWorks, a digital agency. But he and his co-founder, Darius Zagrean, have become increasingly interested in artificial intelligence and how it can be used to address mental health issues.

“I’d like to continue working on it because this stuff to me, the connection between human and computer, is so fascinating,” Greenstein said.

The idea to apply AI to mental health challenges first came to Greenstein and his team during an internal AppWorks hackathon, during which one team built an app to roleplay anxiety-inducing situations and improve the user’s response to his or her anxiety triggers. “We wanted to get into AI as much as we could. It’s so clear that’s where the world is going. This one had the emotional factor that really grabbed me,” Greenstein explained.

He decided to continue exploring the intersection of AI and emotion at the London Hackathon and built The Emotion Journal using IBM Watson to perform the real-time sentiment analysis. A user talks about their day and The Emotion Journal reacts quickly, changing color in response to the user’s feelings.

The Emotion Journal also stores and color-codes logs over time, so a user can see at a glance what their recent emotions have been. “We talked a lot about how journaling needs to be more indexable, more searchable. Maybe you can learn something from a past session,” Greenstein said.
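The color-coding step is the easiest part to picture. Here is a minimal sketch of how an entry’s emotion scores might be mapped to a display color; the score format and palette are assumptions for illustration, not the app’s actual code (the real app gets its analysis from IBM Watson, whose response format is not reproduced here).

```python
# Hypothetical palette; the app's real colors are not documented.
EMOTION_COLORS = {
    "joy": "#F5C518",      # yellow
    "sadness": "#4A7BD4",  # blue
    "anger": "#D64545",    # red
    "fear": "#7B4AD4",     # purple
}

def entry_color(scores):
    """Pick a display color for a journal entry from emotion scores
    in [0, 1], e.g. {"joy": 0.7, "sadness": 0.1, ...}."""
    dominant = max(scores, key=scores.get)
    return EMOTION_COLORS.get(dominant, "#999999")  # gray fallback

today = {"joy": 0.72, "sadness": 0.08, "anger": 0.05, "fear": 0.02}
print(entry_color(today))  # "#F5C518" -- today's entry renders yellow
```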

Although The Emotion Journal is a promising merger between AI and journaling that might teach us more about our own mental health, it might not be around forever — Greenstein hopes to keep the project up in a demo state. So go check it out now while you still can.

Emotionally intelligent computers may already have a higher EQ than you

04:00 | 3 December

Andrew Thomson, Crunch Network Contributor

Andrew Thomson is founder and CEO of VentureRadar, a London-based technology scouting startup that uses big data to connect companies with clients.


From I, Robot to Ex Machina to Morgan, the idea of creating robots that can understand, compute and respond to human emotions has been explored in movies for decades. However, a common misconception is that the challenge of creating emotionally intelligent computing systems is too great to be met any time soon. In reality, computers are already demonstrating they can augment — or even replace — human emotional intelligence (EQ).

Perhaps surprisingly, it is the lack of emotion in computing systems that places them in such a good position to be emotionally intelligent — unlike humans, who aren’t always particularly good at reading others, and are prone to missing emotional signals or being fooled by lies.

According to Tomas Chamorro-Premuzic, “robots do not need to be able to feel in order to act in an emotionally intelligent manner. In fact, contrary to what people think, even in humans high EQ is associated with lower rather than higher emotionality. [High EQ] is about controlling one’s impulses and inhibiting strong emotions in order to act rationally and minimize emotional interference.”

In the field of affective computing, sensors and other devices are also getting very good at observing and interpreting facial features, body posture, gestures, speech and physical states, another key ingredient in emotional intelligence. Innovative companies across a range of industries are now using computing systems that can augment, and even improve on, human emotional intelligence.

Management

In the high pressure environment of Wall Street, stock traders hold power over millions of dollars of their employer’s money, and split-second decisions can make or break careers.

The emotional state of employees can determine if they are at an increased risk of making a costly mistake, or if they have just made one. Historically, the management culture in some industries hasn’t always optimized for considering the emotional well-being of employees.

However, leading banks like JPMorgan Chase and Bank of America are now working with tech companies to put systems in place to monitor worker emotions and boost performance and compliance.

According to Bloomberg, a number of banks have partnered with Humanyze, a startup founded by MIT graduates that produces sensor-laden badges that transmit data on speech, activity and stress patterns in real time. While it may sound like a scene from Orwell’s 1984, the badges’ microphones and proximity sensors can help employers improve productivity in teams by analyzing behavioral data. The devices would allow managers to assist employees who are “out of their depth” and take decisive action, and also to highlight positive behavior, which can be used in team training.

Considerate driving

If you’ve ever sat with white knuckles in the back of a taxi as your driver swerves through traffic, then you’re probably quite excited about the prospect of “self-driving” cars that are programmed to follow the rules and be safe. As autonomous vehicles begin to replace manned vehicles, our robot drivers may be a whole lot more responsive to how you feel about their driving.

BRAIQ is a startup that is teaching autonomous vehicles to read the comfort level of their passengers and learn to drive the way those passengers prefer. This personalization is intended both to increase passenger comfort and to foster trust in self-driving technology.

Off-the-shelf in-cabin sensors provide data on how the passengers feel about the car’s actions — such as acceleration, braking and steering. The collected biometric data is aggregated and analyzed, resulting in an AI whose driving style is responsive to a passenger’s comfort. BRAIQ’s software is effectively adding a layer of emotional intelligence on top of artificial intelligence.
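BRAIQ has not published its models, but the loop described here (biometric readings in, a driving-style adjustment out) can be sketched. Everything below, from the sensor names to the smoothing constant, is a stand-in for illustration, not BRAIQ’s implementation.

```python
class ComfortEstimator:
    """Toy feedback loop: biometric samples -> smoothed comfort score
    -> a cap on how aggressively the car accelerates."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha    # weight given to each new sample
        self.comfort = 1.0    # 1.0 = relaxed, 0.0 = distressed

    def update(self, heart_rate, grip_pressure):
        # Crude instantaneous discomfort from two hypothetical sensors:
        # elevated heart rate and how hard the passenger grips the seat.
        hr_term = min(1.0, max(0.0, (heart_rate - 70) / 60))
        discomfort = 0.5 * hr_term + 0.5 * min(1.0, grip_pressure)
        # Exponential moving average so one noisy reading can't
        # whipsaw the estimate.
        self.comfort = (1 - self.alpha) * self.comfort + self.alpha * (1 - discomfort)
        return self.comfort

    def max_acceleration(self, base=3.0):
        # Drive more gently as comfort drops (arbitrary m/s^2 scale).
        return base * (0.5 + 0.5 * self.comfort)

est = ComfortEstimator()
est.update(heart_rate=110, grip_pressure=0.8)  # passenger tensing up
print(round(est.max_acceleration(), 2))        # car eases off: 2.78
```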

New tech is also being created that will teach self-driving cars to communicate their intentions. To replace a wave of the hand to let someone pass in front of your car, or a flash of lights on the highway to let others know you are going to pass, Drive.ai has created a deep learning AI for driverless cars that allows vehicles to signal their intentions to humans through lights, sounds and movement.

The new tech uses deep learning to assess what is going on around the car via sensors, and to react appropriately to the situation. To effectively interact with pedestrians and other drivers, the cars could learn to use movements and sounds to indicate their next actions; for example, flashing lights to allow someone to pass, or rocking back and forth to indicate they will move forward.

Customer service

Cogito analyzes speaking patterns and conversational dynamics between call center agents and customers, providing real-time guidance to help phone professionals better engage and connect with customers.

Agents are guided to speak with more empathy, confidence, professionalism and efficiency, depending on the emotion detected through callers’ speech, while early signs of customer frustration and intent to purchase help improve service and close deals. Real-time dashboards enable supervisors to monitor and proactively intervene in live calls. Supervisors are automatically alerted to calls in which a customer is having a poor experience.
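Cogito’s scoring models are proprietary, but the alerting mechanic described here, watching a live score and flagging a supervisor when it stays bad, can be sketched with placeholder numbers:

```python
from collections import deque

class FrustrationAlert:
    """Rolling average of per-utterance frustration scores; alert when
    the average crosses a threshold. Window size and threshold are
    placeholders, not Cogito's actual parameters."""

    def __init__(self, window=3, threshold=0.7):
        self.scores = deque(maxlen=window)
        self.threshold = threshold

    def add_utterance(self, frustration):
        self.scores.append(frustration)
        return sum(self.scores) / len(self.scores) >= self.threshold

alert = FrustrationAlert()
for score in [0.2, 0.5, 0.8, 0.9, 0.9]:
    if alert.add_utterance(score):
        print("supervisor alerted")  # fires once the call turns sour
```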

Cogito’s analytics provide objective insight into agents’ speaking behavior and customer experience on every call, and live customer experience scores help identify actionable best practices and trends for future training exercises.

Law enforcement

Law enforcement and government agencies around the world still use polygraph “lie detectors.” However, many experts question the continued use of the technology, arguing that the polygraph machines are inaccurate and can be tricked.


Nuralogix has created technology that reads emotional reactions that aren’t noticeable to the human eye. Using a mix of transdermal optical imaging and advanced machine learning algorithms, the technology assesses facial blood-flow information to reveal hidden human emotions. In a law enforcement setting, officials would be able to ask direct questions and then assess the respondent’s true emotions based on an element they cannot physically control — the blood flow in their faces.

In a similar vein, researchers at MIT just announced EQ-Radio, a device that, the creators claim, can assess a user’s feeling at that moment with an accuracy of 87 percent. The device reflects wireless signals off a person’s body, then uses algorithms to document individual heartbeats and breathing patterns, and the levels of brain arousal, according to a report. To date, the technology has only been used to assess whether a participant is happy, sad or angry, but, as the technology develops, it could be trained to be used in a similar way to a polygraph test.
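The pipeline the report describes, extract heartbeat and breathing features, then classify them into happy, sad or angry, can be illustrated with a toy classifier. The feature set and every number below are invented for illustration; they are not MIT’s features, data or model.

```python
from sklearn.ensemble import RandomForestClassifier

# Invented rows: [mean heart rate, heart-rate variability (ms), breaths/min]
X_train = [
    [62, 80, 12],   # calm, slow breathing
    [88, 35, 20],   # racing heart, fast breathing
    [74, 55, 16],
    [95, 25, 22],
]
y_train = ["happy", "angry", "sad", "angry"]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)
print(clf.predict([[90, 30, 21]]))  # e.g. ["angry"]
```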

Although it might be creepy to think that in the future we will be monitored by machines that can detect our emotions, computing systems with emotional intelligence are already surpassing human capabilities. Far from being stuck in the realm of science fiction, they could soon be a reality in our homes, cars and offices.

Featured Image: Bryce Durbin/TechCrunch

TVision raises $6.8M to take on Nielsen with thermal eye and emotion tracking tech

16:36 | 26 October

Another startup out of MIT built on computer vision and eye-tracking technology has raised funding to build out its business: Boston-based TVision Insights has raised $6.8 million. The company tracks who is watching what on TV and how they are reacting to it, then works with advertisers and broadcasters to give them better insights into their programming.

The funding — which brings the total raised by TVision to $9.65 million — comes from Accomplice (formerly known as Atlas Venture), along with Golden Venture Partners, Jump Capital, and ITOCHU Technology Ventures (which has backed the likes of Box but also Fab, among many more startups).

There are a lot of startups right now gaining attention for the way that they are using advances in computer vision and machine learning to track what your eyes are doing. Just yesterday, it was announced that Google acquired Eyefluence, most likely to boost its efforts in emerging areas like virtual reality. Another startup that came out of MIT, Affectiva, started out focusing on emotional responsiveness to online videos and has more recently made some interesting inroads into robotics and automotive applications.

TVision is doing something a little different from these and has been built specifically to address the gap in how TV viewing is measured, CRO Dan Schiffman — who co-founded the company with CEO Yan Liu, Pongpun Pong Laosettanun, Alex Amis and Raymond Fu — explained to me.

The problem that TVision is solving is a well-known one in the TV world. There are a number of companies like Nielsen that already measure TV viewing, but many of them simply monitor when the TV is on, relying on the users themselves to indicate who is watching and when, and who is actually watching the TV rather than sitting on the sofa and playing on their phones instead. Variables like these can result in data that is not completely accurate.

And at a time when digital platforms are all about providing viewing data, and many users are already migrating away from traditional TV viewing, that reporting shortfall could eventually lead to advertising declines in a medium that has dominated advertising for decades but now faces competition from newer platforms like social media, mobile and streamed video.

“TVision provides an important solution for next-level analysis to an industry that is desperate for new and better ways to measure audience attention,” said Ryan Moore, partner and founder at Accomplice, in a statement. “Yan and his team offer the TV industry and advertisers a solution to make high-value programming and advertising decisions based on data they simply did not have before.”

The company starts with a small device that sits on top of and works with your ordinary television. It does not read the world as we see it with an optical camera alone; it uses lasers and thermal infrared so that it can pick up more data even when lighting conditions are low (as they often are when you are watching TV). Its sensors and algorithms are capable of identifying not just who is watching in a family group, but also who is sitting in the room without watching the TV (playing on, say, a mobile phone instead), and how viewers are reacting to the show that is on at the time.

As Schiffman describes it, “we then translate all that data into ones and zeros, and figure out how to make sense of it.”
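What “making sense of it” might look like at its simplest: collapse per-second detections into an attention share per viewer. The sample schema below is a guess for illustration, not TVision’s real data format.

```python
from collections import defaultdict

def attention_share(samples):
    """Fraction of in-room seconds each person actually spent
    looking at the screen."""
    present = defaultdict(int)
    watching = defaultdict(int)
    for s in samples:
        if s["in_room"]:
            present[s["person"]] += 1
            if s["eyes_on_screen"]:
                watching[s["person"]] += 1
    return {p: watching[p] / present[p] for p in present}

samples = [  # one dict per person per second (hypothetical schema)
    {"person": "viewer_1", "in_room": True, "eyes_on_screen": True},
    {"person": "viewer_1", "in_room": True, "eyes_on_screen": False},
    {"person": "viewer_2", "in_room": True, "eyes_on_screen": True},
]
print(attention_share(samples))  # {'viewer_1': 0.5, 'viewer_2': 1.0}
```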

TVision’s devices are currently installed in some 7,000 homes in the U.S. and Japan as part of an opt-in, Nielsen-style panel, Schiffman said. The idea is to use some of the funding for business development to bring that number up to 15,000.

The startup already provides data to three of the largest broadcasters in the U.S., as well as many major advertisers — although these are under NDA, so the names cannot be disclosed, Schiffman said.

Some of the funding will also go towards hiring more talent to expand beyond its current 19 employees, as well as for R&D. Schiffman told me that TVision already has applications in for two utility patents, one for its computer vision algorithm and another around its analytics.

Science and technology will make mental and emotional wellbeing scalable, accessible, and cheap

13:30 | 17 October

Nichol Bradford Crunch Network Contributor

Nichol Bradford is the CEO and Founder of Willow Group, a Transformative Technology Company, and the co-founder and executive director of the Transformative Technology Lab at Sofia University.


Ask yourself a simple question.  If it were an issue for you, what would you pay to get rid of stress, anxiety, loneliness, sadness, physical and mental pain, or depression?  Or conversely, what would you pay to be happy, feel connected, feel understood, change bad habits into better ones, have a healthy brain as you age, or to fulfill your full potential and really thrive?

Posing that question is the key to decoding one of the biggest new tech markets.  We call this sector Transformative Technology, or TransTech.  Transformative Technology is science-based technology that has a significant impact on mental and emotional wellbeing.  

The consumer demand for TransTech is huge, but few are tracking it because it is hidden within other markets.

The market for mental and emotional support is large and proven. Some of the relevant market sizes as of 2015:
  • Meditation: $1BN (US)
  • Yoga: $27BN (US)
  • Fitness & Mind Body: $446BN (WW)
  • Preventative Health: $432BN (WW)
  • Workplace Wellness: $40BN (US)
  • Complementary & Alternative Health: $186BN (WW)
  • Depression, Stress and Anxiety drugs: $22BN (WW)
  • Weight Loss: $150BN (WW)
  • Addiction Treatment: $35BN (US)
  • Self Help: $10BN (US)

When I look at these markets, I see people trying to find alternative solutions to the emotional problems that still bedevil modern society.  

The annual cost of stress in the US alone is estimated to be $300BN in healthcare and lost productivity.  Stress is accelerating — across countries, cultures, socio-economic levels, and ages.  No one is exempt.  Given the high impact of stress on health care costs, individuals and corporations are aggressively seeking solutions that are affordable, scalable, and accessible — something technology does best.

TransTech leverages mobile, IoT, the proliferation of cheap sensors and wearables, massive data sets, cheap networks and computing power, and machine learning and AI, alongside advances in digital medicine and neuroscience, biology and bioinformatics, and AR/VR, to support our mental and emotional wellbeing.

For example, your phone probably knows more about you than your mom right now.  It knows when you wake up and when you go to sleep, who you call and how often, what you read online and what catches your eye by measuring scroll speed.  By combining emotion recognition software with data from your wearables, it’s going to be able to know how you feel, when you are calm, and when your heart (or mind) is racing.  

The ability to micro-monitor our behaviors and biosignals and the accumulation of massive amounts of data will allow for the development of predictive models around psychological health, and programs supporting greater wellbeing.

So why is this happening now?  The rise of TransTech is the result of 1) the need and desire for positive mental and emotional outcomes fueling demand for new, cheap solutions; 2) the confluence of exponential technology and advances in medicine and biology driving down sensor and platform prices and raising utility; and 3) social trends, such as US millennials prioritizing well-being so much that they spend a quarter of their disposable income on it, and baby boomers being willing to pay anything to maintain cognitive levels.

Even the growth of meditation as a “cool” trend is a factor.  As early adopter tech entrepreneurs and investors begin practicing meditation, many have become interested in turning their talents towards delivering wellbeing digitally.  Given this, TransTech activity is erupting in startup ecosystems worldwide from San Francisco to Tel Aviv, and from Northern Europe to Shenzhen.

There’s precedent here.  Just as tech and investor interest in the Quantified Self movement helped fuel the development of the wearable market (80MN units/2015 and estimated to be 245MN units/2019), their interest in TransTech will result in the launch of many companies.  

The rise of TransTech will mean that outcomes once considered an inner journey, dependent on willpower, luck, birth, wealth or some other special, non-measurable element, will become more objectively approached, measured and supported. Whether we are seeking to feel happy, connected and understood, to change bad habits into better ones, to keep a healthy brain as we age and fulfill our full potential, or to solve problems like stress, anxiety, loneliness, sadness or depression, we will have technology to support us.

Today, TransTech companies, projects, and researchers are not yet visible as a single sector. To see the venture-backed companies you’d have to look at several market maps and piece them together.

Early well-known examples include Lumosity, Fitbit, Happify, Thalmic Labs, Thync, Headspace, Akili, Ginger.io, Interaxon/Muse and Beyond Verbal. To see the applicable research you’d have to search for scientists working on affective computing, positive computing, behavior change, addiction, neurosignaling, neuroplasticity, neurosensing, attention network training, cognitive training, epigenetics, HRV, GSR, artificial intelligence and more.

To see the up and coming entrepreneurs, you’d have to cover a lot of ground including meet-ups and online lists.  You’d have to sift through accelerators and Kickstarters worldwide and search through an extensive network.   It’s a lot of work, so we will do it for you.  Stay tuned.

Featured Image: agsandrew/Shutterstock

Affectiva and Uber want to brighten your day with machine learning and emotional intelligence

02:13 | 14 September

Your phone doesn’t know how you’re feeling — but you may want it to if that capability came with a few fringe benefits. Affectiva makes emotion-detection software, and CEO Rana el Kaliouby was full of ideas today at Disrupt SF as to how it could be deployed, from gifs to Ubers.

“We’re obsessed with emotional AI, we wake up thinking about it,” said Rana el Kaliouby. “I imagine a future where every device has a little emotion chip and can read your emotions, just like it’s touchscreen enabled or GPS enabled.”

If you aren’t creeped out by that, you might see a few of the benefits. As el Kaliouby points out, technology is getting personal.

“The way we relate to our devices and apps, it’s becoming very relational,” she said. “The way we act with technology is very like the way we act with each other. All these devices, whether it’s the Uber app or a calendar, they need to build a rapport with the consumer.”

Humans with higher emotional intelligence are more likable, so why shouldn’t the same be true for devices? To spur discovery, Affectiva just opened up its SDK and APIs for free use, so app makers will surely try integrating them soon.

And the more people that use the app, the more data the company has to work with. The system is already working from a database of 5 million faces.

“That’s allowed our system to learn the difference between a Japanese smile and a Brazilian smile,” el Kaliouby said, “or that women express emotions differently than men. Humans are still the ground truth, but in some cases the algorithms are better than an average human — it depends on the emotion too.”

Of course, emotional intelligence isn’t just recognizing facial expressions and extracting moods. Also on stage was Danny Lange, head of machine learning at Uber. He was excited about the possibilities of — what else? — machine learning.

“We’re seeing this major change where we move from Newton, who thought he could calculate everything about this world, past, present and future, to a Heisenberg model,” he said, waxing historical. “It’s more about predictions and probabilities.”

Deep learning systems do their best work with lots of data, and fortunately Uber vehicles are racking up millions of miles, pickup locations, traffic problems and so on. UberEats provides another data set that can be cross-referenced to uncover interesting correlations. A long history with both apps could put a car outside the moment you walk out the door, then recommend an alternative to the lunch you’ve chosen since traffic will prevent it from being delivered in a timely fashion — an alternative based on analyzing your previous orders, of course.

The strange thing about these machine learning systems, though, is how opaque they are to analysis. The results are great — but the process is obscure.

“It’s often hard to explain why they come up with the predictions they come up with,” Lange admitted. “You can’t look back into it. It’s almost impossible to explain why you get the outcome you get.”


That means that sometimes these processes, both Affectiva’s and Uber’s, can produce unexpected results. To keep those within acceptable bounds, you need to control the data. When asked if the systems could discriminate — causing skewed results for certain genders, races, or the like — el Kaliouby grew briefly grave.

“It could,” she said, “and we take that very seriously. When we train our models, we make sure the data is balanced.”

People of all shapes, sizes, colors, and genders must be consulted — and are, el Kaliouby said. Lange concurred: “We have to be very cautious about that. It’s our responsibility to be careful what data we put in there.”
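Neither speaker spells out the procedure, but one standard way to “make sure the data is balanced” is to resample training data so no demographic group dominates. A minimal sketch with hypothetical group labels; Affectiva’s actual procedure is not described in the article.

```python
import random
from collections import defaultdict

def balance_by_group(examples, key="group", seed=0):
    """Downsample every group to the size of the smallest one.
    `examples` is a list of dicts like {"group": "label", ...}."""
    buckets = defaultdict(list)
    for ex in examples:
        buckets[ex[key]].append(ex)
    n = min(len(bucket) for bucket in buckets.values())
    rng = random.Random(seed)
    balanced = []
    for bucket in buckets.values():
        balanced.extend(rng.sample(bucket, n))
    return balanced
```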

“Emotional data” seems like a contradiction in terms, but clearly your innermost thoughts, feelings, and habits are of great interest to many in the tech world. Get ready to have your mind read and like it.