Main article: Neuroscience


Watch a thought race across the surface of the brain

02:58 | 18 January

Although neuroscientists have a general idea of what parts of the brain do what, catching them in the act is a difficult proposition. But UC Berkeley researchers have managed to do it, visualizing based on direct measurement the path of a single thought (or at least thread) through the brain.

I should mention here that as someone who studied this stuff (a long time back, but still) and is extremely skeptical about the state of brain-computer interfaces, I’m usually the guy at TechCrunch who rains on parades like this. But this is the real thing, and you can tell because not only is it not flashy, but it was accomplished only through unusually gory means.

Just want to see the thought go? Watch below. But continue reading if you want to hear why this is so cool.

Normal scalp-based electroencephalography (EEG) is easy to do, but it really can only get a very blurry picture of brain activity near the surface, because it has to detect all that through your hair, skin, skull, etc.

What if you could take all that stuff out of the way and put the electrodes right on the brain? That’d be great, except who would volunteer for such an invasive procedure? Turns out, a handful of folks who were already getting open-brain surgery did.

Sixteen epilepsy patients whose brains needed to be examined closely to determine the source of seizures took part in this experiment. Hundreds of electrodes were attached directly to the brain’s surface in a technique called electrocorticography, and the people were asked to do one of several tasks while their brains were being closely monitored.

In the video (and gif) above, the patient was asked to repeat the word “humid.” You can see that the first area to become active is the part of the brain responsible for perceiving the word (the yellow dots, in one of the language centers); then, almost immediately afterwards, a bit of cortex lights up (in blue) corresponding to planning the response, before that response is even fully ready; meanwhile, the prefrontal cortex does a bit of processing on the word (in red) in order to inform that response. It’s all slowed down a bit; the whole thing takes place in less than a second.
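
What the visualization boils down to is ordering electrodes by when their high-frequency activity first ramps up after the stimulus. Here is a minimal sketch of that idea on simulated data; the channel labels, onset times and threshold are invented for illustration and are not taken from the Berkeley recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 200                                   # samples per second
t = np.arange(0, 1.0, 1 / fs)              # one second after the word is heard
# Hypothetical electrodes with hypothetical activation onsets (seconds)
channels = {"auditory_cortex": 0.15, "prefrontal": 0.35, "motor_planning": 0.45}

# Simulated high-gamma power envelopes: a smooth bump at each onset plus noise
envelopes = {
    name: np.exp(-((t - onset) ** 2) / 0.005) + 0.05 * rng.standard_normal(t.size)
    for name, onset in channels.items()
}

def onset_time(envelope, threshold=0.5):
    """First time the envelope exceeds threshold, or inf if it never does."""
    above = np.flatnonzero(envelope > threshold)
    return t[above[0]] if above.size else np.inf

# Order the channels by when they "light up", mimicking the dot sequence in the video
for name in sorted(channels, key=lambda n: onset_time(envelopes[n])):
    print(f"{name}: first active at {onset_time(envelopes[name]):.2f} s")
```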

Essentially you’re watching a single thought process — “hear and repeat this word” — form and execute. Sure, it’s just colored dots, but really, most science is dots — what were you expecting, lightning bolts zooming along axons on camera?

In a more complex example (above), the person was asked to come up with the opposite of the word they heard. In this case, the prefrontal cortex spends considerably longer processing the word and formulating a response, since it must presumably consult memory and so on before sending that information to the motor regions for execution.

“We are trying to look at that little window of time between when things happen in the environment and us behaving in response to it,” explained lead author Avgusta Shestyuk in the Berkeley news release. “This is the first step in looking at how people think and how people come up with different decisions; how people basically behave.”

Ultimately the research was about characterizing the role of the prefrontal cortex in planning and coordinating brain activity, and it appears to have been successful in doing so — but the neuro-inclined among our readership will surely have already intuited this and will want to seek out the paper for details. It was published today in Nature Human Behaviour.

 



Revising the results of last year’s study, Posit Science still effectively combats dementia

19:30 | 16 November

Revising the results of their headline-grabbing, decade-long study, researchers from top universities still believe that a brain training exercise from app developer Posit Science can actually reduce the risk of dementia among older adults.

The groundbreaking study still has incredible implications for the treatment of neurological disorders associated with aging — and helps to validate the use of application-based therapies as a form of treatment and preventative intervention.

Even though dementia rates are dropping in the U.S., between four and five million Americans were being treated for dementia last year.

And the disease is the most expensive in America — a 2010 study from the National Institute on Aging, cited by The New York Times, estimated that in 2010, dementia patients cost the health care system up to $215 billion per year — more than heart disease or cancer, which cost $102 billion and $77 billion respectively.

Now, the results of the revised study have been published in Alzheimer’s & Dementia: Translational Research & Clinical Interventions, a peer-reviewed journal of the Alzheimer’s Association. And they indicate that brain training exercises conducted in a classroom setting can significantly reduce the risk of dementia in older patients.

Image: SEBASTIAN KAULITZKI/Getty Images

Researchers from Indiana University, Pennsylvania State University, the University of South Florida, and Moderna Therapeutics conducted the ten-year study, which tracked 2,802 healthy older adults (with an average age of 74) as they went through three different types of cognitive training.

The randomized study placed one group in a classroom and taught them different memory enhancement strategies; another group received classroom training on basic reasoning skills; and a third group received individualized, computerized brain training in class. A control group received no training at all.

People in the cognitive training groups had ten hour-long training sessions conducted over the first five weeks of the study. They were then tested after six weeks, with follow-up tests after years 1, 2, 3, 5 and 10.

A small subset of those participants received smaller booster sessions in the weeks leading up to the first and third year assessments.

After ten years, the researchers didn’t find a difference in the incidence of dementia between participants in the control group and the reasoning or memory strategy groups. But the brain training group showed marked differences, with researchers finding that the group who received the computerized training had a 29 percent lower incidence of dementia.

“Relatively small amounts of training resulted in a decrease in risk of dementia over the 10-year period of 29 percent, as compared to the control,” said Dr. Jerri Edwards, lead author of the article and a Professor at the University of South Florida, College of Medicine in a statement. “And, when we looked at dose-response, we saw that those who trained more got more protective benefit.”
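
The 29 percent figure is a relative risk reduction: one minus the ratio of dementia incidence in the trained group to incidence in the control group. A quick illustration of the arithmetic, using hypothetical counts chosen only to land near the reported number (they are not the study's data):

```python
def relative_risk_reduction(events_treated, n_treated, events_control, n_control):
    """1 - (incidence in the treated group / incidence in the control group)."""
    return 1 - (events_treated / n_treated) / (events_control / n_control)

# Hypothetical counts: 71 of 700 trained participants vs. 100 of 700 controls
print(f"{relative_risk_reduction(71, 700, 100, 700):.0%} lower incidence")  # 29% lower incidence
```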

To put the study in context, Florida researchers compared the risk reduction from dementia associated with the brain training to the risk reductions blood pressure medications have for heart failure, heart disease or stroke. What they found was that the brain training exercises were two-to-four times more effective by comparison.

“No health professional would suggest that any person with hypertension forego the protection offered by prescribed blood pressure medication,” said Dr. Henry Mahncke, the chief executive of Posit Science, in a statement. “We expect these results will cause the medical community to take a much closer look at the many protective benefits of these exercises in both older and clinical populations.”

The study was conducted in part from funding from the National Institutes of Health as part of its Advanced Cognitive Training for Independent and Vital Elderly Study and refines work from an earlier study published last year.

Posit Science’s brain training exercise was initially developed by Dr. Karlene Ball of the University of Alabama at Birmingham and Dr. Dan Roenker of Western Kentucky University.

The company has the exclusive license for the exercise, which requires users to identify objects displayed simultaneously in the center of their field of vision and in their peripheral vision. As the user identifies objects correctly, the speed at which images are presented accelerates.
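
Mechanically, that is an adaptive staircase: the presentation time shrinks after correct answers and relaxes after misses. The sketch below illustrates the mechanic with made-up starting values and a simulated user; it is not Posit Science's actual implementation.

```python
import random

def run_speed_training(trials=15, start_ms=400, speed_up=0.85, slow_down=1.25, floor_ms=20):
    """Adaptive presentation-speed exercise: flash a central and a peripheral
    target, shorten the flash after each correct identification, lengthen it
    after each miss, and never go below floor_ms."""
    duration_ms = start_ms
    for trial in range(1, trials + 1):
        # Simulated user: the shorter the flash, the more likely a miss
        correct = random.random() < min(0.95, duration_ms / start_ms)
        duration_ms = max(floor_ms, duration_ms * (speed_up if correct else slow_down))
        print(f"trial {trial:2d}: {'hit ' if correct else 'miss'}  next flash {duration_ms:3.0f} ms")

random.seed(7)
run_speed_training()
```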

This isn’t the first study to show how Posit Science’s brain training techniques can help with neurological conditions.

In October, a report published in the Journal of Clinical Psychiatry revealed research showing that brain training can improve cognition for people with bipolar disorder. Based on research from Harvard Medical School and McLean Hospital, the study found that using Posit Science’s BrainHQ app drove improvements in measurements of overall cognitive ability among bipolar patients.

“Problems with memory, executive function, and processing speed are common symptoms of bipolar disorder, and have a direct and negative impact on an individual’s daily functioning and overall quality of life,” said lead investigator Dr. Eve Lewandowski, director of clinical programming for one of McLean’s schizophrenia and bipolar disorder programs and an assistant professor at Harvard Medical School, at the time. “Improving these cognitive dysfunctions is crucial to helping patients with bipolar disorder improve their ability to thrive in the community.”

Both studies are significant and both bring credence to an argument that computer-based therapies and interventions can play a role in treating and preventing neurological disorders.

They also help to combat a real problem with pseudoscientific solutions and hucksters making false claims about other brain training products that have come onto the market. For instance, the brain game maker Lumosity had to pay a $2 million fine for false advertising from the Federal Trade Commission.

But applications like MindMate, which is working with the National Health Service in the UK, or Game On, which was developed by researchers at Cambridge, are showing real promise in helping alleviate or counteract the onset of dementia.

“There are now well over 100 peer-reviewed studies on the benefits of our brain exercises and assessments across varied populations,” said Dr. Mahncke. “The neuroplasticity-based mechanisms that drive beneficial changes across the brain from this type of training are well-documented, and are increasingly understood even by brain scientists not directly involved in their development. This type of training harnesses plasticity to engage the brain in an upward spiral toward better physical and functional brain health.”

Featured Image: Bryce Durbin/TechCrunch

 



Evolve Foundation launches a $100 million fund to find startups working to relieve human suffering

01:58 | 4 November

It seems there’s nothing but bad news out there lately, but here’s some good news — the nonprofit Evolve Foundation has raised $100 million for a new fund called the Conscious Accelerator to combat loneliness, purposelessness, fear and anger spreading throughout the world through technology.

Co-founder of Matrix Partners China Bo Shao will lead the fund and will be looking for entrepreneurs focusing on tech that can help people become more present and aware.

“I know a lot of very wealthy people and many are very anxious or depressed,” he told TechCrunch. A lot of this he attributes to the way we use technology, especially social media networks.

“It becomes this anxiety-inducing activity where I have to think about what’s the next post I should write to get most people to like me and comment and forward,” he said. “It seems that post has you trapped. Within 10 minutes, you are wondering how many people liked this, how many commented. It was so addicting.”

Teens are especially prone to this anxiety, he points out. It turns out it’s a real mental health condition known as Social Media Anxiety Disorder (SMAD).

“Social media is the new sugar or the new smoking of this generation,” Shao told TechCrunch.

He quit social media in September of 2013 but tells TechCrunch he’s been on a journey to find ways to improve his life and others for the last 10 years.

His new fund, as laid out in a recent Medium post announcement, seeks to maximize social good and find technological solutions to the issues now facing us, not just to invest in something with good returns.

Shao plans to use his background as a prominent VC in a multi-billion-dollar firm to find those working on the type of technology to make us less anxious and more centered.

The Conscious Accelerator has already funded a meditation app called Insight Timer. It’s also going to launch a parenting app to help parents raise their children to be resilient in an often confusing world.

He’s also not opposed to funding projects like the one two UC Berkeley students put together to identify Russian and politically toxic Twitter bots — something Twitter has been criticized for not getting a handle on internally.

“The hope is we will attract entrepreneurs who are conscious,” Shao said.

Featured Image: Yann Cœuru/Flickr UNDER A CC BY 2.0 LICENSE

 



DARPA awards $65 million to develop the perfect, tiny two-way brain-computer interface

02:31 | 11 July

With $65 million in new funding, DARPA seeks to develop neural implants that make it possible for the human brain to speak directly to computer interfaces. As part of its Neural Engineering System Design (NESD) program, the agency will fund five academic research groups and one small San Jose-based company to further its goals.

For a taste of what DARPA is interested in, the Brown team will work on creating an interface that would weave together a vast network of “neurograins” that could be worn as implants on top of or in the cerebral cortex. These sensors would be capable of real-time electrical communication with the goal of understanding how the brain processes and decodes spoken language — a brain process so complex and automatic that aspects of it still elude researchers.

Among the six recipients, four are interested in visual perception, with the remaining two examining auditory perception and speech. MIT Technology Review reports that Paradromics, the only company included in the funding news, will receive around $18 million. Similar to the Brown team, Paradromics will use the funding to develop a prosthetic capable of decoding and interpreting speech.

The recipients have a lofty list of goals to aspire to. Foremost is DARPA’s desire to develop “high resolution” neural implants that record signals from as many as one million neurons at once. On top of that, it requests that the device be capable of two-way communication — receiving signals as well as transmitting them back out. And it wants that capability in a package no larger than two nickels stacked on top of one another.
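
To get a feel for the scale implied by recording a million neurons at once, here is a rough back-of-envelope data-rate estimate; the sampling rate and bit depth are assumptions, not DARPA's specification.

```python
neurons = 1_000_000        # NESD's stated recording target
sample_rate_hz = 1_000     # assumed per-channel rate (spike-band recording is often much higher)
bits_per_sample = 10       # assumed ADC resolution

bits_per_second = neurons * sample_rate_hz * bits_per_sample
print(f"raw recording stream: ~{bits_per_second / 1e9:.0f} Gbit/s "
      f"({bits_per_second / 8e9:.2f} GB/s) before any on-implant spike detection or compression")
```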

“By increasing the capacity of advanced neural interfaces to engage more than one million neurons in parallel, NESD aims to enable rich two-way communication with the brain at a scale that will help deepen our understanding of that organ’s underlying biology, complexity, and function,” founding NESD Program Manager Phillip Alvelda said in the announcement.

The full list of NESD grant recipients:

  • Paradromics, Inc. (Dr. Matthew Angle)
  • Brown University (Dr. Arto Nurmikko)
  • Columbia University (Dr. Ken Shepard)
  • Fondation Voir et Entendre (Dr. Jose-Alain Sahel and Dr. Serge Picaud)
  • John B. Pierce Laboratory (Dr. Vincent Pieribone)
  • University of California, Berkeley (Dr. Ehud Isacoff)

Over the course of the four-year program, the research teams will coordinate with the FDA on the long-term safety implications of installing DARPA’s dream implant in and on the human brain.

When cracked, the technology, most often called a brain-computer interface (BCI), will break open a world of possibilities. From rehabilitation from traumatic brain injuries to typing a WhatsApp message using just your thoughts, BCIs have the potential to revolutionize every aspect of modern technology. But even as the money flows in, the challenges of developing this kind of tech remain myriad: How will the hardware be small and noninvasive enough to be worn in everyday life? Considering the privacy nightmare of creating a direct link to the human brain, how will we secure them?

Crafting a viable brain-computer interface is a challenge that weaves together some of tech’s trickiest software problems with its most intractable hardware ones. And while DARPA certainly isn’t the only deep-pocketed entity interested in building the bridge to the near future of bi-directional brain implants, with its defense budget and academic connections, it’s definitely the bionic horse we’d bet on.


Featured Image: majcot/Shutterstock

 



Curious to crack the neural code? Listen carefully

22:30 | 10 July

Perhaps for the first time ever, our noggins have stolen the spotlight from our hips. The media, popular culture, science and business are paying more attention to the brain than ever. Billionaires want to plug into it.

Scientists view neurodegenerative disease as the next frontier after cancer and diabetes, hence increased efforts in understanding the physiology of the brain. Engineers are building computer programs that leverage “deep learning,” which mimics our limited understanding of the human brain. Students are flocking to universities to learn how to build artificial intelligence for everything from search to finance to robotics to self-driving cars.

Public markets are putting premiums on companies doing machine learning of any kind. Investors are throwing bricks of cash at startups building or leveraging artificial intelligence in any way.

Silicon Valley’s obsession with the brain has just begun. Futurists and engineers are anticipating artificial intelligence surpassing human intelligence. Facebook is talking about connecting to users through their thoughts. Serial entrepreneurs Bryan Johnson and Elon Musk have set out to augment human intelligence with AI by pouring tens of millions of their own money into Kernel and Neuralink, respectively.

Bryan and Elon have surrounded themselves with top scientists, including Ted Berger and Philip Sabes, to start by cracking the neural code. Traditionally, scientists seeking to control machines through our thoughts have done so by interfacing with the brain directly, which means surgically implanting probes into the brain.

Scientists Miguel Nicolelis and John Donoghue were among the first to control machines directly with thoughts through implanted probes, also known as electrode arrays. Starting decades ago, they implanted arrays of approximately 100 electrodes into living monkeys, which would translate individual neuron activity to the movements of a nearby robotic arm.

The monkeys learned to control their thoughts to achieve desired arm movement; namely, to grab treats. These arrays were later implanted in quadriplegics to enable them to control cursors on screens and robotic arms. The original motivation, and funding, for this research was intended for treating neurological diseases such as epilepsy, Parkinson’s and Alzheimer’s.

Curing brain disease requires an intimate understanding of the brain at the brain-cell level. Controlling machines was a byproduct of the technology being built, along with access to patients with severe epilepsy who already have their skulls opened.

Science-fiction writers imagine a future of faster-than-light travel and human batteries, but assume that you need to go through the skull to get to the brain. Today, music is a powerful tool for conveying emotion, expression and catalyzing our imagination. Powerful machine learning tools can predict responses from auditory stimulation, and create rich, Total-Recall-like experiences through our ears.

If the goal is to upgrade the screen and keyboard (human<->machine), and become more telepathic (human<->human), do we really need to know what’s going on with our neurons? Let’s use computer engineering as an analogy: physicists are still discovering elementary particles, but we know how to manipulate charge well enough to add numbers, trigger pixels on a screen, capture images from a camera and capture inputs from a keyboard or touchscreen. Do we really need to understand quarks? I’m sure there are hundreds of analogies where we have pretty good control over things without knowing all the underlying physics, which raises the question: Do we really have to crack the neural code to better interface with our brains?

In fact, I would argue that humans have been pseudo-telepathic since the beginning of civilization, and that has been through the art of expression. Artists communicate emotions and experiences by stimulating our senses, which already have a rich, wide-band connection to our brains. Artists stimulate our brains with photographs, paintings, motion pictures, song, dance and stories. The greatest artists have been the most effective at invoking certain emotions: love, fear, anger and lust, to name a few. Great authors empower readers to live through experiences by turning the page, creating richer experiences than what’s delivered through IMAX 3D.

Creative media is a powerful tool of expression to invoke emotion and share experiences. It is rich, expensive to produce and distribute, and disseminated through screens and speakers. Although virtual reality promises an even more interactive and immersive experience, I envision a future where AI can be trained to invoke powerful, personal experiences through sound. Images courtesy of Apple and Samsung.

Unfortunately those great artists are few, and heavy investment goes into a production which needs to be amortized over many consumers (1->n). Can content be personalized for EVERY viewer as opposed to a large audience (1->1)?  For example, can scenes, sounds and smells be customized for individuals? To go a step further; can individuals, in real time, stimulate each other to communicate emotions, ideas and experiences (1<->1)?

We live our lives constantly absorbing from our senses. We experience many fundamental emotions such as desire, fear, love, anger and laughter early in our lifetimes. Furthermore, our lives are becoming more instrumented by cheap sensors and disseminated through social media. Can a neural net trained on our experiences, and our emotional responses to them, create audio stimulation that would invoke a desired sensation? This “sounds” far more practical than encoding that experience in neural code and physically imparting it to our neurons.

Powerful emotions have been inspired by one of the oldest modes of expression: music. Humanity has invented countless instruments to produce many sounds, and, most recently, digitally synthesized sounds that would be difficult or impractical to generate with a physical instrument. We consume music individually through our headphones, and experience energy, calm, melancholy and many other emotions. It is socially acceptable to wear headphones; in fact Apple and Beats by Dre made it fashionable.

If I want to communicate a desired emotion with a counterpart, I can invoke a deep net to generate sounds that, as judged by a deep net sitting on my partner’s device, invoke the desired emotion. There are a handful of startups using AI to create music with desired features that fit in a certain genre. How about taking it a step further to create a desired emotion on behalf of the listener?
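
The loop being described is essentially two models shaking hands: a generator on the sender's side conditioned on a target emotion, and a classifier on the listener's device judging whether a clip actually evokes it. The sketch below shows only that control flow; the generator and classifier are deliberately trivial stand-ins, and every name in it is hypothetical.

```python
import random

EMOTIONS = ["calm", "energy", "melancholy"]

def generate_audio(target_emotion, seed, n_samples=8_000):
    """Stand-in for a generative audio model conditioned on an emotion label."""
    rng = random.Random(f"{target_emotion}-{seed}")
    return [rng.uniform(-1.0, 1.0) for _ in range(n_samples)]

def predicted_emotion(audio):
    """Stand-in for the listener-side model that estimates the evoked emotion."""
    return EMOTIONS[int(sum(abs(x) for x in audio) * 1000) % len(EMOTIONS)]

def send_emotion(target, max_attempts=25):
    """Regenerate clips until the receiver-side model agrees one carries `target`."""
    for attempt in range(1, max_attempts + 1):
        clip = generate_audio(target, seed=attempt)
        if predicted_emotion(clip) == target:
            return attempt
    return None

attempt = send_emotion("calm")
print(f"receiver model accepted a clip on attempt {attempt}" if attempt else "no clip accepted")
```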

The same concept can be applied for human-to-machine interfaces, and vice-versa. Touch screens have already almost obviated keyboards for consuming content; voice and NLP will eventually replace keyboards for generating content.

Deep nets will infer what we “mean” from what we “say.” We will “read” through a combination of hearing through a headset and “seeing” on a display. Simple experiences can be communicated through sound, and richer experiences through a combination of sight and sound that invokes our other senses; just as the sight of a stadium invokes the roar of a crowd and the taste of beer and hot dogs.

Behold our future link to each other and our machines: the humble microphone-equipped earbud. Initially, it will tether to our phones for processing, connectivity and charge. Eventually, advanced compute and energy harvesting will obviate the need for phones, and those earpieces will be our bridge to every human being on the planet, and our access point to the corpus of human knowledge and expression.

The timing of this prediction may be ironic given Jawbone’s expected shutdown. I believe that there is a magical company to be built around what Jawbone should have been: an AI-powered telepathic interface between people, the present, past and what’s expected to be the future.

Featured Image: Ky/Flickr UNDER A CC BY 2.0 LICENSE

 



IBM is working with the Air Force Research Lab on a TrueNorth 64-chip array

15:06 | 23 June

IBM is working with the U.S. Air Force to improve its TrueNorth line of chips designed to optimize the performance of machine learning models at the hardware level. The new 64-chip array will consist of four boards, each with 16 chips. IBM’s chips are still too experimental to be used in mass production,  but they’ve shown promise in running a special type of neural network called a spiking neural network.
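
Spiking networks differ from conventional deep nets in that units communicate through discrete spikes over time rather than continuous activations, which is what makes them a natural fit for low-power, event-driven hardware. A minimal leaky integrate-and-fire neuron, the standard building block of such networks, can be simulated in a few lines; the parameters here are illustrative and have nothing to do with TrueNorth's internals.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward rest,
    integrates the input current, and emits a spike (then resets) whenever it
    crosses the firing threshold."""
    v = v_rest
    spike_times = []
    for step, current in enumerate(input_current):
        v += (-(v - v_rest) + current) * dt / tau
        if v >= v_threshold:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# One second of constant drive, strong enough to make the neuron fire periodically
spikes = simulate_lif(np.full(1000, 1.5))
print(f"{len(spikes)} spikes in 1 s of simulated time, first at {spikes[0] * 1000:.0f} ms")
```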

Though the technology is still in its infancy, IBM believes the low power consumption of its chips could some day bring value in constrained applications like mobile phones and self-driving cars. In an Air Force context, this could include applications in satellites and unmanned aerial vehicles (UAVs).

The chips are designed in such a way that researchers can run a single neural net on multiple data sets or run multiple neural nets on a single data set. Dharmendra Modha, IBM’s Chief Scientist for brain-inspired computing, told me that the U.S. Air Force was the first organization to take an early, small single-chip board from IBM’s lab.

The TrueNorth project began as the DARPA SyNAPSE project back in mid-2008. Since IBM began working on the project, it has been able to increase the number of neurons per system from 256 to 64 million.

IBM TrueNorth unit

Similar to other experimental computing hardware like chips enabling quantum annealing, IBM’s TrueNorth approach has drawn criticism from some leaders of the field who say it offers limited advantages over more conventional custom chips, FPGAs and GPUs.

Back in 2014, Yann LeCun, Facebook’s Director of AI Research, published a public Facebook post expressing skepticism about TrueNorth’s ability to deliver value in a real-world application. He went on to argue that IBM’s chips are designed for spiking neural networks, a type of network that hasn’t shown as much promise as convolutional neural networks on common tasks like object recognition. IBM cites a paper it published showing that CNNs can be mapped efficiently to TrueNorth — but clearly more benchmarking work is needed.

In practice, TrueNorth has been put to use in over 40 institutions, including both universities and government agencies. If you put yourself in the shoes of researchers, why would you not want access to experimental hardware?

We haven’t fully explored all the potential applications of this type of computing, so while it’s very reasonable to be conservative, researchers have little incentive to completely disregard the potential of the project. This means sales for IBM and more much-needed research in the space.

 



Accenture, can I ask you a few questions?

21:30 | 12 April

Hey Accenture, are you aware that your PR firm is pitching your latest corporate beta for a creepy face and emotion-monitoring algorithm as a party trick?

Do you know what a party is? Have you read the definition?

Would you call requiring people to download an app before they can get into an event a party-time kind of thing to do? Would you say that demanding that people scan their faces so they can be recognized by an algorithm is fun times?

Does having that same algorithm watch and record every interaction that happens in the Austin bar or event space you’ve filled with some light snacks and a bunch of free liquor count as rockin like Dokken?

Do good hosts require people to become lab rats in the latest attempt to develop HAL?

Is monitoring patients’ faces in hospitals really the best way to apply the technology outside of your PartyBOT’s par-tays? Or is that also a creepy and intrusive use of technology, when other solutions exist to track actual vital signs?

What do your own office parties look like? Does Sam from the front desk have to try out the newest accounting software to get a drink from the punch bowl? Does Diane have to swear allegiance to Watson before grabbing that tuna roll? Do you throw them at Guy’s American Kitchen & Bar?

Does your soul die a little when you turn people into test subjects for the AI apocalypse?

Maybe after reading this press release, it should?

With the rise of AI and voice recognition, customer experiences can be curated to the next level. Imagine Amazon’s Alexa, but with more emotion, depth and distinction. Accenture Interactive’s PartyBOT is not simply a chatbot – it is equipped with the latest technologies in facial recognition that allow the bot to recognize user feelings through facial expressions, and words resulting in more meaningful conversations.

Featured at this year’s SXSW, the PartyBOT delivered an unparalleled party experience for our guests – detecting their preferences from favorite music, beverages and more. The PartyBOT went so far as to check in on every attending guest at the party, curating tailored activities based on their preferences. Link to video:

(pw: awesome)

But the PartyBOT goes much further than facilitating great party experiences. Its machine learning applications can apply to range of industries from business to healthcare – acting as an agent to support patient recognition and diagnosis in hospitals that can recognize patient distress and seek the appropriate help from doctors.

If you would like to learn more about the PartyBOT, I’m happy to put you in touch with our executives to discuss the applications of our technology and potentially schedule time to see this in our studios.

Featured Image: Rosenfeld Media/Flickr UNDER A CC BY 2.0 LICENSE

 



Neurable nets $2 million to build brain-controlled software for AR and VR

02:21 | 22 December

As consumers get their first taste of voice-controlled home robots and motion-based virtual realities, a quiet swath of technologists are thinking big picture about what comes after that. The answer has major implications for the way we’ll interact with our devices in the near future.

Spoiler alert: We won’t be yelling or waving at them; we’ll be thinking at them.

That answer is something the team of Boston-based startup Neurable spends a lot of time, yes, thinking about. Today, the recent Ann Arbor-to-Cambridge transplant is announcing $2 million in a seed round led by Brian Shin of BOSS Syndicate, a Boston-based alliance of regionally focused angel investors. Other investors include PJC, Loup Ventures and NXT Ventures. Previously, the company took home more than $400,000 after bagging the second-place prize at the Rice Business Plan Competition.

Neurable, founded by former University of Michigan student researchers Ramses Alcaide, Michael Thompson, James Hamet and Adam Molnar, is committed to making nuanced brain-controlled software science fact rather than science fiction, and really the field as a whole isn’t that far off.

“Our vision is to make this the standard human interaction platform for any hardware or software device,” Alcaide told TechCrunch in an interview. “So people can walk into their homes or their offices and take control of their devices using a combination of their augmented reality systems and their brain activity.”

Unlike other neuro-startups like Thync and Interaxon’s Muse, Neurable has no intention to build its own hardware, instead relying on readily available electroencephalography (EEG) devices, which usually resemble a cap or a headband. Equipped with multiple sensors that can detect and map electrical activity in the brain, EEG headsets record neural activity which can then be interpreted by custom software and translated into an output. Such a system is known as a brain computer interface, or BCI. These interfaces are best known for their applications for people with severe disabilities, like ALS and other neuromuscular conditions. The problem is that most of these systems are really slow; it can take 20 seconds for a wearer to execute a simple action, like choosing one of two symbols on a screen.
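
Whatever the headset, the software side of such a BCI starts the same way: cut the continuous multichannel recording into fixed-length epochs around stimulus events, then hand those epochs to a classifier. A minimal sketch of that first step on synthetic data follows; the channel count, sampling rate and event timing are invented, not Neurable's.

```python
import numpy as np

def extract_epochs(eeg, event_samples, fs, window_s=0.8):
    """Cut one fixed-length window per stimulus event out of a
    (channels x samples) recording; each window is what a BCI classifies."""
    n = int(window_s * fs)
    return np.stack([eeg[:, s:s + n] for s in event_samples])

# Synthetic stand-in for a consumer EEG headset: 8 channels, 250 Hz, 10 s of data
fs, n_channels = 250, 8
eeg = np.random.default_rng(0).standard_normal((n_channels, 10 * fs))
events = np.arange(0, 9 * fs, fs)        # one flashed symbol per second

epochs = extract_epochs(eeg, events, fs)
print(epochs.shape)                      # (events, channels, samples) -> (9, 8, 200)
```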

Building on a proof of concept study that Alcaide published in the Journal of Neural Engineering, Neurable’s core innovation is a machine learning method that could cut down the processing wait so that user selection happens in real time. The same new analysis approach will also tackle the BCI signal to noise issue, amplifying the quality of the data to yield a more robust data set. The company’s mission on the whole is an extension of Alcaide’s research at the University of Michigan, where he pursued his Ph.D. in neuroscience within the school’s Direct Brain Interface Laboratory.

“A lot of technology that’s out there right now focuses more on meditation and concentration applications,” Alcaide said. “Because of this they tend to be a lot slower when it comes to an input for controlling devices.” These devices often interpret specific sets of brainwaves (alpha, beta, gamma, etc.) to determine if a user is in a state of focus, for example.

Instead of measuring specific brainwaves, Neurable’s software is powered by what Alcaide calls a “brain shape.” Measuring this shape — really a pattern of responsive brain activity known as an event-related potential — is a way to gauge if a stimulus or other kind of event is important to the user. This brain imaging notion, roughly an observation of cause and effect, has actually been around in some form for at least 40 years.
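
An event-related potential shows up as a characteristic deflection a few hundred milliseconds after a stimulus the user cares about, so a detector can be as simple as correlating each new epoch against an averaged template learned during calibration. The toy below simulates a P300-like bump and does exactly that; it illustrates the "brain shape" idea in general, not Neurable's actual machine learning method.

```python
import numpy as np

rng = np.random.default_rng(42)
fs, window = 250, 200                       # 0.8 s single-channel epochs at 250 Hz
t = np.arange(window) / fs
erp = np.exp(-((t - 0.3) ** 2) / 0.002)     # idealized positive deflection near 300 ms

def make_epoch(is_target):
    """One epoch: background noise, plus the ERP bump if the stimulus mattered."""
    return rng.standard_normal(window) + (2.0 * erp if is_target else 0.0)

# Calibration: average a few labelled target epochs into a template (the "brain shape")
template = np.mean([make_epoch(True) for _ in range(20)], axis=0)

def looks_important(epoch, threshold=0.3):
    """High correlation with the template means the event mattered to the user."""
    return np.corrcoef(epoch, template)[0, 1] > threshold

hits = sum(looks_important(make_epoch(True)) for _ in range(50))
false_alarms = sum(looks_important(make_epoch(False)) for _ in range(50))
print(f"targets flagged: {hits}/50, non-targets flagged: {false_alarms}/50")
```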

The company’s committed hardware agnosticism places a bet that in a few generations, all major augmented and virtual reality headsets will come built-in with EEG sensors. Given that the methodology is reliable and well-tested from decades of medical use, EEG is indeed well-positioned to grow into the future of consumer technology input. Neurable is already in talks with major AR and VR hardware makers, though the company declined to name specific partners.

“For us we’re primarily focused right now on developing our software development kit,” Alcaide said. “In the long game, we want to become that piece of software that runs on every hardware and software application that allows you to interpret brain activity. That’s really what we’re trying to accomplish.”

Instead of using an Oculus Touch controller or voice commands, thoughts alone look likely to steer the future of user interaction. In theory, if and when this kind of thing pans out on a commercial level, brain-monitored inputs could power a limitless array of outputs: anything from making an in-game VR avatar jump to turning off a set of Hue lights. The big unknown is just how long we’ll wait for that future to arrive.

Featured Image: Neurable

 



Humans can now move complex robot arms just by thinking

19:55 | 16 December

Most robotic arm systems required a very complex and very invasive brain implant… until now. Researchers at the University of Minnesota have created a new system that requires only a sexy helmet and a bit of thinking, paving the way to truly mind-controlled robotic tools.

“This is the first time in the world that people can operate a robotic arm to reach and grasp objects in a complex 3D environment using only their thoughts without a brain implant,” said Bin He, biomedical engineering professor and lead researcher on the study. “Just by imagining moving their arms, they were able to move the robotic arm.”

The system requires an EEG helmet and some training. While this sort of technology has been around for a while, the researchers have finally perfected the control of complex systems using signals from the motor cortex. When you think about a movement, sets of neurons in the motor cortex light up. By sorting and reading these signals, the brain-computer interface can translate the imagined motion of your real arm into commands for the robot arm.
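
In broad strokes, a noninvasive motor-imagery decoder watches for power changes in the sensorimotor (mu, roughly 8 to 12 Hz) rhythm over the left and right motor cortex: imagining a right-hand movement suppresses the rhythm over the left hemisphere, and that asymmetry becomes a movement command. The sketch below shows such a mapping on synthetic signals; the electrode names, gains and data are illustrative, not the Minnesota group's decoder.

```python
import numpy as np

fs = 250
freqs = np.fft.rfftfreq(fs, 1 / fs)              # 1 s analysis window
mu_band = (freqs >= 8) & (freqs <= 12)           # sensorimotor "mu" rhythm

def mu_power(window):
    """Mean 8-12 Hz power of a one-second, single-channel window."""
    return (np.abs(np.fft.rfft(window)) ** 2)[mu_band].mean()

def horizontal_command(c3_window, c4_window, gain=1.0):
    """Left/right mu-power asymmetry (electrode C3 over the left motor cortex,
    C4 over the right) mapped to a normalized horizontal velocity for the arm."""
    p3, p4 = mu_power(c3_window), mu_power(c4_window)
    return gain * (p4 - p3) / (p4 + p3)

rng = np.random.default_rng(3)
t = np.arange(fs) / fs
# Synthetic imagery of a right-hand movement: mu suppressed on C3, intact on C4
c3 = 0.3 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(fs)
c4 = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(fs)
print(f"move right with normalized velocity {horizontal_command(c3, c4):+.2f}")
```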

“This is exciting as all subjects accomplished the tasks using a completely noninvasive technique. We see a big potential for this research to help people who are paralyzed or have neurodegenerative diseases to become more independent without a need for surgical implants,” said He.

You can read He’s journal article here.

In previous experiments of this sort a patient who lost both arms in an electrical accident was able to control two robotic arms simultaneously thanks to systems jacked into his nervous system. This new system from He and his team promises to reduce the invasiveness of this sort of robotic control and let anyone control robot arms with their minds.

 



Researchers use machine learning to pull interest signals from readers’ brain waves

19:57 | 14 December

How will people sift and navigate information intelligently in the future, when there’s even more data being pushed at them? Information overload is a problem we struggle with now, so the need for better ways to filter and triage digital content is only going to step up as the MBs keep piling up.

Researchers in Finland have their eye on this problem and have completed an interesting study that used EEG (electroencephalogram) sensors to monitor the brain signals of people reading the text of Wikipedia articles, combining that with machine learning models trained to interpret the EEG data and identify which concepts readers found interesting.

Using this technique the team was able to generate a list of keywords their test readers mentally flagged as informative as they read — which could then, for example, be used to predict other Wikipedia articles relevant to that person.
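
Conceptually, the pipeline pairs every displayed word with the EEG epoch it evoked, scores each epoch with a per-subject relevance classifier, and keeps the highest-scoring words as keywords. The final step is sketched below, with fabricated scores standing in for the classifier's output; the words and numbers are invented, not the HIIT data.

```python
# Hypothetical per-word relevance scores produced by a trained per-subject EEG classifier
word_scores = {
    "electroencephalogram": 0.91, "machine": 0.74, "learning": 0.88, "relevance": 0.69,
    "the": 0.08, "of": 0.05, "Wikipedia": 0.33, "and": 0.04,
}

def flagged_keywords(scores, top_k=4):
    """Keep the words whose evoked brain response scored as most relevant."""
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(flagged_keywords(word_scores))
# ['electroencephalogram', 'learning', 'machine', 'relevance'] -> seed terms for recommending articles
```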

Or, down the line, help filter a social media feed, or flag content that’s of real-time interest to a user of augmented reality, for example.

“We’ve been exploring this idea of involving human signals in the search process,” says researcher Tuukka Ruotsalo. “And now we wanted to take the extreme signal — can we try to read the interest or intentions of the users directly from the brain?”

The team, from the Helsinki Institute for Information Technology (HIIT), reckon it’s the first time researchers have been able to demonstrate the ability to recommend new information based on directly extracting relevance from brain signals.

“There’s a whole bunch of research about brain-computer interfacing but typically… the major area they work on is making explicit commands to computers,” continues Ruotsalo. “So that means that, for example, you want to control the lights of the room and you’re making an explicit pattern, you’re trying explicitly to do something and then the computer tries to read it from the brain.”

“In our case, it evolved naturally — you’re just reading, we’re not telling you to think of pulling your left or right arm whenever you hit a word that interests you. You’re just reading — and because something in the text is relevant for you we can machine learn the brain signal that matches this event that the text evokes and use that,” he adds.

“So it’s purely passive interaction in a sense. You’re just reading and the computer is able to pick up the words that are interesting or relevant for what you’re doing.”

While it’s just one study, with 15 test subjects and an EEG cap that no one would be inclined to put on outside a research lab, it’s an interesting glimpse of what might be possible in future — once there are less cumbersome, higher quality EEG sensors in play (smart thinking cap wearables, anyone?), which could be feasibly combined with machine learning software trained to be capable of a little lightweight mind-reading.

“If you look at the pure signal you don’t see anything. That’s what makes it challenging,” explains Ruotsalo, noting the team was not interpreting interest by tracking any physical body movements such as eye movements. Their understanding of relevance is purely based on their machine learning model parsing the EEG brain waves.

“It’s a really challenging machine learning task. You have to train the system to detect this. There are much easier things like movements or eye movements… that you can actually see in the signal. This one you really have to do the science to reveal it from noise.”

Ruotsalo says the team trained their model on a pretty modest amount of data — with just six documents of an average of 120 words each used to build the model for each test subject. The experiment did also involve a small amount of supervised learning initially — using the first six sentences of each Wikipedia article. In a future study they would like to see if they could achieve results without any supervised learning, according to Ruotsalo.

And while the concept of ‘interest’ is a pretty broad one, and a keyword could be mentally flagged by a reader for all sorts of different reasons, he argues people have effectively been trained to navigate information in this way — because they’ve got used to using digital services that function via a language of just such interest signals.

“This is what we are doing now in the digital world. We are doing thumbs up or we are clicking links and the search engines, for example, whenever we click they think now there is something there. This makes it possible without any of this explicit action — so you really read it from the brain,” he adds.

The implications of being able to take interest signals from a person’s mind as they derive meaning from text are pretty sizable — and potentially a little dystopic, if you consider how marketing messages could be tailored to mesh with a person’s interests as they engage with the content. So, in other words, targeting advertising that’s literally reading your intentions, not just stalking your clicks…

Ruotsalo hopes for other, better commercial uses for the technology in future.

“For example work tasks where you have lots of information coming in and you need to control many things, you need to remember things — this could serve as a sort of backing agent type of software that annotates ‘ok, this was important for the user’ and then could remind the user later on: ‘remember to check these things that you found interesting’,” he suggests. “So sort of user modeling for auto-extracting what has been important in a really information intensive task.

“Even the search type of scenario… you’re interacting with your environment, you have a digital content on the projector and we can see that you’re interested in it — and it could automatically react… and be annotated for you… or to personalize content.”

“We are already leaving all kinds of traces in the digital world. We are researching the documents we have seen in the past, we maybe paste some digital content that we later want to get back to — so all this we could record automatically. And then we express all kinds of preferences for different services, whether it’s by rating them somehow or pressing the ‘I like this’. It seems that all this is now possible by reading it from the brain,” he adds.

It’s not the first time the team has been involved in trying to tackle the search and info overload problem. Ruotsalo was also one of the researchers who helped build a visual discovery search interface called SciNet, covered by TC back in 2015, that was spun out as a commercial company called Etsimo.

“Information retrieval or recommendation it’s a sort of filtering problem, right? So we’re trying to filter the information that is, in the end, interesting or relevant for you,” he says, adding: “I think that’s one of the biggest problems now, with all these new systems, they are just pushing us all kinds of things that we don’t necessarily want.”

Featured Image: Bernhard Lang/Getty Images

 


