People

John Smith, 47

Joined: 28 January 2014

Interests: No data

Jonnathan Coleman, 31

Joined: 18 June 2014

About myself: You may say I'm a dreamer

Interests: Snowboarding, Cycling, Beer

Andrey II, 40

Joined: 08 January 2014

Interests: No data

David

Joined: 05 August 2014

Interests: No data

David Markham, 64

Joined: 13 November 2014

Interests: No data

Michelle Li, 40

Joined: 13 August 2014

Interests: No data

Max Almenas, 51

Joined: 10 August 2014

Interests: No data

29Jan, 30

Joined: 29 January 2014

Interests: No data

s82 s82, 25

Joined: 16 April 2014

Interests: No data

Wicca, 35

Joined: 18 June 2014

Interests: No data

Phebe Paul, 25

Joined: 08 September 2014

Interests: No data

Артем 007, 40

Joined: 29 January 2014

About myself: Yes indeed!

Interests: Norway and Iceland

Alexey Geno, 7

Joined: 25 June 2015

About myself: Hi

Interests: Interest1daasdfasf

Verg Matthews, 66

Joined: 25 June 2015

Interests: No data

CHEMICALS 4 WORLD DEVEN DHELARIYA, 32

Joined: 22 December 2014

Interests: No data



Main article: Neuroscience


Researchers recreate a brain, piece by piece

17:31 | 22 May

Researchers at the University of Tokyo have created a method for growing and connecting single neurons using geometric patterns to route the neurons more precisely, cell by cell.

The article, “Assembly and Connection of Micropatterned Single Neurons for Neuronal Network Formation,” appeared in Micromachines, a journal of molecular machinery.

Thus far researchers have created simple brain matter using “in vitro cultures,” a process that grows neurons haphazardly in a clump. The connections associated with these cultures are random, thereby making the brain tissue difficult to study.

“In vitro culture models are essential tools because they approximate relatively simple neuron networks and are experimentally controllable,” said study author Shotaro Yoshida. “These models have been instrumental to the field for decades. The problem is that they’re very difficult to control, since the neurons tend to make random connections with each other. If we can find methods to synthesize neuron networks in a more controlled fashion, it would likely spur major advances in our understanding of the brain.”

Yoshida and the team looked more closely at how neurons behave and found that they could be trained to connect using microscopic plates made of “synthetic neuron-adhesive material.” They look like little frying pans with extra handles and “when placed onto the microplate, a neuron’s cell body settles onto the circle, while the axon and dendrites – the branches that let neurons communicate with each other – grow lengthwise along the rectangles.”

The researchers then connected the neurons, testing if they would fire simultaneously as predicted.

“What was especially important in this system was to have control over how the neurons connected,” Yoshida said. “We designed the microplates to be movable, so that by pushing them around, we could physically move two neurons right next to each other. Once we placed them together, we could then test whether the neurons were able to transmit a signal.”

It worked.

“This is, to the best of our knowledge, the first time a mobile microplate has been used to morphologically influence neurons and form functional connections,” said investigator Shoji Takeuchi. “We believe the technique will eventually allow us to design simple neuron network models with single-cell resolution. It’s an exciting prospect, as it opens many new avenues of research that aren’t possible with our current suite of experimental tools.”

Unfortunately, this is just the first step for this technology, especially considering the millions of neurons necessary to eat, breathe, and sleep (and use the Internet). It is, however, a good start.

 



This jolly little robot gets goosebumps

01:30 | 17 May

Cornell researchers have made a little robot that can express its emotions through touch, sending out little spikes when it’s scared or even getting goosebumps to express delight or excitement. The prototype, a cute smiling creature with rubber skin, is designed to test touch as an I/O system for robotic projects.

The robot mimics the skin of octopuses, which can turn spiky when threatened.

The researchers, Yuhan Hu, Zhengnan Zhao, Abheek Vimal, and Guy Hoffman, created the robot to experiment with new methods for robot interaction. They compare the skin to “human goosebumps, cats’ neck fur raising, dogs’ back hair, the needles of a porcupine, spiking of a blowfish, or a bird’s ruffled feathers.”

“Research in human-robot interaction shows that a robot’s ability to use nonverbal behavior to communicate affects their potential to be useful to people, and can also have psychological effects. Other reasons include that having a robot use nonverbal behaviors can help make it be perceived as more familiar and less machine-like,” the researchers told IEEE Spectrum.

The skin has multiple configurations and is powered by a computer-controlled elastomer that can inflate and deflate on demand. The goosebumps pop up to match the expression on the robot’s face, allowing humans to better understand what the robot “means” when it raises its little hackles or gets bumpy. I, for one, welcome our bumpy robotic overlords.
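For a sense of how such a texture channel might be driven in software, here is a minimal sketch. The emotion table, texture names and pressure values are all hypothetical assumptions for illustration; the Cornell team has not published this control scheme.

```python
# Hypothetical sketch: mapping a robot's emotional state to skin actuation.
# The states and pressure values are illustrative assumptions, not the
# Cornell team's published control scheme.

from dataclasses import dataclass

@dataclass
class SkinCommand:
    texture: str          # "spikes", "goosebumps", or "flat"
    pressure_kpa: float   # elastomer inflation pressure (assumed units)

# Assumed emotion-to-texture table, loosely following the article:
# spikes for fear, goosebumps for delight or excitement.
EMOTION_TO_SKIN = {
    "scared":  SkinCommand("spikes", 18.0),
    "excited": SkinCommand("goosebumps", 12.0),
    "happy":   SkinCommand("goosebumps", 8.0),
    "neutral": SkinCommand("flat", 0.0),
}

def actuate_skin(emotion: str) -> SkinCommand:
    """Return the inflation command for the current emotion."""
    return EMOTION_TO_SKIN.get(emotion, EMOTION_TO_SKIN["neutral"])

if __name__ == "__main__":
    for mood in ("scared", "excited", "neutral"):
        cmd = actuate_skin(mood)
        print(f"{mood:8s} -> {cmd.texture:10s} at {cmd.pressure_kpa} kPa")
```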

 



BrainQ raises $5.3M to treat neurological disorders with the help of AI

16:30 | 15 May

BrainQ, an Israel-based startup that aims to help stroke victims and those with spinal cord injuries recover with the help of a personalized electromagnetic treatment protocol, today announced that it has raised a $5.3 million funding round on top of the $3.5 million the company previously raised. The company’s investors include Qure Ventures, crowdfunding platform OurCrowd.com, Norma Investments, IT-Farm and a number of angel investors, including Valtech Cardio founder and CEO Amir Gross.

When we last talked to BrainQ earlier this year, the team was working on two human clinical trials for stroke patients in Israel. At that time, the company had closed its first funding round and had also recently started to work with Google’s Launchpad Accelerator.

The general idea behind BrainQ is to use the patient’s brainwaves to generate a tailored treatment protocol. No AI company would be complete without data — it’s what drives these algorithms, after all — and the company says it owns one of the largest Brain Computer Interface-based EEG databases for motor tasks. It’s that database that allows it to interpret the patient’s brain waves and generate its treatment protocol.

BrainQ EEG reader device
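BrainQ’s actual pipeline is proprietary, but the general idea described above can be sketched: extract a dominant frequency from motor-task EEG and use it to parameterize a treatment. Everything below, from the simulated trace to the protocol tuning, is an illustrative assumption rather than the company’s method.

```python
# Conceptual sketch only: BrainQ's real pipeline is proprietary. This toy
# version extracts a dominant frequency from motor-task EEG and uses it to
# parameterize a (hypothetical) stimulation protocol.

import numpy as np

def dominant_frequency(eeg: np.ndarray, fs: float, band=(8.0, 30.0)) -> float:
    """Return the peak frequency (Hz) within `band` of a single-channel EEG trace."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(freqs[mask][np.argmax(spectrum[mask])])

# Simulated 10-second EEG trace at 256 Hz: a 12 Hz motor rhythm plus noise.
fs = 256.0
t = np.arange(0, 10, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 12.0 * t) + 0.5 * rng.standard_normal(t.size)

peak = dominant_frequency(eeg, fs)
print(f"Dominant motor-band frequency: {peak:.1f} Hz")
print(f"Hypothetical stimulation protocol tuned to {peak:.1f} Hz")
```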

“We are on the verge of a new era where AI-based precision medicine will be used to treat neurodisorders, which do not have a sufficient solution to date,” said BrainQ CEO Yotam Drechsler in today’s announcement. “At BrainQ, we are thrilled by the opportunity to bring this vision to life in the world of neuro-recovery. In a short time, we have already achieved significant results and are looking forward to the opportunity to push our technology and expand our operations, further positioning BrainQ as a leader in the world of BCI-based precision medicine.”

As is typical for Israeli startups, the team’s background is quite impressive and includes former members of the country’s elite intelligence units and academics with a background in AI and neuroscience.

 



Watch a thought race across the surface of the brain

02:58 | 18 January

Although neuroscientists have a general idea of what parts of the brain do what, catching them in the act is a difficult proposition. But UC Berkeley researchers have managed to do it, visualizing, based on direct measurement, the path of a single thought (or at least a thread of one) through the brain.

I should mention here that as someone who studied this stuff (a long time back, but still) and is extremely skeptical about the state of brain-computer interfaces, I’m usually the guy at TechCrunch who rains on parades like this. But this is the real thing, and you can tell because not only is it not flashy, but it was accomplished only through unusually gory means.

Just want to see the thought go? Watch below. But continue reading if you want to hear why this is so cool.

Normal scalp-based electroencephalography (EEG) is easy to do, but it can only capture a very blurry picture of activity near the brain’s surface, because it has to detect everything through your hair, skin, skull and so on.

What if you could take all that stuff out of the way and put the electrodes right on the brain? That’d be great, except who would volunteer for such an invasive procedure? Turns out, a handful of folks who were already getting open-brain surgery did.

Sixteen epilepsy patients whose brains needed to be examined closely to determine the source of seizures took part in this experiment. Hundreds of electrodes were attached directly to the brain’s surface in a technique called electrocorticography, and the people were asked to do one of several tasks while their brains were being closely monitored.

In the video (and gif) above, the patient was asked to repeat the word “humid.” You can see that the first region to become active is the part of the brain responsible for perceiving the word (the yellow dots, in one of the language centers); then, almost immediately afterwards, a bit of cortex lights up (in blue) corresponding to planning the response, before that response is even fully ready; meanwhile, the prefrontal cortex does a bit of processing on the word (in red) in order to inform that response. It’s all slowed down a bit; the whole thing takes place in less than a second.
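For the analysis-minded, the computation behind such a visualization can be sketched as estimating, per electrode, when activity first rises above baseline. The study worked with band power from hundreds of ECoG channels; the simulated envelopes, region names and thresholds below are stand-ins, not the paper’s actual analysis.

```python
# Simplified sketch of the kind of analysis behind the visualization: estimate
# when each ECoG electrode "activates" by thresholding its smoothed activity
# envelope. The real study used high-frequency band power; values here are
# illustrative.

import numpy as np

def onset_latency(envelope: np.ndarray, fs: float, n_sigma: float = 5.0):
    """First time (s) the envelope exceeds baseline mean + n_sigma * std."""
    baseline = envelope[: int(0.2 * fs)]       # assume first 200 ms is baseline
    thresh = baseline.mean() + n_sigma * baseline.std()
    above = np.nonzero(envelope > thresh)[0]
    return above[0] / fs if above.size else None

fs = 1000.0                                    # 1 kHz sampling (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)

# Fake three electrodes activating 150 ms, 300 ms and 450 ms after the word.
latencies = {}
for name, onset in [("auditory", 0.15), ("prefrontal", 0.30), ("motor", 0.45)]:
    envelope = 0.1 * rng.standard_normal(t.size)
    envelope[t > onset] += 1.0                 # step increase in activity
    latencies[name] = onset_latency(envelope, fs)

for name, lat in sorted(latencies.items(), key=lambda kv: kv[1]):
    print(f"{name:10s} activates at ~{lat * 1000:.0f} ms")
```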

Essentially you’re watching a single thought process — “hear and repeat this word” — form and execute. Sure, it’s just colored dots, but really, most science is dots — what were you expecting, lightning bolts zooming along axons on camera?

In a more complex example (above), the person was asked to come up with the opposite of the word they hear. In this case, the prefrontal cortex spends considerably longer processing the word and formulating a response, since it must presumably consult memory and so on before sending that information to the motor regions for execution.

“We are trying to look at that little window of time between when things happen in the environment and us behaving in response to it,” explained lead author Avgusta Shestyuk in the Berkeley news release. “This is the first step in looking at how people think and how people come up with different decisions; how people basically behave.”

Ultimately the research was about characterizing the role of the prefrontal cortex in planning and coordinating brain activity, and it appears to have been successful in doing so — but the neuro-inclined among our readership will surely have already intuited this and will want to seek out the paper for details. It was published today in Nature Human Behaviour.

 



Revising the results of last year’s study, Posit Science still effectively combats dementia

19:30 | 16 November

Revising the results of their headline-grabbing, decade-long study, researchers from top universities still believe that a brain training exercise from app developer Posit Science can actually reduce the risk of dementia among older adults.

The groundbreaking study still has incredible implications for the treatment of neurological disorders associated with aging — and helps to validate the use of application-based therapies as a form of treatment and preventative intervention.

Even though dementia rates are dropping in the U.S., between four and five million Americans were being treated for dementia last year.

And the disease is the most expensive in America — a 2010 study from the National Institute on Aging, cited by The New York Times, estimated that dementia patients cost the health care system up to $215 billion per year — more than heart disease or cancer, which cost $102 billion and $77 billion respectively.

Now, the results of the revised study have been published in Alzheimer’s & Dementia: Translational Research & Clinical Interventions, a peer-reviewed journal of the Alzheimer’s Association. And they indicate that brain training exercises conducted in a classroom setting can significantly reduce the risk of dementia in older patients.

Image: SEBASTIAN KAULITZKI/Getty Images

Researchers from Indiana University, Pennsylvania State University, the University of South Florida, and Moderna Therapeutics conducted the ten-year study, which tracked 2,802 healthy older adults (with an average age of 74) as they went through three different types of cognitive training.

The randomized study placed one group in a classroom and taught them different memory enhancement strategies; another group received classroom training on basic reasoning skills; a third group received individualized, computerized brain training in class. A control group received no training at all.

People in the cognitive training groups had ten one-hour training sessions conducted over the first five weeks of the study. They were then tested after six weeks, with follow-up tests after years 1, 2, 3, 5 and 10.

A small subset of those participants received smaller booster sessions in the weeks leading up to the first and third year assessments.

After ten years, the researchers didn’t find a difference in the incidence of dementia between participants in the control group and the reasoning or memory strategy groups. But the brain training group showed marked differences, with researchers finding that the group who received the computerized training had a 29 percent lower incidence of dementia.

“Relatively small amounts of training resulted in a decrease in risk of dementia over the 10-year period of 29 percent, as compared to the control,” said Dr. Jerri Edwards, lead author of the article and a Professor at the University of South Florida, College of Medicine in a statement. “And, when we looked at dose-response, we saw that those who trained more got more protective benefit.”

To put the study in context, the Florida researchers compared the dementia risk reduction associated with the brain training to the risk reductions that blood pressure medications offer for heart failure, heart disease or stroke. They found that the brain training exercises were two to four times more effective by comparison.

“No health professional would suggest that any person with hypertension forego the protection offered by prescribed blood pressure medication,” said Dr. Henry Mahncke, the chief executive of Posit Science, in a statement. “We expect these results will cause the medical community to take a much closer look at the many protective benefits of these exercises in both older and clinical populations.”

The study was funded in part by the National Institutes of Health as part of its Advanced Cognitive Training for Independent and Vital Elderly study, and it refines work from an earlier study published last year.

Posit Science’s brain training exercise was initially developed by Dr. Karlene Ball of the University of Alabama at Birmingham and Dr. Dan Roenker of Western Kentucky University.

The company has the exclusive license for the exercise, which requires users to identify objects displayed simultaneously in the center of their field of vision and in their peripheral vision. As the user identifies objects correctly, the speed at which images are presented accelerates.
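The speed-up-on-success mechanic described above is commonly implemented as an adaptive staircase. The sketch below shows that generic pattern, not Posit Science’s actual code; the psychometric curve, step sizes and timing floor are invented for illustration.

```python
# Generic adaptive-staircase sketch of the speed-of-processing mechanic the
# article describes: presentation time shrinks on correct answers and grows
# on mistakes. This is not Posit Science's implementation, just the standard
# 1-up/1-down staircase pattern.

import random

def run_training(trials: int = 20, start_ms: float = 500.0,
                 floor_ms: float = 16.0, step: float = 0.85) -> float:
    display_ms = start_ms
    for trial in range(1, trials + 1):
        # Stand-in for the real task: chance of a correct response rises
        # with longer display times (purely illustrative psychometric curve).
        p_correct = min(0.95, display_ms / 400.0)
        correct = random.random() < p_correct
        if correct:
            display_ms = max(floor_ms, display_ms * step)   # speed up
        else:
            display_ms = display_ms / step                  # ease off
        print(f"trial {trial:2d}: shown for {display_ms:5.1f} ms "
              f"({'correct' if correct else 'missed'})")
    return display_ms

if __name__ == "__main__":
    random.seed(1)
    final = run_training()
    print(f"final presentation time: {final:.1f} ms")
```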

This isn’t the first study to show how Posit Science’s brain training techniques can help with neurological conditions.

In October, a report published in the Journal of Clinical Psychiatry presented research showing that brain training can improve cognition for people with bipolar disorder. Based on research from Harvard Medical School and McLean Hospital, the study found that using Posit Science’s BrainHQ app drove improvements in measurements of overall cognitive ability among bipolar patients.

“Problems with memory, executive function, and processing speed are common symptoms of bipolar disorder, and have a direct and negative impact on an individual’s daily functioning and overall quality of life,” said lead investigator Dr. Eve Lewandowski, director of clinical programming for one of McLean’s schizophrenia and bipolar disorder programs and an assistant professor at Harvard Medical School, at the time. “Improving these cognitive dysfunctions is crucial to helping patients with bipolar disorder improve their ability to thrive in the community.”

Both studies are significant, and both lend credence to the argument that computer-based therapies and interventions can play a role in treating and preventing neurological disorders.

They also help to combat a real problem with pseudoscientific solutions and hucksters making false claims about other brain training products that have come onto the market. For instance, the brain game maker Lumosity had to pay a $2 million fine to the Federal Trade Commission for false advertising.

But applications like MindMate, which is working with the National Health Service in the UK, or Game On, which was developed by researchers at Cambridge, are showing real promise in helping alleviate or counteract the onset of dementia.

“There are now well over 100 peer-reviewed studies on the benefits of our brain exercises and assessments across varied populations,” said Dr. Mahncke. “The neuroplasticity-based mechanisms that drive beneficial changes across the brain from this type of training are well-documented, and are increasingly understood even by brain scientists not directly involved in their development. This type of training harnesses plasticity to engage the brain in an upward spiral toward better physical and functional brain health.”

Featured Image: Bryce Durbin/TechCrunch

 



Evolve Foundation launches a $100 million fund to find startups working to relieve human suffering

01:58 | 4 November

It seems there’s nothing but bad news out there lately, but here’s some good news — the nonprofit Evolve Foundation has raised $100 million for a new fund called the Conscious Accelerator to use technology to combat the loneliness, purposelessness, fear and anger spreading throughout the world.

Co-founder of Matrix Partners China Bo Shao will lead the fund and will be looking for entrepreneurs focusing on tech that can help people become more present and aware.

“I know a lot of very wealthy people and many are very anxious or depressed,” he told TechCrunch. A lot of this he attributes to the way we use technology, especially social media networks.

“It becomes this anxiety-inducing activity where I have to think about what’s the next post I should write to get most people to like me and comment and forward,” he said. “It seems that post has you trapped. Within 10 minutes, you are wondering how many people liked this, how many commented. It was so addicting.”

Teens are especially prone to this anxiety, he points out. It turns out it’s a real mental health condition known as Social Media Anxiety Disorder (SMAD).

“Social media is the new sugar or the new smoking of this generation,” Shao told TechCrunch.

He quit social media in September of 2013 but tells TechCrunch he’s been on a journey to find ways to improve his own life and others’ for the last 10 years.

His new fund, as laid out in a recent Medium post announcement, seeks to maximize social good by finding technological solutions to the issues now facing us, not just to invest in something with good returns.

Shao plans to use his background as a prominent VC in a multi-billion-dollar firm to find those working on the type of technology to make us less anxious and more centered.

The Conscious Accelerator has already funded a meditation app called Insight Timer. It’s also going to launch a parenting app to help parents raise their children to be resilient in an often confusing world.

He’s also not opposed to funding projects like the one two UC Berkeley students put together to identify Russian and politically toxic Twitter bots — something Twitter has been criticized for not getting a handle on internally.

“The hope is we will attract entrepreneurs who are conscious,” Shao said.

Featured Image: Yann Cœuru/Flickr UNDER A CC BY 2.0 LICENSE

 



DARPA awards $65 million to develop the perfect, tiny two-way brain-computer interface

02:31 | 11 July

With $65 million in new funding, DARPA seeks to develop neural implants that make it possible for the human brain to speak directly to computer interfaces. As part of its Neural Engineering System Design (NESD) program, the agency will fund five academic research groups and one small San Jose-based company to further its goals.

For a taste of what DARPA is interested in, the Brown team will work on creating an interface that would weave together a vast network of “neurograins” that could be worn as implants on top of or in the cerebral cortex. These sensors would be capable of real-time electrical communication with the goal of understanding how the brain processes and decodes spoken language — a brain process so complex and automatic that aspects of it still elude researchers.

Among the six recipients, four are interested in visual perception, with the remaining two examining auditory perception and speech. MIT Technology Review reports that Paradromics, the only company included in the funding news, will receive around $18 million. Similar to the Brown team, Paradromics will use the funding to develop a prosthetic capable of decoding and interpreting speech.

The recipients have a lofty list of goals to aspire to. Foremost is DARPA’s desire to develop “high resolution” neural implants that record signals from as many as one million neurons at once. On top of that, it requests that the device be capable of two-way communication — receiving signals as well as transmitting them back out. And it wants that capability in a package no larger than two nickels stacked on top of one another.
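A quick back-of-envelope calculation shows why those requirements are so demanding. The sampling rate and bit depth below are common figures for spike recording, assumed for illustration rather than taken from the NESD announcement.

```python
# Back-of-envelope data-rate estimate for DARPA's stated goal of recording
# one million neurons at once. The 30 kHz rate and 10-bit depth are typical
# values for spike recording, not numbers from the NESD announcement.

channels = 1_000_000
sample_rate_hz = 30_000      # common rate for resolving action potentials (assumed)
bits_per_sample = 10         # assumed ADC resolution

bits_per_second = channels * sample_rate_hz * bits_per_sample
gigabytes_per_second = bits_per_second / 8 / 1e9
print(f"Raw data rate: {gigabytes_per_second:.1f} GB/s")
# ~37.5 GB/s, which is why aggressive on-implant compression or spike
# detection would be needed before anything leaves a package the size of
# two stacked nickels.
```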

“By increasing the capacity of advanced neural interfaces to engage more than one million neurons in parallel, NESD aims to enable rich two-way communication with the brain at a scale that will help deepen our understanding of that organ’s underlying biology, complexity, and function,” founding NESD Program Manager Phillip Alvelda said in the announcement.

The full list of NESD grant recipients:

  • Paradromics, Inc. (Dr. Matthew Angle)
  • Brown University (Dr. Arto Nurmikko)
  • Columbia University (Dr. Ken Shepard)
  • Fondation Voir et Entendre (Dr. Jose-Alain Sahel and Dr. Serge Picaud)
  • John B. Pierce Laboratory (Dr. Vincent Pieribone)
  • University of California, Berkeley (Dr. Ehud Isacoff)

Over the course of the four-year program, the research teams will coordinate with the FDA on the long-term safety implications of installing DARPA’s dream implant in and on the human brain.

When cracked, the technology, most often called a brain-computer interface (BCI), will break open a world of possibilities. From rehabilitation from traumatic brain injuries to typing a WhatsApp message using just your thoughts, BCIs have the potential to revolutionize every aspect of modern technology. But even as the money flows in, the challenges of developing this kind of tech remain myriad: How will the hardware be small and noninvasive enough to be worn in everyday life? Considering the privacy nightmare of creating a direct link to the human brain, how will we secure them?

Crafting a viable brain-computer interface is a challenge that weaves together some of tech’s trickiest software problems with its most intractable hardware ones. And while DARPA certainly isn’t the only deep-pocketed entity interested in building the bridge to the near future of bi-directional brain implants, with its defense budget and academic connections, it’s definitely the bionic horse we’d bet on.


Featured Image: majcot/Shutterstock

 



Curious to crack the neural code? Listen carefully

22:30 | 10 July

Perhaps for the first time ever, our noggins have stolen the spotlight from our hips. The media, popular culture, science and business are paying more attention to the brain than ever. Billionaires want to plug into it.

Scientists view neurodegenerative disease as the next frontier after cancer and diabetes, hence increased efforts in understanding the physiology of the brain. Engineers are building computer programs that leverage “deep learning,” which mimics our limited understanding of the human brain. Students are flocking to universities to learn how to build artificial intelligence for everything from search to finance to robotics to self-driving cars.

Public markets are putting premiums on companies doing machine learning of any kind. Investors are throwing bricks of cash at startups building or leveraging artificial intelligence in any way.

Silicon Valley’s obsession with the brain has just begun. Futurists and engineers are anticipating artificial intelligence surpassing human intelligence. Facebook is talking about connecting to users through their thoughts. Serial entrepreneurs Bryan Johnson and Elon Musk have set out to augment human intelligence with AI by pouring tens of millions of their own money into Kernel and Neuralink, respectively.

Bryan and Elon have surrounded themselves with top scientists, including Ted Berger and Philip Sabes, to start by cracking the neural code. Traditionally, scientists seeking to control machines through our thoughts have done so by interfacing with the brain directly, which means surgically implanting probes into the brain.

Scientists Miguel Nicolelis and John Donoghue were among the first to control machines directly with thoughts through implanted probes, also known as electrode arrays. Starting decades ago, they implanted arrays of approximately 100 electrodes into living monkeys, which would translate individual neuron activity to the movements of a nearby robotic arm.

The monkeys learned to control their thoughts to achieve desired arm movement; namely, to grab treats. These arrays were later implanted in quadriplegics to enable them to control cursors on screens and robotic arms. The original motivation, and funding, for this research was intended for treating neurological diseases such as epilepsy, Parkinson’s and Alzheimer’s.

Curing brain disease requires an intimate understanding of the brain at the brain-cell level. Controlling machines was a byproduct of the technology being built, along with access to patients with severe epilepsy who already had their skulls opened.

Science-fiction writers imagine a future of faster-than-light travel and human batteries, but assume that you need to go through the skull to get to the brain. Today, music is a powerful tool for conveying emotion, expression and catalyzing our imagination. Powerful machine learning tools can predict responses from auditory stimulation, and create rich, Total-Recall-like experiences through our ears.

If the goal is to upgrade the screen and keyboard (human<->machine), and become more telepathic (human<->human), do we really need to know what’s going on with our neurons? Let’s use computer engineering as an analogy: physicists are still discovering elementary particles, but we know how to manipulate charge well enough to add numbers, trigger pixels on a screen, capture images from a camera and capture inputs from a keyboard or touchscreen. Do we really need to understand quarks? I’m sure there are hundreds of analogies where we have pretty good control over things without knowing all the underlying physics, which raises the question: do we really have to crack the neural code to better interface with our brains?

In fact, I would argue that humans have been pseudo-telepathic since the beginning of civilization, and that has been in the art of expression. Artists communicate emotions and experiences by stimulating our senses, which already have a rich, wide-band connection to our brains. Artists stimulate our brains with photographs, paintings, motion pictures, song, dance and stories. The greatest artists have been the most effective at invoking certain emotions; love, fear, anger and lust to name a few. Great authors empower readers to live through experiences by turning the page, creating richer experiences than what’s delivered through IMAX 3D.

Creative media is a powerful tool of expression to invoke emotion and share experiences. It is rich, expensive to produce and distribute, and disseminated through screens and speakers. Although virtual reality promises an even more interactive and immersive experience, I envision a future where AI can be trained to invoke powerful, personal experiences through sound. Images courtesy of Apple and Samsung.

Unfortunately those great artists are few, and heavy investment goes into a production which needs to be amortized over many consumers (1->n). Can content be personalized for EVERY viewer as opposed to a large audience (1->1)? For example, can scenes, sounds and smells be customized for individuals? To go a step further: can individuals, in real time, stimulate each other to communicate emotions, ideas and experiences (1<->1)?

We live our lives constantly absorbing from our senses. We experience many fundamental emotions such as desire, fear, love, anger and laughter early in our lifetimes. Furthermore, our lives are becoming more instrumented by cheap sensors and disseminated through social media. Could a neural net trained on our experiences, and our emotional responses to them, create audio stimulation that invokes a desired sensation? This “sounds” far more practical than encoding that experience in neural code and physically imparting it to our neurons.

Powerful emotions have been inspired by one of the oldest modes of expression: music. Humanity has invented countless instruments to produce many sounds, and, most recently, digitally synthesized sounds that would be difficult or impractical to generate with a physical instrument. We consume music individually through our headphones, and experience energy, calm, melancholy and many other emotions. It is socially acceptable to wear headphones; in fact Apple and Beats by Dre made it fashionable.

If I want to communicate a desired emotion with a counterpart, I can invoke a deep net to generate sounds that, as per a deep net sitting on my partner’s device, invokes the desired emotion. There are a handful of startups using AI to create music with desired features that fit in a certain genre. How about taking it a step further to create a desired emotion on behalf of the listener?
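As a toy illustration of “sound with a desired feature,” an emotion label can select synthesis parameters directly; a real system would presumably learn such a mapping with a generative model. The emotion-to-parameter table, scales and tempos below are invented for the sketch.

```python
# Toy illustration of "sound with a desired emotional feature." Real systems
# would use a trained generative model; here an emotion label just selects
# tempo and pitch set, and a few sine-wave notes are synthesized with NumPy.

import numpy as np

EMOTION_PARAMS = {             # illustrative mapping, not a learned model
    "calm":    {"bpm": 60,  "scale": [261.63, 293.66, 329.63, 392.00]},  # C major subset
    "excited": {"bpm": 140, "scale": [261.63, 311.13, 349.23, 415.30]},  # C minor-ish subset
}

def synthesize(emotion: str, n_notes: int = 8, fs: int = 22050) -> np.ndarray:
    params = EMOTION_PARAMS[emotion]
    note_dur = 60.0 / params["bpm"]
    rng = np.random.default_rng(42)
    notes = []
    for freq in rng.choice(params["scale"], size=n_notes):
        t = np.arange(0, note_dur, 1.0 / fs)
        envelope = np.exp(-3.0 * t / note_dur)          # simple decay envelope
        notes.append(envelope * np.sin(2 * np.pi * freq * t))
    return np.concatenate(notes)

audio = synthesize("calm")
print(f"Generated {audio.size / 22050:.1f} s of 'calm' audio samples")
```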

The same concept can be applied for human-to-machine interfaces, and vice-versa. Touch screens have already almost obviated keyboards for consuming content; voice and NLP will eventually replace keyboards for generating content.

Deep nets will infer what we “mean” from what we “say.” We will “read” through a combination of hearing through a headset and “seeing” on a display. Simple experiences can be communicated through sound, and richer experiences through a combination of sight and sound that invoke our other senses; just as the sight of a stadium invokes the roar of a crowd and the taste of beer and hot dogs.

Behold our future link to each other and our machines: the humble microphone-equipped earbud. Initially, it will tether to our phones for processing, connectivity and charge. Eventually, advanced compute and energy harvesting will obviate the need for phones, and those earpieces will be our bridge to every human being on the planet, and our access to the corpus of human knowledge and expression.

The timing of this prediction may be ironic given Jawbone’s expected shutdown. I believe that there is a magical company to be built around what Jawbone should have been: an AI-powered telepathic interface between people, the present, past and what’s expected to be the future.

Featured Image: Ky/Flickr UNDER A CC BY 2.0 LICENSE

 



IBM is working with the Air Force Research Lab on a TrueNorth 64-chip array

15:06 | 23 June

IBM is working with the U.S. Air Force to improve its TrueNorth line of chips designed to optimize the performance of machine learning models at the hardware level. The new 64-chip array will consist of four boards, each with 16 chips. IBM’s chips are still too experimental to be used in mass production, but they’ve shown promise in running a special type of neural network called a spiking neural network.
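For readers unfamiliar with the term, a spiking neuron integrates input over time and fires discrete pulses rather than emitting continuous activations. Below is a minimal leaky integrate-and-fire simulation, the textbook model rather than TrueNorth’s actual neuron circuit; all constants are illustrative.

```python
# What "spiking" means in practice: a minimal leaky integrate-and-fire neuron
# (the textbook model, not TrueNorth's actual neuron circuit). Membrane
# potential integrates input current, leaks toward rest, and emits a discrete
# spike when it crosses threshold.

import numpy as np

def simulate_lif(current: np.ndarray, dt: float = 1e-3, tau: float = 0.02,
                 v_rest: float = 0.0, v_thresh: float = 1.0, v_reset: float = 0.0):
    v = v_rest
    spikes = []
    for step, i_in in enumerate(current):
        v += dt / tau * (v_rest - v + i_in)   # leaky integration
        if v >= v_thresh:
            spikes.append(step * dt)          # record spike time (s)
            v = v_reset                       # reset after firing
    return spikes

# Constant input current for 200 ms produces a regular spike train.
current = np.full(200, 1.5)
print("spike times (ms):", [round(t * 1000) for t in simulate_lif(current)])
```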

Though the technology is still in its infancy, IBM believes the low power consumption of its chips could some day bring value in constrained applications like mobile phones and self-driving cars. In an Air Force context, this could include applications in satellites and unmanned aerial vehicles (UAVs).

The chips are designed in such a way that researchers can run a single neural net on multiple data sets or run multiple neural nets on a single data set. Dharmendra Modha, IBM’s Chief Scientist for brain-inspired computing, told me that the U.S. Air Force was the first organization to take an early, small single-chip board from IBM’s lab.

The TrueNorth project began as the DARPA SyNAPSE project back in mid-2008. Since IBM began working on the project, it has been able to increase the number of neurons per system from 256 to 64 million.

IBM TrueNorth unit

Similar to other experimental computing hardware like chips enabling quantum annealing, IBM’s TrueNorth approach has drawn criticism from some leaders of the field who say it offers limited advantages over more conventional custom chips, FPGAs and GPUs.

Back in 2014, Yann LeCun, Facebook’s Director of AI Research, published a public Facebook post expressing skepticism about TrueNorth’s ability to deliver value in a real-world application. He went on to argue that IBM’s chips are designed for spiking neural networks, a type of network that hasn’t shown as much promise as convolutional neural networks on common tasks like object recognition. IBM cites a paper it published showing that CNNs can be mapped efficiently to TrueNorth — but clearly more benchmarking work is needed.

In practice, TrueNorth has been put to use in over 40 institutions, including both universities and government agencies. If you put yourself in the shoes of researchers, why would you not want access to experimental hardware?

We haven’t fully explored all the potential applications of this type of computing, so while it’s very reasonable to be conservative, researchers have little incentive to completely disregard the potential of the project. This means sales for IBM and more much-needed research in the space.

 



Accenture, can I ask you a few questions?

21:30 | 12 April

Hey Accenture, are you aware that your PR firm is pitching your latest corporate beta for a creepy face and emotion-monitoring algorithm as a party trick?

Do you know what a party is? Have you read the definition?

Would you call requiring people to download an app before they can get into an event a party-time kind of thing to do? Would you say that demanding that people scan their faces so they can be recognized by an algorithm is fun times?

Does having that same algorithm watch and record every interaction that happens in the Austin bar or event space you’ve filled with some light snacks and a bunch of free liquor count as rockin’ like Dokken?

Do good hosts require people to become lab rats in the latest attempt to develop HAL?

Is monitoring patients’ faces in hospitals really the best way to apply the technology outside of your PartyBOT’s par-tays? Or is that also a creepy and intrusive use of technology, when other solutions exist to track actual vital signs?

What do your own office parties look like? Does Sam from the front desk have to try out the newest accounting software to get a drink from the punch bowl? Does Diane have to swear allegiance to Watson before grabbing that tuna roll? Do you throw them at Guy’s American Kitchen & Bar?

Does your soul die a little when you turn people into test subjects for the AI apocalypse?

Maybe after reading this press release, it should?

With the rise of AI and voice recognition, customer experiences can be curated to the next level. Imagine Amazon’s Alexa, but with more emotion, depth and distinction. Accenture Interactive’s PartyBOT is not simply a chatbot – it is equipped with the latest technologies in facial recognition that allow the bot to recognize user feelings through facial expressions, and words resulting in more meaningful conversations.

Featured at this year’s SXSW, the PartyBOT delivered an unparalleled party experience for our guests – detecting their preferences from favorite music, beverages and more. The PartyBOT went so far as to check in on every attending guest at the party, curating tailored activities based on their preferences. Link to video:

(pw: awesome)

But the PartyBOT goes much further than facilitating great party experiences. Its machine learning applications can apply to range of industries from business to healthcare – acting as an agent to support patient recognition and diagnosis in hospitals that can recognize patient distress and seek the appropriate help from doctors.

If you would like to learn more about the PartyBOT, I’m happy to put you in touch with our executives to discuss the applications of our technology and potentially schedule time to see this in our studios.

Featured Image: Rosenfeld Media/Flickr UNDER A CC BY 2.0 LICENSE

 





Last comments

Walmart retreats from its UK Asda business to hone its focus on competing with Amazon
Peter Short: Good luck

Evolve Foundation launches a $100 million fund to find startups working to relieve human suffering
Peter Short: Money will give hope

Boeing will build DARPA’s XS-1 experimental spaceplane
Peter Short: Great

Is a “robot tax” really an “innovation penalty”?
Peter Short: It need to be taxed also any organic substance ie food than is used as a calorie transfer needs tax…

Twitter Is Testing A Dedicated GIF Button On Mobile
Peter Short: Sounds great Facebook got a button a few years ago Then it disappeared Twitter needs a bottom maybe…

Apple’s Next iPhone Rumored To Debut On September 9th
Peter Short: Looks like a nice cycle of a round year;)

AncestryDNA And Google’s Calico Team Up To Study Genetic Longevity
Peter Short: I'm still fascinated by DNA though I favour pure chemistry what could be Offered is for future gen…

U.K. Push For Better Broadband For Startups
Verg Matthews: There has to an email option icon to send to the clowns in MTNL ... the govt of India's service pro…

CrunchWeek: Apple Makes Music, Oculus Aims For Mainstream, Twitter CEO Shakeup
Peter Short: Noted Google maybe grooming Twitter as a partner in Social Media but with whistle blowing coming to…
