
People

John Smith

John Smith, 47

Joined: 28 January 2014

Interests: No data

Jonnathan Coleman

Jonnathan Coleman, 31

Joined: 18 June 2014

About myself: You may say I'm a dreamer

Interests: Snowboarding, Cycling, Beer

Andrey II

Andrey II, 40

Joined: 08 January 2014

Interests: No data

David

David

Joined: 05 August 2014

Interests: No data

David Markham

David Markham, 64

Joined: 13 November 2014

Interests: No data

Michelle Li

Michelle Li, 40

Joined: 13 August 2014

Interests: No data

Max Almenas

Max Almenas, 51

Joined: 10 August 2014

Interests: No data

29Jan

29Jan, 30

Joined: 29 January 2014

Interests: No data

s82 s82

s82 s82, 25

Joined: 16 April 2014

Interests: No data

Wicca

Wicca, 35

Joined: 18 June 2014

Interests: No data

Phebe Paul

Phebe Paul, 25

Joined: 08 September 2014

Interests: No data

Артем 007

Артем 007, 40

Joined: 29 January 2014

About myself: Yes indeed!

Interests: Norway and Iceland

Alexey Geno

Alexey Geno, 7

Joined: 25 June 2015

About myself: Hi

Interests: Interest1daasdfasf

Verg Matthews

Verg Matthews, 67

Joined: 25 June 2015

Interests: No data

CHEMICALS 4 WORLD DEVEN DHELARIYA

CHEMICALS 4 WORLD…, 32

Joined: 22 December 2014

Interests: No data



Main article: Cognitive science

Topics from 1 to 10 | in all: 14

New tech lets robots feel their environment

00:30 | 16 October

A new technology from researchers at Carnegie Mellon University will add sound and vibration awareness to create truly context-aware computing. The system, called Ubicoustics, adds additional bits of context to smart device interaction, allowing a smart speaker to know it's in a kitchen or a smart sensor to know you're in a tunnel versus on the open road.

“A smart speaker sitting on a kitchen countertop cannot figure out if it is in a kitchen, let alone know what a person is doing in a kitchen,” said Chris Harrison, a researcher at CMU’s Human-Computer Interaction Institute. “But if these devices understood what was happening around them, they could be much more helpful.”

The first implementation of the system uses built-in microphones to create “a sound-based activity recognition.” How they are doing this is quite fascinating.

“The main idea here is to leverage the professional sound-effect libraries typically used in the entertainment industry,” said Gierad Laput, a Ph.D. student. “They are clean, properly labeled, well-segmented and diverse. Plus, we can transform and project them into hundreds of different variations, creating volumes of data perfect for training deep-learning models.”
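
The augmentation step Laput describes is, in spirit, standard audio data augmentation. As a rough illustration only (not Ubicoustics code; the function and parameters below are hypothetical), generating many labeled training variants from one clean sound effect can be as simple as randomizing gain, timing and background noise:

    import numpy as np

    def augment_clip(clip, rng, n_variants=10, noise_bank=None):
        """Make simple variations of one labeled sound-effect clip (toy example)."""
        variants = []
        for _ in range(n_variants):
            x = clip.copy()
            x *= rng.uniform(0.3, 1.5)                 # random gain: louder/quieter source
            x = np.roll(x, rng.integers(0, len(x)))    # random time shift of the event
            if noise_bank is not None:                 # optionally mix in background noise
                noise = np.resize(noise_bank[rng.integers(len(noise_bank))], len(x))
                x = x + rng.uniform(0.05, 0.3) * noise
            variants.append(x)
        return variants

    rng = np.random.default_rng(0)
    clip = rng.standard_normal(16000)              # stand-in for a 1-second, 16 kHz sound effect
    augmented = augment_clip(clip, rng)            # 10 training variants sharing the clip's label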

From the release:

Laput said recognizing sounds and placing them in the correct context is challenging, in part because multiple sounds are often present and can interfere with each other. In their tests, Ubicoustics had an accuracy of about 80 percent — competitive with human accuracy, but not yet good enough to support user applications. Better microphones, higher sampling rates and different model architectures all might increase accuracy with further research.

In a separate paper, HCII Ph.D. student Yang Zhang, along with Laput and Harrison, describes what they call Vibrosight, which can detect vibrations in specific locations in a room using laser vibrometry. It is similar to the light-based devices the KGB once used to detect vibrations on reflective surfaces such as windows, allowing them to listen in on the conversations that generated the vibrations.

This system uses a low-power laser and reflectors to sense whether an object is on or off or whether a chair or table has moved. The sensor can monitor multiple objects at once and the tags attached to the objects use no electricity. This would let a single laser monitor multiple objects around a room or even in different rooms, assuming there is line of sight.
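
As a toy illustration of that on/off sensing idea (not the authors' code; the frequency band and threshold below are made-up values), a tagged appliance can be called "running" when the reflected vibration signal carries enough energy in a chosen band:

    import numpy as np

    def is_running(signal, sample_rate, band=(20.0, 500.0), threshold=1e-3):
        """Toy detector: 'on' if the vibration signal has enough band energy."""
        spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        return np.mean(spectrum[in_band] ** 2) > threshold

    sr = 4000
    t = np.arange(sr) / sr
    motor_hum = 0.01 * np.sin(2 * np.pi * 120 * t)                    # appliance running
    silence = 1e-4 * np.random.default_rng(0).standard_normal(sr)     # appliance off
    print(is_running(motor_hum, sr), is_running(silence, sr))         # True False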

The research is still in its early stages but expect to see robots that can hear when you’re doing the dishes and, depending on their skills, hide or offer to help.

 



This tech (scarily) lets video change reality

20:09 | 11 September

Researchers at Carnegie Mellon University have created a method to turn one video into the style of another. While this might be a little unclear at first, take a look at the video below. In it, the researchers have taken an entire clip from John Oliver and made it look like Stephen Colbert said it. Further, they were able to mimic the motion of a flower opening with another flower.

In short, they can make anyone (or anything) look like they are doing something they never did.

“I think there are a lot of stories to be told,” said CMU Ph.D. student Aayush Bansal. He and the team created the tool to make it easier to shoot complex films, perhaps by replacing the motion in simple, well-lit scenes and copying it into an entirely different style or environment.

“It’s a tool for the artist that gives them an initial model that they can then improve,” he said.

The system uses something called generative adversarial networks (GANs) to move one style of image onto another without much matching data. GANs, however, create many artifacts that can mess up the video as it is played.

In a GAN, two models are created: a discriminator that learns to detect what is consistent with the style of one image or video, and a generator that learns how to create images or videos that match a certain style. When the two work competitively — the generator trying to trick the discriminator and the discriminator scoring the effectiveness of the generator — the system eventually learns how content can be transformed into a certain style.
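
For readers who want to see the adversarial loop itself, here is a deliberately tiny PyTorch sketch on 1-D toy data (a generic GAN for illustration, not Recycle-GAN):

    import torch
    from torch import nn

    real_data = lambda n: torch.randn(n, 1) * 0.5 + 2.0   # "real" samples: a 1-D Gaussian

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        # Discriminator: score real samples as 1 and generated samples as 0.
        real, noise = real_data(64), torch.randn(64, 8)
        fake = G(noise).detach()
        loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator: try to make the discriminator score its output as real.
        fake = G(torch.randn(64, 8))
        loss_g = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    print(G(torch.randn(1000, 8)).mean().item())   # drifts toward the real mean of ~2.0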

The researchers created something called Recycle-GAN, which reduces the imperfections by using “not only spatial, but temporal information.”

“This additional information, accounting for changes over time, further constrains the process and produces better results,” wrote the researchers.
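
In rough form (paraphrasing the Recycle-GAN paper's idea rather than quoting the article), the temporal constraint says that translating a few frames into the other domain, predicting the next frame there, and translating back should reproduce the true next frame:

    \mathcal{L}_{\text{recycle}} \;=\; \sum_t \big\lVert\, x_{t+1} - G_Y\big(P_Y\big(G_X(x_1), \dots, G_X(x_t)\big)\big) \,\big\rVert^2

where G_X maps domain X into domain Y, G_Y maps back, and P_Y predicts the next frame within Y.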

Recycle-GAN can obviously be used to create so-called Deepfakes, allowing for nefarious folks to simulate someone saying or doing something they never did. Bansal and his team are aware of the problem.

“It was an eye opener to all of us in the field that such fakes would be created and have such an impact. Finding ways to detect them will be important moving forward,” said Bansal.

 



TechCrunch Disrupt SF 2018 dives deep into artificial intelligence and machine learning

19:54 | 23 August

As fields of research, machine learning and artificial intelligence both date back to the 50s. More than half a century later, the disciplines have graduated from the theoretical to practical, real-world applications. We’ll have some of the top minds in both categories on stage at Disrupt San Francisco in early September to discuss the latest advances and the future of AI and ML.

For the first time, Disrupt SF will be held in San Francisco’s Moscone Center. It’s a huge space, which meant we could dramatically increase the amount of programming offered to attendees. And we did. Here’s the agenda. Tickets are still available even though the show is less than two weeks away. Grab one here.

The show features the themes currently facing the technology world, including artificial intelligence and machine learning. Some of the top minds in AI and ML are speaking on several stages and some are taking audience questions. We’re thrilled to be joined by Dr. Kai-Fu Lee, former president of Google China and current CEO of Sinovation Ventures, Colin Angle, co-founder and CEO of iRobot, Claire Delaunay, Nvidia VP of Engineering, and, among others, Dario Gil, IBM VP of AI.

Dr. Kai-Fu Lee is the CEO and chairman of Sinovation, a venture firm based in the U.S. and China, and he has emerged as one of the world’s top prognosticators on artificial intelligence and how the technology will disrupt just about everything. Dr. Lee wrote in The New York Times last year that AI is “poised to bring about a wide-scale decimation of jobs — mostly lower-paying jobs, but some higher-paying ones, too.” Dr. Lee will also be on our Q&A stage (after his interview on the Main Stage) to take questions from attendees.

Colin Angle co-founded iRobot with fellow MIT grads Rod Brooks and Helen Greiner in 1990. Early on, the company provided robots for military applications, and then in 2002 introduced the consumer-focused Roomba. Angle has plenty to talk about. As the CEO and chairman of iRobot, he led the company through the sale of its military branch in 2016 so the company could focus on robots in homes. If there’s anyone who knows how to both work with the military and manage consumers’ expectations with household robots, it’s Colin Angle, and we’re excited to have him speaking at the event, where he will also take questions from the audience on the Q&A stage.

Claire Delaunay is vice president of engineering at Nvidia, where she is responsible for the Isaac robotics initiative and leads a team to bring Isaac to market for roboticists and developers around the world. Prior to joining Nvidia, Delaunay was the director of engineering at Uber, after it acquired Otto, the startup she co-founded. She was also the robotics program lead at Google and founded two companies, Botiful and Robotics Valley. Delaunay will also be on our Q&A stage (after her interview on the Main Stage) to take questions from attendees.

Dario Gil, the head of IBM’s AI research efforts and quantum computing program, is coming to Disrupt SF to talk about the current state of quantum computing. We may even see a demo or two of what’s possible today and use that to separate hype from reality. Among the large tech firms, IBM — and specifically the IBM Q lab — has long been at the forefront of the quantum revolution. Last year, the company showed off its 50-qubit quantum computer, and you can already start building software for it using the company’s developer kit.

Sam Liang is the CEO and co-founder of AISense Inc., based in Silicon Valley and funded by Horizons Ventures (DeepMind, Waze, Zoom, Facebook), Tim Draper, David Cheriton of Stanford (first investor in Google) and others. AISense has created Ambient Voice Intelligence™ technologies with deep learning that understand human-to-human conversations. Its Otter.ai product digitizes all voice meetings and video conferences, makes every conversation searchable and also provides speech analytics and insights. Otter.ai is the exclusive provider of automatic meeting transcription for Zoom Video Communications.

Laura Major is the Vice President of Engineering at CyPhy Works, where she leads R&D, product design and development and manages the multi-disciplinary engineering team. Prior to joining CyPhy Works, she worked at Draper Laboratory as a division lead, where she developed the first human-centered engineering capability and expanded it to include machine intelligence and AI. Laura also grew multiple programs and engineering teams that contributed to the development and expansion of ATAK, which is now in wide use across the military.

Dr. Jason Mars founded and runs Clinc to try to close the gap in conversational AI by emulating human intelligence to interpret unstructured, unconstrained speech. AI has the potential to change everything, but there is a fundamental disconnect between what AI is capable of and how we interface with it. Clinc is currently targeting the financial market, letting users converse with their bank account using natural language without any pre-defined templates or hierarchical voice menus. At Disrupt SF, Mars is set to debut other ways that Clinc’s conversational AI can be applied. Without ruining the surprise, let me just say that this is going to be a demo you won’t want to miss. After the demo, he will take questions on the Q&A stage.

Chad Rigetti, the namesake founder of Rigetti Computing, will join us at Disrupt SF 2018 to explain Rigetti’s approach to quantum computing. It’s two-fold: on one front, the company is working on the design and fabrication of its own quantum chips; on the other, the company is opening up access to its early quantum computers for researchers and developers by way of its cloud computing platform, Forest. Rigetti Computing has raised nearly $70 million to date according to Crunchbase, with investment from some of the biggest names around. Meanwhile, labs around the country are already using Forest to explore the possibilities ahead.

Kyle Vogt co-founded Cruise Automation and sold it to General Motors in 2016. He stuck around after the sale and still leads the company today. Since the sale to GM, Cruise has scaled rapidly and has seemed to maintain a scrappy startup feel even though it is now a division of a massive corporation. The company had 30 self-driving test cars on the road in 2016 and later rolled out a high-definition mapping system. In 2017 the company started running an autonomous ride-hailing service for its employees in San Francisco, later announcing its self-driving cars would hit New York City. Recently, SoftBank’s Vision Fund invested $2.25 billion in GM Cruise Holdings LLC, and when the deal closes, GM will invest an additional $1.1 billion. The investments are expected to inject enough capital into Cruise for the unit to reach commercialization at scale beginning in 2019.

 



VR helps us remember

14:21 | 14 June

Researchers at the University of Maryland have found that people remember information better if it is presented in VR rather than on a two-dimensional personal computer. This means VR education could be an improvement on tablet or device-based learning.

“This data is exciting in that it suggests that immersive environments could offer new pathways for improved outcomes in education and high-proficiency training,” said Amitabh Varshney, dean of the College of Computer, Mathematical, and Natural Sciences at UMD.

The study was quite complex and looked at recall in forty subjects who were comfortable with computers and VR. The researchers saw an 8.8 percent improvement in recall.

To test the system they created a “memory palace” where they placed various images. This sort of “spatial mnemonic encoding” is a common memory trick that allows for better recall.

“Humans have always used visual-based methods to help them remember information, whether it’s cave drawings, clay tablets, printed text and images, or video,” said lead researcher Eric Krokos. “We wanted to see if virtual reality might be the next logical step in this progression.”

From the study:

Both groups received printouts of well-known faces–including Abraham Lincoln, the Dalai Lama, Arnold Schwarzenegger and Marilyn Monroe–and familiarized themselves with the images. Next, the researchers showed the participants the faces using the memory palace format with two imaginary locations: an interior room of an ornate palace and an external view of a medieval town. Both of the study groups navigated each memory palace for five minutes. Desktop participants used a mouse to change their viewpoint, while VR users turned their heads from side to side and looked up and down.

Next, Krokos asked the users to memorize the location of each of the faces shown. Half the faces were positioned in different locations within the interior setting–Oprah Winfrey appeared at the top of a grand staircase; Stephen Hawking was a few steps down, followed by Shrek. On the ground floor, Napoleon Bonaparte’s face sat above a majestic wooden table, while The Rev. Martin Luther King Jr. was positioned in the center of the room.

Similarly, for the medieval town setting, users viewed images that included Hillary Clinton’s face on the left side of a building, with Mickey Mouse and Batman placed at varying heights on nearby structures.

Then, the scene went blank, and after a two-minute break, each memory palace reappeared with numbered boxes where the faces had been. The research participants were then asked to recall which face had been in each location where a number was now displayed.

The key, say the researchers, was for participants to identify each face by its physical location and its relation to surrounding structures and faces–and also the location of the image relative to the user’s own body.

Desktop users could perform the feat but VR users performed it statistically better, a fascinating twist on the traditional role of VR in education. The researchers believe that VR adds a layer of reality to the experience that lets the brain build a true “memory palace” in 3D space.

“Many of the participants said the immersive ‘presence’ while using VR allowed them to focus better. This was reflected in the research results: 40 percent of the participants scored at least 10 percent higher in recall ability using VR over the desktop display,” wrote the researchers.

“This leads to the possibility that a spatial virtual memory palace–experienced in an immersive virtual environment–could enhance learning and recall by leveraging a person’s overall sense of body position, movement and acceleration,” said researcher Catherine Plaisant.

 



Empathy technologies like VR, AR, and social media can transform education

00:00 | 23 April

Jennifer Carolan Contributor
Jennifer Carolan is a general partner and co-founder of Reach Capital.

In The Better Angels of Our Nature, Harvard psychologist Steven Pinker makes the case for reading as a “technology for perspective-taking” that has the capacity not only to evoke people’s empathy but also to expand it. “The power of literacy,” as he argues, “get[s] people in the habit of straying from their parochial vantage points” while “creating a hothouse for new ideas about moral values and the social order.”

The first major empathy technology was Gutenberg’s printing press, invented in 1440. With the mass production of books came widespread literacy and the ability to inhabit the minds of others. While this may sound trite, it was actually a seismic innovation for people in the pre-industrial age who didn’t see, hear or interact with those outside of their village. More recently, other technologies like television and virtual reality made further advances, engaging more of the senses to deepen the simulated human experience.

We are now on the cusp of another breakthrough in empathy technologies that have their roots in education. Empathy technologies expand our access to diverse literature, allow us to more deeply understand each other and create opportunities for meaningful collaboration across racial, cultural, geographic and class backgrounds. The new empathy technologies don’t leave diversity of thought to chance; rather, they intentionally build for it.

Demand for these tools originates from educators both in schools and corporate environments who have a mandate around successful collaboration. Teachers who are on the front lines of this growing diversity consider it their job to help students and employees become better perspective-takers.

Our need to expand our circles of empathy has never been more urgent. We as a nation are becoming more diverse, segregated and isolated by the day.

The high school graduating class of 2020 will be majority minority and growing income inequality has created a vast income and opportunity gap. Our neighborhoods have regressed back to higher levels of socio-economic segregation; families from different sides of the track are living in increasing isolation from one another.

Photo courtesy of Flickr/Dean Hochman

These new empathy technologies are very different from social media platforms, which once held so much promise to connect us all in an online utopia. The reality is that social media has moved us in the opposite direction. Instead, our platforms have us caught in an echo chamber of our own social filters, rarely exposed to new perspectives.

And it’s not just social media: clickbait tabloid journalism has encouraged mocking and judgment rather than the empathy-building journey of a great piece of writing by Toni Morrison or Donna Tartt. In the rich depth of literature, we empathize with the protagonist, and when their flaws are inevitably revealed, we are humbled and see ourselves in their complex, imperfect lives. Research has since proven that those who read more literary fiction are better at detecting and understanding others’ emotions.

What follows are several examples of empathy technologies in brick-and-mortar schools, online learning and corporate learning.

Empathy technologies enhance human connection rather than replacing it. Outschool is a marketplace for live online classes that connects K-12 students and teachers in small groups over video chat to explore shared interests. Historically, online learning has offered great choice and access, but at the cost of student engagement and human connection.

Outschool’s use of live video-chat and the small-group format removes the need for that trade-off. Kids and teachers see and hear each other, interacting in real-time like in a school classroom, but with participants from all over the world and from different backgrounds.

Live video chat on Outschool

The intentionality of curating a diverse library of content is a key difference between the new empathy technologies and social media. Newsela is a news platform delivering a bonanza of curated, leveled content to the classroom every day. It’s the antidote to the stale, single-source textbook, refreshed once a decade. In the screenshot below, children are exposed to stories about Mexico, gun rights and Black women. Teachers often use Newsela articles as a jumping-off point for a rich classroom discussion where respectful discourse skills are taught and practiced.

Newsela’s interface.

Business leaders are increasingly touting empathy as a critical leadership trait and using these technologies in their own corporate education programs for leadership and everyday employees. Google’s Sundar Pichai describes his management style as “the ability to transcend the work and work well with others.” Microsoft’s Satya Nadella believes that empathy is a key source of business innovation and is a prerequisite for one’s ability to “grasp customers’ un-met, unarticulated needs.” Uber’s new CEO Dara Khosrowshahi and Apple’s Tim Cook round out a cohort of leaders who are listeners first and contrast sharply with the stereotypical brash Silicon Valley CEO.

To deepen employees’ empathy, cutting-edge corporations like Amazon are using virtual environments like Mursion to practice challenging interpersonal interactions. Mursion’s virtual simulations are powered by trained human actors who engage in real-time conversations with employees. I tried it out by role-playing a manager discussing mandatory overtime with a line worker who was struggling to keep two part-time jobs. The line worker described to me how last-minute overtime requests threw his schedule into chaos, put his second job at risk and impacted his childcare situation.

 



Juni Learning is bringing individualized programming tutorials to kids online

01:08 | 26 January

Juni Learning wants to give every kid access to a quality education in computer programming.

The company, part of Y Combinator’s latest batch of startups, is taking the same approach that turned VIPKID into the largest Chinese employer in the U.S. and a runaway hit in the edtech market — by matching students with vetted and pre-qualified online tutors.

While VIPKID focused on teaching English, Juni wants to teach kids how to code.

So far, the company has taught thousands of kids around the world how to code in Scratch and Python and offered instruction in AP Computer Science A, competition programming and overall web development.

Founded by Vivian Shen and Ruby Lee, Juni Learning was born of the two women’s own frustrations in learning how to code. While both eventually made their way to the computer science department at Stanford (where the two friends first met), it was a long road to get there.

Lee (class of 2013) and Shen (class of 2014) both had to fight to get their computer educations off the ground. Although Shen grew up in Palo Alto, Calif. — arguably ground zero for technology development in the Western world — there was only one computer science class on offer at her high school.

For Lee, who grew up in Massachusetts outside of Boston, the high school she attended was a computer science wasteland… with nothing on offer.

“As public awareness of women gaining engineering roles [grew], we started discussing how to make education more accessible,” says Shen. “I was traveling in China and started hearing about these amazing education companies [like VIPKID].”

Indeed, Cindy Mi, VIPKID chief executive, was an inspiration for both women. “I thought, why couldn’t a model like that bring great computer science education to the U.S.,” Shen says.

The company offers different plans starting with an individual tutoring session running about $250 per month. Customers also can sign up for group classes that are capped at $160 per month.

“When you think about computer science education — since it’s such an important subject for kids — why aren’t they getting the best education they can get?” Shen asks.

The prices were set after feedback from customers and sit at what Shen said was a sweet spot. VIPKID classes cost around $30 per hour.

The company initially had a soft launch in late August and was just accepted into the recent batch of Y Combinator companies.

Juni Learning recruits its tutors from current and former computer science students at top-tier colleges — primarily in California.

The company charges $250 per month for once-a-week classes and pays its teachers an undisclosed amount based on their previous experience teaching computer science to children.

The company’s product couldn’t come at a better time to reach students in both the U.S. and international markets… like China.

As The Wall Street Journal notes, investing in coding could be the next big thing for Chinese investors in education technology.

“Coding is about the only course that has the potential to become as important as English for students and for the industry,” Zhang Lijun, a venture capitalist who backs Chinese edtech companies, told the WSJ. “But nobody knows when that’s going to happen.”

By using a model that has already proven successful in China and resonates with consumers in the U.S., Juni Learning may have gotten ahead of the curve.

Juni Learning session screenshots

 



Evolve Foundation launches a $100 million fund to find startups working to relieve human suffering

01:58 | 4 November

It seems there’s nothing but bad news out there lately, but here’s some good news — the nonprofit Evolve Foundation has raised $100 million for a new fund called the Conscious Accelerator to combat the loneliness, purposelessness, fear and anger spreading throughout the world through technology.

Co-founder of Matrix Partners China Bo Shao will lead the fund and will be looking for entrepreneurs focusing on tech that can help people become more present and aware.

“I know a lot of very wealthy people and many are very anxious or depressed,” he told TechCrunch. A lot of this he attributes to the way we use technology, especially social media networks.

“It becomes this anxiety-inducing activity where I have to think about what’s the next post I should write to get most people to like me and comment and forward,” he said. “It seems that post has you trapped. Within 10 minutes, you are wondering how many people liked this, how many commented. It was so addicting.”

Teens are especially prone to this anxiety, he points out. It turns out it’s a real mental health condition known as Social Media Anxiety Disorder (SMAD).

“Social media is the new sugar or the new smoking of this generation,” Shao told TechCrunch.

He quit social media in September of 2013 but tells TechCrunch he’s been on a journey to find ways to improve his life and others for the last 10 years.

His new fund, as laid out in a recent Medium post announcement, seeks to maximize social good and to find technological solutions to the issues now facing us, not just to invest in something with good returns.

Shao plans to use his background as a prominent VC in a multi-billion-dollar firm to find those working on the type of technology to make us less anxious and more centered.

The Conscious Accelerator has already funded a meditation app called Inside Timer. It’s also going to launch a parenting app to help parents raise their children to be resilient in an often confusing world.

He’s also not opposed to funding projects like the one two UC Berkeley students put together to identify Russian and politically toxic Twitter bots — something Twitter has been criticized for not getting a handle on internally.

“The hope is we will attract entrepreneurs who are conscious,” Shao said.

Featured Image: Yann Cœuru/Flickr UNDER A CC BY 2.0 LICENSE

 



Geoffrey Hinton was briefly a Google intern in 2012 because of bureaucracy

00:28 | 15 September

Geoffrey Hinton is one of the most famous researchers in the field of artificial intelligence. His work helped kick off the world of deep learning we see today. He earned his PhD in artificial intelligence back in 1977 and, in the 40 years since, he’s played a key role in the development of back-propagation and Boltzmann machines. So it was a bit hilarious to learn in a Reddit AMA hosted by the Google Brain Team that Hinton was briefly a Google intern in 2012.

Prompted by a question about age cut-offs for interns on Brain Team, Jeff Dean, a Google Senior Fellow and leader of the research group, explained that his team has no arbitrary rules limiting the age of interns. To drive this point home, he explained that Geoffrey Hinton was technically his intern for a period of time in 2012.

“In 2012, I hosted Geoffrey Hinton as a visiting researcher in our group for the summer, but due to a snafu in how it was set up, he was classified as my intern,” Dean joked in the exchange. “We don’t have any age cutoffs for interns. All we ask is that they be talented and eager to learn, like Geoffrey :).”

Just a year later, Google acquired Hinton’s startup, DNNresearch, to build out its capabilities in deep learning. We still don’t know the price Google paid for DNNresearch, but after only a year of interning, that seems like quite a sweet deal :)

Brain Team is one of Google’s central units for research in deep learning and the home of the popular TensorFlow deep learning framework. If you applied for an internship there and didn’t get it, don’t worry — blame Geoffrey Hinton; he set the bar too high.

Featured Image: Bryce Durbin

 



Curious to crack the neural code? Listen carefully

22:30 | 10 July

Perhaps for the first time ever, our noggins have stolen the spotlight from our hips. The media, popular culture, science and business are paying more attention to the brain than ever. Billionaires want to plug into it.

Scientists view neurodegenerative disease as the next frontier after cancer and diabetes, hence increased efforts in understanding the physiology of the brain. Engineers are building computer programs that leverage “deep learning,” which mimics our limited understanding of the human brain. Students are flocking to universities to learn how to build artificial intelligence for everything from search to finance to robotics to self-driving cars.

Public markets are putting premiums on companies doing machine learning of any kind. Investors are throwing bricks of cash at startups building or leveraging artificial intelligence in any way.

Silicon Valley’s obsession with the brain has just begun. Futurists and engineers are anticipating artificial intelligence surpassing human intelligence. Facebook is talking about connecting to users through their thoughts. Serial entrepreneurs Bryan Johnson and Elon Musk have set out to augment human intelligence with AI by pouring tens of millions of their own money into Kernel and Neuralink, respectively.

Bryan and Elon have surrounded themselves with top scientists, including Ted Berger and Philip Sabes, to start by cracking the neural code. Traditionally, scientists seeking to control machines through our thoughts have done so by interfacing with the brain directly, which means surgically implanting probes into the brain.

Scientists Miguel Nicolelis and John Donoghue were among the first to control machines directly with thoughts through implanted probes, also known as electrode arrays. Starting decades ago, they implanted arrays of approximately 100 electrodes into living monkeys, which would translate individual neuron activity to the movements of a nearby robotic arm.

The monkeys learned to control their thoughts to achieve desired arm movement; namely, to grab treats. These arrays were later implanted in quadriplegics to enable them to control cursors on screens and robotic arms. The original motivation, and funding, for this research was intended for treating neurological diseases such as epilepsy, Parkinson’s and Alzheimer’s.
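
At its core, the decoding step described above is a regression from neural activity to movement. A bare-bones sketch of that idea (illustrative only; the simulated data and ridge-regression decoder below are not any specific lab's method):

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated session: 100 electrodes, 2000 time bins of spike counts,
    # plus the 2-D arm velocity recorded in the same bins.
    n_bins, n_units = 2000, 100
    tuning = rng.standard_normal((n_units, 2))       # each unit "prefers" a direction
    velocity = rng.standard_normal((n_bins, 2))      # recorded (vx, vy)
    spikes = np.maximum(0.0, velocity @ tuning.T + 0.5 * rng.standard_normal((n_bins, n_units)))

    # Fit a linear decoder W so that velocity ≈ spikes @ W (ridge-regularized least squares).
    lam = 1.0
    W = np.linalg.solve(spikes.T @ spikes + lam * np.eye(n_units), spikes.T @ velocity)

    # Decode new activity into a command for the robotic arm.
    print(np.round(spikes[:3] @ W, 2))    # decoded velocities
    print(np.round(velocity[:3], 2))      # actual velocities for comparison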

Curing brain disease requires an intimate understanding of the brain at the brain-cell level.  Controlling machines was a byproduct of the technology being built, in addition to access to patients with severe epilepsy who already have their skulls opened.

Science-fiction writers imagine a future of faster-than-light travel and human batteries, but assume that you need to go through the skull to get to the brain. Today, music is a powerful tool for conveying emotion, expression and catalyzing our imagination. Powerful machine learning tools can predict responses from auditory stimulation, and create rich, Total-Recall-like experiences through our ears.

If the goal is to upgrade the screen and keyboard (human<->machine), and become more telepathic (human<->human), do we really need to know what’s going on with our neurons? Let’s use computer engineering as an analogy: physicists are still discovering elementary particles, but we know how to manipulate charge well enough to add numbers, trigger pixels on a screen, capture images from a camera and capture inputs from a keyboard or touchscreen. Do we really need to understand quarks? I’m sure there are hundreds of analogies where we have pretty good control over things without knowing all the underlying physics, which raises the question: Do we really have to crack the neural code to better interface with our brains?

In fact, I would argue that humans have been pseudo-telepathic since the beginning of civilization, and that has been through the art of expression. Artists communicate emotions and experiences by stimulating our senses, which already have a rich, wide-band connection to our brains. Artists stimulate our brains with photographs, paintings, motion pictures, song, dance and stories. The greatest artists have been the most effective at invoking certain emotions: love, fear, anger and lust, to name a few. Great authors empower readers to live through experiences by turning the page, creating richer experiences than what’s delivered through IMAX 3D.

Creative media is a powerful tool of expression to invoke emotion and share experiences. It is rich, expensive to produce and distribute, and disseminated through screens and speakers. Although virtual reality promises an even more interactive and immersive experience, I envision a future where AI can be trained to invoke powerful, personal experiences through sound. Images courtesy of Apple and Samsung.

Unfortunately those great artists are few, and heavy investment goes into a production which needs to be amortized over many consumers (1->n). Can content be personalized for EVERY viewer as opposed to a large audience (1->1)?  For example, can scenes, sounds and smells be customized for individuals? To go a step further; can individuals, in real time, stimulate each other to communicate emotions, ideas and experiences (1<->1)?

We live our lives constantly absorbing from our senses. We experience many fundamental emotions such as desire, fear, love, anger and laughter early in our lifetimes. Furthermore, our lives are becoming more instrumented by cheap sensors and disseminated through social media. Could a neural net trained on our experiences, and our emotional responses to them, create audio stimulation that would invoke a desired sensation? This “sounds” far more practical than encoding that experience in neural code and physically imparting it to our neurons.

Powerful emotions have been inspired by one of the oldest modes of expression: music. Humanity has invented countless instruments to produce many sounds, and, most recently, digitally synthesized sounds that would be difficult or impractical to generate with a physical instrument. We consume music individually through our headphones, and experience energy, calm, melancholy and many other emotions. It is socially acceptable to wear headphones; in fact Apple and Beats by Dre made it fashionable.

If I want to communicate a desired emotion with a counterpart, I can invoke a deep net to generate sounds that, as judged by a deep net sitting on my partner’s device, invoke the desired emotion. There are a handful of startups using AI to create music with desired features that fit in a certain genre. How about taking it a step further to create a desired emotion on behalf of the listener?

The same concept can be applied for human-to-machine interfaces, and vice-versa. Touch screens have already almost obviated keyboards for consuming content; voice and NLP will eventually replace keyboards for generating content.

Deep nets will infer what we “mean” from what we “say.” We will “read” through a combination of “hearing” through a headset and “seeing” on a display. Simple experiences can be communicated through sound, and richer experiences through a combination of sight and sound that invokes our other senses, just as the sight of a stadium invokes the roar of a crowd and the taste of beer and hot dogs.

Behold our future link to each other and our machines: the humble microphone-equipped earbud. Initially, it will tether to our phones for processing, connectivity and charge. Eventually, advanced compute and energy harvesting will obviate the need for phones, and those earpieces will be our bridge to every human being on the planet and to the corpus of human knowledge and expression.

The timing of this prediction may be ironic given Jawbone’s expected shutdown. I believe that there is a magical company to be built around what Jawbone should have been: an AI-powered telepathic interface between people, the present, past and what’s expected to be the future.

Featured Image: Ky/Flickr UNDER A CC BY 2.0 LICENSE

 



H2O.ai’s Driverless AI automates machine learning for businesses

19:11 | 6 July

Driverless AI is the latest product from H2O.ai aimed at lowering the barrier to making data science work in a corporate context. The tool assists non-technical employees with preparing data, calibrating parameters and determining the optimal algorithms for tackling specific business problems with machine learning.

At the research level, machine learning problems are complex and unpredictable — combining GANs and reinforcement learning in a never before seen use case takes finesse. But the reality is that a lot of corporates today use machine learning for relatively predictable problems — evaluating default rates with a support vector machine, for example.

But even these relatively straightforward problems are tough for non-technical employees to wrap their heads around. Companies are increasingly working data science into non-traditional sales and HR processes, attempting to train their way to costly innovation.

All of H2O.ai’s products help to make AI more accessible, but Driverless AI takes things a step further by physically automating many of the tough decisions that need to be made when preparing a model. Driverless AI automates feature engineering, the process by which key variables are selected to build a model.
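
To make that concrete, here is a bare-bones sketch of automated feature and model selection with scikit-learn (a toy stand-in for what a tool like Driverless AI automates at much larger scale, not H2O's actual implementation):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline
    from sklearn.svm import SVC

    # Toy tabular problem standing in for something like default-rate prediction.
    X, y = make_classification(n_samples=1000, n_features=30, n_informative=8, random_state=0)

    pipeline = Pipeline([
        ("select", SelectKBest(f_classif)),            # crude automated feature selection
        ("model", LogisticRegression(max_iter=1000)),
    ])

    # Search jointly over the number of features and the candidate model family.
    param_grid = [
        {"select__k": [5, 10, 20], "model": [LogisticRegression(max_iter=1000)]},
        {"select__k": [5, 10, 20], "model": [RandomForestClassifier(n_estimators=200)]},
        {"select__k": [5, 10, 20], "model": [SVC()]},
    ]

    search = GridSearchCV(pipeline, param_grid, cv=5, scoring="roc_auc")
    search.fit(X, y)
    print(search.best_params_, round(search.best_score_, 3))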

H2O built Driverless AI with popular use cases built-in, but it can’t solve every machine learning problem. Ideally it can find and tune enough standard models to automate at least part of the long tail.

The company alluded to today’s release back in January when it launched Deep Water, a platform allowing its customers to take advantage of deep learning and GPUs.

We’re still in the very early days of machine learning automation. Google CEO Sundar Pichai generated a lot of buzz at this year’s I/O conference when he provided details on the company’s efforts to create an AI tool that could automatically select the best model and characteristics to solve a machine learning problem with trial, error and a ton of compute.

Driverless AI is an early step in the journey of democratizing and abstracting AI for non-technical users. You can download the tool and start experimenting here. 

 


