People

John Smith

John Smith, 47

Joined: 28 January 2014

Interests: No data

Jonnathan Coleman

Jonnathan Coleman, 31

Joined: 18 June 2014

About myself: You may say I'm a dreamer

Interests: Snowboarding, Cycling, Beer

Andrey II

Andrey II, 40

Joined: 08 January 2014

Interests: No data

David

Joined: 05 August 2014

Interests: No data

David Markham

David Markham, 64

Joined: 13 November 2014

Interests: No data

Michelle Li

Michelle Li, 40

Joined: 13 August 2014

Interests: No data

Max Almenas

Max Almenas, 51

Joined: 10 August 2014

Interests: No data

29Jan

29Jan, 30

Joined: 29 January 2014

Interests: No data

s82 s82

s82 s82, 25

Joined: 16 April 2014

Interests: No data

Wicca

Wicca, 35

Joined: 18 June 2014

Interests: No data

Phebe Paul

Phebe Paul, 25

Joined: 08 September 2014

Interests: No data

Артем 007

Артем 007, 40

Joined: 29 January 2014

About myself: Yes indeed!

Interests: Norway and Iceland

Alexey Geno

Alexey Geno, 7

Joined: 25 June 2015

About myself: Hi

Interests: Interest1daasdfasf

Verg Matthews

Verg Matthews, 66

Joined: 25 June 2015

Interests: No data

CHEMICALS 4 WORLD DEVEN DHELARIYA

CHEMICALS 4 WORLD…, 32

Joined: 22 December 2014

Interests: No data




VR helps us remember

14:21 | 14 June

Researchers at the University of Maryland have found that people remember information better if it is presented in VR rather than on a two-dimensional personal computer. This means VR education could be an improvement over tablet- or device-based learning.

“This data is exciting in that it suggests that immersive environments could offer new pathways for improved outcomes in education and high-proficiency training,” said Amitabh Varshney, dean of the College of Computer, Mathematical, and Natural Sciences at UMD.

The study was quite complex and looked at recall in forty subjects who were comfortable with computers and VR. The researchers measured an 8.8 percent improvement in recall for the VR group.

To test the system they created a “memory palace” where they placed various images. This sort of “spatial mnemonic encoding” is a common memory trick that allows for better recall.

“Humans have always used visual-based methods to help them remember information, whether it’s cave drawings, clay tablets, printed text and images, or video,” said lead researcher Eric Krokos. “We wanted to see if virtual reality might be the next logical step in this progression.”

From the study:

Both groups received printouts of well-known faces–including Abraham Lincoln, the Dalai Lama, Arnold Schwarzenegger and Marilyn Monroe–and familiarized themselves with the images. Next, the researchers showed the participants the faces using the memory palace format with two imaginary locations: an interior room of an ornate palace and an external view of a medieval town. Both of the study groups navigated each memory palace for five minutes. Desktop participants used a mouse to change their viewpoint, while VR users turned their heads from side to side and looked up and down.

Next, Krokos asked the users to memorize the location of each of the faces shown. Half the faces were positioned in different locations within the interior setting–Oprah Winfrey appeared at the top of a grand staircase; Stephen Hawking was a few steps down, followed by Shrek. On the ground floor, Napoleon Bonaparte’s face sat above a majestic wooden table, while the Rev. Martin Luther King Jr. was positioned in the center of the room.

Similarly, for the medieval town setting, users viewed images that included Hillary Clinton’s face on the left side of a building, with Mickey Mouse and Batman placed at varying heights on nearby structures.

Then, the scene went blank, and after a two-minute break, each memory palace reappeared with numbered boxes where the faces had been. The research participants were then asked to recall which face had been in each location where a number was now displayed.

The key, say the researchers, was for participants to identify each face by its physical location and its relation to surrounding structures and faces–and also the location of the image relative to the user’s own body.

Desktop users could perform the feat but VR users performed it statistically better, a fascinating twist on the traditional role of VR in education. The researchers believe that VR adds a layer of reality to the experience that lets the brain build a true “memory palace” in 3D space.

“Many of the participants said the immersive ‘presence’ while using VR allowed them to focus better. This was reflected in the research results: 40 percent of the participants scored at least 10 percent higher in recall ability using VR over the desktop display,” wrote the researchers.

“This leads to the possibility that a spatial virtual memory palace–experienced in an immersive virtual environment–could enhance learning and recall by leveraging a person’s overall sense of body position, movement and acceleration,” said researcher Catherine Plaisant.
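
For readers who want a concrete feel for the comparison being described, here is a minimal sketch of how per-participant recall scores from two display conditions could be summarized. The numbers below are invented for illustration and are not the study's data.

```python
# Hypothetical sketch of comparing recall accuracy between a desktop group and
# a VR group, in the spirit of the study described above. The per-participant
# scores here are invented and are NOT the study's data.
from statistics import mean, stdev

desktop_scores = [0.55, 0.60, 0.65, 0.70, 0.62, 0.58, 0.66, 0.61]  # fraction of faces recalled
vr_scores = [0.63, 0.68, 0.72, 0.75, 0.66, 0.64, 0.73, 0.69]

print(f"Desktop: mean={mean(desktop_scores):.3f}, sd={stdev(desktop_scores):.3f}")
print(f"VR:      mean={mean(vr_scores):.3f}, sd={stdev(vr_scores):.3f}")

# Relative improvement of the VR group's average recall over the desktop group's.
improvement = (mean(vr_scores) - mean(desktop_scores)) / mean(desktop_scores)
print(f"Relative improvement in recall: {improvement:.1%}")
```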

 



Empathy technologies like VR, AR, and social media can transform education

00:00 | 23 April

Jennifer Carolan Contributor
Jennifer Carolan is a general partner and co-founder of Reach Capital.

In The Better Angels of Our Nature, Harvard psychologist Steven Pinker makes the case for reading as a “technology for perspective-taking” that has the capacity not only to evoke people’s empathy but also to expand it. “The power of literacy,” he argues, “get[s] people in the habit of straying from their parochial vantage points” while “creating a hothouse for new ideas about moral values and the social order.”

The first major empathy technology was Gutenberg’s printing press, invented in 1440. With the mass production of books came widespread literacy and the ability to inhabit the minds of others. While this may sound trite, it was actually a seismic innovation for people in the pre-industrial age who didn’t see, hear or interact with those outside of their village. More recently, other technologies like television and virtual reality made further advances, engaging more of the senses to deepen the simulated human experience.

We are now on the cusp of another breakthrough in empathy technologies that have their roots in education. Empathy technologies expand our access to diverse literature, allow us to more deeply understand each other and create opportunities for meaningful collaboration across racial, cultural, geographic and class backgrounds. The new empathy technologies don’t leave diversity of thought to chance; rather, they intentionally build for it.

Demand for these tools originates from educators both in schools and corporate environments who have a mandate around successful collaboration. Teachers who are on the front lines of this growing diversity consider it their job to help students and employees become better perspective-takers.

Our need to expand our circles of empathy has never been more urgent. We as a nation are becoming more diverse, segregated and isolated by the day.

The high school graduating class of 2020 will be majority-minority, and growing income inequality has created a vast income and opportunity gap. Our neighborhoods have regressed to higher levels of socio-economic segregation; families from different sides of the tracks are living in increasing isolation from one another.

Photo courtesy of Flickr/Dean Hochman

These new empathy technologies are very different from the social media platforms that once held so much promise to connect us all in an online utopia. The reality is that social media has moved us in the opposite direction: our platforms have us caught in an echo chamber of our own social filters, rarely exposed to new perspectives.

And it’s not just social media: clickbait tabloid journalism has encouraged mocking and judgment rather than the empathy-building journey of a great piece of writing by Toni Morrison or Donna Tartt. In the rich depth of literature, we empathize with the protagonist, and when their flaws are inevitably revealed, we are humbled and see ourselves in their complex, imperfect lives. Research has shown that those who read more literary fiction are better at detecting and understanding others’ emotions.

What follows are several examples of empathy technologies in brick-and-mortar schools, online learning and corporate learning.

Empathy technologies enhance human connection rather than replacing it. Outschool is a marketplace for live online classes that connects K-12 students and teachers in small groups over video chat to explore shared interests. Historically, online learning has offered great choice and access, but at the cost of student engagement and human connection.

Outschool’s use of live video chat and the small-group format removes the need for that trade-off. Kids and teachers see and hear each other, interacting in real time as in a school classroom, but with participants from all over the world and from different backgrounds.

Live video chat on Outschool

The intentionality of curating a diverse library of content is a key difference between the new empathy technologies and social media. Newsela is a news platform delivering a bonanza of curated, leveled content to the classroom every day. It’s the antidote to the stale, single-source textbook refreshed once a decade. In the screenshot below, children are exposed to stories about Mexico, gun rights and Black women. Teachers often use Newsela articles as a jumping-off point for a rich classroom discussion where respectful discourse skills are taught and practiced.

Newsela’s interface.

Business leaders are increasingly touting empathy as a critical leadership trait and using these technologies in their own corporate education programs for leadership and everyday employees. Google’s Sundar Pichai describes his management style as “the ability to transcend the work and work well with others.” Microsoft’s Satya Nadella believes that empathy is a key source of business innovation and is a prerequisite for one’s ability to “grasp customers’ unmet, unarticulated needs.” Uber’s new CEO Dara Khosrowshahi and Apple’s Tim Cook round out a cohort of leaders who are listeners first and contrast sharply with the stereotypical brash Silicon Valley CEO.

To deepen employees’ empathy, cutting-edge corporations like Amazon are using virtual environments like Mursion to practice challenging interpersonal interactions. Mursion’s virtual simulations are powered by trained human actors who engage in real-time conversations with employees. I tried it out by role-playing a manager discussing mandatory overtime with a line worker who was struggling to keep two part-time jobs. The line worker described to me how last-minute overtime requests threw his schedule into chaos, put his second job at risk and impacted his childcare situation.

 



Juni Learning is bringing individualized programming tutorials to kids online

01:08 | 26 January

Juni Learning wants to give every kid access to a quality education in computer programming.

The company, part of Y Combinator’s latest batch of startups, is taking the same approach that turned VIPKID into the largest Chinese employer in the U.S. and a runaway hit in the edtech market — by matching students with vetted and pre-qualified online tutors.

While VIPKID focused on teaching English, Juni wants to teach kids how to code.

So far, the company has taught thousands of kids around the world how to code in Scratch and Python and offered instruction in AP Computer Science A, competition programming and overall web development.

Founded by Vivian Shen and Ruby Lee, Juni Learning was born of the two women’s own frustrations in learning how to code. While both eventually made their way to the computer science department at Stanford (where the two friends first met), it was a long road to get there.

Lee (class of 2013) and Shen (class of 2014) both had to fight to get their computer educations off the ground. Although Shen grew up in Palo Alto, Calif. — arguably ground zero for technology development in the Western world — there was only one computer science class on offer at her high school.

For Lee, who grew up in Massachusetts outside of Boston, the high school she attended was a computer science wasteland… with nothing on offer.

“As public awareness of women gaining engineering roles grew, we started discussing how to make education more accessible,” says Shen. “I was traveling in China and started hearing about these amazing education companies [like VIPKID].”

Indeed, Cindy Mi, VIPKID chief executive, was an inspiration for both women. “I thought, why couldn’t a model like that bring great computer science education to the U.S.,” Shen says.

The company offers different plans, starting with individual tutoring sessions that run about $250 per month. Customers also can sign up for group classes that are capped at $160 per month.

“When you think about computer science education — since it’s such an important subject for kids — why aren’t they getting the best education they can get?” Shen asks.

The prices were set after feedback from customers and sit at what Shen said was a sweet spot. VIPKID classes cost around $30 per hour.

The company initially had a soft launch in late August and was just accepted into the recent batch of Y Combinator companies.

Juni Learning recruits its tutors from current and former computer science students at top-tier colleges — primarily in California.

The company charges $250 per month for once-a-week classes and pays its teachers an undisclosed amount based on their previous experience teaching computer science to children.

The company’s product couldn’t come at a better time to reach students in both the U.S. and international markets… like China.

As The Wall Street Journal notes, investing in coding could be the next big thing for Chinese investors in education technology.

“Coding is about the only course that has the potential to become as important as English for students and for the industry,” Zhang Lijun, a venture capitalist who backs Chinese edtech companies, told the WSJ. “But nobody knows when that’s going to happen.”

By using a model that’s already proven successful in China and resonates with consumers in the U.S., Juni Learning may have gotten ahead of the curve.

Juni Learning session screenshots

 



Evolve Foundation launches a $100 million fund to find startups working to relieve human suffering

01:58 | 4 November

It seems there’s nothing but bad news out there lately, but here’s some good news — the nonprofit Evolve Foundation has raised $100 million for a new fund called the Conscious Accelerator to combat loneliness, purposelessness, fear and anger spreading throughout the world through technology.

Bo Shao, co-founder of Matrix Partners China, will lead the fund and will be looking for entrepreneurs focusing on tech that can help people become more present and aware.

“I know a lot of very wealthy people and many are very anxious or depressed,” he told TechCrunch. A lot of this he attributes to the way we use technology, especially social media networks.

“It becomes this anxiety-inducing activity where I have to think about what’s the next post I should write to get most people to like me and comment and forward,” he said. “It seems that post has you trapped. Within 10 minutes, you are wondering how many people liked this, how many commented. It was so addicting.”

Teens are especially prone to this anxiety, he points out. It turns out it’s a real mental health condition known as Social Media Anxiety Disorder (SMAD).

“Social media is the new sugar or the new smoking of this generation,” Shao told TechCrunch.

He quit social media in September of 2013 but tells TechCrunch he’s been on a journey to find ways to improve his life and others for the last 10 years.

His new fund, as laid out in a recent Medium post announcement, seeks to maximize social good and find solutions through technology to the issues now facing us, not just to invest in something with good returns.

Shao plans to use his background as a prominent VC in a multi-billion-dollar firm to find those working on the type of technology to make us less anxious and more centered.

The Conscious Accelerator has already funded a meditation app called Inside Timer. It’s also going to launch a parenting app to help parents raise their children to be resilient in an often confusing world.

He’s also not opposed to funding projects like the one two UC Berkeley students put together to identify Russian and politically toxic Twitter bots — something Twitter has been criticized for not getting a handle on internally.

“The hope is we will attract entrepreneurs who are conscious,” Shao said.

Featured Image: Yann Cœuru/Flickr UNDER A CC BY 2.0 LICENSE

 



Geoffrey Hinton was briefly a Google intern in 2012 because of bureaucracy

00:28 | 15 September

Geoffrey Hinton is one of the most famous researchers in the field of artificial intelligence. His work helped kick off the world of deep learning we see today. He earned his PhD in artificial intelligence back in 1977 and, in the 40 years since, he’s played a key role in the development of back-propagation and Boltzmann Machines. So it was a bit hilarious to learn in a Reddit AMA hosted by the Google Brain Team that Hinton was briefly a Google intern in 2012.

Prompted by a question about age cut-offs for interns on Brain Team, Jeff Dean, a Google Senior Fellow and leader of the research group, explained that his team has no arbitrary rules limiting the age of interns. To drive this point home, he explained that Geoffrey Hinton was technically his intern for a period of time in 2012.

“In 2012, I hosted Geoffrey Hinton as a visiting researcher in our group for the summer, but due to a snafu in how it was set up, he was classified as my intern,” Dean joked in the exchange. “We don’t have any age cutoffs for interns. All we ask is that they be talented and eager to learn, like Geoffrey :).”

Just a year later, Google acquired Hinton’s startup, DNNresearch, to build out its capabilities in deep learning. We still don’t know the price Google paid for DNNresearch, but after only a year of interning, that seems like quite a sweet deal :)

Brain Team is one of Google’s central units for research in deep learning and the home of the popular TensorFlow deep learning framework. If you applied for an internship there and didn’t get it, don’t worry — blame Geoffrey Hinton; he set the bar too high.

Featured Image: Bryce Durbin

 



Curious to crack the neural code? Listen carefully

22:30 | 10 July

Perhaps for the first time ever, our noggins have stolen the spotlight from our hips. The media, popular culture, science and business are paying more attention to the brain than ever. Billionaires want to plug into it.

Scientists view neurodegenerative disease as the next frontier after cancer and diabetes, hence increased efforts in understanding the physiology of the brain. Engineers are building computer programs that leverage “deep learning,” which mimics our limited understanding of the human brain. Students are flocking to universities to learn how to build artificial intelligence for everything from search to finance to robotics to self-driving cars.

Public markets are putting premiums on companies doing machine learning of any kind. Investors are throwing bricks of cash at startups building or leveraging artificial intelligence in any way.

Silicon Valley’s obsession with the brain has just begun. Futurists and engineers are anticipating artificial intelligence surpassing human intelligence. Facebook is talking about connecting to users through their thoughts. Serial entrepreneurs Bryan Johnson and Elon Musk have set out to augment human intelligence with AI by pouring tens of millions of their own money into Kernel and Neuralink, respectively.

Bryan and Elon have surrounded themselves with top scientists, including Ted Berger and Philip Sabes, to start by cracking the neural code. Traditionally, scientists seeking to control machines through our thoughts have done so by interfacing with the brain directly, which means surgically implanting probes into the brain.

Scientists Miguel Nicolelis and John Donoghue were among the first to control machines directly with thoughts through implanted probes, also known as electrode arrays. Starting decades ago, they implanted arrays of approximately 100 electrodes into living monkeys, which would translate individual neuron activity to the movements of a nearby robotic arm.

The monkeys learned to control their thoughts to achieve desired arm movement; namely, to grab treats. These arrays were later implanted in quadriplegics to enable them to control cursors on screens and robotic arms. The original motivation, and funding, for this research was intended for treating neurological diseases such as epilepsy, Parkinson’s and Alzheimer’s.

Curing brain disease requires an intimate understanding of the brain at the brain-cell level. Controlling machines was a byproduct of the technology being built, along with access to patients with severe epilepsy whose skulls were already opened.

Science-fiction writers imagine a future of faster-than-light travel and human batteries, but assume that you need to go through the skull to get to the brain. Today, music is a powerful tool for conveying emotion, expression and catalyzing our imagination. Powerful machine learning tools can predict responses from auditory stimulation, and create rich, Total-Recall-like experiences through our ears.

If the goal is to upgrade the screen and keyboard (human<->machine), and become more telepathic (human<->human), do we really need to know what’s going on with our neurons? Let’s use computer engineering as an analogy: physicists are still discovering elementary particles, but we know how to manipulate charge well enough to add numbers, trigger pixels on a screen, capture images from a camera and capture inputs from a keyboard or touchscreen. Do we really need to understand quarks? I’m sure there are hundreds of analogies where we have pretty good control over things without knowing all the underlying physics, which raises the question: do we really have to crack the neural code to better interface with our brains?

In fact, I would argue that humans have been pseudo-telepathic since the beginning of civilization, and that has been in the art of expression. Artists communicate emotions and experiences by stimulating our senses, which already have a rich, wide-band connection to our brains. Artists stimulate our brains with photographs, paintings, motion pictures, song, dance and stories. The greatest artists have been the most effective at invoking certain emotions: love, fear, anger and lust, to name a few. Great authors empower readers to live through experiences by turning the page, creating richer experiences than what’s delivered through IMAX 3D.

Creative media is a powerful tool of expression to invoke emotion and share experiences. It is rich, expensive to produce and distribute, and disseminated through screens and speakers. Although virtual reality promises an even more interactive and immersive experience, I envision a future where AI can be trained to invoke powerful, personal experiences through sound. Images courtesy of Apple and Samsung.

Unfortunately, those great artists are few, and heavy investment goes into a production that needs to be amortized over many consumers (1->n). Can content be personalized for EVERY viewer as opposed to a large audience (1->1)? For example, can scenes, sounds and smells be customized for individuals? To go a step further: can individuals, in real time, stimulate each other to communicate emotions, ideas and experiences (1<->1)?

We live our lives constantly absorbing from our senses. We experience many fundamental emotions such as desire, fear, love, anger and laughter early in our lifetimes. Furthermore, our lives are becoming more instrumented by cheap sensors and disseminated through social media. Could a neural net trained on our experiences, and our emotional responses to them, create audio stimulation that would invoke a desired sensation? This “sounds” far more practical than encoding that experience in neural code and physically imparting it to our neurons.

Powerful emotions have been inspired by one of the oldest modes of expression: music. Humanity has invented countless instruments to produce many sounds, and, most recently, digitally synthesized sounds that would be difficult or impractical to generate with a physical instrument. We consume music individually through our headphones, and experience energy, calm, melancholy and many other emotions. It is socially acceptable to wear headphones; in fact Apple and Beats by Dre made it fashionable.

If I want to communicate a desired emotion to a counterpart, I can invoke a deep net to generate sounds that, according to a deep net sitting on my partner’s device, invoke the desired emotion. There are a handful of startups using AI to create music with desired features that fit a certain genre. How about taking it a step further to create a desired emotion on behalf of the listener?

The same concept can be applied for human-to-machine interfaces, and vice-versa. Touch screens have already almost obviated keyboards for consuming content; voice and NLP will eventually replace keyboards for generating content.

Deep nets will infer what we “mean” from what we “say.” We will “read” through a combination of hearing through a headset and “seeing” on a display. Simple experiences can be communicated through sound, and richer experiences through a combination of sight and sound that invokes our other senses, just as the sight of a stadium invokes the roar of a crowd and the taste of beer and hot dogs.

Behold our future link to each other and our machines: the humble microphone-equipped earbud. Initially, it will tether to our phones for processing, connectivity and charge. Eventually, advanced compute and energy harvesting will obviate the need for phones, and those earpieces will be our bridge to every human being on the planet, and our access to the corpus of human knowledge and expression.

The timing of this prediction may be ironic given Jawbone’s expected shutdown. I believe that there is a magical company to be built around what Jawbone should have been: an AI-powered telepathic interface between people, the present, past and what’s expected to be the future.

Featured Image: Ky/Flickr UNDER A CC BY 2.0 LICENSE

 



H2O.ai’s Driverless AI automates machine learning for businesses

19:11 | 6 July

Driverless AI is the latest product from H2O.ai aimed at lowering the barrier to making data science work in a corporate context. The tool assists non-technical employees with preparing data, calibrating parameters and determining the optimal algorithms for tackling specific business problems with machine learning.

At the research level, machine learning problems are complex and unpredictable — combining GANs and reinforcement learning in a never-before-seen use case takes finesse. But the reality is that a lot of corporates today use machine learning for relatively predictable problems — evaluating default rates with a support vector machine, for example.

But even these relatively straightforward problems are tough for non-technical employees to wrap their heads around. Companies are increasingly working data science into non-traditional sales and HR processes, attempting to train their way to costly innovation.

All of H2O.ai’s products help to make AI more accessible, but Driverless AI takes things a step further by automating many of the tough decisions that need to be made when preparing a model. Driverless AI automates feature engineering, the process by which key variables are selected to build a model.
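
H2O hasn't published the tool's internals here, so purely as a rough illustration, the sketch below shows the simplest possible version of automated feature selection: scoring candidate columns by how strongly they correlate with the target and keeping the top few. The data, column indices and correlation criterion are assumptions for the example, not Driverless AI's method.

```python
# Generic illustration of automating one feature-engineering decision (this is
# NOT H2O's implementation): score each candidate column by its absolute
# correlation with the target and keep only the strongest ones.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                                    # 500 rows, 10 candidate features
y = 3 * X[:, 2] - 2 * X[:, 7] + rng.normal(scale=0.5, size=500)   # target driven by columns 2 and 7

def top_k_features(X, y, k=3):
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return sorted(np.argsort(scores)[-k:].tolist())

print(top_k_features(X, y))  # expected to include columns 2 and 7
```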

H2O built Driverless AI with popular use cases built-in, but it can’t solve every machine learning problem. Ideally it can find and tune enough standard models to automate at least part of the long tail.

The company alluded to today’s release back in January when it launched Deep Water, a platform allowing its customers to take advantage of deep learning and GPUs.

We’re still in the very early days of machine learning automation. Google CEO Sundar Pichai generated a lot of buzz at this year’s I/O conference when he provided details on the company’s efforts to create an AI tool that could automatically select the best model and characteristics to solve a machine learning problem with trial and error and a ton of compute.
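
As a hedged illustration of that trial-and-error idea (not Google's system and not H2O's product), a model search can be as simple as cross-validating a handful of candidate models and keeping the best scorer. The dataset and candidate list below are arbitrary choices for the sketch.

```python
# Toy "trial and error" model selection (illustrative only): cross-validate a
# few candidate models on a benchmark dataset and keep the best scorer.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

scores = {name: cross_val_score(model, X, y, cv=5).mean() for name, model in candidates.items()}
print(scores)
print("best model:", max(scores, key=scores.get))
```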

Driverless AI is an early step in the journey of democratizing and abstracting AI for non-technical users. You can download the tool and start experimenting here. 

 



TC Sessions: Robotics to feature talks from Rod Brooks, DARPA and MIT CSAIL

19:07 | 20 June

The agenda for TC Sessions: Robotics just keeps getting more irresistible. We are happy to announce that Rod Brooks, co-founder of Rethink Robotics and iRobot, will join us on stage at TechCrunch’s first-ever robotics show, July 17 at MIT’s Kresge Auditorium.

Brooks is a former director of MIT’s CSAIL program as well as an author and prognosticator on the future of robots. At TechCrunch Disrupt NY in May, Brooks expressed contrarian views about the imminence of driverless cars, the capabilities of artificial intelligence, and rules of engagement for robots at war. We are looking forward to taking that conversation further and learning more about Rethink Robotics’ progress delivering their collaborative robots, Baxter and Sawyer, to work alongside humans in factory settings.

We’re also excited to announce two additional workshops for the event. Both will give attendees opportunities to get the inside track from leaders in the robotics field. The DARPA workshop will focus on the agency’s aims and how to work with DARPA. It will be led by Dr. William Regli, Acting Director of the Defense Sciences Office. In the MIT CSAIL workshop, attendees will get a look at some of the best projects inside MIT’s robotics lab.

These workshops join the agenda that also includes a workshop on educating future roboticists featuring educators from Olin College, Kettering University and Udacity.

General admission tickets are currently available, but seating in MIT’s Kresge Auditorium is limited. We hope to see you there.

DARPA
The mission of the Defense Advanced Research Projects Agency (DARPA) is “to prevent and create strategic surprise by developing breakthrough technologies for national security.” The agency’s project-oriented approach to science and engineering, however, differs in both approach and execution from other U.S. government funding agencies. In this workshop, DARPA leadership will discuss the agency’s vision and goals, provide overviews of each of the organization’s technical offices and explain the mechanics of working with DARPA. The objective of the workshop is to enlist help in fostering the institutional evolution of America’s broader science and technology ecosystem needed to respond better and more rapidly to future challenges.

MIT CSAIL
The MIT Computer Science and Artificial Intelligence Laboratory is tasked with research at the bleeding edge of technology. Attendees of this workshop will get an insider’s look at some of the hottest projects being developed in CSAIL’s labs and engineering bays. Robert Katzschmann will present Soft Robotics and the team’s creative approach to allowing robots to manipulate objects. Claudia Perez D’Arpino’s presentation will demonstrate how robots can learn from a single demo, and Andrew Spielberg will explain a novel process to create and fabricate robots.

Building Roboticists
David Barrett, a professor of mechanical engineering at Olin College; Ryan Keenan, curriculum lead for Udacity; and Dr. Robert McMahan, president of Kettering University, will lead a workshop discussing their views on the best way to train the next generation of roboticists. Each of these educators leads a vastly different program, but the aim is universal: to train the next generation of globally competitive engineers. It’s important that these students learn through hands-on experience not only how to write code, but how to deploy it in a viable manner that results in a sustainable product.

 



Microsoft Maluuba teaches management 101 to machines in its first paper since being acquired

22:45 | 6 April

In mid-January, the ongoing race for AI put Montreal-based Maluuba on our radar. Microsoft acquired the startup and its team of researchers to build better machine intelligence tools for analyzing unstructured text to enable more natural human-computer interaction — think bots that can actually respond with reasonable intelligence to a text you send. The team dropped its first paper since being acquired, and it sheds light on the group’s priorities.

The paper outlines a method for multi-advisor reinforcement learning that breaks problems down to be simpler and more easily computable. In oversimplified terms, Maluuba is effectively trying to teach leadership to groups of machines working to solve problems.

Problem

Existing conversational interfaces are rigid and easily broken. Siri, Alexa and Cortana are miles ahead of old-fashioned dialog trees, but they are still a far cry from generalized intelligence. From a computational standpoint, a complete model of the world would be infeasible to create, so engineers instead create specialized machine intelligence tools that can perform well on a smaller number of tasks. This is why you can ask Siri to make a phone call but can’t ask it to organize a large dinner event.

A lot of attention is being given to reinforcement learning, a specialized branch of machine learning. As I have explained previously, reinforcement learning steals the idea of utility from economists in an effort to quantify and iteratively evaluate decision making. Instead of explicitly telling an autonomous car every rule of the road, it can be more effective to gamify the problem and assign figurative “points” that the intelligent system can optimize. The system could hypothetically lose points for driving over a double yellow line and gain them for maintaining the speed limit.
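
To make that point concrete, here is a minimal sketch of that kind of reward shaping. The events and point values are invented for the example and are not drawn from any real autonomous-driving system.

```python
# Minimal reward-shaping sketch for the driving example above. The events and
# point values are made up for illustration, not taken from a real system.
def driving_reward(crossed_double_yellow: bool, speed: float, speed_limit: float) -> float:
    reward = 0.0
    if crossed_double_yellow:
        reward -= 10.0                         # penalize crossing a double yellow line
    if speed <= speed_limit:
        reward += 1.0                          # small bonus for keeping to the speed limit
    else:
        reward -= 0.5 * (speed - speed_limit)  # penalty grows the further over the limit we go
    return reward

# The agent's objective is to pick actions that maximize the sum of such rewards
# over time, rather than to follow explicitly programmed rules of the road.
print(driving_reward(False, 55, 65))   #  1.0
print(driving_reward(True, 70, 65))    # -12.5
```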

This allows for a much more adaptable system, but it unfortunately is still a rather complex problem requiring a lot of compute. This is where multi-advisor reinforcement learning comes in.

The Maluuba team working at their office.

Solution

The Maluuba team is trying to solve these complexity problems that face reinforcement learning. Their approach is to use multiple “advisors” to break the problem down into smaller, more digestible chunks. Traditionally, a single virtual agent is used for reinforcement learning, but in recent years multi-agent approaches have become more common.

In a conversation, the group presented the example of an intelligent scheduling assistant. Rather than have a single agent learn to schedule every kind of optimal meeting, it could someday make sense to assign a different agent to different classes of meetings. The challenge is getting all these agents to work together in consonance.

Intuitively it’s easy to imagine these agents as humans splitting up a task. Getting people to work together efficiently is no small task even though a divide and conquer strategy can outperform the lone wolf mentality.

The solution is to have an aggregator sit on top of all the “advisors” to make a decision. Each advisor in Maluuba’s paper has a different focus with respect to the grand problem being solved. Each agent gets a different reward for the action it specializes in. If agents take different positions, the aggregator steps in and arbitrates.
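
The paper's exact formulation is more involved, but the general idea can be sketched roughly as follows: each advisor scores the available actions from its own narrow perspective, and the aggregator combines the scores and arbitrates. The advisors, actions and scores below are made up for illustration and are not Maluuba's algorithm.

```python
# Rough sketch of the multi-advisor idea (not Maluuba's actual algorithm): each
# advisor scores every available action from its own narrow perspective, and an
# aggregator sums the scores and arbitrates when the advisors disagree.
from typing import Callable, Dict, List

Advisor = Callable[[str], Dict[str, float]]   # state -> {action: score}

def eat_pellets_advisor(state: str) -> Dict[str, float]:
    return {"left": 2.0, "right": 0.5, "up": 0.0, "down": 0.1}

def avoid_ghost_advisor(state: str) -> Dict[str, float]:
    return {"left": -5.0, "right": 1.0, "up": 0.5, "down": 0.0}

def aggregate(advisors: List[Advisor], state: str) -> str:
    totals: Dict[str, float] = {}
    for advisor in advisors:
        for action, score in advisor(state).items():
            totals[action] = totals.get(action, 0.0) + score
    return max(totals, key=totals.get)

print(aggregate([eat_pellets_advisor, avoid_ghost_advisor], "some game state"))  # -> "right"
```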

Maluuba used a simplified version of Ms. Pac-Man, called Pac-Boy, to test different methods for its multi-advisor reinforcement learning framework. The team wants to study the process of breaking down problems. Ideally there is some universality in how problems can be organized around a number of optimal aggregators. This is another place where it’s interesting to think about how humans decompose problems, often inefficiently — think leadership 101 for machines.

Why you should care

Multi-advisor reinforcement learning can save CPU and GPU power. Breaking a problem down also makes it easier to distribute across different servers for parallelized processing. Reduced complexity is universally helpful for all reinforcement learning problems.

The research team explained to me that it’s still early days for working alongside Microsoft. They’re transitioning to Azure and building out communication channels between existing machine learning teams. But when that process is complete, it strikes me that Maluuba will play a huge role in analyzing text and the language held within it.

While reinforcement learning itself isn’t novel, Maluuba is pouring a lot of resources into it. We have already seen the potential of reinforcement learning in DeepMind’s AlphaGo. Future joint research projects could bring more efficient and adaptable reinforcement learning into new consumer- and enterprise-facing dialog products for Microsoft.

 



The sound of impending failure

20:00 | 29 January

If we can find a way to automate listening itself, we would be able to more intelligently monitor our world and its machines day and night. We could predict the failure of engines, rail infrastructure, oil drills and power plants in real time — notifying humans the moment an acoustical anomaly occurs. This has the potential to save lives, but despite advances in machine learning, we…
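
As a toy illustration of what "automated listening" for failure prediction can look like at its simplest (not any particular vendor's approach), one can window an audio signal, track its energy and flag windows that deviate sharply from the baseline. The synthetic signal and threshold below are assumptions for the sketch.

```python
# Toy acoustic-anomaly sketch: flag audio windows whose RMS energy deviates
# strongly from the baseline. Real monitoring systems would use learned models
# on richer spectral features, but the flagging idea is similar.
import numpy as np

def rms_energy(signal: np.ndarray, window: int = 1024) -> np.ndarray:
    n = len(signal) // window
    frames = signal[: n * window].reshape(n, window)
    return np.sqrt((frames ** 2).mean(axis=1))

def anomalous_windows(signal: np.ndarray, threshold_sigma: float = 3.0) -> np.ndarray:
    energy = rms_energy(signal)
    baseline, spread = energy.mean(), energy.std()
    return np.where(np.abs(energy - baseline) > threshold_sigma * spread)[0]

# Synthetic example: a steady machine hum with a brief loud transient injected.
rng = np.random.default_rng(1)
audio = 0.1 * rng.normal(size=48_000)
audio[30_000:31_000] += 2.0 * rng.normal(size=1_000)   # simulated anomaly
print(anomalous_windows(audio))                        # windows covering the transient
```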

 


Last comments

Walmart retreats from its UK Asda business to hone its focus on competing with Amazon
Peter Short: Good luck

Evolve Foundation launches a $100 million fund to find startups working to relieve human suffering
Peter Short: Money will give hope

Boeing will build DARPA’s XS-1 experimental spaceplane
Peter Short: Great

Is a “robot tax” really an “innovation penalty”?
Peter Short: It need to be taxed also any organic substance ie food than is used as a calorie transfer needs tax…

Twitter Is Testing A Dedicated GIF Button On Mobile
Peter Short: Sounds great Facebook got a button a few years ago Then it disappeared Twitter needs a bottom maybe…

Apple’s Next iPhone Rumored To Debut On September 9th
Peter Short: Looks like a nice cycle of a round year;)

AncestryDNA And Google’s Calico Team Up To Study Genetic Longevity
Peter Short: I'm still fascinated by DNA though I favour pure chemistry what could be Offered is for future gen…

U.K. Push For Better Broadband For Startups
Verg Matthews: There has to an email option icon to send to the clowns in MTNL ... the govt of India's service pro…

CrunchWeek: Apple Makes Music, Oculus Aims For Mainstream, Twitter CEO Shakeup
Peter Short: Noted Google maybe grooming Twitter as a partner in Social Media but with whistle blowing coming to…
