Main article: Robotics


Combining augmented reality, 3D printing and a robotic arm to prototype in real time

23:51 | 19 February

Robotic Modeling Assistant (RoMA) is a joint project out of MIT and Cornell that brings together a variety of different emerging technologies in an attempt to build a better prototyping machine.

Using an augmented reality headset and two controllers, the designer builds a 3D model using a CAD (computer-aided design) program. A robotic arm then goes to work constructing a skeletal model using a simple plastic depositing 3D printer mounted on its hand.

“With RoMA, users can integrate real-world constraints into a design rapidly, allowing them to create well-proportioned tangible artifacts,” according to team leader Huaishu Peng. “Users can even directly design on and around an existing object, extending the artifact by in-situ fabrication.”

A video uploaded by Peng shows that the system’s 3D printing is still pretty crude. Because the extruder is mounted on the end of an arm, rather than over a more constrained 3D printer bed, the resulting prints come out much looser.

It is, however, a lot faster than the familiar FDM process you’ll find in most desktop 3D printers, and as such could eventually be useful to those looking to essentially sketch things out in three-dimensional space with a bit more control than you’ll get from a 3D printing pen like the 3Doodler.

The arm is also programmed to react in real time to the designer’s actions. “At any time, the designer can touch the handle of the platform and rotate it to bring part of the model forward,” writes Peng. “The robotic arm will park away from the user automatically. If the designer steps away from the printing platform, the robotic fabricator can take full control of the platform and finish the printing job.”
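That hand-off between designer and fabricator can be thought of as a small state machine: the arm prints while the designer models, yields the platform when the designer grabs it, and takes over completely when the designer walks away. Below is a minimal, hypothetical Python sketch of that idea; the class and method names (RoboticArm, park_away and so on) are illustrative and not part of the actual RoMA software.

```python
from enum import Enum, auto

class DesignerState(Enum):
    DESIGNING = auto()        # designer is working at the platform
    TOUCHING_HANDLE = auto()  # designer grabs the platform's rotation handle
    AWAY = auto()             # designer has stepped away

class RoboticArm:
    """Stand-in for the printing arm; real hardware control would go here."""
    def park_away(self):             print("arm: parking away from the user")
    def take_platform_control(self): print("arm: taking full control of the platform")
    def continue_printing(self):     print("arm: finishing the print job")
    def print_safe_regions(self):    print("arm: printing away from the designer's hands")

def control_step(state: DesignerState, arm: RoboticArm) -> None:
    """One iteration of a hypothetical RoMA-style hand-off loop."""
    if state is DesignerState.TOUCHING_HANDLE:
        arm.park_away()              # yield the platform to the designer
    elif state is DesignerState.AWAY:
        arm.take_platform_control()  # designer left: the fabricator finishes the job
        arm.continue_printing()
    else:
        arm.print_safe_regions()     # co-work while the designer keeps modeling

if __name__ == "__main__":
    arm = RoboticArm()
    for state in (DesignerState.DESIGNING, DesignerState.TOUCHING_HANDLE, DesignerState.AWAY):
        control_step(state, arm)
```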

 



Robot assistants and a marijuana incubator

23:03 | 18 February

We’ve had plenty of time to get used to our robot overlords and Boston Dynamics is helping us get there. This week we talk about the company’s addition of a door-opening arm to its SpotMini robot. It’s not spooky at all.

We then switch gears and discuss Facebook’s Messenger for Kids. Is it good, bad or the company’s master plan to get every last human being with a smartphone on the platform?

Later in the episode, MRD chats with Lanese Martin, co-founder of the Hood Incubator. The Hood Incubator is an Oakland-based organization that aims to foster equity in the marijuana industry. Through its programming, Hood Incubator supports people of color building businesses in the legal marijuana industry.

Check it out at the top or head over to your favorite podcasting platform to download, subscribe and rate. Until next week!

Featured Image: Bloomberg / Contributor/Getty Images

 



Miso scores $10 million to bring its hamburger-flipping robot to more restaurants

16:00 | 15 February

Pasadena-based hardware startup Miso Robotics just got a big vote of confidence from investors, in the form of a $10 million Series B. This latest windfall, led by Acacia Research Corporation, brings the company’s total disclosed funding to $14 million and arrives as the startup ramps up production and gets ready to deliver its hamburger-cooking robot Flippy to 50 CaliBurger locations.

“We’re super stoked to use this funding to develop and scale our capabilities of our kitchen assistants and AI platform,” CEO/co-founder Dave Zito said on a call with TechCrunch ahead of the announcement. “Our current investors saw an early look at our progress, and they were so blown away that they doubled-down.”

A robot’s view of the grill

The round also includes new investors, including, notably, Levy, a Chicago-based hospitality company that runs restaurants and vending machines in entertainment and sporting venues in the U.S. and U.K. The company’s investment is clearly a strategic one, as it looks toward staffing solutions in its heavily trafficked locations.

“The Levy participation is really centered around their looking at this future world where people are increasingly wanting prepared foods,” says Zito. “People really like the idea of a kitchen assistant that can really come in and be that third hand for the overworked staff. They’re all reporting high turnover rate and increasing customer demand for fresh ingredients prepared quickly. Trying to keep that at accessible prices is hard.”

We’ve already seen a number of demos of Flippy in earlier iterations, including a peek inside the robot’s AI-based vision system back at Disrupt in September. The company says a more public debut of the robot at the Pasadena CaliBurger location is set for “the coming weeks.”

Featured Image: Miso Robotics

 



CommonSense Robotics raises $20M for robotics tech for online grocery fulfilment

15:00 | 15 February

CommonSense Robotics, an Israel-based startup developing AI and robotics tech to help online grocery retailers speed up fulfilment and delivery, has raised $20 million in Series A funding.

The round was led by Playground Global, with participation from previous investors Aleph VC and Eric Schmidt’s Innovation Endeavors. It brings the company’s total funding to $26 million.

“The funds will be used to scale up CommonSense Robotics’ facility deployment rate, develop their next generation of robotics and AI, and expand global operations and sales,” says the startup.

Using what it describes as advanced robotics and AI, CommonSense Robotics claims to enable retailers — even relatively small ones — to offer one hour, on-demand grocery deliveries to consumers “at a profitable margin”. It does this by employing robots to power bespoke warehouses or micro-fulfilment centers that are small enough to be placed in urban areas rather than miles away on the outskirts of town.

The robots are designed to store products and bring the right ones to humans who then pack a customer’s order. More robots are then used to get the packaged order out to dispatch. This robot/AI and human combo promises to significantly reduce the cost of on-demand groceries, thus broadening the range of retailers that can compete with Amazon.

“Our AI software breaks the order down into robot tasks, and finds the right robots to complete those tasks,” Elram Goren, CEO and co-founder of CommonSense Robotics, tells me.

“We have robots that are capable of moving boxes (totes) around extremely efficiently and at high speeds. Our various types of robots will bring the right totes of products to a stationary human ‘picker’ who in turn packs the order, which is then sent by robots towards the delivery interface where orders are packed into a van or scooter for dispatch”.

In addition, Goren says all of this is designed to happen within a space of about 10,000 square feet made up of a 3D “cube” of racks, i.e. utilising vertical as well as horizontal space.

Explains the CommonSense Robotics CEO: “Our robotics and AI are unique and proprietary with the entire system designed to maximise space efficiencies (how small we can have the warehouse), labor efficiencies (how little we can have human labor in the process), how fast we can deal with an average grocery order (usually less than 3 minutes, where a completely human process takes about 10x that), and how close we can have the centers to the customers”.
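Goren’s description (break each order into tote-moving tasks, then find the right robots to carry them out) maps onto a standard task-allocation loop. Here is a minimal, hypothetical Python sketch of that idea; the class and field names are illustrative and are not CommonSense Robotics’ actual software.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Robot:
    name: str
    position: int             # simplified 1-D position along an aisle
    busy: bool = False

@dataclass
class Task:
    tote_id: str
    destination: str           # e.g. "picker-1" or "dispatch"
    assigned_to: Optional[str] = None

def order_to_tasks(order_items, tote_index):
    """Break a grocery order into 'bring this tote to the picker' tasks."""
    totes_needed = {tote_index[item] for item in order_items}
    return [Task(tote_id=t, destination="picker-1") for t in sorted(totes_needed)]

def assign(tasks, robots, tote_positions):
    """Greedy allocation: hand each task to the nearest idle robot."""
    for task in tasks:
        idle = [r for r in robots if not r.busy]
        if not idle:
            break  # remaining tasks wait until a robot frees up
        robot = min(idle, key=lambda r: abs(r.position - tote_positions[task.tote_id]))
        robot.busy, task.assigned_to = True, robot.name

if __name__ == "__main__":
    tote_index = {"milk": "T7", "eggs": "T7", "bread": "T2"}  # which tote holds which product
    tote_positions = {"T7": 14, "T2": 3}
    robots = [Robot("R1", 0), Robot("R2", 12)]
    tasks = order_to_tasks(["milk", "eggs", "bread"], tote_index)
    assign(tasks, robots, tote_positions)
    for task in tasks:
        print(task)
```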

Meanwhile, the Tel Aviv company is currently deploying the first generation of its robots in its first operational facility, and has plans to open more facilities in the U.S., U.K. and Israel in 2018.

 



Sony now has a Koov robotics learning kit for US classrooms

20:20 | 14 February

After soft-launching a blocks-based educational robotics kit in the US last summer to gauge local interest, Sony has judged the reception for its Koov kit warm enough to fire a fully fledged educator offering into the US market. The Koov Educator Kit goes up for pre-order today, with an estimated shipping date of March 25.

The $520 price-tag puts it at the pricier end of the spectrum so far as STEM-targeted learn-to-code gizmos generally go. (And there are a lot of those quasi-educational ‘toys’ for parents to choose from these days.) But, as the name suggests, Sony’s Koov kit is specifically designed for educators and for use in classrooms, with each kit supporting multiple users.

Specifically, each Koov Educator kit is good for “up to five students”, according to Sony — and presumably so long as the kids play nice together and fairly share the blocks and bits. So in that context the pricing, while not cheap, looks more reasonable.

Sony is also explicitly targeting ‘STEAM’ learning, with the ‘A’ in the acronym standing for ‘Art’, alongside the more usual Science, Technology, Engineering and Math components, which sets Koov apart from some less flexible learn-to-code gizmos on the market.

Though other modular electronics coding kits, such as from the likes of LittleBits and Sam Labs (to name two), are also playing in much the same space.

The Koov system is designed for children aged eight and older. And as well as translucent plug-together blocks and Arduino-compatible electronics bits, there’s a Scratch-based drag-and-drop coding interface to link physical creation with digital control, via a cross-platform companion app.

The Educator Kit contains more than 300 connectable blocks in all, plus multiple sensors, motors, LEDs and other electronics bits and bobs (the full inventory is here). It also includes class management software, curriculum-aligned lesson plans, step-by-step guides for kids and student progress reports.

Sony says the Koov app serves up more than 30 hours of educational content, via a Learning Course feature which it says is intended to offer students an introduction to “key concepts” in coding, building and design. As with the majority of these STEM gizmos, the educational philosophy leans heavily on the idea of learning through playing (around).

The Koov kit also includes 23 pre-designed, pre-coded “Robot Recipes” to encourage kids to get building right away. Though the wider aim of the Koov system is to support children being able to design and build their own robots (back to that ‘Art’ element) — and indeed Sony claims there are “countless” ways to stick its blocks and bits together. So much like Lego, then.

It also bills the system as “flexible enough” for students to use independently while also providing material to support structured learning.

 



Boston Dynamics’ newest robot learns to open doors

00:38 | 13 February

We knew this day would come sooner or later. Like the cloned velociraptors before it, Boston Dynamics’ newly redesigned Spot Mini has figured out how to open doors — with either its arm or face, depending on how you look at it.

The team behind the Big Dog proves that it’s still the master of viral robotic marketing, even after switching teams from Google to Softbank. Three months after debuting a more streamlined version of its electric Spot Mini, the company’s got another teaser wherein one robot equipped with a head-mounted arm makes (relatively) quick work of a door, letting its pal waltz through.

The video’s impressive for both the agility of the arm itself, as well as the robot’s ability to maintain balance as it swings open what looks to be a fairly heavy door.

“Clever girl,” indeed.

Like the last video, the teaser doesn’t offer a ton of insight into what’s new with the bumblebee-colored version of the company’s already announced robot. Last time out, it appeared as though we got a preview of a pair of Kinect-style 3D cameras that could give a little more insight into the robot’s navigation system.

That tech seemed to hint at the possibility of an advanced autonomous control system. Given the brevity of the video, however, it’s tough to say whether someone’s controlling the ‘bots just out of frame.

If the company managed to program Spot Mini to actually open the door on its own in order to help free its friend, well, perhaps it’s time to be concerned.

 



Teaching robots to understand their world through basic motor skills

23:21 | 10 February

Robots are great at doing what they’re told. But sometimes inputting that information into a system is a far more complex process than the task we’re asking them to execute. That’s part of the reason they’re best suited for simple/repetitive jobs.

A team of researchers at Brown University and MIT is working to develop a system in which robots can plan tasks by developing abstract concepts of real-world objects and ideas based on motor skills. With this system, the robots can perform complex tasks without getting bogged down in the minutiae required to complete them.

The researchers programmed a two-armed robot (Anathema Device or “Ana”) to manipulate objects in a room — opening and closing a cupboard and a cooler, flipping on a light switch and picking up a bottle. While performing the tasks, the robot was taking in its surroundings and processing information through algorithms developed by the researchers.

According to the team, the robot was able to learn abstract concepts about the object and the environment. Ana was able to determine that doors need to be closed before they can be opened.

“She learned that the light inside the cupboard was so bright that it whited out her sensors,” the researchers wrote in a release announcing their findings. “So in order to manipulate the bottle inside the cupboard, the light had to be off. She also learned that in order to turn the light off, the cupboard door needed to be closed, because the open door blocked her access to the switch.”

Once processed, the robot associates a symbol with one of these abstract concepts. It’s a sort of common language developed between the robot and human that doesn’t require complex coding to execute. This kind of adaptive quality means the robots could become far more capable of performing a greater variety of tasks in more diverse environments by choosing the actions they need to perform in a given scenario.
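The constraints Ana reportedly learned (the cupboard must be closed to reach the light switch, the light must be off to grab the bottle) are exactly the kind of preconditions and effects a symbolic planner works with. The sketch below is a minimal, hypothetical Python illustration of that idea under those assumptions; the skill names and the tiny breadth-first planner are illustrative and are not the researchers’ actual system.

```python
from collections import deque
from dataclasses import dataclass

@dataclass(frozen=True)
class Skill:
    """A motor skill abstracted into a symbolic operator: preconditions -> effects."""
    name: str
    preconditions: tuple   # (fact, required_value) pairs
    effects: tuple         # (fact, new_value) pairs

    def applicable(self, state):
        return all(state.get(fact) == value for fact, value in self.preconditions)

    def apply(self, state):
        new_state = dict(state)
        new_state.update(dict(self.effects))
        return new_state

# Illustrative skills mirroring the constraints described in the article:
# the switch is reachable only when the cupboard is closed, and the bottle
# can be grasped only when the light is off (so the sensors aren't whited out).
SKILLS = (
    Skill("close_cupboard", (("cupboard_open", True),), (("cupboard_open", False),)),
    Skill("open_cupboard",  (("cupboard_open", False),), (("cupboard_open", True),)),
    Skill("flip_light_off", (("cupboard_open", False), ("light_on", True)), (("light_on", False),)),
    Skill("grab_bottle",    (("cupboard_open", True), ("light_on", False)), (("holding_bottle", True),)),
)

def plan(start, goal, skills=SKILLS):
    """Breadth-first search over the abstract state space for a skill sequence."""
    fact, value = goal
    frontier = deque([(start, [])])
    seen = {tuple(sorted(start.items()))}
    while frontier:
        state, steps = frontier.popleft()
        if state.get(fact) == value:
            return steps
        for skill in skills:
            if skill.applicable(state):
                nxt = skill.apply(state)
                key = tuple(sorted(nxt.items()))
                if key not in seen:
                    seen.add(key)
                    frontier.append((nxt, steps + [skill.name]))
    return None

if __name__ == "__main__":
    start = {"cupboard_open": True, "light_on": True, "holding_bottle": False}
    print(plan(start, ("holding_bottle", True)))
    # -> ['close_cupboard', 'flip_light_off', 'open_cupboard', 'grab_bottle']
```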

“If we want intelligent robots, we can’t write a program for everything we might want them to do,” George Konidaris, a Brown University assistant professor who led the study, told TechCrunch. “We have to be able to give them goals and have them generate behavior on their own.”

Of course, asking every robot to learn this way is equally inefficient, but the researchers believe they can develop a common language and create skills that could be downloaded to new hardware.

“I think what will happen in the future is there will be skills libraries, and you can download those,” explains Konidaris. “You can say, ‘I want the skill library for working in the kitchen,’ and that will come with the skill library for doing things in the kitchen.”

 



Researchers discovered a new kind of stereo vision by putting tiny 3D glasses on mantises

22:19 | 9 February

Researchers at Newcastle University in the U.K. believe they’ve discovered a differently evolved form of stereo vision in mantises. The research team studied the phenomenon in the insects precisely as one would hope — by attaching a pair of tiny 3D glasses to their bug eyes.

The scientists attached a mantis-sized pair of dual-color 3D glasses to the insects’ eyes, using beeswax as temporary adhesive. The team then showed video of potential prey, which the mantises lunged at. In that respect, the bugs appeared to approach 3D image processing in much the same way humans do.

But when the insects were shown dot patterns used to test 3D vision in humans, they reacted differently. “Even if the scientists made the two eyes’ images completely different,” the university writes, describing its findings, “mantises can still match up the places where things are changing. They did so even when humans couldn’t.”

According to the university, the discovery of stereo vision makes the mantises unique in the insect world. The manner of 3D vision is also different than the variety developed in other animals like monkeys, cats, horses, owls and us. In this version of the trait, the mantises are matching motion perceived between two eyes, rather than the brightness we use.

“We don’t know of any other kind of animal that does this,” Dr. Vivek Nityananda told TechCrunch. “There’s really no other example or precedent we have for this kind of 3D vision.” Nityananda adds that this sort of 3D vision has been theorized in the past, but the team believes that this is the first time it’s been detected in the animal kingdom.

The scientists believe the system developed in a way that’s much less complex than our version of 3D vision, in order to be processed by the mantises’ less complex brains. That, Nityananda says, could be a boon for roboticists looking to implement 3D systems in less complex, lighter-weight machines.

“It’s a simpler system,” he says. “All mantises are doing is detecting the change in the relevant position in both eyes. Detection of change would be much easier to implement [in robotics], versus the more elaborate details in matching the views of two eyes. That would require much less computation power, and you could put that into perhaps a much more lightweight robot or sensor.”
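What Nityananda describes (matching where the image is changing in each eye, rather than matching brightness) is simple enough to sketch in a few lines. The toy example below is a hypothetical NumPy illustration of that principle only; it estimates a single global disparity and is not the researchers’ model or code.

```python
import numpy as np

def change_map(prev: np.ndarray, curr: np.ndarray, thresh: float = 0.1) -> np.ndarray:
    """Binary map of where the image changed between two frames seen by one eye."""
    return (np.abs(curr - prev) > thresh).astype(np.float32)

def mantis_disparity(left_prev, left_curr, right_prev, right_curr, max_disp=16):
    """Match locations of change between the eyes instead of matching brightness.
    Returns the horizontal shift that best aligns the two change maps."""
    left_change = change_map(left_prev, left_curr)
    right_change = change_map(right_prev, right_curr)
    best_disp, best_score = 0, -1.0
    for d in range(max_disp):
        shifted = np.roll(right_change, -d, axis=1)   # slide the right eye's change map
        score = float((left_change * shifted).sum())  # overlap of "something moved here"
        if score > best_score:
            best_disp, best_score = d, score
    return best_disp

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h, w, true_disp = 64, 64, 5
    background = rng.random((h, w))
    left_prev, right_prev = background, np.roll(background, true_disp, axis=1)
    # A small "prey" patch moves, creating change at offset positions in the two eyes.
    left_curr, right_curr = left_prev.copy(), right_prev.copy()
    left_curr[30:34, 40:44] += 0.5
    right_curr[30:34, 40 + true_disp:44 + true_disp] += 0.5
    print("estimated disparity:", mantis_disparity(left_prev, left_curr, right_prev, right_curr))
```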

 



Aurora will power Byton EV’s autonomous driving features

18:38 | 9 February

Aurora, the self-driving startup founded by Google self-driving car project alum Chris Urmson, along with Tesla Autopilot developer Sterling Anderson, CMU robotics expert and Uber vet Drew Bagnell, and a team of industry experts, will be making the autonomous smarts for the forthcoming electric vehicle from Byton, a startup that had a splashy debut at CES earlier this year.

Byton’s Concept electric SUV is a car with a lot of interesting tech features, aside from its all-electric drive train. The vehicle has a massive, dashboard-covering display that incorporates information readouts, entertainment options and vehicle controls. It’s a screen that seems somewhat ill-suited for the task of paying attention to the road while driving, and the Byton car also has front seats that swivel towards the inside of the vehicle so that those in the front can better interact with those in the back.

Both of those features are more geared toward a future in which autonomous driving is a ready and viable option for Byton owners. The car is aiming for a 2019 starting ship date, by which time it’s possible self-driving features won’t seem such a distant dream. And now we know that Byton has a technology partner on the autonomous driving side of things with the technical know-how to make it an even more realistic expectation.

Aurora, despite officially breaking cover only just last year, is already working with a range of automakers on their autonomous driving technology, including Volkswagen and Hyundai. Aurora CEO Chris Urmson explained that its goals mean it’s happy to work with companies at all stages of development and maturity to help make self-driving a practical reality.

“Our mission is to deliver the benefits of self-driving technology safely, quickly and broadly,” he said in an interview. “So for us to have that broad part, it means we have to work with a number of great partners, and we’re very fortunate with the folks we have [as partners] to date… this is how we help the business, and we look forward to being able to engage with others in the future.”

For Byton and Aurora, this partnership will kick off with pilot test driving in California sometime soon, and Byton hopes to eventually tap Aurora to help meet its goal of fielding premium electric consumer vehicles with SAE Level 4 and Level 5 autonomous capabilities.

Aurora as a company is excited about its progress during its first year in operation, and is ramping up staffing and attracting key talent in a very competitive industry thanks to its pedigree and founding team, Urmson tells me.

“It started with a handful of us, a couple in my living room here in California, and a couple in Pittsburgh. We’ve been growing the team, that’s been one of the core focuses of this last year,” he said. “In my previous gig I had the privilege of helping build that program from day one, to a massive organization certainly leading the space, and now with Sterling and Drew, we have the opportunity to build version two of that, and learn from our experience, and build an organization and build a technology that can have a huge impact on the world, and do that quickly and safely.”

 



Disney has begun populating its parks with autonomous, personality-driven robots

19:00 | 8 February

The process of making a Disney park feel alive is most easily encapsulated in animatronic figures. These hydraulic, pneumatic and now electric figures have been a fixture at Disneyland since the ’60s. Since then, massive advancements have been made in control systems, movement architecture and programming. The most advanced animatronic figures, like the Na’Vi Shaman in Disney World’s Flight of Passage, are quite simply robots. And very sophisticated ones at that.

But not every animatronic in the parks can be a simple pneumatic connected to a bulky master system or a highly advanced and complex robotic masterwork. That’s where the Vyloo come in.

Begun as a project to help populate the park with more interactive elements, the Vyloo are three small alien creatures in a self-contained pod that renders them autonomous. They have moods, interact with guests through non-verbal gestures and cues and are powered by a completely onboard system that can be tuned quickly and left to do its thing.

“What we pitched was a project to try to bring small autonomous animatronic creatures to life. We were really interested in the idea of creating some little guys that could truly respond to and interact with guests,” says Leslie Evans, Senior R&D Imagineer at Disney. She and Alexis Wieland, Executive R&D Imagineer, started the project with the goal of creating something that was autonomous but that also provoked a reaction in guests that felt like a real emotional relationship. The creatures needed to have a “spectrum of personalities,” along with a set of tools that would let the team dial those attributes up and down before setting them loose on guests.

“I think that a lot of this was coming out of this desire to start thinking about animatronics as actors, so being able to say we want these characters to be shy, we want them to be outgoing ‑‑ trying to define them in terms of personality ‑‑ and then translating all of that into the technical tools that we need to bring the characters to life,” says Evans.

I first saw the Vyloo, then informally called ‘Tiny Life’, on a tour with a Girls Who Code group a year ago. At the time they looked fairly similar, if more ‘plain’. The basic unit is a log with three small creatures, now known as Vyloo, sitting on top of it. The creatures are outfitted with sensors and cameras and the ancillary equipment that allows them to run is completely contained inside their bodies or the log structure. This allows the Vyloo to be incredibly modular. This is unlike most robots in the park, which require attachment to external auxiliary systems that control or manipulate them.

The project had gotten to the point of being able to demo it to groups like the GWC class when the serendipity that often happens at Imagineering kicked in.

“One of the guys from our department had been working on ‘Guardians of the Galaxy ‑‑ Mission Breakout,’ and he was like, ‘Hey, guys, you really need to come to R&D and see these creatures. I think it would be the perfect thing for the first scene, that queue line in the collector’s collection room,’” says Evans.


This interplay between what Imagineering is experimenting with and what gets put in the parks is one of the things that makes the unit such a unique robotics lab. Many times off-the-shelf parts like Kinect sensors are grabbed and fit to the task; other times specific parts, components or even materials have to be invented and manufactured. Whatever technical hurdles are being overcome are always in service of the story, but the way that they may be put to use isn’t always known at the outset.

Having autonomous robotic units that can easily be placed throughout the parks to create richer, more interactive environments was half of the goal, and it worked well with the Mission Breakout scene, which features creatures and objects collected by… The Collector, natch. But that was only part of what the project was after.

“Our characters right now give very polished, perfect performances, but they really are a loop in the sense that they don’t really respond to the guests, so bringing the characters down so they know the guests are there and actually respond appropriately,” says Wieland. “To stay in character is a big part of what we were trying to pull off here, and so moving in that direction is a big part of it. How do we make our characters more visceral in the moment with the guests?”

To that end, the Vyloo are programmed initially with a range of motions and actions they can take – squishing and stretching, cocking their heads, moving their necks around. These actions are then given over to a program that takes in signals from guests by tracking whether they’re looking at them, listening to the guests talk to the Vyloo while they’re staring at their cage and following them with their gaze when they move.

The next phase, which is what the Mission Breakout test is all about, was to learn how the guests interacted with the creatures.

“As we watched them in the park, there’s something really magical about when one of these creatures looks at you. It’s enabling guests to play with our animatronics in a new way that we haven’t seen before, and I think the response to that has been very positive and super exciting that people are playing with these guys,” says Evans.

And those interactions include many things that the team never dreamed of.

“The one which blows me away… [is] guests do these kissy faces and stick their tongues out at them constantly. This is like, ‘oh, I hadn’t even thought of that,’” says Alexis. “They’re doing interactions that never occurred to me, and that’s part of the reason we’re doing this. It’s fantastic. One of the things we see all the time is people play with them and interact with them. That’s fun, and then they take a step to leave, and it keeps watching them. You see them stop, and move back, and they’re like, ‘oh my God.’”

“It’s really wonderful when that light bulb goes off…the kids get it much quicker than the adults do,” he adds.

To tune those reactions, the team created a sort of game controller on steroids, featuring knobs and sliders that allow them to tune their “attitudes” and awareness. They can be feisty or chill; more interactive or less; hyper or sleepy. Once the tuning is done, they’re set loose on the guests again.
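As described, the Vyloo draw on a small library of gestures, weight them by guest signals (gaze, speech) and by personality sliders, and can be retuned between sessions. The snippet below is a minimal, hypothetical Python sketch of that kind of parameterized behavior picker; the trait names and gesture list are invented for illustration and are not Disney’s actual control software.

```python
import random
from dataclasses import dataclass

@dataclass
class Personality:
    """Slider-style traits in [0, 1], like the knobs the team describes."""
    feisty: float = 0.5   # feisty vs. chill
    social: float = 0.5   # more interactive vs. less
    energy: float = 0.5   # hyper vs. sleepy

def pick_gesture(guest_is_looking: bool, guest_is_talking: bool, p: Personality) -> str:
    """Weight a small gesture library by guest signals and personality sliders."""
    weights = {
        "squash":      0.2 + 0.6 * p.feisty * guest_is_looking,
        "stretch_up":  0.2 + 0.6 * p.social * guest_is_looking,
        "follow_gaze": 0.1 + 0.8 * p.social,
        "chirp":       0.1 + 0.7 * p.energy * guest_is_talking,
        "doze":        0.6 * (1.0 - p.energy) * (not guest_is_looking),
    }
    gestures, w = zip(*weights.items())
    return random.choices(gestures, weights=w, k=1)[0]

if __name__ == "__main__":
    shy = Personality(feisty=0.2, social=0.3, energy=0.1)
    outgoing = Personality(feisty=0.8, social=0.9, energy=0.7)
    for label, p in (("shy", shy), ("outgoing", outgoing)):
        print(label, [pick_gesture(True, True, p) for _ in range(5)])
```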

In a moment of further synergy, the Vyloo actually ended up in the second Guardians of the Galaxy movie, right after the Milano crash lands on the planet Berhert. Director of the Guardians of the Galaxy films, James Gunn, saw the prototypes at Imagineering and loved them — and ended up giving them their name.

“When we saw the prototypes at Imagineering R&D Headquarters we were all blown away. Such fantastic creations. The colors were a little too muted for the Guardians of the Galaxy world though, so we tweaked the design and made them a little more flamboyant to fit the Guardians aesthetic. Then I got to name them! I love the Vyloos and would love nothing more than to have one as a pet!”

All of this started off with a bright yellow puppet that Evans describes as ‘dirt-simple’, made of spare fur and wooden rods, but which she still says was “cute”.

“With the puppet, we worked with a puppeteer here who’s unbelievably talented and really helped give this project life in a way that can’t be understated. He was critical to the success. We did hours of study with this thing where we would say, ‘How do you say hi if you’re really shy, but you’re also kind of curious?’ We would record all of these motions that we did with the puppet. A lot of times we’d start off in this very analog way. It helps us get a bunch of creative intent defined quickly.

“Suddenly, we have all this footage of all of these different personalities. How would they say hi? How do they get surprised? What do they do if you walk away from them? From that, we started to tease out, ‘This is a ton of data. How do we simplify this down into a prototype system that we could build to try to show the power of some of these simple interactions?’ ‑‑ saying hi, what happens when you leave, what happens when you get startled, if there’s animatronics around you, how do I feel about my friends?

“We did a huge pass at defining a wide range of that, and then from there, pared down. ‘What’s the critical stuff that we really need for this first minimum viable product, a kind of animatronic that we’d put out in the first pass?’”

From there, the emotional shorthand was fit into as simple a set of gestures as possible to limit the complexity of the creatures. One of the most effective of these is what the team refers to as the ‘squash and stretch’ — the act of pushing its head down into its body to squish or craning the neck to stretch out. This simple action took the place of a lot of emotional cues that humans use like eyebrows or facial muscles.

Once the Vyloo made it into the queue, there were further changes to be made to compensate for the loudness of the environment, the relative attentiveness (or lack thereof) of the guests, and more.

As with every project that makes it out of Imagineering, the primary goal of the Vyloo and the creatures that will come after them is to crank up the delight factor for guests at Disney’s various parks. As a component of that, the robots that Disney is building need to become more interactive and able to be viewed from close up, with convincing interchanges of emotion and communication.

The future of robotics at Disney is heavy with emotional context, autonomy and interactivity. It’s focusing incredibly heavily on the emotional quotient of robots, rather than seeking pure efficiencies. But at the same time it needs to make robots that withstand incredible scrutiny from millions of visitors and are robust enough to operate with near-perfect uptime 14 hours a day, all year long, for years.

It’s this unique blend of disciplines, all driven by an ego-light ‘whatever it takes’ mentality, that makes it one of the most exciting robotics labs in the world. The Vyloo are a relatively low-key addition to the park’s arsenal of attractions, but they telegraph a great number of cool things to come for Disney fans, as well as some broader lessons for the robotics industry about the power of emotion and interactivity in increasing efficiency and improving coexistence with robots.

 


