Blog of the website «TechCrunch»


Main article: Artificial intelligence


Apply to the pitch-off at TC Sessions: Robotics & AI 2020

20:03 | 11 December

Mark your calendars and dust off your public-speaking skills. This year, there’s an exciting new opportunity at TC Sessions: Robotics & AI, which returns to UC Berkeley on March 3, 2020. We’ve added a pitch-off specifically for early-stage startups focused on AI or robotics.

You heard right. In addition to a full day packed with speakers, breakout sessions and Q&As featuring the top names, leading minds and creative makers in robotics and AI, we’re upping the ante. We’ll choose 10 startups to pitch at a private event the night before the show opens. Here’s how it works.

The first step: Apply to the pitch-off by February 1. TechCrunch editors will review all applications and select 10 startups to participate. We’ll notify the founders by February 15 — you’ll have plenty of time to hone your pitch.

You’ll deliver your pitch at a private event, and your audience will consist of TechCrunch editors, main-stage speakers and industry experts. Our panel of VC judges will choose five teams as finalists, and they will pitch the next day on the main stage at TC Sessions: Robotics + AI.

Talk about an unprecedented opportunity. Place your startup in front of the influential movers and shakers of these two world-changing industries — and get video coverage on TechCrunch, too. We expect attendance to meet or exceed last year’s, when 1,500 people attended the show and tens of thousands followed along online.

Oh, and here’s one more pitch-off perk. Each of the 10 startup team finalists will receive two free tickets to attend TC Sessions: Robotics + AI 2020 the next day.

TC Sessions: Robotics & AI 2020 takes place on March 3. Apply to the pitch-off here by February 1. Don’t want to pitch? That’s fine — but don’t miss this epic day-long event dedicated to exploring the latest technology, trends and investment strategies in robotics and AI. Get your early bird ticket here and save $100. We’ll see you in Berkeley!

Is your company interested in sponsoring or exhibiting at TC Sessions: Robotics & AI 2020? Contact our sponsorship sales team by filling out this form.

 



Arcona uses machine learning to compose adaptive soundtracks in real time

19:15 | 11 December

Arcona Music took to the stage at Disrupt Berlin today to showcase its adaptive music service. The local startup uses machine learning to create musical beds capable of adapting to different contexts in real time. The user simply needs to input a handful of parameters, and the service will adjust accordingly.

“Give it a style, an emotion and a musical theme, and you can say, ‘play this,’ and the engine will take that blueprint and realize it,” service co-founder Ryan Groves explained in a conversation with TechCrunch. “If, at any point, the emotion or style changes, it will adapt to that and create this essentially infinite stream of music. You can play a particular song blueprint for as long as is necessary in any dynamic environment.”
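To make the “blueprint” idea concrete, here is a minimal sketch of what a parameter-driven adaptive music engine could look like. Every name in it (Blueprint, AdaptiveEngine, set_emotion) is a hypothetical illustration, not Arcona’s actual API:

```python
from dataclasses import dataclass

# Hypothetical sketch of the "style + emotion + theme" blueprint described
# above. None of these names are Arcona's real API.

@dataclass
class Blueprint:
    style: str    # e.g. "orchestral", "synthwave"
    emotion: str  # e.g. "calm", "tense"
    theme: str    # a short melodic motif, as note names

class AdaptiveEngine:
    def __init__(self, blueprint: Blueprint):
        self.blueprint = blueprint

    def set_emotion(self, emotion: str) -> None:
        # A real engine would generate new material and crossfade to it;
        # here we only record the changed parameter.
        self.blueprint.emotion = emotion

engine = AdaptiveEngine(Blueprint(style="orchestral", emotion="calm", theme="C E G"))
engine.set_emotion("tense")  # the soundtrack would adapt mid-stream
```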

The service is still in its infancy. Its two founders are its only full-time employees, along with a part-time developer. Groves and co-founder Amélie Anglade bootstrapped the scrappy startup, which has yet to seek funding.

 

Groves is a composer and music theorist who formerly worked at the popular AI-based music composition service Ditty. Anglade is a music information retrieval specialist who worked at SoundCloud.

Rhythm gaming is the first clear application for the service. The popular gaming genre is built around a changing soundtrack and could potentially benefit from music that requires minimal pre-programming. Moving forward, the potential for such a service is far broader.

“In the very long term,” Groves said, “we should see this being almost your own personal orchestra, leveraging augmented reality, GPS and all that stuff, and just responding to your environment as you’re listening.”

 



BMW says ‘ja’ to Android Auto

18:52 | 11 December

BMW today announced that it is finally bringing Android Auto to its vehicles, starting in July 2020. With that, Android Auto will join Apple’s CarPlay in the company’s vehicles.

The first live demo of Android Auto in a BMW will happen at CES 2020 next month, and after that, it will become available as an update to drivers in 20 countries with cars that feature BMW OS 7.0. BMW will only support Android Auto over a wireless connection, though, which somewhat limits its compatibility.

Only two years ago, the company said that it wasn’t interested in supporting Android Auto. Dieter May, then the senior VP for digital services and business models, explicitly told me at the time that the company wanted to focus on its first-party apps in order to retain full control over the in-car interface, and that he wasn’t interested in seeing Android Auto in BMWs. May has since left the company, though it’s also worth noting that Android Auto itself has become significantly more polished over the course of the last two years.

“The Google Assistant on Android Auto makes it easy to get directions, keep in touch and stay productive. Many of our customers have pointed out the importance to them of having Android Auto inside a BMW for using a number of familiar Android smartphone features safely without being distracted from the road, in addition to BMW’s own functions and services,” said Peter Henrich, Senior Vice President Product Management BMW, in today’s announcement.

With this, BMW will also finally offer support for the Google Assistant, after early bets on Alexa, Cortana and the BMW Assistant (which itself is built on top of Microsoft’s AI stack). As the company has long said, it wants to offer support for all popular digital assistants and for the Google Assistant, the only way to make that work in a car is, at least for the time being, Android Auto.

In BMWs, Android Auto will see integrations into the car’s digital cockpit, in addition to BMW’s Info Display and the heads-up display (for directions). That’s a pretty deep integration, which goes beyond what most car manufacturers feature today.

“We are excited to work with BMW to bring wireless Android Auto to their customers worldwide next year,” said Patrick Brady, Vice President of Engineering at Google. “The seamless connection from Android smartphones to BMW vehicles allows customers to hit the road faster while maintaining access to all of their favorite apps and services in a safer experience.”

 



Arthur announces $3.3M seed to monitor machine learning model performance

18:11 | 11 December

Machine learning is a complex process. You build a model, test it in laboratory conditions, then put it out in the world. After that, how do you monitor how well it’s tracking what you designed it to do? Arthur wants to help, and today it emerged from stealth with a new platform to help you monitor machine learning models in production.

The company also announced a $3.3 million seed round, which closed in August.

Arthur CEO and co-founder Adam Wenchel says that Arthur is analogous to a performance monitoring platform like New Relic or DataDog, but instead of monitoring your systems, it’s tracking the performance of your machine learning models.

“We are an AI monitoring and explainability company, which means when you put your models in production, we let you monitor them to know that they’re not going off the rails, that you can explain what they’re doing, that they’re not performing badly and are not being totally biased — all of the ways models can go wrong,” Wenchel explained.

Data scientists build machine learning models and test them in the lab, but as Wenchel says, when that model leaves the controlled environment of the lab, lots can go wrong, and it’s hard to keep track of that. “Models always perform well in the lab, but then you put them out in the real world and there is often a drop-off in performance — in fact, almost always. So being able to measure and monitor that is a capability people really need,” he said.
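As a rough illustration of the kind of production check a monitoring platform automates, here is a generic sketch that flags a model whose live accuracy has drifted below its lab baseline. This is not Arthur’s product; the function name and the 5% tolerance are assumptions for the example:

```python
import numpy as np

def accuracy_drift(lab_accuracy: float,
                   live_preds: np.ndarray,
                   live_labels: np.ndarray,
                   tolerance: float = 0.05) -> bool:
    """Flag a model whose live accuracy falls below its lab baseline.

    Generic sketch: real monitoring platforms track many more signals
    (data drift, bias metrics, explainability), not just accuracy.
    """
    live_accuracy = float((live_preds == live_labels).mean())
    return (lab_accuracy - live_accuracy) > tolerance

# Example: a model that scored 0.92 in the lab but only ~0.80 in production
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
preds = np.where(rng.random(1000) < 0.80, labels, 1 - labels)
print(accuracy_drift(0.92, preds, labels))  # True -> raise an alert
```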

Interestingly enough, AWS announced a new model monitoring tool last week as part of SageMaker Studio. IBM also announced a similar tool for models built on the Watson platform earlier this year, but Wenchel says the involvement of the big guys could work to his company’s advantage since his product is platform-agnostic. “Having a neutral third party for your monitoring that works equally well across stacks is going to be pretty valuable,” he said.

As for the funding, it was co-led by Work-Bench and Index Ventures with participation from Hunter Walk at Homebrew, Jerry Yang at AME Ventures and others.

Jonathan Lehr, a general partner at Work-Bench, sees a company with a lot of potential. “We regularly speak with ML executives from Fortune 1000 companies and one of their biggest concerns as they become more data-driven is model behavior in production. The Arthur platform is by far the best solution we’ve seen for AI monitoring and transparency…” he said.

The company, which is based in New York City, currently has 10 people. It launched in 2018 and has been heads-down working on the product since. Today marks the public release of the product.

 



Scaled Robotics keeps an autonomous eye on busy construction sites

16:16 | 11 December

Buildings under construction are a maze of half-completed structures, gantries, stacked materials, and busy workers — tracking what’s going on can be a nightmare. Scaled Robotics has designed a robot that can navigate this chaos and produce 3D progress maps in minutes, precise enough to detect that a beam is just a centimeter or two off.

Bottlenecks in construction aren’t limited to manpower and materials. Understanding exactly what’s been done and what needs doing is a critical part of completing a project in good time, but it’s the kind of painstaking work that requires special training and equipment. Or, as Scaled Robotics showed today at TC Disrupt Berlin 2019, specially trained equipment.

The team has created a robot that trundles autonomously around construction sites, using a 360-degree camera and custom lidar system to systematically document its surroundings. An object recognition system allows it to tell the difference between a constructed wall and a piece of sheetrock leaning against it, between a staircase and temporary stairs put up for electrical work, and so on.

By comparing this to a source CAD model of the building, it can paint a very precise picture of the progress being made. They’ve built a special computer vision model that’s suited to the task of sorting obstructions from the constructions and identifying everything in between.
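A toy sketch of that comparison step: given a scanned point cloud and reference points sampled from the CAD model in the same coordinate frame, nearest-neighbor distances expose anything built out of tolerance. This is an illustration of the general technique, not Scaled Robotics’ actual pipeline:

```python
import numpy as np
from scipy.spatial import cKDTree

def deviation_report(scan: np.ndarray, cad: np.ndarray, tol: float = 0.02):
    """Return scanned points further than `tol` meters from the CAD model.

    scan, cad: (N, 3) arrays of points in a shared site coordinate frame.
    Toy illustration of scan-vs-model comparison, not the real pipeline.
    """
    tree = cKDTree(cad)         # fast nearest-reference-point lookup
    dist, _ = tree.query(scan)  # distance from each scanned point
    return scan[dist > tol], dist.max()

# A wall scanned 3 cm off its planned position shows up immediately
cad = np.array([[x, 0.0, z] for x in np.linspace(0, 5, 50)
                            for z in np.linspace(0, 3, 30)])
scan = cad + np.array([0.0, 0.03, 0.0])
outliers, worst = deviation_report(scan, cad)
print(len(outliers), f"{worst:.3f} m")  # 1500 points flagged, 0.030 m
```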

All this information goes into a software backend where the supervisors can check things like which pieces are in place on which floor, whether they have been placed within the required tolerances, or if there are safety issues like too much detritus on the ground in work areas. But it’s not all about making the suits happy.

“It’s not just about getting management to buy in, you need the guy who’s going to use it every day to buy in. So we’ve made a conscious effort to fit seamlessly into what they do, and they love that aspect of it,” explained co-founder Bharath Sankaran. “You don’t need a computer scientist in the room. Issues get flagged in the morning, and that’s a coffee conversation – here’s the problem, bam, let’s go take a look at it.”


The robot can make its rounds faster than a couple of humans with measuring tapes and clipboards, certainly, but also faster than someone equipped with a stationary laser ranging device that they carry from room to room. An advantage of simultaneous localization and mapping (SLAM) tech is that it measures from multiple points of view over time, building a highly accurate and rich model of the environment.

The data is assembled automatically but the robot can be either autonomous or manually controlled — in developing it, they’ve brought the weight down from about 70 kilograms to 20, meaning it can be carried easily from floor to floor if necessary (or take the elevator); and simple joystick controls mean anyone can drive it.

A trio of pilot projects concluded this year and resulted in paid pilots for next year, which is of course a promising development.

Interestingly, the team found that construction companies were often working from outdated information, more or less assuming that everything done in the meantime had been done correctly.

“Right now decisions are being made on data that’s maybe a month old,” said co-founder Stuart Maggs. “We can probably cover 2000 square meters in 40 minutes. One of the first times we took data on a site, they were completely convinced everything they’d done was perfect. We put the data in front of them and they found out there was a structural wall just missing, and it had been missing for 4 weeks.”

The company uses a service-based business model, providing the robot and software on a monthly basis, with prices rising with square footage. That saves the construction company the trouble of actually buying, certifying, and maintaining an unfamiliar new robotic system.


But the founders emphasized that tracking progress is only the first hint of what can be done with this kind of accurate, timely data.

“The big picture version of where this is going is that this is the visual wiki for everything related to your construction site. You just click and you see everything that’s relevant,” said Sankaran. “Then you can provide other ancillary products, like health and safety stuff, where is storage space on site, predicting whether the project is on schedule.”

“At the moment, what you’re seeing is about looking at one moment in time and diagnosing it as quickly as possible,” said Maggs. “But it will also be about tracking that over time: We can find patterns within that construction process. That data feeds that back into their processes, so it goes from a reactive workflow to a proactive one.”

“As the product evolves you start unwrapping, like an onion, the different layers of functionality,” said Sankaran.

The company has come this far on $1 million of seed funding, but is hot on the track of more. Perhaps more importantly, its partnerships with construction giant PERI and Autodesk, which has helped push digital construction tools, may make it a familiar presence at building sites around the world soon.

 



Google Assistant gets a customized alarm, based on weather and time

20:00 | 10 December

Alarm clocks were one of the most obvious applications of the smart screen from the moment the form factor was introduced. Devices like Lenovo’s Smart Clock and the Amazon Echo Show 5 have demonstrated some interesting takes on the bedside display, and Google has worked with the former to refine the experience.

This morning, the company introduced a handful of new features. “Impromptu” is an interesting addition to the portfolio that constructs a customized alarm ringtone based on a series of factors, including weather and time of day.

Here’s what a 50-degree, early-morning wake-up sounds like:

https://techcrunch.com/wp-content/uploads/2019/12/keyword_blog_example_alarm.mp3

Not a bad thing to wake up to. A little Gershwin-esque, perhaps. 

Per a blog post that went up this morning, the alarm ringtone is based on the company’s open-source project, Magenta. Google AI describes it thusly:

Magenta was started by researchers and engineers from the Google Brain team, but many others have contributed significantly to the project. We develop new deep learning and reinforcement learning algorithms for generating songs, images, drawings, and other materials. But it’s also an exploration in building smart tools and interfaces that allow artists and musicians to extend their processes using these models. We use TensorFlow and release our models and tools in open source on our GitHub.
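For a sense of how a context-conditioned alarm could work, here is a hand-wavy sketch that maps weather and time of day to musical parameters. The mapping is entirely invented; Google hasn’t published how Impromptu parameterizes its Magenta-based generator:

```python
# Hypothetical illustration only: Google has not published how Impromptu
# conditions its Magenta-based generator on context.

def alarm_parameters(temperature_f: float, hour: int) -> dict:
    return {
        # colder mornings -> slower, gentler material
        "tempo_bpm": 70 if temperature_f < 55 else 100,
        "mode": "major" if 6 <= hour <= 10 else "minor",
        "instrument": "piano",
    }

print(alarm_parameters(temperature_f=50, hour=7))
# {'tempo_bpm': 70, 'mode': 'major', 'instrument': 'piano'}
```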

The new feature rolls out today.

 



AWS is sick of waiting for your company to move to the cloud

22:29 | 9 December

AWS held its annual re:Invent customer conference last week in Las Vegas. Being Vegas, there was pageantry aplenty, of course, but this year’s show felt a bit different from years past, lacking the onslaught of major announcements we are used to getting at this event.

Perhaps the pace of innovation could finally be slowing, but the company still had a few messages for attendees. For starters, AWS CEO Andy Jassy made it clear he’s tired of the slow pace of change inside the enterprise. In Jassy’s view, the time for incremental change is over, and it’s time to start moving to the cloud faster.

AWS also placed a couple of big bets this year in Vegas to help make that happen. The first involves AI and machine learning. The second involves moving computing to the edge, closer to the business than the traditional cloud allows.

The question is what is driving these strategies? AWS had a clear head start in the cloud, and owns a third of the market, more than double its closest rival, Microsoft. The good news is that the market is still growing and will continue to do so for the foreseeable future. The bad news for AWS is that it can probably see Google and Microsoft beginning to resonate with more customers, and it’s looking for new ways to get a piece of the untapped part of the market to choose AWS.

Move faster, dammit

The worldwide infrastructure business surpassed $100 billion this year, yet we have only just scratched the surface of this market. Surely, digital-first companies, those born in the cloud, understand all of the advantages of working there, but large enterprises are still moving surprisingly slowly.

Jassy indicated more than once last week that he’s had enough of that. He wants to see companies transform more quickly, and in his view it’s not a technical problem, it’s a lack of leadership. If you want to get to the cloud faster, you need executive buy-in pushing it.

Jassy outlined four steps in his keynote to help companies move faster and get more workloads in the cloud. He believes that doing so will not only continue to enrich his own company, it will also help customers avoid disruptive forces in their markets.

For starters, he says that it’s imperative to get the senior team aligned behind a change. “Inertia is a powerful thing,” Jassy told the audience at his keynote on Tuesday. He’s right of course. There are forces inside every company designed with good reason to protect the organization from massive systemic changes, but these forces — whether legal, compliance, security or HR — can hold back a company when meaningful change is needed.

He said that a fuller shift to the cloud requires ambitious planning. “It’s easy to go a long time dipping your toe in the water if you don’t have an aggressive goal,” he emphasized. To move faster, you also need staff that can help you get there — and that requires training.

Finally, you need a thoughtful, methodical migration plan. Most companies start with the stuff that’s easy to move to the cloud, then begin to migrate workloads that require some adjustments. They continue along this path all the way to things you might not choose to move at all.

Jassy knows that the faster companies get on board and move to the cloud, the better off his company is going to be, assuming it can capture the lion’s share of those workloads. The trouble is that after you move that first easy batch, getting to the cloud becomes increasingly challenging, and that’s one of the big reasons why companies have moved slower than Jassy would like.

The power of machine learning to drive adoption

One way to motivate folks to move faster is to help them understand the power of machine learning. AWS made a slew of announcements around machine learning designed to give customers a more comprehensive Amazon solution. This included SageMaker Studio, a machine learning development environment along with notebook, debugging and monitoring tools. Finally, the company announced AutoPilot, a tool that gives more insight into automatically generated machine learning models, another way to go faster.

The company also announced a new connected keyboard called DeepComposer, designed to teach developers about machine learning in a fun way. It joins DeepLens and DeepRacer, two tools released at previous re:Invents. All of this is designed for developers to help them get comfortable with machine learning.

It wasn’t a coincidence the company also announced a significant partnership with the NFL to use machine learning to help make players safer. It’s an excellent use case. The NFL has tons of data on its players, and it has decades of film. If it can use that data as fuel for machine learning-driven solutions to help prevent injuries, it could end up being a catalyst for meaningful change driven by machine learning in the cloud.

Machine learning provides another reason to move to the cloud. This shows that the cloud isn’t just about agility and speed, it’s also about innovation and transformation. If you can take advantage of machine learning to transform your business, that’s a powerful incentive to make the move.

Moving to the edge

Finally, AWS recognizes that computing in the cloud can only get you so far. In spite of the leaps it has made architecturally, there is still a latency issue that will be unacceptable for some workloads. That’s why it was a big deal that the company announced a couple of edge computing solutions last week, including the general availability of Outposts, its private cloud in a box, along with a new concept called Local Zones.

The company announced Outposts last year as a way to bring the cloud on prem. It is supposed to behave exactly the same way as traditional cloud resources, but AWS installs, manages and maintains a physical box in your data center. It’s the ultimate in edge computing, bringing the compute power right into your building.

For those who don’t want to go that far, AWS also introduced Local Zones, starting with one in LA, where the cloud infrastructure resources are close by instead of in your building. The idea is the same — to reduce the physical distance between you and your compute resources and reduce latency.

All of this is designed to put the cloud in reach of more customers, to help them move to the cloud faster. Sure, it’s self-serving, but 11 years after I first heard the term cloud computing, maybe it really is time to give companies a harder push.

 



Why AWS is selling a MIDI keyboard to teach machine learning

03:00 | 6 December

Earlier this week, AWS launched DeepComposer, a set of web-based tools for learning about using AI to make music and a $99 MIDI keyboard for inputting melodies. That launch created a fair bit of confusion, though, so we sat down with Mike Miller, the director of AWS’s AI Devices group, to talk about where DeepComposer fits into the company’s lineup of AI devices, which includes the DeepLens camera and the DeepRacer AI car, both of which are meant to teach developers about specific AI concepts, too.

The first thing that’s important to remember here is that DeepComposer is a learning tool. It’s not meant for musicians — it’s meant for engineers who want to learn about generative AI. But AWS didn’t help itself by calling this “the world’s first machine learning-enabled musical keyboard for developers.” The keyboard itself, after all, is just a standard, basic MIDI keyboard. There’s no intelligence in it. All of the AI work is happening in the cloud.

“The goal here is to teach generative AI as one of the most interesting trends in machine learning in the last 10 years,” Miller told us. “We specifically chose GANs, generative adversarial networks, where there are two networks that are trained together. The reason that’s interesting from our perspective for developers is that it’s very complicated and a lot of the things that developers learn about training machine learning models get jumbled up when you’re training two together.”
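For readers unfamiliar with the setup Miller describes, here is a textbook-style GAN training step in PyTorch, with a generator and a discriminator trained against each other. It is a generic sketch with toy shapes, unrelated to DeepComposer’s actual models:

```python
import torch
import torch.nn as nn

# Generic GAN sketch: two networks trained together, as described above.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))  # generator
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))   # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real = torch.randn(8, 32)    # stand-in for a batch of real training data
noise = torch.randn(8, 16)

# Discriminator step: learn to tell real data from generated data
fake = G(noise).detach()
d_loss = (loss_fn(D(real), torch.ones(8, 1)) +
          loss_fn(D(fake), torch.zeros(8, 1)))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: learn to fool the discriminator
g_loss = loss_fn(D(G(noise)), torch.ones(8, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```

This is exactly where the “jumbled up” part comes from: each step optimizes one network against a moving target produced by the other.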

With DeepComposer, the developer steps through a process of learning the basics. With the keyboard, you can input a basic melody — but if you don’t have one, you can also use an on-screen keyboard to get started, or use a few default melodies (think Ode to Joy). From a practical perspective, the system then goes out and generates a background track for that melody based on a musical style you choose. To keep things simple, the system ignores some values from the keyboard, though, including velocity (just in case you needed more evidence that this is not a keyboard for musicians). But more importantly, developers can then also dig into the actual models the system generated — and even export them to a Jupyter notebook.

For the purpose of DeepComposer, the MIDI data is just another data source to teach developers about GANs and SageMaker, AWS’s machine learning platform that powers DeepComposer behind the scenes.

“The advantage of using MIDI files and basing our training on MIDI is that the representation of the data that goes into the training is in a format that is actually the same representation of data as in an image, for example,” explained Miller. “And so it’s actually very applicable and analogous, so as a developer looks at that SageMaker notebook and understands the data formatting and how we pass the data in, that’s applicable to other domains as well.”
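That image-like representation is easy to see with a standard piano-roll encoding, shown here with the pretty_midi library. This is a common way to illustrate the idea, not AWS’s actual preprocessing:

```python
import pretty_midi

# A MIDI file rendered as a piano roll: a 2D array of 128 pitches by
# time steps, structurally the same kind of tensor as a grayscale image.
pm = pretty_midi.PrettyMIDI("melody.mid")  # path to any MIDI file
roll = pm.get_piano_roll(fs=16)            # 16 time steps per second
print(roll.shape)                          # e.g. (128, 256)
```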

That’s why the tools expose all of the raw data, too, including loss functions, analytics and the results of the various models as they try to get to an acceptable result, etc. Because this is obviously a tool for generating music, it’ll also expose some of the data about the music, like pitch and empty bars.

“We believe that as developers get into the SageMaker models, they’ll see that, hey, I can apply this to other domains and I can take this and make it my own and see what I can generate,” said Miller.

Having heard the results so far, I think it’s safe to say that DeepComposer won’t produce any hits soon. It seems pretty good at creating a drum track, but bass lines seem a bit erratic. Still, it’s a cool demo of this machine learning technique, though my guess is that its success will be a bit more limited than that of DeepRacer, a concept that is easier for most people to grasp; the majority of developers will look at DeepComposer, think they need to be able to play an instrument to use it, and move on.

Additional reporting by Ron Miller.

 



New tweet generator mocks venture capitalists

23:50 | 5 December

“Airbnb’s unit economics are quite legendary — the S-1 is going to be MOST disrupted FASTEST in the next 3 YEARS? Caps for effect.”

Who tweeted that? Initialized Capital’s Garry Tan? Homebrew’s Hunter Walk? Y Combinator co-founder Paul Graham? Or perhaps one of the dozens of other venture capitalists active on Twitter.

No, it was Parrot.VC, a new Twitter account and website dedicated to making light of VC Twitter. The creator of the new tool, which first landed on Twitter in late November, fed 65,000 tweets written by some 50 venture capitalists to a machine learning bot. The result is an automated tweet generator ready to spew somewhat nonsensical (or entirely nonsensical) <280-character statements.
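Feeding a corpus of tweets to a bot that spits out new ones is in the spirit of a simple Markov chain over words: record which words follow which in the corpus, then walk that table randomly. A toy sketch of the general technique follows; Parrot.VC’s actual model hasn’t been published:

```python
import random
from collections import defaultdict

def build_chain(tweets):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
    return chain

def generate(chain, start, max_words=30):
    out = [start]
    while len(out) < max_words and out[-1] in chain:
        out.append(random.choice(chain[out[-1]]))
    return " ".join(out)[:280]  # stay under the tweet limit

corpus = ["the S-1 is going to be disrupted",
          "the next 3 years are going to be legendary"]
print(generate(build_chain(corpus), "the"))
```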


According to Hacker News, where the creator shared information about their project, the bot uses predictive text to generate “amazing, new startup advice,” adding “Gavin Belson – hit me up, this is the perfect acquisition for Hooli,” referencing the popular satirical TV show Silicon Valley.

This isn’t the first time someone has leveraged artificial intelligence to make fun of the tech community. One of my personal favorites, BodegaBot, inspired by the Bodega fiasco of late 2017, satirizes Silicon Valley’s unhinged desire to replace domestic service with technology.

 



AI-enabled assistant robot returning to the Space Station with improved emotional intelligence

21:32 | 5 December

The Crew Interactive Mobile Companion (or CIMON for short) recorded a number of firsts on its initial mission to the International Space Station, which took place last November, including becoming the first ever autonomous free-floating robot to operate aboard the station, and the first ever smart astronaut assistant. But CIMON is much more than an Alexa for space, and CIMON-2, which launched aboard today’s SpaceX ISS resupply mission, will demonstrate a number of ways the astronaut support robot can help those working in space – from both a practical and an emotional angle.

CIMON is the joint product of a collaboration between IBM, the German Aerospace Center (DLR) and Airbus, and its aim is to design and develop a robotic assistant for use in space that can serve a number of functions, including things as mundane as helping to retrieve information and keep track of tasks astronauts are doing on board the station, and as wild as potentially helping to alleviate or curb the effects of social issues that might arise from settings in which a small team works in close quarters over a long period.

“The goal of mission one was really to commission CIMON and to really understand if he can actually work with the astronauts – if there are experiments that he can support,” explained IBM’s Matthias Biniok, project manager on the Watson AI aspects of the mission. “So that was very successful – the astronauts really liked working with CIMON.”

“Now, we are looking at the next version: CIMON-2,” Biniok continued. “That has more capability. For example, it has better software and better hardware that has been improved based on the outcomes that we had with mission one – and we have also some new features. So for example, on the artificial intelligence side, we have something called emotional intelligence, based on our IBM Watson Tone Analyzer, with which we’re trying to understand and analyze the emotions during a conversation between CIMON and the astronauts to see how they’re feeling – if they’re feeling joyful, if something makes them angry, and so on.”
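For context, IBM’s Tone Analyzer is callable through the Watson Python SDK roughly as below. This is a minimal sketch of the public cloud API, assuming an IBM Cloud API key and service URL; it says nothing about how CIMON integrates the service on board:

```python
from ibm_watson import ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Minimal sketch of IBM's public Tone Analyzer API -- not CIMON's
# onboard integration. Substitute your own API key and service URL.
analyzer = ToneAnalyzerV3(
    version="2017-09-21",
    authenticator=IAMAuthenticator("YOUR_IBM_CLOUD_API_KEY"),
)
analyzer.set_service_url(
    "https://api.us-south.tone-analyzer.watson.cloud.ibm.com"
)

result = analyzer.tone(
    tone_input={"text": "Why is this experiment failing again?"},
    content_type="application/json",
).get_result()

for tone in result["document_tone"]["tones"]:
    print(tone["tone_name"], tone["score"])
```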

That, Biniok says, could help evolve CIMON into a robotic countermeasure for something called ‘groupthink,’ a phenomenon wherein a group of people who work closely together gradually have all their opinions migrate towards consensus or similarity. A CIMON with proper emotional intelligence could detect when this might be occurring, and react by either providing an objective, neutral view – or even potentially taking on a contrarian or ‘Devil’s advocate’ perspective, Biniok says.

That’s a future aim, but in the near term CIMON can have a lot of practical benefit simply by freeing up time spent on certain tasks by astronauts themselves.

“Time is super expensive on the International Space Station,” Biniok said. “And it’s very limited, so if we could save some crew time with planning, that would be super helpful to the astronauts. CIMON can also support experiments – imagine that you’re an astronaut up there, you have complex research experiments going on, and there’s a huge amount of documentation for that. And if you are missing some information, or you have a question about it, then you have to look it up in this documentation, and that takes time. Instead of doing that, you could actually just ask CIMON – for example, ‘What’s the next step, CIMON?’ or ‘Why am I using Teflon and not any other materials?’”

CIMON can also act as a mobile documentarian, using its onboard video camera to record experiments and other activities on the Space Station. It’s able to do so autonomously, too, Biniok notes, so that an astronaut can theoretically ask it to navigate to a specific location, take a photo, then return and show that photo to the astronaut.

This time around, CIMON will be looking to stay on the ISS for a much longer span than version one; up to three years, in fact. Biniok had nothing specific to share on plans beyond that, but did say that long-term, the plan is absolutely to extend CIMON’s mission to include the Moon, Mars and beyond.

 


