
People

John Smith

John Smith, 47

Joined: 28 January 2014

Interests: No data

Jonnathan Coleman

Jonnathan Coleman, 31

Joined: 18 June 2014

About myself: You may say I'm a dreamer

Interests: Snowboarding, Cycling, Beer

Andrey II

Andrey II, 39

Joined: 08 January 2014

Interests: No data

David

David

Joined: 05 August 2014

Interests: No data

David Markham

David Markham, 63

Joined: 13 November 2014

Interests: No data

Michelle Li

Michelle Li, 39

Joined: 13 August 2014

Interests: No data

Max Almenas

Max Almenas, 51

Joined: 10 August 2014

Interests: No data

29Jan

29Jan, 30

Joined: 29 January 2014

Interests: No data

s82 s82

s82 s82, 25

Joined: 16 April 2014

Interests: No data

Wicca

Wicca, 35

Joined: 18 June 2014

Interests: No data

Phebe Paul

Phebe Paul, 25

Joined: 08 September 2014

Interests: No data

Артем 007

Артем 007, 40

Joined: 29 January 2014

About myself: Yes indeed!

Interests: Norway and Iceland

Alexey Geno

Alexey Geno, 7

Joined: 25 June 2015

About myself: Hi

Interests: Interest1daasdfasf

Verg Matthews

Verg Matthews, 66

Joined: 25 June 2015

Interests: No data

CHEMICALS 4 WORLD DEVEN DHELARIYA

CHEMICALS 4 WORLD…, 32

Joined: 22 December 2014

Interests: No data



Main article: Artificial intelligence


The future of AI relies on a code of ethics

01:00 | 22 June

Facebook has recently come under intense scrutiny for sharing the data of millions of users without their knowledge. We’ve also learned that Facebook is using AI to predict users’ future behavior and selling that data to advertisers. Not surprisingly, Facebook’s business model and how it handles its users’ data have sparked a long-awaited conversation — and controversy — about data privacy. These revelations will undoubtedly force the company to evolve its data-sharing and protection strategy and policy.

More importantly, it’s a call to action: We need a code of ethics.

As the AI revolution continues to accelerate, new technology is being developed to solve key problems faced by consumers, businesses and the world at large. It is the next stage of evolution for countless industries, from security and enterprise to retail and healthcare. I believe that in the near future, almost all new technology will incorporate some form of AI or machine learning, enabling humans to interact with data and devices in ways we can’t yet imagine.

Moving forward, our reliance on AI will deepen, inevitably causing many ethical issues to arise as humans turn their cars, homes and businesses over to algorithms. These issues and their consequences will not discriminate, and the impact will be far-reaching — affecting everyone, including public citizens, small businesses utilizing AI and entrepreneurs developing the latest tech. No one will be left untouched. I am aware of a few existing initiatives focused on more research, best practices and collaboration; however, it’s clear that there’s much more work to be done.


Researchers, entrepreneurs and global organizations must lay the groundwork for a code of AI ethics to guide us through these upcoming breakthroughs and inevitable dilemmas. I should clarify that this won’t be a single code of ethics — each company and industry will have to come up with their own unique guidelines.

For the future of AI to become as responsible as possible, we’ll need to answer some tough ethical questions. I do not have the answers to these questions right now, but my goal is to bring more awareness to this topic, along with simple common sense, and work toward a solution. Here are some of the issues related to AI and automation that keep me up at night.

The ethics of driverless cars

With the invention of the car came the invention of the car accident. Similarly, an AI-augmented car will bring with it ethical and business implications that we must be prepared to face. Researchers and programmers will have to ask themselves what safety and mobility trade-offs are inherent in autonomous vehicles.

Ethical challenges will unfold as algorithms are developed that shape how humans and autonomous vehicles interact. Should these algorithms be transparent? For example, will a car rear-end an abruptly stopped vehicle, or swerve and hit a dog on the side of the street? Key decisions will be made in a split second by a fusion processor running AI and drawing on a car’s vast array of sensors. Will entrepreneurs and small businesses be kept in the dark while these algorithms dominate the market?

Driverless cars will also transform the way consumers behave. Companies will need to anticipate this behavior and offer solutions to fill those gaps. Now is the time to start predicting how this technology will change consumer needs and what products and services can be created to meet them.

The battle against fake news

As our news media and social platforms become increasingly AI-driven, businesses from startups to global powerhouses must be aware of their ethical implications and choose wisely when working this technology into their products.

We’re already seeing AI being used to create and defend against political propaganda and fake news. Meanwhile, dark money has been used for social media ads that can target incredibly specific populations in an attempt to influence public opinion or even political elections. What happens when we can no longer trust our news sources and social media feeds?

AI will continue to give algorithms significant influence over what we see and read in our daily lives. We have to ask ourselves how much trust we can put in the systems that we’re creating and how much power we can give them. I think it’s up to companies like Facebook, Google and Twitter — and future platforms — to put safeguards in place to prevent them from being misused. We need the equivalent of Underwriters Laboratories (UL) for news!

The future of the automated workplace

Companies large and small must begin preparing for the future of work in the age of automation. Automation will replace some labor and enhance other jobs. Many workers will be empowered with these new tools, enabling them to work more quickly and efficiently. However, many companies will have to account for the jobs lost to automation.

Businesses should begin thinking about what labor may soon be automated and how their workforce can be utilized in other areas. A large portion of the workforce will have to be retrained for new jobs created by automation, in what is becoming commonly referred to as collaborative automation. The challenge will come in deciding who should retrain and redistribute employees whose jobs have been automated or augmented: the government, employers or the automation companies? In the end, these sectors will need to work together as automation changes the landscape of work.


It’s true that AI is the next stage of tech evolution, and that it’s everywhere. It has become portable, accessible and economical. We have now, finally, reached the AI tipping point. But that point is on a precarious edge, see-sawing somewhere between an AI dreamland and an AI nightmare.

In order to surpass the AI hype and take advantage of its transformative powers, it’s essential that we get AI right, starting with the ethics. As entrepreneurs rush to develop the latest AI tech or use it to solve key business problems, each has a responsibility to consider the ethics of this technology. Researchers, governments and businesses must cooperatively develop ethical guidelines that help to ensure a responsible use of AI to the benefit of all.

From driverless cars to media platforms to the workplace, AI is going to have a significant impact on how we live our lives. But as AI thought leaders and experts, we shouldn’t just deliver the technology — we need to closely monitor it and ask the right questions as the industry evolves.

It has never been a more exciting time to be an entrepreneur in the rise of AI, but there’s a lot of work to be done now and in the future to ensure we’re using the technology responsibly.

 



Species-identifying AI gets a boost from images snapped by citizen naturalists

22:22 | 21 June

Someday we’ll have an app that you can point at a weird bug or unfamiliar fern and have it spit out the genus and species. But right now computer vision systems just aren’t up to the task. To help things along, researchers have assembled hundreds of thousands of images taken by regular folks of critters in real life situations — and by studying these, our AI helpers may be able to get a handle on biodiversity.

Many computer vision algorithms have been trained on one of several large sets of images, which may have everything from people to household objects to fruits and vegetables in them. That’s great for learning a little about a lot of things, but what if you want to go deep on a specific subject or type of image? You need a special set of lots of that kind of image.

For some specialties, we have that already: FaceNet, for instance, is the standard set for learning how to recognize or replicate faces. But while computers may have trouble recognizing faces, we rarely do — while on the other hand, I can never remember the name of the birds that land on my feeder in the spring.

Fortunately, I’m not the only one with this problem, and for years the community of the iNaturalist app has been collecting pictures of common and uncommon animals for identification. And it turns out that these images are the perfect way to teach a system how to recognize plants and animals in the wild.

Could you tell the difference?

You might think that a computer could learn all it needs to from biology textbooks, field guides, and National Geographic. But when you or I take a picture of a sea lion, it looks a lot different from a professional shot: the background is different, the angle isn’t perfect, the focus is probably off, and there may even be other animals in the shot. Even a good computer vision algorithm might not see much in common between the two.

The photos taken through the iNaturalist app, however, are all of the amateur type — yet they have also been validated and identified by professionals who, far better than any computer, can recognize a species even when it’s occluded, poorly lit, or blurry.

The researchers, from Caltech, Google, Cornell, and iNaturalist itself, put together a limited subset of the more than 1.6 million images in the app’s databases, presented this week at CVPR in Salt Lake City. They decided that in order for the set to be robust, it should have lots of different angles and situations, so they searched for species that have had at least 20 different people spot them.

The resulting set of images (PDF) still has over 859,000 pictures of over 5,000 species. These they had people annotate by drawing boxes around the critter in the picture, so the computer would know what to pay attention to. A set of images was set aside for training the system, another set for testing it.
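
To make the construction recipe concrete, here is a minimal sketch of the selection and split steps in Python. It is not the researchers’ pipeline: the record format, the per-species 20-observer cutoff and the 80/20 split are assumptions made purely for illustration.

```python
import random
from collections import defaultdict

def build_dataset(observations, min_observers=20, test_fraction=0.2, seed=0):
    """observations: list of dicts like
    {"image": "img_001.jpg", "species": "Anthopleura elegantissima", "observer": "user42"}.
    Returns (train, test) lists restricted to well-observed species."""
    observers_per_species = defaultdict(set)
    for obs in observations:
        observers_per_species[obs["species"]].add(obs["observer"])

    # Keep only species spotted by enough different people (the 20-observer criterion).
    keep = {s for s, ids in observers_per_species.items() if len(ids) >= min_observers}
    kept = [o for o in observations if o["species"] in keep]

    # Hold out a fraction of each species' images for evaluation.
    rng = random.Random(seed)
    by_species = defaultdict(list)
    for o in kept:
        by_species[o["species"]].append(o)

    train, test = [], []
    for species, items in by_species.items():
        rng.shuffle(items)
        cut = max(1, int(len(items) * test_fraction))
        test.extend(items[:cut])
        train.extend(items[cut:])
    return train, test
```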

Examples of bounding boxes being put on images.

Ironically, they can tell it’s a good set because existing image recognition engines perform so poorly on it, not even reaching 70 percent first-guess accuracy. The very qualities that make the images themselves so amateurish and difficult to parse make them extremely valuable as raw data; these pictures haven’t been sanitized or set up to make it any easier for the algorithms to sort through.

Even the systems created by the researchers with the iNat2017 set didn’t fare so well. But that’s okay — finding where there’s room to improve is part of defining the problem space.

The set is expanding, as others like it do, and the researchers note that the number of species with 20 independent observations has more than doubled since they started working on the dataset. That means iNat2018, already under development, will be much larger and will likely lead to more robust recognition systems.

The team says they’re working on adding more attributes to the set so that a system will be able to report not just species, but sex, life stage, habitat notes, and other metadata. And if it fails to nail down the species, it could in the future at least make a guess at the genus or whatever taxonomic rank it’s confident about — e.g. it may not be able to tell if it’s Anthopleura elegantissima or Anthopleura xanthogrammica, but it’s definitely an anemone.

This is just one of many parallel efforts to improve the state of computer vision in natural environments; you can learn more about the ongoing collection and competition that leads to the iNat datasets here, and other more class-specific challenges are listed here.

 



Microsoft is buying an AI startup, Bonsai

22:40 | 20 June

If all of the big tech co’s agree on one thing at the moment, it’s that artificial intelligence and machine learning point the way forward for their businesses. As a matter of fact, Microsoft is about to acquire Bonsai, a small Berkeley-based startup it hopes to make the centerpiece of its AI efforts.

The company specializes in reinforcement learning, a trial-and-error approach to teaching a system within the confines of a simulation. That learning can then be used to train autonomous systems to complete specific tasks. Microsoft says the acquisition will serve to forward the kind of research the company has been pursuing in the field by leveraging its Azure cloud platform.
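
As a rough illustration of what trial-and-error learning inside a simulation looks like, here is a minimal tabular Q-learning loop over a toy simulator. The environment, reward and hyperparameters are invented for the example and are unrelated to Bonsai’s actual platform.

```python
import random

# Toy simulator: the agent nudges a value up or down and is rewarded for hitting a target of 5.
def step(state, action):
    next_state = max(0, min(9, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == 5 else -0.1
    return next_state, reward, next_state == 5

q = [[0.0, 0.0] for _ in range(10)]        # Q-values: 10 states x 2 actions
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(2000):
    state = random.randrange(10)           # start each trial somewhere in the simulation
    for _ in range(50):                    # cap the episode length
        # Trial and error: usually exploit the current estimate, occasionally explore.
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = max((0, 1), key=lambda a: q[state][a])
        next_state, reward, done = step(state, action)
        # Update the value estimate from the simulated outcome.
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state
        if done:
            break

print("learned action per state:", [max((0, 1), key=lambda a: q[s][a]) for s in range(10)])
```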

“To realize this vision of making AI more accessible and valuable for all, we have to remove the barriers to development, empowering every developer, regardless of machine learning expertise, to be an AI developer,” Microsoft Corporate VP Gurdeep Pall said in an announcement. “Bonsai has made tremendous progress here and Microsoft remains committed to furthering this work.”

Microsoft is among a number of high-profile companies that have supported the four-year-old startup. Last year, it joined ABB, Samsung and Siemens in helping the company raise a $7.6 million round, bringing Bonsai’s total funding to $13.6 million, per Crunchbase. The pending move follows Microsoft’s recent high-profile acquisition of the code hosting tool GitHub.

“Going forward, we see a massive opportunity to empower enterprises & developers globally with the tools and technology needed to build and operate the BRAINs that power these intelligent autonomous systems,” Bonsai co-founder/CEO Mark Hammond said in a blog post. “We are not the only ones that feel this way. Today we are excited to announce that Microsoft will be acquiring Bonsai to help accelerate the realization of this common vision.”

 



Beamery closes $28M Series B to stoke support for its ‘talent CRM’

19:30 | 20 June

Beamery, a London-based startup that offers self-styled “talent CRM” — aka ‘candidate relationship management’ — and recruitment marketing software targeted at fast-growing companies, has closed a $28M Series B funding round, led by EQT Ventures.

Also participating in the round are M12, Microsoft’s venture fund, and existing investors Index Ventures, Edenred Capital Partners and Angelpad Fund. Beamery last raised a $5M Series A, in April 2017, led by Index.

Its pitch centers on the notion of helping businesses win a ‘talent war’ by taking a more strategic and proactive approach to future hires, versus just maintaining a spreadsheet of potential candidates.

Its platform aims to help the target enterprises build and manage a talent pool of people they might want to hire in future to get out ahead of the competition in HR terms, including providing tools for customized marketing aimed at nurturing relationships with possible future hires.

Customer numbers for Beamery’s software have stepped up from around 50 in April 2017 to 100 using it now — including the likes of Facebook (which is using it globally), Continental, VMware, Zalando, Grab and Balfour Beatty.

It says the new funding will be going towards supporting customer growth, including by ramping up hiring in its offices in London (HQ), Austin and San Francisco.

It also wants to expand into more markets. “We’re focusing on some of the world’s biggest global businesses that need support in multiple timezones and geographies so really it’s a global approach,” said a spokesman on that.

“Companies adopting the system are large enterprises doing talent at scale, that are innovative in terms of being proactive about recruiting, candidate experience and employer brand,” he added.

A “significant” portion of the Series B funds will also go towards R&D and product development focused on its HR tech niche.

“Across all sectors, there’s a shift towards proactive recruitment through technology, and Beamery is emerging as the category leader,” added Tom Mendoza, venture lead and investment advisor at EQT, in a supporting statement.

“Beamery has a fantastic product, world-class high-ambition founders, and an outstanding analytics-driven team. They’ve been relentless about building the best talent CRM and marketing platform and gaining a deep understanding of the industry-wide problems.”

 



New system connects your mind to a machine to help stop mistakes

19:01 | 20 June

How do you tell your robot not to do something that could be catastrophic? You could give it a verbal or programmatic command, or you could have it watch your brain for signs of distress and have it stop itself. That’s what researchers at MIT’s robotics research lab have done with a system that is wired to your brain and tells robots how to do their job.

The initial system is fairly simple. A scalp EEG and EMG system is connected to a Baxter work robot and lets a human wave or gesture when the robot is doing something it shouldn’t be doing. For example, the robot could regularly do a task – drilling holes, say – but when it approaches an unfamiliar scenario, the human can gesture at the task that should be done.

“By looking at both muscle and brain signals, we can start to pick up on a person’s natural gestures along with their snap decisions about whether something is going wrong,” said PhD candidate Joseph DelPreto. “This helps make communicating with a robot more like communicating with another person.”

Because the system uses nuances like gestures and emotional reactions you can train robots to interact with humans with disabilities and even prevent accidents by catching concern or alarm before it is communicated verbally. This lets workers stop a robot before it damages something and even help the robot understand slight changes to its tasks before it begins.

In their tests the team trained Baxter to drill holes in an airplane fuselage. The task changed occasionally and a human standing nearby was able to gesture to the robot to change position before it drilled, essentially training it to do new tasks in the midst of its current task. Further, there was no actual programming involved on the human’s part, just a suggestion that the robot move the drill left or right on the fuselage. The most important thing? Humans don’t have to think in a special way or train themselves to interact with the machine.
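
A toy sketch of that kind of supervisory loop is below. The fixed thresholds stand in for the trained EEG and EMG classifiers the MIT team actually uses, and the signal format and drill-plan representation are assumptions made purely for illustration.

```python
import random

def classify_eeg(window):
    """Pretend error-related-potential detector: flags distress if the mean amplitude spikes."""
    return sum(window) / len(window) > 0.7          # threshold invented for the example

def classify_emg(window):
    """Pretend gesture detector: sign of the strongest muscle sample picks a direction."""
    peak = max(window, key=abs)
    if abs(peak) < 0.5:
        return None
    return "left" if peak < 0 else "right"

def supervise(robot_plan, eeg_stream, emg_stream):
    """Walk through the robot's planned drill targets, letting human signals veto or nudge them."""
    executed = []
    for target, eeg, emg in zip(robot_plan, eeg_stream, emg_stream):
        if classify_eeg(eeg):                        # brain says "something is wrong": halt
            executed.append(("halt", target))
            break
        gesture = classify_emg(emg)
        if gesture == "left":
            target -= 1                              # nudge the drill position
        elif gesture == "right":
            target += 1
        executed.append(("drill", target))
    return executed

# Simulated signal windows for three planned drill positions.
plan = [10, 20, 30]
eeg = [[random.random() * 0.5 for _ in range(8)] for _ in plan]
emg = [[0.0] * 8, [0.9] + [0.0] * 7, [0.0] * 8]      # a "right" gesture before the second hole
print(supervise(plan, eeg, emg))
```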

“What’s great about this approach is that there’s no need to train users to think in a prescribed way,” said DelPreto. “The machine adapts to you, and not the other way around.”

The team will present their findings at the Robotics: Science and Systems (RSS) conference.

 



Europe takes another step towards copyright pre-filters for user generated content

14:17 | 20 June

In a key vote this morning the European Parliament’s legal affairs committee has backed the two most controversial elements of a digital copyright reform package — which critics warn could have a chilling effect on Internet norms like memes and also damage freedom of expression online.

In the draft copyright directive, Article 11, “Protection of press publications concerning online uses” — which targets news aggregator business models by setting out a neighboring right for the use of snippets of journalistic content, requiring users to get a license from the publisher (aka ‘the link tax’, as critics dub it) — was adopted by a 13:12 majority of the legal committee.

Meanwhile, Article 13, “Use of protected content by online content sharing service providers” — which makes platforms directly liable for copyright infringements by their users, thereby pushing them towards creating filters that monitor all content uploads, with all the associated potential chilling effects (aka ‘censorship machines’) — was adopted by a 15:10 majority.

MEPs critical of the proposals have vowed to continue to oppose the measures, and the EU parliament will eventually need to vote as a whole.

Tweet from Julia Reda (@Senficon): “… has been adopted by … with a 15:10 majority. Again: We will take this fight to plenary and still hope to …” pic.twitter.com/BLguxmHCWs

EU Member State representatives in the EU Council will also need to vote on the reforms before the directive can become law. Though, as it stands, a majority of European governments appear to back the proposals.

European digital rights group EDRi, a long-standing critic of Article 13, has a breakdown of the next steps for the copyright directive here.

Derailing the proposals now essentially rests on whether enough MEPs can be convinced it’s politically expedient to do so — factoring in a timeline that includes their next election in May 2019.

Last week, a coalition of original Internet architects, computer scientists, academics and supporters — including Sir Tim Berners-Lee, Vint Cerf, Bruce Schneier, Jimmy Wales and Mitch Kapor — penned an open letter to the European Parliament’s president to oppose Article 13, warning that while “well-intended” the requirement that Internet platforms perform automatic filtering of all content uploaded by users “takes an unprecedented step towards the transformation of the Internet from an open platform for sharing and innovation, into a tool for the automated surveillance and control of its users”.

“As creators ourselves, we share the concern that there should be a fair distribution of revenues from the online use of copyright works, that benefits creators, publishers, and platforms alike. But Article 13 is not the right way to achieve this,” they write in the letter.

“By inverting this liability model and essentially making platforms directly responsible for ensuring the legality of content in the first instance, the business models and investments of platforms large and small will be impacted. The damage that this may do to the free and open Internet as we know it is hard to predict, but in our opinions could be substantial.”

The Wikimedia Foundation also blogged separately, setting out some specific concerns about the impact that mandatory upload filters could have on Wikipedia.

“[A]ny sort of law which mandates the deployment of automatic filters to screen all uploaded content using AI or related technologies does not leave room for the types of community processes which have been so effective on the Wikimedia projects,” it warned last week. “As previously mentioned, upload filters as they exist today view content through a broad lens, that can miss a lot of the nuances which are crucial for the review of content and assessments of legality or veracity.”

More generally, critics warn that expressive and creative remix formats like memes and GIFs — which form an integral part of the rich communication currency of the Internet — will be at risk if the proposals become law…

Tweet from Ziga Turk (@ZigaTurkEU): “This may be illegal under … . If just one of the four images is copyrighted, Twitter would be compelled to take this picture off.” https://t.co/aSbYoiPDm0

Regarding Article 11, Europe already has experience experimenting with a neighboring right for news, after an ancillary copyright law was enacted in Germany in 2013.

But local publishers ended up granting Google free consent to display their snippets after they saw traffic fall substantially when Google stopped showing their content rather than pay for using it.

Spain also enacted a similar law for publishers in 2014, but its implementation required publishers to charge for using their snippets — leading Google to permanently close its news aggregation service in the country.

Critics of this component of the digital copyright reform package also warn it’s unclear what kinds of news content will constitute a snippet, and thus fall under the proposal — even suggesting a URL including the headline of an article could fall foul of the copyright extension; ergo the hyperlink itself could be in danger.

They also argue that an amendment giving Member States the flexibility to decide whether or not a snippet should be considered “insubstantial” (and thus freely shared) or not, does not clear up problems — saying it just risks causing fresh fragmentation across the bloc, at a time when the Commission is keenly pushing a so-called ‘Digital Single Market’ strategy.

“Instead of one Europe-wide law, we’d have 28,” warns Reda on that. “With the most extreme becoming the de-facto standard: To avoid being sued, international internet platforms would be motivated to comply with the strictest version implemented by any member state.”

Returning to Article 13, the EU’s executive, the Commission — the body responsible for drafting the copyright reforms — has also been pushing online platforms towards pre-filtering content as a mechanism for combating terrorist content, setting out a “one hour rule” for takedowns of this type of content earlier this year, for example.

But again critics of the copyright reforms argue it’s outrageously disproportionate to seek to apply the same measures that are being applied to try to clamp down on terrorist propaganda and serious criminal offenses like child exploitation to police copyright.

“For copyrighted content these automated tools simply undermine copyright exceptions. And they are not proportionate,” Reda told us last year. “We are not talking about violent crimes here in the way that terrorism or child abuse are. We’re talking about something that is a really widespread phenomenon and that’s dealt with by providing attractive legal offers to people. And not by treating them as criminals.”

 



Blockchain browser Brave starts opt-in testing of on-device ad targeting

12:12 | 20 June

Brave, an ad-blocking web browser with a blockchain-based twist, has started trials of ads that reward viewers for watching them — the next step in its ambitious push towards a consent-based, pro-privacy overhaul of online advertising.

Brave’s Basic Attention Token (BAT) is the underlying micropayments mechanism it’s using to fuel the model. The startup was founded in 2015 by former Mozilla CEO Brendan Eich, and had a hugely successful initial coin offering last year.

In a blog post announcing the opt-in trial yesterday, Brave says it’s started “voluntary testing” of the ad model before it scales up to additional user trials.

These first tests involve around 250 “pre-packaged ads” being shown to trial volunteers via a dedicated version of the Brave browser that’s both loaded with the ads and capable of tracking users’ browsing behavior.

The startup signed up Dow Jones Media Group as a partner for the trial-based ad content back in April.

People interested in joining these trials are being asked to contact its Early Access group — via community.brave.com.

Brave says the test is intended to analyze user interactions to generate test data for training its on-device machine learning algorithms. So while its ultimate goal for the BAT platform is to be able to deliver ads without eroding individual users’ privacy via this kind of invasive tracking, the test phase does involve “a detailed log” of browsing activity being sent to it.

Though Brave also specifies: “Brave will not share this information, and users can leave this test at any time by switching off this feature or using a regular version of Brave (which never logs user browsing data to any server).”

“Once we’re satisfied with the performance of the ad system, Brave ads will be shown directly in the browser in a private channel to users who consent to see them. When the Brave ad system becomes widely available, users will receive 70% of the gross ad revenue, while preserving their privacy,” it adds.

The key privacy-by-design shift Brave is working towards is moving ad targeting from a cloud-based ad exchange to the local device where users can control their own interactions with marketing content, and don’t have to give up personal data to a chain of opaque third parties (armed with hooks and data-sucking pipes) in order to do so.

Local device ad targeting will work by Brave pushing out ad catalogs (one per region and natural language) to available devices on a recurring basis.

“Downloading a catalog does not identify any user,” it writes. “As the user browses, Brave locally matches the best available ad from the catalog to display that ad at the appropriate time. Brave ads are opt-in and consent-based (disabled by default), and engineered to operate without leaking the user’s personal data from their device.”
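
As a purely hypothetical illustration of matching “the best available ad from the catalog” on the device, the sketch below scores catalog entries against interest keywords kept locally. Brave’s real system relies on on-device machine learning rather than this keyword overlap, and the catalog format shown here is invented.

```python
# Hypothetical downloaded catalog for one region/language; nothing in it identifies a user.
catalog = [
    {"ad_id": "a1", "keywords": {"travel", "flights", "hotels"}},
    {"ad_id": "a2", "keywords": {"laptops", "gpus", "hardware"}},
    {"ad_id": "a3", "keywords": {"running", "shoes", "fitness"}},
]

def pick_ad(local_interest_keywords, catalog, min_overlap=1):
    """Score each catalog entry against interests kept only on the device;
    return the best ad id, or None if nothing matches well enough."""
    best_id, best_score = None, 0
    for ad in catalog:
        score = len(ad["keywords"] & local_interest_keywords)
        if score > best_score:
            best_id, best_score = ad["ad_id"], score
    return best_id if best_score >= min_overlap else None

# Interests derived locally from browsing history; this set never leaves the device.
local_interests = {"hardware", "gpus", "reviews"}
print(pick_ad(local_interests, catalog))   # -> "a2"
```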

It couches this approach as “a more efficient and direct opportunity to access user attention without the inherent liabilities and risks involved with large scale user data collection”.

Though there’s still a ways to go before Brave is in a position to prove out its claims — including several more testing phases.

Brave says it’s planning to run further studies later this month with a larger set of users that will focus on improving its user modeling — “to integrate specific usage of the browser, with the primary goal of understanding how behavior in the browser impacts when to deliver ads”.

“This will serve to strengthen existing modeling and data classification engines and to refine the system’s machine learning,” it adds.

After that it says it will start to expand user trials — “in a few months” — focusing testing on the impact of rewards in its user-centric ad system.

“Thousands of ads will be used in this phase, and users will be able to earn tokens for viewing and interacting with ads,” it says of that.

Brave’s initial goal is for users to be able to reward content producers via the utility BAT token stored in a payment wallet baked into the browser. The default distributes the tokens stored in a user’s wallet based on time spent on Brave-verified websites (though users can also make manual tips).
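
A minimal sketch of that default distribution rule, assuming a simple proportional split over time spent on verified sites (the site names, rounding and data shapes are assumptions for illustration, not Brave’s implementation):

```python
def distribute_wallet(balance_bat, attention_seconds, verified_sites):
    """Split a wallet balance across verified sites in proportion to time spent on each.
    attention_seconds: {"example.org": 1200, ...}; unverified sites receive nothing."""
    eligible = {site: secs for site, secs in attention_seconds.items() if site in verified_sites}
    total = sum(eligible.values())
    if total == 0:
        return {}
    return {site: round(balance_bat * secs / total, 4) for site, secs in eligible.items()}

payouts = distribute_wallet(
    balance_bat=10.0,
    attention_seconds={"news.example": 3000, "blog.example": 1000, "unverified.example": 500},
    verified_sites={"news.example", "blog.example"},
)
print(payouts)   # {'news.example': 7.5, 'blog.example': 2.5}
```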

Though payments using BAT may also ultimately be able to do more.

Its roadmap envisages real ad revenue and donation flow fee revenue being generated via its system this year, and also anticipates BAT integration into “other apps based on open source & specs for greater ad buying leverage and publisher onboarding”.

 



Football matches land on your table thanks to augmented reality

01:25 | 20 June

It’s World Cup season, so that means that even articles about machine learning have to have a football angle. Today’s concession to the beautiful game is a system that takes 2D videos of matches and recreates them in 3D so you can watch them on your coffee table (assuming you have some kind of augmented reality setup, which you almost certainly don’t). It’s not as good as being there, but it might be better than watching it on TV.

The “Soccer On Your Tabletop” system takes as its input a video of a match and watches it carefully, tracking each player and their movements individually. The images of the players are then mapped onto 3D models “extracted from soccer video games,” and placed on a 3D representation of the field. Basically they cross FIFA 18 with real life and produce a sort of miniature hybrid.

Considering the source data — two-dimensional, low-resolution, and in motion — it’s a pretty serious accomplishment to reliably reconstruct a realistic and reasonably accurate 3D pose for each player.
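
To give a flavor of just the final placement step, here is a toy, runnable caricature: 2D detections are mapped onto a virtual pitch with a simple linear scaling. The real system uses camera calibration and per-player 3D pose estimation, so everything here, from the frame size to the coordinate mapping, is an illustrative assumption.

```python
# Toy placement step: map 2D detections (pixel coordinates) onto a 105m x 68m virtual pitch.
FRAME_W, FRAME_H = 1920, 1080
PITCH_LENGTH, PITCH_WIDTH = 105.0, 68.0

def place_on_pitch(detections):
    """detections: list of (player_id, x_px, y_px) for the feet of each tracked player.
    Returns (player_id, x_m, y_m, z_m) positions on the virtual field.
    A real system would use camera calibration and per-player 3D pose, not this linear map."""
    placed = []
    for player_id, x_px, y_px in detections:
        x_m = x_px / FRAME_W * PITCH_LENGTH
        y_m = y_px / FRAME_H * PITCH_WIDTH
        placed.append((player_id, round(x_m, 1), round(y_m, 1), 0.0))  # z=0: standing on the turf
    return placed

frame_detections = [("gk_1", 120, 540), ("fw_9", 1500, 300), ("mf_8", 900, 700)]
print(place_on_pitch(frame_detections))
```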

Now, it’s far from perfect. One might even say it’s a bit useless. The characters’ positions are estimated, so they jump around a bit, and the ball doesn’t really appear much, so everyone appears to just be dancing around on a field. (That’s on the to-do list.)

But the idea is great, and this is a working if highly limited first shot at it. Assuming the system could ingest a whole game based on multiple angles (it could source the footage directly from the networks), you could have a 3D replay available just minutes after the actual match concluded.

Not only that, but wouldn’t it be cool to be able to gather round a central location and watch the game from multiple angles on it? I’ve always thought one of the worst things about watching sports on TV is that everyone is sitting there staring in one direction, seeing the exact same thing. Letting people spread out, pick sides, see things from different angles to analyze strategies — that would be fantastic.

All we need is for someone to invent a perfect, affordable holographic display that works from all angles and we’re set.

The research is being presented at the Computer Vision and Pattern Recognition conference in Salt Lake City, and it’s a collaboration between Facebook, Google, and the University of Washington.

 



Google injects Hire with AI to speed up common tasks

19:00 | 19 June

Since Google Hire launched last year, it has been trying to make it easier for hiring managers to manage the data and tasks associated with the hiring process, while maybe tweaking LinkedIn along the way. Today the company announced some AI-infused enhancements that it says will help save time and energy spent on manual processes.

“By incorporating Google AI, Hire now reduces repetitive, time-consuming tasks, like scheduling interviews into one-click interactions. This means hiring teams can spend less time with logistics and more time connecting with people,” Google’s Berit Hoffmann, Hire product manager, wrote in a blog post announcing the new features.

The first piece involves making it easier and faster to schedule interviews with candidates. This is a multi-step activity that involves selecting appropriate interviewers, choosing a time and date that works for all parties involved and scheduling a room in which to conduct the interview. Organizing these kinds of logistics tends to eat up a lot of time.

“To streamline this process, Hire now uses AI to automatically suggest interviewers and ideal time slots, reducing interview scheduling to a few clicks,” Hoffmann wrote.
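
Google hasn’t detailed how the suggestion model works. At its simplest, though, “a slot that works for everyone, plus a free room” is just a set intersection over calendars, as in this hypothetical sketch (the slot labels and calendar shapes are made up for illustration):

```python
def suggest_slots(candidate_free, interviewer_calendars, room_free, max_suggestions=3):
    """All arguments are sets of slot labels, e.g. "Tue 10:00".
    Returns up to max_suggestions slots that work for the candidate, every interviewer, and a room."""
    common = set(candidate_free) & set(room_free)
    for free_slots in interviewer_calendars.values():
        common &= set(free_slots)
    return sorted(common)[:max_suggestions]

slots = suggest_slots(
    candidate_free={"Tue 10:00", "Tue 14:00", "Wed 09:00"},
    interviewer_calendars={
        "alice": {"Tue 10:00", "Wed 09:00"},
        "bob": {"Tue 10:00", "Tue 14:00", "Wed 09:00"},
    },
    room_free={"Tue 10:00", "Wed 09:00", "Wed 15:00"},
)
print(slots)   # ['Tue 10:00', 'Wed 09:00']
```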

Photo: Google

Another common hiring chore is finding keywords in a resume. Hire’s AI now finds these words for a recruiter automatically by analyzing terms in a job description or search query and highlighting relevant words, including synonyms and acronyms, in a resume, saving the time spent manually searching for them.
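
A naive version of that highlighting step might look like the following sketch, where a small hand-written synonym and acronym table stands in for whatever Hire actually learns (the table, the ** markers and the matching rule are assumptions for illustration):

```python
import re

# Tiny hand-written expansion table; the real product presumably learns these relationships.
SYNONYMS = {
    "machine learning": {"machine learning", "ml"},
    "software engineer": {"software engineer", "swe", "software developer"},
}

def highlight(resume_text, query_terms):
    """Wrap any query term, synonym, or acronym found in the resume with ** markers."""
    terms = set()
    for term in query_terms:
        terms |= SYNONYMS.get(term.lower(), {term.lower()})
    out = resume_text
    for term in sorted(terms, key=len, reverse=True):      # longest first so phrases win
        pattern = r"\b" + re.escape(term) + r"\b"
        out = re.sub(pattern, lambda m: f"**{m.group(0)}**", out, flags=re.IGNORECASE)
    return out

resume = "Senior SWE with 5 years of ML experience and a focus on search infrastructure."
print(highlight(resume, ["software engineer", "machine learning"]))
```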

Photo: Google

Finally, another standard part of the hiring process is making phone calls, lots of phone calls. To make this easier, the latest version of Google Hire has a new click-to-call function. Simply click the phone number and it dials automatically and registers the call in a call log for easy recall or auditing.

While Microsoft has LinkedIn and Office 365, Google has G Suite and Google Hire. The strategy behind Hire is to allow hiring personnel to work in the G Suite tools they are immersed in every day and incorporate Hire functionality within those tools.

It’s not unlike CRM tools that integrate with Outlook or Gmail because that’s where salespeople spend a good deal of their time anyway. The idea is to reduce the time spent switching between tools and make the process a more integrated experience.

While none of these features individually will necessarily wow you, they make use of Google AI to simplify common tasks and reduce some of the tedium associated with everyday hiring work.

 



Amazon is bringing the Echo to Italy and Spain

18:12 | 19 June

Alexa’s slow but steady march across the globe continues, as Amazon gets ready to bring the smart assistant to Italy and Spain later this year. The AI will be joined by the company’s own Echo devices, along with third-party hardware from Bose and Sonos.

In the meantime, Amazon is opening the Alexa Skills Kit to developers in those countries. It’s also making the Alexa Voice Service developer preview available to hardware developers looking to build third-party devices using the assistant, and throwing in an Echo device for the first 100 devs for good measure.

Just this month, the company added nearby France to the list of Alexa/Echo markets, joining the U.S., Canada, the U.K., Australia, India, New Zealand, Germany, Japan and Ireland. That manner of rollout takes time. In addition to priming the pump for developers, Alexa needs to be tweaked to learn not only a new language, but also accents, subtle linguistic nuances and local customs.

No word yet on the specific timeframe for launch, or which devices are coming to the aforementioned countries. France, for its part, got the Echo, Echo Dot and Echo Spot. Google, meanwhile, has already added Italian support for Assistant and announced Home availability for Spain at I/O, along with Denmark, Korea, Mexico, the Netherlands, Norway and Sweden.

 





Last comments

Walmart retreats from its UK Asda business to hone its focus on competing with Amazon
Peter Short
Good luck
Peter Short

Evolve Foundation launches a $100 million fund to find startups working to relieve human suffering
Peter Short
Money will give hope
Peter Short

Boeing will build DARPA’s XS-1 experimental spaceplane
Peter Short
Great
Peter Short

Is a “robot tax” really an “innovation penalty”?
Peter Short
It need to be taxed also any organic substance ie food than is used as a calorie transfer needs tax…
Peter Short

Twitter Is Testing A Dedicated GIF Button On Mobile
Peter Short
Sounds great Facebook got a button a few years ago
Then it disappeared Twitter needs a bottom maybe…
Peter Short

Apple’s Next iPhone Rumored To Debut On September 9th
Peter Short
Looks like a nice cycle of a round year;)
Peter Short

AncestryDNA And Google’s Calico Team Up To Study Genetic Longevity
Peter Short
I'm still fascinated by DNA though I favour pure chemistry what could be
Offered is for future gen…
Peter Short

U.K. Push For Better Broadband For Startups
Verg Matthews
There has to an email option icon to send to the clowns in MTNL ... the govt of India's service pro…
Verg Matthews

CrunchWeek: Apple Makes Music, Oculus Aims For Mainstream, Twitter CEO Shakeup
Peter Short
Noted Google maybe grooming Twitter as a partner in Social Media but with whistle blowing coming to…
Peter Short
