Blog of the website «TechCrunch»


Main article: Artificial intelligence


Uber’s self-driving unit starts mapping Washington D.C. ahead of testing

23:53 | 23 January

Uber Advanced Technologies Group will start mapping Washington D.C., ahead of plans to begin testing its self-driving vehicles in the city this year.

Initially, there will be three Uber vehicles mapping the area, a company spokesperson said. These vehicles, which will be manually driven and have two trained employees inside, will collect sensor data using a top-mounted sensor wing equipped with cameras and a spinning lidar. The data will be used to build high-definition maps and to feed Uber’s virtual simulation and test-track testing scenarios.

Uber intends to launch autonomous vehicles in Washington D.C. before the end of 2020.

At least one other company is already testing self-driving cars in Washington, D.C.: Ford announced plans in October 2018 to test its autonomous vehicles there. Argo AI is developing the virtual driver system and high-definition maps designed for Ford’s self-driving vehicles.

Argo, which is backed by Ford and Volkswagen, started mapping the city in 2018. Testing was expected to begin in the first quarter of 2019.

Uber ATG has kept a low profile ever since one of its human-supervised test vehicles struck and killed a pedestrian in Tempe, Arizona in March 2018. The company halted its entire autonomous vehicle operation immediately following the incident.

Nine months later, Uber ATG resumed on-road testing of its self-driving vehicles in Pittsburgh, following a Pennsylvania Department of Transportation decision to authorize the company to put its autonomous vehicles on public roads. The company hasn’t resumed testing in other markets such as San Francisco.

Uber is collecting data and mapping in three other cities: Dallas, San Francisco and Toronto. In those cities, just as in Washington, D.C., Uber manually drives its test vehicles.

Uber spun out the self-driving car business in April 2019 after closing $1 billion in funding from Toyota, auto-parts maker Denso and SoftBank’s Vision Fund. The deal valued Uber ATG at $7.25 billion, at the time of the announcement. Under the deal, Toyota and Denso are providing $667 million, with the Vision Fund throwing in the remaining $333 million.

 



Cortex Labs helps data scientists deploy machine learning models in the cloud

21:32 | 23 January

It’s one thing to develop a working machine learning model, it’s another to put it to work in an application. Cortex Labs is an early stage startup with some open source tooling designed to help data scientists take that last step.

The company’s founders were students at Berkeley when they observed that one of the problems with creating machine learning models was finding a way to deploy them. While there is plenty of open source tooling available, data scientists are generally not experts in infrastructure.

CEO Omer Spillinger says that infrastructure was something the four members of the founding team — himself, CTO David Eliahu, head of engineering Vishal Bollu and head of growth Caleb Kaiser — understood well.

What the four founders did was take a set of open source tools and combine them with AWS services to provide a way to deploy models more easily. “We take open source tools like TensorFlow, Kubernetes and Docker and we combine them with AWS services like CloudWatch, EKS (Amazon’s flavor of Kubernetes) and S3 to basically give one API for developers to deploy their models,” Spillinger explained.

He says that a data scientist starts by uploading an exported model file to S3 cloud storage. “Then we pull it, containerize it and deploy it on Kubernetes behind the scenes. We automatically scale the workload and automatically switch you to GPUs if it’s compute intensive. We stream logs and expose [the model] to the web. We help you manage security around that, stuff like that,” he said.
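The workflow Spillinger describes lends itself to a declarative description. The snippet below is a hypothetical sketch of what such a deployment spec might look like; the field names and structure are invented for illustration, not taken from Cortex's actual API:

```yaml
# Hypothetical deployment spec: one entry per served model.
- name: sentiment-classifier
  model: s3://my-bucket/models/sentiment/  # exported model file uploaded by the data scientist
  compute:
    gpu: auto          # switch to GPUs if the workload is compute-intensive
    min_replicas: 1
    max_replicas: 10   # autoscale replicas with request load
  logging: stream      # stream container logs back to the developer
```

The point of a spec like this is that the data scientist never touches Kubernetes, Docker or CloudWatch directly; the tooling translates the declaration into infrastructure behind the scenes.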

While he acknowledges this is not unlike Amazon SageMaker, the company’s long-term goal is to support all of the major cloud platforms. SageMaker, of course, only works on the Amazon cloud, while Cortex will eventually work on any cloud. In fact, Spillinger says the biggest feature request they’ve gotten so far is support for Google Cloud; that and support for Microsoft Azure are on the road map.

The Cortex founders have been keeping their heads above water while they wait to launch a commercial product, with the help of an $888,888 seed round from Engineering Capital in 2018. If you’re wondering about that oddly specific number, it’s partly an inside joke — Spillinger’s birthday is August 8th — and partly a number arrived at to make the valuation work, he said.

For now, the company is offering the open source tools, and building a community of developers and data scientists. Eventually, it wants to monetize by building a cloud service for companies who don’t want to manage clusters — but that is down the road, Spillinger said.

 



Unearth the future of agriculture at TC Sessions: Robotics+AI with the CEOs of Traptic, Farmwise and Pyka

19:30 | 22 January

Farming is one of the oldest professions, but today those amber waves of grain (and soy) are a test bed for sophisticated robotic solutions to problems farmers have had for millennia. Learn about the cutting edge (sometimes literally) of agricultural robots at TC Sessions: Robotics+AI on March 3 with the founders of Traptic, Pyka, and Farmwise.

Traptic, and its co-founder and CEO Lewis Anderson, you may remember from Disrupt SF 2019, where it was a finalist in the Startup Battlefield. The company has developed a robotic berry picker that identifies ripe strawberries and plucks them off the plants with a gentle grip. It could be the beginning of a new automated era for the fruit industry, which is decades behind grains and other crops when it comes to machine-based harvesting.

Farmwise has a job that’s equally delicate yet involves rough treatment of the plants — weeding. Its towering machine trundles along rows of crops, using computer vision to locate and remove invasive plants, working 24/7, 365 days a year. CEO Sebastian Boyer will speak to the difficulty of this task and how he plans to evolve the machines to become “doctors” for crops, monitoring health and spontaneously removing pests like aphids.

Pyka’s robot is considerably less earthbound than those: an autonomous, all-electric crop-spraying aircraft — with wings! This is a much different challenge from the more stable farming and spraying drones like those of DroneSeed and SkyX, but the choice gives the craft more power and range, hugely important for today’s vast fields. Co-founder Michael Norcia can speak to that scale and his company’s methods of meeting it.

These three companies and founders are at the very frontier of what’s possible at the intersection of agriculture and technology, so expect a fruitful conversation.

$150 Early Bird savings end on Feb. 14! Book your $275 Early Bird Ticket today and put that extra money in your pocket.

Students, grab your super discounted $50 tickets right here. You might just meet your future employer/internship opportunity at this event.

Startups, we only have 5 demo tables left for the event. Book your $2200 demo table here and get in front of some of today’s leading names in the biz. Each table comes with 4 tickets to attend the show.

 



ServiceNow acquires Loom Systems to expand AIOps coverage

16:30 | 22 January

ServiceNow announced today that it has acquired Loom Systems, an Israeli startup that specializes in AIOps. The companies did not reveal the purchase price.

IT operations collects tons of data across a number of monitoring and logging tools, way too much for any team of humans to keep up with. That’s why there are startups like Loom turning to AI to help sort through it. It can find issues and patterns in the data that would be challenging or impossible for humans to find. Applying AI to operations data in this manner has become known as AIOps in industry parlance.
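As a toy illustration of the kind of statistical baselining such tools automate (this is not Loom's actual method, which is far more sophisticated), a simple z-score check can flag log intervals with anomalous error counts:

```python
from statistics import mean, stdev

def anomalous_minutes(error_counts, threshold=2.5):
    """Flag indices whose error count sits more than `threshold` sample
    standard deviations above the mean: a crude stand-in for the
    baselining an AIOps product performs across many metrics at scale."""
    mu = mean(error_counts)
    sigma = stdev(error_counts)
    if sigma == 0:  # perfectly flat series: nothing is anomalous
        return []
    return [i for i, c in enumerate(error_counts)
            if (c - mu) / sigma > threshold]

# One error count per minute of logs; minute 6 is a clear spike.
counts = [3, 2, 4, 3, 2, 3, 95, 4, 3, 2]
print(anomalous_minutes(counts))  # → [6]
```

A real system would do this per metric, per service, with seasonality-aware baselines rather than a single global mean, but the principle (learn normal, flag deviations) is the same.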

ServiceNow is first and foremost a company trying to digitize the service process, however that manifests itself. IT service operations is a big part of that. Companies can monitor their systems, wait until a problem happens and then try and track down the cause and fix it, or they can use the power of artificial intelligence to find potential dangers to the system health and neutralize them before they become major problems. That’s what an AIOps product like Loom’s can bring to the table.

Jeff Hausman, vice president and general manager of IT Operations Management at ServiceNow sees Loom’s strengths merging with ServiceNow’s existing tooling to help keep IT systems running. “We will leverage Loom Systems’ log analytics capabilities to help customers analyze data, automate remediation and reduce L1 incidents,” he told TechCrunch.

Loom co-founder and CEO Gabby Menachem not surprisingly sees a similar value proposition. “By joining forces, we have the unique opportunity to bring together our AI innovations and ServiceNow’s AIOps capabilities to help customers prevent and fix IT issues before they become problems,” he said in a statement.

Loom has raised $16 million since it launched in 2015, according to PitchBook data. Its most recent round, for $10 million, was in November 2019. Today’s deal is expected to close by the end of this quarter.

 



Google Cloud lands Lufthansa Group and Sabre as new customers

21:37 | 21 January

Google’s strategy for bringing new customers to its cloud is to focus on the enterprise and on specific verticals like healthcare, energy, financial services and retail, among others. Its healthcare efforts recently experienced a bit of a setback, with Epic now telling its customers that it is not moving forward with its plans to support Google Cloud. In return, though, Google now got to announce two new customers in the travel business: Lufthansa Group, the world’s largest airline group by revenue, and Sabre, a company that provides backend services to airlines, hotels and travel aggregators.

For Sabre, Google Cloud is now the preferred cloud provider. Like a lot of companies in the travel (and especially the airline) industry, Sabre runs plenty of legacy systems and is currently in the process of modernizing its infrastructure. To do so, it has entered a 10-year strategic partnership with Google “to improve operational agility while developing new services and creating a new marketplace for its airline, hospitality and travel agency customers.” The promise here, too, is that these new technologies will allow the company to offer new travel tools for its customers.

When you hear about airline systems going down, it’s often Sabre’s fault, so just being able to avoid that would already bring a lot of value to its customers.

“At Google we build tools to help others, so a big part of our mission is helping other companies realize theirs. We’re so glad that Sabre has chosen to work with us to further their mission of building the future of travel,” said Google CEO Sundar Pichai. “Travelers seek convenience, choice and value. Our capabilities in AI and cloud computing will help Sabre deliver more of what consumers want.”

The same holds true for Google’s deal with Lufthansa Group, which includes German flag carrier Lufthansa itself, but also subsidiaries like Austrian, Swiss, Eurowings and Brussels Airlines, as well as a number of technical and logistics companies that provide services to various airlines.

“By combining Google Cloud’s technology with Lufthansa Group’s operational expertise, we are driving the digitization of our operation even further,” said Dr. Detlef Kayser, Member of the Executive Board of the Lufthansa Group. “This will enable us to identify possible flight irregularities even earlier and implement countermeasures at an early stage.”

Lufthansa Group has selected Google as a strategic partner to “optimize its operations performance.” A team from Google will work directly with Lufthansa to bring this project to life. The idea here is to use Google Cloud to build tools that help the company run its operations as smoothly as possible and to provide recommendations when things go awry due to bad weather, airspace congestion or a strike (which seems to happen rather regularly at Lufthansa these days).

Delta recently launched a similar platform to help its employees.

 



Facebook speeds up AI training by culling the weak

20:01 | 21 January

Training an artificial intelligence agent to do something like navigate a complex 3D world is computationally expensive and time-consuming. In order to better create these potentially useful systems, Facebook engineers derived huge efficiency benefits from, essentially, leaving the slowest of the pack behind.

It’s part of the company’s new focus on “embodied AI,” meaning machine learning systems that interact intelligently with their surroundings. That could mean lots of things — responding to a voice command using conversational context, for instance, but also more subtle things like a robot knowing it has entered the wrong room of a house. Exactly why Facebook is so interested in that I’ll leave to your own speculation, but the fact is they’ve recruited and funded serious researchers to look into this and related domains of AI work.

To create such “embodied” systems, you need to train them using a reasonable facsimile of the real world. One can’t expect an AI that’s never seen an actual hallway to know what walls and doors are. And given how slow real robots actually move in real life you can’t expect them to learn their lessons here. That’s what led Facebook to create Habitat, a set of simulated real-world environments meant to be photorealistic enough that what an AI learns by navigating them could also be applied to the real world.

Such simulators, which are common in robotics and AI training, are also useful because, being simulators, you can run many instances of them at the same time — for simple ones, thousands simultaneously, each one with an agent in it attempting to solve a problem and eventually reporting back its findings to the central system that dispatched it.

Unfortunately, photorealistic 3D environments use a lot of computation compared to simpler virtual ones, meaning that researchers are limited to a handful of simultaneous instances, slowing learning to a comparative crawl.

The Facebook researchers, led by Dhruv Batra and Eric Wijmans (the former a professor and the latter a PhD student at Georgia Tech), found a way to speed up this process by an order of magnitude or more. The result is an AI system that can navigate a 3D environment from a starting point to a goal with a 99.9 percent success rate and few mistakes.

Simple navigation is foundational to a working “embodied AI” or robot, which is why the team chose to pursue it without adding any extra difficulties.

“It’s the first task. Forget the question answering, forget the context — can you just get from point A to point B? When the agent has a map this is easy, but with no map it’s an open problem,” said Batra. “Failing at navigation means whatever stack is built on top of it is going to come tumbling down.”

The problem, they found, was that the training systems were spending too much time waiting on slowpokes. Perhaps it’s unfair to call them that — these are AI agents that for whatever reason are simply unable to complete their task quickly.

“It’s not necessarily that they’re learning slowly,” explained Wijmans. “But if you’re simulating navigating a one bedroom apartment, it’s much easier to do that than navigate a ten bedroom mansion.”

The central system is designed to wait for all its dispatched agents to complete their virtual tasks and report back. If a single agent takes 10 times longer than the rest, that means there’s a huge amount of wasted time while the system sits around waiting so it can update its information and send out a new batch.

This little explanatory gif shows how, when one agent gets stuck, it delays the others learning from its experience.

The innovation of the Facebook team is to intelligently cut off these unfortunate laggards before they finish. After a certain amount of time in simulation, they’re done and whatever data they’ve collected gets added to the hoard.

“You have all these workers running, and they’re all doing their thing, and they all talk to each other,” said Wijmans. “One will tell the others, ‘okay, I’m almost done,’ and they’ll all report in on their progress. Any ones that see they’re lagging behind the rest will reduce the amount of work that they do before the big synchronization that happens.”

In this case you can see that each worker stops at the same time and shares simultaneously.

If a machine learning agent could feel bad, I’m sure it would at this point, and indeed that agent does get “punished” by the system in that it doesn’t get as much virtual “reinforcement” as the others. The anthropomorphic terms make this out to be more human than it is — essentially inefficient algorithms or ones placed in difficult circumstances get downgraded in importance. But their contributions are still valuable.

“We leverage all the experience that the workers accumulate, no matter how much, whether it’s a success or failure — we still learn from it,” Wijmans explained.

What this means is that there are no wasted cycles where some workers are waiting for others to finish. Bringing in more experience on the task at hand sooner means the next batch of slightly better workers goes out that much earlier, a self-reinforcing cycle that produces serious gains.
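The straggler-preemption idea can be sketched as a toy model. Everything below is invented for illustration (the worker speeds, the 60 percent cutoff rule, the numbers); it is not Facebook's DD-PPO implementation, but it shows why cutting off laggards shortens each synchronized batch at the cost of a little uncollected experience:

```python
import random

def collect_rollouts(num_workers, target_steps, preempt_fraction=0.6, seed=0):
    """Toy model of straggler preemption in synchronized rollout collection.

    Each worker simulates environment steps at a random speed. Once
    `preempt_fraction` of the workers have finished their quota, the rest
    are cut off and contribute whatever partial experience they gathered,
    instead of making everyone wait for the slowest worker.
    """
    rng = random.Random(seed)
    speeds = [rng.randint(1, 10) for _ in range(num_workers)]  # steps per tick
    finish_times = sorted(target_steps / s for s in speeds)
    # The synchronization point: once enough workers are done, stop everyone.
    cutoff = finish_times[int(preempt_fraction * num_workers) - 1]
    collected = sum(min(target_steps, int(s * cutoff)) for s in speeds)
    return cutoff, collected

# Preempting at 60% completion ends the batch no later than waiting for
# every worker, while still collecting most of the experience.
early_end, early_steps = collect_rollouts(8, 100, preempt_fraction=0.6)
full_end, full_steps = collect_rollouts(8, 100, preempt_fraction=1.0)
print(early_end <= full_end, early_steps <= full_steps)  # True True
```

Because the partial rollouts are still used for learning (nothing is thrown away), the trade is wall-clock time for a small amount of experience per batch, which is what lets the next, slightly better batch of workers start that much sooner.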

In the experiments they ran, the researchers found that the system, catchily named Decentralized Distributed Proximal Policy Optimization or DD-PPO, appeared to scale almost ideally, with performance increasing nearly linearly to more computing power dedicated to the task. That is to say, increasing the computing power 10x resulted in nearly 10x the results. On the other hand, standard algorithms led to very limited scaling, where 10x or 100x the computing power only results in a small boost to results because of how these sophisticated simulators hamstring themselves.

These efficient methods let the Facebook researchers produce agents that could solve a point to point navigation task in a virtual environment within their allotted time with 99.9 percent reliability. They even demonstrated robustness to mistakes, finding a way to quickly recognize they’d taken a wrong turn and go back the other way.

The researchers speculated that the agents had learned to “exploit the structural regularities,” a phrase that in some circumstances means the AI figured out how to cheat. But Wijmans clarified that it’s more likely that the environments they used have some real-world layout rules.

“These are real houses that we digitized, so they’re learning things about how western style houses tend to be laid out,” he said. Just as you wouldn’t expect the kitchen to enter directly into a bedroom, the AI has learned to recognize other patterns and make other “assumptions.”

The next goal is to find a way to let these agents accomplish their task with fewer resources. Each agent had a virtual camera that provided ordinary and depth imagery, but also an infallible coordinate system to tell where it had traveled and a compass that always pointed toward the goal. If only it were always so easy! Before this experiment, even with those resources, the success rate was considerably lower despite far more training time.

Habitat itself is also getting a fresh coat of paint with some interactivity and customizability.

Habitat as seen through a variety of virtualized vision systems.

“Before these improvements, Habitat was a static universe,” explained Wijmans. “The agent can move and bump into walls, but it can’t open a drawer or knock over a table. We built it this way because we wanted fast, large-scale simulation — but if you want to solve tasks like ‘go pick up my laptop from my desk,’ you’d better be able to actually pick up that laptop.”

Habitat now lets users add objects to rooms, apply forces to those objects, check for collisions and so on. After all, there’s more to real life than disembodied gliding around a frictionless 3D construct.

The improvements should make Habitat a more robust platform for experimentation, and will also make it possible for agents trained in it to directly transfer their learning to the real world — something the team has already begun work on and will publish a paper on soon.

 



Diligent’s Vivian Chu and Labrador’s Mike Dooley will discuss assistive robotics at TC Sessions: Robotics+AI

21:55 | 20 January

Too often the world of robotics seems to be a solution in search of a problem. Assistive robotics, on the other hand, is one of the primary real-world tasks that existing technology can address almost immediately.

The concept for the technology has been around for some time now and has caught on particularly well in places like Japan, where human help simply can’t keep up with the needs of an aging population. At TC Sessions: Robotics+AI at U.C. Berkeley on March 3, we’ll be speaking with a pair of founders developing offerings for precisely these needs.

Vivian Chu is the cofounder and CEO of Diligent Robotics. The company has developed the Moxi robot to assist with chores and other non-patient tasks, in order to allow caregivers more time to interact with patients. Prior to Diligent, Chu worked at both Google[X] and Honda Research Institute.

Mike Dooley is the cofounder and CEO of Labrador Systems. The Los Angeles-based company recently closed a $2 million seed round to develop assistive robots for the home. Dooley has worked at a number of robotics companies, including, most recently, a stint as the VP of Product and Business Development at iRobot.

Early Bird tickets are now on sale for $275, but you’d better hurry: prices go up by $100 in less than a month. Students can book a super discounted ticket for just $50 right here.

 



Fable Studio founder Edward Saatchi on designing virtual beings

19:42 | 20 January

In films, TV shows and books — and even in video games where characters are designed to respond to user behavior — we don’t perceive characters as beings with whom we can establish two-way relationships. But that’s poised to change, at least in some use cases.

Interactive characters — fictional, virtual personas capable of personalized interactions — are defining new territory in entertainment. In my guide to the concept of “virtual beings,” I outlined two categories of these characters:

  • virtual influencers: fictional characters with real-world social media accounts who build and engage with a mass following of fans.
  • virtual companions: AIs oriented toward one-to-one relationships, much like the tech depicted in the films “Her” and “Ex Machina.” They are personalized enough to engage us in entertaining discussions and respond to our behavior (in the physical world or within games) like a human would.

Part 3 of 3: designing virtual companions

In this discussion, Fable CEO Edward Saatchi addresses the technical and artistic dynamics of virtual companions: AIs created to establish one-to-one relationships with consumers. After mobile, Saatchi says he believes such virtual beings will act as the next paradigm for human-computer interaction.

 



Google’s Sundar Pichai doesn’t want you to be clear-eyed about AI’s dangers

17:11 | 20 January

Alphabet and Google CEO Sundar Pichai is the latest tech giant kingpin to make a public call for AI to be regulated, while simultaneously encouraging lawmakers toward a diluted enabling framework that does not put any hard limits on what can be done with AI technologies.

In an op-ed published in today’s Financial Times, Pichai makes a headline-grabbing call for artificial intelligence to be regulated. But his pitch injects a suggestive undercurrent that puffs up the risk for humanity of not letting technologists get on with business as usual and apply AI at population-scale — with the Google chief claiming: “AI has the potential to improve billions of lives, and the biggest risk may be failing to do so” — thereby seeking to frame ‘no hard limits’ as actually the safest option for humanity.

Simultaneously the pitch downplays any negatives that might cloud the greater good that Pichai implies AI will unlock — presenting “potential negative consequences” as simply the inevitable and necessary price of technological progress.

It’s all about managing the level of risk, is the leading suggestion, rather than questioning outright whether the use of a hugely risk-laden technology such as facial recognition should actually be viable in a democratic society.

“Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents,” Pichai writes, raiding history for a self-serving example while ignoring the vast climate costs of combustion engines (and the resulting threat now posed to the survival of countless species on Earth).

“The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread,” he goes on. “These lessons teach us that we need to be clear-eyed about what could go wrong.”

For “clear-eyed” read: Accepting of the technology-industry’s interpretation of ‘collateral damage’. (Which, in the case of misinformation and Facebook, appears to run to feeding democracy itself into the ad-targeting meat-grinder.)

Meanwhile, not at all mentioned in Pichai’s discussion of AI risks: The concentration of monopoly power that artificial intelligence appears to be very good at supercharging.

Funny that.

Of course it’s hardly surprising a tech giant that, in recent years, rebranded an entire research division to ‘Google AI’ — and has previously been called out by some of its own workforce over a project involving applying AI to military weapons technology — should be lobbying lawmakers to set AI ‘limits’ that are as dilute and abstract as possible.

The only thing that’s better than zero regulation is laws made by useful idiots who’ve fallen hook, line and sinker for industry-expounded false dichotomies, such as those claiming it’s ‘innovation or privacy’.

Pichai’s intervention also comes at a strategic moment, with US lawmakers eyeing AI regulation and the White House seemingly throwing itself into alignment with tech giants’ desires for ‘innovation-friendly’ rules which make their business easier. (To wit: This month White House CTO Michael Kratsios warned in a Bloomberg op-ed against “preemptive, burdensome or duplicative rules that would needlessly hamper AI innovation and growth”.)

The new European Commission, meanwhile, has been sounding a firmer line on both AI and big tech.

It has made tech-driven change a key policy priority, with president Ursula von der Leyen making public noises about reining in tech giants. She has also committed to publish “a coordinated European approach on the human and ethical implications of Artificial Intelligence” within her first 100 days in office. (She took up the post on December 1, 2019 so the clock is ticking.)

Last week a leaked draft of the Commission’s proposals for pan-EU AI regulation suggested it’s leaning toward a relatively light-touch approach (albeit, the European version of light touch is considerably more involved and interventionist than anything born in a Trump White House, clearly) — although the paper does float the idea of a temporary ban on the use of facial recognition technology in public places.

The paper notes that such a ban would “safeguard the rights of individuals, in particular against any possible abuse of the technology” — before arguing against such a “far-reaching measure that might hamper the development and uptake of this technology”, in favor of relying on provisions in existing EU law (such as the EU data protection framework, GDPR), in addition to relevant tweaks to current product safety and liability laws.

While it’s not yet clear which way the Commission will jump on regulating AI, even the lightish-touch version it’s considering would likely be a lot more onerous than Pichai would like.

In the op-ed he calls for what he couches as “sensible regulation” — aka taking a “proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities”.

For “social opportunities” read: The plentiful ‘business opportunities’ Google is spying — assuming the hoped for vast additional revenue scale it can get by supercharging expansion of AI-powered services into all sorts of industries and sectors (from health to transportation to everywhere else in between) isn’t derailed by hard legal limits on where AI can actually be applied.

“Regulation can provide broad guidance while allowing for tailored implementation in different sectors,” Pichai urges, setting out a preference for enabling “principles” and post-application “reviews”, to keep the AI spice flowing.

The op-ed only touches very briefly on facial recognition — despite the FT editors choosing to illustrate it with an image of the tech. Here Pichai again seeks to reframe the debate around what is, by nature, an extremely rights-hostile technology — talking only in passing of “nefarious uses” of facial recognition.

Of course this wilfully obfuscates the inherent risks of letting blackbox machines make algorithmic guesses at identity every time a face happens to pass through a public space.

You can’t hope to protect people’s privacy in such a scenario. Many other rights are also at risk, depending on what else the technology is being used for. So, really, any use of facial recognition is laden with individual and societal risk.

But Pichai is seeking to put blinkers on lawmakers. He doesn’t want them to see inherent risks baked into such a potent and powerful technology — pushing them towards only a narrow, ill-intended subset of “nefarious” and “negative” AI uses and “consequences” as being worthy of “real concerns”. 

And so he returns to banging the drum for “a principled and regulated approach to applying AI” [emphasis ours] — putting the emphasis on regulation that, above all, gives the green light for AI to be applied.

What technologists fear most here is rules that tell them when artificial intelligence absolutely cannot be applied.

Ethics and principles are, to a degree, mutable concepts — and ones which the tech giants have become very practiced at claiming as their own, for PR purposes, including by attaching self-styled ‘guard-rails’ to their own AI operations. (But of course there’s no actual legal binds there.)

At the same time data-mining giants like Google are very smooth operators when it comes to gaming existing EU rules around data protection, such as by infesting their user-interfaces with confusing dark patterns that push people to click or swipe their rights away.

But a ban on applying certain types of AI would change the rules of the game. Because it would put society in the driving seat.

Some far-sighted regulators have called for laws containing at least a moratorium on certain “dangerous” applications of AI — such as facial recognition technology, or autonomous weapons like the drone-based system Google was previously working on.

And a ban would be far harder for platform giants to simply bend to their will.

 



Shadows’ Dylan Flinn and Kombo’s Kevin Gould on the business of ‘virtual influencers’

19:24 | 19 January

In films, TV shows and books — and even in video games where characters are designed to respond to user behavior — we don’t perceive characters as beings with whom we can establish two-way relationships. But that’s poised to change, at least in some use cases.

Interactive characters — fictional, virtual personas capable of personalized interactions — are defining new territory in entertainment. In my guide to the concept of “virtual beings,” I outlined two categories of these characters:

  • virtual influencers: fictional characters with real-world social media accounts who build and engage with a mass following of fans.
  • virtual companions: AIs oriented toward one-to-one relationships, much like the tech depicted in the films “Her” and “Ex Machina.” They are personalized enough to engage us in entertaining discussions and respond to our behavior (in the physical world or within games) like a human would.

Part 2 of 3: the business of virtual influencers

Today’s discussion focuses on virtual influencers: fictional characters that build and engage followings of real people over social media. To explore the topic, I spoke with two experienced entrepreneurs:

  • Dylan Flinn is CEO of Shadows, an LA-based animation studio that’s building a roster of interactive characters for social media audiences. Dylan started his career in VC, funding companies such as Robinhood, Patreon and Bustle, and also spent two years as an agent at CAA.
  • Kevin Gould is CEO of Kombo Ventures, a talent management and brand incubation firm that has guided the careers of top influencers like Jake Paul and SSSniperWolf. He is the co-founder of three direct-to-consumer brands — Insert Name Here, Wakeheart and Glamnetic — and is an angel investor in companies like Clutter, Beautycon and DraftKings.

 


