Main article: Artificial intelligence


Skelter Labs raises $9M to help put Korea on the global AI map

13:21 | 21 February

China and the U.S. are the two countries most closely associated with artificial intelligence (AI) technology, but a startup in Korea is out to add its nation to the mix after it raised more than $9 million from some big-name investors.

Skelter Labs, which was founded in 2015 by Google’s former chief technical officer in Korea, announced today that it has raised KRW 10 billion ($9.3 million). Korean internet and messaging giant Kakao is a major backer, investing in the round via both its ‘KakaoBrain’ AI unit and its K-Cube VC firm, both of which are existing investors. Stonebridge Ventures and Lotte Homeshopping, the TV and internet shopping business owned by multi-billion dollar retail giant Lotte, also participated.

(Kakao Group CEO Jimmy Rim arrived at Kakao via its acquisition of K-Cube in 2015, before going on to take the top job later that year — so it is a pretty strategic asset.)

Skelter Labs started out as an app development house when it was founded by CEO Ted Cho, the former engineering site director at Google Korea, with products that included a flight booking app, a chatbot network and point-of-sale software. Over the past year, though, it has shifted its focus to AI.

It now works with a range of enterprises and businesses in Korea to bring its artificial intelligence and machine-learning smarts into play. In particular, the company specializes in conversational AI, deep learning — speech recognition and image recognition — and context recognition.

Like many AI startups that collaborate with third parties on services, Skelter Labs is fairly secretive about the exact scope of its work. A representative declined to name specific customers, but told TechCrunch that the startup is planning to expand its services overseas following this funding.

While it is working with large enterprise and third parties to refine its core technology, a representative explained that the wider vision is to bring its machine-learning technology to daily life and schedules. That, Skelter Labs explained, could take the form of “intelligent virtual assistant technology that can be widely applied to various areas including smart speakers, smartphones, home appliances, automobiles and wearable devices.”

The startup has more than 50 staff at its office in Seoul, with experience from companies like Google, Samsung, LG and science and technology research university KAIST’s AI division.

Korea is undoubtedly a hotbed for tech talent — with the likes of Samsung and LG employing huge numbers of people — but that is yet to translate into a huge number of tech startups, although the progress is promising.

In the AI space, Korea hasn’t received anything like the global attention of China or the U.S. Indeed, a recent CB Insights report concluded that China took 48 percent of the $5 billion-plus raised by AI startups in 2017. Overtaken by China for the first time, the U.S. placed second with 38 percent, leaving startups located in ‘the rest of the world’ to account for the remaining 14 percent.

Clearly, there’s potential to grow that tiny share. A $9 million round doesn’t move the investment needle on a global basis, but it is a significant sum for a Korean startup and it gives Skelter Labs the potential to accelerate its business.

 



Think fast – this system watches you answer questions to make sure you’re human

22:14 | 20 February

The war against bots is never-ending, though hopefully it doesn’t end in the Skynet-type scenario we all secretly expect. In the meantime it’s more about cutting down on spam, not knocking down hunter-killers. Still, the machines are getting smarter and simple facial recognition may not be enough to tell you’re a human. Machines can make faces now, it seems — but they’re not so good at answering questions with them.

Researchers at Georgia Tech are working on a CAPTCHA-type system that takes advantage of the fact that a human can quickly and convincingly answer just about any question, while even state-of-the-art facial animation and voice generation systems struggle to generate a response.

There are a variety of these types of human/robot differentiation tests out there, which do everything from testing your ability to identify letters, animals and street signs to simply checking whether you’re already logged into some Google service. But ideally it’s something easy for humans and hard for computers.

It’s easy for people to have faces — in fact, it’s positively difficult to not have a face. Yet it’s a huge amount of work for a computer to render and modify a reasonably realistic face (we’re assuming the system isn’t fooled by JPEGs).

It’s also easy for a person to answer a simple question, especially if it’s pointless. Computers, however, will spin their wheels coming up with a plausible answer to something like, “do you prefer dogs or cats?” As humans, we understand there’s no right answer to this question (well, no universally accepted one anyway) and we can answer immediately. A computer will have to evaluate all kinds of things just to understand the question, and double-check its answer, then render a face saying it. That takes time.

The solution being pursued by Erkam Uzun, Wenke Lee and others at Georgia Tech leverages this. The prospective logger-in is put on camera — this is assuming people will allow the CAPTCHA to use it, which is a whole other issue — and presented with a question. Of course there may be some secondary obfuscation — distorted letters and all that — but the content is key, keeping the answer simple enough for a human to answer quickly but still challenge a computer.

In tests, people answered within a second on average, while the very best computer efforts clocked in at six seconds at the very least, and often more. And that’s assuming the spammer has a high-powered facial rendering engine that knows what it needs to do. The verification system not only looks at the timing, but also checks the voice and face against the user’s records.
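
As a rough illustration of that check (not the researchers’ actual code), a verification gate might combine a response deadline with biometric matching against the user’s enrolled records. Everything here is a hypothetical stand-in — the helpers (ask_question, capture_response, face_similarity, voice_similarity) and the thresholds are assumptions, sketched in Python:

    import time

    RESPONSE_DEADLINE_SECS = 3.0   # humans averaged ~1s in tests; machines needed 6s+
    MATCH_THRESHOLD = 0.8          # assumed similarity cutoff for the face/voice checks

    def verify_human(ask_question, capture_response, face_similarity, voice_similarity):
        """Pass only if the respondent answers quickly enough AND the recorded
        face and voice both match the enrolled user."""
        started = time.monotonic()
        ask_question("Do you prefer dogs or cats?")  # no 'right' answer; speed is the signal
        video, audio = capture_response()            # the camera/microphone capture
        elapsed = time.monotonic() - started

        if elapsed > RESPONSE_DEADLINE_SECS:
            return False  # too slow: plausibly a synthesis pipeline composing an answer
        return (face_similarity(video) >= MATCH_THRESHOLD and
                voice_similarity(audio) >= MATCH_THRESHOLD)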

“We looked at the problem knowing what the attackers would likely do,” explained Georgia Tech researcher Simon Pak Ho Chung. “Improving image quality is one possible response, but we wanted to create a whole new game.”

It’s obviously a much more involved system than the simple CAPTCHAs that we encounter now and then on the web, but the research could lead to stronger login security on social networks and the like. With spammers and hackers gaining computing power and new capabilities by the day, we’ll probably need all the help we can get.

Featured Image: Franck Boston/Shutterstock

 



Hypergiant helps big brands look beyond the AI buzzwords

18:37 | 20 February

Artificial intelligence and machine learning are phrases that get tossed around a lot these days, to the point where they’re starting to seem meaningless. In fact, Ben Lamm said he’s seen the problem firsthand at his chatbot startup Conversable.

“We kind of noticed this huge gap,” Lamm said. “Everybody has an emotional reaction to AI, everybody wants AI, nobody seems to know what that means.”

So Lamm founded a new startup called Hypergiant, which will work with large brands and enterprises to address what he described as “this hunger for pragmatism in AI.”

In his view, most existing AI solutions either require “super powerful” technology, or they’re “complete BS marketing fluff.” Lamm’s goal is to find the sweet spot in the middle, where the technology can be used by Fortune 500 companies to solve real business problems.

For example, Hypergiant has already worked with TGI Friday’s to create Flanagan, an AI-powered mixologist. Sound gimmicky? Well, Lamm said Flanagan allows the restaurant chain to collect more data about its customers’ tastes, and to increase loyalty by offering personalized drink recommendations.

There are actually three divisions within Hypergiant. At Hypergiant Applied Sciences, the team will be working to develop and commercialize its own AI products. However, when it comes to working with brand customers, Lamm said Hypergiant Space Age Solutions (yes, that’s the real name) will happily adopt whatever technology best meets the customer’s needs.

And then there’s Hypergiant Ventures, which invests in AI startups. It’s already backed Pilosa, Cerebri AI and Clearblade.

Lamm founded Hypergiant with two of his old colleagues from Chaotic Moon, the technology studio that Accenture acquired in 2015 — John Fremont (who served as artificial intelligence lead at Accenture after the acquisition and is now Hypergiant’s chief strategy officer) and Will Womble (who serves as chief revenue officer). And while Lamm is Hypergiant’s executive chairman and CEO, he’ll continue working as CEO at Conversable.

Featured Image: Aniwhite/Shutterstock

 



Nuance ends development of the Swype keyboard apps

15:00 | 20 February

The party is over for third party keyboards. But hey, it was fun while it lasted. Nuance, the company that acquired veteran swipe-to-type keyboard maker Swype — all the way back in 2011, shelling out a cool $100M — has ended development of its Swype+Dragon dictation Android and iOS apps.

The news was reported earlier by the Xda developer blog, which spotted a Reddit post by a user and says it got confirmation from Nuance that development for both the Android and iOS apps has been discontinued. We’ve also reached out to the company with questions. A search for the Swype app on iOS now results in suggestions for rival keyboard apps.

As Xda points out, Nuance has been concentrating on its b2b business using its speech recognition tech to enable speech to text utility — such as a dedicated version of its dictation product which is targeted at healthcare workers.

The b2b space also provides the business model that’s so often been lacking for keyboard players in the consumer space (even those with hundreds of millions of users — frankly, the typing was on the wall when major player Swiftkey took the exit route to Microsoft back in 2016).

The wider context here is that as speech recognition technologies have got better — improvements in turn made possible thanks to language models trained with data sucked up from keyboard inputs — voice interfaces can start to supplant keyboard-based input methods in more areas.

In the consumer space, Google especially has also doubled down on its own Gboard keyboard (which includes a dictation feature), while Apple’s native iOS keyboard, though less fully featured, does include built-in next-word prediction. So with mobile’s platform giants wading in there’s added survival pressure on third party keyboard app makers.
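
For a sense of what next-word prediction involves under the hood, here is a minimal bigram-counting sketch in Python. Real keyboards use far larger language models trained on aggregated typing data, so treat this purely as an illustration:

    from collections import Counter, defaultdict

    def train(sentences):
        """Count, for each word, which words follow it and how often."""
        model = defaultdict(Counter)
        for sentence in sentences:
            words = sentence.lower().split()
            for prev, nxt in zip(words, words[1:]):
                model[prev][nxt] += 1
        return model

    def predict_next(model, word, k=3):
        """Suggest the k most frequent followers of the given word."""
        return [w for w, _ in model[word.lower()].most_common(k)]

    model = train(["see you soon", "see you later", "see you soon then"])
    print(predict_next(model, "you"))  # ['soon', 'later']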

Nuance targeting its efforts at a narrow problem like patient documentation also makes sense because of the specialist nomenclature and routine procedures involved, which naturally provides a better framework for voice input accuracy vs more unpredictable and/or creative environments where dictation inaccuracies might more easily creep in.

So while Siri might still suck at understanding what you’re asking, a dedicated speech to text engine that’s been trained on medical data-sets and processes can provide compelling utility for clinicians needing to quickly capture patient notes, potentially even reducing inaccuracies which can creep in via old handwritten ways of doing things.

Connectivity getting embedded into more and more types of devices, including things that lack screens like (many) smart speakers, also means voice interfaces are naturally getting more uplift. And Nuance has been building dictation products for cars too, for example.

Still, it’s not quite the end of the road for third party consumer keyboard plays. VC backed freemium keyboard app Grammarly — which last year raised a whopping $110M, promising not just to pick up typos but to improve your writing (while keylogging everything you type to do so) — has been making a lot of noise and plastering its ads all over the Internet to drive consumer uptake. (My App Store search for Swype returned an ad for Grammarly as the top result, for example.)

And while Grammarly is taking revenue via a set of pricing plans to get a more fully featured version of its service, it also says it’s using typing data to improve its underlying algorithms and language models. So it remains to be seen what its data-mining keyboard business might evolve into (or exit to) in time.

Another consumer player, the Fleksy keyboard, also got revived last year — with a new developer team behind it, whose vision is for the keyboard to be a services platform and whose stated mission is to keep an independent and pro-privacy keyboard dream alive. So don’t stop typing just yet.

 



Technological solutions to technology’s problems feature in “How to Fix The Future”

03:45 | 19 February

Larry Downes Contributor

Larry Downes is a senior industry and innovation fellow at Georgetown University's McDonough School of Business. He is the author of several books on the Internet and business.

In this edition of Innovate 2018, Andrew Keen finds himself in the hot seat.

Keen, whose new book, “How to Fix the Future”, was published earlier this month, discusses a moment when it has suddenly become fashionable for tech luminaries to abandon utopianism in favor of its opposite. The first generation of IPO winners have now become some of tech’s most vocal critics — conveniently of new products and services launched by a younger generation of entrepreneurs.

For example, Tesla’s Elon Musk says that advances in Artificial Intelligence present a “fundamental risk to the existence of civilization.” Salesforce CEO Marc Benioff believes Facebook ought to be regulated like a tobacco company because social media has become (literally?) carcinogenic. And Hungarian-American zillionaire George Soros last week called Google “a menace to society.”

Eschewing much of the over-the-top luddism that now fills the New York Times (“Silicon Valley Is Not Your Friend”), the Guardian (“The Tech Insiders Who Fear a Smartphone Dystopia”), and other mainstream media outlets, Keen proffers practical solutions to a wide range of tech-related woes. These include persistent public and private surveillance, labor displacement, and fake news.

From experiments in Estonia, Switzerland, Singapore, India and other digital outposts, Keen distills these five tools for fixing the future:

  • Increased regulation, particularly through antitrust law
  • New innovations designed to solve the unintended side-effects of earlier disruptors
  • Targeted philanthropy from tech’s leading moneymakers
  • Modern social safety nets for displaced workers and disenfranchised consumers
  • Educational systems geared for 21st century life

 



Fake news is an existential crisis for social media 

22:12 | 18 February

The funny thing about fake news is how mind-numbingly boring it can be. Not the fakes themselves — they’re constructed to be catnip clickbait to stoke the fires of rage of their intended targets. Be they gun owners. People of color. Racists. Republican voters. And so on.

The really tedious stuff is all the equally incomplete, equally self-serving pronouncements that surround ‘fake news’. Some of it made very visibly, a lot of it much less so.

Such as Russia painting the election interference narrative as a “fantasy” or a “fairytale” — even now, when presented with a 37-page indictment detailing what Kremlin agents got up to (including on US soil). Or Trump continuing to bluster that Russian-generated fake news is itself “fake news”.

And, indeed, the social media firms themselves, whose platforms have been the unwitting conduits for lots of this stuff, shaping the data they release about it — in what can look suspiciously like an attempt to downplay the significance and impact of malicious digital propaganda, because, well, that spin serves their interests.

The claim and counter claim spread out around ‘fake news’ like an amorphous cloud of meta-fakery, as reams of additional ‘information’ — some of it equally polarizing but a lot of it more subtle in its attempts to mislead (e.g. the publicly unseen ‘on background’ info routinely sent to reporters to try to invisibly shape coverage in a tech firm’s favor) — are applied in equal and opposite directions in the interests of obfuscation; using speech and/or misinformation as a form of censorship to fog the lens of public opinion.

This bottomless follow-up fodder generates yet more FUD in the fake news debate. Which is ironic, as well as boring, of course. But it’s also clearly deliberate.

As Zeynep Tufekci has eloquently argued: “The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself.”

So we also get subjected to all this intentional padding, applied selectively, to defuse debate and derail clear lines of argument; to encourage confusion and apathy; to shift blame and buy time. Bored people are less likely to call their political representatives to complain.

Truly fake news is the inception layer cake that never stops being baked. Because pouring FUD onto an already polarized debate — and seeking to shift what are by nature shifty sands (after all information, misinformation and disinformation can be relative concepts, depending on your personal perspective/prejudices) — makes it hard for any outsider to nail this gelatinous fakery to the wall.

Why would social media platforms want to participate in this FUDing? Because it’s in their business interests not to be identified as the primary conduit for democracy damaging disinformation.

And because they’re terrified of being regulated on account of the content they serve. They absolutely do not want to be treated as the digital equivalents to traditional media outlets.

But the stakes are high indeed when democracy and the rule of law are on the line. And by failing to be pro-active about the existential threat posed by digitally accelerated disinformation, social media platforms have unwittingly made the case for external regulation of their global information-shaping and distribution platforms louder and more compelling than ever.

*

Every gun outrage in America is now routinely followed by a flood of Russian-linked Twitter bot activity. Exacerbating social division is the name of this game. And it’s playing out all over social media continually, not just around elections.

In the case of Russian digital meddling connected to the UK’s 2016 Brexit referendum — which we now know for sure existed, though still without all of the data needed to quantify its actual impact — the chairman of a UK parliamentary committee that’s running an enquiry into fake news has accused both Twitter and Facebook of essentially ignoring requests for data and help, and of doing none of the work the committee asked of them.

Facebook has since said it will take a more thorough look through its archives. And Twitter has drip-fed some tidbits of additional information. But more than a year and a half after the vote itself, many, many questions remain.

And just this week another third party study suggested that the impact of Russian Brexit trolling was far larger than has been so far conceded by the two social media firms.

The PR company that carried out this research included in its report a long list of outstanding questions for Facebook and Twitter.

Here they are:

  • How much did [Russian-backed media outlets] RT, Sputnik and Ruptly spend on advertising on your platforms in the six months before the referendum in 2016?
  • How much have these media platforms spent to build their social followings?
  • Sputnik has no active Facebook page, but has a significant number of Facebook shares for anti-EU content, does Sputnik have an active Facebook advertising account?
  • Will Facebook and Twitter check the dissemination of content from these sites to check they are not using bots to push their content?
  • Did either RT, Sputnik or Ruptly use ‘dark posts’ on either Facebook or Twitter to push their content during the EU referendum, or have they used ‘dark posts’ to build their extensive social media following?
  • What processes do Facebook or Twitter have in place when accepting advertising from media outlets or state owned corporations from autocratic or authoritarian countries? Noting that Twitter no longer takes advertising from either RT or Sputnik.
  • Did any representatives of Facebook or Twitter pro-actively engage with RT or Sputnik to sell inventory, products or services on the two platforms in the period before 23 June 2016?

We put these questions to Facebook and Twitter.

In response, a Twitter spokeswoman pointed us to some “key points” from a previous letter it sent to the DCMS committee (emphasis hers):

In response to the Commission’s request for information concerning Russian-funded campaign activity conducted during the regulated period for the June 2016 EU Referendum (15 April to 23 June 2016), Twitter reviewed referendum-related advertising on our platform during the relevant time period. 

Among the accounts that we have previously identified as likely funded from Russian sources, we have thus far identified one account — @RT_com — which promoted referendum-related content during the regulated period. $1,031.99 was spent on six referendum-related ads during the regulated period.

With regard to future activity by Russian-funded accounts, on 26 October 2017, Twitter announced that it would no longer accept advertisements from RT and Sputnik and will donate the $1.9 million that RT had spent globally on advertising on Twitter to academic research into elections and civil engagement. That decision was based on a retrospective review that we initiated in the aftermath of the 2016 U.S. Presidential Elections and following the U.S. intelligence community’s conclusion that both RT and Sputnik have attempted to interfere with the election on behalf of the Russian government. Accordingly, @RT_com will not be eligible to use Twitter’s promoted products in the future.

The Twitter spokeswoman declined to provide any new on-the-record information in response to the specific questions.

A Facebook representative first asked to see the full study, which we sent, then failed to provide a response to the questions at all.

The PR firm behind the research, 89up, makes this particular study fairly easy for them to ignore: it’s a pro-Remain organization, the research was not undertaken by a group of impartial university academics, the study isn’t peer reviewed, and so on.

But, in an illustrative twist, if you Google “89up Brexit”, Google News injects fresh Kremlin-backed opinions into the search results it delivers — see the top and third result here…

Clearly, there’s no such thing as ‘bad propaganda’ if you’re a Kremlin disinformation node.

Even a study decrying Russian election meddling presents an opportunity for respinning and generating yet more FUD — in this instance by calling 89up biased because it supported the UK staying in the EU. Making it easy for Russian state organs to slur the research as worthless.

The social media firms aren’t making that point in public. They don’t have to. That argument is being made for them by an entity whose former brand name was literally ‘Russia Today’. Fake news thrives on shamelessness, clearly.

It also very clearly thrives in the limbo of fuzzy accountability where politicians and journalists essentially have to scream at social media firms until blue in the face to get even partial answers to perfectly reasonable questions.

Frankly, this situation is looking increasingly unsustainable.

Not least because governments are cottoning on — some are setting up departments to monitor malicious disinformation and even drafting anti-fake news election laws.

And while the social media firms have been a bit more alacritous to respond to domestic lawmakers’ requests for action and investigation into political disinformation, that just makes their wider inaction, when viable and reasonable concerns are brought to them by non-US politicians and other concerned individuals, all the more inexcusable.

The user-bases of Facebook, Twitter and YouTube are global. Their businesses generate revenue globally. And the societal impacts from maliciously minded content distributed on their platforms can be very keenly felt outside the US too.

But if tech giants have treated requests for information and help about political disinformation from the UK — a close US ally — so poorly, you can imagine how unresponsive and/or unreachable these companies are to further flung nations, with fewer or zero ties to the homeland.

Earlier this month, in what looked very much like an act of exasperation, the chair of the UK’s fake news enquiry, Damian Collins, flew his committee over the Atlantic to question Facebook, Twitter and Google policy staffers in an evidence session in Washington.

None of the companies sent their CEOs to face the committee’s questions. None provided a substantial amount of new information. The full impact of Russia’s meddling in the Brexit vote remains unquantified.

One problem is fake news. The other problem is the lack of incentive for social media companies to robustly investigate fake news.

*

The partial data about Russia’s Brexit dis-ops, which Facebook and Twitter have trickled out so far, like blood from the proverbial stone, is unhelpful exactly because it cannot clear the matter up either way. It just introduces more FUD, more fuzz, more opportunities for purveyors of fake news to churn out more maliciously minded content, as RT and Sputnik demonstrably have.

In all probability, it also pours more fuel on Brexit-based societal division. The UK, like the US, has become a very visibly divided society since the narrow 52:48 vote to leave the EU. What role did social media and Kremlin agents play in exacerbating those divisions? Without hard data it’s very difficult to say.

But, at the end of the day, it doesn’t matter whether 89up’s study is accurate or overblown; what really matters is that no one except the Kremlin and the social media firms themselves is in a position to judge.

And no one in their right mind would now suggest we swallow Russia’s line that so called fake news is a fiction sicked up by over-imaginative Russophobes.

But social media firms also cannot be trusted to truth tell on this topic, because their business interests have demonstrably guided their actions towards equivocation and obfuscation.

Self interest also compellingly explains how poorly they have handled this problem to date; and why they continue — even now — to impede investigations by not disclosing enough data and/or failing to interrogate deeply enough their own systems when asked to respond to reasonable data requests.

A game of ‘uncertain claim vs self-interested counter claim’, as competing interests duke it out to try to land a knock-out blow in the game of ‘fake news and/or total fiction’, serves no useful purpose in a civilized society. It’s just more FUD for the fake news mill.

Especially as this stuff really isn’t rocket science. Human nature is human nature. And disinformation has been shown to have a more potent influencing impact than truthful information when the two are presented side by side. (As they frequently are by and on social media platforms.) So you could do robust math on fake news — if only you had access to the underlying data.

But only the social media platforms have that. And they’re not falling over themselves to share it. Instead, Twitter routinely rubbishes third party studies exactly because external researchers don’t have full visibility into how its systems shape and distribute content.

Yet external researchers don’t have that visibility because Twitter prevents them from seeing how it shapes tweet flow. Therein lies the rub.

Yes, some of the platforms in the disinformation firing line have taken some preventative actions since this issue blew up so spectacularly, back in 2016. Often by shifting the burden of identification to unpaid third parties (fact checkers).

Facebook has also built some anti-fake news tools to try to tweak what its algorithms favor, though nothing it’s done on that front to date looks very successful (even as a more major change to its News Feed, to make it less of a news feed, has had a unilateral and damaging impact on the visibility of genuine news organizations’ content — so is arguably going to be unhelpful in reducing Facebook-fueled disinformation).

In another instance, Facebook’s mass closing of what it described as “fake accounts” ahead of, for example, the UK and French elections can also look problematic, in democratic terms, because we don’t fully know how it identified the particular “tens of thousands” of accounts to close. Nor what content they had been sharing prior to this. Nor why it hadn’t closed them before if they were indeed Kremlin disinformation-spreading bots.

More recently, Facebook has said it will implement a disclosure system for political ads, including posting a snail mail postcard to entities wishing to pay for political advertising on its platform — to try to verify they are indeed located in the territory they say they are.

Yet its own VP of ads has admitted that Russian efforts to spread propaganda are ongoing and persistent, and do not solely target elections or politicians…

The main goal of the Russian propaganda and misinformation effort is to divide America by using our institutions, like free speech and social media, against us. It has stoked fear and hatred amongst Americans. It is working incredibly well. We are quite divided as a nation.

— Rob Goldman (@robjective) February 17, 2018

The Russian campaign is ongoing. Just last week saw news that Russian spies attempted to sell a fake video of Trump with a hooker to the NSA. US officials cut off the deal because they were wary of being entangled in a Russian plot to create discord. https://t.co/jO9GwWy2qH

— Rob Goldman (@robjective) February 17, 2018

The wider point is that social division is itself a tool for impacting democracy and elections — so if you want to achieve ongoing political meddling that’s the game you play.

You don’t just fire up your disinformation guns ahead of a particular election. You work to worry away at society’s weak points continuously to fray tempers and raise tensions.

Elections don’t take place in a vacuum. And if people are angry and divided in their daily lives then that will naturally be reflected in the choices made at the ballot box, whenever there’s an election.

Russia knows this. And that’s why the Kremlin has been playing such a long propaganda game. Why it’s not just targeting elections. Its targets are fault lines in the fabric of society — be it gun control vs gun owners or conservatives vs liberals or people of color vs white supremacists — whatever issues it can seize on to stir up trouble and rip away at the social fabric.

That’s what makes digitally amplified disinformation an existential threat to democracy and to civilized societies. Nothing on this scale has been possible before.

And it’s thanks, in great part, to the reach and power of social media platforms that this game is being played so effectively — because these platforms have historically preferred to champion free speech rather than root out and eradicate hate speech and abuse; inviting trolls and malicious actors to exploit the freedom afforded by their free speech ideology and to turn powerful broadcast and information-targeting platforms into cyberweapons that blast the free societies that created them.

Social media’s filtering and sorting algorithms also crucially failed to make any distinction between information and disinformation. Which was their great existential error of judgement, as they sought to eschew editorial responsibility while simultaneously working to dominate and crush traditional media outlets which do operate within a more tightly regulated environment (and, at least in some instances, have a civic mission to truthfully inform).

Publishers have their own biases too, of course, but those biases tend to be writ large — vs social media platforms’ faux claims of neutrality when in fact their profit-seeking algorithms have been repeatedly caught preferring (and thus amplifying) dis- and misinformation over and above truthful but less clickable content.

But if your platform treats everything and almost anything indiscriminately as ‘content’, then don’t be surprised if fake news becomes indistinguishable from the genuine article because you’ve built a system that allows sewage and potable water to flow through the same distribution pipe.

So it’s interesting to see Goldman’s suggested answer to social media’s existential fake news problem attempting, even now, to deflect blame — by arguing that the US education system should take on the burden of arming citizens to deconstruct all the dubious nonsense that social media platforms are piping into people’s eyeballs.

Lessons in critical thinking are certainly a good idea. But fakes are compelling for a reason. Look at the tenacity with which conspiracy theories take hold in the US. In short, it would take a very long time and a very large investment in critical thinking education programs to create any kind of shielding intellectual capacity able to protect the population at large from being fooled by maliciously crafted fakes.

Indeed, human nature actively works against critical thinking. Fakes are more compelling, more clickable than the real thing. And thanks to technology’s increasing potency, fakes are getting more sophisticated, which means they will be increasingly plausible — and get even more difficult to distinguish from the truth. Left unchecked, this problem is going to get existentially worse too.

So, no, education can’t fix this on its own. And for Facebook to try to imply it can is yet more misdirection and blame shifting.

*

If you’re the target of malicious propaganda you’ll very likely find the content compelling because the message is crafted with your specific likes and dislikes in mind. Imagine, for example, your trigger reaction to being sent a deepfake of your wife in bed with your best friend.

That’s what makes this incarnation of propaganda so potent and insidious vs other forms of malicious disinformation (of course propaganda has a very long history — but never in human history have we had such powerful media distribution platforms that are simultaneously global in reach and capable of delivering individually targeted propaganda campaigns. That’s the crux of the shift here).

Fake news is also insidious because of the lack of civic restraints on disinformation agents, which makes maliciously minded fake news so much more potent and problematic than plain old digital advertising.

I mean, even people who’ve searched for ‘slippers’ online an awful lot of times, because they really love buying slippers, are probably only in the market for buying one or two pairs a year — no matter how many adverts for slippers Facebook serves them. They’re also probably unlikely to actively evangelize their slipper preferences to their friends, family and wider society — by, for example, posting about their slipper-based views on their social media feeds and/or engaging in slipper-based discussions around the dinner table or even attending pro-slipper rallies.

And even if they did, they’d have to be a very charismatic individual indeed to generate much interest and influence. Because, well, slippers are boring. They’re not a polarizing product. There aren’t tribes of slipper owners as there are smartphone buyers. Because slippers are a non-complex, functional comfort item with minimal fashion impact. So an individual’s slipper preferences, even if very liberally put about on social media, are unlikely to generate strong opinions or reactions either way.

Political opinions and political positions are another matter. They are frequently what define us as individuals. They are also what can divide us as a society, sadly.

To put it another way, political opinions are not slippers. People rarely try a new one on for size. Yet social media firms spent a very long time indeed trying to sell the ludicrous fallacy that content about slippers and maliciously crafted political propaganda, mass-targeted tracelessly and inexpensively via their digital ad platforms, was essentially the same stuff. See: Zuckerberg’s infamous “pretty crazy idea” comment, for example.

Indeed, look back over the last few years’ news about fake news, and social media platforms have demonstrably sought to play down the idea that the content distributed via their platforms might have had any sort of quantifiable impact on the democratic process at all.

Yet these are the same firms that make money — very large amounts of money, in some cases — by selling their capability to influentially target advertising.

So they have essentially tried to claim that it’s only when foreign entities engage with their digital advertising platforms, and use their digital advertising tools — not to sell slippers or a Netflix subscription but to press people’s biases and prejudices in order to sow social division and impact democratic outcomes — that, all of a sudden, these powerful tech tools cease to function.

And we’re supposed to take it on trust from the same self-interested companies that the unknown quantity of malicious ads being fenced on their platforms is but a teeny tiny drop in the overall content ocean they’re serving up so hey why can’t you just stop overreacting?

That’s also pure misdirection of course. The wider problem with malicious disinformation is it pervades all content on these platforms. Malicious paid-for ads are just the tip of the iceberg.

So sure, the Kremlin didn’t spend very much money paying Twitter and Facebook for Brexit ads — because it didn’t need to. It could (and did) freely set up ranks of bot accounts on their platforms to tweet and share content created by RT, for example — frequently skewed towards promoting the Leave campaign, according to multiple third party studies — amplifying the reach and impact of its digital propaganda without having to send the tech firms any more checks.

And indeed, Russia is still operating ranks of bots on social media which are actively working to divide public opinion, as Facebook freely admits.

Maliciously minded content has also been shown to be preferred by (for example) Facebook’s or Google’s algorithms vs truthful content, because their systems have been tuned to what’s most clickable and shareable and can also be all too easily gamed.

And, despite their ongoing techie efforts to fix what they view as some kind of content-sorting problem, their algorithms continue to get caught and called out for promoting dubious stuff.

Thing is, this kind of dynamic, contextual judgement is very hard for AI — as Zuckerberg himself has conceded. But human review is unthinkable. Tech giants simply do not want to employ the numbers of humans that would be necessary to always be making the right editorial call on each and every piece of digital content.

If they did, they’d instantly become the largest media organizations in the world — needing at least hundreds of thousands (if not millions) of trained journalists to serve every market and local region they cover.

They would also instantly invite regulation as publishers — ergo, back to the regulatory nightmare they’re so desperate to avoid.

All of this is why fake news is an existential problem for social media.

And why Zuckerberg’s 2018 yearly challenge will be his toughest ever.

Little wonder, then, that these firms are now so fixed on trying to narrow the debate and concern to focus specifically on political advertising. Rather than malicious content in general.

Because if you sit and think about the full scope of malicious disinformation, coupled with the automated global distribution platforms that social media has become, it soon becomes clear this problem scales as big and wide as the platforms themselves.

And at that point only two solutions look viable:

A) bespoke regulation, including regulatory access to proprietary algorithmic content-sorting engines.

B) breaking up big tech so none of these platforms have the reach and power to enable mass-manipulation.

The threat posed by info-cyberwarfare on tech platforms that straddle entire societies and have become attention-sapping powerhouses — swapping out editorially structured news distribution for machine-powered content hierarchies that lack any kind of civic mission — is really only just beginning to become clear, as the detail of abuses and misuses slowly emerges. And as certain damages are felt.

Facebook’s user base is a staggering two billion+ at this point — way bigger than the population of the world’s most populous country, China. Google’s YouTube has over a billion users. Which the company points out amounts to more than a third of the entire user-base of the Internet.

What does this seismic shift in media distribution and consumption mean for societies and democracies? We can hazard guesses but we’re not in a position to know without much better access to tightly guarded, commercially controlled information streams.

Really, the case for social media regulation is starting to look unstoppable.

But even with unfettered access to internal data and the potential to control content-sifting engines, how do you fix a problem that scales so very big and broad?

Regulating such massive, global platforms would clearly not be easy. In some countries Facebook is so dominant it essentially is the Internet.

So, again, this problem looks existential. And Zuck’s 2018 challenge is more Sisyphean than Herculean.

And it might well be that competition concerns are not the only trigger-call for big tech to get broken up this year.

Featured Image: Quinn Dombrowski/Flickr UNDER A CC BY-SA 2.0 LICENSE

 



A peek inside Alphabet’s investing universe

21:10 | 17 February

Jason Rowley Contributor

Jason Rowley is a venture capital and technology reporter for Crunchbase News.

More posts by this contributor:
  • Raise softly and deliver a big exit
  • Mobile delivers high exit multiples despite broader market slowdown

Chances are high you have heard of Google. You are likely a contributor to one of the 3.5 billion search queries the website processes daily. But unless you’re a venture capitalist, an entrepreneur or a slightly obsessive technology journalist, you may not know that Google — or, more properly, Alphabet, the corporate parent to the search and internet ad giant — is also in the business of investing in startups. And, like most of what Google does, Alphabet invests at scale.

Today we’re going to undertake, if you will forgive the pun, a search of Google’s venture investments, its portfolio’s performance and what the company’s investing activity may say about its plans going forward.

Alphabet was the most active corporate investor in 2017

Taken together, Alphabet is one of the most prolific corporate investors in startups. In 2017, Crunchbase data shows that Alphabet’s three main investing arms — GV (formerly known as Google Ventures), CapitalG and Gradient Ventures — and Google itself invested in 103 deals.

(Crunchbase News contacted Alphabet for this story but did not hear back in time for publication.)

Below, you’ll find a chart comparing Alphabet’s investment activity to other major corporate investors, based on publicly disclosed deals captured in Crunchbase data.

For years, Intel and its venture arm Intel Capital topped the ranks of most active corporate venture investors. But for 2017, Crunchbase data suggests that Alphabet’s primary venture funds unseat the chip manufacturer. With 72 deals struck, Tencent Holdings and its venture affiliates rank second and SoftBank, which has a $100 billion pool of capital to slosh around, comes in third with 64 deals announced in 2017.

The Alphabet investing universe

As we alluded to earlier, Alphabet has a somewhat unusual setup for a corporate investor. Data shows that Alphabet makes the overwhelming majority of its equity investments out of four primary entities:

  • GV, formerly known as Google Ventures, is Alphabet’s most prolific venture fund.
  • Growth equity fund CapitalG invests primarily in late-stage deals.
  • Gradient Ventures, Google’s newest fund, is focused on artificial intelligence deals.
  • Finally, Google itself has made a number of direct corporate venture investments.

Alphabet and its funds upped their pace of investing too, as the chart below shows:

In 2017, Alphabet’s equity investment deal volume topped historical highs from 2014.

In addition to these equity investment operations, Google operates the Launchpad Accelerator, which grants $50,000 equity-free to startups in Africa, Asia, South America and Eastern Europe. The company also issues grants and makes impact-oriented investments out of an entity called Google.org.

Taken together, here is what the Alphabet investment universe looks like:

The network visualization above shows the connections between Alphabet’s various investing groups and their respective portfolios.[1] This graphic depicts 676 connections between six Google investing groups (labeled above in yellow), 570 portfolio organizations and 75 companies that acquired Alphabet-backed portfolio companies.
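
To make the shape of that graph concrete, here is a small hypothetical sketch in Python using networkx (the article’s own graphic was built with Gephi). The rows are limited to shared holdings named in this story, not a real Crunchbase export:

    import networkx as nx

    # (investor, portfolio company) pairs drawn from examples in this story
    rounds = [
        ("GV", "Gusto"), ("CapitalG", "Gusto"),
        ("GV", "Pindrop"), ("CapitalG", "Pindrop"),
        ("Google", "23andMe"), ("GV", "23andMe"),
    ]

    G = nx.Graph()
    for investor, company in rounds:
        G.add_node(investor, kind="fund")
        G.add_node(company, kind="portfolio")
        G.add_edge(investor, company)

    # Portfolio overlap between two funds = companies they both back
    overlap = set(G.neighbors("GV")) & set(G.neighbors("CapitalG"))
    print(overlap)  # {'Gusto', 'Pindrop'} (set order may vary)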

And, for the most part, there isn’t as much overlap as one may expect. CapitalG and GV only share two portfolio companies. GV invested in the seed round of Gusto, the payroll and HR software platform, and both GV and CapitalG invested in Gusto’s Series B round. GV and CapitalG also invested in Pindrop’s Series C round, although CapitalG led that round. Apart from those two companies, though, Crunchbase data doesn’t suggest any other portfolio overlap between GV and CapitalG.

Google and GV also share some portfolio companies. Google led INVIDI Technologies’ Series D round, in which GV was a mere participant. Google also led the Series A round of popular consumer genetics company 23andMe. Google followed on in the Series B round, in which Google co-founder Sergey Brin was also an investor. GV didn’t invest in 23andMe until its Series C. GV continued its investment all the way through 23andMe’s Series E. Google and GV are also investors in Ripcord, an early-stage company building robots that scan and digitize paper documents.

Shared exits

If there isn’t much overlap between Alphabet’s assorted funds and their investing activity, where is it then? The answer, it seems, may be in the exit data.

A wide range of companies have acquired startups in which one or more of Alphabet’s capital deployment arms invested. Crunchbase data shows that 81 entities have acquired 100 companies in which Google invested. Of those, it seems like Alphabet is its own best customer, as the chart below shows:

All in, Alphabet has acquired seven companies in which it had previously invested. Google itself acquired six companies it previously invested in, and its X unit (formerly known as Google X) acquired Makani Power, a company that developed airborne wind turbines, in which Google had directly invested. Other frequent trading partners with Google are Cisco, which has acquired six Google-backed companies, and Yahoo (now, together with AOL, part of Verizon-controlled Oath) with five acquisitions.

As an aside, Google invested in both SolarCity and Tesla, two companies with ties to Elon Musk. In 2011, Google invested $280 million in SolarCity, a company founded by two cousins of Musk. Google and its co-founders Larry Page and Sergey Brin invested in Tesla’s Series C round alongside Musk, Tesla’s co-founder. Tesla went public in 2010 and completed its acquisition of SolarCity, a $2.6 billion all-stock deal, in 2016.

And as the network visualization above shows, Tesla isn’t the only Alphabet portfolio company to go public. Alphabet funds struck venture deals with 11 other companies that have since gone public, including Baidu, HubSpot, Cloudera, Spero Therapeutics, Lending Club and Zynga.

Deals spanning A to Z

If one had to describe Alphabet funds’ collective portfolio of venture deals in one word, it would be “eclectic.” Unlike many corporate venture portfolios, there doesn’t appear to be a unifying, cohesive theme to Alphabet’s outside investments. The AI-focus of Gradient Ventures aside, Alphabet is just as likely to invest in a homeowners insurance company like Lemonade or a customer support platform like UJET (which Crunchbase News covered recently) as it is to invest in non-dairy milk producer Ripple Foods or African tech recruiting platform Andela.

The diversity of Alphabet’s venture investments echoes the diverse collection of businesses, initiatives and long-shot bets under its corporate umbrella. And just like it’s difficult to predict what kind of new project Alphabet will launch next, it seems that no amount of searching and sifting can say what its venture arms will embrace next.

  1. The network visualization was created using Gephi, an open-source software package used for making network visualizations, and the ForceAtlas2 layout algorithm.
Featured Image: Li-Anne Dias

 



This autonomous 3D scanner figures out where it needs to look

03:09 | 16 February

If you need to make a 3D model of an object, there are plenty of ways to do so, but most of them are only automated to the extent that they know how to spin in circles around that object and put together a mesh. This new system from Fraunhofer does it more intelligently, getting a basic idea of the object to be scanned and planning out what motions will let it do so efficiently and comprehensively.

It takes what can be a time-consuming step out of the process, in which a scan is complete and the user has to inspect it, find where it falls short (an overhanging part occluding another, for instance, or an area of greater complexity that requires closer scrutiny), and customize a new scan to make up for these gaps. Alternatively, the scanner might already have to have a 3D model loaded in order to recognize what it’s looking at and know where to focus.

Fraunhofer’s project, led by Pedro Santos at the Institute for Computer Graphics Research, aims to get it right the first time by having the system evaluate its own imagery as it goes and plan its next move.
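
Fraunhofer hasn’t published the planner itself, but the evaluate-as-you-go behavior described resembles a classic next-best-view loop. The Python sketch below is purely conceptual; the scanner and mesh helpers are hypothetical:

    def autonomous_scan(scanner, max_passes=20, coverage_target=0.98):
        """Coarse first sweep, then repeatedly scan from whichever reachable
        viewpoint is expected to reveal the most still-unseen surface."""
        mesh = scanner.initial_sweep()
        for _ in range(max_passes):
            gaps = mesh.find_uncovered_regions()  # holes, occlusions, fine detail
            if not gaps or mesh.coverage() >= coverage_target:
                break
            next_pose = max(scanner.reachable_poses(),
                            key=lambda pose: mesh.expected_new_coverage(pose, gaps))
            mesh.integrate(scanner.capture(next_pose))
        return mesh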

“The special thing about our system is that it scans components autonomously and in real time,” he said in a news release. It’s able to “measure any component, irrespective of its design — and you don’t have to teach it.”

This could help in creating one-off duplicates of parts the system has never seen before, like a custom-made lamp or container, or a replacement for a vintage car’s door or engine.

If you happen to be in Hanover in April, drop by Hannover Messe and try it out for yourself.

Featured Image: Fraunhofer

 



MIT’s new chip could bring neural nets to battery-powered gadgets

19:37 | 14 February

MIT researchers have developed a chip designed to speed up the hard work of running neural networks, while also reducing the power consumed when doing so dramatically – by up to 95 percent, in fact. The basic concept involves simplifying the chip design so that shuttling of data between different processors on the same chip is taken out of the equation.
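
To see why that shuttling matters: neural-network inference is dominated by dot products between input activations and stored weights, and in a conventional design every multiply-accumulate means moving both operands between memory and compute units. A plain-Python illustration of the operation itself (not of the hardware) follows:

    def dot(inputs, weights):
        """One neuron's core workload: a multiply-accumulate per weight."""
        acc = 0.0
        for x, w in zip(inputs, weights):
            acc += x * w  # each step would normally incur memory traffic for x and w
        return acc

    print(dot([0.5, -1.0, 2.0], [0.2, 0.4, 0.1]))  # ≈ -0.1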

The big advantage of this new method, developed by a team led by MIT graduate student Avishek Biswas, is that it could potentially be used to run neural networks on smartphones, household devices and other portable gadgets, rather than requiring servers drawing constant power from the grid.

Why is that important? Because it means that phones of the future using this chip could do things like advanced speech and face recognition using neural nets and deep learning locally, rather than relying on more crude, rule-based algorithms, or routing information to the cloud and back to interpret results.

Computing ‘at the edge,’ as it’s called, or at the site of sensors actually gathering the data, is increasingly something companies are pursuing and implementing, so this new chip design method could have a big impact on that growing opportunity should it become commercialized.

Featured Image: Zapp2Photo/Getty Images

 



HTC’s smartphone chief and ex-CFO, Chialin Chang, resigns

12:57 | 14 February

HTC’s smartphone and connected devices president and former CFO, Chialin Chang, has resigned. The move, spotted earlier by Engadget, was announced today and is effective immediately.

Chang joined the company in 2012 as CFO. He also previously ran HTC’s global sales business, before eventually becoming president of smartphones and connected devices in 2016.

HTC’s investor note specifies Chang is leaving for “personal career plan” reasons, and local press in Taiwan is reporting that Chang intends to set up his own AI startup.

HTC does not list a replacement for the position and did not respond when we asked about its plans for rehiring a smartphone chief. A company spokesperson provided the following statement on the news: “We can confirm Chialin Chang has resigned from his position as President of the Smartphone and Connected Devices Business at HTC.  We thank him for his dedication to the Company for the last six years and wish him well in his future endeavours.”

Chang’s departure follows HTC transferring more than 2,000 of its best engineers to Google at the end of last month on the completion of a $1.1BN cooperation agreement between the pair, which was announced last fall — with Mountain View taking charge of many of the HTC engineers who worked on its Pixel devices.

In exchange HTC has a chunk of cash but the size of its engineering team has shrunk by about a fifth — and it’s now down a smartphone president to boot. How it can go about reviving a smartphone business which has for years suffered lackluster earnings — on account of being outmanoeuvred and outgunned by faster and better resourced rivals — remains an open-ended question at this point.

In recent years HTC has been increasingly focused on its emerging VR business — under the Vive brand and in partnership with games publisher Valve. Though, in September, it said it remained committed to both VR and smartphones, including its U series of premium smartphones.

And it also said its collaboration with Google means it can continue to work with the engineers that now work for the Pixel maker.

Chairwoman and CEO Cher Wang is giving a keynote speech at the Mobile World Congress tradeshow later this month. But there will be no flashy HTC press conference — as was the norm in its smartphone heyday.

Indeed, it does not appear to have any flagship hardware launches in the pipe for MWC 2018. Though, once again, it will be demoing its Vive VR technology to delegates at the world’s biggest mobile show.

Featured Image: Joan Cros Garcia/Corbis/Getty Images

 


