Blog of the website «TechCrunch»


Main article: Brexit


Laurel Bowden of VC firm 83North on the European deep tech and startup ecosystems

21:56 | 7 October

London and Tel Aviv based VC firm 83North has closed out its fifth fund at $300 million, as we reported earlier. It last raised a $250 million fund in 2017 and expects to continue the same investment mix, while tracking developments in emerging areas like healthcare AI and autonomous vehicles.

In a conversation with general partner Laurel Bowden, the veteran investor shared a few further thoughts with Extra Crunch — talking about the tech scene in Europe vs Israel, what the firm looks for in a team and tips on scaling globally.

The interview has been lightly edited for clarity. 

TechCrunch: Is Europe starting to catch up to Israel when it comes to deep tech startups?

Laurel Bowden: We clearly think we have in our portfolio some deep tech. And in other VC portfolios too — there’s clearly some deep tech [coming out of Europe]. And then on the reverse side you’ve seen more consumer-related stuff coming out of Israel. But still if you take a blanket look, we see more data infrastructure, security, storage coming out of Israel than we see in Europe — that’s for sure.


How UK VCs are managing the risk of a ‘no deal’ Brexit

00:08 | 29 August

Grab your economic zombie mask: A Halloween “no deal” Brexit is careening into view. New prime minister Boris Johnson has pledged that the country will leave the European Union on October 31 with or without a deal — “do or die” as he put it. A year earlier as the foreign secretary, he used an even more colorful phrase to skewer diplomatic concern about the impact of a hard Brexit on business — reportedly condensing his position to a pithy expletive: “Fuck business.”

It was only a few years ago, during the summer of 2016 following the shock result of the UK’s in/out EU referendum, that the government’s aspiration was to leave in a “smooth and orderly” manner as the prelude to a “close and special” future trading partnership, as then PM Theresa May put it. A withdrawal deal was negotiated but repeatedly rejected by parliament. The PM herself was the next to be despatched.

Now, here we are. The U.K. has arrived at a political impasse in which the nation is coasting toward a Brexit cliff edge. We’re at the brink here, with domestic politics turned upside down, because “no deal” is the only leverage left for “do or die” brexiteers that parliament can’t easily block.

Ironic, because there’s no majority in parliament for “no deal.” But the end of the Article 50 extension period represents a legal default — a hard deadline that means the U.K. will soon fall out of the EU unless additional action is taken. Of course time itself can’t be made to grind to a halt. So “no deal” is the easy option for a government that’s made doing anything else to sort Brexit really, really hard.

After three full years of Brexit uncertainty, the upshot for U.K. business is there’s no end in sight to even the known unknowns. And now a clutch of unknown unknowns seems set to pounce come Halloween when the country steps into the chaos of leaving with nada, as the current government says it must.

So how is the U.K. tech industry managing the risk of a chaotic exit from the European Union? The prevailing view among investors about founders is that Brexit means uncertain business as usual. “Resilience is the mother of entrepreneurship!” was the almost glib response of one VC asked how founders are coping.

“This is no worse than the existential dread that most founders feel every day about something or other,” said another, dubbing Brexit “just an enormous distraction.” And while he said the vast majority of founders in the firm’s portfolio would rather the whole thing was cancelled — “most realize it’s not going to be so they just want to get on.”


Brittany Kaiser dumps more evidence of Brexit’s democratic trainwreck

17:40 | 30 July

A UK parliamentary committee has published new evidence fleshing out how membership data was passed from UKIP, a pro-Brexit political party, to Leave.EU, a Brexit supporting campaign active in the 2016 EU referendum — via the disgraced and now defunct data company, Cambridge Analytica.

In evidence sessions last year, during the DCMS committee’s enquiry into online disinformation, it was told by both the former CEO of Cambridge Analytica and the main financial backer of the Leave.EU campaign, the businessman Arron Banks, that Cambridge Analytica did no work for the Leave.EU campaign.

Documents published today by the committee clearly contradict that narrative — revealing internal correspondence about the use of a UKIP dataset to create voter profiles to carry out “national microtargeting” for Leave.EU.

They also show CA staff raising concerns about the legality of the plan to model UKIP data to enable Leave.EU to identify and target receptive voters with pro-Brexit messaging.

The UK’s 2016 in-out EU referendum saw the voting public narrowly vote to leave — by 52:48.

New evidence from Brittany Kaiser

The evidence, which includes emails between key Cambridge Analytica, Leave.EU and UKIP employees, has been submitted to the DCMS committee by Brittany Kaiser — a former director of CA (who you may just have seen occupying a central role in Netflix’s The Great Hack documentary, which digs into links between the Trump campaign and the Brexit campaign).

“As you can see with the evidence… chargeable work was completed for UKIP and Leave.EU, and I have strong reasons to believe that those datasets and analysed data processed by Cambridge Analytica as part of a Phase 1 payable work engagement… were later used by the Leave.EU campaign without Cambridge Analytica’s further assistance,” writes Kaiser in a covering letter to committee chair, Damian Collins, summarizing the submissions.

Kaiser gave oral evidence to the committee at a public hearing in April last year.

At the time she said CA had been undertaking parallel pitches for Leave.EU and UKIP — as well as for two insurance brands owned by Banks — and had used membership survey data provided by UKIP to build a model for pro-Brexit voter personality types, with the intention of it being used “to benefit Leave.EU”.

“We never had a contract with Leave.EU. The contract was with the UK Independence party for the analysis of this data, but it was meant to benefit Leave.EU,” she said then.

The new emails submitted by Kaiser back up her earlier evidence. They also show there was discussion of drawing up a contract between CA, UKIP and Leave.EU in the fall before the referendum vote.

In one email — dated November 10, 2015 — CA’s COO & CFO, Julian Wheatland, writes that: “I had a call with [Leave.EU’s] Andy Wigmore today (Arron’s right hand man) and he confirmed that, even though we haven’t got the contract with the Leave written up, it’s all under control and it will happen just as soon as [UKIP-linked lawyer] Matthew Richardson has finished working out the correct contract structure between UKIP, CA and Leave.”

Another item Kaiser has submitted to the committee is a separate November email from Wigmore, inviting press to a briefing by Leave.EU — entitled “how to win the EU referendum” — an event at which Kaiser gave a pitch on CA’s work. In this email Wigmore describes the firm as “the worlds leading target voter messaging campaigners”.

In another document, CA’s Wheatland is shown in an email thread ahead of that presentation telling Wigmore and Richardson “we need to agree the line in the presentations next week with regards the origin of the data we have analysed”.

“We have generated some interesting findings that we can share in the presentation, but we are certain to be asked where the data came from. Can we declare that we have analysed UKIP membership and survey data?” he then asks.

UKIP’s Richardson replies with a negative, saying: “I would rather we didn’t, to be honest” — adding that he has a meeting with Wigmore to discuss “all of this”, and ending with: “We will have a plan by the end of that lunch, I think”.

In another email, dated November 10, sent to multiple recipients ahead of the presentation, Wheatland writes: “We need to start preparing Brittany’s presentation, which will involve working with some of the insights David [Wilkinson, CA’s chief data scientist] has been able to glean from the UKIP membership data.”

He also asks Wilkinson if he can start to “share insights from the UKIP data” — as well as asking “when are we getting the rest of the data?”. (In a later email, dated November 16, Wilkinson shares plots of modelled data with Kaiser — apparently showing the UKIP data now segmented into four blocks of brexit supporters, which have been named: ‘Eager activist’; ‘Young reformer’; ‘Disaffected Tories’; and ‘Left behinds’.)

In the same email Wheatland instructs Jordanna Zetter, an employee of CA’s parent company SCL, to brief Kaiser on “how to field a variety of questions about CA and our methodology, but also SCL. Rest of the world, SCL Defence etc” — asking her to liaise with other key SCL/CA staff to “produce some ‘line to take’ notes”.

Another document in the bundle appears to show Kaiser’s talking points for the briefing. These make no mention of CA’s intention to carry out “national microtargeting” for Leave.EU — merely saying it will conduct “message testing and audience segmentation”.

“We will be working with the campaign’s pollsters and other vendors to compile all the data we have available to us,” is another of the bland talking points Kaiser was instructed to feed to the press.

“Our team of data scientists will conduct deep-dive analysis that will enable us to understand the electorate better than the rival campaigns,” is one more unenlightening line intended for public consumption.

But while CA was preparing to present the UK media with a sanitized false narrative to gloss over the individual voter targeting work it actually intended to carry out for Leave.EU, behind the scenes concerns were being raised about how “national microtargeting” would conflict with UK data protection law.

Another email thread, started November 19, highlights internal discussion about the legality of the plan — with Wheatland sharing “written advice from Queen’s Counsel on the question of how we can legally process data in the UK, specifically UKIP’s data for Leave.eu and also more generally”. (Although Kaiser has not shared the legal advice itself.)

Wilkinson replies to this email with what he couches as “some concerns” regarding shortfalls in the advice, before going into detail on how CA is intending to further process the modelled UKIP data in order to individually microtarget brexit voters — which he suggests would not be legal under UK data protection law “as the identification of these people would constitute personal data”.

He writes:

I have some concerns about what this document says is our “output” – points 22 to 24. Whilst it includes what we have already done on their data (clustering and initial profiling of their members, and providing this to them as summary information), it does not say anything about using the models of the clusters that we create to extrapolate to new individuals and infer their profile. In fact it says that our output does not identify individuals. Thus it says nothing about our microtargeting approach typical in the US, which I believe was something that we wanted to do with leave eu data to identify how each their supporters should be contacted according to their inferred profile.

For example, we wouldn’t be able to show which members are likely to belong to group A and thus should be messaged in this particular way – as the identification of these people would constitute personal data. We could only say “group A typically looks like this summary profile”.

Wilkinson ends by asking for clarification ahead of a looming meeting with Leave.EU, saying: “It would be really useful to have this clarified early on tomorrow, because I was under the impression it would be a large part of our product offering to our UK clients.” [emphasis ours]

Wheatland follows up with a one line email, asking Richardson to “comment on David’s concern” — who then chips into the discussion, saying there’s “some confusion at our end about where this data is coming from and going to”.

He goes on to summarize the “premises” of the advice he says UKIP was given regarding sharing the data with CA (and afterwards the modelled data with Leave.EU, as he implies is the plan) — writing that his understanding is that CA will return: “Analysed Data to UKIP”, and then: “As the Analysed Dataset contains no personal data UKIP are free to give that Analysed Dataset to anyone else to do with what they wish. UKIP will give the Analysed Dataset to Leave.EU”.

“Could you please confirm that the above is correct?” Richardson goes on. “Do I also understand correctly that CA then intend to use the Analysed Dataset and overlay it on Leave.EU’s legitimately acquired data to infer (interpolate) profiles for each of their supporters so as to better control the messaging that leave.eu sends out to those supporters?

“Is it also correct that CA then intend to use the Analysed Dataset and overlay it on publicly available data to infer (interpolate) which members of the public are most likely to become Leave.EU supporters and what messages would encourage them to do so?

“If these understandings are not correct please let me know and I will give you a call to discuss this.”

About half an hour later another SCL Group employee, Peregrine Willoughby-Brown, joins the discussion to back up Wilkinson’s legal concerns.

“The [Queen’s Counsel] opinion only seems to be an analysis of the legality of the work we have already done for UKIP, rather than any judgement on whether or not we can do microtargeting. As such, whilst it is helpful to know that we haven’t already broken the law, it doesn’t offer clear guidance on how we can proceed with reference to a larger scope of work,” she writes without apparent alarm at the possibility that the entire campaign plan might be illegal under UK privacy law.

“I haven’t read it in sufficient depth to know whether or not it offers indirect insight into how we could proceed with national microtargeting, which it may do,” she adds — ending by saying she and a colleague will discuss it further “later today”.

It’s not clear whether concerns about the legality of the microtargeting plan derailed the signing of any formal contract between Leave.EU and CA — even though the documents imply data was shared, even if only during the scoping stage of the work.

“The fact remains that chargeable work was done by Cambridge Analytica, at the direction of Leave.EU and UKIP executives, despite a contract never being signed,” writes Kaiser in her cover letter to the committee on this. “Despite having no signed contract, the invoice was still paid, not to Cambridge Analytica but instead paid by Arron Banks to UKIP directly. This payment was then not passed onto Cambridge Analytica for the work completed, as an internal decision in UKIP, as their party was not the beneficiary of the work, but Leave.EU was.”

Kaiser has also shared a presentation of the UKIP survey data, which bears the names of three academics: Harold Clarke, University of Texas at Dallas & University of Essex; Matthew Goodwin, University of Kent; and Paul Whiteley, University of Essex. It details results from the online portion of the membership survey — aka the core dataset CA modelled for targeting Brexit voters with the intention of helping the Leave.EU campaign.

(At a glance, this survey suggests there’s an interesting analysis waiting to be done of the overlap between the current blitz of campaign message testing ads being run on Facebook by the new (pro-Brexit) UK prime minister Boris Johnson and the core UKIP demographic, as revealed by the survey data…)


Call for Leave.EU probe to be reopened

Ian Lucas, an MP and member of the DCMS committee, has called for the UK’s Electoral Commission to re-open its investigation into Leave.EU in view of “additional evidence” from Kaiser.

We reached out to the Electoral Commission to ask if it will be revisiting the matter.

An Electoral Commission spokesperson told us: “We are considering this new information in relation to our role regulating campaigner activity at the EU referendum. This relates to the 10 week period leading up to the referendum and to campaigning activity specifically aimed at persuading people to vote for a particular outcome.

“Last July we did impose significant penalties on Leave.EU for committing multiple offences under electoral law at the EU Referendum, including for submitting an incomplete spending return.”

Last year the Electoral Commission also found that the official Vote Leave Brexit campaign broke the law by breaching election campaign spending limits. It channelled money to a Canadian data firm linked to Cambridge Analytica to target political ads on Facebook’s platform, via undeclared joint working with a youth-focused Brexit campaign, BeLeave.

Six months ago the UK’s data watchdog also issued fines against Leave.EU and Banks’ insurance company, Eldon Insurance — having found what it dubbed as “serious” breaches of electronic marketing laws, including the campaign using insurance customers’ details to unlawfully send almost 300,000 political marketing messages.

A spokeswoman for the ICO told us it does not have a statement on Kaiser’s latest evidence but added that its enforcement team “will be reviewing the documents released by DCMS”.

The regulator has been running a wider enquiry into use of personal data for social media political campaigning. And last year the information commissioner called for an ethical pause on its use — warning that trust in democracy risked being undermined.

And while Facebook has since applied a thin film of ‘political ads’ transparency to its platform (which researchers continue to warn is not nearly transparent enough to quantify political use of its ads platform), UK election campaign laws have yet to be updated to take account of the digital firehoses now (il)liberally shaping political debate and public opinion at scale.

It’s now more than three years since the UK’s shock vote to leave the European Union — a vote that has so far delivered three years of divisive political chaos, despatching two prime ministers and derailing politics and policymaking as usual.


Many questions remain over a referendum that continues to be dogged by scandals — from breaches of campaign spending; to breaches of data protection and privacy law; and indeed the use of unregulated social media — principally Facebook’s ad platform — as the willing conduit for distributing racist dogwhistle attack ads and political misinformation to whip up anti-EU sentiment among UK voters.

Dark money, dark ads — and the importing of US-style campaign tactics into the UK, circumventing election and data protection laws by the digital platform backdoor.

This is why the DCMS committee’s preliminary report last year called on the government to take “urgent action” to “build resilience against misinformation and disinformation into our democratic system”.

The very same minority government, struggling to hold itself together in the face of Brexit chaos, failed to respond to the committee’s concerns — and has now been replaced by a cadre of the most militant Brexit backers, who are applying their hands to the cheap and plentiful digital campaign levers.

The UK’s new prime minister, Boris Johnson, is demonstrably doubling down on political microtargeting: Appointing no less than Dominic Cummings, the campaign director of the official Vote Leave campaign, as a special advisor.

At the same time Johnson’s team is firing out a flotilla of Facebook ads — including ads that appear intended to gather voter sentiment for the purpose of crafting individually targeted political messages for any future election campaign.

So it’s full steam ahead with the Facebook ads…


Yet this ‘democratic reset’ is laid right atop the Brexit trainwreck. It’s coupled to it, in fact.

Cummings worked for the selfsame Vote Leave campaign that the Electoral Commission found illegally funnelled money — via Cambridge Analytica-linked Canadian data firm AggregateIQ — into a blitz of microtargeted Facebook ads intended to sway voter opinion.

Vote Leave also faced questions over its use of a Facebook-run football competition promising a £50M prize-pot to fans in exchange for handing over a bunch of personal data ahead of the referendum, including how they planned to vote. Another data grab wrapped in fancy dress — much like GSR’s thisisyourdigitallife quiz app that provided the foundational dataset for CA’s psychological voter profiling work on the Trump campaign.

The elevating of Cummings to be special adviser to the UK PM represents the polar opposite of an ‘ethical pause’ in political microtargeting.

Make no mistake, this is the Brexit campaign playbook — back in operation, now with full-bore pedal to the metal. (With his hands now on the public purse, Johnson has pledged to spend £100M on marketing to sell a ‘no deal Brexit’ to the UK public.)

Kaiser’s latest evidence may not contain a smoking bomb big enough to blast the issue of data-driven and tech giant-enabled voter manipulation into a mainstream consciousness, where it might have the chance to reset the political conscience of a nation — but it puts more flesh on the bones of how the self-styled ‘bad boys of Brexit’ pulled off their shock win.

In The Great Hack the Brexit campaign is couched as the ‘petri dish’ for the data-fuelled targeting deployed by the firm in the 2016 US presidential election — which delivered a similarly shock victory for Trump.

If that’s so, these latest pieces of evidence imply a suggestively close link between CA’s experimental modelling of UKIP supporter data, as it shifted gears to apply its dark arts closer to home than usual, and the models it subsequently built off of US citizens’ data sucked out of Facebook. And that in turn goes some way to explaining the cosiness between Trump and UKIP founder Nigel Farage…

 

Kaiser ends her letter to DCMS writing: “Given the enormity of the implications of earlier inaccurate conclusions by different investigations, I would hope that Parliament reconsiders the evidence submitted here in good faith. I hope that these ten documents are helpful to your research and furthering the transparency and truth that your investigations are seeking, and that the people of the UK and EU deserve”.

Banks and Wigmore have responded to the publication in their usual style, with a pair of dismissive tweets — questioning Kaiser’s motives for wanting the data to be published and throwing shade on how the evidence was obtained in the first place.


‘The Great Hack’: Netflix doc unpacks Cambridge Analytica, Trump, Brexit and democracy’s death

05:47 | 24 July

It’s perhaps not for nothing that The Great Hack – the new Netflix documentary about the connections between Cambridge Analytica, the US election and Brexit, out on July 23 – opens with a scene from Burning Man. There, Brittany Kaiser, a former employee of Cambridge Analytica, scrawls the name of the company onto a strut of ‘the temple’ that will eventually get burned in that fiery annual ritual. It’s an apt opening.

There are probably many of us who’d wish quite a lot of the last couple of years could be thrown into that temple fire, but this documentary is the first I’ve seen to expertly unpick the real-world dumpster fire that social media, dark advertising and global politics have become, all now inextricably and often fatally combined.

The documentary is also the first that you could plausibly recommend to those of your relatives and friends who don’t work in tech, as it explains how social media – specifically Facebook – is now manipulating our lives and society, whether we like it or not.

As New York Professor David Carroll puts it at the beginning, Facebook gives “any buyer direct access to my emotional pulse” – and that included political campaigns during the Brexit referendum and the Trump election. Privacy campaigner Carroll is pivotal to the film’s story of how our data is being manipulated and essentially kept from us by Facebook.

The UK’s referendum decision to leave the European Union, in fact, became “the petri dish” for a Cambridge Analytica experiment, says Guardian journalist Carole Cadwalladr. She broke the story of how the political consultancy, led by Eton-educated CEO Alexander Nix, applied techniques normally used by ‘psyops’ operatives in Afghanistan to the democratic operations of the US and UK, and many other countries, over a chilling 20+ year history. Watching this film, you start to wonder if history has been warped towards a sickening dystopia.


The petri dish of Brexit worked. Millions of adverts, explains the documentary, targeted individuals, exploiting fear and anger to switch them from ‘persuadables’, as CA called them, into passionate advocates, first for Brexit in the UK and then for Trump.

Switching to the US, the filmmakers show how CA worked directly with Trump’s “Project Alamo” campaign, spending a million dollars a day on Facebook ads ahead of the 2016 election.

The film expertly explains the timeline of how CA had first worked on Ted Cruz’s campaign, and nearly propelled that lackluster candidate into first place in the Republican nominations. It was then that the Trump campaign picked up on CA’s military-like operation.

After loading up the psychographic survey information CA had obtained from Aleksandr Kogan, the Cambridge University academic who orchestrated the harvesting of Facebook data, the world had become their oyster. Or, perhaps more accurately, their oyster farm.

Back in London, Cadwalladr notices triumphant Brexit campaigners fraternizing with Trump and starts digging. There is a thread connecting them to Breitbart owner Steve Bannon. There is a thread connecting them to Cambridge Analytica. She tugs on those threads and, like that iconic scene in ‘The Hurt Locker’ where all the threads pull up unexploded mines, she starts to realize that Cambridge Analytica links them all. She needs a source though. That came in the form of former employee Chris Wylie, a brave young man who was able to unravel many of the CA threads.

But the film’s attention is often drawn back to Kaiser, who had worked first on US political campaigns and then on Brexit for CA. She had been drawn to the company by smooth-talking CEO Nix, who begged: “Let me get you drunk and steal all of your secrets.”

But was she a real whistleblower? Or was she trying to cover her tracks? How could someone who’d worked on the Obama campaign switch to Trump? Was she a victim of Cambridge Analytica, or one of its villains?

British political analyst Paul Hilder manages to get her to come to the UK to testify before a parliamentary inquiry. There is high drama as her part in the story unfolds.

Kaiser appears in various guises which vary from idealistically naive to stupid, from knowing to manipulative. It’s almost impossible to know which. But hearing about her revelation as to why she made the choices she did… well, it’s an eye-opener.


Both she and Wylie have complex stories in this tale, where not everything is as it seems, reflecting our new world, where truth is increasingly hard to determine.

Other characters come and go in this story. Zuckerberg makes an appearance in Congress, and we learn of the casual attitude Facebook took toward its complicity in these political earthquakes. Although if you’re reading TechCrunch, then you will probably know at least part of this story.

The film was created for Netflix by Jehane Noujaim and Karim Amer, the Egyptian-Americans who made “The Square”, about the Egyptian revolution of 2011. To them, the way Cambridge Analytica applied its methods to online campaigning was just as much a revolution as Egyptians toppling a dictator from Cairo’s iconic Tahrir Square.

For them, the huge irony is that “psyops”, or psychological operations, used on Muslim populations in Iraq and Afghanistan after the 9/11 terrorist attacks, ended up being used to influence Western elections.

Cadwalladr stands head and shoulders above all as a bastion of dogged journalism, even as she is attacked from all quarters, and still is to this day.

What you won’t find out from this film is what happens next. For many, questions remain on the table: What will happen now that Facebook is entering cryptocurrency? Will that mean it could be used for dark election campaigning? Will people be paid for their votes next time, not just in Likes? Kaiser has a bitcoin logo on the back of her phone. Is that connected? The film doesn’t comment.

But it certainly unfolds like a slow-motion car crash, where democracy is the car and you’re inside it.


Job Today gets a $16M top up as it preps for Brexit bump

15:42 | 12 September

Accel-backed mobile-first jobs app Job Today has pulled in another $16M — an expansion to its November 2016 $20M Series B round. It raised a $10M Series A in January of the same year.

Founded in 2015, the startup offers a mobile app for job seekers that does away with the need for a CV.

Instead, job seekers create a profile in the app and can apply to relevant jobs. Employers can then triage potential applicants via the app and chat to any they like the look of via its messaging platform.

The approach has been especially popular with fast turnover jobs in the service industry, such as hospitality and retail.

Job Today says it has more than five million job seekers registered on its platform, and claims to have delivered more than 100 million candidate applications to the 400,000+ predominantly small businesses posting jobs via the app to date (with 1M+ jobs posted). It currently operates in two markets: Spain and the UK.

The additional funding will be put towards expanding its presence in the UK market — where it says it’s seen “significant growth” in both job postings and candidate applications.

It says the overall volume of applications has increased by 46% year-on-year in the market, with the number of applications per candidate growing by 32% in the same period. The likes of Costa Coffee, Pret A Manger and Eat are named as among its “regular hirers”.

It’s also envisaging a Brexit bump for the local casual job market, as the UK’s decision to leave the European Union looks set to impact the supply of labor for employers…

Commenting in a statement, CEO Eugene Mizin, said: “The casual job market is often the first to experience the effects from macro-economic forces and Brexit will mean that many non-skilled and non-British workers will leave the UK. This will create a demand to fill casual jobs and create new opportunities for the less-skilled school, college and university leavers entering the workforce for the first time in 2019.”

The Series B expansion funds are coming from New York based investor 14W.

Job Today says it got additional growth uplift after integrating with Google Jobs — aka Google search’s built-in AI-powered jobs engine. This launched in the UK in July 2018, and Job Today said it saw 101% growth in users in the first month of integration.


Uber denies its CTO met with Cambridge Analytica

15:39 | 19 April

Uber has denied that its sitting CTO, Thuan Pham, met with Cambridge Analytica — the controversial political consultancy at the center of a Facebook user data misuse scandal.

But it has not been able to confirm that no meetings between anyone else on its payroll and Cambridge Analytica took place.

“I’m not sure who they think they met with, but I can confirm our CTO never met with them and we don’t have a relationship with them,” an Uber spokeswoman told us.

Giving evidence to the UK parliament earlier this week, former Cambridge Analytica staffer Brittany Kaiser claimed that CA executives had met with the Uber CTO within the past two years, although she did not explicitly name Pham, citing only the CTO job title.

The DCMS committee is running an enquiry into disinformation online.

Asked by the committee whether she had ever come across Uber data being used for any of the political campaigns that CA worked on, Kaiser replied “no”.

However she qualified her answer, adding: “Although Cambridge Analytica definitely had meetings with the CTO of Uber in California — about 1.5 to 2 years ago. 

“I don’t believe anything came of that but a conversation was had,” she also said.

The committee did not query her on the intent of the meetings with Uber — although later she was asked if she’d had any contacts with other “big data companies”, including Google.

Responding on that Kaiser confirmed she had had contacts with “Microsoft, Google and a few other companies of that nature, and Facebook” — though she said this was only in a standard business capacity, noting that CA was a client “purchasing digital advertising space through them”.

On Facebook she added: “They had two different political teams in the United States — so they had their Republican team and their Democrat team, who usually inhabited separate offices on separate floors. My consultants in Washington DC would work closely with the Republican team on how we would use their tools to the best benefit for our clients.”

Last month the committee also asked another ex-CA employee, whistleblower Chris Wylie, whether the company had access to Uber data — apparently concerned that a 2016 Uber data breach, affecting 57 million riders and drivers (which the company only disclosed in November last year) could have been another data source for CA.

“To my knowledge Cambridge Analytica has not used Uber data,” responded Wylie.

Uber told Congress last year that one of the hackers behind the 2016 breach was located in Canada — and that this hacker had first contacted it in November 2016 to demand a six-figure payment for the breached data.

Also located in Canada: Aggregate IQ, a data firm that has been linked to CA — which Wylie has described as the Canadian affiliate of CA’s parent entity, SCL — and which has been credited with playing a major role in the UK’s Brexit referendum, receiving £3.5M from leave campaign groups in the run-up to the 2016 referendum. (AIQ has denied it has ever been a part of Cambridge Analytica or SCL.)

The question of where this small Canadian firm obtained data on UK citizens to carry out microtargeted political advertising has been another line of enquiry pursued by the committee.

“[W]here did [AggregateIQ] get the data?” asked Wylie last month in his evidence session, discussing AIQ’s involvement in the UK’s Brexit campaign. “How do you create a massive targeting operation in a country that AIQ had not previously worked in in two months? It baffles me as to how that could happen in such a short amount of time.

“That is a good question. It is unfortunate that AIQ hides behind jurisdictional barriers and does not come here and answer those questions. But it is something that hopefully can be looked at as to how did it actually work.”

Wylie has also alleged that AIQ “worked on projects that involved hacked material and kompromat” as well as distributing “violent videos of people being bled to death to intimidate voters”.

“This is the company that played an incredibly pivotal role in politics here. Something that I would strongly recommend to the Committee is that they not only push the authorities here, but give them the support that they need in order to investigate this company and what they were doing in Brexit,” he added.


Juro grabs another $2m to take the hassle out of contracts

18:37 | 9 April

UK startup Juro, which is applying a “design centric approach” and machine learning tech to help businesses speed up the authoring and management of sales contracts, has closed $2m in seed funding led by Point Nine Capital.

Prior investor Seedcamp also contributed to the round. Juro is announcing Taavet Hinrikus (TransferWise’s co-founder) as an investor now too, as well as Michael Pennington (Gumtree co-founder) and the family office of Paul Forster (co-founder of Indeed.com).

Back in January 2017 the London-based startup closed a $750,000 (£615k) seed round, though CEO and co-founder Richard Mabey tells us that was really better classed as an angel round — with Point Nine Capital only joining “late” in the day.

“We actually could have strung it out to Series A,” he says of the funding that’s being announced now. “But we had multiple offers come in and there is so much of an explosion in demand for the [machine learning] that it made sense to do a round now rather than wait for the A. The whole legal industry is undergoing radical change and we want to be leading it.”

Juro’s SaaS product is an integrated contracts workflow that combines contract creation, e-signing and commenting capabilities with AI-powered contract analytics.

Its general focus is on customers that have to manage a high volume of contracts — such as marketplaces.

The 2016-founded startup is not breaking out any customer numbers yet but says its client list includes the likes of Estee Lauder, Deliveroo and Nested. And Mabey adds that “most” of its demand is coming from enterprise at this point, noting it has “several tech unicorns and Fortune 500 companies in trial”.

While design is clearly a major focus — with the startup deploying clean-looking templates and visual cues to offer a user-friendly ‘upgrade’ on traditional legal processes — the machine learning component is its scalable, value-added differentiator to serve the target b2b users by helping them identify recurring sticking points in contract negotiations and keep on top of contract renewals.

Mabey tells TechCrunch the new funding will be used to double down on development of the machine learning component of the product.

“We’re not the first to market in contract management by about 25 years,” he says with a smile. “So we have always needed to prove out our vision of why the incumbents are failing. One part of this is clunky UX and we’ve succeeded so far in replacing legacy providers through better design (e.g. we replace DocuSign at 80% of our customers).

“But the thing we and our investors are really excited about is not just helping businesses with contract workflow but helping them understand their contract data, auto-tag contracts, see patterns in negotiations and red flag unusual contract terms.”

While this machine learning element is where he sees Juro cutting out a competitive edge in an existing and established market, Mabey concedes it takes “quite a lot of capital to do well”. Hence taking more funding now.

“We need a level of predictive accuracy in our models that risk averse lawyers can get comfortable with and that’s a big ask!” he says.

Specifically, Juro will be using the funding to hire data scientists and machine learning engineers — building out the team at both its London and Riga offices. “We’re doing it like crazy,” adds Mabey. “For example, we just hired from the UK government Digital Service the data scientist who delivered the first ML model used by the UK government (on the gov.uk website).

“There is a huge opportunity here but great execution is key and we’re building a world class team to do it. It’s a big bet to grow revenue as quickly as we are and do this kind of R&D but that’s just what the market is demanding.”

Juro’s HQ remains in London for now, though Mabey notes its entire engineering team is based in the EU — between Riga, Amsterdam and Barcelona — “in part to avoid ‘Brexit risk'”.

“Only 27% of the team is British and we have customers operating in 12 countries — something I’m quite proud of — but it does leave us rather exposed. We’re very open minded about where we will be based in the future and are waiting to hear from the government on the final terms of Brexit,” he says when asked whether the startup has any plans to Brexit to Berlin.

“We always look beyond the UK for talent: if the government cannot provide certainty to our Romanian product designer (ex Kalo, Entrepreneur First) that she can stay in the UK post Brexit without risking a visa application, tbh it makes me less bullish on London!”


Facebook reportedly suspends AggregateIQ over connection to improper data-sharing

03:08 | 7 April

AggregateIQ, a Canadian advertising tech and audience intelligence company, has been suspended by Facebook for allegedly being closely connected with SCL, the parent company of Cambridge Analytica, reported the National Observer.

News broke late last month that AIQ, which was deeply involved with (and handsomely paid by) pro-Leave Brexit groups, was not the independent Canadian data broker it claimed to be. Christopher Wylie, the whistleblower who blew the lid off the Cambridge Analytica story, explained it candidly to The Guardian:

Essentially it was set up as a Canadian entity for people who wanted to work on SCL projects who didn’t want to move to London. That’s how AIQ got started: originally to service SCL and Cambridge Analytica projects.

AIQ has maintained that it has operated independently. Dogged denials appear on its webpage:

AggregateIQ has never been and is not a part of Cambridge Analytica or SCL. Aggregate IQ has never entered into a contract with Cambridge Analytica. Chris Wylie has never been employed by AggregateIQ. AggregateIQ has never managed, nor did we ever have access to, any Facebook data or database allegedly obtained improperly by Cambridge Analytica.

But the reporting in the Guardian makes these claims hard to take seriously. For instance, a founding member was listed on Cambridge Analytica’s website as working at “SCL Canada,” the company had no website or phone number of its own for some time, and until 2016, AIQ’s only client was Cambridge Analytica. It really looks as if AIQ is simply a Canadian shell under which operations could be said to be performed independent of CA and SCL.

Whatever the nature of the connection, it was convincing enough for Facebook to put them in the same bucket. The company said in a statement to the National Observer:

In light of recent reports that AggregateIQ may be affiliated with SCL and may, as a result, have improperly received (Facebook) user data, we have added them to the list of entities we have suspended from our platform while we investigate.

That will put a damper on SCL Canada’s work for a bit — it’s hard to do social media targeting work when you’re not allowed on the premises of the biggest social network of them all. Note that no specific wrongdoing on AIQ’s part is suggested — it’s enough that it may be affiliated with SCL and as such may have had access to the dirty data.

I’ve asked both companies for confirmation and will update this post if I hear back.


Fake news is an existential crisis for social media 

22:12 | 18 February

The funny thing about fake news is how mind-numbingly boring it can be. Not the fakes themselves — they’re constructed to be catnip clickbait to stoke the fires of rage of their intended targets. Be they gun owners. People of color. Racists. Republican voters. And so on.

The really tedious stuff is all the equally incomplete, equally self-serving pronouncements that surround ‘fake news’. Some made very visibly, a lot less so.

Such as Russia painting the election interference narrative as a “fantasy” or a “fairytale” — even now, when presented with a 37-page indictment detailing what Kremlin agents got up to (including on US soil). Or Trump continuing to bluster that Russian-generated fake news is itself “fake news”.

And, indeed, the social media firms themselves, whose platforms have been the unwitting conduits for lots of this stuff, shaping the data they release about it — in what can look suspiciously like an attempt to downplay the significance and impact of malicious digital propaganda, because, well, that spin serves their interests.

The claim and counter claim spread out around ‘fake news’ like an amorphous cloud of meta-fakery, as reams of additional ‘information’ — some of it equally polarizing but a lot of it more subtle in its attempts to mislead (e.g. the publicly unseen ‘on background’ info routinely sent to reporters to try to invisibly shape coverage in a tech firm’s favor) — are applied in equal and opposite directions in the interests of obfuscation; using speech and/or misinformation as a form of censorship to fog the lens of public opinion.

This bottomless follow-up fodder generates yet more FUD in the fake news debate. Which is ironic, as well as boring, of course. But it’s also clearly deliberate.

As Zeynep Tufekci has eloquently argued: “The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself.”

So we also get subjected to all this intentional padding, applied selectively, to defuse debate and derail clear lines of argument; to encourage confusion and apathy; to shift blame and buy time. Bored people are less likely to call their political representatives to complain.

Truly fake news is the inception layer cake that never stops being baked. Because pouring FUD onto an already polarized debate — and seeking to shift what are by nature shifty sands (after all information, misinformation and disinformation can be relative concepts, depending on your personal perspective/prejudices) — makes it hard for any outsider to nail this gelatinous fakery to the wall.

Why would social media platforms want to participate in this FUDing? Because it’s in their business interests not to be identified as the primary conduit for democracy damaging disinformation.

And because they’re terrified of being regulated on account of the content they serve. They absolutely do not want to be treated as the digital equivalents to traditional media outlets.

But the stakes are high indeed when democracy and the rule of law are on the line. And by failing to be pro-active about the existential threat posed by digitally accelerated disinformation, social media platforms have unwittingly made the case for external regulation of their global information-shaping and distribution platforms louder and more compelling than ever.

*

Every gun outrage in America is now routinely followed by a flood of Russian-linked Twitter bot activity. Exacerbating social division is the name of this game. And it’s playing out all over social media continually, not just around elections.

In the case of Russian digital meddling connected to the UK’s 2016 Brexit referendum — which we now know for sure existed, still without having all of the data we need to quantify the actual impact — the chairman of a UK parliamentary committee that’s running an enquiry into fake news has accused both Twitter and Facebook of essentially ignoring requests for data and help, and doing none of the work the committee asked of them.

Facebook has since said it will take a more thorough look through its archives. And Twitter has drip-fed some tidbits of additional information. But more than a year and a half after the vote itself, many, many questions remain.

And just this week another third party study suggested that the impact of Russian Brexit trolling was far larger than has been so far conceded by the two social media firms.

The PR company that carried out this research included in its report a long list of outstanding questions for Facebook and Twitter.

Here they are:

  • How much did [Russian-backed media outlets] RT, Sputnik and Ruptly spend on advertising on your platforms in the six months before the referendum in 2016?
  • How much have these media platforms spent to build their social followings?
  • Sputnik has no active Facebook page, but has a significant number of Facebook shares for anti-EU content, does Sputnik have an active Facebook advertising account?
  • Will Facebook and Twitter check the dissemination of content from these sites to check they are not using bots to push their content?
  • Did either RT, Sputnik or Ruptly use ‘dark posts’ on either Facebook or Twitter to push their content during the EU referendum, or have they used ‘dark posts’ to build their extensive social media following?
  • What processes do Facebook or Twitter have in place when accepting advertising from media outlets or state owned corporations from autocratic or authoritarian countries? Noting that Twitter no longer takes advertising from either RT or Sputnik.
  • Did any representatives of Facebook or Twitter pro-actively engage with RT or Sputnik to sell inventory, products or services on the two platforms in the period before 23 June 2016?

We put these questions to Facebook and Twitter.

In response, a Twitter spokeswoman pointed us to some “key points” from a previous letter it sent to the DCMS committee (emphasis hers):

In response to the Commission’s request for information concerning Russian-funded campaign activity conducted during the regulated period for the June 2016 EU Referendum (15 April to 23 June 2016), Twitter reviewed referendum-related advertising on our platform during the relevant time period. 

Among the accounts that we have previously identified as likely funded from Russian sources, we have thus far identified one account—@RT_com— which promoted referendum-related content during the regulated period. $1,031.99 was spent on six referendum-related ads during the regulated period.  

With regard to future activity by Russian-funded accounts, on 26 October 2017, Twitter announced that it would no longer accept advertisements from RT and Sputnik and will donate the $1.9 million that RT had spent globally on advertising on Twitter to academic research into elections and civil engagement. That decision was based on a retrospective review that we initiated in the aftermath of the 2016 U.S. Presidential Elections and following the U.S. intelligence community’s conclusion that both RT and Sputnik have attempted to interfere with the election on behalf of the Russian government. Accordingly, @RT_com will not be eligible to use Twitter’s promoted products in the future.

The Twitter spokeswoman declined to provide any new on-the-record information in response to the specific questions.

A Facebook representative first asked to see the full study, which we sent, then failed to provide a response to the questions at all.

The PR firm behind the research, 89up, makes this particular study fairly easy for them to ignore. It’s a pro-Remain organization. The research was not undertaken by a group of impartial university academics. The study isn’t peer reviewed, and so on.

But, in an illustrative twist, if you Google “89up Brexit”, Google News injects fresh Kremlin-backed opinions into the search results it delivers — see the top and third result here…

Clearly, there’s no such thing as ‘bad propaganda’ if you’re a Kremlin disinformation node.

Even a study decrying Russian election meddling presents an opportunity for respinning and generating yet more FUD — in this instance by calling 89up biased because it supported the UK staying in the EU. Making it easy for Russian state organs to slur the research as worthless.

The social media firms aren’t making that point in public. They don’t have to. That argument is being made for them by an entity whose former brand name was literally ‘Russia Today’. Fake news thrives on shamelessness, clearly.

It also very clearly thrives in the limbo of fuzzy accountability where politicians and journalists essentially have to scream at social media firms until blue in the face to get even partial answers to perfectly reasonable questions.

Frankly, this situation is looking increasingly unsustainable.

Not least because governments are cottoning on — some are setting up departments to monitor malicious disinformation and even drafting anti-fake news election laws.

And while the social media firms have been a bit more alacritous to respond to domestic lawmakers’ requests for action and investigation into political disinformation, that just makes their wider inaction, when viable and reasonable concerns are brought to them by non-US politicians and other concerned individuals, all the more inexcusable.

The user-bases of Facebook, Twitter and YouTube are global. Their businesses generate revenue globally. And the societal impacts from maliciously minded content distributed on their platforms can be very keenly felt outside the US too.

But if tech giants have treated requests for information and help about political disinformation from the UK — a close US ally — so poorly, you can imagine how unresponsive and/or unreachable these companies are to further flung nations, with fewer or zero ties to the homeland.

Earlier this month, in what looked very much like an act of exasperation, the chair of the UK’s fake news enquiry, Damian Collins, flew his committee over the Atlantic to question Facebook, Twitter and Google policy staffers in an evidence session in Washington.

None of the companies sent their CEOs to face the committee’s questions. None provided a substantial amount of new information. The full impact of Russia’s meddling in the Brexit vote remains unquantified.

One problem is fake news. The other problem is the lack of incentive for social media companies to robustly investigate fake news.

*

The partial data about Russia’s Brexit dis-ops, which Facebook and Twitter have trickled out so far, like blood from the proverbial stone, is unhelpful exactly because it cannot clear the matter up either way. It just introduces more FUD, more fuzz, more opportunities for purveyors of fake news to churn out more maliciously minded content, as RT and Sputnik demonstrably have.

In all probability, it also pours more fuel on Brexit-based societal division. The UK, like the US, has become a very visibly divided society since the narrow 52:48 vote to leave the EU. What role did social media and Kremlin agents play in exacerbating those divisions? Without hard data it’s very difficult to say.

But, at the end of the day, it doesn’t matter whether 89up’s study is accurate or overblown; what really matters is that no one except the Kremlin and the social media firms themselves is in a position to judge.

And no one in their right mind would now suggest we swallow Russia’s line that so called fake news is a fiction sicked up by over-imaginative Russophobes.

But social media firms also cannot be trusted to truth tell on this topic, because their business interests have demonstrably guided their actions towards equivocation and obfuscation.

Self interest also compellingly explains how poorly they have handled this problem to date; and why they continue — even now — to impede investigations by not disclosing enough data and/or failing to interrogate deeply enough their own systems when asked to respond to reasonable data requests.

A game of ‘uncertain claim vs self-interested counter claim’, as competing interests duke it out to try to land a knock-out blow in the game of ‘fake news and/or total fiction’, serves no useful purpose in a civilized society. It’s just more FUD for the fake news mill.

Especially as this stuff really isn’t rocket science. Human nature is human nature. And disinformation has been shown to have a more potent influencing impact than truthful information when the two are presented side by side. (As they frequently are by and on social media platforms.) So you could do robust math on fake news — if only you had access to the underlying data.

But only the social media platforms have that. And they’re not falling over themselves to share it. Instead, Twitter routinely rubbishes third party studies exactly because external researchers don’t have full visibility into how its systems shape and distribute content.

Yet external researchers don’t have that visibility because Twitter prevents them from seeing how it shapes tweet flow. Therein lies the rub.

Yes, some of the platforms in the disinformation firing line have taken some preventative actions since this issue blew up so spectacularly, back in 2016. Often by shifting the burden of identification to unpaid third parties (fact checkers).

Facebook has also built some anti-fake news tools to try to tweak what its algorithms favor, though nothing it’s done on that front to date looks very successful (even as a more major change to its News Feed, to make it less of a news feed, has had a unilateral and damaging impact on the visibility of genuine news organizations’ content — so is arguably going to be unhelpful in reducing Facebook-fueled disinformation).

In another instance, Facebook’s mass closing of what it described as “fake accounts” ahead of, for example, the UK and French elections can also look problematic, in democratic terms, because we don’t fully know how it identified the particular “tens of thousands” of accounts to close. Nor what content they had been sharing prior to this. Nor why it hadn’t closed them before if they were indeed Kremlin disinformation-spreading bots.

More recently, Facebook has said it will implement a disclosure system for political ads, including posting a snail mail postcard to entities wishing to pay for political advertising on its platform — to try to verify they are indeed located in the territory they say they are.

Yet its own VP of ads has admitted that Russian efforts to spread propaganda are ongoing and persistent, and do not solely target elections or politicians…

The main goal of the Russian propaganda and misinformation effort is to divide America by using our institutions, like free speech and social media, against us. It has stoked fear and hatred amongst Americans. It is working incredibly well. We are quite divided as a nation.

— Rob Goldman (@robjective) February 17, 2018

The Russian campaign is ongoing. Just last week saw news that Russian spies attempted to sell a fake video of Trump with a hooker to the NSA. US officials cut off the deal because they were wary of being entangled in a Russian plot to create discord. https://t.co/jO9GwWy2qH

— Rob Goldman (@robjective) February 17, 2018

The wider point is that social division is itself a tool for impacting democracy and elections — so if you want to achieve ongoing political meddling that’s the game you play.

You don’t just fire up your disinformation guns ahead of a particular election. You work to worry away at society’s weak points continuously to fray tempers and raise tensions.

Elections don’t take place in a vacuum. And if people are angry and divided in their daily lives then that will naturally be reflected in the choices made at the ballot box, whenever there’s an election.

Russia knows this. And that’s why the Kremlin has been playing such a long propaganda game. Why it’s not just targeting elections. Its targets are fault lines in the fabric of society — be it gun control vs gun owners or conservatives vs liberals or people of color vs white supremacists — whatever issues it can seize on to stir up trouble and rip away at the social fabric.

That’s what makes digitally amplified disinformation an existential threat to democracy and to civilized societies. Nothing on this scale has been possible before.

And it’s thanks, in great part, to the reach and power of social media platforms that this game is being played so effectively — because these platforms have historically preferred to champion free speech rather than root out and eradicate hate speech and abuse; inviting trolls and malicious actors to exploit the freedom afforded by their free speech ideology and to turn powerful broadcast and information-targeting platforms into cyberweapons that blast the free societies that created them.

Social media’s filtering and sorting algorithms also crucially failed to make any distinction between information and disinformation. Which was their great existential error of judgement, as they sought to eschew editorial responsibility while simultaneously working to dominate and crush traditional media outlets which do operate within a more tightly regulated environment (and, at least in some instances, have a civic mission to truthfully inform).

Publishers have their own biases too, of course, but those biases tend to be writ large — vs social media platforms’ faux claims of neutrality when in fact their profit-seeking algorithms have been repeatedly caught preferring (and thus amplifying) dis- and misinformation over and above truthful but less clickable content.

But if your platform treats everything and almost anything indiscriminately as ‘content’, then don’t be surprised if fake news becomes indistinguishable from the genuine article because you’ve built a system that allows sewage and potable water to flow through the same distribution pipe.

So it’s interesting to see Goldman’s suggested answer to social media’s existential fake news problem attempting, even now, to deflect blame — by arguing that the US education system should take on the burden of arming citizens to deconstruct all the dubious nonsense that social media platforms are piping into people’s eyeballs.

Lessons in critical thinking are certainly a good idea. But fakes are compelling for a reason. Look at the tenacity with which conspiracy theories take hold in the US. In short, it would take a very long time and a very large investment in critical thinking education programs to create any kind of shielding intellectual capacity able to protect the population at large from being fooled by maliciously crafted fakes.

Indeed, human nature actively works against critical thinking. Fakes are more compelling, more clickable than the real thing. And thanks to technology’s increasing potency, fakes are getting more sophisticated, which means they will be increasingly plausible — and get even more difficult to distinguish from the truth. Left unchecked, this problem is going to get existentially worse too.

So, no, education can’t fix this on its own. And for Facebook to try to imply it can is yet more misdirection and blame shifting.

*

If you’re the target of malicious propaganda you’ll very likely find the content compelling because the message is crafted with your specific likes and dislikes in mind. Imagine, for example, your trigger reaction to being sent a deepfake of your wife in bed with your best friend.

That’s what makes this incarnation of propaganda so potent and insidious vs other forms of malicious disinformation (of course propaganda has a very long history — but never in human history have we had such powerful media distribution platforms that are simultaneously global in reach and capable of delivering individually targeted propaganda campaigns. That’s the crux of the shift here).

Fake news is also insidious because of the lack of civic restraints on disinformation agents, which makes maliciously minded fake news so much more potent and problematic than plain old digital advertising.

I mean, even people who’ve searched for ‘slippers’ online an awful lot of times, because they really love buying slippers, are probably only in the market for buying one or two pairs a year — no matter how many adverts for slippers Facebook serves them. They’re also probably unlikely to actively evangelize their slipper preferences to their friends, family and wider society — by, for example, posting about their slipper-based views on their social media feeds and/or engaging in slipper-based discussions around the dinner table or even attending pro-slipper rallies.

And even if they did, they’d have to be a very charismatic individual indeed to generate much interest and influence. Because, well, slippers are boring. They’re not a polarizing product. There aren’t tribes of slipper owners as there are smartphone buyers. Because slippers are a non-complex, functional comfort item with minimal fashion impact. So an individual’s slipper preferences, even if very liberally put about on social media, are unlikely to generate strong opinions or reactions either way.

Political opinions and political positions are another matter. They are frequently what define us as individuals. They are also what can divide us as a society, sadly.

To put it another way, political opinions are not slippers. People rarely try a new one on for size. Yet social media firms spent a very long time indeed trying to sell the ludicrous fallacy that content about slippers and maliciously crafted political propaganda, mass-targeted tracelessly and inexpensively via their digital ad platforms, was essentially the same stuff. See: Zuckerberg’s infamous “pretty crazy idea” comment, for example.

Indeed, look back over the last few years’ news about fake news, and social media platforms have demonstrably sought to play down the idea that the content distributed via their platforms might have had any sort of quantifiable impact on the democratic process at all.

Yet these are the same firms that make money — very large amounts of money, in some cases — by selling their capability to influentially target advertising.

So they have essentially tried to claim that it’s only when foreign entities engage with their digital advertising platforms, and use their digital advertising tools not to sell slippers or a Netflix subscription but to press people’s biases and prejudices in order to sow social division and impact democratic outcomes, that, all of a sudden, these powerful tech tools cease to function.

And we’re supposed to take it on trust from the same self-interested companies that the unknown quantity of malicious ads being fenced on their platforms is but a teeny tiny drop in the overall content ocean they’re serving up, so, hey, why can’t you just stop overreacting?

That’s also pure misdirection of course. The wider problem with malicious disinformation is it pervades all content on these platforms. Malicious paid-for ads are just the tip of the iceberg.

So sure, the Kremlin didn’t spend very much money paying Twitter and Facebook for Brexit ads — because it didn’t need to. It could (and did) freely set up ranks of bot accounts on their platforms to tweet and share content created by RT, for example — frequently skewed towards promoting the Leave campaign, according to multiple third party studies — amplifying the reach and impact of its digital propaganda without having to send the tech firms any more checks.

And indeed, Russia is still operating ranks of bots on social media which are actively working to divide public opinion, as Facebook freely admits.

Maliciously minded content has also been shown to be favored over truthful content by Facebook’s and Google’s algorithms, for example, because their systems have been tuned for what’s most clickable and shareable and can also be all too easily gamed.

And, despite their ongoing techie efforts to fix what they view as some kind of content-sorting problem, their algorithms continue to get caught and called out for promoting dubious stuff.

Thing is, this kind of dynamic, contextual judgement is very hard for AI — as Zuckerberg himself has conceded. But human review is unthinkable. Tech giants simply do not want to employ the numbers of humans that would be necessary to always be making the right editorial call on each and every piece of digital content.

If they did, they’d instantly become the largest media organizations in the world — needing at least hundreds of thousands (if not millions) of trained journalists to serve every market and local region they cover.

They would also instantly invite regulation as publishers — ergo, back to the regulatory nightmare they’re so desperate to avoid.

All of this is why fake news is an existential problem for social media.

And why Zuckerberg’s 2018 yearly challenge will be his toughest ever.

Little wonder, then, that these firms are now so fixed on trying to narrow the debate and concern to focus specifically on political advertising, rather than malicious content in general.

Because if you sit and think about the full scope of malicious disinformation, coupled with the automated global distribution platforms that social media has become, it soon becomes clear this problem scales as big and wide as the platforms themselves.

And at that point only two solutions look viable:

A) bespoke regulation, including regulatory access to proprietary algorithmic content-sorting engines.

B) breaking up big tech so none of these platforms have the reach and power to enable mass-manipulation.

The threat posed by info-cyberwarfare on tech platforms that straddle entire societies and have become attention-sapping powerhouses — swapping out editorially structured news distribution for machine-powered content hierarchies that lack any kind of civic mission — is really only just beginning to become clear, as the detail of abuses and misuses slowly emerges. And as certain damages are felt.

Facebook’s user base is a staggering two billion+ at this point, way bigger than the population of the world’s most populous country, China. Google’s YouTube has over a billion users, which the company points out amounts to more than a third of the entire user base of the Internet.

What does this seismic shift in media distribution and consumption mean for societies and democracies? We can hazard guesses but we’re not in a position to know without much better access to tightly guarded, commercially controlled information streams.

Really, the case for social media regulation is starting to look unstoppable.

But even with unfettered access to internal data and the potential to control content-sifting engines, how do you fix a problem that scales so very big and broad?

Regulating such massive, global platforms would clearly not be easy. In some countries Facebook is so dominant it essentially is the Internet.

So, again, this problem looks existential. And Zuck’s 2018 challenge is more Sisyphean than Herculean.

And it might well be that competition concerns are not the only trigger for big tech to get broken up this year.

Featured Image: Quinn Dombrowski/Flickr UNDER A CC BY-SA 2.0 LICENSE

 



Facebook agrees to take a deeper look into Russian Brexit meddling

16:12 | 18 January

Facebook has said it will conduct a wider investigation into whether there was Russian meddling on its platform relating to the 2016 Brexit referendum vote in the UK.

Yesterday its UK policy director Simon Milner wrote to a parliamentary committee that’s been conducting a wide-ranging enquiry into fake news — and whose chair has been witheringly critical of Facebook and Twitter for failing to co-operate with requests for information and assistance on the topic of Brexit and Russia — saying it will widen its investigation, per the committee’s request.

Though he gave no firm deadline for delivering a fresh report — beyond estimating “a number of weeks”.

It’s not clear whether Twitter will also bow to pressure to conduct a more thorough investigation of Brexit-related disinformation. At the time of writing, the company had not responded to our questions.

At the end of last year committee chair Damian Collins warned both companies they could face sanctions for failing to co-operate with the committee’s enquiry — slamming Twitter’s investigations to date as “completely inadequate”, and expressing disbelief that both companies had essentially ignored the committee’s requests.

“You expressed a view that there may be other similar coordinated activity from Russia that we had not yet identified through our investigation and asked for us to continue our investigatory work. We have considered your request and can confirm that our investigatory team is now looking to see if we can identify other similar clusters engaged in coordinated activity around the Brexit referendum that was not identified previously,” writes Milner in the letter to Collins.

“This work requires detailed analysis of historic data by our security experts, who are also engaged in preventing live threats to our service. We are committed to making all reasonable efforts to establish whether or not there was coordinated activity similar to that which we found in the US and will report back to you as soon as the work has been completed.”

Last year Facebook reported finding just three Russian-bought “immigration” ads relating to the Brexit vote, with a spend of less than $1. Twitter, meanwhile, claimed Russian broadcasters had spent around $1,000 to run six Brexit-related ads on its platform.

The companies provided that information in response to the UK’s Electoral Commission, which has been running its own investigation into whether there was any digital misspending relating to the referendum — handing the exact same information to the committee, despite its request for a more wide-ranging probe of Russian meddling.

In its Brexit report, Facebook also looked only at pages and account profiles linked to the Internet Research Agency, the known Russian troll farm it had previously identified in its US election disinformation probe.

Twitter, for its part, apparently made no effort to quantify the volume and influence of Russian-backed bots generating free tweet content around Brexit — so its focus on ads really looks like pure misdirection.

Independent academic studies have suggested there was in fact significant tweet-based activity generated around Brexit by Russian bots.

Last month a report by the US Senate — entitled Putin’s Asymmetric Assault on Democracy in Russia and Europe: Implications for US National Security — also criticized the adequacy of the investigations conducted thus far by Facebook and Twitter into allegations of Russian social media interference vis-a-vis Brexit.

“[I]n limiting their investigation to just the Internet Research Agency, Facebook missed that it is only one troll farm which ‘has existed within a larger disinformation ecosystem in St. Petersburg,’ including Glavset, an alleged successor of the Internet Research Agency, and the Federal News Agency, a reported propaganda ‘media farm,’ according to Russian investigative journalists,” the report authors write.

They also chronicle Collins’ criticism of Twitter’s “completely inadequate” response to the issue.

Featured Image: Bryce Durbin/TechCrunch/Getty Images

 


