Blog of the website «TechCrunch»



Where FaZe Clan sees the future of gaming and entertainment

21:20 | 17 January

Lee Trink has spent nearly his entire career in the entertainment business. The former president of Capitol Records is now the head of FaZe Clan, an esports juggernaut that is one of the most recognizable names in the wildly popular phenomenon of competitive gaming.

Trink sees FaZe Clan as the voice of a new generation of consumers who are finding their voice and their identity through gaming — and it’s a voice that’s increasingly speaking volumes in the entertainment industry through a clutch of competitive esports teams, a clothing and lifestyle brand and a network of creators who feed the appetites of millions of young gamers.

As the company struggles with a lawsuit brought by one of its most famous players, Trink is looking to the future — and setting his sights on new markets and new games as he consolidates FaZe Clan’s role as the voice of a new generation.

“The teams and social media output that we create is all marketing,” he says. “It’s not that we have an overall marketing strategy that we then populate with all of these opportunities. We’re not maximizing all of our brands.”

 



Felix Capital closes $300M fund to double down on DTC, break into fintech and make late-stage deals

12:49 | 15 January

To kick off 2020, one of Europe’s newer — and more successful — investment firms has closed a fresh, oversubscribed fund, one sign that VC in the region will continue to run strong in the year ahead after startups across Europe raised some $35 billion in 2019. Felix Capital, the London firm founded by Frederic Court that was one of the earlier firms to identify and invest in the trend of direct-to-consumer businesses, has raised $300 million, money that it plans to use to continue investing in creative and consumer startups and platform plays, as well as to begin tapping into a newer area, fintech — specifically startups that are focused on consumer finance.

Felix up to now has focused mostly on earlier-stage investments — it now has $600 million under management and 32 companies in its portfolio in eight countries — based across both Europe and the US. Court said in an interview that a portion of this fund will now also go into later, growth rounds, both for companies that Felix has been backing for some time as well as newer faces.

As with the focus of the investments, the make-up of the fund itself has a strong European current: the majority of the LPs are European, Court noted. Although Asia is something it would like to tackle more in the future both as a market for its current portfolio and as an investment opportunity, he added, the firm has yet to invest into the region or substantially raise money from it.

Felix made its debut in 2015, founded by Court after a strong run at Advent Capital, where he was involved in a number of big exits. While Court had been a strong player in enterprise software, Felix was a step-change for him toward a primary focus on consumer startups in fashion, lifestyle and creative pursuits.

That has over the years included investing in companies like the breakout high-fashion marketplace Farfetch (which he started to back when still at Advent and is now public), Gwyneth Paltrow’s GOOP, the jewellery startup Mejuri, trend-watching HighSnobiety, and fitness startup Peloton (which has also IPO’d).

It’s not an altogether easygoing, vanilla list of cool stuff. Peloton and GOOP have been mightily doused in sharky sentiments; and sometimes it even seems as if the brands themselves own and cultivate that image. As the saying goes, there’s no such thing as bad press, I guess.

Although it wasn’t something especially articulated in startup land at the time of Felix’s launch, what the firm was homing in on was a rising category of direct-to-consumer startups: businesses, essentially all in the area of e-commerce, building brands that bypass traditional retailers and retail channels to develop primary relationships with consumers through newer digital channels such as social media, messaging and email (alongside their own DTC websites).

This is not all that the company has focused on, with investments into a range of platform businesses like corporate travel site TravelPerk, Amazon-backed food delivery juggernaut Deliveroo and Moonbug (a platform for children’s entertainment content), as well as increasingly later-stage rounds (for example, it was part of a $104 million round at TravelPerk, a $70 million round for marketplace-building service Mirakl, and a $23 million round for Mejuri).

Court’s track record prior to Felix, and the success of the current firm to date, are two likely reasons why this latest fund was oversubscribed, and why Court says the firm wants to further spread its wings into a wider range of areas and investment stages.

The interest in consumer finance is not such a large step away from these areas, when you consider that they are just the other side of the coin from e-commerce: saving money versus spending money.

“We see this as our prism of opportunity,” said Court. “Just as we had the intuition that there was a space for investors looking at [DTC]… we now think there is enough evidence that there is demand from consumers for new ways of dealing with money and personal finance.”

The firm has from the start operated with a board of advisors who invest money through Felix while also holding down day jobs. They include the likes of executives from eBay, Facebook, and more. David Marcus — whom Court backed when he built payments company Zong and eventually sold it to eBay, before going on to become a major mover and shaker at Facebook, where he now has the possibly Sisyphean task of building Calibra — is on the list, but that has not translated into Felix dabbling in cryptocurrency.

“We are watching cryptocurrency, but if you take a Felix stance on the area, it’s only had one amazing brand so far, bitcoin,” said Court. “The rest, for a consumer, is very difficult to understand and access. It’s still really early, but I’ve got no doubt that there will be some things emerging, particularly around the idea of ‘invisible money.'”

 



In the future, everyone will be famous for 15 followers

02:30 | 11 January

David Teten Contributor
David Teten is an advisor to emerging investment managers and a Venture Partner with HOF Capital. He was previously a partner for 8 years with HOF Capital and ff Venture Capital. David writes regularly at teten.com and @dteten.

Many investors — including me — spend most of our day doing the same things people have always done in our job: in my case, due diligence, deal execution, etc. However, being a “microinfluencer” is now part of the job description.

In the future, everyone will be famous for 15 followers. Traditional celebrities or influencers with millions of followers have a large service industry and tech stack to serve their needs. But the standard toolkit of a microinfluencer is still evolving.

The challenge is that my time and money budget for “influencing” — content creation and marketing — is minimal. Also, since I’m not trying to be a full-time marketer, I can’t use some of the standard celebrity techniques: I can’t pick fights on Twitter, date other celebrities, or swear a lot at conferences. These vectors work for a lot of celebrities and for some businesspeople and politicians, but I’m uncomfortable with them, and they would impede my ability to do the rest of my job. Plus, my wife doesn’t let me date celebrities.

 



Over two dozen encryption experts call on India to rethink changes to its intermediary liability rules

00:24 | 10 January

Security and encryption experts from around the world are joining a number of organizations to call on India to reconsider its proposed amendments to local intermediary liability rules.

In an open letter to India’s IT Minister Ravi Shankar Prasad on Thursday, 27 security and cryptography experts warned the Indian government that if it goes ahead with its originally proposed changes to the law, it could weaken security and limit the use of strong encryption on the internet.

The Indian government proposed (PDF) a series of changes to its intermediary liability rules in late December 2018 that, if enforced, would require millions of services operated by anyone from small and medium businesses to large corporate giants such as Facebook and Google to make significant changes.

The originally proposed rules say that intermediaries — which the government defines as those services that facilitate communication between two or more users and have five million or more users in India — will have to proactively monitor and filter their users’ content and be able to trace the originator of questionable content to avoid assuming full liability for their users’ actions.

“By tying intermediaries’ protection from liability to their ability to monitor communications being sent across their platforms or systems, the amendments would limit the use of end-to-end encryption and encourage others to weaken existing security measures,” the experts wrote in the letter, coordinated by the Internet Society.

With end-to-end encryption, there is no way for the service provider to access its users’ decrypted content, they said. The signatories include individuals who work at Google, Twitter, Access Now, the Tor Project and the World Wide Web Consortium.

“This means that services using end-to-end encryption cannot provide the level of monitoring required in the proposed amendments. Whether it’s through putting a ‘backdoor’ in an encryption protocol, storing cryptographic keys in escrow, adding silent users to group messages, or some other method, there is no way to create ‘exceptional access’ for some without weakening the security of the system for all,” they added.
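To make the experts’ point concrete, here is a minimal sketch of why a relay server in an end-to-end encrypted system has nothing it could monitor, filter or trace. It uses the PyNaCl library; the names and message are invented for illustration, and this is not any particular messenger’s protocol.

    # pip install pynacl -- Python bindings to the NaCl cryptography library
    from nacl.public import PrivateKey, Box

    # Each user generates a keypair; only public keys ever leave the device.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts directly to Bob's public key.
    sending_box = Box(alice_key, bob_key.public_key)
    ciphertext = sending_box.encrypt(b"meet at the usual place")

    # The service provider relays `ciphertext` but holds no private key:
    # all it sees is random-looking bytes, so there is no plaintext for it
    # to proactively monitor or filter.
    assert b"usual place" not in ciphertext

    # Only Bob, holding his private key, can decrypt.
    receiving_box = Box(bob_key, alice_key.public_key)
    assert receiving_box.decrypt(ciphertext) == b"meet at the usual place"

Any mechanism that gave the relay access (a key in escrow, a silent extra recipient) would, as the letter argues, weaken that guarantee for every user.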

Technology giants have so far enjoyed what are known as “safe harbor” laws. These laws, currently applicable in the U.S. under the Communications Decency Act and in India under its 2000 Information Technology Act, say that tech platforms won’t be held liable for the things their users share on the platform.

Many organizations have expressed reservations in recent days about the proposed changes to the law. Earlier this week, Mozilla, GitHub and Cloudflare asked the Indian government to be transparent about the proposals it has made to the intermediary liability rules. Nobody outside the Indian government has seen the current draft of the proposal, which it plans to submit to India’s Supreme Court for approval by January 15.

Among the concerns raised by some is the vague definition of “intermediary” itself. Critics say the last publicly known version of the draft had an extremely broad definition of the term, one that would be applicable to a wide range of service providers, including popular instant messaging clients, internet service providers, cyber cafes and even Wikipedia.

Amanda Keton, general counsel of the Wikimedia Foundation, late last month asked the Indian government to rethink the requirement to bring “traceability” to online communication; doing so, she warned, would interfere with the ability of Wikipedia contributors to freely participate in the project.

A senior executive with an American technology company, who requested anonymity, told TechCrunch on Wednesday that even though the proposed changes to the intermediary guidelines need major revision, it is high time the Indian government looked into the issue at all.

“Action on social media platforms and instant communications services is causing damage in the real world. The spread of hoaxes has cost us at least 30 lives. If tomorrow someone’s sensitive photos and messages leak on the internet, there is currently little they can expect from their service providers. We need a law to deal with the modern internet’s challenges,” he said.

 



Facebook won’t ban political ads, prefers to keep screwing democracy

17:48 | 9 January

It’s 2020 — a key election year in the US — and Facebook is doubling down on its policy of letting people pay it to fuck around with democracy.

Despite trenchant criticism — including from US lawmakers, who accused Facebook’s CEO to his face of damaging American democracy — the company is digging in, announcing as much today by reiterating its defence of continuing to accept money to run microtargeted political ads.

Instead of banning political ads Facebook is trumpeting a few tweaks to the information it lets users see about political ads — claiming it’s boosting “transparency” and “controls” while leaving its users vulnerable to default settings that offer neither.  

Political ads running on Facebook can be targeted at individuals’ preferences as a result of the company’s pervasive tracking and profiling of Internet users. And ethical concerns about microtargeting led the UK’s data protection watchdog to call in 2018 for a pause on the use of digital ad tools like Facebook by political campaigns — warning of grave risks to democracy.

Facebook isn’t for pausing political microtargeting, though — even as various elements of its data-gathering activities are subject to privacy and consent complaints, regulatory scrutiny and legal challenge in Europe, under regional data protection legislation.

Instead, the company made it clear last fall that it won’t fact-check political ads, nor block political messages that violate its speech policies — thereby giving politicians carte blanche to run hateful lies, if they so choose.

Facebook’s algorithms also demonstrably select for maximum eyeball engagement, making it simply the ‘smart choice’ for the modern digitally campaigning politician to run outrageous BS on Facebook — as long-time Facebook exec Andrew Bosworth recently pointed out in an internal posting that leaked in full to the NYT.

Facebook founder Mark Zuckerberg’s defence of his social network’s political ads policy boils down to repeatedly claiming ‘it’s all free speech man’ (we paraphrase).

This is an entirely nuance-free argument that comedian Sacha Baron Cohen expertly demolished last year, pointing out that: “Under this twisted logic if Facebook were around in the 1930s it would have allowed Hitler to post 30-second ads on his solution to the ‘Jewish problem.’”

Facebook responded to the take-down with a denial that hate speech exists on its platform since it has a policy against it — per its typical crisis PR playbook. And it’s more of the same selectively self-serving arguments being dispensed by Facebook today.

In a blog post attributed to its director of product management, Rob Leathern, it expends more than 1,000 words on why it’s still not banning political ads (it would be bad for advertisers wanting to reach “key audiences”, is the non-specific claim) — including making a diversionary call for regulators to set ad standards, thereby passing the buck on ‘democratic accountability’ to lawmakers (whose electability might very well depend on how many Facebook ads they run…), while spinning cosmetic, made-for-PR tweaks to its ad settings and what’s displayed in an ad archive that most Facebook users will never have heard of as “expanded transparency” and “more control”.

In fact these tweaks do nothing to reform the fundamental problem of damaging defaults.

The onus remains on Facebook users to do the leg work on understanding what its platform is pushing at their eyeballs and why.

Even the ‘extra’ info now being drip-fed to the Ad Library remains highly fuzzy. (“We are adding ranges for Potential Reach, which is the estimated target audience size for each political, electoral or social issue ad so you can see how many people an advertiser wanted to reach with every ad,” as Facebook writes of one tweak.)

The new controls similarly require users to delve into complex settings menus in order to avail themselves of inherently incremental limits — such as an option that will let people opt into seeing “fewer” political and social issue ads. (Fewer is naturally relative, ergo the scale of the reduction remains entirely within Facebook’s control — so it’s more meaningless ‘control theatre’ from the lord of dark pattern design. Why can’t people switch off political and issue ads entirely?)

Another incremental setting lets users “stop seeing ads based on an advertiser’s Custom Audience from a list”.

But just imagine trying to explain WTF that means to your parents or grandparents — let alone an average Internet user actually being able to track down the ‘control’ and exercise any meaningful agency over the political junk ads they’re being exposed to on Facebook.
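For what it’s worth, the mechanics behind a list-based Custom Audience are roughly as follows: the advertiser normalizes and hashes a contact list, and the platform matches those hashes against its own user records. Below is a simplified sketch; the real upload flow goes through Facebook’s Marketing API and accepts more identifier types than email.

    import hashlib

    def match_key(email):
        # Match keys are normalized (trimmed, lowercased), then SHA-256 hashed.
        return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

    # The advertiser's own contact list, e.g. a CRM export.
    crm_emails = ["Jane.Doe@example.com ", "sam@example.org"]
    uploaded_hashes = {match_key(e) for e in crm_emails}

    # Platform side: hash each user's email the same way and compare.
    # Anyone who matches joins the audience and can be shown the ads.
    user_record = {"user_id": 1187, "email": "jane.doe@example.com"}
    print(match_key(user_record["email"]) in uploaded_hashes)  # True

The user-facing ‘control’, note, is only the option to stop seeing the resulting ads, not to prevent being matched in the first place.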

It is, to quote Baron Cohen, “bullshit”.

Nor are outsiders the only ones calling out Zuckerberg on his BS and “twisted logic”: A number of Facebook’s own employees warned in an open letter last year that allowing politicians to lie in Facebook ads essentially weaponizes the platform.

They also argued that the platform’s advanced targeting and behavioral tracking tools make it “hard for people in the electorate to participate in the public scrutiny that we’re saying comes along with political speech” — accusing the company’s leadership of making disingenuous arguments in defence of a toxic, anti-democratic policy. 

Nothing in what Facebook has announced today resets the anti-democratic asymmetry inherent in the platform’s relationship to its users.

Facebook users — and democratic societies — remain, by default, preyed upon by self-interested political interests thanks to Facebook’s policies which are dressed up in a self-interested misappropriation of ‘free speech’ as a cloak for its unfettered exploitation of individual attention as fuel for a propaganda-as-service business.

Yet other policy positions are available.

Twitter announced a total ban on political ads last year — and while the move doesn’t resolve wider disinformation issues attached to its platform, the decision to bar political ads has been widely lauded as a positive, standard-setting example.

Google also followed suit by announcing a ban on “demonstrably false claims” in political ads. It also put limits on the targeting terms that can be used for political advertising buys that appear in search, on display ads and on YouTube.

Still Facebook prefers to exploit “the absence of regulation”, as its blog post puts it, to not do the right thing and keep sticking two fingers up at democratic accountability — because not applying limits on behavioral advertising best serves its business interests. Screw democracy.

“We have based [our policies] on the principle that people should be able to hear from those who wish to lead them, warts and all, and that what they say should be scrutinized and debated in public,” Facebook writes, ignoring the fact that some of its own staff already pointed out the sketchy hypocrisy of trying to claim that complex ad targeting tools and techniques are open to public scrutiny.

 



Will online privacy make a comeback in 2020?

23:15 | 8 January

Last year was a landmark for online privacy in many ways, with something of a consensus emerging that consumers deserve protection from the companies that sell their attention and behavior for profit.

The debate now is largely around how to regulate platforms, not whether it needs to happen.

The consensus among key legislators acknowledges that privacy is not just of benefit to individuals but can be likened to public health: a level of protection afforded to each of us helps inoculate democratic societies from manipulation by vested and vicious interests.

The fact that human rights are being systematically abused at population-scale because of the pervasive profiling of Internet users — a surveillance business that’s dominated in the West by tech giants Facebook and Google, and the adtech and data broker industry which works to feed them — was the subject of an Amnesty International report in November 2019 that urges legislators to take a human rights-based approach to setting rules for Internet companies.

“It is now evident that the era of self-regulation in the tech sector is coming to an end,” the charity predicted.

Democracy disrupted

The dystopian outgrowth of surveillance capitalism was certainly in awful evidence in 2019, with elections around the world attacked at cheap scale by malicious propaganda that relies on adtech platforms’ targeting tools to hijack and skew public debate, while the chaos agents themselves are shielded from democratic view.

Platform algorithms are also still encouraging Internet eyeballs towards polarized and extremist views by feeding a radicalized, data-driven diet that panders to prejudices in the name of maintaining engagement — despite plenty of raised voices calling out the programmed antisocial behavior. So what tweaks there have been still look like fiddling round the edges of an existential problem.

Worse still, vulnerable groups remain at the mercy of online hate speech which platforms not only can’t (or won’t) weed out, but whose algorithms often seem to deliberately choose to amplify — the technology itself being complicit in whipping up violence against minorities. It’s social division as a profit-turning service.

The outrage-loving tilt of these attention-hogging adtech giants has also continued directly influencing political campaigning in the West this year — with cynical attempts to steal votes by shamelessly platforming and amplifying misinformation.

From the Trump tweet-bomb we now see full-blown digital disops underpinning entire election campaigns, such as the UK Conservative Party’s strategy in the 2019 winter General Election, which featured doctored videos seeded to social media and keyword targeted attack ads pointing to outright online fakes in a bid to hack voters’ opinions.

Political microtargeting divides the electorate as a strategy to conquer the poll. The problem is it’s inherently anti-democratic.

No wonder, then, that repeat calls to beef up digital campaigning rules and properly protect voters’ data have so far fallen on deaf ears. The political parties all have their hands in the voter data cookie-jar. Yet it’s elected politicians whom we rely upon to update the law. This remains a grave problem for democracies going into 2020 — and a looming U.S. presidential election.

So it’s been a year when, even with rising awareness of the societal cost of letting platforms suck up everyone’s data and repurpose it to sell population-scale manipulation, not much has actually changed. Certainly not enough.

Yet looking ahead there are signs the writing is on the wall for the ‘data industrial complex’ — or at least that change is coming. Privacy can make a comeback.

Adtech under attack

Developments in late 2019 such as Twitter banning all political ads and Google shrinking how political advertisers can microtarget Internet users are notable steps — even as they don’t go far enough.

But it’s also a relatively short hop from banning microtargeting sometimes to banning profiling for ad targeting entirely.

Alternative online ad models (contextual targeting) are proven and profitable — just ask search engine DuckDuckGo. And the ad industry gospel that only behavioral targeting will do now has academic critics, who suggest it offers far less uplift than claimed, even as — in Europe — scores of data protection complaints underline the high individual cost of maintaining the status quo.
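The distinction is easy to state in code. Here is a deliberately minimal, illustrative sketch of contextual targeting, where the ad is picked from the page being read rather than from any profile of the reader; the keywords and inventory are invented for the example.

    # No user identity, history or profile is consulted -- only page content.
    AD_INVENTORY = {
        "cycling": "Ad: lightweight road bikes",
        "mortgage": "Ad: compare fixed-rate deals",
        "recipe": "Ad: cast-iron cookware",
    }

    def pick_contextual_ad(page_text):
        words = set(page_text.lower().split())
        for keyword, ad in AD_INVENTORY.items():
            if keyword in words:
                return ad
        return "Ad: untargeted house ad"

    print(pick_contextual_ad("Five scenic cycling routes to try this spring"))

Everything a behavioral system would need (an identifier, a history, a profile) is simply absent, which is the privacy argument in a nutshell.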

Startups are also innovating in the pro-privacy adtech space (see, for example, the Brave browser).

Changing the system — turning the adtech tanker — will take huge effort, but there is a growing opportunity for just such systemic change.

This year, it might be too much to hope that regulators will get their act together enough to outlaw consent-less profiling of Internet users entirely. But it may be that those who have sought to proclaim ‘privacy is dead’ will find their unchecked data gathering facing death by a thousand regulatory cuts.

Or tech giants like Facebook and Google may simply outrun the regulators by reengineering their platforms to cloak vast personal data empires with end-to-end encryption, making it harder for outsiders to regulate them, even as they retain enough of a fix on the metadata to stay in the surveillance business. Fixing that would likely require much more radical regulatory intervention.

European regulators are, whether they like it or not, in this race and under major pressure to enforce the bloc’s existing data protection framework. It seems likely to ding some current-gen digital tracking and targeting practices. And depending on how key decisions on a number of strategic GDPR complaints go, 2020 could see an unpicking — great or otherwise — of components of adtech’s dysfunctional ‘norm’.

Among the technologies under investigation in the region is real-time bidding (RTB): a system that powers a large chunk of programmatic digital advertising.

The complaint here is that RTB breaches the bloc’s General Data Protection Regulation (GDPR), because it’s inherently insecure to broadcast granular personal data to the scores of entities involved in the bidding chain.
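To see why ‘broadcast’ is the operative word, consider an illustrative, heavily simplified bid request in the OpenRTB style used across programmatic advertising. Field names follow the OpenRTB convention; all values here are fake.

    import json

    bid_request = {
        "id": "auction-8491",
        "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],
        # What the person is reading -- potentially sensitive in itself.
        "site": {"page": "https://example.com/health/depression-support"},
        "device": {
            "ip": "203.0.113.7",                   # approximate location
            "geo": {"lat": 51.5, "lon": -0.12},
            "ifa": "6D92078A-8246-4BA4-AE5B-76104861E7DC",  # ad identifier
        },
        "user": {"id": "exchange-user-41872"},     # cross-site identifier
    }

    # The exchange sends the same payload to every connected bidder,
    # win or lose -- and nothing technical stops a losing bidder from
    # retaining the personal data it has just received.
    payload = json.dumps(bid_request)
    for bidder in ("dsp-a.example", "dsp-b.example", "dsp-c.example"):
        print("POST https://%s/bid  (%d bytes of user data)" % (bidder, len(payload)))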

A review by the UK’s data watchdog confirmed plenty of troubling findings. Google responded by removing some information from bid requests — though complainants contend it does not go far enough. Nothing short of removing personal data entirely will do in their view, which sums to ads that are contextually (not micro)targeted.

Powers that EU data protection watchdogs have at their disposal to deal with violations include not just big fines but data processing orders — which means corrective relief could be coming to take chunks out of data-dependent business models.

As noted above, the adtech industry has already been put on watch this year over current practices, even as it was given a generous half-year grace period to adapt.

In the event it seems likely that turning the ship will take longer. But the message is clear: change is coming. The UK watchdog is due to publish another report in 2020, based on its review of the sector. Expect that to further dial up the pressure on adtech.

Web browsers have also been doing their bit by baking in more tracker blocking by default. And this summer Marketing Land proclaimed the third party cookie dead — asking what’s next?

Alternatives and workarounds are springing up, and more will follow (such as stuffing more in via first-party cookies). But the notion of tracking by background default is under attack, if not quite yet coming unstuck.
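As a sketch of that workaround: browsers increasingly block cookies set by third-party tracker domains, so the tracking identifier is instead set server-side under the publisher’s own domain, where default blocking does not reach. The snippet below is a minimal illustration using Flask; the route, cookie name and value are invented.

    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/article")
    def article():
        resp = make_response("<html>...article content...</html>")
        # First-party context: the cookie belongs to the publisher's own
        # domain, so third-party cookie blocking does not touch it. The
        # value can still be synced to a tracker via server-side calls.
        resp.set_cookie("pub_uid", "u-93817", max_age=60 * 60 * 24 * 365)
        return resp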

Ireland’s DPC is also progressing on a formal investigation of Google’s online Ad Exchange. Further real-time bidding complaints have been lodged across the EU too. This is an issue that won’t be going away soon, however much the adtech industry might wish it.

Year of the GDPR banhammer?

2020 is the year that privacy advocates are really hoping that Europe will bring down the hammer of regulatory enforcement. Thousands of complaints have been filed since the GDPR came into force but precious few decisions have been handed down. Next year looks set to be decisive — even potentially make or break for the data protection regime.

 



Facebook bans deceptive deepfakes and some misleadingly modified media

14:01 | 7 January

Facebook wants to be the arbiter of truth after all. At least when it comes to intentionally misleading deepfakes and heavily manipulated and/or synthesized media content, such as AI-generated photorealistic human faces that look like real people but aren’t.

In a policy update announced late yesterday, the social network’s VP of global policy management, Monika Bickert, writes that it will take a stricter line on manipulated media content from here on in — removing content that’s been edited or synthesized “in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say”.

However, edits for quality, or cuts and splices to videos that simply curtail or change the order of words, are not covered by the ban.

Which means that disingenuous doctoring — such as this example from the recent UK General Election (where campaign staff for one political party edited a video of a politician from a rival party who was being asked a question about Brexit, to make it look like he was lost for words when in fact he wasn’t) — will go entirely untouched by the new ‘tougher’ policy. Ergo there’s little to trouble Internet-savvy political ‘truth’ spinners here. The disingenuous digital campaigning can go on.

Instead of grappling with that sort of subtle political fakery, Facebook is focusing on quick PR wins — around the most obviously inauthentic stuff where it won’t risk accusations of partisan bias if it pulls bogus content.

Hence the new policy bans deepfake content that involves the use of AI technologies to “merge, replace or superimpose content onto a video, making it appear to be authentic” — which looks as if it will capture the crudest stuff, such as revenge deepfake porn which superimposes a real person’s face onto an adult performer’s body (albeit nudity is already banned on Facebook’s platform).

It’s not a blanket ban on deepfakes either, though — with some big carve-outs for “parody or satire”.

So it’s a bit of an open question whether this deepfake video of Mark Zuckerberg, which went viral last summer — seemingly showing the Facebook founder speaking like a megalomaniac — would stay up or not under the new policy. The video’s creators, a pair of artists, described the work as satire so such stuff should survive the ban. (Facebook did also leave it up at the time.)

But, in future, deepfake creators are likely to further push the line to see what they can get away with under the new policy.

The social network’s controversial policy of letting politicians lie in ads also means it could, technically, still give pure political deepfakes a pass — i.e. if a political advertiser was paying it to run purely bogus content as an ad. Though it would be a pretty bold politician to try that.

More likely there’s more mileage for political campaigns and opinion influencers to keep on with more subtle manipulations. Such as the doctored video of House speaker Nancy Pelosi that went viral on Facebook last year, which had slowed down audio that made her sound drunk or ill. The Washington Post suggests that video — while clearly potentially misleading — still wouldn’t qualify to be taken down under Facebook’s new ‘tougher’ manipulated media policy.

Bickert’s blog post stipulates that manipulated content which doesn’t meet Facebook’s new standard for removal may still be reviewed by the independent third party fact-checkers Facebook relies upon for the lion’s share of ‘truth sifting’ on its platform — and who may still rate such content as ‘false’ or ‘partly false’. But she emphasizes it will continue to allow this type of bogus content to circulate (while potentially reducing its distribution), claiming such labelled fakes provide helpful context.

So Facebook’s updated position on manipulated media sums to ‘no to malicious deepfakes but spindoctors please carry on’.

“If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false,” Bickert writes, claiming: “This approach is critical to our strategy and one we heard specifically from our conversations with experts.

“If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labelling them as false, we’re providing people with important information and context.”

Last month Facebook announced it had unearthed a network of more than 900 fake accounts that had been spreading pro-Trump messaging — some of which had used false profile photos generated by AI.

The dystopian development provides another motivation for the tech giant to ban ‘pure’ AI fakes, given the technology risks supercharging its fake accounts problem. (And, well, that could be bad for business.)

“Our teams continue to proactively hunt for fake accounts and other coordinated inauthentic behavior,” suggests Bickert, arguing that: “Our enforcement strategy against misleading manipulated media also benefits from our efforts to root out the people behind these efforts.”

While still relatively nascent as a technology, deepfakes have shown themselves to be catnip to the media which loves the spectacle they create. As a result, the tech has landed unusually quickly on legislators’ radars as a disinformation risk — California implemented a ban on political deepfakes around elections this fall, for example — so Facebook is likely hoping to score some quick and easy political points by moving in step with legislators even as it applies its own version of a ban.

Bickert’s blog post also fishes for further points, noting Facebook’s involvement in a Deep Fake Detection Challenge which was announced last fall — “to produce more research and open source tools to detect deepfakes”.

Bickert also says Facebook has been working with news agency Reuters to offer free online training courses to help journalists identify manipulated visuals.

“As these partnerships and our own insights evolve, so too will our policies toward manipulated media. In the meantime, we’re committed to investing within Facebook and working with other stakeholders in this area to find solutions with real impact,” she adds.

 



Facebook data misuse and voter manipulation back in the frame with latest Cambridge Analytica leaks

18:17 | 6 January

More details are emerging about the scale and scope of disgraced data company Cambridge Analytica’s activities in elections around the world — via a cache of internal documents that’s being released by former employee and self-styled whistleblower, Brittany Kaiser.

The now shut down data modelling company, which infamously used stolen Facebook data to target voters for President Donald Trump’s campaign in the 2016 U.S. election, was at the center of the data misuse scandal that, in 2018, wiped billions off Facebook’s share price and contributed to a $5BN FTC fine for the tech giant last summer.

However plenty of questions remain, including where, for whom and exactly how Cambridge Analytica and its parent entity SCL Elections operated; as well as how much Facebook’s leadership knew about the dealings of the firm that was using its platform to extract data and target political ads — helped by some of Facebook’s own staff.

Certain Facebook employees were referring to Cambridge Analytica as a “sketchy” company as far back as September 2015 — yet the tech giant only pulled the plug on platform access after the scandal went global in 2018.

Facebook CEO Mark Zuckerberg has also continued to maintain that he only personally learned about CA from a December 2015 Guardian article, which broke the story that Ted Cruz’s presidential campaign was using psychological data based on research covering tens of millions of Facebook users, harvested largely without permission. (It wasn’t until March 2018 that further investigative journalism blew the lid off the story — turning it into a global scandal.)

Former Cambridge Analytica business development director Kaiser, who had a central role in last year’s Netflix documentary about the data misuse scandal (The Great Hack), began her latest data dump late last week — publishing links to scores of previously unreleased internal documents via a Twitter account called @HindsightFiles. (At the time of writing Twitter has placed a temporary limit on viewing the account — citing “unusual activity”, presumably as a result of the volume of downloads it’s attracting.)

Since becoming part of the public CA story Kaiser has been campaigning for Facebook to grant users property rights over their data. She claims she’s releasing new documents from her former employer now because she’s concerned this year’s US election remains at risk of the same type of big-data-enabled voter manipulation that tainted the 2016 result.

“I’m very fearful about what is going to happen in the US election later this year, and I think one of the few ways of protecting ourselves is to get as much information out there as possible,” she told The Guardian.

“Democracies around the world are being auctioned to the highest bidder,” is the tagline on the Twitter account Kaiser is using to distribute the previously unpublished documents — more than 100,000 of which are set to be released over the coming months, per the newspaper’s report.

The releases are being grouped into countries — with documents to-date covering Brazil, Kenya and Malaysia. There is also a themed release dealing with issues pertaining to Iran, and another covering CA/SCL’s work for Republican John Bolton’s Political Action Committee in the U.S.

The releases look set to underscore the global scale of CA/SCL’s social media-fuelled operations, with Kaiser suggesting that the previously unreleased emails, project plans, case studies and negotiations span at least 65 countries.

A spreadsheet of associate officers included in the current cache lists SCL associates in a large number of countries and regions including Australia, Argentina, the Balkans, India, Jordan, Lithuania, the Philippines, Switzerland and Turkey, among others. A second tab listing “potential” associates covers political and commercial contacts in various other places including Ukraine and even China.

A UK parliamentary committee which investigated online political campaigning and voter manipulation in 2018 — taking evidence from Kaiser and CA whistleblower Chris Wylie, among others — urged the government to audit the PR and strategic communications industry, warning in its final report how “easy it is for discredited companies to reinvent themselves and potentially use the same data and the same tactics to undermine governments, including in the UK”.

“Data analytics firms have played a key role in elections around the world. Strategic communications companies frequently run campaigns internationally, which are financed by less than transparent means and employ legally dubious methods,” the DCMS committee also concluded.

The committee’s final report highlighted election and referendum campaigns SCL Elections (and its myriad “associated companies”) had been involved in in around thirty countries. But per Kaiser’s telling its activities — and/or ambitions — appear to have been considerably broader and even global in scope.

Documents released to date include a case study of work that CA was contracted to carry out in the U.S. for Bolton’s Super PAC — where it undertook what is described as “a personality-targeted digital advertising campaign with three interlocking goals: to persuade voters to elect Republican Senate candidates in Arkansas, North Carolina and New Hampshire; to elevate national security as an issue of importance and to increase public awareness of Ambassador Bolton’s Super PAC”.

Here CA writes that it segmented “persuadable and low-turnout voter populations to identify several key groups that could be influenced by Bolton Super PAC messaging”, targeting them with online and Direct TV ads — designed to “appeal directly to specific groups’ personality traits, priority issues and demographics”. 

Psychographic profiling — derived from CA’s modelling of Facebook user data — was used to segment U.S. voters into targetable groups, including for serving microtargeted online ads. The company badged voters with personality-specific labels such as “highly neurotic” — targeting individuals with customized content designed to prey on their fears and/or hopes, based on its analysis of voters’ personality traits.
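Mechanically, the segmentation step described here is nothing exotic, which is part of what made it scalable. The sketch below is purely illustrative: the scores, threshold and label are invented, and CA’s actual models and data were proprietary.

    # Modelled OCEAN-style trait scores, e.g. derived from social data.
    voters = [
        {"id": "v-001", "neuroticism": 0.91},
        {"id": "v-002", "neuroticism": 0.35},
        {"id": "v-003", "neuroticism": 0.78},
    ]

    def label(voter):
        # "Highly neurotic" is the kind of badge the documents describe.
        return "highly_neurotic" if voter["neuroticism"] > 0.7 else "baseline"

    segments = {}
    for v in voters:
        segments.setdefault(label(v), []).append(v["id"])

    # Each segment is then matched to creative tuned to its modelled
    # traits: fear-framed messaging for one bucket, hope-framed for another.
    print(segments)  # {'highly_neurotic': ['v-001', 'v-003'], 'baseline': ['v-002']}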

The process of segmenting voters by personality and sentiment was made commercially possible by access to identity-linked personal data — which puts Facebook’s population-scale collation of identities and individual-level personal data squarely in the frame.

It was a cache of tens of millions of Facebook profiles, along with responses to a personality quiz app linked to Facebook accounts, which was sold to Cambridge Analytica in 2014, by a company called GSR, and used to underpin its psychographic profiling of U.S. voters.

In evidence to the DCMS committee last year GSR’s co-founder, Aleksandr Kogan, argued that Facebook did not have a “valid” developer policy at the time, since he said the company did nothing to enforce the stated T&Cs — meaning users’ data was wide open to misappropriation and exploitation.

The UK’s data protection watchdog also took a dim view. In 2018 it issued Facebook with the maximum fine possible, under relevant national law, for the CA data breach — and warned in a report that democracy is under threat. The country’s information commissioner also called for an “ethical pause” of the use of online microtargeting ad tools for political campaigning.

No such pause has taken place.

Meanwhile for its part, since the Cambridge Analytica scandal snowballed into global condemnation of its business, Facebook has made loud claims to be ‘locking down’ its platform — including saying it would conduct an app audit and “investigate all apps that had access to large amounts of information”; “conduct a full audit of any app with suspicious activity”; and “ban any developer from our platform that does not agree to a thorough audit”.

However, close to two years later, there’s still no final report from the company on the upshot of this self ‘audit’.

And while Facebook was slapped with a headline-grabbing FTC fine on home soil, there was in fact no proper investigation; no requirement for it to change its privacy-hostile practices; and blanket immunity for top execs — even for any unknown data violations in the 2012 to 2018 period. So, ummm

In another highly curious detail, GSR’s other co-founder, a data scientist called Joseph Chancellor, was in fact hired by Facebook in late 2015. The tech giant has never satisfactorily explained how it came to recruit one of the two individuals at the center of a voter manipulation data misuse scandal which continues to wreak hefty reputational damage on Zuckerberg and his platform. But being able to ensure Chancellor was kept away from the press during a period of intense scrutiny looks pretty convenient.

Last fall, the GSR co-founder was reported to have left Facebook — as quietly, and with as little explanation given, as when he arrived on the tech giant’s payroll.

So Kaiser seems quite right to be concerned that the data industrial complex will do anything to keep its secrets — given it’s designed and engineered to sell access to yours. Even as she has her own reasons to want to keep the story in the media spotlight.

Platforms whose profiteering purpose is to track and target people at global scale — which function by leveraging an asymmetrical ‘attention economy’ — have zero incentive to change or have change imposed upon them. Not when the propaganda-as-a-service business remains in such high demand, whether for selling actual things like bars of soap, or for hawking ideas with a far darker purpose.

 



Twitter offers more support to researchers — to ‘keep us accountable’

13:07 | 6 January

Twitter has kicked off the New Year by taking the wraps off a new hub for academic researchers to more easily access information and support around its APIs — saying the move is in response to feedback from the research community.

The new page — which it’s called ‘Twitter data for academic researchers’ — can be found here.

It includes links to apply for a developer account to access Twitter’s APIs; details of the different APIs offered and links to additional tools for researchers, covering data integration and access; analysis; visualization; and infrastructure and hosting.
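By way of example, once a developer account is approved, pulling public Tweets for analysis is a single authenticated request to the standard v1.1 search API. The bearer token below is a placeholder; the endpoint and parameters are the documented ones for the standard search tier.

    import requests

    BEARER_TOKEN = "YOUR-APP-BEARER-TOKEN"  # issued with a developer account

    resp = requests.get(
        "https://api.twitter.com/1.1/search/tweets.json",
        params={"q": "disinformation", "count": 10, "tweet_mode": "extended"},
        headers={"Authorization": "Bearer " + BEARER_TOKEN},
    )
    resp.raise_for_status()

    for tweet in resp.json()["statuses"]:
        print(tweet["user"]["screen_name"], "-", tweet["full_text"][:80])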

“Over the past year, we’ve worked with many of you in the academic research community. We’ve learned about the challenges you face, and how Twitter can better support you in your efforts to advance understanding of the public conversation,” the social network writes, saying it wants to “make it even easier to learn from the public conversation”.

Twitter is also promising “more enhancements and resources” for researchers this year.

It’s likely no accident the platform is putting a fresh lick of paint on its offerings for academics given that 2020 is a key election year in the U.S. — and concerns about the risk of fresh election meddling are riding high.

Tracking conversation flow on Twitter also still means playing a game of ‘bot or not’ — one that has major implications for the health of democracies. And in Europe Twitter is one of a number of platform giants which, in 2018, signed up to a voluntary Code of Practice on disinformation that commits it to addressing fake accounts and online bots, as well as to empowering the research community to monitor online disinformation via “privacy-compliant” access to platform data.

“At Twitter, we value the contributions of academic researchers and see the potential for them to help us better understand our platform, keeping us accountable, while helping us tackle new challenges through discoveries and innovations,” the company writes on the new landing page for researchers while also taking the opportunity to big up the value of its platform — claiming that “if it exists, it’s probably been talked about on Twitter”.

If Twitter lives up to its promises of active engagement with researchers and their needs, it could smartly capitalize on rival Facebook’s parallel missteps in support for academics.

Last year Facebook was accused of ‘transparency-washing’ with its own API for researchers, with a group of sixty academics slamming the ad archive API as being as much a hindrance as a help.

Months later, Facebook was reported to have done little to improve the offering.

 



India’s ruling party accused of running deceptive Twitter campaign to gain support for a controversial law

20:48 | 4 January

Bharatiya Janata Party, the ruling party in India, has been accused of running a highly deceptive Twitter campaign to trick citizens into supporting a controversial law.

First, some background: The Indian government passed the Citizenship Amendment Act (CAA) last month that eases the path of non-Muslim minorities from the neighboring Muslim-majority nations of Afghanistan, Bangladesh and Pakistan to gain Indian citizenship.

But, combined with a proposed national register of citizens, critics have cautioned that it discriminates against minority Muslims in India and chips away at India’s secular traditions.

Over the past few weeks, tens of thousands of people in the country — if not more — have participated in peaceful protests across the nation against the law. The Indian government, which has temporarily cut internet access and mobile communications in many parts of India to contain the protests, has so far shown no signs of withdrawing the law.

On Saturday, it may have found a new way to gain support for it, however.

India’s Home Minister Amit Shah on Thursday shared a phone number, urging citizens to place a call to that number in “support of the CAA law.”

Thousands of people in India today, many affiliated with the BJP party, began sharing that number with the promise that anyone who places a call would be offered job opportunities, free mobile data, Netflix credentials, and even company with “lonely women.”

Huffington Post India called the move the latest “BJP ploy” to win support for the controversial law. BoomLive, a fact-checking organization based in India, reported the affiliation of many of these people with the ruling party.

We have reached out to a BJP spokesperson and Twitter spokespeople for comment.

If the allegations are true, this won’t be the first time BJP has used Twitter to aggressively promote its views. In 2017, BuzzFeed News reported that a number of political hashtags that appeared in Twitter’s top 10 trends column in India were the result of organized campaigns.

Pratik Sinha, co-founder of fact-checking website Alt News, last year demonstrated how easy it was to manipulate many politicians in the country into tweeting certain things, after he gained access to a Google document of prepared statements and tinkered with the content.

Last month, snowfall in Kashmir — a highly sensitive region that hasn’t had an internet connection for more than four months — began trending on Twitter in the U.S. It mysteriously disappeared after many journalists questioned how it made it to the list.

When we reached out, a Twitter spokesperson in India pointed TechCrunch to an FAQ article that explained how Trending Topics work. Nothing in the FAQ article addressed the question.

 

