

Facebook inks music licensing deal with ICE covering 160 territories, 290K rightsholders on FB, Insta, Oculus and Messenger

13:39 | 21 February

Facebook today took its latest step towards making good on paying out royalties to music rightsholders around tracks that are used across its multiple platforms and networks. The company has signed a deal with ICE Services — a licensing group and copyright database of some 31 million works that represents PRS in the UK, STIM in Sweden and GEMA in Germany — to provide music licensing services for works and artists represented by the group, when their music is used on Facebook, Instagram, Oculus and Messenger.

The deal is significant because, as ICE describes it, it’s the first multi-territorial license Facebook has signed with an online licensing hub: it will cover 160 territories and 290,000 rightsholders.

So what will this be used for? Facebook has moved into a lot of different services over the years, but a streaming music operation to compete with the likes of Spotify, Pandora and Apple Music has not been one of them. However, in recent times it has been laying the groundwork to sign deals with record labels and others to make sure that the music that is used in videos and other items posted to its sites is legit and paid for to avoid lawsuits, takedown requests, and — yes — potentially the creation of new music-based services down the road, as it starts to tap into the opportunities that music affords it.

These days, music is a particularly interesting turn for Facebook. The social network has run into a lot of controversy for its prominent role in aggregating and distributing news to the world — with a significant part of that news turning out to be misleading and potentially damaging to public opinion and larger issues like the democratic process. Facebook, in turn, is looking for new and alternative content to continue driving people to its platform, and music could help it strike the right note, so to speak.

There are no financial terms being announced today by ICE and Facebook (we’ve asked), but a report in September alleged that the social network is cutting deals in the hundreds of millions of dollars to set this right.

Other deals that Facebook has cut in the past several months have included an agreement with Universal Music Group over user-generated videos; another with Sony/ATV; and a third with Kobalt, HFA/Rumblefish and Global Music Rights. Facebook has also pursued a secondary route of giving creators access to “no-name” music via a new service it’s launched called Sound Collection. ICE represents a number of artists and labels who would fall outside of those agreements either because of territorial coverage and/or label and licensing ties.

The deal will not only cover videos and the like that are uploaded by Facebook’s 2.1 billion registered users (1.4 billion daily users) to Facebook, Instagram, Oculus and Messenger; the licensed music will also be added to a catalogue that people can tap into when they are creating content and adding it to those platforms from scratch. Sound Collection isn’t cited by name in the press release from ICE, but it sounds like it could be a part of that effort, meaning this could be the first time that Facebook adds premium music to Sound Collection.

“We are delighted to continue deepening our relationship with music by partnering with ICE in a first-of-its-kind licensing deal,” said Anjali Southward, Head of International Music Publishing Business Development at Facebook, in a statement. “Facebook’s journey with music is just beginning and we look forward to working with ICE and songwriters to build a community together around music.”

ICE says it will be working with Facebook to build a royalties reporting system as part of the deal. The company already has similar arrangements in place with 40 other streaming platforms and has distributed 300 million euros in royalties to its members since it was established in 2016. (Why only 2016? The three organizations previously worked independently, then realized they could get much better bargaining power by working collectively.)

Indeed, royalty collecting is a potentially lucrative business as streaming services continue to grow and overtake other formats for music consumption, with startups like Kobalt building services that they claim are better and faster at tracking even the smallest samples, to make sure that the people making the music are getting their due.

“We are excited to work with Facebook to ensure we are delivering value back to creators for the use of their works on Facebook platforms. The future of music depends on our industries working together to enable the development of new models for music consumption in the digital age, to ensure a healthy future for songwriters and composers.” said Ben McEwen, Commercial Director at ICE Services, in a statement.

 



Twitter updates its policy on tweets that encourage self-harm and suicide

08:28 | 21 February

Twitter, which is constantly criticized for not doing enough to prevent harassment, has updated its guidelines with more information on how it handles tweets or accounts that encourage other people to hurt themselves or commit suicide.

The update follows an announcement by Twitter Safety last week that users can now report profiles, tweets and direct messages that encourage self-harm and suicide.

While we continue to provide resources to people who are experiencing thoughts of self-harm, it is against our rules to encourage others to harm themselves. Starting today, you can report a profile, Tweet, or Direct Message for this type of content.

— Twitter Safety (@TwitterSafety) February 13, 2018

In a new section on its Help Center titled “Glorifying self-harm and suicide,” Twitter outlined its approach to tweets or accounts that promote or encourage self-harm and suicide. The company says its policy against encouraging other people to hurt themselves is meant to work in tandem with its self-harm prevention measures as part of a “two-pronged approach” that involves “supporting people who are undergoing experiences with self-harm or suicidal thoughts, but prohibiting the promotion or encouragement of self-harming behaviors.” Twitter already has a form that lets users report threats of self-harm or suicide and a team that assesses tweets and reaches out to users they believe are at risk.

Twitter says offenders may be temporarily locked out of their account the first time they violate the policy and their tweets encouraging self-harm or suicide removed. Repeat offenders may have their accounts suspended.

Last fall, Twitter published a new version of its policies toward abuse, spam, self-harm and other issues, following a promise by chief executive officer Jack Dorsey that it would be more aggressive about preventing harassment. Publishing stricter guidelines and putting them into practice, however, are two different things. Many of Twitter’s critics still believe the platform doesn’t do enough to enforce its anti-harassment measures and must provide more information about exactly what kind of content results in a suspension. For example, telling someone to “kill yourself” arguably violates its guidelines, but a quick search of #killyourself returns many recent results, including tweets aimed at specific people.

Featured Image: NurPhoto/Getty Images

 



Instagram Direct one-ups Snapchat with replay privacy controls

20:26 | 20 February

Messaging is the heart of Snapchat, so after cloning and augmenting Stories, Instagram is hoping to boost intimate usage of Direct with privacy controls not found elsewhere. Now when you send an ephemeral photo or video from the Instagram Direct camera, you can decide whether recipients can only view it once, replay it temporarily, or will see a permanent thumbnail of it in the chat log.

Previously, all messages could be replayed temporarily but then would completely disappear. Snapchat always lets you temporarily replay a photo or video message, with no way for senders to deactivate the option.

The replay controls could encourage Instagrammers to send more sensitive imagery by allowing them to block replays, which can otherwise give people time to take a photo of their screen with another camera without triggering a screenshot alert to the sender. Whether it’s silly or sexy, some messages are only meant to be seen once. Meanwhile, non-sensitive messages can be set to permanent so it’s easy to look back and reminisce, or to keep a conversation from losing context if someone forgets or misses what was in a visual message.

Instagram tells me it rolled out the new “Keep in chat” option last month, after introducing the “allow replay” and “view once” options in November. Remember, senders are able to see if you replay a message.
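The three sender-side options map naturally onto a small state machine. Here is a minimal Python sketch using hypothetical names (Instagram has not published an API for these controls):

```python
from enum import Enum

class ReplayMode(Enum):
    """Hypothetical names for the three sender-chosen settings."""
    VIEW_ONCE = "view_once"        # disappears after the first view
    ALLOW_REPLAY = "allow_replay"  # one view plus one temporary replay
    KEEP_IN_CHAT = "keep_in_chat"  # permanent thumbnail stays in the thread

def can_view_again(mode: ReplayMode, views_so_far: int) -> bool:
    """Whether the recipient may open the message another time."""
    if mode is ReplayMode.KEEP_IN_CHAT:
        return True                  # always available in the chat log
    if mode is ReplayMode.ALLOW_REPLAY:
        return views_so_far < 2      # original view plus a single replay
    return views_so_far < 1          # view-once: gone after the first open
```

Since senders can see when a message is replayed, a real implementation would presumably also emit a notification event on that second view.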

Snapchat’s private messaging proved to be its most resilient feature after a leak saw The Daily Beast’s Taylor Lorenz dump a ton of the company’s usage data. In August, Snapchat users were 64 percent more likely to send a private snap to a friend than broadcast to Stories. While the number of daily users who post to Stories stagnated during Q3 last year in the face of Instagram’s competition, the number of users sending messages continues to rise.

That’s why, now that Instagram Stories and WhatsApp Status both have over 300 million daily active users, dwarfing the 187 million total daily users on Snapchat, Facebook is trying to revamp its ephemeral messaging options. Instagram combined ephemeral and permanent Direct messaging last April, and in December began testing a standalone Direct app. Snapchat has managed to turn around its business and revive growth, so Instagram could use some momentum.

Snapchat’s number of users posting to Stories stagnated last year…

…while daily users sending Snapchat messages kept growing

Instagram and Snapchat continue to see distinct behavior patterns despite the former’s attempt to become the latter. Instagram Stories was supposed to let you share more than the permanent feed highlights of your life. But users still seem to prefer to share private, provocative, and ridiculous Stories and messages on Snapchat, while Instagram gets more polished and posed posts and re-sharing of memes.

Being able to block replays or keep messages from entirely disappearing could let Direct encompass a wider range of visual communication.

 



Snapchat adds GIF stickers via Giphy, plus new Friends and Discover screen tabs

17:54 | 20 February

Snapchat is bringing one of the best recent features of Instagram Stories to its own app, with the ability to add GIF stickers from Giphy to your posts. This is a notable reversal of the typical pattern we’ve seen of Instagram cloning Snapchat features, but it’s a good one for users since GIF stickers for Stories are basically the greatest thing ever invented on social media.

The new GIF options, also powered by Giphy as mentioned, are loaded in the Sticker Picker alongside existing options from Snapchat. But that’s not the only change rolling out today: Snapchat is also adding tabs to both the Friends and the Discover screens within the app. The tabs will make it easier for users to follow along with the Stories they want to see whenever they want to see them, letting you do things like viewing friends with active Stories and Group Chats in one tab, and the subscriptions you maintain in the other.


Snapchat CEO and founder Evan Spiegel noted on the company’s recent quarterly earnings call that Snapchat remains convinced its recent redesign has “made our application simpler and easier to use,” and also noted improved ad performance post-overhaul, despite vocal user complaints. Spiegel added, however, that Snapchat is “constantly monitoring the rollout of the redesign and making improvements based on what we learn from our community and their usage of Snapchat,” and this design tweak seems to fall into that category.

 



Kidtech startup SuperAwesome is now valued at $100+ million and profitable

19:32 | 19 February

Technology companies like Facebook and Google are scrambling to catch up to the fact that the kids have joined a web originally built for adults, and are using it the way adults do – by liking and commenting, sharing, clicking through on personalized recommendations, and viewing ads. But the technology underpinning apps and sites built for kids can’t operate the same way as it does for the grown-ups. That’s where the company SuperAwesome comes in.

SuperAwesome, just less than five years old, has been tapping into the growing need for kid-friendly technology, including kid-safe advertising, social engagement tools, authentication, and parental controls. Its clients include some of the biggest names in the children’s market, including Activision, Hasbro, Mattel, Cartoon Network, Spin Master, Nintendo, Bandai, WB, Shopkins maker Moose Toys, and hundreds of others – many of which it can’t name for legal reasons.

Now, the company is turning a profit.

SuperAwesome says it hit profitability for the first time in Q4 2017, and has reached a booked revenue run-rate of $28 million, after seeing 70 percent growth year-over-year.

This year, it expects to grow 100 percent, with a revenue run rate of $50 million.

Sources close to the company put its valuation at north of $100 million, as a result.

The company says the shift to digital is driving its growth, as TV viewing is dropping at 10 to 20 percent per year, while kids’ digital budgets are growing at 25 percent year over year. At the same time, kids’ brands and content owners are realizing that safety and privacy have to be a part of their web and mobile experiences.

SuperAwesome has flown under the radar a bit, and isn’t what you’d call a household name. That’s because its technology isn’t generally consumer-facing — it’s what’s powering the apps and websites that today’s kids are using, whether that’s a game like Mattel’s Barbie Fashion Closet or Monster High, Hasbro’s My Little Pony Friendship Club, or a website from kids’ author Roald Dahl, to name a few.

Key to all these experiences is a technology platform that allows developers to build kid-safe apps and sites. That includes products like AwesomeAds, which ensures ads in the kids space aren’t tracking personal data and the ads are kid-appropriate; PopJam, a kid-safe social engagement platform that lets developers build experiences where kids can like, comment, share and remix online content; and Kids Web Services, tools that simplify building apps that require parental consent and oversight.

These sorts of tools are increasingly becoming critical to a web that’s waking up to the fact that the largest tech companies didn’t consider how many kids would be using their products. YouTube, for example, has been scrambling in recent months to combat the threats to kids on its video-sharing site, like inappropriate content targeted towards children, exploitative videos, haywire algorithms, dangerous memes, hate speech, and more.

Meanwhile, kids are lying about their ages – sometimes with parental permission – to join social platforms originally built for the 13-and-up crowd like Facebook, Instagram, Snapchat and Musical.ly.

“It’s very easy to come out and beat up Facebook and Google for some of this stuff, but the reality is that there’s no ecosystem there for developers who are creating content or building services specifically for kids. That’s why we started SuperAwesome,” says SuperAwesome’s CEO Dylan Collins.

Before SuperAwesome, Collins founded gaming platform Jolt, acquired by GameStop, and game technology provider DemonWare, acquired by Activision.

Other SuperAwesome execs have similar successful track records in terms of company-building. Managing director Max Bleyleben was COO at digital marketing agency Beamly, acquired by Coty, and a partner in European VC fund Kennet Partner. COO Kate O’Louglin was previously SVP of Media at ad tech company Tapad, acquired by Telenor. Chief Strategy Officer Paul Nunn was previously the Managing Director at kids app maker Outfit7, acquired by China’s United Luck Group.

Today, the company’s 120-person staff also includes a full-time moderation team to review kids’ content before it goes public. A need to do more hands-on review, instead of leaving everything up to an algorithm, is something the larger companies have just woken up to, as well. For example, YouTube said it was expanding its moderation team in the wake of the site’s numerous controversies to north of 10,000 people.


SuperAwesome is much smaller than that, but it has understood the need to double-check kids’ content with a more hands-on approach for some time.

“The content created on [SuperAwesome’s] platform goes through two layers of moderation. It goes through our machine learning moderation. Then it goes through our 24/7 team of human moderators as well,” explains Collins. “With the kids’ audience, it doesn’t work to completely automate all this – you have to have human involvement.”
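Collins describes a pipeline where machine learning runs first and humans always review second. A simple illustrative sketch in Python (not SuperAwesome's actual system; the keyword check merely stands in for a real ML classifier):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    text: str

def machine_filter(post: Post) -> bool:
    """Layer 1: stand-in for an ML model; rejects obviously bad content."""
    banned_terms = {"contraband"}  # placeholder vocabulary
    return not any(term in post.text.lower() for term in banned_terms)

def moderate(post: Post, human_review: Callable[[Post], bool]) -> bool:
    """Layer 2: everything that survives the model still gets human eyes."""
    if not machine_filter(post):
        return False  # rejected automatically, never reaches a moderator
    return human_review(post)

# A post goes public only if both the model and a human approve it.
approved = moderate(Post("hello world"), human_review=lambda p: True)
```

The design point Collins makes is the second stage: the machine layer reduces moderator load, but nothing is published on the strength of the model alone.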

There are now tens of millions of pieces of content flowing through SuperAwesome’s platform every couple of weeks, to give you an idea of scale.

This online social space is something that kids’ brands want to enter, but safely and in compliance with U.S. and international laws around child protection, like COPPA and Europe’s GDPR-K.

Though SuperAwesome’s focus a couple of years ago was more about helping advertisers and marketers, two-thirds of its ads clients have since adopted its social engagement tools from PopJam. (A demonstration of this technology is also live in SuperAwesome’s own kids app by the same name.)

Now, SuperAwesome is leveraging its experience in the kids’ space to help YouTubers come into compliance with Google’s stricter rules, too, so they can assure brands they’re “kid-safe.” SuperAwesome recently rolled out a content certification standard for under-13 YouTube influencers and those over 13 who target a very young audience with their videos.

This is something SuperAwesome’s brand customers requested, because they’re spending their ad dollars on YouTube – and sometimes finding their messages matched up with inappropriate content. The problem of toxic content on Facebook and Google could have a massive impact on the ad industry if it continues to go unchecked. For example, one of the world’s largest advertisers, Unilever, this month threatened to pull ads from Facebook and Google if they don’t address the problems with propaganda, hate speech and disturbing content aimed at children.

SuperAwesome’s new, voluntary certification for YouTubers takes into account the content the channel produces, their behavior on the screen, their recording practices, and much more.

“YouTube is not an under-13 platform, so their hands are kind of tied in terms of something like this,” says Collins. The company announced the certification earlier this month, and already has 35 YouTubers on board, representing 35 million subscribers and 8 to 9 billion monthly impressions. “There’s real momentum that’s happening with this,” he adds.

SuperAwesome believes it’s now poised for rapid growth as more brands and businesses begin to address the need to keep kids safe online.

“Kidtech, as a category, has really just been invented in the past three or four years. No one thought they’d have to build specific technology for kids…this is a problem that we’re starting to solve,” Collins says.

SuperAwesome has raised $28 million to date according to Crunchbase, including a $21 million Series B from Mayfair Equity Partners in mid-2017, which also included Hoxton Ventures and Inspire Ventures. The company has no immediate plans to fundraise again.

Featured Image: Hero Images/Getty Images

 



Facebook’s tracking of non-users ruled illegal again

15:47 | 19 February

Another blow for Facebook in Europe: Judges in Belgium have once again ruled the company broke privacy laws by deploying technology such as cookies and social plug-ins to track Internet users across the web.

Facebook uses data it collects in this way to sell targeted advertising.

The social media giant failed to make it sufficiently clear how people’s digital activity was being recorded, the court ruled.

Facebook faces fines of up to €100 million (~$124M), at a rate of €250,000 per day, if it fails to comply with the court ruling to stop tracking Belgians’ web browsing habits. It must also destroy any illegally obtained data, the court said.
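For scale, the two figures the court gave imply how long Facebook could remain non-compliant before hitting the cap:

```python
FINE_CAP_EUR = 100_000_000  # €100 million ceiling set by the court
DAILY_RATE_EUR = 250_000    # €250,000 per day of non-compliance

days_until_cap = FINE_CAP_EUR // DAILY_RATE_EUR
print(days_until_cap)  # 400 days, i.e. a little over 13 months
```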

Facebook expressed disappointment at the judgement and said it will appeal.

“The cookies and pixels we use are industry standard technologies and enable hundreds of thousands of businesses to grow their businesses and reach customers across the EU,” said Facebook’s VP of public policy for EMEA, Richard Allan, in a statement. “We require any business that uses our technologies to provide clear notice to end-users, and we give people the right to opt-out of having data collected on sites and apps off Facebook being used for ads.”

The privacy lawsuit dates back to 2015 when the Belgium privacy watchdog brought a civil suit against Facebook for its near invisible tracking of non-users via social plugins and the like. This followed an investigation by the agency that culminated in a highly critical report touching on many areas of Facebook’s data handling practices.

The same year, after failing to obtain adequate responses to its concerns, the Belgian Privacy Commission decided to take Facebook to court over one of them: How it deploys tracking cookies and social plug-ins on third party websites to track the Internet activity of users and non-users.

Following its usual playbook for European privacy challenges, Facebook first tried to argue the Belgian DPA had no jurisdiction over its European business, which is headquartered in Ireland. But local judges disagreed.

Subsequently, Belgian courts have twice ruled that Facebook’s use of cookies violates European privacy laws. If Facebook keeps appealing the case could end up going all the way to Europe’s supreme court, the CJEU.

The crux of the issue here is the pervasive background surveillance of Internet activity for digital ad targeting purposes which is enabled by a vast network of embedded and at times entirely invisible tracking technologies — and, specifically in this lawsuit, whether Facebook and its network of partner companies which feed data into its ad targeting system, have obtained adequate consent from their users to be so surveilled when they’re not actually using Facebook.

“Facebook collects information about us all when we surf the Internet,” explains the Belgian privacy watchdog, referring to findings from its earlier investigation of Facebook’s use of tracking technologies. “To this end, Facebook uses various technologies, such as the famous “cookies” or the “social plug-ins” (for example, the “Like” or “Share” buttons) or the “pixels” that are invisible to the naked eye. It uses them on its website but also and especially on the websites of third parties. Thus, the survey reveals that even if you have never entered the Facebook domain, Facebook is still able to follow your browsing behavior without you knowing it, let alone, without you wanting it, thanks to these invisible pixels that Facebook has placed on more than 10,000 other sites.”
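Mechanically, an "invisible pixel" is just a 1×1 image request: when the browser fetches it from the tracker's domain, the request itself leaks the page being read plus any identifying cookie previously set for that domain. A hedged sketch of the server side, with a hypothetical endpoint and parameter names:

```python
from urllib.parse import urlparse, parse_qs

def log_pixel_hit(request_url: str, cookies: dict) -> dict:
    """What a single pixel request reveals to the tracking server."""
    query = parse_qs(urlparse(request_url).query)
    return {
        # The embedding page, passed here as a query param (a real tracker
        # could equally read it from the Referer header)
        "page_visited": query.get("page", ["unknown"])[0],
        # An identifying cookie the browser attaches automatically, if one
        # was set on an earlier visit to any page carrying the same tracker
        "visitor_id": cookies.get("tracker_id"),
    }

hit = log_pixel_hit(
    "https://tracker.example/px.gif?page=news-site.example/story",
    cookies={"tracker_id": "abc123"},
)
```

Stitching such hits together across the 10,000+ embedding sites the watchdog counted is what produces a browsing profile, whether or not the visitor has a Facebook account.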

Facebook claims its use of cookie tracking is transparent and argues the technology benefits Facebook users by letting it show them more relevant content. (Presumably, it would argue non-Facebook users ‘benefit’ from being shown ads targeted at their interests.) “Over recent years we have worked hard to help people understand how we use cookies to keep Facebook secure and show them relevant content. We’ve built teams of people who focus on the protection of privacy — from engineers to designers — and tools that give people choice and control,” said Allan in his response statement to the court ruling.

But given that some of these trackers are literally invisible, coupled with the at times dubious quality of ‘consents’ being gathered — say, for example, if there’s only a pre-ticked opt-in at the bottom of a lengthy and opaque set of T&Cs that actively discourage the user from reading and understanding what data of theirs is being gathered and why — there are some serious questions over the sustainability of this type of ‘pervasive background surveillance’ ad tech in the face of legal challenges and growing consumer dislike of ads that stalk them around the Internet (which has in turn fueled growth of ad-blocking technologies).

Facebook will also face a similar complaint in a lawsuit in Austria, filed by privacy campaigner and lawyer Max Schrems. In January Schrems prevailed against Facebook’s attempts to stall that privacy challenge after Europe’s top court threw out the company’s claim that his campaigning activities cancelled out his individual consumer rights. (Though the CJEU’s decision did not allow Schrems to pursue a class action style lawsuit against Facebook as he had originally hoped.)

Europe also has a major update to its data protection laws incoming in May, called the GDPR, which beefs up the enforcement of privacy rights by introducing a new system of penalties for data protection violations that can scale as high as 4% of a company’s global turnover.
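To see why the 4% ceiling changes the calculus, here is the cap worked through for an assumed, purely illustrative €40 billion global annual turnover:

```python
annual_turnover_eur = 40_000_000_000  # assumed turnover, for illustration only
max_fine_eur = annual_turnover_eur * 4 // 100  # GDPR top tier: 4% of turnover
print(max_fine_eur)  # 1_600_000_000, i.e. a €1.6B exposure
```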

GDPR means that ignoring the European Union’s fundamental right to privacy — by relying on the fact that few consumers have historically bothered to take companies to court over legal violations they may not even realize are happening — is going to get a lot more risky in just a few months’ time. (On that front, Schrems has also set up a not-for-profit to pursue strategic privacy litigation once GDPR is in place — so start stockpiling the popcorn.)

It’s also worth noting that GDPR strengthens the EU’s consent requirements for processing personal data — so it’s certainly not going to be easier for Facebook to obtain consents for this type of background tracking under the new framework. (The still being formulated ePrivacy Regulation is also relevant to cookie consent, and aims to streamline the rules across the EU.)

And indeed such tracking will necessarily become far more visible to web users, who may then be a lot less inclined to agree to being ad-stalked almost everywhere they go online primarily for Facebook’s financial benefit. (The rise of tools offering tracker blocking offers another route for irate consumers to thwart online mass surveillance by ad targeting giants.)

“We are preparing for the new General Data Protection Regulation with our lead regulator the Irish Data Protection Commissioner. We’ll comply with this new law, just as we’ve complied with existing data protection law in Europe,” added Facebook’s Allan.

It’s still not fully clear how Facebook will comply with GDPR — though it’s announced a new global privacy settings hub is incoming. It’s also running a series of data protection workshops in Europe this year, aimed at small and medium businesses — presumably to try to ensure its advertisers don’t find themselves shut out of GDPR Compliance City and on the hook for major privacy legal liabilities themselves, come May 25.

Of course Facebook’s ad business not only relies on people’s web browsing habits to fuel its targeting systems, it relies on advertisers liberally pumping dollars into it. Which is another reason consumer trust is so vital. (And it’s facing myriad challenges on that front these days.)

In a statement on its website, the Belgium Privacy Commission said it was pleased with the ruling.

“We are of course very satisfied that the court has fully followed our position. For the moment, Facebook is conducting a major advertising campaign where it shares its attachment to privacy. We hope it will put this commitment into practice,” it said.

Featured Image: Tekke/Flickr UNDER A CC BY-ND 2.0 LICENSE

 



Fake news is an existential crisis for social media 

22:12 | 18 February

The funny thing about fake news is how mind-numbingly boring it can be. Not the fakes themselves — they’re constructed to be catnip clickbait to stoke the fires of rage of their intended targets. Be they gun owners. People of color. Racists. Republican voters. And so on.

The really tedious stuff is all the incomplete, equally self-serving pronouncements that surround ‘fake news’. Some very visibly, a lot less so.

Such as Russia painting the election interference narrative as a “fantasy” or a “fairytale” — even now, when presented with a 37-page indictment detailing what Kremlin agents got up to (including on US soil). Or Trump continuing to bluster that Russian-generated fake news is itself “fake news”.

And, indeed, the social media firms themselves, whose platforms have been the unwitting conduits for lots of this stuff, shaping the data they release about it — in what can look suspiciously like an attempt to downplay the significance and impact of malicious digital propaganda, because, well, that spin serves their interests.

The claim and counter-claim spread out around ‘fake news’ like an amorphous cloud of meta-fakery, as reams of additional ‘information’ — some of it equally polarizing but a lot of it more subtle in its attempts to mislead (e.g., the publicly unseen ‘on background’ info routinely sent to reporters to try to invisibly shape coverage in a tech firm’s favor) — are applied in equal and opposite directions in the interests of obfuscation; using speech and/or misinformation as a form of censorship to fog the lens of public opinion.

This bottomless follow-up fodder generates yet more FUD in the fake news debate. Which is ironic, as well as boring, of course. But it’s also clearly deliberate.

As Zeynep Tufekci has eloquently argued: “The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself.”

So we also get subjected to all this intentional padding, applied selectively, to defuse debate and derail clear lines of argument; to encourage confusion and apathy; to shift blame and buy time. Bored people are less likely to call their political representatives to complain.

Truly fake news is the inception layer cake that never stops being baked. Because pouring FUD onto an already polarized debate — and seeking to shift what are by nature shifty sands (after all information, misinformation and disinformation can be relative concepts, depending on your personal perspective/prejudices) — makes it hard for any outsider to nail this gelatinous fakery to the wall.

Why would social media platforms want to participate in this FUDing? Because it’s in their business interests not to be identified as the primary conduit for democracy damaging disinformation.

And because they’re terrified of being regulated on account of the content they serve. They absolutely do not want to be treated as the digital equivalents to traditional media outlets.

But the stakes are high indeed when democracy and the rule of law are on the line. And by failing to be pro-active about the existential threat posed by digitally accelerated disinformation, social media platforms have unwittingly made the case for external regulation of their global information-shaping and distribution platforms louder and more compelling than ever.

*

Every gun outrage in America is now routinely followed by a flood of Russian-linked Twitter bot activity. Exacerbating social division is the name of this game. And it’s playing out all over social media continually, not just around elections.

In the case of Russian digital meddling connected to the UK’s 2016 Brexit referendum — which we now know for sure existed, though we still lack the data needed to quantify its actual impact — the chairman of the UK parliamentary committee running an enquiry into fake news has accused both Twitter and Facebook of essentially ignoring requests for data and help, and of doing none of the work the committee asked of them.

Facebook has since said it will take a more thorough look through its archives. And Twitter has drip-fed some tidbits of additional information. But more than a year and a half after the vote itself, many, many questions remain.

And just this week another third party study suggested that the impact of Russian Brexit trolling was far larger than has been so far conceded by the two social media firms.

The PR company that carried out this research included in its report a long list of outstanding questions for Facebook and Twitter.

Here they are:

  • How much did [Russian-backed media outlets] RT, Sputnik and Ruptly spend on advertising on your platforms in the six months before the referendum in 2016?
  • How much have these media platforms spent to build their social followings?
  • Sputnik has no active Facebook page, but has a significant number of Facebook shares for anti-EU content, does Sputnik have an active Facebook advertising account?
  • Will Facebook and Twitter check the dissemination of content from these sites to check they are not using bots to push their content?
  • Did either RT, Sputnik or Ruptly use ‘dark posts’ on either Facebook or Twitter to push their content during the EU referendum, or have they used ‘dark posts’ to build their extensive social media following?
  • What processes do Facebook or Twitter have in place when accepting advertising from media outlets or state owned corporations from autocratic or authoritarian countries? Noting that Twitter no longer takes advertising from either RT or Sputnik.
  • Did any representatives of Facebook or Twitter pro-actively engage with RT or Sputnik to sell inventory, products or services on the two platforms in the period before 23 June 2016?

We put these questions to Facebook and Twitter.

In response, a Twitter spokeswoman pointed us to some “key points” from a previous letter it sent to the DCMS committee (emphasis hers):

In response to the Commission’s request for information concerning Russian-funded campaign activity conducted during the regulated period for the June 2016 EU Referendum (15 April to 23 June 2016), Twitter reviewed referendum-related advertising on our platform during the relevant time period. 

Among the accounts that we have previously identified as likely funded from Russian sources, we have thus far identified one account—@RT_com— which promoted referendum-related content during the regulated period. $1,031.99 was spent on six referendum-related ads during the regulated period.  

With regard to future activity by Russian-funded accounts, on 26 October 2017, Twitter announced that it would no longer accept advertisements from RT and Sputnik and will donate the $1.9 million that RT had spent globally on advertising on Twitter to academic research into elections and civil engagement. That decision was based on a retrospective review that we initiated in the aftermath of the 2016 U.S. Presidential Elections and following the U.S. intelligence community’s conclusion that both RT and Sputnik have attempted to interfere with the election on behalf of the Russian government. Accordingly, @RT_com will not be eligible to use Twitter’s promoted products in the future.

The Twitter spokeswoman declined to provide any new on-the-record information in response to the specific questions.

A Facebook representative first asked to see the full study, which we sent, then failed to provide a response to the questions at all.

The PR firm behind the research, 89up, makes this particular study fairly easy for them to ignore. It’s a pro-Remain organization. The research was not undertaken by a group of impartial university academics. The study isn’t peer reviewed, and so on.

But, in an illustrative twist, if you Google “89up Brexit”, Google News injects fresh Kremlin-backed opinions into the search results it delivers — see the top and third result here…

Clearly, there’s no such thing as ‘bad propaganda’ if you’re a Kremlin disinformation node.

Even a study decrying Russian election meddling presents an opportunity for respinning and generating yet more FUD — in this instance by calling 89up biased because it supported the UK staying in the EU. Making it easy for Russian state organs to slur the research as worthless.

The social media firms aren’t making that point in public. They don’t have to. That argument is being made for them by an entity whose former brand name was literally ‘Russia Today’. Fake news thrives on shamelessness, clearly.

It also very clearly thrives in the limbo of fuzzy accountability where politicians and journalists essentially have to scream at social media firms until they’re blue in the face to get even partial answers to perfectly reasonable questions.

Frankly, this situation is looking increasingly unsustainable.

Not least because governments are cottoning on — some are setting up departments to monitor malicious disinformation and even drafting anti-fake news election laws.

And while the social media firms have been a bit more alacritous to respond to domestic lawmakers’ requests for action and investigation into political disinformation, that just makes their wider inaction, when viable and reasonable concerns are brought to them by non-US politicians and other concerned individuals, all the more inexcusable.

The user-bases of Facebook, Twitter and YouTube are global. Their businesses generate revenue globally. And the societal impacts from maliciously minded content distributed on their platforms can be very keenly felt outside the US too.

But if tech giants have treated requests for information and help about political disinformation from the UK — a close US ally — so poorly, you can imagine how unresponsive and/or unreachable these companies are to further flung nations, with fewer or zero ties to the homeland.

Earlier this month, in what looked very much like an act of exasperation, the chair of the UK’s fake news enquiry, Damian Collins, flew his committee over the Atlantic to question Facebook, Twitter and Google policy staffers in an evidence session in Washington.

None of the companies sent their CEOs to face the committee’s questions. None provided a substantial amount of new information. The full impact of Russia’s meddling in the Brexit vote remains unquantified.

One problem is fake news. The other problem is the lack of incentive for social media companies to robustly investigate fake news.

*

The partial data about Russia’s Brexit dis-ops, which Facebook and Twitter have trickled out so far, like blood from the proverbial stone, is unhelpful exactly because it cannot clear the matter up either way. It just introduces more FUD, more fuzz, more opportunities for purveyors of fake news to churn out more maliciously minded content, as RT and Sputnik demonstrably have.

In all probability, it also pours more fuel on Brexit-based societal division. The UK, like the US, has become a very visibly divided society since the narrow 52:48 vote to leave the EU. What role did social media and Kremlin agents play in exacerbating those divisions? Without hard data it’s very difficult to say.

But, at the end of the day, it doesn’t matter whether 89up’s study is accurate or overblown; what really matters is no one except the Kremlin and the social media firms themselves are in a position to judge.

And no one in their right mind would now suggest we swallow Russia’s line that so called fake news is a fiction sicked up by over-imaginative Russophobes.

But social media firms also cannot be trusted to truth tell on this topic, because their business interests have demonstrably guided their actions towards equivocation and obfuscation.

Self interest also compellingly explains how poorly they have handled this problem to date; and why they continue — even now — to impede investigations by not disclosing enough data and/or failing to interrogate deeply enough their own systems when asked to respond to reasonable data requests.

A game of ‘uncertain claim vs self-interested counter claim’, as competing interests duke it out to try to land a knock-out blow in the game of ‘fake news and/or total fiction’, serves no useful purpose in a civilized society. It’s just more FUD for the fake news mill.

Especially as this stuff really isn’t rocket science. Human nature is human nature. And disinformation has been shown to have a more potent influencing impact than truthful information when the two are presented side by side. (As they frequently are by and on social media platforms.) So you could do robust math on fake news — if only you had access to the underlying data.

But only the social media platforms have that. And they’re not falling over themselves to share it. Instead, Twitter routinely rubbishes third party studies exactly because external researchers don’t have full visibility into how its systems shape and distribute content.

Yet external researchers don’t have that visibility because Twitter prevents them from seeing how it shapes tweet flow. Therein lies the rub.

Yes, some of the platforms in the disinformation firing line have taken some preventative actions since this issue blew up so spectacularly, back in 2016. Often by shifting the burden of identification to unpaid third parties (fact checkers).

Facebook has also built some anti-fake news tools to try to tweak what its algorithms favor, though nothing it’s done on that front to date looks very successful (even as a more major change to its News Feed, to make it less of a news feed, has had a unilateral and damaging impact on the visibility of genuine news organizations’ content — so is arguably going to be unhelpful in reducing Facebook-fueled disinformation).

In another instance, Facebook’s mass closing of what it described as “fake accounts” ahead of, for example, the UK and French elections can also look problematic, in democratic terms, because we don’t fully know how it identified the particular “tens of thousands” of accounts to close. Nor what content they had been sharing prior to this. Nor why it hadn’t closed them before if they were indeed Kremlin disinformation-spreading bots.

More recently, Facebook has said it will implement a disclosure system for political ads, including posting a snail mail postcard to entities wishing to pay for political advertising on its platform — to try to verify they are indeed located in the territory they say they are.

Yet its own VP of ads has admitted that Russian efforts to spread propaganda are ongoing and persistent, and do not solely target elections or politicians…

The main goal of the Russian propaganda and misinformation effort is to divide America by using our institutions, like free speech and social media, against us. It has stoked fear and hatred amongst Americans. It is working incredibly well. We are quite divided as a nation.

— Rob Goldman (@robjective) February 17, 2018

The Russian campaign is ongoing. Just last week saw news that Russian spies attempted to sell a fake video of Trump with a hooker to the NSA. US officials cut off the deal because they were wary of being entangled in a Russian plot to create discord. https://t.co/jO9GwWy2qH

— Rob Goldman (@robjective) February 17, 2018

The wider point is that social division is itself a tool for impacting democracy and elections — so if you want to achieve ongoing political meddling that’s the game you play.

You don’t just fire up your disinformation guns ahead of a particular election. You work to worry away at society’s weak points continuously to fray tempers and raise tensions.

Elections don’t take place in a vacuum. And if people are angry and divided in their daily lives then that will naturally be reflected in the choices made at the ballot box, whenever there’s an election.

Russia knows this. And that’s why the Kremlin has been playing such a long propaganda game. Why it’s not just targeting elections. Its targets are fault lines in the fabric of society — be it gun control vs gun owners or conservatives vs liberals or people of color vs white supremacists — whatever issues it can seize on to stir up trouble and rip away at the social fabric.

That’s what makes digitally amplified disinformation an existential threat to democracy and to civilized societies. Nothing on this scale has been possible before.

And it’s thanks, in great part, to the reach and power of social media platforms that this game is being played so effectively — because these platforms have historically preferred to champion free speech rather than root out and eradicate hate speech and abuse; inviting trolls and malicious actors to exploit the freedom afforded by their free speech ideology and to turn powerful broadcast and information-targeting platforms into cyberweapons that blast the free societies that created them.

Social media’s filtering and sorting algorithms also crucially failed to make any distinction between information and disinformation. Which was their great existential error of judgement, as they sought to eschew editorial responsibility while simultaneously working to dominate and crush traditional media outlets which do operate within a more tightly regulated environment (and, at least in some instances, have a civic mission to truthfully inform).

Publishers have their own biases too, of course, but those biases tend to be writ large — vs social media platforms’ faux claims of neutrality when in fact their profit-seeking algorithms have been repeatedly caught preferring (and thus amplifying) dis- and misinformation over and above truthful but less clickable content.

But if your platform treats everything and almost anything indiscriminately as ‘content’, then don’t be surprised if fake news becomes indistinguishable from the genuine article because you’ve built a system that allows sewage and potable water to flow through the same distribution pipe.

So it’s interesting to see Goldman’s suggested answer to social media’s existential fake news problem attempting, even now, to deflect blame — by arguing that the US education system should take on the burden of arming citizens to deconstruct all the dubious nonsense that social media platforms are piping into people’s eyeballs.

Lessons in critical thinking are certainly a good idea. But fakes are compelling for a reason. Look at the tenacity with which conspiracy theories take hold in the US. In short, it would take a very long time and a very large investment in critical thinking education programs to create any kind of shielding intellectual capacity able to protect the population at large from being fooled by maliciously crafted fakes.

Indeed, human nature actively works against critical thinking. Fakes are more compelling, more clickable than the real thing. And thanks to technology’s increasing potency, fakes are getting more sophisticated, which means they will be increasingly plausible — and get even more difficult to distinguish from the truth. Left unchecked, this problem is going to get existentially worse too.

So, no, education can’t fix this on its own. And for Facebook to try to imply it can is yet more misdirection and blame shifting.

*

If you’re the target of malicious propaganda you’ll very likely find the content compelling because the message is crafted with your specific likes and dislikes in mind. Imagine, for example, your trigger reaction to being sent a deepfake of your wife in bed with your best friend.

That’s what makes this incarnation of propaganda so potent and insidious vs other forms of malicious disinformation (of course propaganda has a very long history — but never in human history have we had such powerful media distribution platforms that are simultaneously global in reach and capable of delivering individually targeted propaganda campaigns. That’s the crux of the shift here).

Fake news is also insidious because of the lack of civic restraints on disinformation agents, which makes maliciously minded fake news so much more potent and problematic than plain old digital advertising.

I mean, even people who’ve searched for ‘slippers’ online an awful lot of times, because they really love buying slippers, are probably only in the market for buying one or two pairs a year — no matter how many adverts for slippers Facebook serves them. They’re also probably unlikely to actively evangelize their slipper preferences to their friends, family and wider society — by, for example, posting about their slipper-based views on their social media feeds and/or engaging in slipper-based discussions around the dinner table or even attending pro-slipper rallies.

And even if they did, they’d have to be a very charismatic individual indeed to generate much interest and influence. Because, well, slippers are boring. They’re not a polarizing product. There aren’t tribes of slipper owners as there are smartphone buyers. Because slippers are a non-complex, functional comfort item with minimal fashion impact. So an individual’s slipper preferences, even if very liberally put about on social media, are unlikely to generate strong opinions or reactions either way.

Political opinions and political positions are another matter. They are frequently what define us as individuals. They are also what can divide us as a society, sadly.

To put it another way, political opinions are not slippers. People rarely try a new one on for size. Yet social media firms spent a very long time indeed trying to sell the ludicrous fallacy that content about slippers and maliciously crafted political propaganda, mass-targeted tracelessly and inexpensively via their digital ad platforms, was essentially the same stuff. See: Zuckerberg’s infamous “pretty crazy idea” comment, for example.

Indeed, look back over the last few years’ news about fake news, and social media platforms have demonstrably sought to play down the idea that the content distributed via their platforms might have had any sort of quantifiable impact on the democratic process at all.

Yet these are the same firms that make money — very large amounts of money, in some cases — by selling their capability to influentially target advertising.

So they have essentially tried to claim that it’s only when foreign entities engage with their digital advertising platforms, and used their digital advertising tools — not to sell slippers or a Netflix subscription but to press people’s biases and prejudices in order to sow social division and impact democratic outcomes — that, all of a sudden, these powerful tech tools cease to function.

And we’re supposed to take it on trust from the same self-interested companies that the unknown quantity of malicious ads being fenced on their platforms is but a teeny tiny drop in the overall content ocean they’re serving up so hey why can’t you just stop overreacting?

That’s also pure misdirection of course. The wider problem with malicious disinformation is it pervades all content on these platforms. Malicious paid-for ads are just the tip of the iceberg.

So sure, the Kremlin didn’t spend very much money paying Twitter and Facebook for Brexit ads — because it didn’t need to. It could (and did) freely set up ranks of bot accounts on their platforms to tweet and share content created by RT, for example — frequently skewed towards promoting the Leave campaign, according to multiple third party studies — amplifying the reach and impact of its digital propaganda without having to send the tech firms any more checks.

And indeed, Russia is still operating ranks of bots on social media which are actively working to divide public opinion, as Facebook freely admits.

Maliciously minded content has also been shown to be preferred by (for example) Facebook’s or Google’s algorithms vs truthful content, because their systems have been tuned to what’s most clickable and shareable and can also be all too easily gamed.
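
As an illustration of that tuning (a hypothetical scoring function, not any platform’s actual algorithm), a feed ranked purely on engagement signals will surface the most clickable post regardless of whether it is true:

```python
# Hypothetical engagement-only ranking: veracity plays no part in the score,
# so a fabricated but sensational post outranks a sober, truthful one
# whenever it earns more clicks and shares.

def engagement_score(post):
    # Weighted sum of engagement signals -- typical of click-optimized feeds.
    return 1.0 * post["clicks"] + 2.0 * post["shares"] + 0.5 * post["comments"]

posts = [
    {"title": "Measured analysis of trade policy",
     "clicks": 120, "shares": 10, "comments": 30, "truthful": True},
    {"title": "SHOCKING claim about politician!",
     "clicks": 900, "shares": 400, "comments": 250, "truthful": False},
]

ranked = sorted(posts, key=engagement_score, reverse=True)
# Nothing in the score penalizes falsehood, so the fabricated post ranks first.
print([p["title"] for p in ranked])
```

The sketch also shows why such a system “can be all too easily gamed”: anyone who can inflate clicks and shares (bot ranks, say) moves their content up, with no counterweight for accuracy.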

And, despite their ongoing techie efforts to fix what they view as some kind of content-sorting problem, their algorithms continue to get caught and called out for promoting dubious stuff.

Thing is, this kind of dynamic, contextual judgement is very hard for AI — as Zuckerberg himself has conceded. But human review is unthinkable. Tech giants simply do not want to employ the numbers of humans that would be necessary to always be making the right editorial call on each and every piece of digital content.

If they did, they’d instantly become the largest media organizations in the world — needing at least hundreds of thousands (if not millions) of trained journalists to serve every market and local region they cover.

They would also instantly invite regulation as publishers — ergo, back to the regulatory nightmare they’re so desperate to avoid.

All of this is why fake news is an existential problem for social media.

And why Zuckerberg’s 2018 yearly challenge will be his toughest ever.

Little wonder, then, that these firms are now so fixed on trying to narrow the debate and concern to focus specifically on political advertising. Rather than malicious content in general.

Because if you sit and think about the full scope of malicious disinformation, coupled with the automated global distribution platforms that social media has become, it soon becomes clear this problem scales as big and wide as the platforms themselves.

And at that point only two solutions look viable:

A) bespoke regulation, including regulatory access to proprietary algorithmic content-sorting engines.

B) breaking up big tech so none of these platforms have the reach and power to enable mass-manipulation.

The threat posed by info-cyberwarfare on tech platforms that straddle entire societies and have become attention-sapping powerhouses — swapping out editorially structured news distribution for machine-powered content hierarchies that lack any kind of civic mission — is really only just beginning to become clear, as the detail of abuses and misuses slowly emerges. And as certain damages are felt.

Facebook’s user base is a staggering two billion+ at this point — way bigger than the population of the world’s most populous country, China. Google’s YouTube has over a billion users. Which the company points out amounts to more than a third of the entire user-base of the Internet.

What does this seismic shift in media distribution and consumption mean for societies and democracies? We can hazard guesses but we’re not in a position to know without much better access to tightly guarded, commercially controlled information streams.

Really, the case for social media regulation is starting to look unstoppable.

But even with unfettered access to internal data and the potential to control content-sifting engines, how do you fix a problem that scales so very big and broad?

Regulating such massive, global platforms would clearly not be easy. In some countries Facebook is so dominant it essentially is the Internet.

So, again, this problem looks existential. And Zuck’s 2018 challenge is more Sisyphean than Herculean.

And it might well be that competition concerns are not the only trigger-call for big tech to get broken up this year.

Featured Image: Quinn Dombrowski/Flickr UNDER A CC BY-SA 2.0 LICENSE

 



Trump cites Facebook exec’s comments downplaying Russian influence on election

20:53 | 18 February

You’d be forgiven for missing Donald Trump’s multiple retweets of Facebook executive Rob Goldman over the weekend. Perhaps you were spending time with family, watching Black Panther or just attempting to forget politics for a moment by ignoring the manic flurry of social media updates from the leader of the free world.

But in amongst a deluge of tweets that blamed Democrats for failing to preserve DACA, called out the FBI over the recent school shooting in Florida and affectionately referred to a member of congress as “Liddle’ Adam Schiff, the leakin’ monster of no control,” the President cited Facebook’s VP of Ads as evidence against claims that his campaign colluded with Russia.

“The Fake News Media never fails,” Trump tweeted over the weekend. “Hard to ignore this fact from the Vice President of Facebook Ads, Rob Goldman!”

“I have seen all of the Russian ads and I can say very definitively that swaying the election was *NOT* the main goal.”
Rob Goldman
Vice President of Facebook Ads https://t.co/A5ft7cGJkE

— Donald J. Trump (@realDonaldTrump) February 17, 2018

Trump was citing Goldman’s own Twitter dump over the past week, responding to Special Counsel Robert Mueller’s recent indictment of 13 Russian citizens charged with interfering in the presidential election.

“Very excited to see the Mueller indictment today,” Goldman wrote. “We shared Russian ads with Congress, Mueller and the American people to help the public understand how the Russians abused our system.  Still, there are keys facts about the Russian actions  that are still not well understood.”

Of course, Mueller’s findings haven’t exactly exonerated Facebook in all this. The site, along with its subsidiary Instagram, were mentioned by name 41 times in the indictment.

Goldman’s Twitter storm acknowledges that the social media behemoth has certainly been a centerpiece of Russia’s misinformation campaign, but adds, “The majority of the Russian ad spend happened AFTER the election.  We shared that fact, but very few outlets have covered it because it doesn’t align with the main media narrative of Tump and the election.”

The Fake News Media never fails. Hard to ignore this fact from the Vice President of Facebook Ads, Rob Goldman! https://t.co/XGC7ynZwYJ

— Donald J. Trump (@realDonaldTrump) February 17, 2018

Trump spotted the opening and quickly cited it as evidence of the “fake news” campaign to link his election to Russian meddling. While it’s understandable that he would seize upon this sort of statement from a Facebook executive in an ongoing effort to put these investigations behind him, among other things, the tweets don’t address the impact that non-advertisement Facebook posts played in the election.

After all, Facebook previously told Congress that Russian-linked ads may have reached as many as 10 million users in the U.S., while the posts from Russian agents were believed to have reached as many as 126 million Americans.

Featured Image: Cheriss May/NurPhoto via Getty Images

 



How ad-free subscriptions could solve Facebook

20:22 | 17 February

 At the core of Facebook’s “well-being” problem is that its business is directly coupled with total time spent on its apps. The more hours you pass on the social network, the more ads you see and click, the more money it earns. That puts its plan to make using Facebook healthier at odds with its finances, restricting how far it’s willing to go to protect us from the harms… Read More

 



Facebook didn’t mean to send spam texts to two-factor authentication users

20:15 | 17 February

Facebook Chief Security Officer Alex Stamos apologized for spam texts that were incorrectly sent to users who had activated two-factor authentication. The company is working on a fix, and you won’t receive non-security-related text messages if you never signed up for those notifications.

Facebook says it was a bug. But calling it a bug is a bit too easy — it’s a feature that was badly implemented as it’s clear that Facebook has been treating all phone numbers the same way. It doesn’t matter if you add your phone number for security reasons or to receive notifications. Facebook put all of them in the same bucket. It’s poor design, not a bug.
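
A minimal sketch of the distinction the piece argues Facebook failed to draw (hypothetical names throughout, not Facebook’s actual data model): record why each number was collected, and gate every outgoing text on that purpose.

```python
# Hypothetical fix for the "one bucket" design flaw: tag each phone number
# with the purpose it was collected for, and only send messages that match.

from dataclasses import dataclass

@dataclass
class PhoneNumber:
    number: str
    purpose: str  # "2fa" (security only) or "notifications" (explicitly opted in)

def may_send(phone: PhoneNumber, message_kind: str) -> bool:
    """Return True only if this kind of message is allowed for this number."""
    if message_kind == "security":
        return True  # security messages (login codes, alerts) are always allowed
    # Non-security texts go only to numbers explicitly opted in to notifications.
    return phone.purpose == "notifications"

two_factor_only = PhoneNumber("+15550000001", "2fa")
opted_in = PhoneNumber("+15550000002", "notifications")

assert may_send(two_factor_only, "security")
assert not may_send(two_factor_only, "engagement")  # the spam described above
assert may_send(opted_in, "engagement")
```

With separate buckets, a number added solely for two-factor authentication can never receive engagement texts, which is exactly the behavior Stamos’s fix promises.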

“It was not our intention to send non-security-related SMS notifications to these phone numbers, and I am sorry for any inconvenience these messages might have caused,” Stamos wrote. “We are working to ensure that people who sign up for two-factor authentication won’t receive non-security-related notifications from us unless they specifically choose to receive them, and the same will be true for those who signed up in the past. We expect to have the fixes in place in the coming days. To reiterate, this was not an intentional decision; this was a bug.”

And yet, this is particularly bad because it creates a bad narrative around two-factor authentication. While Facebook lets you use a code generator mobile app or a U2F USB key, many people rely on text messages for two-factor authentication. It’s a second layer of security so that strangers who have your password can’t connect without the second factor.

Everyone should enable two-factor authentication. But people might hesitate now that they know Facebook has used a security feature to improve engagement in the past. I’d recommend turning it on with a code generator.

Does it mean tech publications shouldn’t have shared this information? Of course not (and I’m looking at you, former Facebook security engineer Alec Muffett). If nobody had written about the issue, Facebook would still be spamming users and sharing great engagement numbers in its quarterly earnings release.

The fact that Facebook poorly implemented a security feature is… Facebook’s fault.

In addition to that, Facebook is also disabling posting to Facebook via text messages altogether. Earlier this week, a tweet went viral as Gabriel Lewis tried disabling those text notifications and ended up sharing posts on Facebook:

So I signed up for 2 factor auth on Facebook and they used it as an opportunity to spam me notifications. Then they posted my replies on my wall. 🤦‍♂️ pic.twitter.com/Fy44b07wNg

— Gabriel Lewis 🦆 (@Gabriel__Lewis) February 12, 2018

The company says that this feature may have been useful at some point when smartphones were less popular, but there’s no reason to keep it around now.

Featured Image: Facebook

 


