Blog of the website «TechCrunch»

People

John Smith

John Smith, 49

Joined: 28 January 2014

Interests: No data

Jonnathan Coleman

Jonnathan Coleman, 32

Joined: 18 June 2014

About myself: You may say I'm a dreamer

Interests: Snowboarding, Cycling, Beer

Andrey II

Andrey II, 41

Joined: 08 January 2014

Interests: No data

David

David

Joined: 05 August 2014

Interests: No data

David Markham

David Markham, 65

Joined: 13 November 2014

Interests: No data

Michelle Li

Michelle Li, 41

Joined: 13 August 2014

Interests: No data

Max Almenas

Max Almenas, 53

Joined: 10 August 2014

Interests: No data

29Jan

29Jan, 32

Joined: 29 January 2014

Interests: No data

s82 s82

s82 s82, 26

Joined: 16 April 2014

Interests: No data

Wicca

Wicca, 37

Joined: 18 June 2014

Interests: No data

Phebe Paul

Phebe Paul, 27

Joined: 08 September 2014

Interests: No data

Артем Ступаков

Артем Ступаков, 93

Joined: 29 January 2014

About myself: Enjoying life!

Interests: No data

sergei jkovlev

sergei jkovlev, 59

Joined: 03 November 2019

Interests: music, movies, cars

Алексей Гено

Алексей Гено, 8

Joined: 25 June 2015

About myself: Hi

Interests: Interest1daasdfasf, http://apple.com

technetonlines

technetonlines

Joined: 24 January 2019

Interests: No data



Main article: Facebook Policy

Topics from 1 to 10 | in all: 25

Daily Crunch: Facebook fights (some) election lies

19:13 | 22 October

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. New Facebook features fight election lies everywhere but ads

Facebook made a slew of announcements designed to stop 2020 election interference — including the takedown of some foreign influence campaigns, the labeling of some state-owned or state-controlled media organizations and a new feature called Facebook Protect, which adds extra security.

However, during a press call about these changes, CEO Mark Zuckerberg was hammered with questions about Facebook’s continued unwillingness to fact check political ads.

2. Elon Musk tweets using SpaceX’s Starlink satellite internet

SpaceX CEO Elon Musk used an internet connection provided by his company’s Starlink constellation of broadband satellites early on Tuesday to send a simple tweet.

3. Roku buys adtech platform dataxu for $150M

Roku is beefing up its advertising business with the acquisition of Boston-based dataxu, a demand-side platform that will allow marketers to plan, buy and optimize their video ad campaigns that run on Roku’s devices and services.

4. FTC settles with Devumi, a company that sold fake followers, for $2.5M

The U.S. Federal Trade Commission has put an end to the deceptive marketing tactics of Devumi, a company that sold fake indicators of social media influence.

5. Sandbox VR raises millions more in celebrity party round

Location-based virtual reality startup Sandbox VR announced a huge $68 million Series A led by Andreessen Horowitz at the beginning of the year. Now it’s bringing on some new investors — including Justin Timberlake, Katy Perry, Orlando Bloom and Will Smith — in an $11 million “strategic” round.

6. Medium says it will compensate writers based on reading time, not claps

According to Medium’s Emma Smith, reading time is “a closer measure of quality and resonance with readers.” She also said Medium has now paid out more than $6 million total to 30,000 writers.

7. 6 tips founders need to know about securing their startup

We sat down with three experts on the Extra Crunch stage at Disrupt SF to help startups and founders understand security — what they need to do, when and why. (Extra Crunch membership required.)

 



Labor leaders and startup founders talk how to build a sustainable gig economy

01:44 | 17 October

Over the past few years, gig economy companies and their treatment of their labor forces have become a hot-button issue for public and private sector debate.

At our recent annual Disrupt event in San Francisco, we dug into how founders, companies and the broader community can play a positive role in the gig economy, with help from Derecka Mehrens, an executive director at Working Partnerships USA and co-founder of Silicon Valley Rising — an advocacy campaign focused on fighting for tech worker rights and creating an inclusive tech economy — and Amanda de Cadenet, founder of Girlgaze, a platform that connects advertisers with a network of 200,000 female-identifying and non-binary creatives.

Derecka and Amanda dove deep into where incumbent gig companies have fallen short, what they’re doing to right the ship, whether VC and hyper-growth mentalities fit into a sustainable gig economy, as well as thoughts on Uber’s new ‘Uber Works’ platform and CA AB-5. The following has been lightly edited for length and clarity.

Where current gig companies are failing

Arman Tabatabai: What was the original promise and value proposition of the gig economy? What went wrong?

Derecka Mehrens: The gig economy exists in a larger context, which is one in which neoliberalism is failing, trickle-down economics is proven wrong, and everyday working people aren’t surviving and are looking for something more.

And so you have a situation in which the system we put together to create employment, to create our communities, to build our housing, to give us jobs is dysfunctional. And within that, folks are going to come up with disruptive solutions to pieces of it with a promise in mind to solve a problem. But without a larger solution, that will end up, in our view, exacerbating existing inequalities.

 



Europe shows the way in online privacy

21:31 | 26 September

Alastair Mitchell Contributor
Alastair Mitchell is a partner at multi-stage VC fund EQT Ventures and the fund's B2B sales, marketing and SaaS expert. Ali also focuses on helping US companies scale into Europe and vice versa.

After passively watching for many years as tech giants developed dominant market positions that threaten consumer privacy and stifle competition, American antitrust regulators seem to have finally grasped what’s happening and decided to take action. 

This increasing scrutiny, which tacitly acknowledges that Europe’s more proactive regulators were perhaps right all along, is helping unleash a wave of tech startups at the expense of big tech. By holding industry titans accountable over the privacy and use of our data, regulators are encouraging long overdue disruption of everything from back-end infrastructure to consumer services.

Over the past decade, Facebook, Google, Amazon and others have tightened their grip on their respective domains by buying up hundreds of smaller rivals, with little U.S. government opposition. But as their dominance has grown, and as egregious privacy violations and mishaps proliferate, regulators can no longer look the other way.

In recent months, American regulators have announced a flurry of new antitrust investigations into big technology companies. The Federal Trade Commission has voted to fine Facebook $5 billion for misusing consumer data, the U.S. House Judiciary Committee is probing the tech industry for antitrust violations and 50 attorneys general announced an antitrust probe into Google. U.S. officials are even considering establishing a digital watchdog agency.

It’s hard to understand why it took so long, though perhaps U.S. officials were loath to target domestic companies that were driving huge economic growth and creating millions of new jobs. In contrast, their counterparts across the pond have been on an antitrust tear under the watch of European Union antitrust commissioner (and now also EVP of digital affairs) Margrethe Vestager.

Now that regulators from both Europe and the United States are pursuing antitrust probes, they have exposed areas where startups can innovate. 

Startups take on big tech

 



Facebook tightens policies around self-harm and suicide

16:47 | 10 September

Timed with World Suicide Prevention Day, Facebook is tightening its policies around some difficult topics including self-harm, suicide, and eating disorder content after consulting with a series of experts on these topics. It’s also hiring a new Safety Policy Manager to advise on these areas going forward. This person will be specifically tasked with analyzing the impacts of Facebook’s policies and its apps on people’s health and well-being, and will explore new ways to improve support for the Facebook community.

The social network, like others in the space, has to walk a fine line when it comes to self-harm content. On the one hand, allowing people to openly discuss their mental health struggles with family, friends, and other online support groups can be beneficial. But on the other, science indicates that suicide can be contagious, and that clusters and outbreaks are real phenomena. Meanwhile, graphic imagery of self-harm can unintentionally promote the behavior.

With its updated policies, Facebook aims to prevent the spread of more harmful imagery and content.

It changed its policy around self-harm images to no longer allow graphic cutting images, which can unintentionally promote or trigger self-harm. These images will not be allowed even if someone is seeking support or expressing themselves to aid their recovery, Facebook says.

The same content will also now be more difficult to find on Instagram through search and Explore.

And Facebook has tightened its policy regarding eating disorder content on its apps to prevent an expanded range of content that could contribute to eating disorders. This includes content that focuses on the depiction of ribs, collar bones, thigh gaps, concave stomach, or protruding spine or scapula, when shared with terms related to eating disorders. It will also ban content that includes instructions for drastic and unhealthy weight loss, when shared with those same sorts of terms.

It will also display a sensitivity screen over images of healed self-harm cuts going forward, to avoid unintentionally promoting self-harm.

Even when it takes content down, Facebook says it will now continue to send resources to people who posted self-harm or eating disorder content.

Facebook will additionally include Orygen’s #chatsafe guidelines to its Safety Center and in resources on Instagram. These guidelines are meant to help those who are responding to suicide-related content posted by others or are looking to express their own thoughts and feelings on the topic.

The changes came about over the course of the year, following Facebook’s consultations with a variety of experts in the field across a number of countries, including the U.S., Canada, U.K., Australia, Brazil, Bulgaria, India, Mexico, the Philippines, and Thailand. Several of the policies were updated prior to today, but Facebook is now publicly announcing the combined lot.

The company says it’s also looking into sharing the public data from its platform on how people talk about suicide with academic researchers by way of the CrowdTangle monitoring tool. Before, this was made available primarily to newsrooms and media publishers.

Suicide helplines provide help to those in need. Contact a helpline if you need support yourself or need help supporting a friend. Click here for Facebook’s list of helplines around the world. 

 



NY attorney general will lead antitrust investigation into Facebook

17:20 | 6 September

New York Attorney General Letitia James announced this morning that she’s leading an investigation into Facebook over antitrust issues — in other words, whether Facebook used its social media dominance to engage in anti-competitive behavior.

In a statement, James said:

Even the largest social media platform in the world must follow the law and respect consumers. I am proud to be leading a bipartisan coalition of attorneys general in investigating whether Facebook has stifled competition and put users at risk. We will use every investigative tool at our disposal to determine whether Facebook’s actions may have endangered consumer data, reduced the quality of consumers’ choices, or increased the price of advertising.

According to the announcement, that coalition includes the attorneys general of Colorado, Florida, Iowa, Nebraska, North Carolina, Ohio, Tennessee and the District of Columbia.

Facebook already announced in June that it was facing an antitrust investigation from the Federal Trade Commission (separate from the privacy-related settlement with the FTC that it announced on the same day). It seems that most of the tech giants are facing antitrust scrutiny from the FTC and Department of Justice.

“People have multiple choices for every one of the services we provide,” Facebook’s vice president of state and local policy Will Castleberry said in a statement after the new investigation was announced. “We understand that if we stop innovating, people can easily leave our platform. This underscores the competition we face, not only in the US but around the globe. We will of course work constructively with state attorneys general and we welcome a conversation with policymakers about the competitive environment in which we operate.”

 



Facebook’s content oversight board plan is raising more questions than it answers

15:08 | 28 June

Facebook has produced a report summarizing feedback it’s taken in on its idea of establishing a content oversight board to help arbitrate on moderation decisions.

Aka the ‘supreme court of Facebook’ concept first discussed by founder Mark Zuckerberg last year, when he told Vox:

[O]ver the long term, what I’d really like to get to is an independent appeal. So maybe folks at Facebook make the first decision based on the community standards that are outlined, and then people can get a second opinion. You can imagine some sort of structure, almost like a Supreme Court, that is made up of independent folks who don’t work for Facebook, who ultimately make the final judgment call on what should be acceptable speech in a community that reflects the social norms and values of people all around the world.

Facebook has since suggested the oversight board will be up and running later this year. And has just wheeled out its global head of policy and spin for a European PR push to convince regional governments to give it room for self-regulation 2.0, rather than slapping it with broadcast-style regulations.

The latest report, which follows a draft charter unveiled in January, rounds up input fed to Facebook via six “in-depth” workshops and 22 roundtables convened by Facebook and held in locations of its choosing around the world.

In all, Facebook says the events were attended by 650+ people from 88 different countries — though it further qualifies that by saying it had “personal discussions” with more than 250 people and received more than 1,200 public consultation submissions.

“In each of these engagements, the questions outlined in the draft charter led to thoughtful discussions with global perspectives, pushing us to consider multiple angles for how this board could function and be designed,” Facebook writes.

It goes without saying that this input represents a minuscule fraction of the actual ‘population’ of Facebook’s eponymous platform, which now exceeds 2.2BN accounts (an unknown portion of which will be fake/duplicates), while its operations stretch to more than double the number of markets represented by individuals at the events.

The feedback exercise — as indeed the concept of the board itself — is inevitably an exercise in opinion abstraction. Which gives Facebook leeway to shape the output as it prefers. (And, indeed, the full report notes that “some found this public consultation ‘not nearly iterative enough, nor transparent enough, to provide any legitimacy’ to the process of creating the Board”.)

In a blog post providing its spin on the “global feedback and input”, Facebook culls three “general themes” it claims emerged from the various discussions and submissions — namely that: 

  • People want a board that exercises independent judgment — not judgment influenced by Facebook management, governments or third parties, writing: “The board will need a strong foundation for its decision-making, a set of higher-order principles — informed by free expression and international human rights law — that it can refer to when prioritizing values like safety and voice, privacy and equality”. Though the full report flags up the challenge of ensuring the sought-for independence, and it’s not clear Facebook will be able to create a structure that can stand apart from its own company or indeed other lobbyists
  • How the board will select and hear cases, deliberate together, come to a decision and communicate its recommendations both to Facebook and the public are key considerations — though those vital details remain tbc. “In making its decisions, the board may need to consult experts with specific cultural knowledge, technical expertise and an understanding of content moderation,” Facebook suggests, implying the boundaries of the board are unlikely to be firmly fixed
  • People also want a board that’s “as diverse as the many people on Facebook and Instagram” — the problem being that’s clearly impossible, given the planet-spanning size of Facebook platforms. Another desire Facebook highlights is for the board to be able to encourage it to make “better, more transparent decisions”. The need for board decisions (and indeed decisions Facebook takes when setting up the board) to be transparent emerges as a major theme in the report. In terms of the board’s make-up, Facebook says it should comprise experts with different backgrounds, different disciplines and different viewpoints — “who can all represent the interests of a global community”. Though there will clearly be differing views on how, or even whether, that’s possible to achieve; and therefore questions over how a 40-odd member body that will likely rarely sit in plenary can plausibly act as a prism for Facebook’s user-base

The report is worth reading in full to get a sense of the broad spectrum of governance questions and conundrums Facebook is here wading into.

If, as it very much looks, this is a Facebook-configured exercise in blame spreading for the problems its platform hosts, the surface area for disagreement and dispute will clearly be massive — and from the company’s point of view that already looks like a win. Given how, since 2016, Facebook (and Zuckerberg) have been the conduit for so much public and political anger linked to the spreading and accelerating of harmful online content.

Differing opinions will also provide cover for Facebook to justify starting “narrow”. Which it has said it will do with the board, aiming to have something up and running by the end of this year. But that just means it’ll be managing expectations of how little actual oversight will flow right from the very start.

The report also shows that Facebook’s claimed ‘listening ear’ for a “global perspective” has some very hard limits.

So while those involved in the consultation are reported to have repeatedly suggested the oversight board should not just be limited to content judgement — but should also be able to make binding decisions related to things like Facebook’s newsfeed algorithm or wider use of AI by the company — Facebook works to shut those suggestions down, underscoring the scope of the oversight will be limited to content.

“The subtitle of the Draft Charter — “An Oversight Board for Content Decisions” — made clear that this body would focus specifically on content. In this regard, Facebook has been relatively clear about the Board’s scope and remit,” it writes. “However, throughout the consultation period, interlocutors often proposed that the Board hear a wide range of controversial and emerging issues: newsfeed ranking, data privacy, issues of local law, artificial intelligence, advertising policies, and so on.”

It goes on to admit that “the question persisted: should the Board be restricted to content decisions only, without much real influence over policy?” — before picking a selection of responses that appear intended to fuzz the issue, allowing it to position itself as seeking a reasoned middle ground.

“In the end, balance will be needed; Facebook will need to resolve tensions between minimalist and maximalist visions of the Board,” it concludes. “Above all, it will have to demonstrate that the Oversight Board — as an enterprise worth doing — adds value, is relevant, and represents a step forward from content governance as it stands today.”

Sample cases the report suggests the board could review — as suggested by participants in Facebook’s consultation — include:

  • A user shared a list of men working in academia, who were accused of engaging in inappropriate behavior and/or abuse, including unwanted sexual advances;
  • A Page that commonly uses memes and other forms of satire shared posts that used discriminatory remarks to describe a particular demographic group in India;
  • A candidate for office made strong, disparaging remarks to an unknown passerby regarding their gender identity and livestreamed the interaction. Other users reported this due to safety concerns for the latter person;
  • A government official suggested that a local minority group needed to be cautious, comparing that group’s behavior to that of other groups that have faced genocide

So, again, it’s easy to see the kinds of controversies and indeed criticisms that individuals sitting on Facebook’s board will be opening themselves up to — whichever way their decisions fall.

A content review board that will inevitably remain linked to (if not also reimbursed via) the company that establishes it, and will not be granted powers to set wider Facebook policy — but will instead be tasked with the impossible job of trying to please all of the Facebook users (and critics) all of the time — does certainly risk looking like Facebook’s stooge; a conduit for channeling dirty and political content problems that have the potential to go viral and threaten its continued ability to monetize the stuff that’s uploaded to its platforms.

Facebook’s preferred choice of phrase to describe its users — “global community” — is a tellingly flat one in this regard.

The company conspicuously avoids talk of communities, plural. Instead, the closest we get here is a claim that its selective consultation exercise is “ensuring a global perspective”, as if a singular essence can somehow be distilled from a non-representative sample of human opinion — when in fact the stuff that flows across its platforms is quite the opposite; multitudes of perspectives from individuals and communities whose shared use of Facebook does not an emergent ‘global community’ make.

This is why Facebook has struggled to impose a single set of ‘community standards’ across a platform that spans so many contexts; a one-size-fits-all approach very clearly doesn’t fit.

Yet it’s not at all clear how Facebook creating yet another layer of content review changes anything much for that challenge — unless the oversight body is mostly intended to act as a human shield for the company itself, putting a firewall between it and certain highly controversial content; aka Facebook’s supreme court of taking the blame on its behalf.

Just one of the difficult content moderation issues embedded in the businesses of sociotechnical, planet-spanning social media platform giants like Facebook — hate speech — defies a top-down ‘global’ fix.

As Evelyn Douek wrote last year vis-à-vis hate speech on the Lawfare blog, after Zuckerberg had floated the idea of a governance structure for online speech: “Even if it were possible to draw clear jurisdictional lines and create robust rules for what constitutes hate speech in countries across the globe, this is only the beginning of the problem: within each jurisdiction, hate speech is deeply context-dependent… This context dependence presents a practically insuperable problem for a platform with over 2 billion users uploading vast amounts of material every second.”

A cynic would say Facebook knows it can’t fix planet-scale content moderation and still turn a profit. So it needs a way to distract attention and shift blame.

If it can get enough outsiders to buy into its oversight board — allowing it to pass off the oxymoron of “global governance”, via whatever self-styled structure it allows to emerge from these self-regulatory seeds — the company’s hope must be that the device also works as a bolster against political pressure.

Both over particular problem/controversial content, and also as a vehicle to shrink the space for governments to regulate Facebook.

In a video discussion also embedded in Facebook’s blog post — in which Zuckerberg couches the oversight board project as “a big experiment that we hope can pioneer a new model for the governance of speech on the Internet” — the Facebook founder also makes reference to calls he’s made for more regulation of the Internet. As he does so he immediately qualifies the statement by blending state regulation with industry self-regulation — saying the kind of regulation he’s asking for is “in some cases by democratic process, in other cases through independent industry process”.

So Zuckerberg is making a clear pitch to position Facebook as above the rule of nation state law — and setting up a “global governance” layer is the self-serving vehicle of choice for the company to try and overtake democracy.

Even if Facebook’s oversight board’s structure is so cunningly fashioned as to present to a rationally minded individual as, in some senses, ‘independent’ from Facebook, its entire being and function will remain dependent on Facebook’s continued existence.

Whereas if individual markets impose their own statutory regulations on Internet platforms, based on democratic and societal principles, Facebook will have no control over the rules they impose, direct or otherwise — with uncontrolled compliance costs falling on its business.

It’s easy to see which model sits most easily with Zuckerberg — a man who has demonstrated he will not be held personally accountable for what happens on his platform.

Not when he’s asked by one (non-US) parliament, nor even by representatives from nine parliaments — all keen to discuss the societal fallouts of political disinformation and hate speech spread and accelerated on Facebook. Turns out that’s not the kind of global perspective Facebook wants to sell you.

 



Facebook adds new limits to address the spread of hate speech in Sri Lanka and Myanmar

11:21 | 21 June

As Facebook grapples with the spread of hate speech on its platform, it is introducing changes that limit the spread of messages in two countries where it has come under fire in recent years: Sri Lanka and Myanmar.

In a blog post on Thursday evening, Facebook said that it was “adding friction” to message forwarding for Messenger users in Sri Lanka so that people could only share a particular message a certain number of times. The limit is currently set to five people.

This is similar to a limit that Facebook introduced to WhatsApp last year. In India, a user can forward a message to only five other people on WhatsApp. In other markets, the limit kicks in at 20. Facebook said some users had also requested this feature because they are sick of receiving chain messages.

In early March, Sri Lanka grappled with mob violence directed at its Muslim minority. In the midst of it, hate speech and rumors started to spread like wildfire on social media services, including those operated by Facebook. The government in the country then briefly shut down citizens’ access to social media services.

In Myanmar, social media platforms have faced a similar, long-lasting challenge. Facebook, in particular, has been blamed for allowing hate speech to spread that stoked violence against the Rohingya ethnic group. Critics have claimed that the company’s efforts in the country, where it does not have a local office or employees, are simply not enough.

In its blog post, Facebook said it has started to reduce the distribution of content from people in Myanmar who have consistently violated its community standards with previous posts. Facebook said it will use what it learns to explore expanding this approach to other markets in the future.

“By limiting visibility in this way, we hope to mitigate against the risk of offline harm and violence,” Facebook’s Samidh Chakrabarti, director of product management and civic integrity, and Rosa Birch, director of strategic response, wrote in the blog post.

In cases where it identifies individuals or organizations that “more directly promote or engage violence”, the company said it would ban those accounts. Facebook is also extending the use of AI to recognize posts that may contain graphic violence and comments that are “potentially violent or dehumanizing.”

The social network has, in the past, banned armed groups and accounts run by the military in Myanmar, but it has been criticized for reacting slowly and, also, for promoting a false narrative that suggested its AI systems handle the work.

Last month, Facebook said it was able to detect 65% of the hate speech content that it proactively removed (relying on users’ reporting for the rest), up from 24% just over a year ago. In the quarter that ended in March this year, Facebook said it had taken down 4 million hate speech posts.

Facebook continues to face similar challenges in other markets, including India, the Philippines, and Indonesia. Following a riot last month, Indonesia restricted the usage of Facebook, Instagram, and WhatsApp in an attempt to contain the flow of false information.

 



Indian PM Narendra Modi’s reelection spells more frustration for US tech giants

21:30 | 23 May

Amazon and Walmart’s problems in India look set to continue after Narendra Modi, the biggest force to embrace the country’s politics in decades, led his Hindu nationalist Bharatiya Janata Party to a historic landslide re-election on Thursday, reaffirming his popularity in the eyes of the world’s largest democracy.

The re-election, which gives Modi’s government another five years in power, will in many ways chart the path of India’s burgeoning startup ecosystem, and the local play of Silicon Valley companies that have grown increasingly wary of recent policy changes.

At stake is also the future of India’s internet, the second largest in the world. With more than 550 million internet users in India, the nation has emerged as one of the last great growth markets for Silicon Valley companies. Google, Facebook, and Amazon count India as one of their largest and fastest growing markets. And until late 2016, they enjoyed great dynamics with the Indian government.

But in recent years, New Delhi has ordered more internet shutdowns than ever before; and puzzled many over crackdowns on sometimes legitimate websites. To top that, the government recently proposed a law that would require any intermediary — telecom operators, messaging apps, and social media services among others — with more than 5 million users to introduce a number of changes to how they operate in the nation. More on this shortly.

Growing tension

 



Facebook will reveal who uploaded your contact info for ad targeting

23:42 | 6 February

Facebook’s crackdown on non-consensual ad targeting last year will finally produce results. In March, TechCrunch discovered Facebook planned to require advertisers to pledge that they had permission to upload someone’s phone number or email address for ad targeting. That tool debuted in June, though there was no verification process and Facebook just took businesses at their word despite the financial incentive to lie. In November, Facebook launched a way for ad agencies and marketing tech developers to specify who they were buying promotions ‘on behalf of’. Soon that information will finally be revealed to users.

Facebook’s new Custom Audiences transparency feature shows who uploaded your contact info and when, and whether it was shared between brands and partners

Facebook previously revealed only which brand was using your contact info for targeting, not who uploaded it or when

Starting February 28th, Facebook’s “Why am I seeing this?” button in the drop-down menu of feed posts will reveal more than just the brand that paid for the ad, some biographical details they targeted and whether they uploaded your contact info. Facebook will start to show when your contact info was uploaded, whether it was uploaded by the brand or by one of its agency or developer partners, and when access was shared between partners.

This new level of transparency could help users pinpoint how a brand got hold of their contact info, which might help them change their behavior to stay more private. The system could also help Facebook zero in on agencies or partners that constantly upload contact info and might not have obtained it legitimately. Apparently seeking not to dredge up old privacy problems, Facebook didn’t publish a blog post about the change, but simply announced it in a post to the Facebook Advertiser Hub Page.

The move comes in the wake of Facebook attaching immediately visible “paid for by” labels to more political ads to defend against election interference. With so many users concerned about how Facebook exploits their data, the Custom Audiences transparency feature could provide a small boost of confidence at a time when people have little faith in the social network’s privacy practices.

 



10 critical points from Zuckerberg’s epic security manifesto

11:00 | 13 September

Mark Zuckerberg wants you to know he’s trying his damnedest to fix Facebook before it breaks democracy. Tonight he posted a 3,260-word battle plan for fighting election interference. While drilling through Facebook’s strategy and progress, he slips in several notable passages revealing his own philosophy.

Zuckerberg has cast off his premature skepticism and is ready to command the troops. He sees Facebook’s real identity policy as a powerful weapon for truth other social networks lack, but that would be weakened if Instagram and WhatsApp were split off by regulators. He’s done with the finger-pointing and wants everyone to work together on solutions. And he’s adopted a touch of cynicism that could open his eyes and help him predict how people will misuse his creation.

Here are the most important parts of Zuckerberg’s security manifesto:

Zuckerberg embraces his war-time tactician role

“While we want to move quickly when we identify a threat, it’s also important to wait until we uncover as much of the network as we can before we take accounts down to avoid tipping off our adversaries, who would otherwise take extra steps to cover their remaining tracks. And ideally, we time these takedowns to cause the maximum disruption to their operations.”

The fury he unleashed on Google+, Snapchat, and Facebook’s IPO-killer is now aimed at election attackers

“These are incredibly complex and important problems, and this has been an intense year. I am bringing the same focus and rigor to addressing these issues that I’ve brought to previous product challenges like shifting our services to mobile.”

Balancing free speech and security is complicated and expensive

“These issues are even harder because people don’t agree on what a good outcome looks like, or what tradeoffs are acceptable to make. When it comes to free expression, thoughtful people come to different conclusions about the right balances. When it comes to implementing a solution, certainly some investors disagree with my approach to invest so much in security.”

Putting Twitter and YouTube on blast for allowing pseudonymity…

“One advantage Facebook has is that we have a principle that you must use your real identity. This means we have a clear notion of what’s an authentic account. This is harder with services like Instagram, WhatsApp, Twitter, YouTube, iMessage, or any other service where you don’t need to provide your real identity.”

…While making an argument for why the Internet is more secure if Facebook isn’t broken up

“Fortunately, our systems are shared, so when we find bad actors on Facebook, we can also remove accounts linked to them on Instagram and WhatsApp as well. And where we can share information with other companies, we can also help them remove fake accounts too.”

Political ads aren’t a business, they’re supposedly a moral duty

“When deciding on this policy, we also discussed whether it would be better to ban political ads altogether. Initially, this seemed simple and attractive. But we decided against it — not due to money, as this new verification process is costly and so we no longer make any meaningful profit on political ads — but because we believe in giving people a voice. We didn’t want to take away an important tool many groups use to engage in the political process.”

Zuckerberg overruled staff to allow academic research on Facebook

“As a result of these controversies [like Cambridge Analytica], there was considerable concern amongst Facebook employees about allowing researchers to access data. Ultimately, I decided that the benefits of enabling this kind of academic research outweigh the risks. But we are dedicating significant resources to ensuring this research is conducted in a way that respects people’s privacy and meets the highest ethical standards.”

Calling on law enforcement to step up

“There are certain critical signals that only law enforcement has access to, like money flows. For example, our systems make it significantly harder to set up fake accounts or buy political ads from outside the country. But it would still be very difficult without additional intelligence for Facebook or others to figure out if a foreign adversary had set up a company in the US, wired money to it, and then registered an authentic account on our services and bought ads from the US.”

Instead of minimizing their own blame, the major players must unite forces

“Preventing election interference is bigger than any single organization. It’s now clear that everyone — governments, tech companies, and independent experts such as the Atlantic Council — need to do a better job sharing the signals and information they have to prevent abuse… The last point I’ll make is that we’re all in this together. The definition of success is that we stop cyberattacks and coordinated information operations before they can cause harm.”

The end of Zuckerberg’s utopic idealism

“One of the important lessons I’ve learned is that when you build services that connect billions of people across countries and cultures, you’re going to see all of the good humanity is capable of, and you’re also going to see people try to abuse those services in every way possible.”

 


