
Trump’s new cyber strategy eases rules on use of government cyberweapons

17:00 | 21 September

The Trump administration’s new cyber strategy out this week isn’t much more than a stringing together of previously considered ideas.

In the 40-page document, the government set out its plans to improve cybersecurity, incentivize change, and reform computer hacking laws. Election security gets about a quarter of a page, second only to “space cybersecurity.”

The difference was the tone. Although the document had no mention of “offensive” action against actors and states that attack the US, the imposition of “consequences” was repeated.

“Our presidential directive effectively reversed those restraints, effectively enabling offensive cyber-operations through the relevant departments,” said John Bolton, national security advisor, to reporters.

“Our hands are not tied as they were in the Obama administration,” said Bolton, throwing shade on the previous government.

The big change, beyond the rehashing of old policies and principles, was the tearing up of an Obama-era presidential directive, known as PPD-20, which put restrictions on the government’s cyberweapons. Those classified rules were removed a month ago, the Wall Street Journal reported, described at the time as an “offensive step forward” by an administration official briefed on the plan.

In other words, it’ll give the government greater authority to hit back at targets seen as active cyberattackers — like Russia, North Korea, and Iran — all of which have been implicated in cyberattacks against the US in the recent past.

Any rhetoric that ramps up the threat of military action or considers use of force — whether in the real world or in cyberspace — is all too often met with criticism, amid concerns of rising tensions. This time, not everyone hated it. Even ardent critics of the Trump administration, like Sen. Mark Warner, said the new cyber strategy contained “important and well-established cyber priorities.”

The Obama administration was long criticized for being too slow and timid in responding to recent threats — like North Korea’s WannaCry ransomware attack and Russian disinformation campaigns. Some former officials pushed back, saying the obstacle to responding aggressively to a foreign cyberattack was not the policy, but the inability of agencies to deliver a forceful response.

Kate Charlet, a former government cyber policy chief, said the policy’s “chest-thumping” rhetoric is forgivable so long as it doesn’t mark an escalation in tactics.

“I felt keenly the Department’s frustration over the challenges in taking even reasonable actions to defend itself and the United States in cyberspace,” she said. “I have since worried that the pendulum would swing too far in the other direction, increasing the risk of ill-considered operations, borne more of frustration than sensibility.”

Trump’s new cyber strategy ratchets up the rhetoric, but the change in tone doesn’t mean the government will suddenly become trigger-happy overnight. While the government now has greater powers to strike back, it may not have to use them if the policy serves as the deterrent it’s meant to be.

 



Symantec offers free anti-spoofing services to US political campaigns and election groups

16:00 | 18 September

Symantec is the latest private security company to offer its expertise to vulnerable political targets on the house. Today the company announced that it would extend its “Project Dolphin” service (dolphins eat phish, get it?) to political campaigns, candidates and election officials, all “prime target[s] for malicious actors seeking to influence the outcome of the upcoming U.S. midterm elections.” The service allows anyone to run a check on their own website to make sure no illegitimate or “spoofed” versions of it are floating around and luring unsuspecting victims.

Individuals in those qualifying groups can sign up for free for Project Dolphin, Symantec’s AI-powered system that scans for and notifies users of illegitimate websites pretending to be the real thing — just one flavor of the common hacking technique called “spoofing.” Through spoofed sites, much like spoofed email accounts, hackers can steal login credentials and other sensitive data and wreak whatever kind of havoc they want, much like they did with the DNC prior to the 2016 US presidential election.
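To make the spoofing idea concrete, here is a minimal, hypothetical sketch of lookalike-domain detection — not Symantec’s Project Dolphin, whose internals aren’t public — which flags registered domains that sit suspiciously close to a legitimate one. All of the domain names below are invented for illustration.

```python
# Toy sketch of lookalike-domain ("spoof") detection. NOT Project Dolphin:
# it only illustrates the core idea of flagging domains that are very
# similar, but not identical, to a domain being protected.

from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Return a rough 0..1 similarity ratio between two domain names."""
    return SequenceMatcher(None, a, b).ratio()


def looks_spoofed(candidate: str, legitimate: str, threshold: float = 0.8) -> bool:
    """Flag near-misses such as typosquats or appended tokens ('-login')."""
    return candidate != legitimate and similarity(candidate, legitimate) >= threshold


if __name__ == "__main__":
    legitimate = "examplecampaign.org"          # hypothetical campaign site
    candidates = [
        "examplecampaign.org",                  # the real site
        "examp1ecampaign.org",                  # '1' substituted for 'l'
        "examplecampaign-login.org",            # extra token appended
        "unrelated-site.com",
    ]
    for c in candidates:
        verdict = "suspicious" if looks_spoofed(c, legitimate) else "ok"
        print(f"{c:30} {verdict}")
```

A production service would layer many more signals on top of this — certificate data, page content similarity, hosting history — but the core question is the same: does this domain look deceptively close to one it doesn’t own?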

The company will also offer some educational services on a new dedicated election security site, including best practices for poll workers and election officials, anti-tampering training, and an election security news hub.

Whether the intended audience for these materials and services will actually take note of them remains to be seen, but cobbling together election security guides now could help smooth the path to more secure elections by 2020.

“The issues that plagued the 2016 election are still prevalent today and are likely to continue to persist through the midterm elections, into 2020, and into elections globally,” Symantec CEO Greg Clark said.

“It is important for all parties, public and private, to contribute to protecting the security and integrity of our elections and democracy.”

While it’s quite late to the game — at least for the 2018 midterms — Symantec joins a number of security companies that have extended free or deeply discounted services to candidates and election bodies, including Cloudflare, Valimail and Synack.

 



This is what Americans think about the state of election security right now

02:00 | 18 September

A wide-ranging new poll yields some useful insight into how worried the average American feels about election threats as the country barrels toward midterms.

The survey, conducted by NPR and researchers with Marist College, polled 949 adult US residents in early September across regions of the country, contacting participants through both landlines and mobile devices. The results are a significant glimpse into current attitudes around the likelihood of foreign election interference, election security measures and how well social media companies have rebounded in the public eye.

Attitudes toward Facebook and Twitter

As the most recent dust settles around revelations that Russia ran influence campaigns targeting Americans on social media platforms, just how much do US voters trust that Facebook and Twitter have cleaned up their acts? Well, they’re not convinced yet.

In response to a question asking about how much those companies had done since 2016 “to make sure there is no interference from a foreign country” in the US midterm elections, 24% of respondents believed that Facebook had done either “a great deal” or “a good amount” while 62% believed the company had done “not very much” or “nothing at all.”

When asked the same question about Twitter, only 19% thought that the company had made significant efforts, while 57% didn’t think the company had done much. Unlike nearly every other question in the broad-ranging survey, answers to this set of questions didn’t show a divide between Republicans and Democrats, making it clear that in 2018, disdain for social media companies is a rare bipartisan position.

When it comes to believing what they read on Facebook, only 12% of voters had “a great deal” or “quite a lot” of confidence that content on the platform is true, while 79% expressed “not very much confidence” or none at all. Still, those numbers have perked up slightly from earlier polling in 2018 that saw only 4% of those polled stating that they were confident in the veracity of content they encountered on Facebook.

Midterm perspectives

In response to the question “Do you think the U.S. is very prepared, prepared, not very prepared or not prepared at all to keep this fall’s midterm elections safe and secure?” 53% of respondents felt that the US is prepared while 39% believed that it is “not very prepared” or not prepared at all. Predictably, this question broke down along party lines, with 36% of Democrats and 74% of Republicans falling into the “prepared” camp (51% of independents felt the US is prepared).

An impressive 69% of voters believed that it was either very likely or likely that Russia would continue to “use social media to spread false information about candidates running for office” during the midterm elections, suggesting that voters are moving into election season with a very skeptical eye turned toward the platforms they once trusted.

When it came to hacking proper, 41% of respondents believed that it was very likely or likely that “a foreign country will hack into voter lists to cause confusion” over who can vote during midterm elections, while 55% of respondents said that hacked voter lists would be not very likely or not at all likely. A smaller but still quite significant 30% of those polled believed that it was likely or very likely that a foreign country would “tamper with the votes cast to change the results” of midterm elections.

Election security pop quiz

Political divides were surprisingly absent from some other questions around specific election security practices. Democrats, Republicans and independent voters all indicated that they had greater confidence in state and local officials to “protect the actual results” of the elections and trusted federal officials less, even as the Department of Homeland Security takes a more active role in providing resources to protect state and local elections.

A few of the questions had a right answer, and happily most respondents did get a big one right. Overall, 55% of voters polled said that electronic voting systems made US elections less safe from “interference or fraud” — a position largely backed by election security experts who advocate for low-tech options and paper trails over vulnerable digital systems. Only 31% of Democrats wrongly believed that electronic systems were safer, though 49% of Republicans trusted electronic systems more.

When the question was framed a different (and clearer) way, the results were overwhelmingly in favor of paper ballots — a solution that experts widely agree would significantly secure elections. 68% of voters thought that paper ballots would make elections “more safe” — an attitude that both Republican and Democratic Americans could get behind. Unfortunately, legislation urging states nationwide to adopt paper ballots has continued to face political obstacles in contrast to the wide support observed in the present poll.

On one last election security competence question, respondents again weighed in with the right answer. A whopping 89% of those polled correctly believed that online voting would be a death knell for US election security — only 8% said, incorrectly, that connecting elections to the internet would make them more safe.

For a much more granular look at these attitudes and many others, you can peruse the poll’s full results here. For one, there’s more interesting stuff in there. For another, confidence — or the lack thereof — in U.S. voting systems could have a massive impact on voter turnout in one of the most consequential non-presidential elections the nation has ever faced.

 



10 critical points from Zuckerberg’s epic security manifesto

11:00 | 13 September

Mark Zuckerberg wants you to know he’s trying his damnedest to fix Facebook before it breaks democracy. Tonight he posted a 3,260-word battle plan for fighting election interference. Amidst drilling through Facebook’s strategy and progress, he slips in several notable passages revealing his own philosophy.

Zuckerberg has cast off his premature skepticism and is ready to command the troops. He sees Facebook’s real identity policy as a powerful weapon for truth other social networks lack, but that would be weakened if Instagram and WhatsApp were split off by regulators. He’s done with the finger-pointing and wants everyone to work together on solutions. And he’s adopted a touch of cynicism that could open his eyes and help him predict how people will misuse his creation.

Here are the most important parts of Zuckerberg’s security manifesto:

Zuckerberg embraces his war-time tactician role

“While we want to move quickly when we identify a threat, it’s also important to wait until we uncover as much of the network as we can before we take accounts down to avoid tipping off our adversaries, who would otherwise take extra steps to cover their remaining tracks. And ideally, we time these takedowns to cause the maximum disruption to their operations.”

The fury he unleashed on Google+, Snapchat, and Facebook’s IPO-killer is now aimed at election attackers

“These are incredibly complex and important problems, and this has been an intense year. I am bringing the same focus and rigor to addressing these issues that I’ve brought to previous product challenges like shifting our services to mobile.”

Balancing free speech and security is complicated and expensive

“These issues are even harder because people don’t agree on what a good outcome looks like, or what tradeoffs are acceptable to make. When it comes to free expression, thoughtful people come to different conclusions about the right balances. When it comes to implementing a solution, certainly some investors disagree with my approach to invest so much in security.”

Putting Twitter and YouTube on blast for allowing pseudonymity…

“One advantage Facebook has is that we have a principle that you must use your real identity. This means we have a clear notion of what’s an authentic account. This is harder with services like Instagram, WhatsApp, Twitter, YouTube, iMessage, or any other service where you don’t need to provide your real identity.”

…While making an argument for why the Internet is more secure if Facebook isn’t broken up

“Fortunately, our systems are shared, so when we find bad actors on Facebook, we can also remove accounts linked to them on Instagram and WhatsApp as well. And where we can share information with other companies, we can also help them remove fake accounts too.”

Political ads aren’t a business, they’re supposedly a moral duty

“When deciding on this policy, we also discussed whether it would be better to ban political ads altogether. Initially, this seemed simple and attractive. But we decided against it — not due to money, as this new verification process is costly and so we no longer make any meaningful profit on political ads — but because we believe in giving people a voice. We didn’t want to take away an important tool many groups use to engage in the political process.”

Zuckerberg overruled staff to allow academic research on Facebook

“As a result of these controversies [like Cambridge Analytica], there was considerable concern amongst Facebook employees about allowing researchers to access data. Ultimately, I decided that the benefits of enabling this kind of academic research outweigh the risks. But we are dedicating significant resources to ensuring this research is conducted in a way that respects people’s privacy and meets the highest ethical standards.”

Calling on law enforcement to step up

“There are certain critical signals that only law enforcement has access to, like money flows. For example, our systems make it significantly harder to set up fake accounts or buy political ads from outside the country. But it would still be very difficult without additional intelligence for Facebook or others to figure out if a foreign adversary had set up a company in the US, wired money to it, and then registered an authentic account on our services and bought ads from the US.”

Instead of minimizing their own blame, the major players must unite forces

“Preventing election interference is bigger than any single organization. It’s now clear that everyone — governments, tech companies, and independent experts such as the Atlantic Council — need to do a better job sharing the signals and information they have to prevent abuse . . . The last point I’ll make is that we’re all in this together. The definition of success is that we stop cyberattacks and coordinated information operations before they can cause harm.”

The end of Zuckerberg’s utopic idealism

“One of the important lessons I’ve learned is that when you build services that connect billions of people across countries and cultures, you’re going to see all of the good humanity is capable of, and you’re also going to see people try to abuse those services in every way possible.”

 



Europe to push for one-hour takedown law for terrorist content

13:44 | 12 September

The European Union’s executive body is doubling down on its push for platforms to pre-filter the Internet, publishing a proposal today that would require all websites to monitor uploads so they can quickly remove terrorist content.

The Commission handed platforms an informal one-hour rule for removing terrorist content back in March. It’s now proposing to turn that into law to prevent such violent propaganda from spreading over the Internet.

For now the ‘rule of thumb’ regime continues to apply. But it’s putting meat on the bones of its thinking, fleshing out a more expansive proposal for a regulation aimed at “preventing the dissemination of terrorist content online”.

As per usual EU processes, the Commission’s proposal would need to gain the backing of Member States and the EU parliament before it could be cemented into law.

One major point to note here is that existing EU law does not allow Member States to impose a general obligation on hosting service providers to monitor the information that users transmit or store. But in the proposal the Commission argues that, given the “grave risks associated with the dissemination of terrorist content”, states could be allowed to “exceptionally derogate from this principle under an EU framework”.

So it’s essentially suggesting that Europeans’ fundamental rights might not, in fact, be so fundamental. (Albeit, European judges might well take a different view — and it’s very likely the proposals could face legal challenges should they be cast into law.)

What is being suggested would also apply to any hosting service provider that offers services in the EU — “regardless of their place of establishment or their size”. So, seemingly, not just large platforms, like Facebook or YouTube, but — for example — anyone hosting a blog that includes a free-to-post comment section.

Websites that fail to promptly take down terrorist content would face fines — with the level of penalties being determined by EU Member States (Germany has already legislated to enforce social media hate speech takedowns within 24 hours, setting the maximum fine at €50M).

“Penalties are necessary to ensure the effective implementation by hosting service providers of the obligations pursuant to this Regulation,” the Commission writes, envisaging the most severe penalties being reserved for systematic failures to remove terrorist material within one hour. 

It adds: “When determining whether or not financial penalties should be imposed, due account should be taken of the financial resources of the provider.” So — for example — individuals with websites who fail to moderate their comment section fast enough might not be served the very largest fines, presumably.

The proposal also encourages platforms to develop “automated detection tools” so they can take what it terms “proactive measures proportionate to the level of risk and to remove terrorist material from their services”.
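The proposal doesn’t spell out what those “automated detection tools” should look like, but one basic building block platforms already use for previously identified material is hash matching against a shared blocklist. A minimal sketch of that mechanism is below; the hash list is invented, and real deployments rely on perceptual hashes and industry-shared databases rather than plain SHA-256.

```python
# Toy sketch of hash-based upload screening, one common "proactive measure":
# compare each upload against hashes of files already flagged for removal.

import hashlib
from pathlib import Path

# Hypothetical hash list of material already identified for removal.
FLAGGED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large uploads don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def should_block(upload: Path) -> bool:
    """Return True if the uploaded file matches a known flagged hash."""
    return sha256_of(upload) in FLAGGED_HASHES
```

Exact hashing is trivially evaded by re-encoding a file, which is why “proactive measures” tend to drift toward fuzzier matching and machine-learning classifiers — and toward the false-positive and over-removal risks discussed further down.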

So the Commission’s continued push for Internet pre-filtering is clear. (This is also a feature of its copyright reform — which is being voted on by MEPs later today.)

Albeit, it’s not alone on that front. Earlier this year the UK government went so far as to pay an AI company to develop a terrorist propaganda detection tool that used machine learning algorithms trained to automatically detect propaganda produced by the Islamic State terror group — with a claimed “extremely high degree of accuracy”. (At the time it said it had not ruled out forcing tech giants to use it.)

What is terrorist content for the purposes of this proposal? The Commission refers to an earlier EU directive on combating terrorism — which defines the material as “information which is used to incite and glorify the commission of terrorist offences, encouraging the contribution to and providing instructions for committing terrorist offences as well as promoting participation in terrorist groups”.

And on that front you do have to wonder whether, for example, some of U.S. president Donald Trump’s comments last year after the far right rally in Charlottesville, where a counter-protestor was murdered by a white supremacist — comments in which he suggested there were “fine people” among those same murderous and violent white supremacists — might not fall under that ‘glorifying the commission of terrorist offences’ umbrella, should, say, someone repost them to a comment section that was viewable in the EU…

Safe to say, even terrorist propaganda can be subjective. And the proposed regime will inevitably encourage borderline content to be taken down — having a knock-on impact upon online freedom of expression.

The Commission also wants websites and platforms to share information with law enforcement and other relevant authorities and with each other — suggesting the use of “standardised templates”, “response forms” and “authenticated submission channels” to facilitate “cooperation and the exchange of information”.

It tackles the problem of what it refers to as “erroneous removal” — i.e. content that’s removed after being reported or erroneously identified as terrorist propaganda but which is subsequently, under requested review, determined not to be — by placing an obligation on providers to have “remedies and complaint mechanisms to ensure that users can challenge the removal of their content”.

So platforms and websites will be obligated to police and judge speech — which they already do, of course, but the proposal doubles down on turning online content hosts into judges and arbiters of that same content.

The regulation also includes transparency obligations on the steps being taken against terrorist content by hosting service providers — which the Commission claims will ensure “accountability towards users, citizens and public authorities”. 

Other perspectives are of course available… 

“There is no way a hosting provider (including your private website, if it includes comments section) can comply with these obligations without […]. It’s not limited to large platforms. The […] has ignored 100% of the […] discussion. pic.twitter.com/WeV0GwDZVD”

— Julia Reda (@Senficon)

The Commission envisages all taken down content being retained by the host for a period of six months so that it could be reinstated if required, i.e. after a valid complaint — to ensure what it couches as “the effectiveness of complaint and review procedures in view of protecting freedom of expression and information”.

It also sees the retention of takedowns helping law enforcement — meaning platforms and websites will continue to be co-opted into state law enforcement and intelligence regimes, getting further saddled with the burden and cost of having to safely store and protect all this sensitive data.

(On that the EC just says: “Hosting service providers need to put in place technical and organisational safeguards to ensure the data is not used for other purposes.”)

The Commission would also create a system for monitoring the monitoring it’s proposing platforms and websites undertake — thereby further extending the proposed bureaucracy — saying it would establish a “detailed programme for monitoring the outputs, results and impacts” within one year of the regulation being applied; report on the implementation and the transparency elements within two years; and evaluate the entire functioning of the regulation four years after it comes into force.

The executive body says it consulted widely ahead of forming the proposals — including running an open public consultation, carrying out a survey of 33,500 EU residents, and talking to Member States’ authorities and hosting service providers.

“By and large, most stakeholders expressed that terrorist content online is a serious societal problem affecting internet users and business models of hosting service providers,” the Commission writes. “More generally, 65% of respondents to the Eurobarometer survey considered that the internet is not safe for its users and 90% of the respondents consider it important to limit the spread of illegal content online.

“Consultations with Member States revealed that while voluntary arrangements are producing results, many see the need for binding obligations on terrorist content, a sentiment echoed in the European Council Conclusions of June 2018. While overall, the hosting service providers were in favour of the continuation of voluntary measures, they noted the potential negative effects of emerging legal fragmentation in the Union.

“Many stakeholders also noted the need to ensure that any regulatory measures for removal of content, particularly proactive measures and strict timeframes, should be balanced with safeguards for fundamental rights, notably freedom of speech. Stakeholders noted a number of necessary measures relating to transparency, accountability as well as the need for human review in deploying automated tools.”

 



Bay Area city blocks 5G deployments over cancer concerns

17:20 | 10 September

The Bay Area may be the center of the global technology industry, but that hasn’t stopped one wealthy enclave from protecting itself from the future.

The city council of Mill Valley, a small town located just a few miles north of San Francisco, voted unanimously late last week to effectively block deployments of small-cell 5G wireless towers in the city’s residential areas.

Through an urgency ordinance, which allows the city council to immediately enact regulations that affect the health and safety of the community, the restrictions and prohibitions will be put into force immediately for all future applications to site 5G telecommunications equipment in the city. Applications for commercial districts are still permitted under the new ordinance.

The ordinance was driven by community concerns over the health effects of 5G wireless antennas. According to the city, it received 145 pieces of correspondence from citizens voicing opposition to the technology, compared to just five letters in support of it — a ratio of 29 to 1. While that may not sound like much, the city’s population is roughly 14,000, indicating that about 1% of the population had voiced an opinion on the matter.
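A quick sanity check of those figures, using only the numbers quoted above:

```python
# Checking the figures quoted above (145 letters against, 5 in favor,
# out of a population of roughly 14,000).
against, in_favor, population = 145, 5, 14_000
print(against / in_favor)                        # 29.0 -> the 29-to-1 ratio
print(100 * (against + in_favor) / population)   # ~1.07 -> roughly 1% of residents
```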

Blocks on 5G deployments are nothing new for Marin County, where other cities including San Anselmo and Ross have passed similar ordinances designed to thwart 5G expansion efforts over health concerns.

These restrictions on small cell site deployments could complicate 5G’s upcoming nationwide rollout. While 5G standards have yet to be finalized, one model that has broad traction in the telecommunications industry is to use so-called “small cell” antennas to increase bandwidth and connection quality while reducing infrastructure and power costs. Smaller antennas are easier to install and less obtrusive, reducing the concerns of urban preservationists about the unsightly tower masts that have long plagued the deployment of 4G antennas in communities across the United States.

Perhaps most importantly, these small cells emit less radiation, since they are not designed to provide as wide of coverage as traditional cell sites. The telecom industry has long vociferously denied a link between antennas and health outcomes, although California’s Department of Public Health has issued warnings about potential health effects of personal cell phone antennas. Reduced radiation emissions from 5G antennas compared to 4G antennas would presumably further reduce any health effects of this technology.

Restrictions like Mill Valley’s will make it nearly impossible to deploy 5G in a timely manner. As one industry representative told me in an interview a few months ago, “It takes 18 months to get the permit to deploy, and 2 hours to install.” Multiply that by the hundreds of sites required to cover a reasonably sized urban neighborhood, and the 5G rollout goes from daunting to well-nigh impossible.

While health concerns have bubbled in various municipalities, those concerns are not shared globally. China, through companies like Huawei, is investing billions of dollars to design and build 5G infrastructure, in hopes of stealing the industry crown from the United States, which is the market leader in 4G technologies.

Those competitive concerns have increasingly been a priority at the FCC, where chairman Ajit Pai and his fellow Republican commissioners have pushed hard to overcome local concerns around health and historical preservation. The commission voted earlier this year on new siting rules that would accelerate 5G adoption.

Mill Valley’s ordinance is designed to frustrate those efforts, while remaining within the letter of federal law, which preempts local ordinances. Mill Valley’s mayor has said that the city will look to create a final ordinance over the next year.

 



Interview with Priscilla Chan: Her super-donor origin story

19:02 | 9 September

Priscilla Chan is so much more than Mark Zuckerberg’s wife. A teacher, doctor, and now one of the world’s top philanthropists, she’s a dexterous empath determined to help. We’ve all heard Facebook’s dorm-room origin story, but Chan’s epiphany of impact came on a playground.

In this touching interview this week at TechCrunch Disrupt SF, Chan reveals how a child too embarrassed to go to class because of their broken front teeth inspired her to tackle healthcare. “How could I have prevented it? Who hurt her? And has she gotten healthcare, has she gotten the right dental care to prevent infection and treat pain? That moment compelled me, like, ‘I need more skills to fight these problems.'”

That’s led to a $3 billion pledge towards curing all disease from the Chan Zuckerberg Initiative’s $45 billion-plus charitable foundation. Constantly expressing gratitude for being lifted out of the struggle of her refugee parents, she says “I knew there were so many more deserving children and I got lucky”.

Here, Chan shares her vision for cause-based philanthropy designed to bring equity of opportunity to the underserved, especially in Facebook’s backyard in The Bay. She defends CZI’s apolitical approach, making allies across the aisle despite the looming spectre of the Oval Office. And she reveals how she handles digital well-being and distinguishes between good and bad screen time for her young daughters Max and August. Rather than fielding questions about Mark, this was Priscilla’s time to open up about her own motivations.

Most importantly, Chan calls on us all to contribute in whatever way feels authentic. Not everyone can sign the Giving Pledge or dedicate their full-time work to worthy causes. But it’s time for tech’s rank-and-file rich to dig a little deeper. Sometimes that means applying their engineering and product skills to develop sustainable answers to big problems. Sometimes that means challenging the power structures that led to the concentration of wealth in their own hands. She concludes, “You can only try to break the rules so many times before you realize the whole system’s broken.”


 



Trump wants to just tariff the hell out of China

22:35 | 7 September

Another day, another whopper of a tariff. The Trump administration has been busy finalizing the rulemaking process to put 25% tariffs on $200 billion of Chinese goods, which will almost certainly affect the prices of many critical technology components and have on-going repercussions for Silicon Valley supply chains. That followed the implementation of tariffs on $50 billion of goods earlier this year.

Now, President Trump, speaking to reporters on Air Force One this morning, said that he is prepared to triple down on his tariff strategy and is ready to add tariffs to another $267 billion worth of Chinese goods. Although the president has a flair for the dramatic in many of his policies, the China tariffs are one arena in which his rhetoric has matched the actions of his administration.
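For a rough sense of scale, the three tranches cited so far add up to just over half a trillion dollars, which roughly matches the “$500 billion worth of goods” figure mentioned later in this piece. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope sum of the tariff tranches cited in this piece
# (all figures in billions of US dollars).
tranches = {
    "in effect since earlier this year": 50,
    "being finalized now": 200,
    "newly threatened": 267,
}
print(f"Total Chinese goods covered: ${sum(tranches.values())}B")  # -> $517B
```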

Each set of these tariffs has been vociferously opposed by tech industry trade groups, but their concerns seem to have had little effect on the administration’s final thinking. Jose Castaneda, a spokesperson for the Information Technology Industry Council, called this next wave of potential tariffs “grossly irresponsible and possibly illegal.”

Yet, despite the constant threat of more tariffs, CFIUS reforms, and the ZTE debacle, China continues to dominate trade with America. Numbers released by the Department of Commerce this week showed that America’s trade deficit with other nations reached five-year highs in July, surpassing $50 billion for the month, with the China trade goods deficit hitting $36.8 billion. These numbers may well have triggered the president’s latest comments.

They may also have been triggered by the recent anonymous op-ed in the New York Times, in which a Trump “senior administration official” said that “Although he was elected as a Republican, the president shows little affinity for ideals long espoused by conservatives: free minds, free markets and free people….In addition to his mass-marketing of the notion that the press is the ‘enemy of the people,’ President Trump’s impulses are generally anti-trade and anti-democratic.”

Anti-trade or not, it is clear that the package of tariffs and other policy reforms has done little to dampen the trade deficit or trigger a broad restructuring of the supply chains underpinning American brands.

In my discussions at the Disrupt SF 2018 conference over the past few days, one persistent theme has been the ability of certain Chinese cities — particularly, but not exclusively, Shenzhen — to weather these trade storms. The depth of expertise, fast turnaround times, extreme flexibility, and low costs of hardware supply chains there are sustainable advantages that the U.S. can’t hope to fight with a couple of measly tariffs — even on $500 billion worth of goods.

Indeed, as one prominent venture capitalist put it to me, hardware investing is now significantly easier for those with the right knowledge of the Chinese ecosystem. Just a few years ago, a couple of million dollars in capital could get a startup a working prototype. Now, startups can in some cases raise $1-2 million and get a working product into sales channels. The Chinese ecosystem around hardware has just continued to improve with alacrity.

For Trump, a much more robust policy will be needed to move the trade numbers in the other direction. Better funding for universities to produce the right talent. Pushing for a region in the U.S. to become the “Shenzhen of America” through a combination of private and public funding. Greater preferential treatment around taxes for keeping manufacturing in the U.S.

And maybe tariff the hell out of them.

 



UK media giants call for independent oversight of Facebook, YouTube, Twitter

16:27 | 3 September

The UK’s leading broadcasters and ISPs have called for the government to introduce independent regulatory oversight of social media content.

The group of media and broadband operators in the tightly regulated industries spans both the state-funded and commercial sector — with the letter to the Sunday Telegraph being inked with signatures from the leaders of the BBC, ITV, Channel 4, Sky, BT and TalkTalk.

They argue there’s an “urgent” need for independent oversight of social media, and counter suggestions that such a move would amount to censorship by pointing out that tech companies are already making choices about what to allow (or not) on their platforms.

They argue independent oversight is necessary to ensure “accountability and transparency” over those decisions, writing: “There is an urgent need for independent scrutiny of the decisions taken, and greater transparency. This is not about censoring the internet, it is about making the most popular internet platforms safer, by ensuring there is accountability and transparency over the decisions these private companies are already taking.”

“We do not think it is realistic or appropriate to expect internet and social media companies to make all the judgment calls about what content is and is not acceptable, without any independent oversight,” they add.

Calls for regulation of social media platforms have been growing from multiple quarters and countries, and politicians clearly feel there is political capital to spend here. (Indeed, Trump’s latest online punchbag is Google.)

Yet policymakers the world over face the challenge of how to regulate platforms that have become so popular and therefore so powerful. (Germany legislated to regulate social media firms over hate speech takedowns last year but it’s in the vanguard of government action.)

The UK government has made a series of proposals around Internet safety in recent years, and the media & telco group argues this is a “golden opportunity” to act against what they describe as “all potential online harms”, “many of which are exacerbated by social media”.

The government is working on a white paper on Internet safety, and the Telegraph says potential interventions currently under private debate include the creation of a body along the lines of the UK’s Advertising Standards Authority (which reports to Ofcom), which it says could oversee Facebook, Google and Twitter to decide whether to remove material in response to complaints from users.

The newspaper adds that it is envisaged by proponents of this idea that such a regime would be voluntary but backed with the threat of a legislative crackdown if the online environment does not improve. (The EU has been taking this approach with hate speech takedowns.)

Commenting on the group’s letter, a government spokesperson told the Telegraph: “We have been clear that more needs to be done to tackle online harms. We are committed to further legislation.”

For their part, tech platforms claim they are platforms not publishers.

Yet their algorithms indisputably create hierarchies of information — which they also distribute at vast scale. At the same time they operate their own systems of community standards and content rules, which they enforce (typically imperfectly and inconsistently), via after-the-fact moderation.

The cracks in this facade are very evident — whether it’s a high profile failure such as the Kremlin-backed mass manipulation of Facebook’s platform or this smaller scale but no less telling individual moderation failure. There are very clearly severe limitations to the self-regulation the companies typically enjoy.

Meanwhile, the impacts of bad content decisions and moderation failures are increasingly visible — a consequence of the vast scale of (especially) Facebook and Google’s YouTube.

In the UK, a parliamentary committee that has been probing the impact of social media-amplified disinformation on democracy recently recommended a third category be created to regulate tech giants — one that’s not necessarily either a platform or a publisher, but which tightens their liabilities.

The committee’s first report, following a long and drama-packed enquiry this year (thanks to the Cambridge Analytica Facebook data misuse scandal), also called for social media firms to be taxed to pay for major investment in the UK’s data protection watchdog so it is better resourced to be able to police data-related malfeasance.

The committee also suggested an education levy be raised on social media firms to pay for the digital literacy skills necessary for citizens to navigate all the stuff being amplified by their platforms.

In their letter to the Sunday Telegraph, the signatories emphasize their own investment in the UK, whether in the form of tax payments, original content creation or high-speed broadband infrastructure.

U.S. tech giants, by contrast, stand accused of making lower contributions to national coffers as a result of how they structure their businesses.

The typical tech firm response to tax-related critiques is to say they always pay the tax that is due. But technical compliance with the intricacies of tax law will do nothing to alleviate the reputational damage they could suffer if their businesses become widely perceived as leaching off (rather than contributing to) the nation state.

And that’s the political lever the media firms and ISPs appear to be trying to pull here.

We’ve reached out to Facebook, Twitter and Google for comment.

 



Privacy groups ask senators to confirm US surveillance oversight nominees

21:14 | 29 August

A coalition of privacy groups is calling on lawmakers to fill the vacant positions on the government’s surveillance oversight board, which hasn’t functioned fully in almost two years.

The Privacy and Civil Liberties Oversight Board, known as PCLOB, is a little-known but important group that helps ensure that intelligence agencies and executive branch policies stay within the law. The board’s work gives its members access to classified programs run by the dozen-plus intelligence agencies so they can determine whether those programs are legal and effective, while balancing Americans’ privacy and civil liberties rights.

In its most recent unclassified major report, published in 2015, PCLOB called for an end to the NSA’s collection of Americans’ phone records.

But the board fell out of quorum when four members left last year, leaving just the chairperson. President Obama did not fill the vacancies before he left office, putting PCLOB’s work largely on ice.

A report by The Intercept, citing obtained emails, said the board was “basically dead,” but things were looking up when President Trump earlier this year picked a bipartisan slate of five nominees for the board — including a computer science and policy professor and a former senior Justice Department lawyer — who were named in March. If confirmed by the Senate Judiciary Committee, the newly appointed members would put the board back into full swing.

Except the committee has dragged its feet. Hearings have been held for only three of the nominees, and a vote has yet to be scheduled.

A total of 31 privacy organizations and rights groups, including the ACLU, the Open Technology Institute and the Center for Democracy & Technology, signed on to the letter calling on the Senate panel to push forward with the hearings and vote on the nominees.

“During the eleven years since Congress created the PCLOB as an independent agency, it has only operated with a quorum for four and one-half years,” the letter said. “Without a quorum, the PCLOB cannot issue oversight reports, provide the agency’s advice, or build upon the agency foundations laid by the original members. It is also critical that the PCLOB operate with a full bipartisan slate of qualified individuals.”

The coalition called the lack of quorum a “lost opportunity to better inform the public and facilitate Congressional action.”

Given the continuing aftermath of the massive leak of classified documents by NSA whistleblower Edward Snowden, the board’s work is more important than ever, the letter said.

Spokespeople for the Senate Judiciary Committee did not respond to a request for comment.

 


