Blog of the website «TechCrunch»

People

John Smith, 49

Joined: 28 January 2014

Interests: No data

Jonnathan Coleman, 32

Joined: 18 June 2014

About myself: You may say I'm a dreamer

Interests: Snowboarding, Cycling, Beer

Andrey II, 41

Joined: 08 January 2014

Interests: No data

David

Joined: 05 August 2014

Interests: No data

David Markham, 65

Joined: 13 November 2014

Interests: No data

Michelle Li, 41

Joined: 13 August 2014

Interests: No data

Max Almenas, 53

Joined: 10 August 2014

Interests: No data

29Jan, 32

Joined: 29 January 2014

Interests: No data

s82 s82, 26

Joined: 16 April 2014

Interests: No data

Wicca, 37

Joined: 18 June 2014

Interests: No data

Phebe Paul, 27

Joined: 08 September 2014

Interests: No data

Артем Ступаков, 93

Joined: 29 January 2014

About myself: Enjoying life!

Interests: No data

sergei jkovlev, 59

Joined: 03 November 2019

Interests: music, movies, cars

Алексей Гено, 8

Joined: 25 June 2015

About myself: Hi

Interests: Interest1daasdfasf, http://apple.com

technetonlines

Joined: 24 January 2019

Interests: No data



Main article: Linus Torvalds

All topics: 9

Canonical’s Anbox Cloud puts Android in the cloud

21:09 | 21 January

Canonical, the company behind the popular Ubuntu Linux distribution, today announced the launch of Anbox Cloud, a new platform that allows enterprises to run Android in the cloud.

On Anbox Cloud, Android becomes the guest operating system that runs containerized applications. This opens up a range of use cases, from bespoke enterprise apps to cloud gaming solutions.

The result is similar to what Google does with Android apps on Chrome OS, though the implementation is quite different and is based on the LXD container manager, as well as a number of Canonical projects like Juju and MAAS for provisioning the containers and automating the deployment. “LXD containers are lightweight, resulting in at least twice the container density compared to Android emulation in virtual machines – depending on streaming quality and/or workload complexity,” the company points out in its announcement.

Anbox itself, it’s worth noting, is an open-source project that came out of Canonical and the wider Ubuntu ecosystem. Launched by Canonical engineer Simon Fels in 2017, Anbox runs the full Android system in a container, which in turn allows you to run Android applications on any Linux-based platform.

What’s the point of all of this? Canonical argues that it allows enterprises to offload mobile workloads to the cloud and then stream those applications to their employees’ mobile devices. But Canonical is also betting on 5G to enable more use cases, not so much because of the available bandwidth as because of the low latencies it enables.

“Driven by emerging 5G networks and edge computing, millions of users will benefit from access to ultra-rich, on-demand Android applications on a platform of their choice,” said Stephan Fabel, Director of Product at Canonical, in today’s announcement. “Enterprises are now empowered to deliver high performance, high density computing to any device remotely, with reduced power consumption and in an economical manner.”

Outside of the enterprise, one of the use cases that Canonical seems to be focusing on is gaming and game streaming. A server in the cloud is generally more powerful than a smartphone, after all, though that gap is closing.

Canonical also cites app testing as another use case, given that the platform would allow developers to test apps on thousands of Android devices in parallel. Most developers, though, prefer to test their apps on real — not emulated — devices, given the fragmentation of the Android ecosystem.

Anbox Cloud can run in the public cloud, though Canonical is specifically partnering with edge computing specialist Packet to host it on the edge or on-premises. Silicon partners for the project are Ampere and Intel.

 



Microsoft wants to bring exFAT to the Linux kernel

20:30 | 28 August

ExFAT, the Extended File Allocation Table, is Microsoft’s file system for flash drives and SD cards, which launched in 2006. Because it was proprietary, mounting these drives and cards on Linux machines generally involved installing additional software. Today, however, Microsoft announced that it is supporting the addition of exFAT to the Linux kernel and publishing the technical specifications for exFAT.

“It’s important to us that the Linux community can make use of exFAT included in the Linux kernel with confidence. To this end, we will be making Microsoft’s technical specification for exFAT publicly available to facilitate development of conformant, interoperable implementations.”
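
Once exFAT support lands in the kernel, mounting these drives and cards should need no extra packages. As a quick illustration, here is a minimal sketch in Python (assuming a Linux machine) that checks whether the running kernel already recognizes a filesystem by name:

# Check whether the running Linux kernel already knows a filesystem.
# Minimal sketch; assumes Linux with /proc mounted. Filesystems built
# into the kernel (or loaded as modules) appear in /proc/filesystems.
from pathlib import Path

def kernel_supports(fs_name: str) -> bool:
    try:
        return fs_name in Path("/proc/filesystems").read_text().split()
    except FileNotFoundError:  # not Linux, or /proc unavailable
        return False

print("exfat supported:", kernel_supports("exfat"))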

In addition to wanting it to become part of the Linux kernel, Microsoft also says that it hopes the exFAT specs will become part of the Open Invention Network’s Linux definition. Once accepted, the code would benefit “from the defensive patent commitments of OIN’s 3040+ members and licensees,” the company notes.

Microsoft and Linux used to be mortal enemies — and some in the Linux community definitely still think of Microsoft as anti-open source. These days, though, Microsoft has clearly embraced open source and Linux, which is now the most popular operating system on Azure and, optionally, part of Windows 10, thanks to its Windows Subsystem for Linux. It’ll still be interesting to see how the community reacts to this proposal. The aftertaste of Microsoft’s strategy of “embrace, extend and extinguish” still lingers in the community, after all, and not too long ago, this move would’ve been interpreted as yet another example of it.

 



The ClockworkPi GameShell is a super fun DIY spin on portable gaming

20:38 | 13 August

Portable consoles are hardly new, and thanks to the Switch, they’re basically the most popular gaming devices in the world. But ClockworkPi’s GameShell is something totally unique, and entirely refreshing when it comes to gaming on the go. This clever DIY console kit provides everything you need to assemble your own pocket gaming machine at home, running Linux-based open-source software and using an open-source hardware design that welcomes future customization.

The GameShell is the result of a successful Kickstarter campaign, which began shipping to its backers last year and is now available to buy either direct from the company or from Amazon. The $159.99 kit ($139.99 on sale as of this writing) includes everything you need to build the console, including the Clockwork Pi quad-core Cortex-A7 motherboard with integrated Wi-Fi, Bluetooth and 1GB of DDR3 RAM, but it comes unassembled.


You won’t have to get out the soldering iron – the circuit boards come with all components attached. But you will be assembling screen, keypad, CPU, battery and speaker modules, connecting them with included cables, and then installing them in the slick, GameBoy-esque plastic shell. This might seem like an intimidating task, depending on your level of technical expertise: I know I found myself a bit apprehensive when I opened the various boxes and laid out all the parts in front of me.

But the included instructions, which are just illustrations, like those provided by Lego or Ikea, are super easy to follow and break the job down into very manageable steps for people of all skill levels. All told, I had mine put together in under an hour, and even though I did get in there with my teeth at one point (to remove a bit of plastic nubbin when assembling the optional Lightkey component, which adds extra function keys to the console), I never once felt overwhelmed or defeated. The time-lapse below chronicles my entire assembly process, start to finish.

What you get when you’re done is a fully functional portable gaming device, which runs Clockwork OS, a Linux-based open-source OS developed by the company. It includes Cave Story, one of the most celebrated indie games of the past couple of decades, and a number of built-in emulators (use of emulators is ethically and legally questionable, but it does provide an easy way to play some of those NES and SNES games you already own with more portability).

There’s a very active community around the GameShell that includes a number of indie games to play on the console, and tips and tricks for modifications and optimal use. It’s also designed to be a STEM educational resource, providing a great way for kids to see what’s actually happening behind the faceplate of the electronics they use every day, and even to get started coding themselves to build software to run on the console. Loading software is easy, thanks to an included microSD storage card and the ability to easily connect via Wi-Fi to move over software from Windows and Mac computers.


Everything about the GameShell is programmable, and it features micro HDMI out, a built-in music player and Bluetooth support for headphone connection. It’s at once instantly accessible for people with very limited tech chops, and infinitely expandable and hackable for those who do want to go deeper and dig around with what else it has to offer.

Swappable face and backplates, plus open 3D models of each hardware component, mean that community-developed hardware add-ons and modifications are totally possible, too. The modular nature of the device means it can probably get even more powerful in the future, too, with higher-capacity battery modules and improved development boards.

I’ve definitely seen and used devices like the GameShell before, but few manage to be as accessible, powerful and customizable all at once. The GameShell is also fast, has great sound and an excellent display, and it seems to be very durable, with decent battery life of around three hours or slightly more of continuous use, depending on things like whether you’re using Wi-Fi and your screen brightness.

 



Canonical’s Mark Shuttleworth on dueling open-source foundations

00:50 | 30 April

At the Open Infrastructure Summit, which was previously known as the OpenStack Summit, Canonical founder Mark Shuttleworth used his keynote to talk about the state of open-source foundations — and what often feels like the increasing competition between them. “I know for a fact that nobody asked to replace dueling vendors with dueling foundations,” he said. “Nobody asked for that.”

He then put a finer point on it, saying, “What’s the difference between a vendor that only promotes the ideas that are in its own interest and a foundation that does the same thing? Or worse, a foundation that will only represent projects that it’s paid to represent?”

Somewhat uncharacteristically, Shuttleworth didn’t say which foundations he was talking about, but since there are really only two foundations that fit the bill here, it’s pretty clear that he was talking about the OpenStack Foundation and the Linux Foundation — and maybe more precisely the Cloud Native Computing Foundation, the home of the incredibly popular Kubernetes project.

It turns out that’s only part of his misgivings about the current state of open-source foundations, though. I sat down with Shuttleworth after his keynote to discuss his comments, as well as Canonical’s announcements around open infrastructure.

One thing that’s worth noting at the outset is that the OpenStack Foundation is using this event to highlight the fact that it has now brought in more new open infrastructure projects outside of the core OpenStack software, with two of them graduating from their pilot phase. Shuttleworth, who has made big bets on OpenStack in the past and is seeing a lot of interest from customers, is not a fan. Canonical, it’s worth noting, is also a major sponsor of the OpenStack Foundation. He, however, believes the foundation should focus on the core OpenStack project.

“We’re busy deploying 27 OpenStack clouds — that’s more than double the run rate last year,” he said. “OpenStack is important. It’s very complicated and hard. And a lot of our focus has been on making it simpler and cleaner, despite the efforts of those around us in this community. But I believe in it. I think that if you need large-scale, multi-tenant virtualization infrastructure, it’s the best game in town. But it has problems. It needs focus. I’m super committed to that. And I worry about people losing their focus because something newer and shinier has shown up.”

To clarify that, I asked him if he essentially believes that the OpenStack Foundation is making a mistake by trying to be all things infrastructure. “Yes, absolutely,” he said. “At the end of the day, I think there are some projects that this community is famous for. They need focus, they need attention, right? It’s very hard to argue that they will get focus and attention when you’re launching a ton of other things that nobody’s ever heard of, right? Why are you launching those things? Who is behind those decisions? Is it a money question as well? Those are all fair questions to ask.”

He doesn’t believe all of the blame should fall on the Foundation leadership, though. “I think these guys are trying really hard. I think the common characterization that it was hapless isn’t helpful and isn’t accurate. We’re trying to figure stuff out.” Shuttleworth indeed doesn’t believe the leadership is hapless, something he stressed, but he clearly isn’t all that happy with the current path the OpenStack Foundation is on either.

The Foundation, of course, doesn’t agree. As OpenStack Foundation COO Mark Collier told me, the organization remains as committed to OpenStack as ever. “The Foundation, the board, the community, the staff — we’ve never been more committed to OpenStack,” he said. “If you look at the state of OpenStack, it’s one of the top-three most active open-source projects in the world right now […] There’s no wavering in our commitment to OpenStack.” He also noted that the other projects that are now part of the foundation are the kind of software that is helpful to OpenStack users. “These are efforts which are good for OpenStack,” he said. In addition, he stressed that the process of opening up the Foundation has been going on for more than two years, with the vast majority of the community (roughly 97 percent) voting in favor.

OpenStack board member Allison Randal echoed this. “Over the past few years, and a long series of strategic conversations, we realized that OpenStack doesn’t exist in a vacuum. OpenStack’s success depends on the success of a whole network of other open-source projects, including Linux distributions and dependencies like Python and hypervisors, but also on the success of other open infrastructure projects which our users are deploying together. The OpenStack community has learned a few things about successful open collaboration over the years, and we hope that sharing those lessons and offering a little support can help other open infrastructure projects succeed too. The rising tide of open source lifts all boats.”

As for open-source foundations in general, he also doesn’t believe it’s a good thing to have numerous foundations competing over projects. He argues that we’re still trying to figure out the role of open-source foundations and that we’re currently in a slightly awkward position because we’re still trying to determine how to best organize these foundations. “Open source in society is really interesting. And how we organize that in society is really interesting,” he said. “How we lead that, how we organize that is really interesting and there will be steps forward and steps backward. Foundations tweeting angrily at each other is not very presidential.”

He also challenged the notion that if you just put a project into a foundation, “everything gets better.” That’s too simplistic, he argues, because so much depends on the leadership of the foundation and how they define being open. “When you see foundations as nonprofit entities effectively arguing over who controls the more important toys, I don’t think that’s serving users.”

When I asked him whether he thinks some foundations are doing a better job than others, he essentially declined to comment. But he did say that he thinks the Linux Foundation is doing a good job with Linux, in large part because it employs Linus Torvalds. “I think the technical leadership of a complex project that serves the needs of many organizations is best served that way and something that the OpenStack Foundation could learn from the Linux Foundation. I’d be much happier with my membership fees actually paying for thoughtful, independent leadership of the complexity of OpenStack rather than the sort of bizarre bun fights and stuffed ballots that we see today. For all the kumbaya, it flatly doesn’t work.” He believes that projects should have independent leaders who can make long-term plans. “Linus’ finger is a damn useful tool and it’s hard when everybody tries to get reelected. It’s easy to get outraged at Linus, but he’s doing a fucking good job, right?”

OpenStack, he believes, often lacks that kind of decisiveness because it tries to please everybody and attract more sponsors. “That’s perhaps the root cause,” he said, and it leads to too much “behind-the-scenes puppet mastering.”

In addition to our talk about foundations, Shuttleworth also noted that he believes the company is still on the path to an IPO. He’s obviously not committing to a time frame, but after a year of resetting in 2018, he argues that Canonical’s business is looking up. “We want to be north of $200 million in revenue and a decent growth rate and the right set of stories around the data center, around public cloud and IoT.” First, though, Canonical will do a growth equity round.

 



How open source software took over the world

20:00 | 12 January

Mike Volpi, Contributor
Mike Volpi is a general partner at Index Ventures. Before co-founding the firm’s San Francisco office with Danny Rimer, Volpi served as the chief strategy officer at Cisco Systems.

It was just 5 years ago that there was an ample dose of skepticism from investors about the viability of open source as a business model. The common thesis was that Redhat was a snowflake and that no other open source company would be significant in the software universe.

Fast forward to today and we’ve witnessed the growing excitement in the space: Redhat is being acquired by IBM for $32 billion (3x its market cap from 2014); Mulesoft was acquired after going public for $6.5 billion; MongoDB is now worth north of $4 billion; Elastic’s IPO now values the company at $6 billion; and, through the merger of Cloudera and Hortonworks, a new company with a market cap north of $4 billion will emerge. In addition, there’s a growing cohort of impressive OSS companies working their way through the growth stages of their evolution: Confluent, HashiCorp, DataBricks, Kong, Cockroach Labs and many others. Given the relative multiples that Wall Street and private investors are assigning to these open source companies, it seems pretty clear that something special is happening.

So, why did this movement that once represented the bleeding edge of software become the hot place to be? There are a number of fundamental changes that have advanced open source businesses and their prospects in the market.


From Open Source to Open Core to SaaS

The original open source projects were not really businesses, they were revolutions against the unfair profits that closed-source software companies were reaping. Microsoft, Oracle, SAP and others were extracting monopoly-like “rents” for software, which the top developers of the time didn’t believe was world class. So, beginning with the most broadly used components of software – operating systems and databases – progressive developers collaborated, often asynchronously, to author great pieces of software. Everyone could not only see the software in the open, but through a loosely-knit governance model, they added, improved and enhanced it.

The software was originally created by and for developers, which meant that at first it wasn’t the most user-friendly. But it was performant, robust and flexible. These merits gradually percolated across the software world and, over a decade, Linux became the second most popular OS for servers (next to Windows); MySQL mirrored that feat by eating away at Oracle’s dominance.

The first entrepreneurial ventures attempted to capitalize on this adoption by offering “enterprise-grade” support subscriptions for these software distributions. Redhat emerged the winner in the Linux race, and MySQL (the company) won for databases. These businesses had some obvious limitations – it was harder to monetize software with just support services – but the market size for operating systems and databases was so large that, in spite of more challenged business models, sizeable companies could be built.

The successful adoption of Linux and MySQL laid the foundation for the second generation of open source companies – the poster children of this generation were Cloudera and Hortonworks. These open source projects and businesses were fundamentally different from the first generation on two dimensions. First, the software was principally developed within an existing company and not by a broad, unaffiliated community (in the case of Hadoop, the software took shape within Yahoo!). Second, these businesses were based on the model that only parts of the software in the project were licensed for free, so they could charge customers for use of some of the software under a commercial license. The commercial aspects were specifically built for enterprise production use and thus easier to monetize. These companies, therefore, had the ability to capture more revenue even if the market for their product didn’t have quite as much appeal as operating systems and databases.

However, there were downsides to this second generation model of open source business. The first was that no company singularly held ‘moral authority’ over the software – and therefore the contenders competed for profits by offering increasing parts of their software for free. Second, these companies often balkanized the evolution of the software in an attempt to differentiate themselves. To make matters more difficult, these businesses were not built with a cloud service in mind. Therefore, cloud providers were able to use the open source software to create SaaS businesses out of the same software base. Amazon’s EMR is a great example of this.

The latest evolution came when entrepreneurial developers grasped the business model challenges existent in the first two generations – Gen 1 and Gen 2 – of open source companies, and evolved the projects with two important elements. The first is that the open source software is now developed largely within the confines of businesses. Often, more than 90% of the lines of code in these projects are written by the employees of the company that commercialized the software. Second, these businesses offer their own software as a cloud service from very early on. In a sense, these are Open Core / Cloud service hybrid businesses with multiple pathways to monetize their product. By offering the products as SaaS, these businesses can interweave open source software with commercial software so customers no longer have to worry about which license they should be taking. Companies like Elastic, Mongo, and Confluent with services like Elastic Cloud, Confluent Cloud, and MongoDB Atlas are examples of this Gen 3.  The implications of this evolution are that open source software companies now have the opportunity to become the dominant business model for software infrastructure.

The Role of the Community

While the products of these Gen 3 companies are definitely more tightly controlled by the host companies, the open source community still plays a pivotal role in the creation and development of the open source projects. For one, the community still discovers the most innovative and relevant projects. They star the projects on GitHub, download the software in order to try it, and evangelize what they perceive to be the better project so that others can benefit from great software. Much like how a good blog post or a tweet spreads virally, great open source software leverages network effects. It is the community that is the source of promotion for that virality.

The community also ends up effectively being the “product manager” for these projects. It asks for enhancements and improvements; it points out the shortcomings of the software. The feature requests are not in a product requirements document, but on GitHub, in comment threads and on Hacker News. And, if an open source project diligently responds to the community, it will shape itself to the features and capabilities that developers want.

The community also acts as the QA department for open source software. It will identify bugs and shortcomings in the software; test 0.x versions diligently; and give the companies feedback on what is working or what is not.  The community will also reward great software with positive feedback, which will encourage broader use.

What has changed though, is that the community is not as involved as it used to be in the actual coding of the software projects. While that is a drawback relative to Gen 1 and Gen 2 companies, it is also one of the inevitable realities of the evolving business model.

Linus Torvalds was the designer of the open-source operating system Linux.

Rise of the Developer

It is also important to realize the increasing importance of the developer for these open source projects. The traditional go-to-market model of closed source software targeted IT as the purchasing center of software. While IT still plays a role, the real customers of open source are the developers who often discover the software, and then download and integrate it into the prototype versions of the projects that they are working on. Once “infected” by open source software, these projects work their way through the development cycles of organizations from design, to prototyping, to development, to integration and testing, to staging, and finally to production. By the time the open source software gets to production it is rarely, if ever, displaced. Fundamentally, the software is never “sold”; it is adopted by the developers who appreciate the software more because they can see it and use it themselves rather than being subject to it based on executive decisions.

In other words, open source software permeates itself through the true experts, and makes the selection process much more grassroots than it has ever been historically. The developers basically vote with their feet. This is in stark contrast to how software has traditionally been sold.

Virtues of the Open Source Business Model

The resulting business model of an open source company looks quite different than a traditional software business. First of all, the revenue line is different. Side-by-side, a closed source software company will generally be able to charge more per unit than an open source company. Even today, customers do have some level of resistance to paying a high price per unit for software that is theoretically “free.” But, even though open source software has a lower cost per unit, it makes up for it in total market size by leveraging the elasticity of the market. When something is cheaper, more people buy it. That’s why open source companies have such massive and rapid adoption when they achieve product-market fit.

Another great advantage of open source companies is their far more efficient and viral go-to-market motion. The first and most obvious benefit is that a user is already a “customer” before she even pays for it. Because so much of the initial adoption of open source software comes from developers organically downloading and using the software, the companies themselves can often bypass both the marketing pitch and the proof-of-concept stage of the sales cycle. The sales pitch is more along the lines of, “you already use 500 instances of our software in your environment, wouldn’t you like to upgrade to the enterprise edition and get these additional features?”  This translates to much shorter sales cycles, the need for far fewer sales engineers per account executive, and much quicker payback periods of the cost of selling. In fact, in an ideal situation, open source companies can operate with favorable Account Executives to Systems Engineer ratios and can go from sales qualified lead (SQL) to closed sales within one quarter.

This virality allows open source software businesses to be far more efficient than traditional software businesses on a cash consumption basis. Some of the best open source companies have been able to grow their business at triple-digit growth rates well into their life while maintaining moderate cash burn rates. This is hard to imagine in a traditional software company. Needless to say, less cash consumption equals less dilution for the founders.


Open Source to Freemium

One last aspect of the changing open source business that is worth elaborating on is the gradual movement from true open source to community-assisted freemium. As mentioned above, the early open source projects leveraged the community as key contributors to the software base. In addition, even for slight elements of commercially-licensed software, there was significant pushback from the community. These days the community and the customer base are much more knowledgeable about the open source business model, and there is an appreciation for the fact that open source companies deserve to have a “paywall” so that they can continue to build and innovate.

In fact, from a customer perspective, the two value propositions of open source software are that you can a) read the code and b) treat it as freemium. The notion of freemium is that you can basically use it for free until it’s deployed in production or at some degree of scale. Companies like Elastic and Cockroach Labs have gone as far as actually open sourcing all their software but applying a commercial license to parts of the software base. The rationale being that real enterprise customers would pay whether the software is open or closed, and they are more incentivized to use commercial software if they can actually read the code. Indeed, there is a risk that someone could read the code, modify it slightly, and fork the distribution. But in developed economies – where much of the rents exist anyway – it’s unlikely that enterprise companies will elect the copycat as a supplier.

A key enabler of this movement has been the more modern software licenses that companies have either originally embraced or migrated to over time. Mongo’s new license, as well as those of Elastic and Cockroach, are good examples of these. Unlike the Apache-incubated license – which was often the starting point for open source projects a decade ago – these licenses are far more business-friendly, and most model open source businesses are adopting them.

The Future

When we originally penned this article on open source four years ago, we aspirationally hoped that we would see the birth of iconic open source companies. At a time when there was only one model – Redhat – we believed that there would be many more. Today, we see a healthy cohort of open source businesses, which is quite exciting. I believe we are just scratching the surface of the kind of iconic companies that we will see emerge from the open source gene pool. From one perspective, these companies valued in the billions are a testament to the power of the model. What is clear is that open source is no longer a fringe approach to software. When top companies around the world are polled, few of them intend to have their core software systems be anything but open source. And if the Fortune 5000 migrate their spend on closed source software to open source, we will see the emergence of a whole new landscape of software companies, with the leaders of this new cohort valued in the tens of billions of dollars.

Clearly, that day is not tomorrow. These open source companies will need to grow and mature and develop their products and organization in the coming decade. But the trend is undeniable and here at Index we’re honored to have been here for the early days of this journey.

 



Facebook’s GraphQL gets its own open source foundation

20:05 | 6 November

GraphQL, the Facebook-incubated data query language, is moving into its own open source foundation. Like so many similar open source foundations, the aptly named GraphQL Foundation will be hosted by the Linux Foundation.

Facebook developed GraphQL back in 2012 and open sourced it in 2015. Today, it’s being used by companies that range from Airbnb to Audi, GitHub, Netflix, Shopify, Twitter and the New York Times. At Facebook itself, the GraphQL API powers billions of API calls every day. At its core, GraphQL is basically a language for querying databases from client-side applications and a set of specifications for how the API on the backend should present this data to the client. It presents an alternative to REST-based APIs and promises to offer developers more flexibility and the ability to write faster and more secure applications. Virtually every major programming language now supports it through a variety of libraries.
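
To make that concrete, here is a minimal sketch of a client-side GraphQL call in Python. The endpoint URL and the user field are hypothetical stand-ins, not part of any real schema, and the third-party requests library is assumed:

# Minimal sketch of a client-side GraphQL call. The endpoint and the
# "user" field are hypothetical; a real backend defines its own schema.
import requests

QUERY = """
query {
  user(id: "42") {   # ask only for the fields this client needs
    name
    friends {
      name
    }
  }
}
"""

resp = requests.post(
    "https://api.example.com/graphql",  # hypothetical endpoint
    json={"query": QUERY},              # GraphQL travels as a JSON payload
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["data"])  # the server returns exactly the requested shape

The client names the shape of the data it wants, and the server returns just that, which is the flexibility advantage over fixed REST endpoints the article describes.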

“GraphQL has redefined how developers work with APIs and client-server interactions. We look forward to working with the GraphQL community to become an independent foundation, draft their governance and continue to foster the growth and adoption of GraphQL,” said Chris Aniszczyk, Vice President of Developer Relations at the Linux Foundation.

As Aniszczyk noted, the new foundation will have an open governance model, similar to that of other Linux Foundation projects. The exact details are still a work in progress, though. The list of founding members is also still in flux, but for now it includes Airbnb, Apollo, Coursera, Elementl, Facebook, GitHub, Hasura, Prisma, Shopify and Twitter.

“We are thrilled to welcome the GraphQL Foundation into the Linux Foundation,” said Jim Zemlin, the Executive Director of the Linux Foundation. “This advancement is important because it allows for long-term support and accelerated growth of this essential and groundbreaking technology that is changing the approach to API design for cloud-connected applications in any language.”

For now, the founding members expect that the GraphQL specification, the GraphQL.js reference implementation, the DataLoader library and the GraphiQL developer tool will become the core technical projects of the foundation, but that, too, could still change.

At this point, the Linux Foundation is essentially a foundation for foundations. It provides support for dozens of projects now, with Linux itself being just one of those. Those other foundations include the likes of the Cloud Native Computing Foundation (the home of Kubernetes), the Cloud Foundry Foundation, Automotive Grade Linux, the JS Foundation (which is about to merge with the Node.js Foundation) and more.

As more large companies release open source projects, those projects that become popular often get to the point where having a single company govern the project’s life cycle is neither feasible nor in the best interest of the community. Spinnaker, the continuous delivery platform backed by Netflix and Google, recently reached this point, for example. GraphQL, too, is now at the point where it’s stable and has wide adoption but could benefit from being separated from the mothership and getting its own vendor-neutral foundation.

 



Linus Torvalds declares Intel fix for Meltdown/Spectre ‘COMPLETE AND UTTER GARBAGE’

23:24 | 22 January

The always outspoken Linus Torvalds, best known for his continuing work on the innermost code of Linux systems, has harsh words to say and accusations to level against Intel. His evaluation of Intel’s latest proposed fix for the Meltdown/Spectre issue: “the patches are COMPLETE AND UTTER GARBAGE.” As a potential line of inquiry, he suggests: “Has anybody talked to them and told them they are f*cking insane?” (asterisk his.)

These and other kind epithets are awarded by Torvalds in a public email chain between him and David Woodhouse, an engineer at Amazon in the U.K., regarding Intel’s solution as relating to the Linux kernel. The issue is (as far as I can tell as someone far out of their depth) a clumsy and, Torvalds argues, “insane” implementation of a fix that essentially does nothing while also doing a bunch of unnecessary things.

The fix needs to address Meltdown (which primarily affects Intel chips), but instead of just doing so across the board, it makes the whole fix something the user or administrator has to opt into at boot. Why even ask, if this is such a huge vulnerability? And why do it at such a low level when future CPUs will supposedly not require it, at which point the choice would be at best unnecessary and at worst misleading or lead to performance issues?

Meanwhile, a bunch of other things are added in the same patch that Torvalds points out are redundant with existing solutions, for instance adding protections against an exploit already mitigated by Google Project Zero’s “retpoline” technique.

Why do this? Torvalds speculates that a major part of Intel’s technique, in this case “Indirect Branch Restricted Speculation” or IBRS, is so inefficient that to roll it out universally would result in widespread performance hits. So instead, it made the main Meltdown fix optional and added the redundant stuff to make the patch look more comprehensive.

Is Intel really planning on making this shit architectural? Has anybody talked to them and told them they are f*cking insane?

They do literally insane things. They do things that do not make sense. That makes all your [i.e. Woodhouse’s] arguments questionable and suspicious. The patches do things that are not sane.

…So somebody isn’t telling the truth here. Somebody is pushing complete garbage for unclear reasons. Sorry for having to point that out.

Woodhouse (who, in a long-suffering manner, asks that they “be done with the shouty part”) later in the thread acknowledges Torvalds’ criticism, calling IBRS “a vile hack” and agreeing that “There’s no good reason for it to be opt-in.” But he notes some points that are, if not exactly in favor of Intel’s approach, at least explain it a bit.

At any rate, this is all very deep discussion and really only a small slice of it. I’m not highlighting this because I think it’s technically interesting (I’m not really qualified to say so) or consequential in terms of what users will see (it’s hard to say at this point) but rather to simply point out that the Meltdown/Spectre debacle is far from over — in fact, it’s barely begun.

What we saw a few weeks back was the initial wave of craziness and the first line of defense being established. But the work of protecting the billions of devices affected by these problems is going to go on for years as conflicts like this work themselves out. And Linus Torvalds, as profane as his criticisms are wont to be, is one of the many people working hard on behalf of the open-source community and the people who ultimately benefit from it down the line.

If there weren’t detail-oriented, no-BS, old-school coders out there watching out for the likes of you and me, the great complacent unwashed out here in userland, we would have to take whatever Intel and the others hand us and thank them in our ignorance. I for one am glad to have people smarter and more uncompromising than myself fighting on our behalf, however “shouty” they may be.


 



Happy 25th birthday, Linux

22:50 | 22 August

Linux will turn 25 years old on August 25, the day Linus Torvalds sent out his fateful message asking for help with a new operating system. “I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I’d like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things),” he wrote in the comp.os.minix message board. And the rest, as they say, is history.

What’s particularly interesting about Torvalds’ note is that it was followed not by snark or derision but with general interest. While we can chalk that up to Torvalds actually having a product ready to show potential users, we are also reminded that the internet in 1991 was a far different place than it is today.

The Linux Foundation has just released a detailed report on the OS with highlights from the past 25 years. They write that 13,500 developers from 1,300 companies have contributed to the kernel since the entire project moved to Git in 2005. The most interesting bit of data?

“During the period between the 3.19 and 4.7 releases, the kernel community was merging changes at an average rate of 7.8 patches per hour; that is a slight increase from the 7.71 patches per hour seen in the previous version of this report, and a continuation of the long-term trend toward higher patch volumes.” That means the Linux kernel is almost constantly being patched and updated, all by a volunteer army of programmers dedicated to seeing the glue of the internet succeed.
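
That figure is easy to sanity-check yourself. A minimal Python sketch, assuming a local clone of the kernel repository with git installed; it counts commits between the two tags the report compares, using commit count as a rough proxy for patches:

# Back-of-the-envelope check of a "patches per hour" figure from a local
# kernel clone. Assumes git is installed and the tags exist in the repo;
# commit count is a rough proxy for the report's patch count.
import subprocess

def patches_per_hour(repo: str, old: str, new: str) -> float:
    def git(*args: str) -> str:
        return subprocess.check_output(["git", "-C", repo, *args], text=True).strip()

    commits = int(git("rev-list", "--count", f"{old}..{new}"))
    # %ct is the committer timestamp (seconds since the epoch).
    seconds = int(git("log", "-1", "--format=%ct", new)) - int(git("log", "-1", "--format=%ct", old))
    return commits / (seconds / 3600)

# Example (hypothetical local path): patches_per_hour("/src/linux", "v3.19", "v4.7")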

You can read the entire report here.

Linux now runs most of the websites you visit and runs on everything from gas pumps to smartwatches. The OS teaches kids to program thanks to the Raspberry Pi and it helped the French police save millions of euros. Heck, even Microsoft is releasing code for Linux. If you can’t beat ’em, join ’em.

For a bit more insight into the history of the OS, I’d recommend Rebel Code and Just For Fun. These books, released around the time Linux was coming into prominence, tell the fascinating story of Torvalds and his not “big and professional” side project.


 



Money And Politics: Bitcoin’s Governance Crisis

02:00 | 23 August

Guy Corem, Crunch Network Contributor

Guy Corem is CEO of SponDoolies-Tech.


A lot of intriguing and conflicting things have been said about Bitcoin in the past few years. Some see it as the salvation of the financial system, others as a new toy, appealing only to the technologically savvy.

Say what you will, but so far, Bitcoin is a technological success. Minor glitches aside, the developer community that originally rallied around this project has turned an immature yet mind-opening computer protocol into a functioning monetary system, operated and used worldwide.

These developers’ personal investments have been enormous. It took incredible creativity and innovation to combine knowledge in software engineering and cryptography with a high political sense, game theory with international diplomacy and computer networking with coalition forming and mass persuasion.


All of these skills are required to lead Bitcoin’s core development. As the stakes go up and the authority of early involved developers gets stretched thin, the big question is, “Can the Bitcoin community create new tools to reduce the burden of politics?”

Bitcoin, a decentralized monetary system, aims to be the new money for the Internet arena, eliminating the need for trust and reducing the risks to the monetary system that are brought on by human involvement and decision making.

But even Bitcoin is a machine that is programmed by mere humans. Under the hood, politics can still be divisive when developers disagree on important code changes.

To qualify for entering the debate, one needs to invest themselves heavily and create the proposed fix; this tends to filter out all but the most opinionated developers. The reward is, of course, proving the other side wrong, and saving the day for millions of users. Oh, and those Bitcoins in your pocket, too.

Making A Decision

So how can such integral decisions be made in this new challenging environment containing so many different players?

Until the end of 2010, Bitcoin had a single voice: “Satoshi Nakamoto,” its mysterious and unidentified creator. The Bitcoin Protocol was embodied in Nakamoto’s software. The community involved was small and Nakamoto’s vision was revered as near prophecy.

By the end of 2010, Nakamoto disappeared, leaving a large vacuum in leadership. Since then, the Bitcoin community has been left with flesh and blood developers whose legitimacy is tested with every decision they make.

The debate is exacerbated by the huge financial stakes for participants and by a trust-no-one mentality — ironically, the same mentality that brought forth the need for the Bitcoin system.


Unlike other software projects, a small tweak in code will affect the entire ecosystem. For example, the current crisis deals with fixing unforeseen effects of an early change that attempted to fix “spam” transactions by introducing limits to the number of transactions.

There are billions of dollars at stake, so who has the authority and immense responsibility to take risks in such a situation?

Learning From Other Software Projects

It has been proven over time that strong leadership does in fact help forge consensus. For example, Linus Torvalds, the inventor of Linux, has served as “benevolent dictator” for 24 years of continuous development.

The technological bets he took have propelled the once-hobbyist operating system into billions of smartphones and millions of servers. With a clear vision of scaling the project, Torvalds stepped in countless times, accepting or rejecting changes. As the original inventor, never bowing to political correctness, his authority was unchallenged.

If Torvalds didn’t like your code, tough luck. For years, Torvalds was, in a practical sense, the final guardian of Linux’s code. You could always create your own type of Linux (known as a fork) — a departure from the official Linux, which you would control — and include your code.

That would be considered a dramatic move, a competition to “mainstream Linux.” The tools to move code between the two versions were cumbersome, and code tended to diverge, forcing developers and users to make hard choices.

To handle the growing number of Linux types and the difficulty of managing them, Torvalds created Git, a decentralized system for managing code repositories, allowing people to interact and share code easily.


Bitcoin and Git have a main similarity, as they both deal with getting to a consensus: Bitcoin is the agreement on which transactions are considered valid; Git is about the agreement on what code changes should be included in the latest version, on top of which developers should work.

Git has changed the politics of code development. You no longer need to beg a central authority like Torvalds to accept your code change.

With Git, it’s trivial to post your own version, and it’s trivial for Torvalds to pull it into his code, if he or his lieutenants find it worthy. Every code change is essentially a little fork, but merges became much easier. With changes moving around freely, developers compete on the merits and usefulness of their version.
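
For the curious, that flow can be scripted end to end. A minimal sketch driven from Python’s standard subprocess module; the URLs and branch names are hypothetical placeholders:

# Sketch of the fork-and-merge flow described above, driven from Python.
# All URLs and branch names are hypothetical placeholders.
import subprocess
from typing import Optional

def git(*args: str, cwd: Optional[str] = None) -> None:
    """Run a git command, raising if it fails."""
    subprocess.run(["git", *args], cwd=cwd, check=True)

# "Post your own version": clone the project and diverge on a branch.
git("clone", "https://example.com/project.git", "project")
git("checkout", "-b", "my-feature", cwd="project")
# ... edit files, then: git("commit", "-am", "my change", cwd="project")

# "Pull it into his code": a maintainer fetches a contributor's branch
# and merges it; every such merge heals one of those "little forks".
git("fetch", "https://example.com/alice/project.git", "my-feature", cwd="project")
git("merge", "--no-edit", "FETCH_HEAD", cwd="project")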

What’s a useful version? How does a piece of code get merit? Those are still subjective questions that Git can’t solve. The authority behind an official version still underlines a struggle of power, but more voices can be heard, and forks are less dramatic.

By contrast, the consensus on the state of the Bitcoin ledger is mathematical, with rules on which Bitcoin participants agree. In practice, Bitcoin rules are set in the code of the official version.

Although the Bitcoin world is not quite where the Linux world is today, perhaps there is a lesson to be learned from Linux’s success versus reinventing the wheel.

Where Things Differ

The truth is, Bitcoin can’t afford a code fork. Subtle differences between implementations can lead to several competing versions of the Bitcoin ledger, the main database of all Bitcoin transactions.

Such a schizophrenic beast wouldn’t survive, as users would receive Bitcoins spent on one ledger and unspent on another. That’s why Bitcoin core developers, like neurosurgeons, won’t poke around unless absolutely necessary. They have this unwritten contract with the Bitcoin community: keep the fundamental rules of Bitcoin, keep the system running, keep it scaling and build up the system that could contend with traditional finance.

Where To Go From Here

There’s no doubt that testing needs to be done and changes have to be put in the system, but this has to be done responsibly. Some developers choose to create new coins and test their code there, in an environment separate from Bitcoin. These are commonly known as Alt-coins and do not enjoy the same popularity as Bitcoin.

Another possible solution is being promoted by the team at Blockstream. The VC-funded company pushes for experiments to happen within the Bitcoin currency, but not as part of the main system. This introduces flexibility, allowing the core developers to innovate without compromising the currency.

Opponents see it as a backdoor to coerce the Bitcoin protocol in a more government-friendly direction, taming the beast. Much like Git, these experiments, known as side-chains, separate the human consensus debate of what should be the authorized version from the underlying code-change mechanism.

Meanwhile, they keep the one consensus that seems to unite Bitcoiners: Keep Bitcoin alive and strong!


As the debates heat up, we’re reminded that consensus-building remains a difficult, human task. Previous attempts by the community have been extremely problematic, at best. VC-funded companies could play a larger role, but most would rather pretend that Bitcoin will be maintained by others.

Companies that step up to the job will have their work cut out for them, proving their value to the community while being deemed agenda-free.

Can we really create a human-free consensus? Nakamoto left us with an experiment. The current value of the experiment is US$4 billion, and the Bitcoin community is following closely to see how this turns out.

Bitcoin may not need a “benevolent dictator,” but as the system reaches new scales, it will need strong leadership, along with new development tools to experiment in bold directions.


 


All topics: 9
