Blog of the website «TechCrunch»


Main article: AWS Lambda


AWS expands its IoT services, brings Alexa to devices with only 1MB of RAM

22:13 | 25 November

AWS today announced a number of IoT-related updates that, for the most part, aim to make getting started with its IoT services easier, especially for companies that are trying to deploy a large fleet of devices. The marquee announcement, however, is about the Alexa Voice Service, which makes Amazon’s Alexa voice assistant available to hardware manufacturers who want to build it into their devices. These manufacturers can now create “Alexa built-in” devices with very low-powered chips and 1MB of RAM.

Until now, you needed at least 100MB of RAM and an ARM Cortex A-class processor. Now, the requirement for Alexa Voice Service integration for AWS IoT Core has come down to 1MB and a cheaper Cortex-M processor. With that, chances are you’ll see even more lightbulbs, light switches and other simple, single-purpose devices with Alexa functionality. You obviously can’t run a complex voice-recognition model and decision engine on a device like this, so all of the media retrieval, audio decoding, etc. is done in the cloud. All the device needs to be able to do is detect the wake word to start the Alexa functionality, which is a comparatively simple model.

“We now offload the vast majority of all of this to the cloud,” AWS IoT VP Dirk Didascalou told me. “So the device can be ultra dumb. The only thing that the device still needs to do is wake word detection. That still needs to be covered on the device.” Didascalou noted that with new, lower-powered processors from NXP and Qualcomm, OEMs can reduce their engineering bill of materials by up to 50 percent, which will only make this capability more attractive to many companies.

Didascalou believes we’ll see manufacturers in all kinds of areas use this new functionality, but most of it will likely be in the consumer space. “It just opens up the what we call the real ambient intelligence and ambient computing space,” he said. “Because now you don’t need to identify where’s my hub — you just speak to your environment and your environment can interact with you. I think that’s a massive step towards this ambient intelligence via Alexa.”

No cloud computing announcement these days would be complete without talking about containers. Today’s container announcement for AWS’ IoT services is that IoT Greengrass, the company’s main platform for extending AWS to edge devices, now offers support for Docker containers. The reason for this is pretty straightforward. The early idea of Greengrass was to have developers write Lambda functions for it. But as Didascalou told me, a lot of companies also wanted to bring legacy and third-party applications to Greengrass devices, as well as those written in languages that are not currently supported by Greengrass. Didascalou noted that this also means you can bring any container from the Docker Hub or any other Docker container registry to Greengrass now, too.

“The idea of Greengrass was, you build an application once. And whether you deploy it to the cloud or at the edge or hybrid, it doesn’t matter, because it’s the same programming model,” he explained. “But very many older applications use containers. And then, of course, you saying, okay, as a company, I don’t necessarily want to rewrite something that works.”

Another notable new feature is Stream Manager for Greengrass. Until now, developers had to cobble together their own solution for managing data streams from edge devices, using Lambda functions. Now, with this new feature, they don’t have to reinvent the wheel every time they want to build a new solution for connection management and data retention policies, etc., but can instead rely on this new functionality to do that for them. It’s pre-integrated with AWS Kinesis and IoT Analytics, too.

Also new for AWS IoT Greengrass are fleet provisioning, which makes it easier for businesses to quickly set up lots of new devices automatically, as well as secure tunneling for AWS IoT Device Management, which makes it easier for developers to remotely access a device and troubleshoot it. In addition, AWS IoT Core now features configurable endpoints.


New Relic snags early stage serverless monitoring startup IOpipe

17:46 | 1 November

As we move from a world dominated by virtual machines to one of serverless, it changes the nature of monitoring, and vendors like New Relic certainly recognize that. This morning the company announced it was acquiring IOpipe, an early-stage Seattle serverless monitoring startup, to beef up its own serverless monitoring chops. Terms of the deal weren’t disclosed.

New Relic gets what it calls “key members of the team,” which at least includes co-founders Erica Windisch and Adam Johnson, along with the IOpipe technology. The new employees will be moving from Seattle to New Relic’s Portland offices.

“This deal allows us to make immediate investments in onboarding that will make it faster and simpler for customers to integrate their [serverless] functions with New Relic and get the most out of our instrumentation and UIs that allow fast troubleshooting of complex issues across the entire application stack,” the company wrote in a blog post announcing the acquisition.

It adds that initially the IOpipe team will concentrate on moving AWS Lambda features like Lambda Layers into the New Relic platform. Over time, the team will work on increasing support for serverless function monitoring. New Relic is hoping that by combining the IOpipe team and technology with its own, it can accelerate its serverless monitoring capabilities.

As TechCrunch’s Frederic Lardinois pointed out in his article about the company’s $2.5 million seed round in 2017, Windisch and Johnson bring impressive credentials.

“IOpipe co-founders Adam Johnson (CEO) and Erica Windisch (CTO), too, are highly experienced in this space, having previously worked at companies like Docker and Midokura (Adam was the first hire at Midokura and Erica founded Docker’s security team). They recently graduated from the Techstars NY program,” Lardinois wrote at the time.

The startup has been helping monitor serverless operations for companies running AWS Lambda. It’s important to understand that serverless doesn’t mean that there are no servers, but the cloud vendor — in this case AWS — provides the exact resources to complete an operation and nothing more.

IOpipe co-founders Erica Windisch and Adam Johnson. Photo: New Relic

Once the operation ends, the resources can simply get redeployed elsewhere. That makes building monitoring tools for such ephemeral resources a huge challenge. New Relic has also been working on the problem and released its New Relic Serverless for AWS Lambda offering earlier this year.

IOpipe was founded in 2015, which was just around the time that Amazon was announcing Lambda. At the time of the seed round the company had eight employees. According to Pitchbook data, it currently has between 1 and 10 employees, and has raised $7.07 million since its inception.

New Relic was founded in 2008 and raised over $214 million, according to Crunchbase, before going public in 2014. Its stock price was $65.42 at the time of publication, up $1.40.


New Relic takes a measured approach to platform overhaul

23:02 | 14 May

New Relic, the SaaS applications performance management platform, announced a major update to that platform today. Instead of ripping off the band-aid all at once, the company has decided to take a more measured approach to change, giving customers a chance to ease into it.

The new platform, called New Relic One, has been designed to replace the original platform, which was developed over the previous decade. The company says that by moving slowly to the new platform, customers will be able to take advantage of new features that it couldn’t have built on the old platform, without having to learn a new way of working.

Jim Gochee, chief product officer at New Relic, says that all of the existing tooling and functionality will eventually be ported over or reimagined on top of New Relic One. “What it is under the covers for us is a new technology stack and a new platform for our offering. We are still running our existing technology stack with our existing products. So we’re [essentially] running two platforms in two stacks in parallel, but all of the new stuff is going to be built on New Relic One over time,” he explained.

By redesigning the existing platform from scratch, New Relic created a new, modern, more extensible model that will allow it to plug in new functionality more easily over time, and eventually even allow customers to do the same thing. For now, it’s about changing what’s happening under the hood and providing a new user experience in a redesigned user interface.

“New Relic One is very pluggable and extensible, which makes it easier for our own teams to build on, and to extend and expand, and also down the road we will eventually get to the point where partners and customers will be able to extend our UI themselves, which is something that we’re very excited about,” he said.

Among the new features is support for AWS Lambda, the company’s serverless offering. It also enables users to search across multiple accounts. It’s not unusual for customers to be monitoring multiple accounts and sub-accounts. With New Relic One, customers can now search across these accounts and find if issues have cascaded more easily.

In a blog post introducing the new platform, CEO Lew Cirne acknowledged the growing complexity of the monitoring landscape, something the new platform has been specifically designed to address.

“Unlike today’s fragmented tools that can deliver a bag of charts and metrics with a bunch of seemingly unrelated numbers, New Relic One is designed to cut through complexity, provide context, and let you see across artificial organizational boundaries so you can quickly find and fix problems,” Cirne wrote.

Nancy Gohring, a senior analyst at 451 Research, says this flexibility is a key strength of the new approach. “One of the most important updates here is the reworked data model, which allows New Relic to offer customers more flexibility in how they can search the operations data they’re collecting and build dashboards. This kind of flexibility is more important in modern app environments that are more complex and dynamic than they used to be. Everyone’s environment is different, and digging for the cause of a problem is more complicated than it used to be,” Gohring told TechCrunch. The new ability to search across accounts should help with that.

She concedes that having parallel platforms is not ideal, but sees why the company chose to go this route. “Having two UIs is never great. But the approach New Relic is taking lets them get something totally new out all at once, rather than spending time gradually introducing it. It will let customers try out the new stuff at their own pace,” she said.

New Relic One goes live tomorrow, and will be available at no additional cost to New Relic subscribers.


Serverless monitoring startup Epsagon expands to cover broader microservices

16:13 | 7 May

When Epsagon launched last fall, the Israeli startup had an idea to monitor serverless architectures, specifically AWS Lambda. But the company didn’t want to confine itself to monitoring a narrow class of applications, and today it announced it is now able to monitor a broader set of microservice development approaches.

CEO and co-founder Nitzan Shapira says when it launched, the startup wanted to take aim at serverless, and Lambda seemed to be the prime tool for doing that. “Our product was basically a tracing, troubleshooting and monitoring tool that was automatically doing all of that for the Lambda ecosystem. And since then, we’ve seen actually a bigger shift [beyond] just Lambda,” he said.

That shift was to a broader view of this kind of deployment across a set of modern applications involving microservices. When developers move to these modern approaches, it becomes impossible to launch an agent to help monitor what’s happening. Yet the developers still need visibility into the applications.

To help, the company is launching a tracing and logging tool together, a first for this type of monitoring, according to Shapira. “Today, with engineering and DevOps working closer than ever, being able to automatically trace microservices applications without using an agent and combine the tracing and the logging in one platform is extremely valuable,” he said.

Shapira says that over time the company plans to expand this idea and support more frameworks out of the box to allow this kind of open tracing across different tools. “We need to provide support for more and more frameworks becoming popular. Lambda is just one framework,” he explained.

Serverless is somewhat of a misnomer. The servers are still there, but instead of programming to launch on a particular server, virtual machine (VM) or set of VMs, the cloud infrastructure vendor provides whatever infrastructure resources a developer requires at any given moment automatically.

Microservices encompass the idea that instead of building a monolithic application, you break it down into a series of smaller services, typically launching them in containers and orchestrating the containers in a tool like Kubernetes.

The company only came out of stealth last October, so it’s still early days, but it is already expanding, opening a sales office in the US with a 4-person staff. The engineering team remains in Israel. It is approaching 20 employees in total.

Shapira wouldn’t talk about exact customer numbers, but says the company has hundreds of users now and is doubling the number of paying customers every month.


The business case for serverless

02:30 | 16 December

Zack Kanter, Contributor
Zack Kanter is the co-founder of Stedi.

While serverless is typically championed as a way to reduce costs and scale massively on demand, there is one extraordinarily compelling reason above all others to adopt a serverless-first approach: it is the best way to achieve maximum development velocity over time. It is not easy to implement correctly and is certainly not a cure-all, but, done right, it paves an extraordinary path to maximizing development velocity, and it is because of this that serverless is the most under-hyped, under-discussed tech movement amongst founders and investors today.

The case for serverless starts with a simple premise: if the fastest startup in a given market is going to win, then the most important thing is to maintain or increase development velocity over time. This may sound obvious, but very, very few startups state maintaining or increasing development velocity as an explicit goal.

“Development velocity,” to be specific, means the speed at which you can deliver an additional unit of value to a customer. Of course, an additional unit of customer value can be delivered either by shipping more value to existing customers, or by shipping existing value—that is, existing features—to new customers.

For many tech startups, particularly in the B2B space, both of these are gated by development throughput (the former for obvious reasons, and the latter because new customer onboarding is often limited by onboarding automation that must be built by engineers).

What does serverless mean, exactly? It’s a bit of a misnomer. Just as cloud computing didn’t mean that data centers disappeared into the ether — it meant that those data centers were being run by someone else, and servers could be provisioned on-demand and paid for by the hour — serverless doesn’t mean that there aren’t any servers.

There always have to be servers somewhere. Broadly, serverless means that you aren’t responsible for all of the configuration and management of those servers. A good definition of serverless is pay-per-use computing where uptime is out of the developer’s control. With zero usage, there is zero cost. And if the service goes down, you are not responsible for getting it back up. AWS started the serverless movement in 2014 with a “serverless compute” platform called AWS Lambda.
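
To make that programming model concrete, here is a minimal sketch of a Lambda function in Python (the event shape and greeting logic are invented for illustration; only the two-argument handler signature is prescribed by Lambda’s Python runtime):

```python
# Minimal AWS Lambda handler in Python. Lambda invokes this function
# once per event; you never provision or manage the machine it runs on.
def handler(event, context):
    # 'event' carries the trigger payload (an API request, a queue
    # message, etc.); 'context' exposes runtime metadata such as the
    # remaining execution time. The body below is a toy example.
    name = event.get("name", "world")
    return {"message": f"Hello, {name}"}
```

With zero invocations this code costs nothing, and if an execution environment dies, the platform simply replaces it on the next event.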

Whereas a ‘normal’ cloud server like AWS’s EC2 offering had to be provisioned in advance and was billed by the hour regardless of whether or not it was used, AWS Lambda was provisioned instantly, on demand, and was billed only per request. Lambda is astonishingly cheap: $0.0000002 per request plus $0.00001667 per gigabyte-second of compute. And while users have to increase their server size if they hit a capacity constraint on EC2, Lambda will scale more or less infinitely to accommodate load — without any manual intervention. And, if an EC2 instance goes down, the developer is responsible for diagnosing the problem and getting it back online, whereas if a Lambda dies another Lambda can just take its place.
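
To see just how cheap that is, here is a back-of-the-envelope calculation using the two rates quoted above (the workload figures, one million requests a month at 512MB and 100ms each, are illustrative assumptions, not billing data):

```python
# Back-of-the-envelope Lambda bill using the per-request and
# per-GB-second rates quoted above. Workload numbers are assumptions.
PRICE_PER_REQUEST = 0.0000002      # USD per request
PRICE_PER_GB_SECOND = 0.00001667   # USD per GB-second

requests = 1_000_000   # invocations per month (assumed)
memory_gb = 0.5        # 512 MB allocated
duration_s = 0.1       # 100 ms average execution time

request_cost = requests * PRICE_PER_REQUEST
compute_cost = requests * memory_gb * duration_s * PRICE_PER_GB_SECOND

print(f"requests: ${request_cost:.2f}")                  # $0.20
print(f"compute:  ${compute_cost:.2f}")                  # $0.83
print(f"total:    ${request_cost + compute_cost:.2f}")   # $1.03
```

Roughly a dollar a month for a million requests, with no idle cost at all.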

Although Lambda—and equivalent services like Azure Functions or Google Cloud Functions—is incredibly attractive from a cost and capacity standpoint, the truth is that saving money and preparing for scale are very poor reasons for a startup to adopt a given technology. Few startups fail as a result of spending too much money on servers or from failing to scale to meet customer demand — in fact, optimizing for either of these things is a form of premature scaling, and premature scaling on one or many dimensions (hiring, marketing, sales, product features, and even hierarchy/titles) is the primary cause of death for the vast majority of startups. In other words, prematurely optimizing for cost, scale, or uptime is an anti-pattern.

When people talk about a serverless approach, they don’t just mean taking the code that runs on servers and chopping it up into Lambda functions in order to achieve lower costs and easier scaling. A proper serverless architecture is a radically different way to build a modern software application — a method that has been termed a serverless, service-full approach.

It starts with the aggressive adoption of off-the-shelf platforms—that is, managed services—such as AWS Cognito or Auth0 (user authentication—sign up and sign in—as-a-service), AWS Step Functions or Azure Logic Apps (workflow-orchestration-as-a-service), AWS AppSync (GraphQL backend-as-a-service), or even more familiar services like Stripe.

Whereas Lambda-like offerings provide functions as a service, managed services provide functionality as a service. The distinction, in other words, is that you write and maintain the code (e.g., the functions) for serverless compute, whereas the provider writes and maintains the code for managed services. With managed services, the platform is providing both the functionality and managing the operational complexity behind it.

By adopting managed services, the vast majority of an application’s “commodity” functionality—authentication, file storage, API gateway, and more—is handled by the cloud provider’s various off-the-shelf platforms, which are stitched together with a thin layer of your own ‘glue’ code. The glue code — along with the remaining business logic that makes your application unique — runs on ultra-cheap, infinitely-scalable Lambda (or equivalent) infrastructure, thereby eliminating the need for servers altogether. Small engineering teams like ours are using it to build incredibly powerful, easily-maintainable applications in an architecture that yields an unprecedented, sustainable development velocity as the application gets more complex.
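
As a concrete picture of that glue layer, consider a hypothetical sketch: an API Gateway route, authenticated by Cognito, whose only proprietary logic is a thin Lambda handler writing an order to DynamoDB. The table and field names are invented; the boto3 calls and the API Gateway proxy event shape are standard.

```python
import json
from decimal import Decimal

import boto3

# Hypothetical glue code: API Gateway terminates HTTP, Cognito has
# already authenticated the caller, DynamoDB stores the record. This
# thin handler is the only code the startup owns.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # assumed table name

def handler(event, context):
    order = json.loads(event["body"])  # API Gateway proxy event body
    # Cognito authorizer claims ride along in the request context.
    user_id = event["requestContext"]["authorizer"]["claims"]["sub"]
    table.put_item(Item={
        "userId": user_id,
        "orderId": order["id"],
        "total": Decimal(str(order["total"])),  # DynamoDB needs Decimal
    })
    return {"statusCode": 201, "body": json.dumps({"ok": True})}
```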

There is a trade-off to adopting the serverless, service-full philosophy. Building a radically serverless application requires taking an enormous hit to short-term development velocity, since it is often much, much quicker to build a “service” than it is to use one of AWS’s off-the-shelf offerings. When developers are considering a service like Stripe, “build vs. buy” isn’t even a question—it is unequivocally faster to use Stripe’s payment service than it is to build a payment service yourself. More accurately, it is faster to understand Stripe’s model for payments than it is to understand and build a proprietary model for payments—a testament both to the complexity of the payment space and to the intuitive service that Stripe has developed.

But for developers dealing with something like authentication (Cognito or Auth0) or workflow orchestration (AWS Step Functions or Azure Logic Apps), it is generally slower to understand and implement the provider’s model for a service than it is to implement the functionality within the application’s codebase (either by writing it from scratch or by using an open source library). By choosing to use a managed service, developers are deliberately choosing to go slower in the short term—a tough pill for a startup to swallow. Many, understandably, choose to go fast now and roll their own.

The problem with this approach comes back to an old axiom in software development: “code isn’t an asset—code is debt.” Code requires an entry on both sides of the accounting equation. It is an asset that enables companies to deliver value to the customer, but it also requires maintenance that has to be accounted for and distributed over time. All things equal, startups want the smallest codebase possible (provided, of course, that developers aren’t taking this too far and writing clever but unreadable code). Less code means less surface area to maintain, and also means less surface area for new engineers to grasp during ramp-up.

Herein lies the magic of using managed services. Startups get the beneficial use of the provider’s code as an asset without holding that code debt on their “technical balance sheet.” Instead, the code sits on the provider’s balance sheet, and the provider’s engineers are tasked with maintaining, improving, and documenting that code. In other words, startups get code that is self-maintaining, self-improving, and self-documenting—the equivalent of hiring a first-rate engineering team dedicated to a non-core part of the codebase—for free. Or, more accurately, at a predictable per-use cost. Consider a managed service like Cognito or Auth0: on day one, perhaps it doesn’t have all of the features on a startup’s wish list. The difference is that the provider has a team of engineers and product managers whose sole task is to ship improvements to this service day in and day out. Their exciting core product is another company’s would-be redheaded stepchild.

If there is a single unifying principle amongst a startup’s engineering team, it should be to write as little code—and be responsible for as few non-core services—as humanly possible. By adopting this philosophy, a startup can build a platform that can process billions of transactions at an extremely predictable, purely-variable cost with nearly zero devops oversight.

Being this lazy takes a surprising amount of discipline. Getting good at managing a serverless codebase and serverless infrastructure is nontrivial. It means building extensive practices around testing and automation, which means an even larger upfront time investment. Integrating with a managed service can be unbelievably painful, with days spent trying to understand all of the gaps, gotchas, and edge cases. The temptation to implement a proprietary solution can be incredible, especially when it means a story can be done in a matter of minutes or hours instead of days or longer.

It means writing wonky workarounds when a service only accommodates 80% of a developer’s needs. And as the missing 20% of functionality is released, it means refactoring code to remove the workaround, even when it is working just fine and there is no near-term benefit to changing it.

The substantial early time investment means that a serverless/managed-service-first approach is not right for every startup. The most important question to ask is: over what time scale do we need to be fast? If the answer is days or weeks, as is the case for many very early-stage startups, it is probably not the right approach.

But if the timescale for velocity optimization has shifted from days or weeks to months or years, it is worth taking a close look at going serverless.

Recruiting great engineers is extraordinarily hard—and only getting harder. It is a tremendous competitive advantage to task those engineers with building differentiated business functionality while your competitors build services that do commoditized, undifferentiated heavy lifting, and then remain stuck with the maintenance of those services for years to come. Of course, there are certain cases where serverless just doesn’t make sense, but those are disappearing at a rapid rate (for example, Lambda’s 5-minute timeout was recently tripled to 15 minutes)—and reasons such as lock-in or latency are generally nonsense or a thing of the past.

Ultimately, the job of a software startup—and therefore the job of the founder—is to deliver customer value above and beyond the capability of the competition. That job comes down to maximizing development velocity, which, in turn, comes down to mitigating complexity wherever possible. It may be that every codebase, and therefore every startup, is destined to become “a big ball of mud”—the term coined in a 1997 paper to describe the “haphazardly structured, sprawling, sloppy, duct-tape-and-baling-wire, spaghetti-code jungle” that every software project seems eventually destined to become.

One day, complexity will grow past a breaking point and development velocity will begin to decline irreversibly, and so the ultimate job of the founder is to push that day off as long as humanly possible. The best way to do that is to keep your ball of mud to the minimum possible size — serverless is the most powerful tool ever developed to do exactly that.


Solo.io raises $11M to help enterprises adopt cloud-native technologies

19:00 | 10 December

Solo.io, a Cambridge, MA-based startup that helps enterprises adopt cloud-native technologies, is coming out of stealth mode today and announcing both its Series A funding round and the launch of its Gloo Enterprise API gateway.

Redpoint Ventures led the $11 million Series A round, with participation from seed investor True Ventures. Like most companies at the Series A stage, Solo.io plans to use the money to invest in the product development of its enterprise and open source tools, as well as to grow its sales and marketing teams.

Solo.io offers a number of open source tools like the Gloo function gateway, the Sqoop GraphQL server and the SuperGloo (see a theme here?) service mesh orchestration platform. In addition, the team has also open-sourced, among other things, a Kubernetes debugger and a tool for building and running unikernels.

Its first commercial offering, though, is an enterprise version of the Gloo function gateway. Built on top of the Envoy proxy, Gloo can handle the routing necessary to connect incoming API requests to microservices, serverless applications (on the likes of AWS Lambda) and traditional monolithic applications behind the proxy. Gloo handles the load balancing and other functions necessary to aggregate the incoming API requests and route them to their destinations.

“Customers who use Gloo to connect between microservices and serverless found that invocation of [AWS] Lambda is 350ms faster than the AWS API Gateway,” Idit Levine, the founder and CEO of Solo.io, told me. “Gloo also offers them direct money saving, since AWS bills per invocation. In general, Gloo offers money saving because it allows our clients to use the less expensive technologies — like their legacy apps, and sometimes containers — whenever they can, and limit the use of more expensive stuff to whenever it’s necessary.”

The enterprise version adds features like audit controls, single sign-on and more advanced security tools to the platform.

In addition to broadening its customer base, the company plans to invest heavily into its customer success and support teams, as well as its evangelism and education efforts, Levine tells me.

“Helping enterprises easily adopt innovative technologies like microservices, serverless and service mesh is our goal at Solo.io,” Levine said in today’s announcement. “Melding different technologies into one coherent environment, by supplying a suite of tools to route, debug, manage, monitor and secure applications, lets organizations focus on their software without worrying about the complexity of the underlying environment.”


AWS announces a slew of new Lambda features

23:34 | 29 November

AWS launched Lambda in 2015 and with it helped popularize serverless computing. You simply write code (event triggers) and AWS deals with whatever compute, memory and storage you need to make that work. Today at AWS re:Invent in Las Vegas, the company announced several new features to make it more developer friendly, while acknowledging that even if serverless reduces complexity, it still requires more sophisticated tools as it matures.

It’s called serverless because you don’t have to worry about the underlying servers. The cloud vendors take care of all that for you, serving whatever resources you need to run your event and no more. It means you no longer have to worry about coding for all your infrastructure and you only pay for the computing you need at any given moment to make the application work.

The way AWS works is that it tends to release something, then builds more functionality on top of that base service as it sees increasing requirements from customers using it. As Werner Vogels pointed out in his keynote on Thursday, developers debate about tools, and everyone has their own idea of what tools to bring to the task every day.

For starters, they decided to please the language folks by introducing support for new languages. Those developers who use Ruby can now use Ruby Support for AWS Lambda. “Now it’s possible to write Lambda functions as idiomatic Ruby code, and run them on AWS. The AWS SDK for Ruby is included in the Lambda execution environment by default,” Chris Munns from AWS wrote in a blog post introducing the new language support.

If C++ is your thing, AWS announced C++ Lambda Runtime. If neither of those match your programming language tastes, AWS opened it up for just about any language with the new Lambda Runtime API, which Danilo Poccia from AWS described in a blog post as “a simple interface to use any programming language, or a specific language version, for developing your functions.”
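
Under the hood, the Runtime API is just an HTTP interface inside the execution environment: a custom runtime is essentially a loop that polls for the next invocation and posts back a result. Here is a minimal sketch in Python against the documented endpoints (the echo “handler” is a stand-in for real function code):

```python
import json
import os
import urllib.request

# Minimal custom-runtime event loop against the Lambda Runtime API.
# Lambda injects the API host/port via this environment variable.
api = os.environ["AWS_LAMBDA_RUNTIME_API"]
base = f"http://{api}/2018-06-01/runtime"

while True:
    # Long-poll for the next invocation.
    with urllib.request.urlopen(f"{base}/invocation/next") as resp:
        request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
        event = json.loads(resp.read())

    # Run the function code; here we simply echo the event back.
    result = json.dumps({"echo": event}).encode()

    # POST the result for this specific invocation.
    urllib.request.urlopen(urllib.request.Request(
        f"{base}/invocation/{request_id}/response", data=result))
```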

AWS didn’t want to stop with languages though. They also recognize that even though Lambda (and serverless in general) is designed to remove a level of complexity for developers, that doesn’t mean that all serverless applications consist of simple event triggers. As developers build more sophisticated serverless apps, they have to bring in system components and compose multiple pieces together, as Amazon CTO Werner Vogels explained in his keynote today.

To address this requirement, the company introduced Lambda Layers, which they describe as “a way to centrally manage code and data that is shared across multiple functions.” This could be custom code used by multiple functions or a way to share code used to simplify business logic.
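
In practice, publishing shared code as a layer and attaching it to a function is a couple of API calls. A sketch with boto3 (the layer name, zip path and function name are placeholders; publish_layer_version and update_function_configuration are the real client methods):

```python
import boto3

client = boto3.client("lambda")

# Publish a zip of shared helpers as a layer version. The name and
# file are placeholders for whatever code your functions have in common.
with open("shared-utils.zip", "rb") as f:
    layer = client.publish_layer_version(
        LayerName="shared-utils",
        Description="helpers shared across multiple functions",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.8"],
    )

# Attach the layer to a function; its contents are unpacked under
# /opt in that function's execution environment.
client.update_function_configuration(
    FunctionName="my-function",  # placeholder
    Layers=[layer["LayerVersionArn"]],
)
```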

As Lambda matures, developer requirements grow and these announcements and others are part of trying to meet those needs.



Why Blissfully decided to go all in on serverless

23:17 | 1 October

Serverless has become a big buzzword of late, and with good reason. It has the potential to completely alter how developers write code. They can simply write a series of event triggers, while letting the cloud vendor worry about providing whatever amount of compute resources is required to complete the job. It represents a huge shift in how programs are developed, but it’s been difficult to find companies that were built from the ground up using this methodology because it’s fairly new.

Blissfully, a startup that helps customers manage their Software as a Service usage inside their companies, is one company that decided to do just that. Aaron White, co-founder and CTO, says that when he was building early versions of Blissfully, he found he needed quick bursts of compute power to deliver a list of all the SaaS products an organization is using.

He figured he could set aside a bunch of servers to provide that burst of power as needed, but that would have required a ton of overhead on his part to manage. At this point, he was a lone programmer trying to prove his SaaS management idea was even possible. As he looked at the pros and cons of serverless versus traditional virtual machines, he began to see serverless as a viable approach.

What he learned along the way was that serverless offers many advantages to a company with a bursty usage pattern like Blissfully’s, scaling up and down as needed. But it isn’t perfect: there are issues around management and tooling, and around handling the pros and cons of that scaling ability, that he had to learn about on the fly, especially coming in as early as he did with this approach.

Serverless makes sense

Blissfully is a service where serverless made a lot of sense. It wouldn’t have to manage or pay for servers it wasn’t using. Nor would it have to worry about the underlying infrastructure at all. That would be up to the cloud provider, and it would only pay for the bursts as they happened.

Serverless is actually a misnomer in that it doesn’t mean that there are no servers. It means that you don’t have to set up servers in order to run your program, which is a pretty mind-blowing transformation. In traditional programming, you have to write your code and set up all the underlying hardware ahead of time, whether it’s in your data center or in the cloud. With serverless, you just write the code and the cloud provider handles all of that for you.

The way it works in practice is that programmers set up a series of event triggers, so when a certain thing happens, the cloud provider sees this and provides the necessary resources on demand. Most of the cloud vendors offer this type of service, whether AWS Lambda, Azure Functions or Google Cloud Functions.

At this point, White began to think about serverless as a way of freeing him from thinking about managing and maintaining infrastructure and all that entailed. “I started thinking, let’s see how far we can take this. Can we really do absolutely everything serverless? And if so, that reduces a ton of traditional DevOps-style work you have to do in practice. There’s still plenty, but that was the thinking at the beginning,” he said.

Overcoming obstacles

But there were issues, especially getting into serverless as early as he did. For starters, White needed to find developers who could work in this fashion, and in 2016, when Blissfully launched, there weren’t many people out there with serverless skills. White said he wasn’t looking for direct experience so much as people who were curious to learn and were flexible enough to deal with new technology, regardless of how Blissfully implemented it.

Once he figured out the basics, he needed to think about how this would work structurally. “Part of the challenge is figuring out where do you draw the boundaries between different serverless functions? How do you think about how much you want to overload the capability of one function versus another? How do you want to split it up? You could go way too specific, and you can of course, go way too broad. So there’s a lot of judgement calls to be made in terms of how you want to split your code base to work in this way,” he said.

The other challenge he faced going with a serverless approach so early was a dearth of tooling around it. White found Serverless, Inc. right away, which helped him with a basic framework for developing, but he lacked good logging tools, and he says the company still struggles with this even now. “DevOps doesn’t go away. This is still running on a server somewhere (even if you don’t control that) and you will run into issues.” One such issue is what he calls the “cold start” problem.

Getting the resources right

Blissfully uses AWS Lambda, and as its customers require resources, it isn’t as though Amazon has a set of dedicated resources set aside waiting for such an event. If it needs to start servers cold, that could result in latency. To compensate for that, Blissfully runs a job that pings Lambda continually, so that it’s always ready to run the actual application and there isn’t a lag time related to starting from scratch.
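
A common shape for that keep-warm trick, sketched below under the assumption that a scheduled rule invokes the function with a sentinel payload (the “warmup” marker is a convention of this sketch, not an AWS feature):

```python
# Keep-warm pattern: a scheduled event (e.g. a CloudWatch Events rule)
# periodically invokes the function with a sentinel payload, and the
# handler returns immediately so a warm container stays available.
def handler(event, context):
    if event.get("warmup"):          # sentinel set by the schedule
        return {"warmed": True}      # short-circuit, no real work

    # ... normal event handling goes here ...
    return {"statusCode": 200}
```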

The other issue can be the opposite problem: you can scale much faster than you’re ready to deal with, and that can be a problem for a small team. He says in that case you want to put a limiter on the speed of the calls so you don’t end up spending more than you can afford, and so it doesn’t scale beyond your team’s ability to manage it. “I think, in some ways, this actually accelerates you running into problems where you would normally be larger scale before you really had to think about them,” White said.

The other piece is that once Lambda gets everything going, it can move data faster than your external APIs can handle, and that can require limiters to actually slow things down. “I never had that problem in the past, where I was provisioning so many computational resources that Google was yelling at me for being too fast. Being too fast for Google takes a lot of effort, but it doesn’t take a lot of effort with Lambda. When it does decide to spool up whatever resources, you can do some serious outbound damage to other APIs.” That meant he and his team had to think very early on about building sophisticated rate-limiting schemes.
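
One simple form such a rate-limiting scheme can take, sketched here as a per-container token bucket rather than anything Blissfully has described in detail:

```python
import time

class TokenBucket:
    """Blocking token bucket: allows short bursts, then caps the
    sustained rate of outbound calls from this container."""

    def __init__(self, rate_per_s: float, burst: int):
        self.rate = rate_per_s        # tokens refilled per second
        self.capacity = burst         # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def acquire(self) -> None:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens < 1:
            # Sleep just long enough for one token to accumulate.
            time.sleep((1 - self.tokens) / self.rate)
            self.tokens = 1
        self.tokens -= 1

bucket = TokenBucket(rate_per_s=10, burst=5)  # at most ~10 calls/s

def call_external_api(payload):
    bucket.acquire()  # block until the bucket allows another call
    # ... issue the actual outbound HTTP request here ...
```

Note that each concurrent Lambda container gets its own bucket, so a hard global cap still needs coordination outside the function, for example through a shared queue.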

As for costs, White estimates that his costs are much lower now that he has the service built and in place. “Our costs are so low right now, and far lower than if we had server-based infrastructure. Our computational pattern is very bursty.” That’s because it re-parses the SaaS database once a day or when the customer first signs up, and in between, usage is fairly low beyond interacting with the data.

“So for us that was perfect for serverless because I don’t really need to keep capacity around that would be pure waste.”


With its Snowball Edge, AWS now lets you run EC2 on your factory floor

18:51 | 17 July

AWS’s Snowball Edge devices aren’t new, but they are getting a new feature today that’ll make them infinitely more interesting than before. Until now, you could use the device to move lots of data and perform some computing tasks on them, courtesy of the AWS Greengrass service and Lambda that run on the device. But AWS is stepping it up and you can now run a local version of EC2, the canonical AWS compute service, right on a Snowball Edge.

With that, you can now take one of these devices, put it right on your factory floor and then run all of your standard Amazon Machine Images on it. That cuts down on bandwidth since you can either handle all of the processing on the device or pre-process it before you send it on to the cloud. And to manage it, you simply rely on the regular AWS management console (or use the command line). Every Snowball Edge comes with an Intel Xeon processor that runs at 1.8 GHz and that can support any combination of instances up to 24 vCPUs and 32 GiB of memory.
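
Since the device exposes an EC2-compatible endpoint, standard tooling can target it with an endpoint override. A rough sketch with boto3, with the caveat that every literal here (device address, port, credentials, image ID) is a placeholder to adapt from the Snowball Edge documentation:

```python
import boto3

# Point a standard EC2 client at the Snowball Edge's EC2-compatible
# endpoint instead of the AWS cloud. All values below are placeholders.
ec2 = boto3.client(
    "ec2",
    endpoint_url="http://192.0.2.10:8008",    # the device on your LAN
    region_name="us-west-2",                  # region the device was ordered in
    aws_access_key_id="DEVICE_ACCESS_KEY",    # keys obtained from the device
    aws_secret_access_key="DEVICE_SECRET_KEY",
)

# Launch one of the AMIs pre-loaded onto the device. 'sbe1' is the
# instance family used on Snowball Edge hardware.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID
    InstanceType="sbe1.small",
    MinCount=1,
    MaxCount=1,
)
```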

Since you could use any server for most of these functions, being able to manage all of your services from one console — and have it work just like any other machine in the cloud — is the selling point here. It’s worth noting that this was also the original idea behind OpenStack (though setting that up is far more complicated than ordering a Snowball Edge) and that Microsoft, with Azure Stack and its various edge computing services, offers similar capabilities.

Using a Snowball Edge isn’t cheap, though. On-demand fees for jobs that are mostly about data transfer start at $500, but if you want to keep a machine for a year, that’ll set you back at least $15,330. Chances are, then, that most companies will only use the extra compute power on the machine to manipulate some of their data before they send the Snowball Edge back to Amazon to import that data.


Rookout releases serverless debugging tool for AWS Lambda

16:02 | 4 June

The beauty of serverless computing services like AWS Lambda is that they abstract away the server itself. That enables developers to create applications without worrying about the underlying infrastructure, but it also creates a set of new problems. Without a static server, how do you debug a program that’s running? It’s a challenge that Israeli startup Rookout has solved in its latest release.

The company has achieved this by providing a way to mark the serverless code with “breakpoints.” Rookout can then collect developer-defined information about the serverless code, allowing developers to track issues even while the application is running live in a serverless environment.

This ability to run a trace, which is common in traditional applications, is much more difficult in a serverless one because there is no permanent underlying machine on which the application is running, says Rookout CEO Or Weis.

Rookout running its serverless debugger; the information at the bottom of the screen gives developers insight to debug code running on AWS Lambda. Photo: Rookout

“Specifically with serverless, it is extremely hard to predict how your software will behave in that new environment [because] it’s extremely hard to know where software is running and [it has been] almost impossible to see how it’s behaving in production,” Weis explained.

He said that until now, the only way to solve that has been writing more code in the form of log lines and SDK calls, which creates a whole administrative layer that Rookout wanted to eliminate from the process. By providing an interface to see what’s happening inside the code, the company is giving developers a way to debug live code running in a serverless environment in the same fashion they have debugged more traditional applications.

They can share this information with a myriad of popular adjacent tools, including application performance management (APM) tools like New Relic, log management tools like Splunk, and alerting tools like PagerDuty. They can also use it to simply go back in and fix the code issue if that’s what’s required.

While serverless computing isn’t truly serverless, there is no dedicated server running the application. Instead, the vendor provides the required amount of server resources based on a particular event trigger. When that event happens, the code runs and the customer gets charged. This is in stark contrast to traditional development, where you allocate a server to run the application and pay for it regardless of whether you use it or not.


[Video: Rookout Lambda debugging demo]

