In our previous article, we introduced you to the fundamentals of WebAssembly. Now that we know what WebAssembly is, it’s time to discuss where it belongs, which, we firmly believe, is in the cloud.
What Makes WebAssembly Cloudworthy?
In the preceding article, we discussed a series of WebAssembly’s defining attributes. In this article, let’s take a look at those attributes with an eye toward what makes them so appealing in the cloud.
In the cloud, workloads aren’t static. Unlike the old days of virtual machines and data centres, we don’t start things that stay up forever and run in the same place. In the cloud, our workloads move from host to host. When we consider the mythical creature known as “the edge”, we also want our workloads to be able to freely move between back ends and whatever end the edge is.
To make this possible, we need to know that our workloads will work on any host with sufficient resources. This is why portability is so important in the cloud.
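As a minimal sketch of that portability (assuming a Rust toolchain with the wasm32-wasi target installed), the very same source can be built natively or as a WASI module with nothing more than a different target flag; the module names and greeting below are purely illustrative:

```rust
// A plain Rust program with no platform-specific code.
// Built natively:          cargo build
// Built for any WASI host: cargo build --target wasm32-wasi
// The resulting .wasm module runs unchanged under engines such as
// Wasmtime or Wasmer, on any host with sufficient resources.

fn greet(host: &str) -> String {
    format!("Hello from the same module, running on {}", host)
}

fn main() {
    println!("{}", greet("wherever you scheduled me"));
}
```

The point is that nothing in the code knows or cares where it runs; the host is decided at deployment time, not at build time.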
Time is money. The longer we have to wait for some piece of work to finish, the more it costs us to perform that work. Worse, if we keep our customers waiting then we risk losing their money as well. In the previous article we discussed the highly efficient, fast nature of the WebAssembly virtual machine. We know from experiments and benchmarks that we can get near-native (or fully native, depending on the engine) performance out of WebAssembly modules.
When the unit of deployment is large, there are things we simply cannot do with it. For example, you can’t expect to ship a 1GB Docker image around quickly and easily, and you certainly can’t expect to be able to download and start such an image on demand without paying a price in terms of bandwidth and speed.
Today’s modern, Cloud Native applications practise defence in depth: the assumption that an intruder is capable of piercing any single layer of your infrastructure, so you build security into every layer. We have tools that monitor systems at runtime to try to keep them from performing unauthorised actions, but self-contained, sandboxed entities like WebAssembly modules can’t do anything unless the host runtime lets them, nor can they access any memory outside their own. You can even interrogate a module and determine exactly which features it will request from the host runtime. There are no surprises here, and there’s no way to hijack a module and make it do something it couldn’t otherwise do.
WebAssembly in the cloud
While many people reading this article may have heard about WebAssembly running in the browser, there are some amazing innovations happening everywhere from the cloud to the edge, all leveraging the compelling qualities of WebAssembly that we’ve just discussed.
There are companies providing edge compute that offer WebAssembly-based function execution; open source projects and companies working on entire high-level development frameworks that aim to change the way we build distributed applications; traditional cloud providers like AWS and Google offering WebAssembly function execution in their clouds; and even a budding portion of the Kubernetes community exploring ways to integrate WebAssembly-based workloads with the container scheduling platform.
Edge computing is code execution that happens near the physical location of the user or the source of data necessary for execution. People want edge computing because our demands for computing power and performance never let up. We want it all, and we want it faster than ever before.
Think of a machine learning application that analyses every frame of video to detect objects. If the analyser is close (in network terms) to the source of the video frames, we can do this work in real time, where the user is, while they watch the video. Without edge computing, this becomes a slow, tedious process in which the whole video is submitted for processing and we eventually get the results after waiting in line for coffee.
Edge computing has been gaining in popularity lately and there are many companies that provide edge networks for everything from static content delivery to code execution out on the edges of those delivery networks. These edge networks now support WebAssembly, embracing its tiny size, security, and performance.
Fastly, known for its edge computing services, has recently enabled running code compiled to WASI (WebAssembly System Interface) on their edge network in its Compute@Edge product. You can use languages like Rust or AssemblyScript to write code that runs at the edge, near the user, that utilises Fastly’s libraries, processes nearby data, and communicates with traditional back ends. Fastly is a member of the Bytecode Alliance (a multi-industry group of companies with a vested interest in the success of WebAssembly) and through that membership actively supports the WebAssembly community by contributing to runtime and tooling development.
For many in the industry, Netlify is the gold standard when it comes to the generation and hosting of static websites via GitHub integrations. Their simple interface makes it so you’re never more than a few clicks away from creating a new website, complete with SSL and a custom domain.
Recently, Netlify has opened up a feature that lets you create serverless functions hosted on their edge network. Remember that unlike a “cloud function” or any other service that you might deploy in a traditional cloud back end, copies of this function are deployed so as to be geographically close to the user, reducing the number of hops between the user and your code and, in theory, the number of hops between your code and the data it relies upon. For more information, check out this blog post on Netlify serverless functions.
Application Development Frameworks
In addition to a huge surge in interest in using edge computing as a deployment target for WebAssembly code, there are a number of interesting projects underway attempting to harness the size, power, security, and portability of WebAssembly to enable entirely new ways to build applications. These folks want to give us all Cloud Native, distributed building blocks that we can easily snap together to get the job done.
There now exists an entire class of framework that builds on top of low-level WebAssembly engines, exposing an opinionated application development framework. Atmo from Suborbital is one such framework. Atmo aims to make it easy to create a powerful server application without needing to worry about scalability, infrastructure, or complex networking.
With Atmo, you stitch together a series of runnables with a directive, and then you can create a runnable bundle, deploy it, and run your application. The runnables that Atmo uses for distributed business logic execution are actually compiled WebAssembly modules that use the Atmo SDK to gain access to enhanced functionality. This kind of declarative tethering of small units of compute might feel familiar to people who have used pipelines of lambdas or cloud functions in the past, or even declarative workflow-style frameworks.
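For a flavour of what that declarative stitching looks like, an Atmo directive is a small YAML file roughly along these lines (the field names here follow Atmo’s early documentation and may have changed since; the identifier and function name are hypothetical):

```yaml
# Directive.yaml: declares how runnables compose into an application.
identifier: com.example.hello    # hypothetical application identifier
appVersion: v0.1.0
handlers:
  - type: request
    resource: /hello
    method: POST
    steps:
      - fn: helloworld           # a runnable: a compiled WebAssembly module
```

Each `fn` step names a runnable, so the request-handling pipeline lives entirely in configuration rather than in networking code.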
For more information, check out the Suborbital home page.
wasmCloud from Cosmonic
Another foray into the world of application development frameworks built on top of WebAssembly runtimes is wasmCloud. wasmCloud is a distributed application runtime that embraces the actor model, dynamically linking actors with capability providers. It aims not only to strip unnecessary boilerplate from the development process, but to make it easy and fun to create secure-by-default, powerful, easily scalable applications that can span multiple infrastructures from the cloud to the edge and, yes, even web browsers.
With wasmCloud, you create your actors in Rust or any other supported WebAssembly-targeting language. Your actors communicate via an abstraction or a contract for your non-functional requirements like database access, web servers, logging, etc. This way, the same application code can have different capability providers during development than it does in production without ever being recompiled. Security for wasmCloud’s actors is provided by a decentralised system of signed JSON Web Tokens (JWTs) embedded directly into the WebAssembly modules.
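The capability-contract idea can be sketched in plain Rust. To be clear, this is an analogy using a hand-rolled trait, not the actual wasmCloud interface types: the business logic depends only on an abstraction, and the concrete provider behind it can change between development and production without the logic being recompiled.

```rust
use std::collections::HashMap;

// A contract for a non-functional requirement: key-value storage.
trait KeyValue {
    fn get(&self, key: &str) -> Option<String>;
    fn set(&mut self, key: &str, value: String);
}

// Development-time provider: an in-memory map. A production provider
// (e.g. one backed by Redis) would satisfy the same contract.
struct InMemoryKv {
    data: HashMap<String, String>,
}

impl KeyValue for InMemoryKv {
    fn get(&self, key: &str) -> Option<String> {
        self.data.get(key).cloned()
    }
    fn set(&mut self, key: &str, value: String) {
        self.data.insert(key.to_string(), value);
    }
}

// Business logic written purely against the contract.
fn record_visit(store: &mut dyn KeyValue, user: &str) -> String {
    let count: u32 = store
        .get(user)
        .and_then(|v| v.parse().ok())
        .unwrap_or(0)
        + 1;
    store.set(user, count.to_string());
    format!("{} has visited {} time(s)", user, count)
}

fn main() {
    let mut kv = InMemoryKv { data: HashMap::new() };
    println!("{}", record_visit(&mut kv, "alice"));
    println!("{}", record_visit(&mut kv, "alice"));
}
```

Swapping `InMemoryKv` for another implementation changes nothing about `record_visit`, which is the property wasmCloud exploits at the WebAssembly module boundary.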
Cloud functions, extensions, and plugins
Another class of innovation in the non-browser WebAssembly space is in traditional cloud back ends as well as at a lower, systems networking level that includes Kubernetes and adjacent tooling and frameworks.
Envoy, a graduated Cloud Native Computing Foundation (CNCF) project, is an open source edge and service proxy. In short, Envoy stands between traffic on one side of a boundary and traffic on the other, as the name implies, acting as a proxy for the so-called “real” endpoint.
Since its inception, Envoy has supported extensions in the form of custom filters that could be applied to traffic passing through the proxy. These extensions were originally written in C++ but can now be written in any language capable of producing a WebAssembly module.
To help developers find and share these extension filters, there is even a marketplace-like hub where developers can publish and search for WebAssembly extensions.
Cloud provider functions
Just about every major cloud provider that offers some kind of Functions-as-a-Service (FaaS) or lambda functionality has started supporting the ability to create those functions in WebAssembly. These providers have seen the writing on the wall: the road to the future is paved with WebAssembly modules.
Kubernetes and WebAssembly with Krustlet
You can’t really have a discussion about anything Cloud Native these days without mentioning Kubernetes and its accompanying (and quite sprawling) ecosystem.
Krustlet is a kubelet implementation written in Rust, the language behind a number of powerful tools in use throughout the Cloud Native ecosystem. The project allows you to specify the image name of a WebAssembly module stored in an OCI-compliant registry when defining a Kubernetes pod specification.
This means that, with nodes running Krustlet, you can schedule WebAssembly modules directly in Kubernetes seamlessly with no middleware hacks. Krustlet supports scheduling WASI modules and wasmCloud actors.
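A pod spec targeting a Krustlet node might look like the following sketch. The image reference is hypothetical, and the exact architecture label and toleration values below are assumptions based on Krustlet’s documentation at the time of writing:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-wasm
spec:
  containers:
    - name: hello-wasm
      # A WebAssembly module published to an OCI-compliant registry
      # (hypothetical image reference).
      image: registry.example.com/hello-wasm:v0.1.0
  # Steer the pod onto a Krustlet node rather than a container node.
  nodeSelector:
    kubernetes.io/arch: wasm32-wasi
  tolerations:
    - key: kubernetes.io/arch
      operator: Equal
      value: wasm32-wasi
      effect: NoExecute
```

Everything else about the pod lifecycle (scheduling, status, logs) flows through the ordinary Kubernetes machinery.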
Krustlet is also a recent addition to the Cloud Native Computing Foundation landscape.
Open Policy Agent
Open Policy Agent is a product that aims to provide policy-based control for Cloud Native environments. OPA would like everyone to use the same tool and policy language for policy (e.g. rules) evaluation across your entire stack from services to CI/CD to data, SSH boundaries, UI, and more. People use OPA as Kubernetes admission controllers, rules to determine user authorisation, a source of refined and post-processed business data, and much more.
Ordinarily, an Open Policy Agent policy is a set of text files written in the Rego language. However, you can also compile OPA policies into WebAssembly modules. These portable, self-contained WebAssembly policy files can then be securely and easily shipped around environments and executed against live data wherever appropriate.
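As a concrete sketch, here is a toy Rego policy that allows only a named user (the package and entrypoint names are illustrative):

```rego
# policy.rego: allow only a named user (toy example).
package example

default allow = false

allow {
    input.user == "alice"
}
```

With the OPA CLI, a policy like this can be compiled to a WebAssembly module with something along the lines of `opa build -t wasm -e example/allow policy.rego`, yielding a bundle containing a `policy.wasm` file that any host with a WebAssembly engine can evaluate against live input data.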
For more background on OPA refer to Anders Eknert’s article on WTF.
What we’re looking at now is like the early formation of a planet: thousands of tiny pieces of innovation swirling around the new WebAssembly centre of gravity. Taken separately, they may all appear to be minor steps forward, but taken as a whole, they represent the possibility of new Cloud Native frontiers driven forward by the engines of WebAssembly.
Not only do we think that WebAssembly is the future of distributed, Cloud Native development, we feel that with enough support, tooling, and community, it could be the next logical evolution in a post-Docker world.