
WTF Is Cloud Native

WTF is WebAssembly?

Back when everyone was deploying code directly to “bare metal”, our demands exceeded the capacity of the infrastructure we had, so our infrastructure evolved into virtual machines. Virtual machines allowed us to run many machines on a single piece of hardware, and data centers soon filled with them.

As even our data centers struggled to keep up with the countless virtual machines we were deploying, we entered the age of the Docker image, and now we deploy Docker images on top of virtual machines, which in turn run on bare metal.

WebAssembly is the next evolution of the unit of deployment. We’ve gone from custom-imaged hard drives, to clever bundles on bare metal, to virtual machines, to Docker images, and now we have a smaller, faster, more portable, more secure unit of deployment: the WebAssembly module.

We think you should be paying attention to WebAssembly because it is entirely possible that it is the future of computing as we know it.

So WTF is WebAssembly?

WebAssembly is neither web nor assembly 

If you’ve done any research on WebAssembly then you’ll no doubt have encountered this popular little saying. It tells us a lot about what WebAssembly is not, but it doesn’t give us any idea of what it is. Hopefully this post can clear up some of the confusion.

Let’s start with the notion that WebAssembly is a stack-based virtual machine. Every machine (virtual or otherwise) has a core set of primitive instructions that can be used to tell it what to do. No matter what high-level programming language you’re using, the ultimate output of your compiler is a file of machine-level code.

WebAssembly is a virtual machine. It has its own set of instructions, but the machine responsible for executing them is virtual: it’s just another process on your computer. This simple fact has profound, and often underrated, consequences.

The instruction set used by WebAssembly is portable: it’s impossible to express something in WebAssembly bytecode that will work on one machine but not another. Stack-based virtual machines like WebAssembly are fast because the code that manages the stack and performs operations on it is very simple and can be highly optimised. WebAssembly binary files (.wasm files; I may refer to WebAssembly simply as “wasm” throughout this post) are also small. Depending on what you’re building, fully functioning wasm files are measured in kilobytes, not megabytes or gigabytes.


WebAssembly is primitive, but in a good way. I’ve always been a fan of languages that do less: languages whose designers have taken the time and the effort to whittle down what should not be in a language. Much of what makes WebAssembly so powerful—as I’ll discuss throughout this post—is what it cannot do.

The first thing developers often notice (and fear) about wasm is that the only data types are numbers. Functions can only accept and return values that are either integers or floats, and of those, only 32- and 64-bit values are allowed.

This means that things like strings, hash maps, arrays, trees, and tuples (all the goodies we take for granted on a daily basis) are not part of the core WebAssembly specification. It is instead left up to higher-level languages to translate them into WebAssembly bytecode during compilation.
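To make this concrete, here is a minimal sketch, in Python standing in for a compiler’s generated glue code, of one common convention: a string is copied into the module’s linear memory (a flat array of bytes) and handed to the module as a (pointer, length) pair of integers. The function names and the single 64 KiB memory page are illustrative assumptions, not part of the spec.

```python
# Simulates how a host can share a string with a wasm module: the bytes
# live in linear memory, and the module itself only ever sees two i32
# values — an offset and a length.

LINEAR_MEMORY = bytearray(64 * 1024)  # one 64 KiB wasm memory page

def write_string(offset, text):
    """Host-side helper: copy UTF-8 bytes into linear memory."""
    data = text.encode("utf-8")
    LINEAR_MEMORY[offset:offset + len(data)] = data
    return offset, len(data)  # the (i32, i32) pair the module receives

def wasm_strlen(ptr, length):
    """A 'module-side' function: it works purely on numbers and raw bytes."""
    return len(LINEAR_MEMORY[ptr:ptr + length].decode("utf-8"))

ptr, length = write_string(0, "hello wasm")
print(wasm_strlen(ptr, length))  # prints 10
```

Real toolchains generate exactly this kind of marshalling code for you, which is why you can pass strings between, say, Rust and JavaScript without ever seeing the raw byte juggling.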

Let’s see what WebAssembly actually looks like at the core. Let’s assume that we’ve got a portion of our .wasm binary file that contains the following bytes:

0x41 0x09 0x41 0xA0 0x6A

This isn’t necessarily spec-accurate (the binary format has some pretty hairy encoding requirements that would just confuse us at this point), but this will suffice as an example. Let’s break it down:

  • 0x41 - i32 constant
  • 0x09 - the value 9
  • 0x41 - i32 constant
  • 0xA0 - the value 160
  • 0x6A - i32 add

The first instruction is an i32.const instruction (remember that WebAssembly only supports i32, f32, i64, and f64 numbers). This places the constant value 9 on the stack. The next instruction places the value 160 on the stack. The third instruction is i32.add, which pops two values off the stack, adds them, and places the sum back on the stack. So, when the virtual machine is done processing these bytecodes, the value 169 will be on the stack.
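The walkthrough above can be sketched as a toy interpreter. This is a simplification under the same caveat as the byte sequence itself: the opcode values match the example (0x41 for i32.const, 0x6A for i32.add), but real .wasm files encode immediates with LEB128, which is deliberately skipped here.

```python
# A toy stack-based interpreter for the example byte sequence above.
# Not spec-accurate: immediates are read as single raw bytes rather
# than LEB128-encoded integers.

def interpret(code):
    stack = []
    pc = 0
    while pc < len(code):
        op = code[pc]
        if op == 0x41:       # i32.const: push the next byte as a value
            pc += 1
            stack.append(code[pc])
        elif op == 0x6A:     # i32.add: pop two values, push their sum
            b = stack.pop()
            a = stack.pop()
            stack.append((a + b) & 0xFFFFFFFF)  # wrap like a 32-bit int
        else:
            raise ValueError(f"unknown opcode {op:#x}")
        pc += 1
    return stack

print(interpret([0x41, 0x09, 0x41, 0xA0, 0x6A]))  # prints [169]
```

The loop-over-opcodes structure is the whole trick: this is why stack machines are so simple to implement and optimise.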


I’ve mentioned already that none of the instructions in the WebAssembly specification are specific either to an operating system or to a CPU architecture. This means that, assuming the host runtime (e.g. web browser or custom embedder) conforms to the specification, the same WebAssembly file can be interpreted anywhere, regardless of the operating system or CPU architecture.

This has huge implications, because anything that you can express in WebAssembly bytecode can be compiled once and deployed to multiple targets without modification. Most of us have heard the “write once, run anywhere” mantra made popular by proponents of Java bytecode, but wasm isn’t just another JVM or .NET CLR. For one thing, Java bytecode isn’t really portable: different JVMs can perform in radically different ways that break portability. Microsoft’s .NET Framework makes similar claims about portability, but both the JVM and the .NET CLR suffer from having (in my opinion) too many instructions to be portable, many of which actively undermine it. For example, both the JVM and the .NET Framework allow access to the operating system, which immediately makes portability (and security, as I’ll discuss) a problem.


The job of the host runtime is simple: read opcodes, manage the stack and linear memory, and perform whatever task each opcode indicates. It is this simplicity that allows a WebAssembly file to be processed so incredibly quickly. While just-in-time (JIT) compilers are available that will compile a wasm module into native code, runtimes can be fast even when relying solely on interpretation.


Another way WebAssembly performance is enhanced is that wasm is streamable: the instruction set and the organisation of code within a .wasm file are designed so that a runtime can start processing the file before it has finished downloading.

An interpreter can begin work on the first instructions in a WebAssembly file while the rest is still arriving. It doesn’t need to worry about jump instructions pointing to locations that haven’t yet been downloaded, or attempts to access as-yet-undiscovered resources. There’s a subtle beauty to the organisation of a WebAssembly file that allows for this kind of streaming, and all of the major browsers that support WebAssembly also support streaming compilation.


Wasm is tiny. Even the language that typically produces the largest WebAssembly binaries (Rust) still produces files that are orders of magnitude smaller than Docker images, and far smaller than the standalone, OS-and-CPU-specific binaries produced by compiling languages like Go, and indeed Rust, natively. As I’ll discuss in my next blog post on WebAssembly, a number of frameworks and custom embedders are taking advantage of WebAssembly’s speed and small size to support hundreds or thousands of small modules running inside a single host. This kind of compute density simply isn’t possible using the languages and frameworks available today for “traditional” compilation targets.


WebAssembly is secure by default, and this is down to what the language and specification leave out. First and foremost, a WebAssembly module is reactive: it cannot do anything until and unless the host runtime invokes it. Secondly, wasm modules don’t have access to the host runtime’s memory; they use their own private linear memory space, which ultimately boils down to one big vector of bytes.

WebAssembly does not have built-in instructions to access the file system, write to sockets, manipulate host memory, access network services, or interact in any way whatsoever with the operating system. While standards like WASI allow limited operating system access, this access is also secured using capability tokens, and host runtimes still have the option of simply denying access to WASI-based function calls.
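This deny-by-default model can be sketched in ordinary Python: a hypothetical host runtime hands a module only the functions it has been explicitly granted, so anything not granted simply does not exist from the module’s point of view. All of the names here (`Host`, `grant`, `call`) are illustrative inventions, not the API of any real wasm runtime.

```python
# A toy model of capability-based hosting: the "module" can only call
# what the host explicitly granted at instantiation time. Everything
# else is denied by default.

class Host:
    def __init__(self):
        self._granted = {}

    def grant(self, name, func):
        """Explicitly hand the module one capability."""
        self._granted[name] = func

    def call(self, name, *args):
        """The only way module code can reach the outside world."""
        if name not in self._granted:
            raise PermissionError(f"capability '{name}' not granted")
        return self._granted[name](*args)

host = Host()
host.grant("log", print)          # the module may log...
host.call("log", "hello")         # works

try:
    host.call("open_file", "/etc/passwd")  # ...but never touch the file system
except PermissionError as e:
    print(e)
```

Contrast this with the JVM or .NET model criticised earlier, where OS access is available by default and must be fenced off after the fact.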

Wasm modules are often pure compute, with either no side effects or side effects that are strictly controlled by the host runtime. Side effects allowed by the browser runtime include access to JavaScript APIs through shims, and the ability to manipulate the DOM (also through proxy code, often created by code-generation tools).


WebAssembly has only been around a few years, and already it’s in every one of our browsers, whether we were aware of it or not. Companies like Fastly and Cloudflare are experimenting with running WebAssembly at the edge, and in my next article I’ll be talking about how myriad industries are looking at using WebAssembly in the cloud.

The characteristics that we have long considered the holy grail of computing (small size, portability, security, performance) are all things that we get with WebAssembly. I hope that, if you haven’t already, you’ll go out and start playing with this new technology, because it is very likely to grow to be in or around everything we do in the future as software developers.
