Building a Cloud Native edge platform
Container Solutions is currently working with Bell Mobility, one of Canada’s largest telco providers, building out their edge network. This will provide a platform for developers that minimises latency and gives faster access to larger volumes of data for both their own 5G network and customer-focused use cases like graphics-intensive gaming or IoT applications.
Founded in Montréal in 1880, Bell is wholly owned by BCE Inc, which provides innovative broadband wireless, TV, Internet, and business communication services across Canada.
The edge network, or MEC (Mobile Edge Computing) as it’s often called, brings resources closer to the user, allowing for faster data processing and better performance than conventional cloud computing.
How is edge different from cloud?
With all the benefits MEC brings, it comes with its own, very specific set of challenges. While hyperscalers like AWS and GCP give you almost unlimited resources, edge locations are by definition more limited. It’s up to the edge provider to make sure the right types of resource can be made available to potential customers.
The next challenge is the ability to spin up workloads across different locations, move them between different edge sites, and automatically pick the best locations depending on factors like distance or resource availability: ideally with a self-service environment to provide more flexibility and autonomy to users.
To enable all this you also need the ability to provision edge sites as part of the platform, which leads to interesting questions about how to "cloud-natify" hardware. Provisioning and maintaining servers and network equipment adds an additional layer of complexity.
The drivers for edge computing are simple. Data usage is growing exponentially, and more and more use cases (including AR/VR, connected cars, and video streaming) rely on large volumes of data being transmitted. 5G, the newest advance in wireless technology, is meant to ensure that these needs can be met between devices and nearby radio towers. Moving compute to the edge then shrinks the distance between customers and the processing of their data even further, ensuring ultra-low latency.
Can you be agile at the edge?
Creating edge data centers or sites won’t always be as quick and flexible as spinning up a cluster in the cloud, but with virtualisation, smarter networking equipment, and software-defined networking, the edge can be a great extension to larger data centers.
Telco operators have been using Virtual Network Functions (VNFs) for almost ten years. These are the virtualised versions of network functions (like switches, gateways, load balancers, firewalls and many others). As a next logical step we are now moving from VNFs to Cloud Native Network Functions (CNFs), their containerised equivalent, which are able to run on Docker or any other container runtime. While we’re still in the early stages of moving everything to CNFs, there are many initiatives supporting the effort. The CNCF testbed for CNFs is just one example.
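To make the idea concrete, a CNF is packaged and deployed like any other containerised workload. The sketch below is purely illustrative (not Bell’s actual stack): a containerised proxy stands in for a network function such as a load balancer, and the image and names are placeholders.

```yaml
# Hypothetical CNF packaged as an ordinary Kubernetes Deployment.
# Image and names are placeholders chosen for illustration only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cnf-loadbalancer
  labels:
    app: cnf-loadbalancer
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cnf-loadbalancer
  template:
    metadata:
      labels:
        app: cnf-loadbalancer
    spec:
      containers:
      - name: proxy
        image: envoyproxy/envoy:v1.24-latest   # example proxy image
        ports:
        - containerPort: 10000
```

Because the function is just a container, it can be scheduled, scaled, and upgraded with the same tooling as any other workload, which is the core appeal of CNFs over VM-based VNFs.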
What is a Cloud Native platform?
Often people assume a Cloud Native platform is just your old environment, packaged with the latest cool tech (let’s say Kubernetes). But it is so much more than that. A Cloud Native platform unifies infrastructure and workload automation with self-service capabilities whilst still allowing for scalability and flexibility in the tooling used.
Whilst a platform like this often relies on a cloud infrastructure for scaling and faster access to new compute resources, it’s not impossible to replicate at the edge.
A key difference with Telco edge platforms is how much they depend on the underlying hardware. On most cloud platforms the servers are abstracted away behind managed services, but here you still need to configure them for certain containerised workloads, whether that means enabling real-time kernels or hugepages, configuring SR-IOV, or applying other networking and performance enhancements. The same goes for enabling GPU workloads and passing GPUs through to containers or VMs as vGPUs.
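As a minimal sketch of this hardware dependency, here is how a Kubernetes workload can request pre-allocated 2 MiB hugepages from a node whose kernel has been configured to expose them. The image and names are hypothetical; only the `hugepages-2Mi` resource and the `HugePages` volume medium are standard Kubernetes.

```yaml
# Hypothetical Pod consuming node-level hugepages.
# Assumes the node's kernel pre-allocates 2 MiB hugepages.
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-workload
spec:
  containers:
  - name: packet-processor
    image: example.com/packet-processor:latest   # placeholder image
    resources:
      requests:
        memory: 1Gi
        hugepages-2Mi: 512Mi
      limits:
        memory: 1Gi
        hugepages-2Mi: 512Mi
    volumeMounts:
    - mountPath: /hugepages
      name: hugepage
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
```

If the node hasn’t been configured with hugepages, the Pod simply stays unschedulable, which is why node provisioning has to be part of the platform rather than an afterthought.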
Other concerns for an edge platform, such as observability and security, are less unusual, and tend to be similar to those in any other production environment.
Ways of working
While providing technical expertise, Container Solutions also helps the team with ways of working. This includes risk reduction, short- and long-term experimentation, and quick iterations to increase innovation whilst still delivering on the actual platform.
After starting with short research stints into different pre-boxed options, including proprietary solutions such as Robin.io and OpenShift Container Platform, and open source options focussing on telco-oriented platforms such as OpenNESS and OpenCORD, the team determined a preliminary plan for a first MEC platform.
These short experiments continue to happen, evaluating new and additional options. This serves both to re-evaluate alternatives we didn’t initially choose for lack of features or maturity (especially open source tools, which grow and change quickly) and to extend the capabilities of the platform itself. The experiments range anywhere from running VMs inside Kubernetes, to additional operators, to managing different types of clusters with the same tools.
Then, after the first research stages, longer PoCs with different use cases were run, looking specifically at gaming and 5G workloads.
After establishing baselines, Bell’s core team began creating the first MVP to be deployed across different sites, whilst the Container Solutions team continues to work with different groups at Bell Canada on achieving the next step in Telco networking.
Container orchestration, building pipelines and driving GitOps for declarative provisioning is just part of the work of bringing Cloud Native to the edge. As we’ve explored elsewhere on WTF, culture and ways of working are equally important in ensuring a successful project.
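As one hedged illustration of the GitOps piece (using Argo CD as an example tool, not necessarily the one chosen in this project), declarative provisioning means a cluster continuously reconciles itself against manifests held in Git. The repository URL and paths below are placeholders.

```yaml
# Example Argo CD Application: the cluster syncs its state from Git.
# Repository URL, paths, and names are placeholders for illustration.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: edge-site-workloads
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/edge-manifests.git
    targetRevision: main
    path: sites/site-a
  destination:
    server: https://kubernetes.default.svc
    namespace: workloads
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift on the cluster
```

With a resource like this per edge site, rolling out or updating workloads across many locations becomes a Git commit rather than a manual operation.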
Combining Cloud Native best practices with the ability to deploy workloads across edge locations, the final platform is meant to pave the way for 6G.
The utopia of being able to deploy workloads across cloud and edge, across different vendors and locations without having to worry about the underlying infrastructure is coming one step closer to reality with MEC.