Edge Services Part 2: Edge Platforms and Services Evolving into 2030

IEEE Future Networks Podcasts with the Experts
An IEEE Future Directions Digital Studios Production
 

 


 
Following on from the podcast The Edge Ecosystem From Now Through 2030, posted in March 2021, four edge subject matter experts now discuss Edge Automation Platform and Edge Service Platform frameworks. An edge service comprises the platform and applications that are distributed and delivered to consumers and enterprises. This podcast envisions use cases enabled by current and coming network generations, examines the impact of heterogeneous infrastructure on edge platforms and services, and considers a unified programming model to simplify development across diverse architectures. Anticipated challenges at the edge include networking topologies, scalability, standardization, and security, especially at the virtual level. These are the building blocks for the era of domain-specific services and futuristic edge-native solutions.

 


 

Subscribe to our feed on Apple Podcasts, Google Podcasts, or Spotify

 

Subject Matter Experts

Sujata Tibrewala
OneAPI Worldwide Community Development Manager
Intel

 

 

Prakash Ramchandran
Secretary & Founding Director
Open Technology Foundation

 


TK Lala
Founder
ZecureZ Consulting Company

 

 

Frederick Kautz
Head of Edge Infrastructure
doc.ai

 



Podcast Transcript 

Host: Welcome to the IEEE Future Networks Podcast Series, an IEEE Future Directions Digital Studio Production. In March, we convened four edge computing subject matter experts to discuss the edge ecosystem from now to 2030. Today, these experts will speak to the evolving edge automation platform and edge service platform frameworks. Our subject matter experts are with us today in their capacity as co-chairs of the Edge Automation Platform Working Group of the Future Networks International Network Generations Roadmap. As well, Frederick Kautz is the Head of Infrastructure for doc.ai, TK Lala is the Founder of ZecureZ, a consulting company in edge computing and cybersecurity, Prakash Ramchandran is the Founding Director and Secretary of the Open Technology Foundation, and Sujata Tibrewala is the OneAPI Worldwide Community Development Manager at Intel. To our panel, thank you for sharing your time and expertise. Let’s begin our conversation today by asking the question-- how do 5G and follow-on next generation networks 6G and beyond embolden and perhaps fertilize edge use cases?

Prakash Ramchandran: Thank you very much for the question. What I see from the industry, being part of the next generation roadmap, is that there is an evolution beyond 5G. At the level of 5G, we are at Release 15 and Release 16 now, moving toward Release 17 in 3GPP. At the same time, we are seeing network throughputs multiply; the 5G target is 10x to 100x over 4G. So, there are a lot of changes on the throughput side. But as far as the edge is concerned, we are focused on the latency side: latency, latency, latency, as we call it, very similar to real estate's location, location, location. So, we see that 5G to 6G, as well as WiGig and WiFi 6, are all looking first for throughput, followed by low or optimized latency, and then power consumption and energy saving. Now, with respect to the next generation, we see that, whereas the edge automation platform deals more with the underlay, the resource angle, we have moved toward edge service platforms. Our directional sense, based on the volume of data that is going to be moving to the edge, is that we will have more to do with how we get access at the edge, or access to the edge, for the various data networks. End to end-- because the edge is anywhere between the user device, or the users themselves, whether in a factory or an enterprise, and the core, and the cloud is beyond the core in the data centers-- there is a requirement that in 4G we used to call local breakout. Instead of that, we need an edge user plane function, and that is based on the various access gateways. So, you can have a WiGig gateway, a WiFi 6 gateway, or an access gateway that caters to eNodeB as well as gNodeB.
So, basically, eNodeB and gNodeB are the respective 4G and 5G cell towers, which come in between, and then you have the portion which, on the radio side, we call the fronthaul, or on the fixed network side, fiber to the home, out to the optical networks and fiber, etcetera. So, essentially, there is a convergence of different access on one side, and on the other a distribution of edge data centers-- what we call micro to nano data centers, depending on the form factor and the location where they are. So, obviously, we see that these edge use cases require the technology to amplify and accelerate computation in proximity to where you want to consume the processing. What we see is that it's all data-driven. We used to treat voice separately-- for voice alone, peer-to-peer was good-- and voice has disappeared into data now, because everything is data now. So, essentially, we have to look at what the stacks are now and how they are going to evolve: whether they are going to stay stuck where they were, or how they change. From the 5G-and-beyond perspective, we see that cloud native services, as you see them in the cloud, are coming to the edge, and since they are coming to the edge, how do we slice and dice the various networks to provide the radio slice and map it to the transport relays and to the data network-- which in this case is local, meaning it is at the edge, not in the cloud facility beyond the master data center, but toward the nano or micro data center? How do we support these as native services? Those are the various issues we see coming up as we move from the automation platform, where we consider Kubernetes orchestration the key, to the edge service platform, where you have more to deal with the services-- it is multi-service, multi-tenant, multi-application-- and then, obviously, you also have multiple ways to handle security, encryption, and authentication.
So, from all this, the edge service platform framework is what we are trying to build and what we are moving towards.

TK Lala: Just to add to that, I will talk briefly on the 6G portion. We're already seeing how edge use cases are being fostered by 5G, right? 5G's ultra-low latency and higher bandwidth. The same thing continues: 6G is going to bring in, for example, hundreds of gigahertz of analog bandwidth, which translates in the digital domain to 1 terabit per second, or close to it. It's not unthinkable-- even though it seems like quite a bit right now, it is happening-- and once that happens, with latency at the same time going down further, toward sub-microsecond levels, those are going to tremendously help the AI and ML portion of edge intelligence, because edge intelligence is going to be powered by AI/ML. What that means is that even though we'll have TinyML, or small-memory-footprint models, on different platforms, which are ideal for edge cases, they're going to be fostered by much higher bandwidth to connect with each other and form a kind of mesh network, and that scales well and provides the intelligence and automation that is needed in edge computing. So, AI and ML are the centerpiece of edge computing, and will be more and more so in the 6G-and-beyond domain, and 6G with ultra-high bandwidth and very low latency would help this growth tremendously. This will bring in the intelligent internet of things and also mobile software-- like the liquid software we are hearing about, that will flow from one device to the other. This is going to propel all these things, making edge computing very compelling and very useful for industry.
Brian Walker: This next question is for Sujata. What is the impact of heterogeneous infrastructure innovation in industry? How do you characterize the impact of heterogeneous infrastructure innovation for edge platforms and services?

Sujata Tibrewala: So, as we have been discussing, edge services have multiple use cases, and they work across diverse platforms, so heterogeneous computing becomes key to supporting that. The applications may need to sit in a micro data center, in the cloud, or on the user device, and each of these locations could have a variety of hardware platforms: CPUs, GPUs, FPGAs, or any number of accelerators that are on the market. Today, if a developer is writing code for a CPU, there is one set of libraries and languages they use. The case is similar with GPUs, and in some cases these tools, libraries, and languages are also proprietary, meaning they are not open source. So, there is a need in the industry to open this up, so that when a developer writes software for one set of hardware, it can be easily ported to another set of hardware and can work across different architectures. You can think about code running on CPUs, GPUs, and FPGAs at the same time. There is an industry initiative going on right now called OneAPI, launched last year. It is based on SYCL, which has been around for a while and has become a de facto standard for heterogeneous computing; FPGA developers in particular find it easier to program using the SYCL programming paradigm versus the typical languages they have been using, which have a huge learning curve. So, what we see is that the emergence of this sort of open standard, open source software that can run on multiple platforms at the same time, will open up the edge ecosystem and let developers write their application once and not worry about which particular hardware it is going to run on.
Of course, some hand optimization may still be required, but at least the basic, initial barrier to entry to code on a different platform is removed.
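The "write once, run on any architecture" idea Sujata describes can be sketched in a few lines. This is a minimal illustrative sketch, not the OneAPI or SYCL API: the `Backend`, `REGISTRY`, and `dispatch` names are hypothetical, and host execution stands in for real device offload.

```python
from typing import Callable, Dict, List

class Backend:
    """One backend per architecture; a real runtime would JIT/offload."""
    def __init__(self, name: str):
        self.name = name

    def run(self, kernel: Callable[[], List[float]]) -> List[float]:
        # Host execution stands in for CPU/GPU/FPGA offload here.
        return kernel()

# Hypothetical registry of available architectures.
REGISTRY: Dict[str, Backend] = {n: Backend(n) for n in ("cpu", "gpu", "fpga")}

def saxpy(a: float, x: List[float], y: List[float]) -> Callable[[], List[float]]:
    # Single-source kernel: written once, dispatched to any backend.
    return lambda: [a * xi + yi for xi, yi in zip(x, y)]

def dispatch(device: str, kernel: Callable[[], List[float]]) -> List[float]:
    # Select a backend by name, falling back to CPU if unavailable.
    return REGISTRY.get(device, REGISTRY["cpu"]).run(kernel)

result = dispatch("gpu", saxpy(2.0, [1.0, 2.0], [10.0, 20.0]))
print(result)  # [12.0, 24.0]
```

The point is that the kernel (`saxpy`) carries no architecture-specific code; only the dispatch decision changes, which is the separation SYCL-style single-source programming aims for.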

Prakash Ramchandran: Thank you very much. Picking up where Sujata left off on acceleration-- one of the key challenges we have been facing on the platform side is how we enable these services to use the underlying capability of the hardware. Acceleration requires standardization, because unless you have standards for the platform or the infrastructure, these services cannot expand. Standardization has been the key challenge for us: two years ago we dealt with the VM, last year we dealt with the container, and this year we are moving to service standardization across the platforms-- platform to infrastructure and service to platform. In that context, to pick one sample from acceleration: if you have GPUs, FPGAs, and so on, you need labeling to be able to select and place the workloads. To place workloads based on a label, the labels must be exposed so that we know which node and which cluster is being offered to the service. Standardization from the platform side down to the infrastructure side, fortunately, has largely settled: Kubernetes has taken over as the key for container orchestration now. Beyond containers, it is even being used for cloud control planes and edge control planes-- all providers have some kind of Kubernetes offering, whether it's GKE, or AWS has one, or Azure has something. So, all cloud providers are providing Kubernetes at the edge, and the platform standardization question has narrowed down in the footprint of micro data centers. In terms of nano data center standardization, we still have some gaps, but most of them are already covered by Arduino and the embedded toolkits around the Raspberry Pi and the like. So, you do have some hardware infrastructure and the ability to use the various libraries to do the standardization for the platform.
For services, we are looking at challenges. Solving the platform doesn't mean all the standardization is resolved; portability and interoperability continue to be challenges. Before I go to the other aspect, I will cover management and network orchestration, which was one of the key factors. When we started on the edge, everybody thought the VNF/CNF architecture was the architecture-- let's just use it everywhere-- and that is how MEC came into the picture: mobile edge computing, or multi-access edge computing, as it is now called. Multi-access edge computing did not take off in the earlier stages, when it was called mobile edge computing, because of platform issues. Those having been overcome now with Kubernetes, the next question is how telco operators use the edge versus how cloud operators use it, because somebody has to provide that facility, and this is where the standards challenges we had in the industry-- OSS, BSS, and so on-- come in. They are still trying to standardize how containers will be used in onboarding a particular service, how we use slicing and segmentation, and how we differentiate between underlay and overlay. So, there are some challenges in workload distribution and also around API management, where you have the Kubernetes API along with custom resource definitions being used. Because Kubernetes is so standard, everybody understands it, and slowly it seeps in everywhere. So, you want to use the Kubernetes standard for the underlay and extend it, and to extend it, you need a custom resource definition (CRD), which can be consumed by the service or the platform through the API gateway. So, there are API gateway aspects, which is another challenge, but before I move on to that: networking is very important. Without networking, we cannot do anything.
So, I will hand this over at this time to Frederick to talk about network mesh and topology, etc. for routing.
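Prakash's point about exposing accelerator labels so workloads land on the right node can be sketched as a toy scheduler. This is an illustrative sketch only: the node inventory is invented, and in real Kubernetes this matching is done by the scheduler against node labels via `nodeSelector` or node affinity, not by user code.

```python
# Hypothetical inventory of edge nodes and the labels they expose.
nodes = [
    {"name": "edge-node-1", "labels": {"accelerator": "gpu", "zone": "micro-dc"}},
    {"name": "edge-node-2", "labels": {"accelerator": "fpga", "zone": "nano-dc"}},
    {"name": "edge-node-3", "labels": {"zone": "micro-dc"}},
]

def place(workload_selector: dict, nodes: list) -> list:
    """Return names of nodes whose labels satisfy every selector key,
    mirroring how a nodeSelector constrains placement."""
    return [
        n["name"] for n in nodes
        if all(n["labels"].get(k) == v for k, v in workload_selector.items())
    ]

# A video-analytics workload that needs a GPU in a micro data center:
candidates = place({"accelerator": "gpu", "zone": "micro-dc"}, nodes)
print(candidates)  # ['edge-node-1']
```

This is why the labels "must be exposed", as Prakash says: if a node does not advertise `accelerator: gpu`, the workload simply cannot be matched to it, however capable the hardware is.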

Frederick Kautz: Fantastic. So, when we look at networking on the edge, we're seeing multiple things occur. One is this trend towards network slicing. We're going to see network slicing make a very big play in areas where you have very high density. When you look at areas like a stadium, where you need to ensure that certain devices always maintain their bandwidth, things like network slicing are fantastic for those types of use cases, leading into an edge data center that can then do the processing of video or other low latency workloads. When we look at the general use case, the network slicing portion requires interoperability with the service provider, and it does introduce some rigidity there. One way we can relax some of those constraints and push them into the software stack is to start incorporating things such as service meshes. With a service mesh, there are two different layers that we want to focus on. When you're in a cluster, or operating within a small set of clusters, you can establish a service mesh, which looks at things like: what's the identity of the thing I need to connect to? In other words, I need to connect to a database-- how do I find where that database is? It controls things like policy: how do I declare that these things are allowed to communicate? We'll get more into security later on. Then we also have areas around control, and automation of that control-- for example, I have a load balancer; how do I automate the connections coming in across multiple systems? Simultaneously, there is also a series of related things going on at the lower level. The concerns above are more L4/L7-- think TCP and HTTP, or protocols at a similar level.
The thing with these types of application service meshes is that they assume the low-level connectivity already exists, and we cannot make this assumption, especially on the edge. So, we're looking at things such as layer 2 and layer 3 service meshes, which are capable of negotiating connections: what protocol should I use to establish a tunnel? What do I need to go through if I'm doing something like service function chaining? Maybe I need to go through a certain set of firewalls, a certain set of intrusion detection systems, and a specific VPN gateway and concentrator. These things need to be defined in a declarative way, allowing the system to consume those specifications and render them based upon the infrastructure underneath. This creates a very nice layer of abstraction, because the L2/L3 concerns are cleanly taken care of, with your L4 and L7 networking concerns layered on top. In the same way that the OSI model serves us well with standard applications, this model separates out the concerns: what the operators of the clusters must provide, versus the application requirements that the application dev ops teams and developers need to jump into to ensure their needs are being met. It creates a very nice demarcation point between the two.
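The declarative service function chaining Frederick describes can be sketched as a spec that the system renders against the infrastructure underneath it. This is a toy sketch under invented names: the spec format, the tunnel choice, and the endpoint inventory are all hypothetical, not any real mesh's API.

```python
# Declarative spec: WHAT the chain must look like, not where it runs.
CHAIN_SPEC = {
    "name": "secure-edge-ingress",
    "tunnel": "wireguard",  # negotiated tunnel protocol (assumed)
    "functions": ["firewall", "ids", "vpn-gateway"],
}

# Rendering inventory: where each function currently lives (assumed).
INVENTORY = {
    "firewall": "10.0.1.5",
    "ids": "10.0.2.9",
    "vpn-gateway": "10.0.3.2",
}

def render_path(spec: dict, inventory: dict) -> list:
    """Resolve each declared function to a concrete hop, in order,
    failing loudly if the infrastructure cannot satisfy the spec."""
    missing = [f for f in spec["functions"] if f not in inventory]
    if missing:
        raise ValueError(f"unresolvable functions: {missing}")
    return [(f, inventory[f]) for f in spec["functions"]]

path = render_path(CHAIN_SPEC, INVENTORY)
```

The separation of concerns is the point: the application declares the chain once, and re-rendering against a different inventory (another cluster, re-IPed nodes) produces a new concrete path without touching the spec.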

Host: Here’s a question for the panel. What are some of the key trends and challenges for complex edge security in this context?

TK Lala: I will start with this one and then ask Frederick to join me later to add to it. I think by now we have come to the realization that edge computing helps many of our applications and services in a significant manner, but at the same time, it brings quite a bit of complexity on the security side: how do we manage security in this context? One thing we all realize, and should keep in mind, is that keeping information close to where it is generated and where it is consumed helps make security and privacy more robust, by reducing the attack surface exposed for pertinent information. The information doesn't get exposed unnecessarily to the other surfaces it would need to travel through in a more traditional infrastructure. Edge computing infrastructure helps much of the information be contained, or confined, in close proximity, and that is a very positive thing in that sense. However, it brings in a lot of other issues and challenges. First, I want to mention that lightweight, distributed security mechanism designs are very critical to ensure user authentication, access control, model and data integrity, and mutual platform verification for edge intelligence. These are extremely important: lightweight is key, the distributed model is another key aspect of edge computing, and security has to work with this infrastructure.
Next is trusting network topologies for edge intelligent service delivery when considering the coexistence of trusted edge nodes and malicious ones. We have to be able to manage trust where you have trusted nodes and could have some malicious nodes scattered around: how do you keep the trusted nodes operating smoothly without being affected by the malicious ones that will try to infiltrate? Edge intelligence plays a very key role in that part of the identification. The third part is privacy. The privacy issue refers to personally identifiable information (PII), and we have seen that verticals have different rules for preserving privacy. For example, HIPAA in the medical industry is almost like a bible: you can't deal with any medical information unless you maintain the privacy governed by HIPAA, and that may not even go far enough-- it might get even stronger. In general, there is something like GDPR, which is European, and we have very similar rules in California and in many other states. So, privacy is definitely a more and more dominant factor that we have to maintain within edge computing. Again, keeping information in proximity, between origination and consumption, helps significantly, but that doesn't completely solve the problem, because some of this information needs to travel farther than the edge node: even though it is consumed there, some metadata may need to travel somewhere else and be received there. So, anonymity, immutability, non-repudiation, and PKI functions are going to play a significant role. In ultra-secure settings and the defense industry, what we've seen is the cross-domain guard, where there are different classes of information that need to be maintained.
So, even within close proximity, you have multiple levels of users, and because of those users' roles and access capabilities, you have to maintain domains that are separated from each other. You cannot have one class of information mixed with another, and to maintain that, the industry typically uses cross-domain guards and filtration systems; these will probably become much more sophisticated, powered by AI/ML, and will help this grow. Securing APIs, virtualized domains, and devices, along with IoT sensors, are some of the key things that are also going to happen. We are also most likely going to see software-based SIM cards being used, especially in 6G and beyond. One thing to mention here is that in light of network slicing, which is going to be dominant in next generation networks, we are going to see tailored and dynamically configurable-- adaptable security, I call it-- that will adapt to the context, empowered by intelligence. In other words, you don't need one level of security for all applications, all use cases, or all network slices. There is going to be a basic security infrastructure; however, there is also going to be a programmable, adaptable, configurable portion, and ideally it will be dynamically configured, so a human doesn't have to go in and manipulate those things-- that will be done through machine learning and artificial intelligence, making it suitable for the context. Frederick also mentioned earlier the service mesh that is going to be there, and that needs to be protected, no question about it-- containment and protection are needed. So, use-case and context awareness and cognitive security are going to be key, and real-time transparency of security is extremely important.
There's another big can of worms I don't want to go too deep into: post-quantum security. Today's widely deployed cryptography includes both symmetric cryptography and the asymmetric (public-key) cryptography that quantum computing particularly threatens, so post-quantum algorithms are also going to play a role at the edge, because security is going to be turned upside down by that kind of technology, and we need to be prepared for that as well. Blockchain is going to play some role, and identity and access management is a big issue. I'm going to let Frederick expand on some of this, on identity and access management especially. Frederick, if you like?
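The per-slice "adaptable security" TK describes, a shared baseline plus a context-dependent profile, can be sketched as a simple policy composition. This is an illustrative sketch only: the slice names, profile fields, and values are invented for the example, not from any standard.

```python
# Baseline security applied to every slice.
BASELINE = {"mutual_tls": True, "audit_logging": True}

# Hypothetical adaptive profiles, one per slice type/context.
SLICE_PROFILES = {
    "massive-iot":  {"auth": "psk",  "crypto_strength": "lightweight"},
    "urllc":        {"auth": "cert", "crypto_strength": "standard"},
    "medical-data": {"auth": "cert", "crypto_strength": "high",
                     "privacy": "hipaa"},
}

def security_policy(slice_type: str) -> dict:
    """Compose the baseline with the slice's adaptive profile;
    unknown slices get the baseline only (conservative default)."""
    return {**BASELINE, **SLICE_PROFILES.get(slice_type, {})}

policy = security_policy("medical-data")
```

A controller could re-run this composition whenever slice context changes, which is the "dynamically configured, no human in the loop" property TK points to; an AI/ML component would simply be choosing or tuning the profile rather than hand-editing configuration.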

Frederick Kautz: So, when it comes to identity and security, there is a new set of patterns that we're going to see as these technologies continue to progress. The original patterns are based on the assumption of very well-defined perimeters. We originally had on-premise perimeters. We extended that perimeter to the cloud, and we established a fantastic set of perimeter defenses through firewalls and other similar defense mechanisms, in order to protect the applications and the networks they sit on top of. As we proceed into edge computing and see 5G take off, with 6G in the future, and as we see much more IoT in our environments, this set of assumptions we relied upon is no longer as valid as it once was, and the gap that presents itself is an opportunity for attackers to break into systems. So, our assumptions have to change so that we can defend these systems properly. One of the key assumptions we need to change is how we look at identity. Instead of saying the network is the thing we trust, we're going to move towards the workload being the thing we need to defend. The resource is the thing we need to defend; the user is the thing we need to defend. We give each of them cryptographic identities and then use those identities to create policy that defines the interactions between them. These policies are not based upon the network-- though you may include the network as part of the attestation process-- but instead, you have this fundamental thing that you can defend, and that workload is able to establish its connections with other systems and have verifiable proof that it is who it says it is, both for what it's connecting to and for the information it presents to others.
This has other implications for the infrastructure as well. With access control lists, originally your IP and port combination was your identity. Now you shift over to a cryptographic identity, which frees you up to re-IP or reconfigure your network and relocate workloads to where they need to be. It also ties in nicely with additional workload environments we're going to see in the near future, such as secure enclaves, where we can have remote attestation with a cryptographic identity inside, which the enclave can use to identify itself to the outside world, even in a more hostile environment. Some of the projects coming up to help here-- they're not the only ones-- are within the CNCF: we're seeing the SPIFFE specification with its SPIRE reference implementation, and we're also seeing things like Open Policy Agent come along, which provides a language for declaratively defining these types of interactions. These are being added across different parts of the infrastructure, ranging from the application at the L7 layer down to the things that control the L2 and L3 layers. In short, what we're seeing is a move towards what's being referred to as a zero trust security model, where you constantly validate and verify the workloads and the connections attached to them. We believe this is going to be a fundamental shift in how security is handled in the future, not only at the service provider level, but also at the enterprise application level.
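The shift Frederick describes, from IP/port identity to workload identity, can be sketched as a default-deny policy check keyed on identities rather than addresses. This is a toy sketch: the SPIFFE-style ID strings and the `ALLOWED` table are invented for illustration, and a real deployment would verify these identities cryptographically (e.g., via mTLS certificates issued by SPIRE) rather than trust bare strings.

```python
# Policy is expressed over workload identities, not IPs/ports, so
# re-IPing or relocating a workload never invalidates the policy.
ALLOWED = {
    # (client identity, server identity) pairs permitted to talk.
    ("spiffe://edge.example/frontend", "spiffe://edge.example/db"),
}

def authorize(client_id: str, server_id: str) -> bool:
    """Zero-trust style check: allow only explicitly declared
    identity pairs; everything else is denied by default."""
    return (client_id, server_id) in ALLOWED

ok = authorize("spiffe://edge.example/frontend", "spiffe://edge.example/db")
denied = authorize("spiffe://attacker/pod", "spiffe://edge.example/db")
```

Note the design choice: there is no "inside the perimeter" shortcut; even a caller on the same subnet as the database is denied unless its attested identity appears in the policy.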
Brian Walker: Let’s now talk with Sujata about where people who are listening to this podcast can go if they are interested in getting more information, or if they would like to become involved in this Edge Automation Working Group.

Sujata Tibrewala: So, anyone listening, please go to the INGR website; we will provide the link in the description below. On the INGR website, you can find information about our working group, the EAP or Edge Automation Platform Working Group. As for participating and contributing, we welcome you: we value all sorts of diverse voices, so we'll be happy to have you become part of this group. There is an email address you can write to if you want to join, and that email will also be included in the description below. We look forward to talking with you as part of this group; if not, I hope you find some value in reading our roadmap and in the podcast today. Thank you.

Host: Thank you for listening to this edition of the IEEE Future Networks Podcast with the Experts. Discover more about the IEEE Future Networks Initiative and inquire about participating in this effort by visiting our web portal at futurenetworks.ieee.org.

 

For more information: https://futurenetworks.ieee.org/roadmap
To participate, contact the Edge Automation Platform Working Group via the email address listed on the INGR website.

----