The Edge Ecosystem From Now Through 2030

IEEE Future Networks Podcasts with the Experts
An IEEE Future Directions Digital Studios Production
 

 


 
Four subject matter experts on edge computing gathered to discuss the coming decade of expected developments for the edge ecosystem as enabled by current and future network generations. Contemporary and emerging use cases and evolving technologies are discussed, as well as critical differentiators for edge platforms and services. A key focal point is the interface between hardware and software modules that will make the edge platform easy to build, operate, and consume through provisioning and lifecycle automation. Functional and non-functional requirements of the edge are examined, along with other factors.

From truly programmable IoT, which will make business operations far more agile, adaptable, and responsive, to edge traffic overtaking the cloud, the panel traces the decade ahead and predicts disruption from quantum computing by year ten. The impacts of evolution at the edge will be felt globally.

 


 

Subscribe to our feed on Apple Podcasts, Google Podcasts, or Spotify

 

Subject Matter Experts

Sujata Tibrewala
OneAPI Worldwide Community Development Manager
Intel

 

 

Prakash Ramchandran
Secretary & Founding Director
Open Technology Foundation

 


TK Lala
Founder
ZecureZ Consulting Company

 

 

Frederick Kautz
Head of Edge Infrastructure
doc.ai

 



Podcast Transcript 

Host: Welcome to the IEEE Future Networks Podcast Series, an IEEE Future Directions Digital Studio production. In this episode, we have gathered four edge subject matter experts to discuss the coming decade of expected developments for the edge ecosystem as enabled by current and future network generations. Our subject matter experts are with us today in their capacity as co-chairs of the Edge Automation Platform Working Group of the Future Networks International Network Generations Roadmap. Frederick Kautz is the Head of Edge Infrastructure for doc.ai; T.K. Lala is the founder of ZecureZ, a consulting company in edge computing and cybersecurity; Prakash Ramchandran is the founding director and secretary of the Open Technology Foundation; and Sujata Tibrewala is the OneAPI worldwide community development manager at Intel. To our panel today, thank you for sharing your time and expertise. To get started, can you explain the vision of the Edge Automation Platform Working Group within the International Network Generations Roadmap, now and 10 years down the road?

Sujata Tibrewala: Our working group vision is to capture the ongoing infrastructure and edge service transformation that is occurring across the globe. We want to enable the edge ecosystem by promoting the interface between hardware and software modules that makes the edge platform easy to build, operate, and consume through provisioning and lifecycle automation. We want to make things easy for edge application developers so that they can work on different hardware and make use of software without worrying about the underlying interfaces. The changing ecosystem landscape has widened the scope of our working group to cover edge native services in 2021, so we are likely to continue on this innovation path over the next several years. As the edge overtakes the cloud, we will see more and more microservices and containers take over as part of edge as a service, and further along, we are seeing that serverless computing with AI, machine learning, and deep learning will play an important role by the fifth year. Eventually, we will see the emergence of local utility computing, with disruption coming from quantum computing by year 10. So, in summary: edge microservices, micro data centers, and cloud, all working in tandem and covering different hardware and software modules.

T.K. Lala: To add to that, basically, the crux of edge computing lies in low latency. It's really ultralow latency and super high bandwidth, and both are being fostered now by 5G and the upcoming 6G, as people are noticing. And why does ultralow latency matter? Because there are many applications, as we have seen, that really depend on a just-in-time type of computing. In other words, all the computing needs to be done right now, when it's needed. And with proximity-based computing, if we can bring the computing power closer to where the information is used and generated, that solves a significant amount of the latency problem. There are still other problems associated with the computing itself, which is also changing the computing paradigm; the paradigm is shifting toward photonic computing and also quantum computing, as Sujata just mentioned. So, all these things are coming together to capitalize on the ultralow latency and super high bandwidth of the networks we see evolving on the horizon. One of the key things in edge computing is that applications are moving to containerized infrastructure, microservices, and serverless computing, and these all require a very low latency environment; at the same time, for scalability, they need to be distributed over multiple compute sources. Edge computing brings the computing resources closer; however, the additional resources needed for scalability require adjacent nodes, or even a hybrid cloud environment, to be incorporated when you are trying to scale out vertically or horizontally. All of this poses a significant challenge in how to distribute this infrastructure of containers and microservices so it works properly and provides a service to end users. So, this is the vision we have for how edge computing is going to evolve beyond 5G and into 6G, where terahertz, or at least hundreds of gigahertz, of spectrum, translating to terabits per second in the digital domain, is going to materialize. That brings ultrahigh bandwidth and at the same time a very low latency environment: less than 100 microseconds, then 10 microseconds, even approaching 1 microsecond capability at some point. So, we are embarking on this path, and we are hoping the industry is going to move fairly fast on this; we see that already happening, and we are going to know a lot more as we move along.
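To make the proximity argument concrete, here is a back-of-the-envelope sketch (our illustration, not a figure from the roadmap): signals in optical fiber travel at roughly two-thirds the speed of light, about 200,000 km/s, so distance alone puts a hard floor under round-trip latency before any compute time is spent.

```python
# Back-of-the-envelope: propagation delay alone bounds achievable latency.
# Light in optical fiber travels at roughly 2/3 c, i.e. ~200,000 km/s.

FIBER_SPEED_KM_PER_S = 200_000  # approximate signal speed in fiber

def round_trip_us(distance_km: float) -> float:
    """Round-trip propagation delay in microseconds for a one-way distance."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1e6

for label, km in [("on-premise edge", 0.1),
                  ("metro edge site", 10),
                  ("regional data center", 500),
                  ("distant cloud region", 2000)]:
    print(f"{label:22s} {km:7.1f} km  ->  {round_trip_us(km):10.1f} us round trip")
```

By this arithmetic, a 100-microsecond round-trip budget cannot be met from a site more than about 10 km away, which is why the sub-millisecond targets T.K. mentions imply compute placed at the edge rather than in a distant cloud region.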

Host: So, how do you see evolution for the scope and demarcations for edge computing in the context of cloud and hybrid computing?

Prakash Ramchandran: In the evolution of the edge, we look for similarities and dissimilarities with respect to the cloud. Now, the cloud has proliferated with different players, starting from Amazon, Google, Microsoft, Tencent, Alibaba, and there are regional and national government portals, too. With this variety of players and architectures, we tend to call them multi-cloud or hybrid cloud and talk about the interoperability and portability of the workloads. So, in that context, where is the edge, with respect to the cloud and the hybrid cloud? When we look at the various locations, it ranges from anywhere close to the user up to the core, let's call it the 5G core, where the core is deployed. Then the question becomes: what type of form factors do we need for serving the applications on the edge, as services from the edge? The form factor depends on whether you are at the IoT gateway, which is closest. Aggregation can happen at your home, or in networks that are nearby, or at the power poles, where you see a lot of applications related to intelligent transportation systems that you would interface with. Essentially, the gateways for the edge can be placed anywhere: near the provider, far from the provider, very far from the provider at the aggregator, et cetera. Wherever the edge is located, the form factor there determines what kind of service we are looking at, what kinds of resources we need, and how we instantiate them based on platform availability. Therefore, there is definitely a split between the underlay and the overlay. From the edge computing context, what we see is that around 2024 to 2025, roughly 75 percent of all data consumed, growing at a CAGR of around 10 to 12 percent, will be handled at the edge: most of the workloads currently handled by the cloud will move to the edge, or essentially, even if the cloud keeps handling what it handles, the edge will still have a huge amount of data being handled by its applications and services. In the scope of things, we have seen this year that clusters and nodes have become the common factor for standardization of the EAP platform as a framework for us. That is where we are at this time, and as we move from here toward delivering other services, we need to think about the model we are using and who controls it: what is the control plane, what is the management plane, and how are we handling the user data plane, at least for native services using the edge UPF, the user plane function, or the 5G core user plane functions. The changes happening in the fronthaul, midhaul, and backhaul, together called the x-haul, are accelerating what we need to do at the edge. So, the scope has widened, and, setting the context for the coming year, we will be working on the edge service platform framework for 2021, whereas we finished the edge automation platform in 2020.

T.K. Lala: I think you covered it well. The key thing is, again, the infrastructure. What we are noticing is that new technologies will be coming up on both the software side and the hardware side; we are already seeing some very key in-memory computing and those kinds of things, and also liquid software. These kinds of architectures will help our evolution going forward. There are a lot of challenges, but there are also a lot of enablers helping this, and we are seeing a tremendous amount of growth and progress in these areas. I think it's happening a lot faster than it used to.

Host: Can you speak to some of the use cases that are driving the edge?

T.K. Lala: Let me lead off on that one, and then Sujata will probably join me later. There are quite a few compelling use cases, actually. Almost no one will say no to minimum latency; almost every application we are aware of today can benefit from minimum latency as well as ultrahigh bandwidth. But in particular, the autonomous vehicle, which is starting to happen, is a very compelling case. To achieve autonomous driving, making the vehicle understand the environment, proximity awareness, situational awareness, what is happening around the vehicle, and making safe control decisions in real time is an absolutely essential task. This can only happen in real time, so all the processing and all the communication need to happen in real time. What better example, really, for using edge computing right close to the vehicle, as close as possible? As the vehicle passes through and comes closer and closer to an edge node, that node will be processing and assisting with that information, in addition to the vehicle's own computing, and then hand it over to the next adjacent node, and so forth. So, edge computing does this job in a very harmonious way compared to centralized cloud computing located much further away. Rich sensors, for example cameras, lidars, and so forth, are all going to be used in this edge computing paradigm.
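The node-to-node handover T.K. describes can be sketched very simply. Below is a minimal illustration, with entirely hypothetical node names and roadside positions: the session follows whichever edge node is currently nearest to the moving vehicle.

```python
# Sketch: hand a vehicle's session to whichever edge node is nearest as it moves.
# Node names and positions are hypothetical, for illustration only.

edge_nodes = {"node-A": 0.0, "node-B": 5.0, "node-C": 10.0}  # roadside positions, km

def nearest_node(vehicle_km: float) -> str:
    """The edge node closest to the vehicle along the road."""
    return min(edge_nodes, key=lambda n: abs(edge_nodes[n] - vehicle_km))

serving = None
for position in [0.5, 2.0, 3.5, 6.0, 9.0]:  # vehicle progressing along the road
    node = nearest_node(position)
    if node != serving:
        action = "attach to" if serving is None else f"hand over from {serving} to"
        print(f"at {position:4.1f} km: {action} {node}")
        serving = node  # the adjacent node takes over the session
```

A real system would transfer session state, models, and the radio attachment, not just a label, but the nearest-node logic is the core of the idea.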

Another example is industrial or factory automation, where automation means real-time assembly, real-time control, and so forth in the factory, so that you can minimize downtime. Again, all the processing that needs to be done in a distributed manner within the factory would be done on different computers located in proximity, but at the same time there is an edge computing server that serves the overall computing needed to run the factory and manage the whole automation side of it, reducing the manpower needed, especially for the repetitive tasks that factories and the manufacturing industry have to do.

The third example that comes to mind is cached content delivery. Cached content delivery is for infotainment, a purpose where you have a lot of information being used for entertainment. That content can be cached closer to the frequent users and consumers rather than having to be downloaded from much further away. You want to keep it closer and also process it much faster so that the user has a much better QoE, quality of experience. Gaming services are another one. Gaming is becoming very, very popular, and real-time interactive gaming requires almost instantaneous processing. If several people in a neighborhood are playing games that need very rich graphics and images, they would all be served well by a node in close proximity to them, which is what edge computing is. And obviously, autonomous control: imagine a smart city or a very intelligent neighborhood with all kinds of things going on, electricity coming up at the right time, different types of services happening at the same time. Autonomous control of many of these devices can be done very effectively using intelligent devices, smart devices replacing the traditional IoT, located close by and processed through a nearby proximity server or a serverless computing system at the edge, all automated and distributed across multiple nodes as well, so that everything works in a harmonious manner as a well-orchestrated system. Telemedicine comes to mind; street traffic control is also part of that; farming and agriculture automation is another; AR- or VR-powered education or instruction; entertainment systems could be part of that. They all require a tremendous amount of computing power as well as storage and communication power, and these can all be served from close proximity rather than far away. Surveillance and other monitoring devices would also be helped very well.
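The cached-content case lends itself to a small sketch. The following minimal LRU cache (our illustration; fetch_from_origin is a hypothetical stand-in for a request to the distant origin server) shows the behavior T.K. describes: after the first request, popular content is served from the nearby edge node instead of the faraway origin.

```python
# Sketch of cached content delivery at the edge: an LRU cache in front of a
# distant origin. fetch_from_origin is a hypothetical stand-in for a real fetch.
from collections import OrderedDict

class EdgeCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store: OrderedDict[str, bytes] = OrderedDict()

    def get(self, key: str) -> bytes:
        if key in self.store:
            self.store.move_to_end(key)        # mark as recently used
            print(f"edge hit:  {key}")
            return self.store[key]
        content = fetch_from_origin(key)       # slow path: distant origin
        self.store[key] = content
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)     # evict least recently used
        print(f"edge miss: {key} (fetched from origin)")
        return content

def fetch_from_origin(key: str) -> bytes:      # hypothetical origin fetch
    return f"content for {key}".encode()

cache = EdgeCache(capacity=2)
for title in ["movie-1", "movie-1", "movie-2", "movie-3", "movie-1"]:
    cache.get(title)
```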

Sujata Tibrewala: Thank you, T.K. You covered it really well. As T.K. mentioned, there are all these cases. One thing I would like to add is disaster response systems, for natural disasters like earthquakes, flooding, or storms. Autonomous systems could be leveraged to put an emergency response system in place quickly, communicate with the different users, and coordinate whatever relief efforts need to be coordinated. Apart from that, I would like to point out that even though most of these systems do have low latency and high bandwidth requirements, there is a gradation to that. For example, a disaster recovery system or the traffic control for an autonomous driving system is more demanding in terms of latency than a gaming service is. And things like agriculture and farming systems, even though they contain a lot of devices spread across a huge geography, are not really latency-sensitive, but they do require a lot of connections in the edge system and the ability to bring up a connection and respond to whatever parameters need to be monitored for a farming and agriculture system. So, this variety of requirements across all of these edge use cases is what dictates what is developed as part of the edge automation platform, and that is what we are dealing with in this working group.
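That gradation can be sketched as a simple placement rule: the tighter the latency budget, the closer to the user the workload must run. The budgets and tier latencies below are illustrative assumptions, not figures from the working group.

```python
# Sketch: a use case's latency budget dictates the farthest viable tier.
# All budgets and tier latencies below are illustrative assumptions.

LATENCY_BUDGET_MS = {
    "autonomous driving":     5,
    "disaster response":     20,
    "interactive gaming":    30,
    "video caching":        100,
    "agriculture telemetry": 1000,
}

TIER_LATENCY_MS = [("on-premise edge", 1), ("metro edge", 10),
                   ("regional edge", 40), ("central cloud", 150)]

for use_case, budget in LATENCY_BUDGET_MS.items():
    # choose the farthest (typically cheapest) tier that still meets the budget
    viable = [name for name, latency in TIER_LATENCY_MS if latency <= budget]
    placement = viable[-1] if viable else "no viable tier"
    print(f"{use_case:24s} budget {budget:5d} ms -> {placement}")
```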

Host: What are the functional requirements for the edge? What are the non-functional requirements for the edge? And what are the key differentiators for edge platforms and services?

Prakash Ramchandran: I will handle the functional requirements, and when I've finished I will pass the non-functional ones to Frederick. On the functional requirements, if you look at the essential partition between the underlay and the overlay, the underlay deals with the infrastructure form factor, and in the overlay we talk about functionality, service functionality. Since there are multiple services in the use cases we heard about from both T.K. and Sujata, there is a variety of services, each with different needs for resources, and the functionality of the services differs. Let's start with the IoT use case. If the data is humongous, in the sense that every 15 seconds you get a data sample to mine and it has to be processed, the processing needs to be really fast, in the sub-5-millisecond range I would say, and that means we have to look at whether we can really do it on the spot or whether we need to aggregate, keep some historical data, and build some models to deliver it. So that functional requirement may require us to provide the ability to sense, to aggregate, to store, and to respond back in a timely fashion, in real time, depending on what the need is. If it is an emergency service, that's a different story: at home you can probably respond within a few minutes, but if it is something like a fire emergency, then you may have to respond more urgently. Therefore, the edge use case will require different real-time responses, and what is real time for one may not be real time for another; it depends on how the use case's function evolves and what ability it requires of the edge service to respond. I would add that if there is something to aggregate and you do not have sufficient compute, you may offload it to the cloud, if that is possible, or to the nearest local storage. So, offload could be one factor. Another could be something you want to recognize while driving, for example: you don't want to drift out of your lane, so you have a measuring process that requires a very quick response. You may not have the model at hand, so you may have to apply a pre-processed model, and there is compute acceleration to meet the other requirements we have spoken of. Another factor is a use case like factory automation, which was brought up, where there are ultra-reliability and low-latency requirements. Of course, the functionality then requires that the processing be closer, so the platform will place the edge stack closer to the factory, because then it is faster to process. Maybe the radio unit, the distributed unit, or the centralized unit, whatever the process, will be moved to the edge. So, there may be a requirement for edge caching, there may be a requirement for aggregation; those are the functional requirements derived, obviously, from the service requirements. There are several real-life examples. If we are talking about the need for a training center to run a MOOC, a Massive Open Online Course, then you will require more of an ability to do video streaming, as part of what they call eMBB, Enhanced Mobile Broadband. Caching and various media conversions, where you need to convert voice, video, and data to be able to train people in different ways, require another type of functionality where encoding and decoding, encryption and decryption, and those things come in.
So, every use case comes with its own challenges and requirements. At this stage, I think I should pass it on to Frederick to talk about the non-functional aspects.
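Prakash's on-the-spot-versus-offload reasoning can be condensed into a small decision sketch. Everything here, the thresholds, the task fields, and the capacity figure, is an illustrative assumption, not part of the roadmap.

```python
# Sketch of the offload decision: process on the spot when the deadline is
# tight, otherwise aggregate and offload to nearby storage or the cloud.
# Thresholds, task fields, and capacity are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline_ms: float    # how quickly a response is needed
    compute_units: float  # estimated work required

LOCAL_CAPACITY = 10.0     # compute units the edge node can spend per task

def place(task: Task) -> str:
    if task.deadline_ms <= 5 and task.compute_units <= LOCAL_CAPACITY:
        return "process locally (hard real time, e.g. lane keeping)"
    if task.deadline_ms <= 5:
        return "process locally with a pre-processed model (no time to offload)"
    if task.compute_units > LOCAL_CAPACITY:
        return "offload to cloud or nearest local storage, then aggregate"
    return "process locally, keep history for later model building"

for t in [Task("lane keeping", 2, 4), Task("model training", 500, 80),
          Task("sensor sample", 15000, 1)]:
    print(f"{t.name:14s} -> {place(t)}")
```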

Frederick Kautz: Sure. Many of the non-functional requirements that you would expect in software systems still apply. You still have to look at things like the quality and reliability of the system you're using, how you trace it, and how you handle the latency and throughput requirements that are present. There are some key differentiators, and some of them were touched upon already. For example, when you start looking at something like portability: we're going to have different types of architectures that we'll be able to deploy to on the edge, or at least that's what we're expecting. It's very likely we'll see ARM, we'll see RISC-V, we'll see Intel-based systems. So the portability of the platform becomes very important, because we don't want to be maintaining a large array of systems that are fundamentally different from each other. Even though we may compile to specific targets, we expect there to be common platforms-- for example, we're seeing Kubernetes gaining a heavy lead in some of these areas-- and common ways to deploy and manage these systems, even though the actual binaries themselves may be different. Simultaneously, we're also seeing changes in things like density. There was a discussion about radios before, about how we'll be able to have this higher-density set of radios. If you look at what this means from a data perspective, we're going to be able to collect and process data closer to the user, at the edge. One fundamental difference is that there are pieces of data today that may not be worth aggregating or collecting, or that you may now be more comfortable working with, because you can process them at the edge, where the edge could literally be on premises at the customer site, so they don't have to share their data with third parties. Or it may be processed in an edge data center located within a few miles. This means you could aggregate local information; in the case of the self-driving car, it could be local traffic information co-located inside a regional area, literally a data center very close to you, without having to transfer that huge volume of information to a cloud, which ends up saving bandwidth and improving the overall reliability and latency properties. We already saw an early version of this a few years ago with Netflix, where Netflix created a box, a storage device, that they place at the ISPs so the movies don't have to be transferred all the way from Amazon every time; instead, the data can be served from that box. We're expecting these types of patterns to be commoditized so that you can bring your own type of compute, data, and networking to the user based upon your application's needs, as opposed to being limited by what the infrastructure of the ISP, your own premises, or the cloud is able to provide you.
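The portability point, one logical deployment across ARM, RISC-V, and Intel nodes, is handled in practice by publishing per-architecture builds behind a single name, the way multi-arch container manifests work. A minimal sketch of that resolution, with hypothetical image names:

```python
# Sketch: one logical image name resolved to a per-architecture build,
# the way multi-arch container manifests work. Image names are hypothetical.

MANIFEST = {  # hypothetical multi-arch manifest
    "edge-app:1.0": {
        "amd64":   "edge-app:1.0-amd64",
        "arm64":   "edge-app:1.0-arm64",
        "riscv64": "edge-app:1.0-riscv64",
    }
}

def resolve(image: str, node_arch: str) -> str:
    """Pick the build matching the node's CPU architecture."""
    try:
        return MANIFEST[image][node_arch]
    except KeyError:
        raise RuntimeError(f"no {node_arch} build published for {image}")

for arch in ["amd64", "arm64", "riscv64"]:
    print(f"node arch {arch:8s} -> pulls {resolve('edge-app:1.0', arch)}")
```

The platform layer (Kubernetes, in Frederick's example) performs this resolution per node, so operators manage one deployment even though the binaries differ.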
There are also areas around cost that we're expecting to change. From a cost perspective, we're very likely to see spot instances and marketplaces appear nearby, where we can regulate what is currently running or not running based upon feedback pressure from the local environment and demand, while at the same time preserving critical lifesaving technologies: suppose you have a 9-1-1 call going through, we can ensure those things get priority. And at the same time, it's set up so that as the consumer you're able to pick and choose the things you want access to, because you're in control of the placement, not a third party or an ISP.
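That priority guarantee can be sketched as a simple admission rule on a capacity-limited edge node: spot workloads fill spare capacity and are preempted when something critical, like an emergency call, arrives. Names, priorities, and the capacity figure are illustrative assumptions.

```python
# Sketch: critical workloads preempt spot workloads on a capacity-limited node.
# Workload names, priorities, and capacity are illustrative assumptions.
import heapq

CAPACITY = 4
running: list[tuple[int, str]] = []  # min-heap of (priority, name); low = spot

def admit(name: str, priority: int) -> None:
    """Admit a workload, preempting the lowest-priority one if the node is full."""
    if len(running) < CAPACITY:
        heapq.heappush(running, (priority, name))
        print(f"started   {name}")
    elif running[0][0] < priority:
        _, evicted = heapq.heapreplace(running, (priority, name))
        print(f"preempted {evicted} to start {name}")
    else:
        print(f"rejected  {name} (no capacity, nothing lower priority)")

for name, prio in [("spot-batch-1", 1), ("spot-batch-2", 1), ("game-server", 5),
                   ("spot-batch-3", 1), ("emergency-911-call", 10)]:
    admit(name, prio)
```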

Host: Where could our listeners go for more information? Or if they wanted to volunteer to participate in the Edge Automation Platform Working Group?

Sujata Tibrewala: Yeah, definitely. I believe we will have the links for the INGR website in the description for the podcast, so that is a good place to start. The EAP Working Group is listed there, and our roadmap from the previous year is already published. The roadmap for this year will be published shortly, so please be on the lookout for that. As far as participating is concerned, there is an email list that you can send an email to, and one of us will contact you. That email will also be in the description link below, so please send an email to participate in the Working Group. We would really love to hear from people in the industry and include as many diverse opinions as possible.

Host: Thank you for listening to this edition of the IEEE Future Networks Podcast with the Experts. Discover more about the IEEE Future Networks Initiative and inquire about participating in this effort by visiting our web portal at futurenetworks.ieee.org.

 

For more information: https://futurenetworks.ieee.org/roadmap
To participate, contact the EAP Working Group via the email address in the podcast description.

----