Massive MIMO’s Path Ahead: Sky High Scalability, Intelligent Control, and AI/ML

IEEE Future Networks Podcasts with the Experts
An IEEE Future Directions Digital Studios Production




Trusty old MIMO is becoming Massive MIMO and is expected to carry networking through the next two decades, but it will have to rise to many challenges. Massive MIMO must evolve to interwork with thousands of antenna elements while managing power in Internet of Things applications, adapting to evolving hardware, meeting ever-lower latency targets, and embracing emerging technologies like machine learning, artificial intelligence, and intelligence at the edge, all while security remains an ever-present concern. What will it take to determine whether the technology can meet this envisioned future, and what does the path ahead involve? Known technology gaps lie in efficient receiver architectures, power efficiency, and CMOS. Three Massive MIMO subject matter experts discuss the challenges, as well as the path to solutions in beamforming algorithms, resource management, computationally efficient channel estimation, simultaneous multiple transmit/receive beams, and new deep learning techniques.


Subject Matter Experts

Chris T. K. Ng
Co-chair Massive MIMO Working Group 
International Network Generations Roadmap
IEEE Future Networks 

Director, Systems Engineering Products
Blue Danube Systems


Webert Montlouis 
Co-chair Massive MIMO Working Group 
International Network Generations Roadmap
IEEE Future Networks 

Johns Hopkins University
Faculty, ECE
Chief Scientist, Applied Physics Lab


Rose Qingyang Hu
Co-chair Massive MIMO Working Group 
International Network Generations Roadmap
IEEE Future Networks 

Associate Dean for Research | College of Engineering
Professor | Electrical and Computer Engineering Department
Utah State University



Brian Walker
Host




Subscribe to our feed on Apple Podcasts, Google Podcasts, or Spotify


Podcast Transcript 


Brian Walker: Firstly, thank you all for taking time to contribute to the IEEE Future Networks Podcast Series. Let's begin with a question for Webert. What are the challenges of controlling massive mobile elements? For example, multitudes of antenna elements?

Webert Montlouis: In the process of controlling the multiple antenna elements in an array, we have to be able to generate the weight vector so that we can steer the antenna beam to the desired location. As we increase the number of antenna elements, you can understand that calculating all the weight elements, also called the weight vector, becomes more difficult. So, now we need to start asking the question, can we do that at the antenna level? Is it possible? Today we are dealing with 64 elements in 5G, but as we look at the envisioned future for Massive MIMO, where you have that many elements, it may not be possible to do that calculation at the antenna level. Therefore, we need to start looking at edge computing to help us determine the weight vector, because we will need to implement optimization algorithms to come up with the optimum weight vector that minimizes interference. So edge computing, where we have the most compute resources, may help us in that process. We also need to come up with very efficient transmit/receive beamforming algorithms to minimize interference. As it is today, users have many devices trying to connect to the network, so interference is there, especially in the urban environment. This type of thinking leads to what we typically call, in the architecture world, system architecture partitioning, where we partition the workload: the antenna performs what it does best, and edge computing supplements what the antenna was supposed to do but does not have the resources to do. As we move to that future envisioned by IEEE, with that many antenna elements, we need to look at new techniques. As it is today with 64 elements, we see a lot of approaches borrowing from legacy systems. But as we move to 6G and beyond, we have to come up with new concepts to help us reach this future. 
Well, the best way to do that, as we know, is to engage in basic research. IEEE is well positioned to approach academia and tell them, ‘hey, this is the problem that I'm dealing with’. And they will come up with new concepts, new algorithms, and new techniques. Not all of them will work, but as we learned from the past, some of these concepts and algorithms will work and help us fulfill the goal of this envisioned future.
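Webert's point about weight-vector computation can be made concrete with a small sketch. Assuming a uniform linear array with half-wavelength spacing, the classic MVDR (Capon) beamformer below steers toward a desired user while suppressing an interferer; all names and numbers are illustrative, not taken from the discussion.

```python
import numpy as np

def steering_vector(n_elements, theta_deg, spacing=0.5):
    """Array response of a uniform linear array (spacing in wavelengths)."""
    n = np.arange(n_elements)
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * 2 * np.pi * spacing * n * np.sin(theta))

def mvdr_weights(R, a):
    """MVDR (Capon) weights: unit gain toward a, minimum interference power."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

# 64-element array (today's 5G scale), desired user at 20 degrees,
# one strong interferer at -35 degrees plus unit-power noise.
N = 64
a_des = steering_vector(N, 20.0)
a_int = steering_vector(N, -35.0)
R = 10.0 * np.outer(a_int, a_int.conj()) + np.eye(N)  # interference + noise covariance
w = mvdr_weights(R, a_des)

gain_des = abs(w.conj() @ a_des)  # unit gain toward the desired user
gain_int = abs(w.conj() @ a_int)  # interferer heavily suppressed
```

The cost that Webert alludes to is visible in `np.linalg.solve`: inverting the covariance scales roughly cubically with the element count, which is why pushing this computation to the edge becomes attractive as arrays grow.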

Brian: Webert, as the number of devices increases, the workload at the antenna will become massive. What if a UE can connect directly to the edge cloud processor without using the base station?

Webert: This idea is known as cell-free Massive MIMO. In that framework, we have many access points, and the UE is connected directly to the edge computing unit, bypassing the base station. We just talked about how it may be difficult to deliver the level of computation that we need to service all the UEs using the base station. So this becomes very attractive as we increase the number of elements: we minimize the workload at the base station and shift some of the work to edge computing. There is a caveat, though. This approach has its own issues, because both techniques, connecting to the base station and connecting to the edge computing unit, need to work simultaneously; there will be cases where the UE cannot connect to the edge computing unit and has to connect directly to the base station, because distributing the system, distributing the work across many antenna elements or access points, is massive and costly. In an urban area this is feasible, but as you move to remote areas the cost of doing that may not work out efficiently.
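As a rough illustration of the cell-free idea, the sketch below has each access point compute a conjugate (maximum-ratio) precoder from its own local channel estimate only; the per-AP contributions still add up coherently at each user without the APs exchanging phase information. This is a toy model under simplistic assumptions (perfect CSI, no power control), not an implementation from the podcast.

```python
import numpy as np

rng = np.random.default_rng(0)
n_aps, n_ants, n_users = 16, 4, 4

# Channels from each AP's antennas to each user; AP m only ever sees H[m].
H = (rng.standard_normal((n_aps, n_ants, n_users))
     + 1j * rng.standard_normal((n_aps, n_ants, n_users)))

def local_mrt(h_ap):
    """Conjugate beamforming computed at one AP using only its local CSI."""
    w = h_ap.conj()
    return w / np.linalg.norm(w, axis=0)  # unit power per user stream

W = np.stack([local_mrt(H[m]) for m in range(n_aps)])

# Effective downlink gain for user k: sum over APs of h_{m,k}^T w_{m,k}.
# With conjugate precoding each term equals ||h_{m,k}||, a real positive
# number, so the distributed APs combine coherently at the user.
eff_gain = np.einsum('mak,mak->k', H, W)
```

The point of the sketch is the locality: `local_mrt` needs no information from the other access points, which is what lets the heavy lifting move out of a central base station.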

Brian: How can we work with research, academia, and industry to determine how technology can meet the envisioned future?

Webert: As a matter of fact, this initiative by IEEE is well positioned to play that role. Most members of academia are connected with IEEE, and their work in this area will help determine how feasible a 5G-and-beyond network meeting this envisioned future will be. Engaging academia to discuss research areas is paramount, and the goal of IEEE is to expose the need to academia. With that, new concepts will emerge. As I previously mentioned, not all the concepts will work, but this is the best scenario to ensure that this envisioned future, with the massive number of elements, new receiver architectures, and efficient chip designs that minimize power, gets developed. Once we move on from the basic research phase, IEEE can also play a major role by engaging industry, helping them understand what is coming out of basic research and the requirements we need to meet for power efficiency and latency in 6G wireless and beyond.

Brian: Okay. This next question for Chris. What can you tell us about whether to ORAN or no RAN?

Chris: All right, that is indeed the question. Let's begin by defining the concept. When we talk about Open RAN or Open Radio Access Network, we mean this paradigm that there's a well-defined and open interface between the different components of a cellular network. So, for example, there's an open interface between the baseband processor, you know, between the antenna system, the gateway system, the radio, and so on. So, the idea is, say you're a brilliant wireless engineer or a startup and then you come up with the next Massive MIMO system. That is great. If you conform to those open interfaces, you can actually take your system and plug it into the existing network, right? The service provider can sort of mix and match and get the best component or the most innovative solution for each part of the network. So, this is in contrast to, you can say, maybe the traditional way in how companies build out a cellular network. Traditionally, if you want to deploy a certain cellular system, you pretty much have to get the whole system from one of the big manufacturers, what we call OEMs. It's very hard or almost impossible to, let's say, swap out one component and then plug in another component. I would say, really within the past few years and particularly within the last year, the industry trend is, the industry players sort of recognize the advantage of this open architecture for the RAN, for the Radio Access Network system. You get an accelerated pace of innovation. You know, anyone can bring in the next best component. And it's also driving down the cost, because I don't have to buy the whole system from one certain company. I can really select the best from anyone available in this ecosystem. So, I would say the industry trend is really moving towards this Open RAN concept.

Brian: Chris, what do you think the risks and rewards are of bringing intelligence into RAN?

Chris: Yeah, that's a good question. You know, within the Massive MIMO Working Group, we think that when you have so many antenna elements, just like Webert mentioned, it is really an intelligent system, because each of the antenna elements can do some sensing. It can collect some data from the environment. And you can also control it, right? We talked about how you can control the phase and manage each of these individual elements together. So, I think when people think about traditional RAN, it would be something like a passive antenna. You put it there, it radiates something, but there's really not much you do with it. But now we think that an intelligent RAN system would be something almost autonomous. It collects some data about the environment, maybe sensing where the traffic is, and it's also able to make its own decisions using the latest machine learning and Artificial Intelligence algorithms, to, let's say, form a beam to improve the quality of service for the user. For a concrete example: hey, if I'm the cellular network, I notice you are driving down a highway, right? And then you start downloading a video. So, okay, maybe you'll be driving down the highway for the next five to ten minutes, and I sense that the video is streaming; it will take a lot of bandwidth, and I want to provide you with a good quality of service. Maybe the RAN module would actually make the decision to form a beam and follow your car along the highway, so that no matter where you are along the highway you have well focused RF energy to improve the signal quality. So, that is some of the reward we are talking about from an intelligent RAN system. But I like that the question is not just about rewards; there are some risks. One concern our group was discussing is, okay, you can imagine if each component is doing its own thing, maybe things become a little bit chaotic. 
How do you guarantee the reliability of the network? But we think that, to avoid that problem, the future is really about having some kind of intelligent algorithm sitting, if you can think about it, on top of the network, coordinating these different intelligent systems to interoperate with one another properly, to minimize the risk of each element doing its own thing.

Brian: Okay. This next one's for Rose. What are your thoughts about moving beyond traditional signal processing with Massive MIMO and stepping into machine learning and Artificial Intelligence?

Rose: Yeah, that's a good question. We know that currently we are facing dramatic revolutions in networking and communication technology development and advancement. Next generation wireless networks are expected to support extremely high data rates and an extremely large number of new applications, and all of those will require new wireless technologies. Those new technologies will allow all those users, and in the future we expect a massive number of users, to autonomously access very competitive spectrum bands with the aid of sophisticated algorithms, such as signal processing and optimization schemes, in order to control, say, transmission power, to tackle interference, and to improve both spectral efficiency and energy efficiency. Massive MIMO is the new technology we're talking about here, and thanks to millimeter wave and high bands we can pack a very large number of antennas into one device to support Massive MIMO. But traditionally, MIMO is more or less based on centralized signal processing and optimization. That works for, for example, 4x4, 2x2, or even some 8x8 systems. Now we're talking about 64x64, 108x108, or even higher. This brings lots of challenges and difficulty in supporting Massive MIMO in the traditional centralized signal processing and optimization way. So, talking about Massive MIMO, we have lots of new challenges ahead of us, and we have new solutions such as artificial intelligence and machine learning that we can leverage in place of those centralized optimization schemes, for example, to significantly reduce the complexity but at the same time improve accuracy. 
So, a couple of significant areas or challenges people are dealing with in Massive MIMO by leveraging the power and intelligence of machine learning: for example, channel estimation, because channel estimation is super important in supporting MIMO. When we go to Massive MIMO, channel estimation remains important but becomes very, very difficult. Not to mention that we need to collect a large amount of data from users and from the environment and feed it back to, say, the base station to do centralized signal processing before we get the channel estimation results. This brings complexity, overhead on the channels and the bandwidth, computing load, and complexity in the optimization scheme. We can leverage the intelligence and power offered by machine learning, thanks also to the large datasets we can collect in the millimeter wave environment. By leveraging both, we can actually do better, more accurate channel estimation with much lower complexity. And that is just the information we get from the channel; by taking the channel estimated from, for example, machine learning and AI, we can further use machine learning and artificial intelligence technologies to design Massive MIMO schemes. How do we design the beamforming vectors, the transmission powers, and, for example, the bandwidth allocated to each user? This is all part of the design of Massive MIMO, but, again, the traditional centralized signal processing and optimization schemes become less and less feasible into the future, especially when we deal with massive connectivity, high heterogeneity, and a high capacity environment like 5G or 6G and beyond. We can see tremendous challenges ahead in using Massive MIMO, but on the other hand we see great opportunities offered by leveraging the intelligence of machine learning and AI to support it.
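One way to picture the data-driven channel estimation Rose describes: instead of hand-deriving an MMSE filter from assumed channel statistics, a linear estimator can be fit from collected (pilot observation, true channel) pairs. The sketch below is a deliberately simple stand-in for the deep-learning approaches she mentions, using real-valued channels and invented dimensions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ant, n_train, snr = 32, 2000, 10.0

# Spatially correlated channel model; its covariance is NOT given to the
# estimator -- the estimator must learn the statistics from data.
L = rng.standard_normal((n_ant, n_ant)) / np.sqrt(n_ant)
C = L @ L.T + 0.1 * np.eye(n_ant)

def draw(n):
    h = rng.multivariate_normal(np.zeros(n_ant), C, size=n)   # true channels
    y = h + rng.standard_normal((n, n_ant)) / np.sqrt(snr)    # noisy pilot observations
    return h, y

h_tr, y_tr = draw(n_train)
# "Learn" a linear estimator W from training data (ridge-regularized fit h = y W).
W = np.linalg.solve(y_tr.T @ y_tr + 1e-3 * np.eye(n_ant), y_tr.T @ h_tr)

h_te, y_te = draw(500)
mse_ls = np.mean((y_te - h_te) ** 2)            # raw pilot estimate
mse_learned = np.mean((y_te @ W - h_te) ** 2)   # data-driven estimate
```

Because the learned filter approximates the LMMSE solution without anyone writing down the covariance, the data-driven estimate should beat the raw pilot estimate, which is the flavor of gain Rose attributes to learning from large datasets.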

Brian: Rose, powerful computing and intelligent capabilities are moving to the edge, far away from the cloud. How will we identify and develop the complicated and computing intensive signal processing needs to drive Millimeter Wave?

Rose: Machine learning actually can be used in a distributed way. In that case, we can leverage the power offered by edge computing, for example. We talked about cell-free Massive MIMO, which is designed to manage or tackle, for example, a distributed Internet of Things kind of network. You have billions of devices, extremely distributed, with high heterogeneity, so a centralized scheme normally doesn't work, like we said before. If you have, for example, distributed antennas close to the users, they form a kind of cell-free Massive MIMO network. But this hugely relies on the computing power at the edge, so that they can design the MIMO access schemes in the local environment without losing global information, by leveraging, for example, federated learning, which is very popular now among machine learning technologies. So, combining this edge computing power that supports local computing with distributed machine learning technology, we can greatly facilitate the development of technologies such as cell-free Massive MIMO, which is extremely important in future user-centric networks such as IoT.
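Rose's point about distributed learning at the edge can be sketched with a tiny federated-averaging step: each edge node fits a model on its local data, and only the model parameters travel to the aggregator, never the raw samples. The setup below (a linear model, four nodes, sample-count weighting) is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground-truth model the distributed nodes are collectively trying to learn.
true_w = np.array([2.0, -1.0, 0.5])

def local_fit(n_samples):
    """One edge node: fit a linear model on local data it never uploads."""
    X = rng.standard_normal((n_samples, 3))
    y = X @ true_w + 0.01 * rng.standard_normal(n_samples)
    return np.linalg.lstsq(X, y, rcond=None)[0], n_samples

# Four edge nodes with unequal amounts of local data.
local_models = [local_fit(n) for n in (50, 200, 120, 80)]
total = sum(n for _, n in local_models)

# FedAvg-style aggregation: weight each local model by its sample count.
w_global = sum(w * (n / total) for w, n in local_models)
```

The aggregation step transfers only three numbers per node here, which is the "limited message feedback" property that makes this attractive for bandwidth-constrained IoT deployments.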

Brian: Let's move the conversation to opportunities and challenges related to MIMO. Chris, Open RAN is an area where we are seeing vendors and others taking both sides. What are the pros and cons of ORAN relative to Massive MIMO?

Chris: Relating to the vendors, I think we see a very interesting trend. Maybe a few years earlier, with the concept of an Open RAN or open architecture, of course we saw more of the smaller or upstart companies embracing it, because that's one way they can very quickly bring innovation to the cellular network, but we saw some of the more traditional or bigger vendors being a little bit more conservative towards it. I can see that in the last year that trend has changed. I would say that now most of the vendors are really embracing this concept of open architecture. Maybe now the only debate is, what is the timeline? Is this something that will happen very soon, let's say within a year, or is this something more down the road, in a few years or even ten years later? Personally, I think this open architecture concept will get adopted sooner rather than later. And this is actually particularly important for Massive MIMO, because, going back to what Professor Hu said, for Massive MIMO you have a lot of hardware: a lot of antenna elements, a lot of amplifiers, and so on. If everything is aggregated, if the whole system is tied together, it tends to be expensive, power hungry, maybe heavy in weight, and so on, and it's very hard for such systems to get adopted, or adoption will be very slow. But as the industry moves to this open architecture, now we have the opportunity to separate out the most important value-adding elements. And it looks like those will probably be the machine learning algorithms, the smart AI control, and so on. If you have an open architecture for the more generic hardware, you should be able to just pick the most cost-effective commodity hardware and then apply your smart algorithm on top of that. It's a little bit like the computing industry. 
You can have generic hardware, a generic CPU, but what has the highest impact is your smart algorithm: your search algorithm, your machine learning algorithm, your pattern recognition algorithm. So, I think having an open architecture, in particular in the case of Massive MIMO systems, can do two things. It can bring in innovation for controlling the RAN layer, how to control those antenna elements, but at the same time drive down the cost of deploying Massive MIMO systems. In terms of the cons, we mentioned earlier: hey, how do you make sure that all the systems still work together? How do you still ensure the reliability of the overall network? Again, I feel that those risks are there, but they are not reasons to avoid this; I think they are actually opportunities for intelligent algorithms to coordinate the different parts of the network, so that each of these components can operate in an almost autonomous manner and they can work together to bring out this accelerated pace of innovation.

Brian: This next question for Rose. How do we leverage powerful computing and intelligent capabilities at the edge to deliver complicated and computing intensive signal processing needs to drive millimeter wave?

Rose: In order to drive millimeter wave, we first need to understand millimeter wave. As we said before, millimeter wave gives high capacity because of the large spectrum, but it means short-distance communication and a very, very poor channel. By saying that, I think edge computing and AI are two great technologies to work together with Massive MIMO, and there are extensive active research areas and research projects here. Actually, my own research group is actively doing research in this area as well. Communication can be done both uplink and downlink, and millimeter wave can only support short-distance communication, which naturally speaks to what edge computing supports. From the uplink perspective, for example: a lot of devices these days are very, very low power, with low computing capacity, low resources, and low energy, for example IoT devices. If they need to execute computing-intensive but latency-critical tasks, such as remote surgery or robot operations, then they probably need to rely on offloading their tasks to a nearby edge server. That is one of the major tasks supported by edge computing. If we use Massive MIMO, then Massive MIMO can greatly improve the efficiency of offloading, because it can serve multiple offloading users simultaneously; and if you just focus on the wireless part of offloading, Massive MIMO can greatly improve transmit diversity and also improve the spectrum and energy efficiency. In that case you can reduce the delay of waiting in the local device for offloading, and at the same time greatly help the local users process those very, very complicated tasks. So, that's the uplink perspective. 
And from the downlink perspective, we can also greatly leverage the powerful computing and intelligence capabilities of the edge, because we can push the base stations or the access points very close to the end user, as in cell-free Massive MIMO. You don't have to design a global optimization; you just serve the local users by leveraging the local computing facilities and tackling the local channel environment, because we know that over a large coverage area the channel environments are very, very heterogeneous from one small area to another. So we can greatly leverage this edge computing power and, at the same time, the intelligent capabilities, the machine learning power we talked about previously, to do distributed optimization, for example distributed Massive MIMO design. At the same time, you can do limited message feedback between the edge and the centralized point, which we can call the server or the cloud-based base station. So you can quickly design an effective machine learning Massive MIMO scheme without losing the global optimization. That's what I see as the great facilitating point offered by powerful computing and intelligent capability to drive millimeter wave.
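The uplink offloading trade-off Rose describes reduces to a simple timing comparison: offload when transmit time plus edge compute time beats local compute time. The figures below (device CPU speed, task size, uplink rate) are invented for illustration only.

```python
def offload_saves_time(task_bits, cpu_cycles, f_local_hz, f_edge_hz, uplink_bps):
    """Decide whether offloading a task to an edge server beats running it locally."""
    t_local = cpu_cycles / f_local_hz                           # compute on the device
    t_offload = task_bits / uplink_bps + cpu_cycles / f_edge_hz # transmit, then compute at edge
    return t_offload < t_local, t_local, t_offload

# A weak IoT device: 1 Mb task, 1e9 CPU cycles of work, 100 MHz local CPU,
# offloading over a 50 Mb/s massive-MIMO uplink to a 10 GHz edge server.
better, t_loc, t_off = offload_saves_time(
    task_bits=1e6, cpu_cycles=1e9,
    f_local_hz=1e8, f_edge_hz=1e10, uplink_bps=50e6)
```

With these numbers the local run takes 10 s while offloading takes about 0.12 s, which is why a higher-rate massive-MIMO uplink (shrinking the `task_bits / uplink_bps` term) directly widens the set of tasks worth offloading.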

Brian: Webert, how does communications without a base station work?

Webert: This is one of the approaches being investigated today in order to offload the amount of work that one needs to do at the antenna level. Basically, you have distributed access points in the environment with the capability to connect directly to the edge computing unit, bypassing the base station. That being said, there are challenges that we need to overcome, because both the UE to the base station and the UE to the edge computing unit, both approaches, need to work together. With that, we need to ensure that we distribute the workload very efficiently. We can leverage AI to sense the environment. We have to recall that the environment is changing dynamically, so sensing the environment with the Massive MIMO approach is very important; but given the amount of work that we need to do and the amount of data that we'll have to deal with, we need to ensure that we use that information very efficiently. This is where, when we sense the environment, AI and signal processing can couple together to inform the edge computing unit of the changes in the environment. That helps us partition the architecture well, so that we can tell the antenna: you're going to do what you need to do to ensure that we move that massive amount of information efficiently and ensure quality of service; and, at the same time, the Massive MIMO system, with that number of antenna elements, will efficiently sense the environment and inform the edge computing unit how to optimize the next weight vector, the steering vector, to ensure that we maintain latency and quality of service.

Brian: Rose, what is the process of moving into Massive MIMO with emerging technologies such as deep learning, machine learning and Artificial Intelligence?

Rose: That's a great question. One thing we are sure of: in order to move into Massive MIMO with emerging technologies, we need to combine the domain knowledge that has already been developed in communication and signal processing technologies with the intelligence enabled by machine learning and AI algorithms. These days we have huge advancements in the theories and models, and in the testing and prototyping, being developed in the machine learning and AI area; and in the communication and signal processing field we have huge development and advancement as well. So, we need to combine both fields, and then, by leveraging the big data we are offered these days, tackle the challenging issues in Massive MIMO. The challenging issues in Massive MIMO we have already mentioned before: these include channel estimation, user distribution prediction, especially in the heterogeneous environment, traffic estimation, again in the heterogeneous environment, interference control, and power control. The good thing is, we have already seen extensive research being actively developed and carried out in this area.

Brian: Back to you, Webert. Do you have any thoughts on the roles research, academia, and industry can play in determining how technology can meet the envisioned future?

Webert: This is a very good question. We cannot get there without the help of academia. Keep in mind that we have to maintain or even lower latency while we increase the number of antenna elements by so much, because we understand the need for the user to move a massive amount of data. But we need to ensure that we reduce the amount of power that we use at the base station. We need to develop efficient receiver architectures and power-efficient devices to ensure that we do not put too much stress on the cities' power systems. And then, as we move to that many elements in the antenna, new beamforming techniques need to be developed. The rule is, as you increase the number of antenna elements for a given frequency, the beam width is going to be smaller. So now we need new techniques to ensure that, as the user is moving, depending on the speed of the user, we keep the user in the beam to minimize interference and loss of service; therefore, resource management is key there. We need to ensure we can do that efficiently. With Artificial Intelligence, what we used to do in the old days is transform the data or information that we have into images and then give it to the trained network to give us results. The trend in academia these days is to pass the raw data to the network and let it learn from that. This is a new area being developed today, so we need help to work through that massive amount of data in a very short period of time and come up with results. Only academia can help us with that process. Using the power of academia will certainly make this process very efficient and address the future. And then, also, as we partition the network, sensing the network, having that massive amount of data, this is the same approach that DARPA proposed recently using RF machine learning. 
We need to go through the data as we receive it, that is, provide the raw data as input to the machine learning network, or deep learning network, so that the network can tell us more about what is going on in the environment and we can better understand it. Keep in mind that we talked earlier about partitioning the new architecture: having the base station perform what it does best and then offloading part of the work to the edge computing. The best way to do that is to ensure that we sense the environment very well, and Massive MIMO can do that for us, and then pass that information to the edge computing unit to help optimize what we need to optimize, to ensure that we provide the best quality of service and maintain the low latency requirement that we need. So, as you think about that massive amount of work we need to do, only academia can help us tackle this problem.
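Webert's rule of thumb, more elements means a narrower beam, can be checked with the standard textbook approximation for the broadside half-power beamwidth of a uniform linear array, HPBW ≈ 0.886·λ/(N·d) radians. The numbers below assume half-wavelength element spacing; they illustrate why beam tracking of a moving user gets harder as arrays scale up.

```python
import numpy as np

def hpbw_deg(n_elements, spacing_wavelengths=0.5):
    """Approximate broadside half-power beamwidth of a uniform linear array,
    using HPBW ~= 0.886 * lambda / (N * d), converted to degrees."""
    return np.degrees(0.886 / (n_elements * spacing_wavelengths))

bw_64 = hpbw_deg(64)      # today's 5G-scale array: roughly 1.6 degrees
bw_1024 = hpbw_deg(1024)  # a future massive array: roughly 0.1 degrees
```

A tenth-of-a-degree beam is why resource management and prediction matter: a car crossing such a beam at highway speed leaves it in a fraction of a second unless the steering vector is continuously re-optimized.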

Brian: This has been really informative. Chris, where can our listeners go to find out more about this subject?

Chris: For the listeners in our audience, go to our web page. I think if our listeners do a search on “IEEE Future Networks” and then look under the Massive MIMO Working Group, they would see the scope of work for the group, our purpose, and our thinking on what kind of directions the whole group will be taking. I'd like to add that it's actually a lot of fun working with Webert, Professor Hu, and the whole Massive MIMO Working Group. We have subject matter experts on some of the newer topics: intelligent surfaces, Massive MIMO radar, and so on. Personally, I learn a lot from our interactions and discussions with the group. And maybe here I want to take a moment to do a little bit of advertisement for the group. We have held what we call virtual hackathon sessions before: on a Saturday, a group of us will just get online and talk about a technical topic, and then we do a deep dive and the members write down their vision for that topic for Massive MIMO. And then at the end of the day we reveal, ‘oh, okay, so this is what we have done for the day, these are the ideas generated, and this is how the group has organized them.’ We found that very productive. I can see that in the future we may hold additional hackathon sessions, and for anyone interested in getting together to talk about Massive MIMO, or 5G, or 6G, or communications with a really tremendous group of experts, I would say, hey, come join us. I think that would be a lot of fun.

Brian: Great. So, one final question for you all. How does the Massive MIMO Working Group fit in with other Working Groups of the International Network Generations Roadmap to advance the future of networks?

Chris: I'll share some of my thoughts, and I'm sure Webert and Professor Hu will be able to fill in more details. From working within the group and also within the larger IEEE Future Networks, in our virtual workshops and in our briefings and interactions with the other groups, we found there's actually a lot of interaction, and you could even say interdependencies, with the other Working Groups. For example, most recently we were exploring testbeds for Massive MIMO. For some of the concepts, we think that if we really want to go to the step of prototyping something or running an experiment, we need some kind of testbed. There we can learn a lot from the Testbed Group. And then, on something that maybe affects all groups, we also have a lot of discussions with the Standards Group. Because we recognize that Massive MIMO is only one component of the whole cellular network, right? And even the cellular network is only one component of the overall communication system. So, how do we interface with the other components of either the cellular network or the whole communication system? Would Massive MIMO have standards of its own, or should we conform to a certain standard so that any other kind of system can just talk with a Massive MIMO system? Those are very active discussions we have with the Standards Group. So, those are just some examples of the very deep interactions that we have with the other Working Groups within IEEE.

Brian: That sounds great. Webert?

Webert: I believe, simply put, we are one unit operating together to ensure that the envisioned future works. You realize that with a lot of the issues we just mentioned, we're looking at only one part of the overall system. We are receiving inputs from all the other groups, who let us know what is possible. And then, by working together, we bring information to each other, to ensure that each group is aware of the trends and what we need to do to move the overall future network forward.

Brian: Rose, do you have any additional thoughts you'd like to add?

Rose: I agree with what Chris and Webert just said. I think Massive MIMO is an area that involves lots of theory, algorithms, prototyping, standardization, productization, and development. So, it naturally fits as a focal Working Group that can interact with lots of other groups, such as Standardization, System Design, Hardware, Signal Processing, Millimeter Wave, AI, and Optimization. I can see lots of good collaboration. And besides that, because this is an area that also involves lots of theory and algorithm development, I would say that, beyond the collaborations among all the groups inside INGR, I would strongly encourage collaborations between industry and academia, or among industry, academia, and government. With all those units working together, we would like to see more participants from academia and industry get involved in the future.