Learning-Enhanced 5G-NR RAN Algorithms

Communications and wireless engineering are on the cusp of a data-driven revolution. Powered by measurement, feedback, computation, and powerful AI tools such as deep learning, this shift will take wireless systems to unprecedented levels of adaptivity, scale, performance, and reliability. The core optimization tools behind 4G and early-5G technology have been important enablers of modern communications systems, but they have struggled to keep pace with the reconfigurability, adaptation, many-dimensional RAN optimization, and computational efficiency that true 5G and Beyond-5G systems demand in order to deliver their best performance.

Improving RAN Performance In the Real World

Today's wireless systems are typically optimized using simplified statistical models that represent a drastic abstraction of the real world. Because communications engineering has relied principally on formal probabilistic optimization methods and simplified system models, many of today's modem processing algorithms are fixed at design time, unable to adapt after deployment or reconfiguration, and tied to the statistical formulation of the processing problem, which often results in high computational complexity. Methods such as belief propagation, iterative decoding, and high-order MIMO processing are notorious power-consumption pain points, limiting the scalability and efficacy of cellular systems. By allowing a deployed system to adapt to its environment, we let cellular systems fit the complexity of our cities and real-world deployment scenarios, and we can derive lower-complexity signal processing solutions that better match these complex channel distributions.
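
To make the model-based baseline concrete: a conventional OFDM receiver typically applies a per-subcarrier linear MMSE equalizer whose form is fixed at design time by the assumed statistical model. Below is a minimal NumPy sketch of that baseline; the parameters and dimensions are illustrative only, not drawn from any production system.

    import numpy as np

    def lmmse_equalize(y, h_est, noise_var):
        """One-tap linear MMSE equalizer, applied per OFDM subcarrier.

        y         : received frequency-domain symbols, shape (num_subcarriers,)
        h_est     : estimated channel frequency response, same shape
        noise_var : noise variance -- assumed known a priori, one of the
                    statistical assumptions baked in at design time
        """
        # Classic closed-form MMSE weight: conj(H) / (|H|^2 + sigma^2)
        w = np.conj(h_est) / (np.abs(h_est) ** 2 + noise_var)
        return w * y

    # Illustrative usage with a random per-subcarrier fading draw
    rng = np.random.default_rng(0)
    n_sc = 64
    h = (rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc)) / np.sqrt(2)
    x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n_sc) / np.sqrt(2)  # QPSK
    noise_var = 0.05
    n = np.sqrt(noise_var / 2) * (rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc))
    x_hat = lmmse_equalize(h * x + n, h, noise_var)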


Building A Next Generation Learning-Based RAN

Machine learning within the RAN can drastically improve performance: reducing power consumption, better exploiting channel information to support more and better connections, getting more out of Massive MIMO systems, and using real-world feedback to compensate for hardware distortion, non-linearities, and other impairments. DeepSig has been rapidly building these capabilities into its partial reference 5G-NR RAN L1 implementation to provide case studies that let vRAN and other 5G-NR base station integrators understand and quantify the value of embracing the data-centric RAN algorithms provided by OmniPHY-5G.
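
As one illustration of what a learned L1 block can look like (a generic sketch for exposition, not our OmniPHY-5G code), a small neural network can be trained to map received symbols and raw pilot observations directly to equalized symbol estimates, absorbing hardware distortion and non-linearities that closed-form models ignore:

    import torch
    import torch.nn as nn

    class LearnedEqualizer(nn.Module):
        """Toy network mapping (received symbol, raw pilot observation)
        pairs to equalized symbol estimates. A production system would
        condition on much richer context (full DMRS grids, soft decoder
        feedback, etc.)."""

        def __init__(self, hidden: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(4, hidden),   # [Re(y), Im(y), Re(pilot), Im(pilot)]
                nn.ReLU(),
                nn.Linear(hidden, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 2),   # [Re(x_hat), Im(x_hat)]
            )

        def forward(self, y, pilot):
            return self.net(torch.cat([y, pilot], dim=-1))

    # Fit to (received, pilot, transmitted) triples gathered from simulation
    # or over-the-air captures; synthetic tensors stand in for real data here.
    model = LearnedEqualizer()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    y, pilot, x_true = torch.randn(32, 2), torch.randn(32, 2), torch.randn(32, 2)
    opt.zero_grad()
    loss = loss_fn(model(y, pilot), x_true)
    loss.backward()
    opt.step()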

Through time-domain, sample-level simulations and over-the-air system testing, we have rapidly transitioned these ideas into reality. We have demonstrated superior baseband processing performance in terms of bit error rate, leading to 2x-10x better signal reception in many cases, along with more efficient inference algorithms that reduce computational cost and, in some cases, allow operators to deploy twice as many radio heads per baseband unit under a vRAN-centric front-haul architecture. Real-world channels tend to be more stable than randomized statistical models, so over-the-air performance has often been even better; below we illustrate the improvement on one of the harshest 3GPP channel models, TDL-A.

[Figure: 5G equalizer BER performance on the 3GPP TDL-A channel model]
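
For readers who want to reproduce the flavor of this comparison, the sketch below measures uncoded QPSK BER for the conventional one-tap MMSE receiver over a simplified tapped-delay-line channel. The tap delays and powers are illustrative placeholders rather than the actual TR 38.901 TDL-A profile, and the numbers it prints are not our benchmark results.

    import numpy as np

    rng = np.random.default_rng(1)
    N_FFT, CP, SNR_DB = 256, 32, 10

    # Illustrative 3-tap power-delay profile (NOT the TR 38.901 TDL-A taps)
    delays = np.array([0, 4, 9])         # in samples
    powers = np.array([0.6, 0.3, 0.1])   # linear, sums to 1

    def run_trial():
        bits = rng.integers(0, 2, size=2 * N_FFT)
        x = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)  # QPSK
        td = np.fft.ifft(x) * np.sqrt(N_FFT)          # unitary IFFT
        td = np.concatenate([td[-CP:], td])           # add cyclic prefix

        h = np.zeros(N_FFT, dtype=complex)            # random channel draw
        taps = np.sqrt(powers / 2) * (rng.standard_normal(3) + 1j * rng.standard_normal(3))
        h[delays] = taps
        rx = np.convolve(td, h[:delays[-1] + 1])[: len(td)]

        noise_var = 10 ** (-SNR_DB / 10)
        rx += np.sqrt(noise_var / 2) * (rng.standard_normal(len(rx)) + 1j * rng.standard_normal(len(rx)))

        Y = np.fft.fft(rx[CP:CP + N_FFT]) / np.sqrt(N_FFT)
        H = np.fft.fft(h)                                       # genie channel knowledge
        x_hat = np.conj(H) * Y / (np.abs(H) ** 2 + noise_var)   # one-tap MMSE
        bits_hat = np.empty_like(bits)
        bits_hat[0::2] = (x_hat.real < 0).astype(int)
        bits_hat[1::2] = (x_hat.imag < 0).astype(int)
        return np.mean(bits != bits_hat)

    ber = np.mean([run_trial() for _ in range(200)])
    print(f"uncoded QPSK BER over illustrative multipath: {ber:.4f}")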

Real World Channels & Demonstrations

We first demonstrated this capability in a real-world over-the-air system at the Brooklyn-5G Symposium in early 2019. Using off-the-shelf embedded and laptop NVIDIA graphics processing units (GPUs), we showed a 5G-NR downlink in which the receiver adapted to its environment to continually improve channel estimation and equalization in an over-the-air 900 MHz test, all while remaining 100% standards-compliant. In this first true ML-based 5G-RAN learning testbed of its kind, we observed BER reductions of 10-100x compared with a conventional MMSE-based approach. This work has since been extended to wideband 50-100 MHz channel configurations, multi-layer deployments, and the uplink reception process. The pace of development in this area is accelerating, and mature learning-based baseband algorithms offer large and growing advantages over conventional approaches.
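
The "continually adapting" part of that demonstration can be sketched generically: because 5G-NR slots carry known reference signals, a learned receiver can take a few gradient steps against each new slot's pilots to track the live channel. The function below is a hypothetical illustration of that pattern, not the testbed code itself.

    import torch

    def adapt_online(model, opt, slot_stream, steps_per_slot=4):
        """Fine-tune a learned receiver block against each arriving slot's
        known pilots, so its weights track the live over-the-air channel."""
        loss_fn = torch.nn.MSELoss()
        for rx_pilots, tx_pilots in slot_stream:
            for _ in range(steps_per_slot):   # a few cheap gradient steps
                opt.zero_grad()
                loss = loss_fn(model(rx_pilots), tx_pilots)
                loss.backward()
                opt.step()
            yield model   # freshly adapted weights, used on the data symbols

    # Toy usage: a linear model over 2-dim (I/Q) pilot features, fake slots
    model = torch.nn.Linear(2, 2)
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    fake_slots = [(torch.randn(16, 2), torch.randn(16, 2)) for _ in range(3)]
    for adapted in adapt_online(model, opt, fake_slots):
        pass  # equalize this slot's data symbols with the adapted model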


Massive MIMO And Spatial Processing

We have demonstrated an ML-driven processing approach for Massive MIMO in simulation, and we are building out a testbed to provide real-world measurement and demonstration of the data-centric method for UL-MIMO and DL-MIMO configurations in both TDD and FDD systems. DeepSig has developed unique high-performance C++ software for this application, coupling an efficient NR implementation with deep learning-based inference and GPU offload. This combination can significantly reduce power consumption and improve density and system performance in 5G RAN deployments, and we continue to work with partners to make their systems more efficient and performant in the real world.
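
To give a sense of why GPU offload pays off here: uplink MIMO detection is naturally a large batch of small, independent linear-algebra problems, one per subcarrier. A generic batched linear MMSE detector (a baseline sketch in PyTorch, not our C++ implementation) looks like this and runs unchanged on CPU or GPU:

    import torch

    def mmse_ul_mimo_detect(H, y, noise_var):
        """Batched linear MMSE uplink detection, one problem per subcarrier.

        H : (num_subcarriers, num_rx_antennas, num_layers) complex channel
        y : (num_subcarriers, num_rx_antennas) received vectors
        """
        Hh = H.mH                                    # batched conjugate transpose
        G = Hh @ H + noise_var * torch.eye(H.shape[-1], dtype=H.dtype)
        x_hat = torch.linalg.solve(G, Hh @ y.unsqueeze(-1))
        return x_hat.squeeze(-1)

    # Illustrative sizes: 3276 subcarriers, 64 RX antennas, 8 spatial layers
    sc, nrx, nl = 3276, 64, 8
    H = torch.randn(sc, nrx, nl, dtype=torch.complex64)
    y = torch.randn(sc, nrx, dtype=torch.complex64)
    x_hat = mmse_ul_mimo_detect(H, y, noise_var=0.1)
    # Moving H and y to a CUDA device offloads the whole batch to the GPU.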


We continue to sprint forward, demonstrating this capability in progressively more mature 5G-NR RAN realizations over the air, and we will provide updates as our RAN enhancements mature. Our work to date has focused on digital element processing in mid-band NR deployment scenarios, and this scope will continue to evolve with our partners.

Inquiries

To learn more about our OmniPHY-5G communications systems and how you can use them in your systems and deployments, please contact us!