Which gives better performance when using mMIMO: TDD or FDD?

Hello my teammates, in my article today I am going to discuss a very important question that anyone may ask or think about: "Why does TDD mMIMO get better performance than FDD?"

Let's begin. Our starting point for answering this question is the main differentiating factor between TDD & FDD, described below:

In TDD, by exploiting channel reciprocity, the transmitter can estimate the downlink channel from sounding on the uplink channel. Such reciprocity relies on accurate calibration of the transceiver RF chains at the eNB.

  • FDD needs two separate frequency bands or channels, whereas TDD uses a single frequency band for both transmit and receive. Since the TDD DL and UL frequencies are the same, the channel is reciprocal at the eNB: the system can infer the DL RF conditions directly from the UL signals, without waiting for a CSI report on the UL path to describe the DL RF conditions.
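To make the reciprocity idea concrete, here is a minimal Python/NumPy sketch of a TDD eNB reusing a single noisy uplink pilot estimate as its downlink channel estimate for maximum-ratio transmission. The antenna count, noise level, and variable names are illustrative choices of mine, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 64  # number of eNB antennas (illustrative)

# In TDD the physical channel is the same in UL and DL (reciprocity)
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

# The eNB estimates the channel from one UL sounding pilot (noisy estimate)
noise = 0.1 * (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
h_hat = h + noise  # UL estimate, reused directly as the DL channel estimate

# Maximum-ratio transmission (MRT) beamformer built from the UL estimate
w = h_hat.conj() / np.linalg.norm(h_hat)

# DL beamforming gain achieved on the true channel (ideal gain is ||h||^2 ~ M)
gain = np.abs(h @ w) ** 2
print(f"DL beamforming gain: {gain:.1f} (ideal: {np.linalg.norm(h)**2:.1f})")
```

Note that no DL pilot or CSI report was needed: the DL beamformer came entirely from the UL pilot, which is exactly the overhead saving quantified next.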

Pilot signal overhead remains limited for TDD, whereas for FDD it grows as the number of Beam Forming (BF) beams or antenna elements increases, as quantified below:


Consider an SDMA system that can afford τp pilots. This value determines the combinations of M and K that can be supported. The TDD protocol supports up to K = τp UEs and an arbitrary M. The FDD protocol supports any M and K such that [M + K + max(M, K)]/2 ≤ τp.

In summary, SDMA systems should ideally be combined with TDD, by exploiting the reciprocity between UL and DL channels. This is because the required channel acquisition overhead in TDD is K, while it is [M+K+max(M,K)]/ 2 in FDD. The FDD overhead is around 50% larger when M ≈ K, while it is much larger for M >> K, which is the preferable operating regime for SDMA.
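To see the gap concretely, here is a small Python sketch of the two overhead formulas above. The function names and the sample (M, K) pairs are mine, chosen for illustration:

```python
# Channel-acquisition overhead per coherence block, per the formulas above.
# M = base-station antennas, K = spatially multiplexed UEs.

def tdd_overhead(M: int, K: int) -> float:
    # TDD: K UL pilots suffice for any M, thanks to reciprocity
    return K

def fdd_overhead(M: int, K: int) -> float:
    # FDD: DL pilots plus UL feedback, [M + K + max(M, K)] / 2
    return (M + K + max(M, K)) / 2

for M, K in [(8, 8), (64, 8), (256, 16)]:
    print(f"M={M:3d}, K={K:2d}:  TDD={tdd_overhead(M, K):5.1f}  "
          f"FDD={fdd_overhead(M, K):6.1f}")
# M = K = 8 gives TDD = 8 vs FDD = 12 (~50% larger);
# M >> K makes the FDD overhead grow with M while TDD stays at K.
```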

Finally, the 5G standard supports many different modes of operation. When it comes to spatial multiplexing of users in the downlink, how the multi-user beamforming is configured is of critical importance for controlling inter-user interference.

  1. The first option is to let the users transmit pilot signals in the uplink and exploit the reciprocity between uplink and downlink to identify good downlink beams. This is the preferred operation from a theoretical perspective; if the base station has 64 transceivers, a single uplink pilot is enough to estimate the entire 64-dimensional channel. In 5G, the pilot signals that can be used for this purpose are called Sounding Reference Signals (SRS). The base station uses the uplink pilots from multiple users to select the downlink beamforming. 
  2. The second option is to let the base station transmit a set of downlink signals using different beams. The user device then reports back some measurement values describing how good the different downlink beams were. In 5G, the corresponding downlink signals are called Channel State Information Reference Signal (CSI-RS). The base station uses the feedback to select the downlink beamforming. The drawback of this approach is that 64 downlink signals must be transmitted to explore all 64 dimensions, so one might have to neglect many dimensions to limit the signaling overhead. Moreover, the resolution of the feedback from the users is limited.
In practice, the CSI-RS operation might be easier to implement, but the lower resolution in the beamforming selection will increase the interference between the users and ultimately limit how many users, and layers per user, can be spatially multiplexed to increase the throughput.
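To illustrate the difference between the two options, here is a Python/NumPy sketch under deliberately simplified assumptions of mine: a 64-antenna uniform linear array, a perfect SRS channel estimate, and a DFT grid-of-beams codebook standing in for CSI-RS beam sweeping:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 64  # base-station transceivers (matching the example in the text)

# A simple channel toward a random angle over a uniform linear array,
# plus some scattering
angle = rng.uniform(-np.pi / 2, np.pi / 2)
steer = np.exp(1j * np.pi * np.arange(M) * np.sin(angle))
h = steer + 0.3 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

# Option 1 (SRS-like): one UL pilot reveals the full M-dimensional channel;
# beamform with MRT (perfect estimate assumed, for simplicity)
w_srs = h.conj() / np.linalg.norm(h)
gain_srs = np.abs(h @ w_srs) ** 2

# Option 2 (CSI-RS-like): sweep a DFT grid of M beams in the DL and let the
# UE report only the index of the strongest beam (coarse feedback)
angles = np.linspace(-np.pi / 2, np.pi / 2, M)
codebook = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles))) / np.sqrt(M)
best = np.argmax(np.abs(h @ codebook))
gain_csi_rs = np.abs(h @ codebook[:, best]) ** 2

print(f"SRS/reciprocity beamforming gain: {gain_srs:.1f}")
print(f"CSI-RS/grid-of-beams gain:        {gain_csi_rs:.1f}")
```

The single-user gain loss of the codebook beam stands in for the lower feedback resolution; in a multi-user setting, that residual mismatch turns into the inter-user interference described above.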

References: massivemimobook.com and ma-mimo.ellintech.se

