Thursday, September 19, 2013

Why Special Subframe is required in LTE?


In LTE Frame Structure Type 2 (TD-LTE) there is a special sub-frame when switching from DL to UL, but there is no special sub-frame or gap when switching from UL to DL.

Different TDD modes

To understand this, it is important to know why a transmission gap is required when switching from DL to UL. The special sub-frame is made up of DwPTS, GP and UpPTS, and all of these have configurable lengths while their sum has to be 1 ms, i.e. the length of the sub-frame. Now consider special sub-frame configuration 1, where the GP (guard period, or TTG in WiMAX) is 4 symbols long, which equates to approximately 285 us. Consider a UE-A at a distance of 10 km from the eNB and a UE-B at 45 km from the eNB. The time it takes the RF signals to reach UE-A and UE-B will be
Time for UE-A = distance/velocity of light = 10000/3x10^8 = 33.3 us
Time for UE-B = distance/velocity of light = 45000/3x10^8 = 150 us

This means that after the eNB has transmitted the last symbol of DL data and it starts the GP, the last symbol will be received at UE-A after 33.3 us and at UE-B after 150 us. Now, every UE takes a small amount of time to switch from Rx to Tx mode; let's assume this switching period to be 50 us (it should be less for LTE UEs, but this is just an assumption). So, UE-A will finish its switching period and start transmitting after 33.3 + 50 = 83.3 us, and its transmission will take another 33.3 us to reach the eNB. This makes the total round trip time for UE-A equal to 33.3 + 50 + 33.3 ≈ 117 us. Since the GP at the eNB is set to 285 us, UE-A will be able to transmit its UL data within the GP. In actual practice, every UE knows its Timing Advance from the eNB, so UE-A would wait that much before transmitting so that the UL data reaches the eNB exactly at the end of the GP.

However, let's do the same analysis for UE-B. The total round trip time for UE-B would be 150 + 50 + 150 = 350 us, which is greater than the GP (285 us), so UE-B's transmission would not reach the eNB in time for the first uplink symbol. Because of this, the GP effectively determines the maximum cell radius of a TDD system.
If there were no gap or TTG (as in WiMAX) between the DL and UL transmissions, these over-the-air timing delays and the switching period could not be compensated, so we need to add a transmission gap when switching from DL to UL.
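
To make the numbers above concrete, here is a small C sketch that computes the maximum cell radius supported by a given guard period and UE switching time. The figures are the ones assumed in this post, not spec values.

#include <stdio.h>

int main(void)
{
    const double c = 3.0e8;        /* speed of light, m/s */
    const double gp_us = 285.0;    /* guard period of ~4 OFDM symbols, in us */
    const double switch_us = 50.0; /* assumed UE Rx-to-Tx switching time, in us */

    /* the round trip 2*radius/c plus the switching time must fit inside the GP */
    double max_radius_km = (gp_us - switch_us) * 1e-6 * c / 2.0 / 1000.0;
    printf("Maximum cell radius for this GP: %.1f km\n", max_radius_km);
    return 0;
}

With these numbers the result is about 35 km, which is why the UE at 10 km fits within the GP while the one at 45 km does not.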

Now, let's consider the UL to DL switching. We will only consider UE-A for this example, as UE-B has been shown to be beyond the cell range. UE-A will transmit the last UL symbol and then start switching from Tx to Rx mode. The last UL symbol will reach the eNB after 33.3 us, and the eNB will switch to Tx after receiving it. It will then transmit the next DL symbol, which will reach UE-A after another 33.3 us, so UE-A has a total of 33.3 + 33.3 ≈ 67 us to switch from Tx to Rx mode without any RTG. So, while switching from UL to DL, an RTG or GP is not really required, as the system already gets a virtual GP due to the over-the-air delays.
One might ask about UEs that are very close to the eNB: they would have a very small over-the-air delay, so they might not get enough time to switch to Rx mode. There are two possible solutions for that:


- In LTE, there is a 1 ms TTI, so if a UE is so close to the eNB that it would not be able to switch to Rx mode in time, the eNB can allocate the DL resources in the next DL sub-frame and the UE will have 1 ms to make the switch. In WiMAX, this would not have been possible as it has a 5 ms TTI; with no RTG, the UEs closer to the WiMAX BTS would have to be scheduled in the next frame, adding another 5 ms of latency.

- Secondly, the UE switching time from Rx to Tx (and vice versa) should by now be greatly reduced. The 50 us limitation was there in Beceem chipsets around 4 years back, while I think Beceem and Intel made chipsets with switching periods of less than 20 us last year. So, LTE UEs should have much lower switching times.

Thursday, July 11, 2013

3GPP LTE : Valid numbers of allocated Resource blocks in UL (Uplink)

The total number of resource blocks allocated to a user in the uplink can only be one of the following:

0, 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36, 40, 45, 48, 50, 54, 60, 64, 72, 75, 80, 81, 90, 96, 100

Condition for a valid number: N_RB = 2^a * 3^b * 5^c, where a, b, c = 0, 1, 2, ...
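
As a quick illustration, here is a small C sketch (the function name is my own) that checks this factorization condition and reproduces the sizes from 1 to 100 listed above:

#include <stdio.h>

/* valid when the only prime factors of n are 2, 3 and 5 */
static int is_valid_ul_nrb(int n)
{
    const int primes[] = {2, 3, 5};
    if (n < 1)
        return 0;   /* 0 in the list above simply means "no allocation" */
    for (int i = 0; i < 3; i++)
        while (n % primes[i] == 0)
            n /= primes[i];
    return n == 1;
}

int main(void)
{
    for (int n = 1; n <= 100; n++)
        if (is_valid_ul_nrb(n))
            printf("%d ", n);
    printf("\n");
    return 0;
}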

3GPP LTE : Dynamic Scheduling, Persistent Scheduling and Semi Persistent Scheduling

Dynamic Scheduling


In the physical layer, the first OFDM symbol of each subframe carries the CFI. The CFI basically tells you the number of OFDM symbols used by the PDCCH. The PDCCH carries the DCI, which lets you decode the data from the PDSCH. In case you forgot, the PDSCH carries all the user data for the UEs. Now, when a UE is downloading a set of files, say from the internet, each and every subframe carries PCFICH and PDCCH data in its first one to three OFDM symbols. This is essential when the data is bursty and variable in size, especially web data, so the control information has to be sent with every subframe. This kind of scheduling is known as Dynamic Scheduling.

The advantage of Dynamic Scheduling is basically the flexibility to alter the size of data in each subframe: you can push more data in one SF and less in another.

Persistent Scheduling 


Now consider a case where the amount of data expected is small and arrives at a fixed time interval. Yes, I'm talking about something like VoLTE (Voice over LTE). Voice data comes in the form of small packets at a regular interval, which is network dependent. In such cases, sending control information in each and every subframe becomes a significant overhead and hurts the effective utilization of bandwidth. Thus, we use something called Persistent Scheduling, where the control information sent in one SF is retained for every nth SF coming after it, until notified otherwise. This scheme drastically reduces the overhead.

Semi Persistent Scheduling


Now note that when you have a NACK for any of the DL data, the retransmission has to contain some extra information (probably to indicate the retransmission, the SF number, and so on). So your retransmissions can't simply be pushed along at the persistently scheduled time interval; in other words, you have to explicitly send the control information for retransmission SFs. Thus, pure Persistent Scheduling is rarely used; instead a scheme known as Semi Persistent Scheduling is used.

Semi Persistent Scheduling Example

The time interval for SPS is configured by RRC. The termination of SPS or alteration of the time interval is also RRC triggered.
    
In VoIP services, the voice data is encoded using a codec and sent. At times, the network might have to change the codec (maybe for internal reasons, or for clarity, etc.). When you change the codec, the amount of data sent per radio frame might be different. As a result, you might have to increase the SPS interval.

In the diagram above, also note one thing: once SPS is triggered, every nth SF is first checked for PDCCH data, because PDCCH signalling always has higher priority. So just because you initiated SPS, it doesn't mean it will continue untouched until you tell it to stop; a dynamic grant on the PDCCH in that particular SF always takes precedence. An example of this would be downloading a webpage during a voice call: there, you would need the PDCCH data to decode the user data.
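
As a rough sketch of this priority rule (the type and field names here are purely illustrative, not from any spec or stack), the per-subframe decision on the UE side could look like this:

/* illustrative only: decide whether to expect DL data in this subframe */
typedef struct {
    int active;     /* SPS activated by RRC/PDCCH */
    int start_sf;   /* subframe index where SPS started */
    int interval;   /* SPS interval in subframes */
} SpsConfig;

int expect_dl_data(int sf_index, const SpsConfig *sps, int has_pdcch_grant)
{
    if (has_pdcch_grant)
        return 1;   /* a dynamic grant on the PDCCH always wins */
    if (sps->active && sf_index >= sps->start_sf &&
        (sf_index - sps->start_sf) % sps->interval == 0)
        return 1;   /* otherwise fall back to the configured SPS occasion */
    return 0;
}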

Wednesday, May 29, 2013

LTE Physical Layer Simulation in Scilab. Please join to develop

Hi

I have four years of experience working on the 3GPP LTE physical layer.
Now I'm planning to build an LTE physical layer simulation in Scilab to help students.
If you are interested in this project, please contact me.

Regards
Ashok

Tuesday, April 16, 2013

How to Find the nearest power of 2


/* returns the greatest power of 2 less than or equal to x, branch-free
   (returns 0 for x == 0) */

unsigned
flp2(unsigned x)
{
    /* smear the highest set bit into every lower bit position */
    x = x | (x>>1);
    x = x | (x>>2);
    x = x | (x>>4);
    x = x | (x>>8);
    x = x | (x>>16);
    /* x is now a run of 1s; removing the lower half leaves only the top bit */
    return x - (x>>1);
}
It's entertaining to study it and see how it works. The only way to know for sure which of the solutions you come across will be optimal for your situation is to put them all in a test fixture, profile them, and see which is most efficient for your purpose.
Being branch-free, this one is likely to perform quite well relative to some others, but you should test it directly to be sure.
If you want the least power of two greater than or equal to x, you can use a slightly different solution:
unsigned
clp2(unsigned x)
{
    x = x - 1;          /* so that an exact power of 2 maps to itself */
    x = x | (x >> 1);   /* smear the highest set bit downwards */
    x = x | (x >> 2);
    x = x | (x >> 4);
    x = x | (x >> 8);
    x = x | (x >> 16);
    return x + 1;       /* one more than a run of 1s is the next power of 2 */
}
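
Assuming both functions above are pasted into one file ahead of main, a quick sanity check could look like this:

#include <stdio.h>

int main(void)
{
    unsigned tests[] = {1, 5, 16, 100, 1000};
    for (int i = 0; i < 5; i++)
        printf("x=%u  flp2=%u  clp2=%u\n", tests[i], flp2(tests[i]), clp2(tests[i]));
    return 0;
}

For example, 100 should give flp2 = 64 and clp2 = 128.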

Thursday, April 4, 2013

DAI (Downlink Assignment Index) in LTE


The DL assignment carries a Downlink Assignment Index (DAI) indicating how many assignments the UE should have received so far within the current bundling window. If the UE detects that the DAI differs from the number of correctly received DL assignments, it does not send any HARQ feedback and the eNB can detect this. However, the eNB cannot know which of the transmissions was missed, and thus the whole bundle has to be retransmitted.

Wednesday, April 3, 2013

RSSI,SINR,RSRP and RSRQ in LTE

RSSI, SINR, RSRP and RSRQ: these are the basic measurement quantities used in LTE.
RSSI - Received Signal Strength Indicator
SINR - Signal to Interference & Noise Ratio
RSRP - Reference Signal Received Power
RSRQ - Reference Signal Received Quality

RSRP is a measure of signal strength. It is the most important of the four, as it is used by the UE for the cell selection and reselection process and is reported to the network to aid the handover procedure. For those used to working in UMTS WCDMA, it is equivalent to CPICH RSCP.

The 3GPP spec description is "The RSRP (Reference Signal Received Power) is determined for a considered cell as the linear average over the power contributions (Watts) of the resource elements that carry cell specific Reference Signals within the considered measurement frequency bandwidth."

In simple terms the Reference Signal (RS) is mapped to Resource Elements (RE). This mapping follows a specific pattern (see below). So at any point in time the UE will measure all the REs that carry the RS and average the measurements to obtain an RSRP reading.

RSRQ is a measure of signal quality. It is measured by the UE and reported back to the network to aid the handover procedure. For those used to working in UMTS WCDMA, it is equivalent to CPICH Ec/N0. Unlike in UMTS WCDMA, though, it is not used for the process of cell selection and reselection (at least in the Rel-8 version of the specs).

The 3GPP spec description is "RSRQ (Reference Signal Received Quality) is defined as the ratio: N×RSRP/(E -UTRA carrier RSSI) where N is the number of Resource Blocks of the E-UTRA carrier RSSI measurement bandwidth."

The new term that appears here is RSSI (Received Signal Strength Indicator). RSSI is effectively a measurement of all of the power contained in the applicable spectrum (1.4, 3, 5, 10, 15 or 20 MHz). This could be signals, control channels, data channels, adjacent cell power, background noise, everything. As RSSI applies to the whole spectrum, we need to multiply the RSRP measurement by N (the number of resource blocks), which effectively scales the RSRP measurement to the whole spectrum and allows us to compare the two.
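
Expressed in dB, the definition above becomes RSRQ = 10·log10(N) + RSRP − RSSI. A minimal C sketch with illustrative values:

#include <math.h>
#include <stdio.h>

double rsrq_db(int n_rb, double rsrp_dbm, double rssi_dbm)
{
    /* dB form of the ratio N * RSRP / RSSI */
    return 10.0 * log10((double)n_rb) + rsrp_dbm - rssi_dbm;
}

int main(void)
{
    /* e.g. 50 RBs (10 MHz), RSRP = -95 dBm, RSSI = -72 dBm */
    printf("RSRQ = %.1f dB\n", rsrq_db(50, -95.0, -72.0));
    return 0;
}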

Finally SINR is a measure of signal quality as well. Unlike RSRQ, it is not defined in the 3GPP specs but defined by the UE vendor. It is not reported to the network. SINR is used a lot by operators, and the LTE industry in general, as it better quantifies the relationship between RF conditions and throughput. UEs typically use SINR to calculate the CQI (Channel Quality Indicator) they report to the network.

The components of the SINR calculation can be defined as:

  • S: the power of the measured usable signals, mainly the reference signals (RS) and physical downlink shared channels (PDSCH)

  • I: the power of measured interference signals from other cells in the current system

  • N: the background noise, which depends on the measurement bandwidth and the receiver noise figure

LTE Uplink Physical Layer


Here is a brief description of the LTE Uplink Physical Layer.


The LTE uplink consists of:


  • PUSCH
  • PUCCH
  • PRACH
  • SRS


PUSCH (Physical Uplink Shared Channel)


The physical uplink shared channel is used to transmit the uplink shared channel (UL-SCH) and L1/L2 control information. The UL-SCH is the transport channel used for transmitting uplink data (a transport block). The L1/L2 control signalling can carry the following types of information: HARQ acknowledgements for received DL-SCH blocks, channel quality reports and scheduling requests. The PUSCH uses SC-FDMA in the physical layer.

The processing blocks of the PUSCH transmitter side are shown in the figure below.





The processing blocks at the PUSCH receiver, i.e. at the eNodeB, are shown in the figure below.



More details will be added soon based on requirements.

Or you may refer to:
http://www.steepestascent.com/content/mediaassets/html/LTE/Help/PUSCH.html


Please feel free to contact me if you need any details regarding the LTE uplink. I will be happy to share the knowledge I have.



Tuesday, April 2, 2013

LTE - Long Term Evolution


LTE, an initialism of long-term evolution, marketed as 4G LTE, is a standard for wireless communication of high-speed data for mobile phones and data terminals. It is based on the GSM/EDGE and UMTS/HSPA network technologies, increasing the capacity and speed using a different radio interface together with core network improvements.[1][2] The standard is developed by the 3GPP (3rd Generation Partnership Project) and is specified in its Release 8 document series, with minor enhancements described in Release 9.

Although marketed as a 4G wireless service, LTE as specified in the 3GPP Release 8 and 9 document series does not satisfy the technical requirements the 3GPP consortium has adopted for its new standard generation, which were originally set forth by the ITU-R organization in its IMT-Advanced specification. However, due to marketing pressures and the significant advancements that WiMAX, HSPA+ and LTE bring to the original 3G technologies, the ITU later decided that LTE together with the aforementioned technologies can be called 4G technologies.[6] The LTE Advanced standard formally satisfies the ITU-R requirements to be considered IMT-Advanced.[7] To differentiate LTE-Advanced and WiMAX-Advanced from current 4G technologies, the ITU has defined them as "True 4G".

Cyclic Prefix (CP)



In telecommunications, the term cyclic prefix refers to the prefixing of a symbol with a repetition of the end. Although the receiver is typically configured to discard the cyclic prefix samples, the cyclic prefix serves two purposes.
  • As a guard interval, it eliminates the intersymbol interference from the previous symbol.
  • As a repetition of the end of the symbol, it allows the linear convolution of a frequency-selective multipath channel to be modelled as circular convolution, which in turn may be transformed to the frequency domain using a discrete Fourier transform. This approach allows for simple frequency-domain processing, such as channel estimation and equalization.

Intersymbol interference is almost completely eliminated by introducing a guard time for each OFDM symbol. The guard time is chosen to be larger than the expected delay spread, so that multipath components from one symbol cannot interfere with the next symbol. This guard time could contain no signal at all, but then the problem of intercarrier interference (ICI) would arise. Instead, the OFDM symbol is cyclically extended into the guard time. Using this method, the delayed replicas of the OFDM symbol always have an integer number of cycles within the FFT interval, as long as the delay is smaller than the guard time. Multipath signals with delays smaller than the guard time therefore cannot cause ICI.

In order for the cyclic prefix to be effective (i.e. to serve its aforementioned objectives), the length of the cyclic prefix must be at least equal to the length of the multipath channel. Although the concept of cyclic prefix has been traditionally associated with OFDM systems, the cyclic prefix is now also used in single carrier systems to improve the robustness to multipath. 
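
As a small illustration of the "repetition of the end" idea, here is a minimal C sketch of cyclic prefix insertion (the lengths are illustrative, not the 3GPP-defined ones):

#include <complex.h>
#include <string.h>

/* prepend the last cp_len samples of an n_fft-point OFDM symbol to the symbol;
   out must have room for cp_len + n_fft samples */
void add_cyclic_prefix(const double complex *symbol, int n_fft,
                       int cp_len, double complex *out)
{
    memcpy(out, symbol + n_fft - cp_len, cp_len * sizeof(double complex));
    memcpy(out + cp_len, symbol, n_fft * sizeof(double complex));
}

Any multipath delay shorter than the prefix then leaves a cyclically shifted copy of the symbol inside the FFT window, which is what makes the simple frequency-domain equalization described above possible.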


LTE Bandwidth/Resource Configuration (for normal CP – 7 OFDM symbols per slot)


Channel Bandwidth [MHz]             1.4     3       5       10      15      20
Number of resource blocks (N_RB)    6       15      25      50      75      100
Number of occupied subcarriers      72      180     300     600     900     1200
IDFT(Tx)/DFT(Rx) size               128     256     512     1024    1536    2048
Sample rate [MHz]                   1.92    3.84    7.68    15.36   23.04   30.72
Samples per slot                    960     1920    3840    7680    11520   15360
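
For convenience, here is the same table as a small C lookup (the struct and function names are my own):

#include <stdio.h>

struct lte_numerology {
    int    n_rb;             /* resource blocks */
    int    n_subcarriers;    /* occupied subcarriers */
    int    fft_size;         /* IDFT(Tx)/DFT(Rx) size */
    double sample_rate_mhz;  /* sample rate in MHz */
    int    samples_per_slot;
};

static const struct lte_numerology lte_cfg[] = {
    {   6,   72,  128,  1.92,   960 },
    {  15,  180,  256,  3.84,  1920 },
    {  25,  300,  512,  7.68,  3840 },
    {  50,  600, 1024, 15.36,  7680 },
    {  75,  900, 1536, 23.04, 11520 },
    { 100, 1200, 2048, 30.72, 15360 },
};

/* returns the entry for a given N_RB, or 0 if it is not a standard bandwidth */
static const struct lte_numerology *lookup_numerology(int n_rb)
{
    for (unsigned i = 0; i < sizeof lte_cfg / sizeof lte_cfg[0]; i++)
        if (lte_cfg[i].n_rb == n_rb)
            return &lte_cfg[i];
    return 0;
}

int main(void)
{
    const struct lte_numerology *n = lookup_numerology(50);
    if (n)
        printf("10 MHz: FFT size %d, %.2f Msps\n", n->fft_size, n->sample_rate_mhz);
    return 0;
}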

OFDM





OFDM is a multi-carrier transmission technique that distributes the data over a large number of carriers spaced apart at precise frequencies. This spacing provides the "orthogonality" of the technique, which prevents the demodulators from seeing frequencies other than their own. OFDM is sometimes called multi-carrier or discrete multi-tone modulation.

The OFDM transmission scheme has the following key advantages:
  • It makes efficient use of the spectrum by allowing subcarriers to overlap.
  • By dividing the channel into narrowband flat-fading subchannels, OFDM is more resistant to frequency-selective fading than single-carrier systems are.
  • It eliminates ISI and IFI through the use of a cyclic prefix.
  • Using adequate channel coding and interleaving, one can recover symbols lost due to the frequency selectivity of the channel.
  • Channel equalization becomes simpler than with adaptive equalization techniques in single-carrier systems.
  • It is possible to use maximum-likelihood decoding with reasonable complexity.
  • OFDM is computationally efficient, since FFT techniques can be used to implement the modulation and demodulation functions.
  • In conjunction with differential modulation, there is no need to implement a channel estimator.
  • It is less sensitive to sample timing offsets than single-carrier systems are.
  • It provides good protection against co-channel interference and impulsive parasitic noise.

In terms of drawbacks, OFDM has the following characteristics:
  • The OFDM signal has a noise-like amplitude with a very large dynamic range; therefore it requires RF power amplifiers that can handle a high peak-to-average power ratio.
  • It is more sensitive to carrier frequency offset and drift than single-carrier systems are, due to leakage of the DFT.


OFDM is used for wireline as well as wireless access.
Some of the major technologies using OFDM: 802.11a WLAN, WiMAX, LTE.

RSSI & RSRP in LTE



Received Signal Strength Indicator (RSSI) and Reference Signal Received Power (RSRP)

RSSI is the more traditional metric that has long been used to display signal strength for GSM, CDMA1X, etc., and it integrates all of the RF power within the channel passband. In other words, for LTE, RSSI measurement bandwidth is all active subcarriers.


RSRP, on the other hand, is an LTE specific metric that averages the RF power in all of the reference signals in the passband. Remember those aforementioned and depicted 100 subcarriers that contain reference signals? To calculate RSRP, the power in each one of those subcarriers is averaged. As such, RSRP measurement bandwidth is the equivalent of only a single subcarrier. 

In other words:

RSRP (Reference Signal Received Power) is the average power of the Resource Elements (RE) that carry cell-specific Reference Signals (RS) over the entire bandwidth, so RSRP is only measured in the symbols carrying RS. RSSI (Received Signal Strength Indicator), on the other hand, is a parameter which provides information about the total received wide-band power (measured over all symbols), including all interference and thermal noise.

So it would be safe to write that, in LTE, RSRP provides information about signal strength while RSSI helps in determining interference and noise information. This is the reason the RSRQ (Reference Signal Received Quality) measurement and calculation is based on both RSRP and RSSI.


Since the logarithmic ratio of 100 subcarriers to one subcarrier is 20 dB (10 × log10(100) = 20), RSSI tends to measure about 20 dB higher than RSRP does. Or, to put it another way, RSRP measures about 20 dB lower than what we are accustomed to observing for a given signal level. Thus, that superficially weak -102 dBm RSRP signal level would actually be roughly -82 dBm if it were converted to RSSI.
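
Following this back-of-the-envelope reasoning (and only that, it is not a spec formula), the conversion can be sketched in C as:

#include <math.h>
#include <stdio.h>

/* rough conversion assuming ~100 RS-bearing subcarriers, as in the text above */
double rssi_from_rsrp_approx(double rsrp_dbm, int n_subcarriers)
{
    return rsrp_dbm + 10.0 * log10((double)n_subcarriers);
}

int main(void)
{
    printf("RSRP -102 dBm -> approx RSSI %.0f dBm\n",
           rssi_from_rsrp_approx(-102.0, 100));
    return 0;
}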

To conclude, here are a few takeaways about RSSI and RSRP as signal strength measurement techniques for LTE:
  • RSSI varies with LTE downlink bandwidth. For example, even if all other factors were equal, VZW 10 MHz LTE bandwidth RSSI would measure 3 dB greater than would Sprint 5 MHz LTE bandwidth RSSI. But that does not actually translate to stronger signal to the end user.
  • RSSI varies with LTE subcarrier activity -- the greater the data transfer activity, the higher the RSSI. But, again, that does not actually translate to stronger signal to the end user.
  • RSRP does a better job of measuring signal power from a specific sector while potentially excluding noise and interference from other sectors.
  • RSRP levels for usable signal typically range from about -75 dBm close in to an LTE cell site to -120 dBm at the edge of LTE coverage.