Wednesday, January 29, 2014

LTE UE Procedure

i) UE is off
ii) UE is powered on
iii) < Frequency Search >
iv) < Cell Search > : normally a UE finds multiple cells in this process
v) < Cell Selection >
vi) MIB decoding
vii) SIB decoding
viii) < Initial RACH Process >
ix) < Registration/Authentication/Attach >
x) < Default EPS Bearer Setup >
xi) Now the UE is in IDLE mode
xii) < (If the current cell becomes weak or the UE moves to another cell region) Cell Reselection >
xiii) < (When a paging message arrives or the user makes a call) RACH Process >
xiv) < Dedicated EPS Bearer Setup >
xv) Receive data
xvi) Transmit data
xvii) (If UE power is perceived as too weak by the network) The network sends a TPC command to increase UE Tx power
xviii) (If UE power is perceived as too strong by the network) The network sends a TPC command to decrease UE Tx power
xix) < (If the UE moves to another cell region) The network and UE perform the Handover procedure >
xx) The user ends the call and the UE returns to IDLE mode

LTE Cell Search - Synchronization Procedure


The UE searches across all candidate centre frequencies of its supported bands.
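As a concrete example of the frequency raster being searched, the EARFCN-to-frequency mapping from TS 36.101 (F_DL = F_DL_low + 0.1 MHz x (N_DL - N_Offs-DL)) can be sketched for Band 1 downlink; the function name is purely illustrative:

```python
def band1_dl_freq_mhz(earfcn):
    """Band 1 downlink: F_DL_low = 2110 MHz, N_Offs-DL = 0,
    DL EARFCN range 0..599 (TS 36.101)."""
    if not 0 <= earfcn <= 599:
        raise ValueError("EARFCN outside Band 1 downlink range")
    return 2110.0 + 0.1 * earfcn

# EARFCN 300 sits in the middle of the band
assert abs(band1_dl_freq_mhz(300) - 2140.0) < 1e-9
```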

Primary Synchronization Signal (PSS)


  • achieves subframe, slot and symbol synchronisation in the time domain 
  • identifies the centre of the channel bandwidth in the frequency domain 
  • gives the physical-layer identity N_ID^(2) (0, 1 or 2)

The PSS is a Zadoff-Chu sequence, which is a CAZAC (Constant Amplitude Zero Autocorrelation) sequence.
It is transmitted on the last symbol of slots 0 and 10 for FDD, and on the third symbol of slots 2 and 12 for TDD.
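The PSS construction can be sketched as follows: a minimal NumPy version of the length-62 Zadoff-Chu sequence from TS 36.211 section 6.11.1, where root indices 25, 29 and 34 correspond to N_ID^(2) = 0, 1, 2:

```python
import numpy as np

def pss_sequence(n_id_2):
    """Length-62 Zadoff-Chu PSS sequence (TS 36.211, 6.11.1)."""
    u = {0: 25, 1: 29, 2: 34}[n_id_2]  # root index selected by N_ID^(2)
    n = np.arange(62)
    return np.where(
        n <= 30,
        np.exp(-1j * np.pi * u * n * (n + 1) / 63),
        np.exp(-1j * np.pi * u * (n + 1) * (n + 2) / 63),
    )

seq = pss_sequence(0)
assert seq.shape == (62,)
# CAZAC property: constant (unit) amplitude on every sample
assert np.allclose(np.abs(seq), 1.0)
```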

Secondary Synchronization Signal (SSS)

  • gives the cell ID group N_ID^(1) (0 to 167)
  • frame timing
  • CP length
  • whether the cell is TDD or FDD

The SSS is constructed from two interleaved maximum-length sequences (m-sequences).
It is transmitted on the second-last symbol of slots 0 and 10 in FDD, and on the last symbol of slots 1 and 11 in TDD.
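Together the two signals give the physical cell ID, which can be sketched in one line (the function name is illustrative):

```python
def physical_cell_id(n_id_1, n_id_2):
    """N_ID^cell = 3 * N_ID^(1) + N_ID^(2): 168 groups x 3 identities
    = 504 unique physical cell IDs."""
    assert 0 <= n_id_1 <= 167, "SSS gives the cell ID group, 0..167"
    assert 0 <= n_id_2 <= 2, "PSS gives the identity within the group"
    return 3 * n_id_1 + n_id_2

assert physical_cell_id(0, 0) == 0
assert physical_cell_id(167, 2) == 503  # 504 IDs in total
```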

Tuesday, January 28, 2014

DSP Code Optimization Techniques for Speed

Code can be optimized for speed or for memory.
Here I am discussing methods of optimization for speed.

Optimization can be done at different levels.

Design level

At the highest level, the design may be optimized to make best use of the available resources. The implementation of this design will benefit from a good choice of efficient algorithms and the implementation of these algorithms will benefit from being written well. The architectural design of a system overwhelmingly affects its performance. The choice of algorithm affects efficiency more than any other item of the design and, since the choice of algorithm usually is the first thing that must be decided, arguments against early or "premature optimization" may be hard to justify.

In some cases, however, optimization relies on using more elaborate algorithms, making use of "special cases" and special "tricks" and performing complex trade-offs. A "fully optimized" program might be more difficult to comprehend and hence may contain more faults than unoptimized versions.

Source code level

Avoiding poor quality coding can also improve performance, by avoiding obvious "slowdowns". After that, however, some optimizations are possible that actually decrease maintainability. Some, but not all, optimizations can nowadays be performed by optimizing compilers.
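As a small illustration of a source-level optimization, here is a sketch (function names are purely illustrative) of hoisting a loop-invariant expression out of a loop; an optimizing compiler can often do this for you, but doing it at the source level guarantees it:

```python
def scale_slow(samples, gain_db):
    out = []
    for x in samples:
        # the gain factor is recomputed on every iteration
        out.append(x * 10 ** (gain_db / 20))
    return out

def scale_fast(samples, gain_db):
    g = 10 ** (gain_db / 20)  # loop-invariant, hoisted out of the loop
    return [x * g for x in samples]

# both versions compute exactly the same result
assert scale_slow([0.5, 1.0, 2.0], 6.0) == scale_fast([0.5, 1.0, 2.0], 6.0)
```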

Build level

Between the source and compile level, directives and build flags can be used to tune performance options in the source code and compiler respectively, such as using preprocessor defines to disable unneeded software features, or optimizing for specific processor models or hardware capabilities. Source-based software distribution systems such as BSD's Ports and Gentoo's Portage can take advantage of this form of optimization.

Compile level

Use of an optimizing compiler tends to ensure that the executable program is optimized at least as much as the compiler can predict.

Using Intrinsics

Intrinsics are functions equivalent to assembly instructions, with a one-to-one or one-to-many mapping.
They can be used in code like ordinary functions.

Assembly level

At the lowest level, writing code using an assembly language designed for a particular hardware platform can produce the most efficient and compact code if the programmer takes advantage of the full repertoire of machine instructions. Many operating systems used on embedded systems have traditionally been written in assembler code for this reason. Programs (other than very small ones) are seldom written from start to finish in assembly due to the time and cost involved. Most are compiled down from a high-level language to assembly and hand-optimized from there. When efficiency and size are less important, large parts may be written in a high-level language.

With more modern optimizing compilers and the greater complexity of recent CPUs, it is harder to write more efficient code than what the compiler generates, and few projects need this "ultimate" optimization step.

Much code written today is intended to run on as many machines as possible. As a consequence, programmers and compilers don't always take advantage of the more efficient instructions provided by newer CPUs or quirks of older models. Additionally, assembly code tuned for a particular processor without using such instructions might still be suboptimal on a different processor, expecting a different tuning of the code.

Run time

Just-in-time compilers and assembler programmers may be able to perform run time optimization exceeding the capability of static compilers by dynamically adjusting parameters according to the actual input or other factors.

Self-modifying code can alter itself in response to run time conditions in order to optimize code.

Some CPU designs can perform some optimizations at runtime. Some examples include Out-of-order execution, Instruction pipelines, and Branch predictors. Compilers can help the program take advantage of these CPU features, for example through instruction scheduling.

Thursday, January 23, 2014

Why Special Subframe is needed in LTE

As the single frequency block is shared in the time domain between UL and DL, transmission in TDD is not continuous. All UL transmission must be on hold while any downlink resource is in use, and the other way around.
Switching between transmission directions incurs a small hardware delay (for both the UE and the eNodeB) that needs to be compensated. To control the switching between UL and DL, a guard period (GP) is allocated, which compensates for the maximum propagation delay of interfering components.


Within a radio frame, the transmission direction changes several times between downlink and uplink. 

The DL-to-UL switch happens in the special subframe.
The special subframe includes a DL part, a guard period and a UL part.


[Figure: guard.jpg]
Due to the different signal transit times between the eNodeB and the various mobile stations, a timing advance mechanism is used in the uplink, and a time gap called the "guard period" is needed when the transmission direction switches from downlink to uplink. However, no guard period is needed when the transmission direction switches from uplink to downlink.
[Figure: guard1.jpg]

In the uplink, as shown in the figure above, the greater the distance between the eNodeB and the mobile station, the earlier the mobile station must start transmitting. This ensures that all signals reach the eNodeB in a frame-synchronous manner. When switching from downlink to uplink, a guard period is inserted between the DwPTS and UpPTS fields of each special subframe. The duration of the guard period is configured by the network based on the cell size. The maximum possible guard period length of ten OFDM symbols allows cells with a radius of up to about 100 km.
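The relationship between guard period length and cell radius can be checked with a quick back-of-the-envelope calculation (assuming a normal-CP OFDM symbol duration of roughly 71.4 us):

```python
SPEED_OF_LIGHT = 3e8            # m/s
SYMBOL_US = 71.4                # approx. OFDM symbol duration, normal CP

gp_us = 10 * SYMBOL_US          # maximum guard period, ~714 us
# the GP must cover the round trip to the cell edge and back
max_radius_km = SPEED_OF_LIGHT * gp_us * 1e-6 / 2 / 1000

assert 100 < max_radius_km < 120   # roughly 107 km
```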


When switching from uplink to downlink there is no need for a guard period, since the uplink signals all arrive at the eNodeB in a frame-synchronous fashion - thanks to the timing advance mechanism - and the downlink data is also transmitted in the form of a frame-synchronous OFDMA signal.


DwPTS : Downlink Pilot Time Slot
UpPTS :  Uplink Pilot Time Slot

Thursday, September 19, 2013

Why Special Subframe is required in LTE?


In LTE Frame Type 2 (TD-LTE) there is a special sub-frame when switching from DL to UL but there is no special sub-frame or gap when switching from UL to DL. 

Different TDD modes

To understand this, it is important to know why a transmission gap is required when switching from DL to UL. The special subframe is made up of DwPTS, GP and UpPTS, all of which have configurable lengths, while the sum of the lengths has to be 1 ms, i.e. the length of the subframe. Now consider configuration 1, where the GP (guard period, or TTG in WiMAX) is 4 symbols long, which equates to approximately 285 us. Consider a UE-A at a distance of 10 km from the eNB and a UE-B at 45 km from the eNB. The time it takes the RF signals to reach UE-A and UE-B is:
Time for UE-A = distance/speed of light = 10000/(3x10^8) = 33.3 us
Time for UE-B = distance/speed of light = 45000/(3x10^8) = 150 us

This means that after the eNB has transmitted the last symbol of DL data and the GP starts, that last symbol is received at UE-A after 33.3 us and at UE-B after 150 us. Now, every UE takes a small amount of time to switch from Rx to Tx mode; let's assume this switching period to be 50 us (it should be less for LTE UEs, but this is just an assumption). So UE-A can start transmitting after 33.3 + 50 = 83.3 us, and its transmission takes another 33.3 us to reach the eNB. This makes the total round-trip time for UE-A equal to 33.3 + 50 + 33.3 = 116.7 us. Since the GP at the eNB is set to 285 us, UE-A is able to get its UL data to the eNB within the GP. In actual practice, every UE knows its Timing Advance from the eNB, so UE-A would wait that amount of time before transmitting so that its UL data reaches the eNB exactly at the end of the GP.

However, let's do the same analysis for UE-B. The total round-trip time for UE-B would be 150 + 50 + 150 = 350 us, which is greater than the GP (285 us), so UE-B's transmission would not reach the eNB in time for the first uplink symbol. Because of this, the GP effectively determines the maximum cell radius of a TDD system.
If there were no gap or TTG (as in WiMAX) between the DL and UL transmissions, these over-the-air timing delays and the switching period could not be compensated, so a transmission gap is needed when switching from DL to UL.
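The arithmetic above can be packaged into a small sketch (the 50 us switching time is the same assumption as in the text):

```python
SPEED_OF_LIGHT = 3e8  # m/s

def round_trip_us(distance_m, switch_us=50.0):
    """One-way OTA delay + Rx-to-Tx switching time + one-way OTA delay."""
    ota_us = distance_m / SPEED_OF_LIGHT * 1e6
    return ota_us + switch_us + ota_us

GP_US = 285.0  # 4-symbol guard period, special subframe configuration 1

assert round_trip_us(10_000) < GP_US   # UE-A at 10 km fits within the GP
assert round_trip_us(45_000) > GP_US   # UE-B at 45 km does not
```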

Now, let's consider the UL-to-DL switching. We will only consider UE-A for this example, as UE-B has already been shown to be beyond the cell range. UE-A transmits the last UL symbol and then starts switching from Tx to Rx mode. The last UL symbol reaches the eNB after 33.3 us, and the eNB switches to Tx after receiving it. It then transmits the next DL symbol, which reaches UE-A after another 33.3 us; thus UE-A has a total of 33.3 + 33.3 = 66.7 us to switch from Tx to Rx mode without any RTG being present. So, when switching from UL to DL, an RTG or GP is not really required, as the system already gets a virtual GP from the over-the-air delays.
There can be a question about UEs that are very close to the eNB: they have a very small over-the-air delay, so they might not get enough time to switch to Rx mode. There are two possible solutions for that:


- In LTE there is a 1 ms TTI, so if a UE is so close to the eNB that it cannot switch to Rx mode in time, the eNB can allocate the DL resources in the next DL subframe, giving the UE 1 ms to make the switch. In WiMAX this would not have been possible, as it has a 5 ms TTI; with no RTG, UEs close to the WiMAX BTS would have to be scheduled in the next frame, adding another 5 ms of latency.

- Secondly, the UE's Rx-to-Tx (and vice versa) switching time should by now be greatly reduced. The 50 us limitation applied to Beceem chipsets around four years ago, while I think Beceem and Intel made chipsets with switching periods of less than 20 us last year. So LTE UEs should have much lower switching times.

Thursday, July 11, 2013

3GPP LTE : Valid numbers of allocated Resource blocks in UL (Uplink)

The total number of resource blocks allocated to a user in the uplink can only be one of the following:

0, 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36, 40, 45, 48, 50, 54, 60, 64, 72, 75, 80, 81, 90, 96, 100

Condition for a valid (non-zero) number: 2^a * 3^b * 5^c
a, b, c = 0, 1, 2, ...
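The condition can be verified against the list above with a short script. The restriction exists because the SC-FDMA DFT precoder size must factor into radix-2, 3 and 5 stages for an efficient mixed-radix implementation:

```python
def valid_ul_rb_sizes(max_rb=100):
    """All values 2**a * 3**b * 5**c up to max_rb."""
    vals = set()
    for a in range(8):          # 2**7 = 128 > 100, so a = 0..7 suffices
        for b in range(5):      # 3**4 = 81 is the largest power of 3 needed
            for c in range(3):  # 5**2 = 25 is the largest power of 5 needed
                v = 2**a * 3**b * 5**c
                if v <= max_rb:
                    vals.add(v)
    return sorted(vals)

sizes = valid_ul_rb_sizes()
assert len(sizes) == 34   # matches the list above (excluding 0)
assert 7 not in sizes     # 7 has a prime factor other than 2, 3 or 5
assert 96 in sizes        # 96 = 2**5 * 3
```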

3GPP LTE : Dynamic Scheduling, Persistent Scheduling and Semi Persistent Scheduling

Dynamic Scheduling


In the physical layer, the first OFDM symbol of each subframe carries the CFI. The CFI basically tells you the number of OFDM symbols used by the PDCCH. The PDCCH carries DCI, which lets you decode the data on the PDSCH. In case you forgot, the PDSCH carries all the user data for the UEs. Now, when a UE is downloading a set of files, say from the internet, each and every subframe has PCFICH and PDCCH data in its first 3-4 OFDM symbols. This is essential when the traffic is variable or adaptive in nature, especially web data, so the control information has to be sent along with each subframe. This kind of scheduling is known as Dynamic Scheduling.

The advantage of Dynamic Scheduling is basically the flexibility to alter the amount of data in each subframe: you can push more data in one SF and less in another.

Persistent Scheduling 


Now consider a case where the amount of data expected is small and arrives at a fixed time interval. Yes, I'm talking about something like VoLTE (Voice over LTE). Voice data comes as small packets at a regular interval, which is network dependent. In such cases, sending control information in each and every subframe becomes a significant overhead relative to the payload, hurting the effective utilization of bandwidth. Thus, we use something called Persistent Scheduling, where the control information sent in one SF is retained and applied to every nth SF after it, until notified otherwise. This scheme drastically reduces the overhead.

Semi Persistent Scheduling


Now note that when there is a NACK for any of the DL data, the retransmission has to contain some extra information (probably to indicate the retransmission, the SF number and so on). So retransmissions can't simply be pushed along at the persistently scheduled time interval; in other words, you have to explicitly send the control information for retransmission SFs. Thus, pure Persistent Scheduling is rarely used; instead, a scheme known as Semi-Persistent Scheduling is used.

Semi Persistent Scheduling Example

The time interval for SPS is signalled by RRC. Termination of SPS or alteration of the time interval is also RRC-triggered.

In VoIP services, the voice data is encoded using a codec and sent. At times, the network might have to change the codec (maybe for internal reasons, or say for clarity). When you change the codec, the amount of data sent per radio frame might be different. As a result, you might have to adjust the SPS interval.

In the diagram above, also note one thing. Once SPS is triggered, every nth SF is still first checked for PDCCH data. This is because PDCCH signalling always has a higher priority. So just because you initiated SPS, it doesn't mean that it will continue until you tell it to stop; PDCCH data in that particular SF always takes priority. An example of this would be downloading a webpage during a voice call. Here, you might need PDCCH data to decode the user data.
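The PDCCH-over-SPS priority rule can be sketched roughly like this (all names and the subframe-count interface are illustrative, not from the 3GPP spec):

```python
def grant_for_subframe(sf, sps_start, sps_interval, dynamic_grants):
    """Return which grant a UE follows in subframe `sf`.
    A dynamic PDCCH grant always overrides the stored SPS grant."""
    if sf in dynamic_grants:
        return "dynamic"     # PDCCH has higher priority
    if sf >= sps_start and (sf - sps_start) % sps_interval == 0:
        return "sps"         # reuse the stored SPS allocation
    return "none"

# SPS activated at SF 0 with a 20 ms interval (VoLTE-like)
assert grant_for_subframe(40, 0, 20, set()) == "sps"
assert grant_for_subframe(40, 0, 20, {40}) == "dynamic"  # PDCCH wins
assert grant_for_subframe(41, 0, 20, set()) == "none"
```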