Wednesday, January 29, 2014

LTE UE Procedure

i) UE is off
ii) Power on UE
iii) < Frequency Search >
iv) < Cell Search > : Normally a UE finds multiple cells in this process
v) < Cell Selection >
vi) MIB decoding
vii) SIB decoding
viii) < Initial RACH Process >
ix) < Registration/Authentication/Attach >
x) < Default EPS Bearer Setup >
xi) Now UE is in IDLE mode
xii) < (If the current cell becomes weak or the UE moves to another cell region) Cell Reselection >
xiii) < (When a paging message arrives or the user makes a call) RACH Process >
xiv) < Setup Dedicated EPS Bearer >
xv) Receive data
xvi) Transmit data
xvii) (If UE power is perceived as too weak by the network) Network sends a TPC command to increase UE Tx power
xviii) (If UE power is perceived as too strong by the network) Network sends a TPC command to decrease UE Tx power
xix) < (If the UE moves to another cell region) Network and UE perform the Handover procedure >
xx) User stops the call and the UE goes back to IDLE mode
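Just as a toy illustration of the flow above (this is not any 3GPP-defined API; all state and event names below are made up for this sketch), the high-level UE behaviour can be pictured as a small state machine:

/* Toy sketch of the high-level UE flow above.
 * State/event names are illustrative only, not from any 3GPP spec. */
#include <stdio.h>

typedef enum {
    UE_OFF, UE_SEARCHING, UE_CAMPED_IDLE, UE_CONNECTED
} ue_state_t;

typedef enum {
    EV_POWER_ON, EV_ATTACH_DONE, EV_PAGING_OR_CALL, EV_CALL_END
} ue_event_t;

static ue_state_t ue_step(ue_state_t s, ue_event_t ev)
{
    switch (s) {
    case UE_OFF:         return (ev == EV_POWER_ON)       ? UE_SEARCHING   : s;
    case UE_SEARCHING:   /* frequency/cell search, MIB/SIB decoding, RACH, attach */
                         return (ev == EV_ATTACH_DONE)    ? UE_CAMPED_IDLE : s;
    case UE_CAMPED_IDLE: /* cell reselection happens here while idle */
                         return (ev == EV_PAGING_OR_CALL) ? UE_CONNECTED   : s;
    case UE_CONNECTED:   /* dedicated bearers, data Tx/Rx, TPC, handover */
                         return (ev == EV_CALL_END)       ? UE_CAMPED_IDLE : s;
    }
    return s;
}

int main(void)
{
    ue_state_t s = UE_OFF;
    s = ue_step(s, EV_POWER_ON);
    s = ue_step(s, EV_ATTACH_DONE);
    s = ue_step(s, EV_PAGING_OR_CALL);
    s = ue_step(s, EV_CALL_END);
    printf("final state: %d\n", s);   /* expect UE_CAMPED_IDLE */
    return 0;
}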

LTE Cell Search - Synchronization Procedure


The UE first searches across the candidate centre frequencies of its supported bands.

Primary Synchronization Signal (PSS)

  • achieves subframe, slot and symbol synchronisation in the time domain
  • identifies the centre of the channel bandwidth in the frequency domain
  • carries the physical-layer identity NID(2) (0, 1 or 2)

The PSS is a Zadoff-Chu sequence, which is a CAZAC (constant amplitude, zero autocorrelation) sequence.
It is transmitted in the last symbol of slots 0 and 10 for FDD, and in the third symbol of slots 2 and 12 for TDD.
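To make the Zadoff-Chu construction concrete, here is a minimal sketch in C of generating the 62 frequency-domain PSS samples as defined in TS 36.211 (root index u = 25, 29 or 34 for NID(2) = 0, 1, 2); the function and variable names are my own.

/* Sketch: frequency-domain PSS generation (TS 36.211, Sec. 6.11.1):
 *   d_u(n) = exp(-j*pi*u*n*(n+1)/63)      for n = 0..30
 *   d_u(n) = exp(-j*pi*u*(n+1)*(n+2)/63)  for n = 31..61
 * where the root index u is 25, 29 or 34 for NID(2) = 0, 1, 2. */
#include <complex.h>
#include <math.h>
#include <stdio.h>

static void gen_pss(int nid2, double complex d[62])
{
    static const int root[3] = { 25, 29, 34 };
    const double pi = acos(-1.0);
    int u = root[nid2];

    for (int n = 0; n < 62; n++) {
        double m = (n <= 30) ? (double)n * (n + 1)
                             : (double)(n + 1) * (n + 2);
        d[n] = cexp(-I * pi * u * m / 63.0);
    }
}

int main(void)
{
    double complex d[62];
    gen_pss(0, d);    /* PSS for NID(2) = 0 (root index 25) */
    printf("d[0] = %.3f %+.3fj\n", creal(d[0]), cimag(d[0]));
    return 0;
}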

Secondary Synchronization Signal (SSS)

  • gives the cell identity group NID(1) (0-167)
  • frame timing (distinguishes the first and second half of the radio frame)
  • CP length
  • TDD/FDD

The SSS is constructed from two interleaved length-31 maximum-length sequences (m-sequences).
It is transmitted in the second-to-last symbol of slots 0 and 10 in FDD, and in the last symbol of slots 1 and 11 in TDD.
Together the two signals give the physical cell ID: NcellID = 3*NID(1) + NID(2).
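The full SSS construction also involves scrambling sequences and cyclic shifts m0, m1 derived from NID(1), which I omit here; the sketch below only shows the base length-31 m-sequence and the even/odd interleaving idea, with m0 and m1 taken as inputs for illustration.

/* Sketch: the length-31 m-sequence underlying the SSS (TS 36.211, Sec. 6.11.2)
 * and the even/odd interleaving idea. The scrambling sequences and the
 * derivation of m0, m1 from NID(1) are omitted. */
#include <stdio.h>

/* Base sequence s~(i) = 1 - 2*x(i), with x(k+5) = (x(k+2) + x(k)) mod 2. */
static void base_mseq(int s[31])
{
    int x[31] = { 0, 0, 0, 0, 1 };   /* initial conditions from the spec */
    for (int k = 0; k < 26; k++)
        x[k + 5] = (x[k + 2] + x[k]) % 2;
    for (int i = 0; i < 31; i++)
        s[i] = 1 - 2 * x[i];
}

/* Interleave two cyclically shifted copies onto the 62 SSS positions
 * (scrambling omitted): even indices carry one shift, odd indices the other. */
static void sss_skeleton(int m0, int m1, int d[62])
{
    int s[31];
    base_mseq(s);
    for (int n = 0; n < 31; n++) {
        d[2 * n]     = s[(n + m0) % 31];   /* even subcarriers */
        d[2 * n + 1] = s[(n + m1) % 31];   /* odd subcarriers  */
    }
}

int main(void)
{
    int d[62];
    sss_skeleton(0, 1, d);   /* example shifts only */
    printf("d[0]=%d d[1]=%d\n", d[0], d[1]);
    return 0;
}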

Tuesday, January 28, 2014

DSP Code Optimization Techniques for Speed

Code can be optimized for speed or for memory.
Here I discuss methods for optimizing for speed.

Optimization can be done at several levels.

Design level

At the highest level, the design may be optimized to make best use of the available resources. The implementation of this design will benefit from a good choice of efficient algorithms and the implementation of these algorithms will benefit from being written well. The architectural design of a system overwhelmingly affects its performance. The choice of algorithm affects efficiency more than any other item of the design and, since the choice of algorithm usually is the first thing that must be decided, arguments against early or "premature optimization" may be hard to justify.

In some cases, however, optimization relies on using more elaborate algorithms, making use of "special cases" and special "tricks" and performing complex trade-offs. A "fully optimized" program might be more difficult to comprehend and hence may contain more faults than unoptimized versions.
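As a simple illustration of how much the algorithm choice matters, consider looking up values in a large sorted array: a linear scan costs O(n) per lookup while binary search costs O(log n), and at a million elements that gap dwarfs any low-level tuning. A minimal C sketch (function names are mine):

/* Algorithm choice: linear scan vs. binary search on a sorted array.
 * For n = 1,000,000 a linear scan does ~n comparisons per lookup,
 * while binary search does about log2(n) ~= 20. */
#include <stdio.h>
#include <stdlib.h>

static int linear_find(const int *a, int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key) return i;
    return -1;
}

static int cmp_int(const void *p, const void *q)
{
    int x = *(const int *)p, y = *(const int *)q;
    return (x > y) - (x < y);
}

int main(void)
{
    enum { N = 1000000 };
    int *a = malloc(N * sizeof *a);
    for (int i = 0; i < N; i++) a[i] = 2 * i;          /* already sorted */

    int key = 2 * (N - 1);
    printf("linear:  %d\n", linear_find(a, N, key));    /* ~N comparisons */

    int *hit = bsearch(&key, a, N, sizeof *a, cmp_int); /* ~log2(N) comparisons */
    printf("bsearch: %ld\n", hit ? (long)(hit - a) : -1L);

    free(a);
    return 0;
}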

Source code level

Avoiding poor quality coding can also improve performance, by avoiding obvious "slowdowns". After that, however, some optimizations are possible that actually decrease maintainability. Some, but not all, optimizations can nowadays be performed by optimizing compilers.
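A classic example of such an obvious slowdown is recomputing a loop-invariant value on every iteration, for instance calling strlen() in a loop condition; hoisting it out is a simple source-level fix (many optimizing compilers can also do this automatically). A small sketch:

/* Source-level fix: hoist a loop-invariant strlen() call out of the loop. */
#include <ctype.h>
#include <string.h>

/* Slow: strlen(s) is re-evaluated on every iteration -> O(n^2) overall. */
void to_upper_slow(char *s)
{
    for (size_t i = 0; i < strlen(s); i++)
        s[i] = (char)toupper((unsigned char)s[i]);
}

/* Better: compute the length once -> O(n). */
void to_upper_fast(char *s)
{
    size_t len = strlen(s);
    for (size_t i = 0; i < len; i++)
        s[i] = (char)toupper((unsigned char)s[i]);
}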

Build level

Between the source and compile level, directives and build flags can be used to tune performance options in the source code and compiler respectively, such as using preprocessor defines to disable unneeded software features, or optimizing for specific processor models or hardware capabilities. Source-based software distribution systems such as BSD's Ports and Gentoo's Portage can take advantage of this form of optimization.
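For instance, a preprocessor define set from the build system can compile an unneeded feature out entirely. The macro name and the compile command below (e.g. gcc -DENABLE_TRACE=0 -O2 app.c) are just an example for this sketch, not from any particular project.

/* Build-level switch: a feature compiled out via a preprocessor define.
 * Built with e.g.:  gcc -DENABLE_TRACE=0 -O2 app.c
 * (ENABLE_TRACE is a made-up macro name for this sketch.) */
#include <stdio.h>

#ifndef ENABLE_TRACE
#define ENABLE_TRACE 1          /* default: tracing on */
#endif

#if ENABLE_TRACE
#define TRACE(msg) fprintf(stderr, "trace: %s\n", (msg))
#else
#define TRACE(msg) ((void)0)    /* compiles to nothing: zero runtime cost */
#endif

int main(void)
{
    TRACE("starting up");
    puts("doing real work");
    return 0;
}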

Compile level

Use of an optimizing compiler tends to ensure that the executable program is optimized at least as much as the compiler can predict.

Using Intrinsics

Intrinsics are compiler-provided functions that map to assembly instructions, either one-to-one or one-to-many.
They can be called in the code like ordinary functions.
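As an example, here is a minimal sketch using the x86 SSE intrinsics from <xmmintrin.h> to add four floats at a time; it assumes an x86 target with SSE support, and the function name is mine (DSPs such as TI C6x have their own intrinsic sets).

/* Intrinsics sketch: vectorised float addition with x86 SSE.
 * _mm_loadu_ps / _mm_add_ps / _mm_storeu_ps each map to (roughly) one
 * SSE instruction, but are written like ordinary C function calls. */
#include <xmmintrin.h>

/* Adds b[] into a[], 4 floats per iteration; n is assumed to be a multiple of 4. */
void add_vectors(float *a, const float *b, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);   /* load 4 unaligned floats */
        __m128 vb = _mm_loadu_ps(&b[i]);
        _mm_storeu_ps(&a[i], _mm_add_ps(va, vb));
    }
}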

Assembly level

At the lowest level, writing code in an assembly language designed for a particular hardware platform can produce the most efficient and compact code if the programmer takes advantage of the full repertoire of machine instructions. Many operating systems used on embedded systems have traditionally been written in assembly code for this reason. Programs (other than very small programs) are seldom written from start to finish in assembly due to the time and cost involved. Most are compiled down from a high-level language to assembly and hand-optimized from there. When efficiency and size are less important, large parts may be written in a high-level language.

With more modern optimizing compilers and the greater complexity of recent CPUs, it is harder to write more efficient code than what the compiler generates, and few projects need this "ultimate" optimization step.

Much code written today is intended to run on as many machines as possible. As a consequence, programmers and compilers don't always take advantage of the more efficient instructions provided by newer CPUs or the quirks of older models. Additionally, assembly code tuned for a particular processor without using such instructions might still be suboptimal on a different processor that expects a different tuning of the code.

Run time

Just-in-time compilers and assembler programmers may be able to perform run time optimization exceeding the capability of static compilers by dynamically adjusting parameters according to the actual input or other factors.

Self-modifying code can alter itself in response to run time conditions in order to optimize code.

Some CPU designs can perform some optimizations at runtime. Some examples include Out-of-order execution, Instruction pipelines, and Branch predictors. Compilers can help the program take advantage of these CPU features, for example through instruction scheduling.
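For example, GCC and Clang provide the __builtin_expect() hint, which lets the compiler lay out code so that the common case sits on the fall-through path favoured by the CPU's branch predictor. A small sketch (the likely()/unlikely() macro names are a common convention, not part of the C language):

/* Branch-prediction hints: __builtin_expect (a GCC/Clang extension) tells the
 * compiler which outcome is expected, so it can place the hot path in the
 * fall-through direction for the CPU's branch predictor. */
#include <stddef.h>
#include <stdlib.h>

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

long sum_samples(const short *buf, size_t n)
{
    if (unlikely(buf == NULL))   /* error path: expected to be rare */
        abort();

    long acc = 0;
    for (size_t i = 0; i < n; i++)
        acc += buf[i];           /* hot loop: the common case */
    return acc;
}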

Thursday, January 23, 2014

Why Special Subframe is needed in LTE

Because a single frequency block is shared in the time domain between UL and DL, transmission in TDD is not continuous: all uplink transmission must be put on hold while any downlink resource is in use, and vice versa.
Switching between transmission directions incurs a small hardware delay (for both the UE and the eNodeB) that needs to be compensated. To control the switching between UL and DL, a guard period (GP) is allocated, which also compensates for the maximum propagation delay of interfering components.


Within a radio frame, the transmission direction changes several times between downlink and uplink. 

The DL-to-UL switch takes place in the special subframe.
The special subframe consists of a downlink part (DwPTS), a guard period, and an uplink part (UpPTS).


[Figure: guard.jpg]
Due to the different signal transit times between the eNodeB and the various mobile stations, a timing advance mechanism, together with a time gap called the “guard period”, is needed when the transmission direction switches from downlink to uplink. No guard period is needed, however, when the transmission direction switches from uplink to downlink.
[Figure: guard1.jpg]

In the uplink, as shown in the figure above, the greater the distance between the eNodeB and the mobile station, the earlier the mobile station must start transmitting. This ensures that all signals reach the eNodeB in a frame-synchronous manner. When switching from downlink to uplink, a guard period is inserted between the DwPTS and UpPTS fields of each special subframe. The duration of the guard period is configured by the network based on the cell size; the maximum possible guard period length of ten OFDM symbols allows cell sizes with a radius of about 100 km.
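As a rough sanity check of that 100 km figure (my own back-of-the-envelope numbers, assuming normal CP with roughly 71.4 µs per OFDM symbol): ten symbols give a guard period of about 714 µs, which has to absorb the round-trip propagation delay of the farthest UE, so the maximum radius is about c * GP / 2, roughly 107 km.

/* Back-of-the-envelope check (assumptions: normal CP, 1 ms subframe with
 * 14 OFDM symbols, speed of light 3e8 m/s). The guard period must absorb
 * the round-trip propagation delay of the farthest UE. */
#include <stdio.h>

int main(void)
{
    const double sym_us = 1000.0 / 14.0;    /* ~71.4 us per OFDM symbol */
    const double gp_us  = 10.0 * sym_us;    /* 10-symbol guard period ~ 714 us */
    const double c      = 3.0e8;            /* speed of light, m/s */

    double radius_km = c * (gp_us * 1e-6) / 2.0 / 1000.0;
    printf("GP = %.1f us -> max cell radius ~ %.0f km\n", gp_us, radius_km);
    return 0;
}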


When switching from uplink to downlink there is no need for a guard period, since the uplink signals all arrive at the eNodeB in a frame-synchronous fashion - thanks to the timing advance mechanism - and the downlink data is also transmitted in the form of a frame-synchronous OFDMA signal.


DwPTS : Downlink Pilot Time Slot
UpPTS :  Uplink Pilot Time Slot