As we’ve discussed, traffic to and from UEs is encapsulated in GTP-U tunnels: by encapsulating data destined for a UE, it can be routed to the correct destination (the eNB serving the UE) transparently and efficiently.
As all traffic destined for a UE arrives at the P-GW, the P-GW must be able to quickly determine which S-GW and eNB to send the encapsulated data to.
The encapsulated data is logically grouped into tunnels between each node.
A GTP tunnel exists between the S-GW and the P-GW, another GTP tunnel exists between the S-GW and the eNB.
Each tunnel between the eNB and the S-GW, and each tunnel between the S-GW and the P-GW, is identified by a unique 32-bit value called a Tunnel Endpoint Identifier (TEID). Each TEID is allocated by the node at the corresponding end of the tunnel and is locally unique to that node.
For each packet of user data (GTP-U) sent through a GTP tunnel, the sender puts the TEID allocated by the receiver in the GTP header.
The destinations of the tunnels can be updated, for example if a UE moves to a different eNB, the tunnel between the S-GW and the eNB can be quickly updated to point at the new eNB.
Each end of the tunnel is associated with a TEID, and each time a GTP packet is sent through the tunnel it includes the TEID of the remote end (the receiver) in the GTP header.
When a packet arrives from an external network, like the internet, it is routed to the P-GW.
The P-GW takes this packet and places it in another IP packet (encapsulates it) and then forwards the encapsulated data to the Serving-Gateway.
The S-GW then takes the encapsulated data it just received and sends it on inside another IP packet to the eNB.
The encapsulated data sent from the P-GW to the S-GW, and the S-GW to the eNB, is carried by UDP, even if the traffic inside is TCP.
Communication between these elements can be done using internal addressing, and this addressing information will never be visible to the UE or the external networks, and only the P-GW needs to be reachable from the external networks.
This encapsulation is done using GTP – the GPRS Tunneling Protocol.
Specifically IP traffic to and from the UE is contained in GTP-U (User data) packets.
The control data for GTP is contained in GTP-C packets, which sets up tunnels for the GTP traffic to flow through (more on that later).
To summarize, user IP packets are encapsulated into GTP-U packets, which are transported over UDP between the different nodes (P-GW, S-GW and eNB).
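As a concrete sketch, here’s the mandatory 8-byte GTP-U header built in Python (optional fields such as sequence numbers are omitted, and the TEID and payload values are made up for illustration):

```python
import struct

def gtpu_encapsulate(teid: int, inner_ip_packet: bytes) -> bytes:
    # flags 0x30 = GTP version 1, protocol type GTP, no optional fields
    # message type 0xFF = G-PDU (a packet carrying user data)
    # length counts everything after the mandatory 8-byte header
    header = struct.pack("!BBHI", 0x30, 0xFF, len(inner_ip_packet), teid)
    return header + inner_ip_packet

def gtpu_decapsulate(gtp_packet: bytes):
    # Returns (teid, inner packet); the receiver uses the TEID it
    # allocated earlier to find the right tunnel / bearer.
    _flags, _msg_type, length, teid = struct.unpack("!BBHI", gtp_packet[:8])
    return teid, gtp_packet[8:8 + length]

packet = gtpu_encapsulate(teid=0x1234ABCD, inner_ip_packet=b"...user IP packet...")
teid, inner = gtpu_decapsulate(packet)
```

The nodes in between only ever route on the outer IP/UDP headers; the inner packet comes back out unchanged at the far end of the tunnel.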
In a fixed network, if I connect my laptop to my network at home, I get a different address than if I plug it in at work.
In a mobile network UEs are often moving, but we can’t keep changing the IP address – that would lead to all sorts of issues.
The UE must maintain the same IP address, at least for the duration of their session.
Instead the IP address of a UE is allocated by the P-Gateway (P-GW) when the UE attaches.
The IP the P-GW allocates to the UE is in a subnet managed by the P-GW – that IP prefix is associated with the P-GW, so traffic is sent to the P-GW to get to the UE.
Therefore all traffic destined for the UE from external networks will be sent to the P-GW first.
Because the UE is mobile and changing places inside the network, we need a way to keep the UE’s IP even though it’s moving around, something that by default TCP/IP networks don’t cater for very well.
To achieve this we use a technique known as encapsulation: we take the complete IP packet for the UE and, instead of forwarding it to the UE at Layer 2 or Layer 3, we bundle the whole packet up inside a new IP packet that can be forwarded anywhere, regardless of what’s inside. When it reaches the eNB serving the UE, it is unpacked and sent to the UE, which is unaware of all the steps / hops it went through.
The concept is very similar to GRE or a VPN (PPTP, IPsec, L2TP, etc.), where the user’s IP packets are encapsulated inside another IP packet.
We encapsulate the data using GTP – GPRS Tunneling Protocol.
From a Layer 3 perspective the contents of the GTP packet (another IP packet) are irrelevant – it’s just handled like any other IP packet being sent from the P-GW to the S-GW.
Getting traffic to the UE
When a packet is sent to the UE’s IP Address from an external destination, it’s first sent to the P-GW, as the P-GW manages that IP prefix.
The P-GW identifies the packet as being destined for a UE, so it encapsulates the entire packet for the UE, by wrapping it up inside a GTP packet.
The P-GW then forwards the GTP packet to the serving S-GW’s IP address, so the S-GW can forward it to the eNB to get it to the UE.
(The P-GW forwards the traffic to the S-GW so the S-GW can get it to the correct eNB for the UE, the reason for having two nodes to manage this is so it can scale better)
Once the traffic gets to the eNB serving the UE, the eNB de-encapsulates the data, so it’s now got an IP packet with the destination IP of the UE.
It takes the data and puts it onto a transport block sent to the specific RNTI of the UE.
The hops between the UE and the P-GW are transparent to the UE – it doesn’t see the IP Address of the eNB or the S-GW, or any of the routers in between.
As the traffic is point-to-point, the headers vary very little, so they are predictable and can be compressed efficiently.
A VoLTE media stream, for example, consists of a 40 byte IPv6 header, an 8 byte UDP header, a 12 byte RTP header and 30 bytes of RTP payload.
This means that we have 60 bytes of headers and only 30 bytes of data, which is a very inefficient use of resources, so by compressing this data we can shrink this substantially.
Handover Mitigation
On previous 3GPP RANs, packet loss and reordering were common during handovers between NodeBs.
The E-UTRAN specs have minimized this as much as possible: using PDCP, the handing-off eNB can transfer information about data still to be delivered to the UE to the eNB the UE is handing over to.
Security
As the radio link is particularly vulnerable to eavesdropping, PDCP offers another independent ciphering and integrity control mechanism to verify data is not modified / intercepted.
Usage of PDCP Functionality
Not all these functionalities are used for all types of traffic, as shown in the table below.
Recap of Radio Interfaces
PDCP, discussed in this post, interfaces the radio interface with the core network.
A summary of the hierarchy is shown here with user data in pink and control data in blue:
The RNTI is shown in a dotted box as the RNTI is not transmitted as a header on the transport block but is logically associated with the transport block thanks to the allocation table.
The problem comes when it’s time to add a new UE to an eNB: the UE needs to be allocated an RNTI before it can request or be allocated resources.
In the uplink a group of resources is reserved so any new UE can indicate its presence and be assigned an RNTI, so it can go on to request and be allocated resources.
This is done on the Physical Random Access Channel (PRACH), made up of 6 resource blocks, and occurs every 1-20ms depending on what the operator has configured.
Access to the PRACH uses CDMA (Code Division Multiple Access). Without going into the mechanics of CDMA, the important thing to note is that two transmissions can occur at the same time, and as long as each uses a different one of the 64 available codes, the eNB will be able to distinguish between the two transmissions.
When attempting to associate, the UE sends a CDMA symbol using one of the 64 sequence codes across all 6 resource blocks. As discussed, the eNB will still be able to determine the code used even if multiple UEs transmit at the same time, each hoping to associate with the eNB.
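The detection principle can be sketched in Python using simple orthogonal codes (real LTE preambles are Zadoff-Chu sequences, not the short Walsh-style codes used here, but the correlation idea is the same):

```python
# Two UEs transmit on the PRACH at the same time, each spreading its
# transmission with a different orthogonal code. The eNB correlates the
# combined signal against every known code to detect which were used.
codes = {
    0: [1, 1, 1, 1, -1, -1, -1, -1],
    1: [1, -1, 1, -1, 1, -1, 1, -1],
    2: [1, 1, -1, -1, -1, -1, 1, 1],
}

def correlate(signal, code):
    return sum(s * c for s, c in zip(signal, code)) / len(code)

# UE A uses code 0 and UE B uses code 1, simultaneously:
received = [a + b for a, b in zip(codes[0], codes[1])]

detected = [i for i, code in codes.items() if correlate(received, code) > 0.5]
# detected == [0, 1]: both transmissions distinguished; code 2 correlates to 0
```

Because the codes are orthogonal (their dot product is zero), each correlation picks out exactly one transmission even though they arrived on top of each other.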
UE Attach and RNTI Assignment
The UE begins by listening to the eNB to identify when the Physical Random Access Channel (PRACH) is scheduled.
Once the UE knows when the PRACH is going to be it transmits one of the 64 possible CDMA codes on the PRACH in all 6 of the resource blocks in the Random Access Channel.
The eNB detects the transmission and which one of the 64 CDMA codes was used by the UE wishing to attach, and the eNB assigns it an RNTI.
At this point only the eNB knows the RNTI; it needs to let the UE know its assigned RNTI so scheduling can start.
The eNB creates a new identifier, the RA-RNTI (Random Access RNTI). This is calculated using the CDMA code used by the UE in its transmission on the PRACH and the RNTI to be assigned.
The eNB then allocates a resource for that RNTI so the UE can send a response back in the form of a Connection Request containing the TMSI.
The eNB then echos back the connection request on the channel allocated to the RNTI.
The echo procedure means that if two UEs happened to use the same CDMA code, and both believed they were the owner of the RNTI assigned by the eNB, then either the eNB received only one of the responses – in which case the other UE detects the wrong identity in the echo and starts the random access procedure again – or both responses were lost and both UEs start the random access procedure again, as shown below:
As we can see, the eNB received TMSI1’s Connection Request and sent back the echo; TMSI1 confirmed it and continued the setup procedure. TMSI2’s Connection Request was not received by the eNB, and TMSI2 knows this because the echo did not contain its TMSI. TMSI2 detects the wrong identity, stops that process, and starts the random access procedure again.
The Radio Link Control (RLC) layer sits above the MAC layer and can manage:
Re-sequencing of blocks held up by HARQ
Concatenates / segments messages to fit into the size defined by the MAC layer
Re-transmits lost blocks (independently of HARQ on the MAC layer)
These functions are set out and managed based on which of the 3 RLC modes is used, chosen according to the QoS requirements of the traffic type.
RLC Modes
RLC has 3 services or modes that can be used depending on the type of data transmitted:
Transparent Mode (TM)
Does not offer any RLC features / services
Can only be used for short messages (As no segmentation to fit MAC requirements)
Mainly used for signaling messages
Unacknowledged Mode (UM)
Re-Sequences data if received out of order
Segments data according to MAC needs / limitations
Low latency but no re-transmission on the RLC layer
Suitable for VoLTE / real time communications
Does not re-transmit lost packets
Acknowledged Mode (AM)
Like UM but adds re transmission of lost packets
Higher latency but more reliable
Suitable for web browsing, file transfer, etc.
Upon valid receipt of a message the receiver sends an ACK on the data channel.
Several different RLC modes/services can be used at the same time by a single UE, as we saw in the last post:
The MAC layer takes packets from each of the different RLC streams and packs them into MAC SDUs.
Here we can see 3 different RLC SDUs being packet into MAC SDUs.
RLC SDU 1 is packed into an RLC PDU along with RLC SDU 2. These two are concatenated together, and RLC adds a header to delineate the start of RLC SDU 1 and the start of RLC SDU 2.
The header allows the receiver to determine where each RLC SDU starts and ends and the sequence number of each RLC SDU.
Part of RLC SDU 3 is also packed into the first RLC PDU, and the second part is packed into the next RLC PDU. RLC is said to have segmented (or fragmented) this message, as it splits it across multiple RLC PDUs for transmission. Again RLC adds headers to indicate that the data it contains is split across multiple RLC PDUs.
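A rough sketch of this concatenation and segmentation (the "header" here is just a list of piece lengths – a real RLC header also carries sequence numbers and framing-info bits marking continued segments):

```python
# Pack RLC SDUs into fixed-size RLC PDU payloads, concatenating small
# SDUs and segmenting SDUs that don't fit in the remaining space.
def rlc_pack(sdus, pdu_size):
    pdus, payload, lengths = [], b"", []
    for sdu in sdus:
        while sdu:
            space = pdu_size - len(payload)
            piece, sdu = sdu[:space], sdu[space:]
            payload += piece
            lengths.append(len(piece))
            if len(payload) == pdu_size:
                pdus.append((lengths, payload))
                payload, lengths = b"", []
    if payload:
        pdus.append((lengths, payload))
    return pdus

# SDU1 and SDU2 are concatenated into the first PDU along with the start
# of SDU3; the rest of SDU3 is segmented into the second PDU.
pdus = rlc_pack([b"AAA", b"BBB", b"CCCCC"], pdu_size=8)
# pdus == [([3, 3, 2], b"AAABBBCC"), ([3], b"CCC")]
```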
The MAC layer (Media Access Control) handles error correction and multiplexes the different services to and from the same UE.
Automatic Repeat Request (ARQ)
When data is sent, a CRC (Cyclic Redundancy Check) is added, containing a checksum of the data contained in the message.
The receiver runs the same CRC calculation on the data, and if the value it calculates does not equal the CRC value it received, it knows the data is not correct/complete.
There are 3 scenarios shown below:
Scenario 1 – Data is sent and the CRC calculated by the sender matches the CRC calculated by the receiver. An ACK is sent to confirm the data was received correctly.
Scenario 2 – Data is sent and the CRC calculated by the sender does not match the CRC calculated by the receiver. The receiver sends a NACK (Negative Acknowledgement), so the sender sends the data again; this time the CRC matches, so an ACK is sent to confirm the data was received correctly.
Scenario 3 – Data is sent but no ACK or NACK is received. This could mean the data was not received, or that the ACK/NACK was lost. The sender sends the message again. This process is repeated a set number of times, after which, if no response is received, the sender gives up.
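The check itself can be sketched like this, using CRC-32 for illustration (LTE actually uses CRC-24/CRC-16 variants, but the principle is identical):

```python
import zlib

# Sender appends a CRC of the payload; receiver recomputes and compares.
def send(payload):
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def receive(block):
    payload, crc = block[:-4], int.from_bytes(block[-4:], "big")
    if zlib.crc32(payload) == crc:
        return "ACK", payload        # Scenario 1: data intact
    return "NACK", None              # Scenario 2: corruption detected

ok = receive(send(b"user data"))               # ("ACK", b"user data")
corrupted = bytearray(send(b"user data"))
corrupted[0] ^= 0xFF                           # flip bits in transit
bad = receive(bytes(corrupted))                # ("NACK", None)
```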
Acknowledgement
This technique is called Send and Wait ARQ, because the sender must send the data and wait for an ACK/NACK, and will automatically request re-transmission.
Because the CRC may take some time to calculate, the receiver is given time to process it, and the ACK/NACK is sent 4ms after the data was received.
If a NACK is received the data is re-transmitted 4ms after receipt of the NACK.
This means all up it takes up to 8ms (8 subframes) to send the data, wait for the response and send again if needed. During this time no other data would be sent.
As you can imagine this isn’t a particularly efficient use of time or resources, so the EUTRAN specs define 8 Send and Wait processes in parallel.
While the first process is blocked waiting for an ACK/NACK, another process can transmit. This is called Parallel Send and Wait.
The problem with this is that it can lead to data being received out of sequence: if data is sent and a re-transmission is needed (the sender received a NACK), that data will arrive after the data sent 8 subframes after it.
Here we can see Block 2 was lost, a NACK was sent and a re-transmission occurs 8 subframes later, long after Block 3 and Block 4 were received.
The MAC layer does not deal with re-sequencing, this is managed by the RLC layer above the MAC layer.
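A minimal sketch of that re-sequencing logic, assuming simple incrementing sequence numbers:

```python
# Buffer out-of-order blocks and release them in sequence-number order,
# the way RLC re-sequences blocks that HARQ re-transmissions delivered late.
def make_resequencer():
    buffer, delivered = {}, []
    expected = 1
    def receive(seq, data):
        nonlocal expected
        buffer[seq] = data
        while expected in buffer:            # release any contiguous run
            delivered.append(buffer.pop(expected))
            expected += 1
        return list(delivered)
    return receive

receive = make_resequencer()
receive(1, "block1")
receive(3, "block3")                 # block 2 lost: block 3 is held back
receive(4, "block4")
order = receive(2, "block2")         # re-transmission arrives, 2-4 released
# order == ["block1", "block2", "block3", "block4"]
```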
Hybrid ARQ
LTE relies on Hybrid ARQ to increase redundancy and improve the chances of decoding a corrupted message correctly.
We talked about coding – sending multiple copies of the same data and comparing them to find the common features that would indicate correct data, Hybrid ARQ functions in much the same way.
To increase error correction performance, the receiver keeps the invalid/corrupt message it sent a NACK for, so it can combine it with the re-transmitted version and hopefully decode the message correctly, even if the re-transmission is also corrupted.
It is called Hybrid because it combines this error correction coding with ARQ. The MAC layer has to communicate with the physical layer to let it know this is a re-transmission and not a new transmission, so the copies can be combined.
Multiplexing on the MAC Layer
You may use your smartphone (UE) for a voice call while looking something up online and receiving push notifications. While these are 3 distinct streams of data, there is only one stream of data between the eNB and the UE.
These different types of data all need to be combined into one “pipe” between the eNB and UE; this is known as multiplexing.
The RLC layer has multiple types of data arranged in logical channels, but this data has to be put into a MAC PDU and sent over the air.
In the standard networking model, data units in an upper layer are called SDUs (Service Data Units), and data units in a lower layer are called PDUs (Protocol Data Units).
To form the transport blocks the MAC layer must take each of the SDUs from the RLC layer and put it into the transport block, as shown in the image above.
The MAC header contains the delineation of what data is for which SDU on the RLC layer.
To inform UEs of which resources are allocated to it, the eNB regularly publishes Allocation Tables with this information.
Resources are allocated dynamically, by the eNB to all the UEs it is serving.
Because the eNB manages all the resources, the eNB must inform the UEs which resources are allocated to which UEs.
This is broken into two functions:
A UE must be able to be informed it’s going to receive data (downlink) and be allocated the resources for it.
A UE must be able to request resources from the eNB to send data (uplink) and be allocated resources for it.
The eNB manages all resource allocation, for downlink and uplink, when they are needed. This is done through an allocation table published by the eNB every subframe (1ms).
There are two allocation tables – One for uplink, one for downlink.
Addressing on the Radio Interface – RNTI
As an allocation table needs to allocate resources to each UE it needs a way to address them.
GUTI, IMSI, TMSI, etc. are all too long (allocation tables are published every subframe, so they need to be as small as possible).
Instead, for addressing in the allocation tables, an RNTI (Radio Network Temporary Identifier) is issued by the eNB to each UE it is serving. The RNTI is only valid for that cell; if the user moves to another cell served by another eNB, a new RNTI is allocated by that eNB.
The RNTI is 16 bits long, meaning it can represent 65,536 values (enough for 65,536 UEs per cell).
Allocation on Downlink
Resource allocation for the downlink is managed by the eNB, which publishes allocation tables every subframe defining which resource blocks are allocated to which UE.
The allocation table contains the RNTI of each UE that is to receive data, and the resource blocks its data will be contained in.
Each UE listens for the allocation tables published in each subframe, and if the UE sees its own RNTI in the allocation table, it listens on the resource blocks allocated to it.
In the example above we can see the allocation table in the dark blue colour, published every 1ms (aka every subframe).
In this example the UE that has been assigned RNTI 63 (represented in green) has resource blocks 12 & 13 assigned to it, so it will listen on 12 and 13 to get its downlink data.
Because UEs only listen for the allocation tables and the resource blocks assigned to them, they don’t need to listen to / decode all resource blocks, which saves power on the UE and translates to better battery efficiency.
The UE with RNTI 61 for example, does not get allocated any resource blocks in the downlink in the example allocation table, so it listens for the allocation table and then goes into standby mode until the next allocation table is published.
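The UE’s side of this can be sketched as a simple lookup (RNTI 63’s allocation matches the example above; RNTI 64’s entry is invented for illustration):

```python
# One downlink allocation table is published each 1ms subframe, mapping
# scheduled RNTIs to resource blocks.
allocation_table = {63: [12, 13], 64: [14, 15]}

def listen_for_subframe(my_rnti, table):
    # The UE decodes only the allocation table; if its RNTI isn't listed
    # it can sleep until the next table is published, saving battery.
    return table.get(my_rnti, [])

rbs_63 = listen_for_subframe(63, allocation_table)   # [12, 13]: decode those RBs
rbs_61 = listen_for_subframe(61, allocation_table)   # []: standby this subframe
```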
The allocation tables are contained in the Physical Downlink Control Channel (PDCCH), a channel used only by the eNB to broadcast resource allocation tables and control data.
The actual downlink data for each UE is contained in the Physical Downlink Shared Channel (PDSCH).
Allocation in the Uplink
Allocation in the uplink is similar to allocation in the downlink, however there are some important differences.
The UE must request the resources from the eNB and wait for them to be allocated in the next uplink resource block.
There is a 4ms delay between a resource block being allocated in an allocation table by the eNB for the uplink and it being used by the UE to send data. This gives the UE time to get the data ready to go into the resource block.
The UE requests a resource from the eNB (covered later) and the eNB publishes an allocation table in the next subframe, however this allocation table is to be used in 4 subframes time.
The UE then buffers this allocation table and uses it in 4 subframes time.
Having this delay / allocating resources in advance gives our UEs time to prepare the message for transmission: encoding it, modulating it, and so on.
The image below shows the UE in red requesting a resource for uplink from the eNB, the eNB then publishes the allocation table for 4 subframes time, the UE waits for 4 subframes to pass and then the UE transmits using the resources allocated in the allocation table published 4ms prior.
For example, in the image below, the UE with the RNTI of 64, represented in light blue, has requested a resource to send data (uplink). The eNB publishes an uplink allocation table in the next subframe, and the UE then has 4 subframes to prepare the data for transmission before sending it using the resources allocated in the allocation table sent 4 subframes prior.
Like in the Downlink, Uplink transmissions are managed by a Control Channel and data is contained within a Data Channel.
The Physical Uplink Control Channel (PUCCH) contains the control information and the resource tables for the uplink (to be used in 4 subframes’ time), shown in gray.
The data being sent from the UEs is contained in Physical Uplink Shared Channels (PUSCH) allocated 4ms prior in a PUCCH.
When a UE has data to transmit it transmits on the PUCCH to request a resource block for the uplink data.
As spectrum is scarce and expensive, it must be used wisely and shared across multiple users.
LTE shares spectrum in both frequency and time.
LTE can use bandwidths from 1.4MHz to 20MHz, based on the spectrum owned and the needs of the area.
Spectrum is divided into sub-carriers, allowing each subcarrier to be allocated to a different user, and these subcarriers are re-allocated by the eNB based on the terminal’s needs.
Resource Element (RE)
A Resource Element is the time and frequency a single symbol can be transmitted on.
Resource Elements are allocated by the eNB to UEs, and a UE transmits one symbol on its allocated resource element.
The size of the data in the symbol is defined by the MCS used.
One Resource Element occupies one 15kHz subcarrier and lasts 66μs.
Resource Blocks (RB)
Because resource elements are so small, they’re managed in Resource Blocks.
Each Resource Block lasts 0.5ms and spans 12 subcarriers, allowing for 84 Resource Elements per Resource Block (12 subcarriers × 7 symbols).
The number of Resource Blocks that can be used is determined by the spectrum available.
As a Resource Block occupies 180kHz of bandwidth (12 subcarriers × 15kHz), how many Resource Blocks we can have is determined by how many will fit into our bandwidth.
A system using the minimum bandwidth of 1.4MHz will have 6 RBs available (6 RBs occupy 1.08MHz; the rest is guard band), while one using the maximum of 20MHz will have 100 RBs available.
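These bandwidth-to-RB counts come from a fixed mapping in the specs rather than straight division, since guard bands consume part of each channel. A quick sketch of the arithmetic:

```python
# Standard LTE channel-bandwidth-to-RB mapping (MHz -> resource blocks).
# e.g. 100 RBs x 180 kHz = 18 MHz of a 20 MHz channel, remainder is guard band.
RBS_PER_BANDWIDTH = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

def resource_elements_per_subframe(bandwidth_mhz):
    rbs = RBS_PER_BANDWIDTH[bandwidth_mhz]
    # 2 RBs per 1 ms subframe, 84 REs per RB (12 subcarriers x 7 symbols),
    # ignoring the REs reserved for control channels
    return rbs * 2 * 84

res = resource_elements_per_subframe(20)   # 100 RBs -> 16800 REs per subframe
```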
Not all the REs in an RB can be used by terminals though, many of them are reserved for LTE control channels.
Meaning only the white REs shown above can be filled with user traffic.
Sub-Frame
Every 1ms (the length of 2 Resource Blocks) LTE reallocates the RBs to the terminals that need to communicate.
This means Resource Blocks are allocated in pairs; each pair is called a subframe and lasts 1ms.
Subframe, RB, RE Hierarchy
Each subframe is 1ms long and made up of two 0.5ms Resource Blocks.
Each Resource Block contains 84 Resource Elements, each of which contain one symbol of data.
Resource Allocation in Uplink
When a device needs to transmit data it is allocated one or more resource blocks.
If that number of resource blocks is not enough, more can be allocated in subsequent subframes.
The amount of data a device can transmit in each subframe is called a Transport Block, and its size is determined by the number of RBs allocated and the modulation and coding scheme (MCS) used.
The subframe containing data for various terminals is shown below in different colors.
Transmission Chain
Transport Blocks are filled with data based on the Transport Block size.
CRC is added to detect errors.
Data is encoded to help recover data containing errors. (Defined by MCS)
Data is modulated (Using modulation scheme defined by MCS)
Data is transmitted in the user-data part that has been allocated in one or more Resource Block Pairs.
The E-UTRAN relies on Phase Shift Keying to modulate data.
The downlink uses Orthogonal Frequency Division Multiplexing (OFDM), while the uplink uses SC-FDMA, as OFDM’s high peak-to-average power ratio makes it unsuitable for the uplink given the power consumption constraints of UEs.
Binary Phase Shift Keying (BPSK)
The simplest modulation is Binary Phase Shift Keying: the phase is left unmodified to encode a 0, or offset by 180 degrees (aka π) to transmit a 1.
While each bit of data is being transmitted, the time it is being sent over the air is referred to as the symbol length.
Quaternary Phase Shift Keying (QPSK)
QPSK adds two additional phase states, allowing us to send twice as much data in one symbol.
Instead of two states (phase unmodified, phase offset by π), four states are defined:
Data | Phase Offset
00 | π/4
11 | 5π/4
01 | 3π/4
10 | 7π/4
This means we can transmit double the number of bits in a single symbol – with QPSK we now transmit 2 bits per symbol, as per the table above.
This means the data rate of QPSK is twice that of BPSK.
BPSK vs QPSK
Thanks to interference, drift, Doppler shift, etc., our modulated data probably isn’t going to be received at exactly the same offset it was sent at.
Our phase shift isn’t going to land exactly on the red dot in the circle, but somewhere nearby.
The receiver will determine the phase of the signal based on its proximity to a known phase shift angle.
Because QPSK has more phase states than BPSK we get a higher data rate, but as the received data isn’t going to land exactly on the defined phase offsets, the states may overlap and the receiver will not recover the correct information.
Channel conditions restrict the modulation techniques we can use. BPSK is slower but more reliable, while QPSK is faster but more error prone due to its lower tolerances.
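That nearest-phase decision can be sketched as follows (the π/4-spaced, Gray-coded constellation here is an assumption for illustration – any four equally spaced phases behave the same way):

```python
import cmath

# Bit pairs map to four equally spaced phase offsets on the unit circle.
CONSTELLATION = {
    "00": cmath.exp(1j * cmath.pi / 4),
    "01": cmath.exp(3j * cmath.pi / 4),
    "11": cmath.exp(5j * cmath.pi / 4),
    "10": cmath.exp(7j * cmath.pi / 4),
}

def demodulate(symbol):
    # Pick the constellation point nearest to what was actually received:
    # the phase doesn't need to match exactly, just be closest.
    return min(CONSTELLATION, key=lambda bits: abs(symbol - CONSTELLATION[bits]))

sent = CONSTELLATION["11"]
received = sent * cmath.exp(0.2j)   # small phase drift on the channel
bits = demodulate(received)         # still decoded as "11"
```

With more states packed onto the circle (16QAM, 64QAM) the same drift is more likely to push a symbol closer to the wrong point, which is exactly the reliability trade-off described above.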
Transmission Reliability
Error correction is needed in LTE to make sure the message can be reconstructed correctly by the receiver.
To do this, in a simple form LTE adds redundant data.
For example, sending 3 copies of the data increases the chance one will get through correctly, and gives the receiver information to determine the right data.
(If only two copies were sent to increase the reliability, the receiver wouldn’t know which one was the correct one.)
Let’s take an example of sending the message “Hello World” and look at the 3 copies sent.
Copy 1: Helso Wdrld
Copy 2: H1llo Worlp
Copy 3: qello Uorld
Correct Data: Hello World
By looking at what’s common we can see that the first letter is H in the first two copies, but not in the third copy, so we can say with some surety that the first letter is H.
The second letter is e in copy 1 and copy 3, so we can again say the second letter is e.
This is a simplified example of coding the data with redundant data to aid in reconstruction.
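The character-wise majority vote can be sketched as:

```python
def majority_vote(copies):
    # Take the most common character at each position across the copies.
    return "".join(max(set(chars), key=chars.count) for chars in zip(*copies))

copies = ["Helso Wdrld", "H1llo Worlp", "qello Uorld"]
recovered = majority_vote(copies)   # "Hello World"
```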
The ratio of useful information / total transmitted is called the coding rate.
LTE coding rates can vary from 1/3 for extensive error correction, to close to 1 for almost no error correction.
Modulation Coding Scheme (MCS)
As channel conditions change continuously for each terminal/UE, LTE has to change the modulation technique and coding rate dynamically to match.
The Modulation Coding Scheme is the combination of modulation and coding scheme used, and this changes/adapts in real time based on the signal conditions, independently for each terminal/UE.
HTable is Kamailio’s implementation of hash tables – a database-like data structure that runs in memory and is very quick.
Its uses only become apparent once you’ve been exposed to it.
Let’s take an example of protecting against multiple failed registration attempts.
We could create a SQL table called registration_attempts, and each time an attempt failed, log the time and attempted username.
Then, before we respond to traffic, we’d query the database, find out how many rows match the username being attempted, and if it’s more than a threshold we set, send back a rate limit response.
The problem is that’s fairly resource intensive: the SQL data is read from and written to disk, and both operations are slow.
Enter HTable, which achieves the same thing with an in-memory database, that’s lightning fast.
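Conceptually, what we’re about to build with htable amounts to this in-memory counter with auto-expiry (a Python sketch rather than Kamailio config; names and values are illustrative):

```python
import time

# An in-memory failure counter per source IP with auto-expiry: the same
# idea as an htable with autoexpire, minus the SQL reads and writes.
class FailureCounter:
    def __init__(self, expiry_seconds=600):
        self.expiry = expiry_seconds
        self.failures = {}                 # ip -> (count, time of last failure)

    def record_failure(self, ip):
        count, _ = self.failures.get(ip, (0, 0.0))
        self.failures[ip] = (count + 1, time.monotonic())

    def is_blocked(self, ip, threshold=5):
        if ip not in self.failures:
            return False
        count, last = self.failures[ip]
        if time.monotonic() - last > self.expiry:
            del self.failures[ip]          # expiry passed: all is forgiven
            return False
        return count > threshold

limiter = FailureCounter()
for _ in range(6):                         # six failed attempts from one IP
    limiter.record_failure("203.0.113.5")
blocked = limiter.is_blocked("203.0.113.5")   # True: over the threshold of 5
```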
Basic Setup
We’ll need to load the htable module and create an htable called MessageCount to store data in:
$sht(MessageCount=>test) is the logical link to the Htable called MessageCount with a key named test. We’re making that equal itself + 1.
We’re then outputting the content of $sht(MessageCount=>test) to xlog too, so we can see its value in Syslog.
Now each time a new dialog is started the MessageCount htable key “test” will be incremented.
We can confirm this in Syslog:
ERROR: : MessageCount is 1
ERROR: : MessageCount is 2
We can also check this in kamcmd too:
htable.dump MessageCount
Here we can see in MessageCount there is one key named “test” with a value of 6, and it’s an integer. (You can also store Strings in HTable).
So that’s all well and pointless, but let’s make it a bit more useful and report on how many SIP transactions we get per IP. Instead of storing our values with the key name “test”, we’ll name them based on the source IP of the message, which lives in the pseudovariable $si (Source IP Address).
I’m calling the boilerplate AUTH block, and I’ve added some logic to increment the AuthCount for each failed auth attempt, and reset it to $null if authentication is successful, thus resetting the counter for that IP Address.
Now we’ve done that we need to actually stop the traffic if it’s failed too many times. I’ve added the below check into REQINIT block, which I call at the start of processing:
if($sht(AuthCount=>$si) > 5){
xlog("$si is back again, rate limiting them...");
sl_send_reply("429", "Rate limiting");
exit;
}
Now if AuthCount is more than 5, it’ll respond with a Rate Limiting response.
Because in our modparam() setup for AuthCount we set an expiry, after the expiry period (10 minutes) all will be forgiven and our blocked UA can register again.
Advanced Usage / Notes
So now we’ve got Kamailio doing rate limiting, it’s probably worth mentioning the Pike module, which can also be used.
You’ll notice if you reboot Kamailio all the htable values are lost, that’s because the hashes are stored in memory, so aren’t persistent.
You have a few options for making this data persistent:
By using DMQ you can Sync data between Kamailio instances including htable values.
kamcmd can view, modify & manipulate htable values.
As we’ve seen before we can dump the contents of an htable using:
kamcmd htable.dump MessageCount
We can also add new entries & modify existing ones:
kamcmd htable.seti MessageCount ExampleAdd s:999
htable.seti is for setting integer values, we can also use htable.sets to set string values:
htable.sets MessageCount ExampleAdd Iamastring
We can also delete values from here too, which can be super useful for unblocking destinations manually:
htable.delete MessageCount ExampleAdd
As always code from this example is on GitHub. (Please don’t use it in production without modification, Authentication is only called on Register, and it’s just built upon the previous tutorials).
There are a number of ways to feed Homer data, in this case we’re going to use Kamailio, which has a HEP module, so when we feed Kamailio SIP data it’ll use the HEP module to encapsulate it and send it to the database for parsing on the WebUI.
We won’t actually do any SIP routing with Kamailio, we’ll just use it to parse copies of SIP messages sent to it, encapsulate them into HEP and send them to the DB.
We’ll be doing this on the same box that we’re running the HomerUI on, if we weren’t we’d need to adjust the database parameters in Kamailio so it pushes the data to the correct MySQL database.
Next we’ll need to configure captagent to capture data and feed it to Kamailio. There are two things we’ll need to change from the default: the first is the interface we capture on (by default it’s eth0, but this Ubuntu box uses ens33 as its first network interface ID), and the second is the HEP destination we send our data to (by default it’s port 9061, but our Kamailio instance is listening on 9060).
We’ll start by editing captagent’s socket_pcap.xml file to change the interface we capture on:
vi /etc/captagent/socket_pcap.xml
Next we’ll edit the port that we send HEP data on
vi /etc/captagent/transport_hep.xml
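In transport_hep.xml the HEP destination is set with the capture host and port parameters, which we point at our Kamailio instance. A sketch, assuming the parameter names from the stock captagent config:

```
<!-- in /etc/captagent/transport_hep.xml -->
<param name="capt-host" value="127.0.0.1"/>
<param name="capt-port" value="9060"/>
```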
And finally we’ll restart captagent
/etc/init.d/captagent restart
Now if we send SIP traffic to this box it’ll be fed into HOMER.
In most use cases you’d use a port mirror, so you may need to set the network interface that’s the destination of the port mirror in socket_pcap.xml.
HOMER is a popular open source SIP / RTP debug / recording tool.
Its architecture is pretty straightforward: we have a series of Capture Agents feeding data into a central HOMER Capture Server, which runs a database (today we’re using MySQL), a Homer-UI (running on Apache), a Homer-API (also running on Apache) and a HEP processor, which takes the HEP-encoded data from the Capture Agents and runs on Kamailio. (That’s right, I’m back rambling about Kamailio.)
So this will get the web interface and DB backend of HOMER set up.
For HOMER to actually work you’ll need to feed it data, in the next tutorial we’ll cover configuring a capture agent to feed the HEP processor (Kamailio) which we’ll also setup, but for now we’ll just setup the web user interface for HOMER, API and Database.
Caller-ID spoofing has been an issue in most countries since networks went digital.
SS7 doesn’t provide any caller ID validation facilities; the assumption is that you trust the calls from everyone you’ve peered with. Because of this, it’s up to the originating switch to verify that the caller ID selected by the caller is valid and permissible, something that’s not often implemented. Some SIP providers even sell the ability to present any number as your CLI as a “feature”.
There are heaps of news articles on the topic, but I thought it’d be worth talking about RFC4474, designed for cryptographically identifying users that originate SIP requests. While almost never used, it’s a cool solution that just didn’t take off.
It does this by adding a new header field, called Identity, for conveying a signature used for validating the identity of the caller, and Identity-Info for a reference to the certificate signing authority.
The calling proxy / UA creates a hash over select headers of the SIP message, signs it with its private key, and inserts the signature into the SIP message in the Identity header.
The calling proxy / UA also inserts an “Identity-Info” header containing a URI from which the certificate used for signing can be retrieved.
The called party can then independently fetch the certificate, compute its own hash and check it against the signature; if they match, the identity of the caller has been verified.
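As an illustration, the relevant headers on a signed INVITE look something like this (modelled on the examples in RFC4474; the Identity value is a base64-encoded signature):

```
INVITE sip:bob@biloxi.example.org SIP/2.0
From: Alice <sip:alice@atlanta.example.com>;tag=1928301774
To: Bob <sip:bob@biloxi.example.org>
Identity: "ZYNBbHC00VMZr2kZt6VmCvPonWJMGvQTBDqghoWeLxJfzB2a1pxAr3VgrB0SsSAa
 ifsRdiOPoQZYOy2wrVghuhcsMbHWUSFxI6p6q5TOQXHMmz6uEo3svJsSH49thyGnFVcnyaZ+
 +yRlBYYQTLqWzJ+KVhPKbfU/pryhVn9Yc6U="
Identity-Info: <https://atlanta.example.com/atlanta.cer>;alg=rsa-sha1
```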
Sometimes standards are created that are superior in some scenarios, and just don’t get enough love.
To me Stream Control Transmission Protocol (SCTP) is one of those, and it’s really under-utilised in Voice.
Defined by the SIGTRAN working group in 2000 while working to transport SS7 over IP, SCTP takes all the benefits of TCP, mixes in some of the benefits of UDP (no head-of-line blocking) and adds multihoming support, and you’ve got yourself a humdinger of a transmission protocol.
Advantages
Reliable Transmission
Like TCP, SCTP includes a reliable transmission mechanism that ensures packets are delivered and retries if they’re not.
Multi Homing
SCTP’s multihoming allows a single association to be split across multiple paths. This means if you had two paths between Melbourne and Sydney, you could be sending data down both simultaneously.
This means a loss of one transmission path results in the data being sent down another available transmission path.
If you’re doing this using TCP you’d have to wait for the TCP session to expire, BGP to update and then try again. Not so with SCTP.
No Head of Line Blocking
An error or discard of a packet in a TCP stream requires a retransmission, blocking everything else in that stream from getting through until the errored/discarded packet is sorted out. This is referred to as “head-of-line blocking” and is generally avoided by switching to UDP, but that loses the reliability.
4 Way Handshake
SCTP uses a 4-way handshake with a cookie mechanism, making it resistant to the SYN flooding attacks that TCP’s 3-way handshake is susceptible to.
Deployment
If you’ve got a private network, chances are it can support SCTP.
There’s built-in SCTP support in almost all Linux kernels since 2002; Cisco IOS and VxWorks also have support, and there are 3rd-party drivers for OSX and Windows.
SCTP is deployed in 3GPP’s LTE / EPC protocol stack for signalling over the S1-AP and X2 interfaces, meaning if you’ve got an LTE-enabled mobile, the network serving you is using it right now, not that you’d ever see the packets.
You’ll find SCTP in SIGTRAN implementations and some TDM-IP gateways, Media Gateways, protocol converters etc, but it’s not widely deployed outside of this.
Now we’ll restart Kamailio and use kamcmd to check the status of our rtpengine instance:
kamcmd rtpengine.show all
All going well you’ll see something like this showing your instance:
Putting it into Practice
If you’ve ever had experience with the other RTP proxies out there you’ll know you’ve had to offer, rewrite SDP and accept the streams in Kamailio.
Luckily rtpengine makes this a bit easier: we just need to call rtpengine_manage() when the initial INVITE is relayed and when a response with SDP (like a 200 OK) is received.
So for calling on the INVITE I’ve done it in the route[relay] route which I’m using:
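A sketch of what that looks like; the route name and relay logic follow the previous tutorials’ config, so adjust to suit your own routing script:

```
route[relay] {
    # Engage rtpengine when relaying the initial INVITE with SDP
    if (is_method("INVITE") && has_body("application/sdp")) {
        rtpengine_manage();
    }
    if (!t_relay()) {
        sl_reply_error();
    }
    exit;
}
```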
And for the reply I’ve simply put a conditional in the onreply_route[MANAGE_REPLY] for if it has SDP:
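A sketch of that conditional, assuming the stock onreply_route[MANAGE_REPLY] from the default config:

```
onreply_route[MANAGE_REPLY] {
    # Only hand replies that carry SDP (like a 200 OK) to rtpengine
    if (has_body("application/sdp")) {
        rtpengine_manage();
    }
}
```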
SIP Proxies are simple in theory but start to get a bit more complex when implemented.
When a proxy has a request to send to an endpoint, the message can have multiple headers containing routing information for how to reach that endpoint.
So how does it know which header to use on a new request?
Routing SIP Requests
Record-Route
If a Route header is present (populated from the Record-Route headers collected during dialog setup), the proxy should use its contents to route the request.
The Record-Route header generally doesn’t contain the endpoint itself but another proxy, but that’s not an issue, as the next proxy will either know how to get to the endpoint or use this same logic to get it to the next proxy along.
Contact
If no Route headers are present, the Contact header is used.
The Contact header provides an address at which the endpoint can be contacted directly.
From
If there are no Contact or Route headers, the proxy should use the From address.
A note about Via
Via headers are only used for getting responses back to a client, and each hop removes its own IP from the response before forwarding it on to the next proxy.
This means the client doesn’t know all the Via headers that were on this SIP request, because by the time it gets back to the client they’ve all been removed one by one as it passed through each proxy.
A client can’t send a SIP request using Vias, as the request hasn’t yet passed through the proxies for their details to be added. So Via is only used in responding to a request (for example, responding with a 404 to an INVITE) and can’t be used to route a request itself (for example, an INVITE).
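To illustrate, a request that has passed through two proxies arrives at the callee with a Via stack like this (addresses and branch values are illustrative, and the parenthetical annotations aren’t part of the message). The response then travels back down the stack, with each proxy removing its own entry:

```
INVITE sip:bob@198.51.100.20 SIP/2.0
Via: SIP/2.0/UDP 203.0.113.2;branch=z9hG4bKnashds9     (added by proxy 2)
Via: SIP/2.0/UDP 203.0.113.1;branch=z9hG4bKnashds8     (added by proxy 1)
Via: SIP/2.0/UDP 192.0.2.10:5060;branch=z9hG4bKnashds7 (added by the client)
```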
If you, like me, spend a lot of time looking at SIP logs, sngrep is an awesome tool for debugging on remote machines. It’s kind of like if VoIP Monitor was ported back to the days of mainframes & minimal remote terminal GUIs.
Installation
It’s in the Repos for Debian and Ubuntu:
apt-get install sngrep
GUI Usage
sngrep can parse existing packet captures, create new ones by capturing off an interface, and view them at the same time.
We’ll start by just calling sngrep on a box with some SIP traffic, and waiting to see the dialogs appear.
Here we can see some dialogs: two REGISTERs and four INVITEs.
By using the up and down arrow keys we can select a dialog, hitting Enter (Return) will allow us to view that dialog in more detail:
Again we can use the up and down arrow keys to view each of the responses / messages in the dialog.
Hitting Enter again will show you that message in full screen, and hitting Escape will bring you back to the first screen.
From the home screen you can filter with F7, to find the dialog you’re interested in.
Command Line Parameters
One of the best features about sngrep is that you can capture and view at the same time.
As a long-time user of tcpdump, I’d been faced with two options: capture the packets, download them, and dig through them for what I’m after; or view the traffic live through a pile of chained grep statements and hope to catch what I want.
By adding -O filename.pcap to sngrep you can capture to a packet capture and view at the same time.
You can use expression matching to match only specific dialogs.
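For example, to capture from an interface, save everything to a pcap and only display dialogs whose payload matches INVITE (the interface name here is illustrative):

```shell
# View live and write a pcap at the same time.
# "INVITE" is a match expression applied to the SIP payload;
# "port 5060" is a BPF capture filter.
sngrep -d any -O capture.pcap INVITE port 5060
```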