MOCN is one of those great concepts I’d not really come across: multi-tenancy on the RAN side of the network, allowing an eNB to broadcast multiple PLMN IDs (MCC/MNC) in the System Information Block (SIB).
It allows sharing not just of the tower itself but of the RAN: customers of MNO A see themselves connected to MNO A, customers of MNO B see themselves connected to MNO B, yet both are served by the same RAN hardware.
Setup in my lab was a breeze; your RAN hardware will probably be different.
In terms of signaling it’s a standard S1AP Setup Request except with multiple broadcast PLMN keys:
This series of posts covers RF Planning using Forsk Atoll. We cover the basics of RF Planning in the process of learning how to use the software.
Forsk Atoll is software for RF Planning and Optimization of mobile networks.
We’ll start by creating a new document from template:
In our example we’re working with LTE, so, we’ll pick the LTE template.
(The templates set up the basic information about what we’re looking at, the prediction models and defaults.)
So now we’re looking at a blank white document showing our map, with no data on it. Atoll doesn’t know if the area is hilly, heavily populated or densely treed; what we’re dealing with is a flat void with no features – “flatland” – a perfect place to start.
We’ll add an eNodeB (Transmitter Station and Site) from the top menu bar, clicking the transmitter icon to add a new Transmitter or Station.
Now we’ll click in the white of our map to place the transmitter site, and repeat this a few times.
Now we’ve added a few transmitter sites, let’s take a bit of a look at one.
If we take a closer look we’ll see it has actually created a 3-sector site for us, and each of the arrows coming from the site is a cell sector.
Double clicking on the transmitter will allow us to change the basic info about the site, such as its location, as well as display parameters, etc.
In the General tab I’ve renamed Site0 to “Example Street Cell Site”, given it an altitude (for the base of the site) and some comments.
In the Support tab I’ve put some information about the support structure the antennas are on; in our case it’s a 30m pylon / monopole.
In the LTE tab we can specify S1 throughput (backhaul) and in the Display tab we can set the color / icon used to display this site, but we’ll keep it simple for now and confirm these changes by pressing OK.
We can give each of our other Transmitters a bit of basic info, again, same process, double click on them and add some info:
So in my example I’ve got 3 transmitter sites, labeled and each given a bit of basic info. The main things we need to have correct for each site are the location (in our case we’re placing them anywhere so it doesn’t matter), the height of the site (Altitude – Real) and the height of the structure the antennas are on (Support Height).
Now we’ve got our 3 cell sites in our imaginary town devoid of any features, let’s get some coverage predictions for the inhabitants of our desolate, featureless town!
We’ll right click on Predictions and select “New Prediction”,
There’s a lot of different prediction types, but let’s look at the Effective Service Area Analysis for Uplink and Downlink from our eNodeBs.
We’ll be asked to give this coverage prediction a name, and also specify a Resolution – The higher the resolution the more processing time but the higher the accuracy calculated.
At 50m it means Atoll will split the map into 50m squares and calculate the coverage in each square. This would be suitable for planning in really rural areas where you want a rough idea of footprint, but for In Building Coverage you’d want far more resolution, so you might want select 5m resolution say.
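To get a feel for why resolution matters so much for processing time, here’s a rough back-of-the-envelope sketch in Python (the 10km x 10km area is just a hypothetical example, not anything Atoll assumes):

def bin_count(width_m, height_m, resolution_m):
    # Rough number of squares Atoll has to calculate for a rectangular area
    return (width_m // resolution_m) * (height_m // resolution_m)

print(bin_count(10_000, 10_000, 50))  # 40,000 squares at 50m resolution
print(bin_count(10_000, 10_000, 5))   # 4,000,000 squares at 5m - 100x the work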
We’ll click Ok and now if we expand “Predictions” we’ll see our catchily named “Effective Service Area Analysis” there.
By right clicking on our prediction we can select “Calculate” and presto, we’ll have a prediction of service area from each of our cells,
Each of those pink cherry blobs represents the effective usable area of coverage provided by our network.
We may have some unhappy customers looking at this, our users will only be able to use their devices around Fake Street, Flatland Water Tower and the Our Lady of Bandwidth church.
But if we have a look at the scale in the bottom left of the screen that’s understandable, our sites are ~10km apart…
So let’s cheat a little by clicking and dragging on each cell site to bring them closer together, in real life we can’t move sites quite so easily…
You’ll notice our prediction hasn’t changed, so let’s recalculate that by right clicking on our Prediction and selecting Calculate again,
We’ll also set our zoom level from 1:250,000 to something a bit more reasonable like 1:100,000
So now our 3 sites have got one area fairly well covered, let’s throw in a few more sites to expand our footprint a bit.
We’ll add extra sites as we did at the start, and fill in those coverage gaps.
After we’ve added some extra sites we’ll recalculate our Coverage Predictions and have a look at how we’ve done.
As you can see we’ve done Ok, a few holes in the coverage but mostly covered.
So next let’s do some tweaking to try and increase our predicted coverage,
By clicking on a site’s sector we can reorient the antenna to a different angle, and by recalculating the coverage prediction we can see how this affects the predicted coverage.
By now you’ve probably got an idea of the basics of what we’re doing in Atoll, how changing the location, orientation and height of cells / sites affects the coverage, and how you can predict coverage.
In the upcoming posts we’ll cover adding real world data to Atoll so we can accurately model and predict how our RAN will perform.
We’ll look at how we can use Automatic Cell Planning to get the most optimal setup in terms of power settings, antenna orientations and tilts, etc for our existing sites.
We’ll be able to simulate subscribers, traffic flow, backhaul, and model our network all before a single truck rolls.
So stick around, the next post will be coming soon and will cover adding environment data.
In our last post we talked about getting our geospatial data right, and in our first post we covered the basics of adding sites and transmitters.
There’s a bit of a chicken-and-egg problem with site placement, antenna orientation, type and down-tilt.
If all our sites were populated and in place, we could look at optimizing coverage by changing azimuths / orientations – plug in our data, run some predictions / modeling and come up with some solutions. Likewise, if we’d already done that, we might want to calculate ideal down-tilt angles to get the most out of the network.
But we’ve got no sites, no transmitters and no coverage predictions yet, so we’re probably going to need to ask ourselves a more basic, but harder question: Where will we put the cell sites?
To keep this easy we’ll focus on covering the south-western corner of the island – a town called Tankerton – with only 3 cell sites.
Manual Site Selection
In the very first post we put up a few sites; we’ll do the same here, placing 3 sites in the bottom right of the island and attempting to provide contiguous coverage for the town with them.
We’ll pick our Station Template and set it to FDD Rural as this is pretty remote.
Next we’ll add some sites and transmitters:
Click to place it on the map and add our cell sites;
When we’re looking at where to place it, it’s good to remember that height (elevation) is good (To an extent), so when looking at where to place sites, keep an eye on the Z (Height) value in the bottom right, and try and pick sites with a good elevation.
Setting Computation Zone
As we’re only focusing on a small part of the island we’ll set a Computation Zone to limit the calculations / computations Atoll has to do to a set region.
I’ve chosen to draw a Polygon around the area, but you could also just get away with drawing a Rectangle, around the area we’re interested in.
This just constrains everything so we’re only crunching numbers inside that area.
Predictions
So now we’ve put our 3 sites out & constrained to the Tankerton area, let’s see how much of the area we’ve covered, we’ll jump to the Network tab, right click on Predictions and select New Prediction
There’s a lot of predictions we can run, but we’ll go simple and select Effective Service Area Analysis (UL + DL) & click Calculate
Atoll will crunch the numbers and give us a simple overlay, showing the areas with and without coverage.
The areas in red are predicted to have coverage, and the areas with no shading will be our blackspots / “notspots”.
We’ve covered most of the area, but we can improve.
Manually Tweaking Attributes
So there’s still some holes in our coverage, so let’s adjust the azimuth of some of the antennas and see if we can fill them.
Click on each of the arrows on the site, each of these represents an antenna / cell and we can change the angles.
So after a bit of fiddling I think I’ve got a better antenna azimuth for each of the sectors on each of my 3 sites.
Let’s compare that to what we had before to see if we’ve made it better or worse,
We’ll Duplicate the Effective Service Area Analysis prediction we created before & calculate it.
To make viewing a bit easier we’ll edit the properties of the copy and set it to a different colour:
Now I can see at a glance how much better we’re looking;
The obvious problem here is I could tweak and tweak and improve some things, make others worse, and we’d be here forever.
Luckily Atoll can do a better job of fiddling with each parameter for us and selecting the configuration that leads to the best performance in our RAN.
Automatic Cell Planning
Enter Automatic Cell Planning (ACP), which adjusts the parameters we select to find the optimal setup.
We’ll right click on ACP – Automatic Cell Planning and create a new one.
From here we set how many iterations we want to try out (more leads to better results but takes longer to compute), the parameters we want to change (ie Azimuth, Tilt, Antenna type, etc).
When you’ve set the parameters you want, click Run and Atoll will start running through possible parameter combinations and measuring how they perform.
Once it’s run you’ll be able to view the Optomization
The report shows you the results, improvements in RSSI and RSSQ;
Here we can see we boosted the RSRQ (The quality of the signal) by 9.5%, but had to sacrifice RSRP (Signal power) by 1%.
Sacrifices have to be made, and if you’re happy with this you can view the details of the changes and commit the adjustments.
Committing the changes adjusts all the Transmitters in the area to the listed values, after which we can run our Predictions again to compare like we did earlier.
So that’s what we’ve got when we randomly place sites, we can use Atoll to optomize what we’ve already got, but what if we left the picking of cell sites up to Atoll to look for better options?
In our next post we’ll look at Site Selection using ACP, and constraining it. This means we can tell Atoll to just find the best sites, or load in a list of possible sites and let Atoll determine which are the best candidates.
You’ve done the flatland model we did Part 1 and now you’re pumped up and ready to start plotting your cell sites, optimizing your coverage and boosting the services you’re offering.
But one thing stands in your way – The predicted & modeled data we get out of Atoll is only going to be as good as the data we’ve given it, the old garbage in garbage out adage.
So let’s get started, we’ll create a new document from Template again and select LTE.
Setting our Projection
Now, before we go throwing out cell sites, we’re going to have to tell Atoll where we are. This can get a little tricky if you’re setting this up for a different real-world location, but stick with me and I’ll give you the data you need for this example.
We’ll select Document -> Properties, where we’ve got to define a projection.
A projection is essentially a coordinate system, like Latitude and Longitude, that constrains our project to somewhere on this planet.
In this example we’re in Australia, so we’ll select Asia Pacific from the “Find In” section and scroll until we find MGA Zone 55. We’ll select it and click OK.
All this zone information makes sense to GIS folks; there’s lots of information online about UTM datums, projections and GIS which can help you select the right coordinate system and projection for your particular area – but for us, we’ll select MGA Zone 55 and that’s the last we’ll hear about it.
Now we’ve got that information set up, we’ll hit OK again and be on our way. Atoll now knows where in the world we are and we can start filling in the specifics.
We’ve still got an empty map with nothing to show, so let’s add some data.
Adding Population and Roads
We’ll start by adding some base data, we’ll import a footprint of the Island and a map of all the roads.
First we’ll import the populated area outline it into Atoll from File -> Import and select the file called EXTRACT_POLYGON.shp
We’ll put it into the population folder, this will be useful later when we try and ensure this area is covered by our network.
Although the population data is kind of rough ( <100 for the entire area) it’s still very useful for limiting our coverage area and saying “We’ve got everyone covered” when it comes to coverage.
Next up we’ll import the roads file, same thing, File -> Import, TR_ROAD.shp
We’ll import it to the Geo folder – This is just data that Atoll doesn’t process but is useful to us as humans.
Finally we’ll enable the layers we’ve just imported and center the map on our imported data to get us in the right region. We’ll do this by expanding “Population”, right clicking on the file we just imported and selecting “Center in Map Window”
Adding Elevation Data
Again, like our datum, elevation data is a standard GIS concept, but if you’re from an RF background you’ve probably not come across it. Essentially it’s an image where the shade of each pixel translates to a height above sea level.
We import it into Atoll and it’s used in propagation modeling – after all we need to know if there’s a hill / mountain / valley in the way, and even slight rises / dips in the geography can have an impact on your coverage.
We’ll start by downloading the file above, and then importing it into Atoll
Next we’ll need to tell Atoll the type of data we’re importing (Altitudes) and it’s offset from the 0 point of our coordinate system, I’ve put the information we need for this into a handy table below:
West: 337,966
North: 5,768,108
Pixel Size: 5m
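To make sense of what those values mean: the West and North figures are the map coordinates of the raster’s top-left corner, and each pixel steps 5m out from there. Atoll does this mapping internally, but here’s a rough Python sketch of the idea (my own illustration, not something you need to run):

WEST = 337_966      # easting of the raster's top-left corner
NORTH = 5_768_108   # northing of the raster's top-left corner
PIXEL_SIZE = 5      # metres per pixel

def pixel_to_coords(col, row):
    # Convert a pixel position in the elevation raster to map coordinates
    easting = WEST + col * PIXEL_SIZE
    northing = NORTH - row * PIXEL_SIZE  # rows count downwards from the top edge
    return easting, northing

print(pixel_to_coords(0, 0))      # (337966, 5768108) - the top-left corner
print(pixel_to_coords(100, 200))  # 500m east and 1km south of that corner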
Now when we move our cursor around we’ll see the elevation change in the bottom right (Z is height).
The map itself still looks blank because the elevation data is kind of invisible (we’re looking top-down), but it’s there.
Adding a Map Overlay
Ok, you’ve made it this far, let’s finally get out of our white blank map and give it some things that make it look like a map!
In order to add Google Maps / Bing Maps etc. as an overlay for the first time, we’ve got to restart Atoll – be sure to save your work first.
Done that? Good, let’s add some map tiles.
We’ll right click on Online Maps -> New and select a map source from the drop down menu,
Next we’ll select a tile server, I’m using Open Street Map Standard Map, which I selected from the drop down menu,
Finally we’ll enable the layer by ticking it on the Geo panel on the right hand side. You may need to drag the layer to the top if you’ve added other layers.
All going well you’ll be looking at a map of the area, and by hovering over an area of land you should see the elevation data too.
We can even add other map layers and toggle between them or set the order by dragging them up and down.
Summary
So now we’ve got Atoll configured for our part of the world, imported height data, population data and roads, and added some map layers so we can see what we’re up to.
An important point to keep in mind is the more accurate the data you feed into Atoll, the more real-world the results you’ll get out of it will be.
Although filling in map layers and adding information seems tedious – and it is – the garbage-in, garbage-out adage applies here, so the more quality data we put in, the better the results.
If you’re doing this yourself in the real world, contact your government; they often publish large amounts of geospatial data like elevation, population, roads and land boundaries, and it’s often free.
I’ve attached my working file for you to play with in case you had any issues.
There’s always lots of talk of Network Function Virtualization (NFV) in the Telco space, but replacing custom hardware with computing resources is only going to get you so far, if every machine has to be configured manually.
Ansible is a topic I’ve written a little bit about in terms of network automation / orchestration.
I wanted to test the limits of the Open5GS EPC, which led me to creating a lot of Packet Gateways, so I thought I’d share a little Ansible playbook I wrote for deploying P-GWs.
It dynamically sets the binding address and DNS servers, and points to each PCRF in the defined pool.
You can obviously build upon this too – creating another playbook to deploy PCRFs, MMEs and S-GWs will allow you to reference the hosts in each group to populate the references.
logger:
    file: /var/log/open5gs/pgw.log

parameter:

pgw:
    freeDiameter: /etc/freeDiameter/pgw.conf
    gtpc:
      - addr: {{hostvars[inventory_hostname]['ansible_default_ipv4']['address']}}
    gtpu:
      - addr: {{hostvars[inventory_hostname]['ansible_default_ipv4']['address']}}
    ue_pool:
      - addr: 45.45.0.1/16
      - addr: cafe::1/64
    dns:
{% for dns in dns_servers %}
      - {{ dns }}
{% endfor %}
Diameter Config (pgw.conf.j2)
# This is a sample configuration file for freeDiameter daemon.
# Most of the options can be omitted, as they default to reasonable values.
# Only TLS-related options must be configured properly in usual setups.
# It is possible to use "include" keyword to import additional files
# e.g.: include "/etc/freeDiameter.d/*.conf"
# This is exactly equivalent as copy & paste the content of the included file(s)
# where the "include" keyword is found.
##############################################################
## Peer identity and realm
# The Diameter Identity of this daemon.
# This must be a valid FQDN that resolves to the local host.
# Default: hostname's FQDN
#Identity = "aaa.koganei.freediameter.net";
Identity = "{{ inventory_hostname }}.{{ diameter_realm }}";
# The Diameter Realm of this daemon.
# Default: the domain part of Identity (after the first dot).
#Realm = "koganei.freediameter.net";
Realm = "{{ diameter_realm }}";
##############################################################
## Transport protocol configuration
# The port this peer is listening on for incoming connections (TCP and SCTP).
# Default: 3868. Use 0 to disable.
#Port = 3868;
# The port this peer is listening on for incoming TLS-protected connections (TCP and SCTP).
# See TLS_old_method for more information about TLS flavours.
# Note: we use TLS/SCTP instead of DTLS/SCTP at the moment. This will change in future version of freeDiameter.
# Default: 5868. Use 0 to disable.
#SecPort = 5868;
# Use RFC3588 method for TLS protection, where TLS is negotiated after CER/CEA exchange is completed
# on the unsecure connection. The alternative is RFC6733 mechanism, where TLS protects also the
# CER/CEA exchange on a dedicated secure port.
# This parameter only affects outgoing connections.
# The setting can be also defined per-peer (see Peers configuration section).
# Default: use RFC6733 method with separate port for TLS.
#TLS_old_method;
# Disable use of TCP protocol (only listen and connect over SCTP)
# Default : TCP enabled
#No_TCP;
# Disable use of SCTP protocol (only listen and connect over TCP)
# Default : SCTP enabled
#No_SCTP;
# This option is ignored if freeDiameter is compiled with DISABLE_SCTP option.
# Prefer TCP instead of SCTP for establishing new connections.
# This setting may be overwritten per peer in peer configuration blocs.
# Default : SCTP is attempted first.
#Prefer_TCP;
# Default number of streams per SCTP associations.
# This setting may be overwritten per peer basis.
# Default : 30 streams
#SCTP_streams = 30;
##############################################################
## Endpoint configuration
# Disable use of IP addresses (only IPv6)
# Default : IP enabled
#No_IP;
# Disable use of IPv6 addresses (only IP)
# Default : IPv6 enabled
#No_IPv6;
# Specify local addresses the server must bind to
# Default : listen on all addresses available.
#ListenOn = "202.249.37.5";
#ListenOn = "2001:200:903:2::202:1";
#ListenOn = "fe80::21c:5ff:fe98:7d62%eth0";
ListenOn = "{{hostvars[inventory_hostname]['ansible_default_ipv4']['address']}}";
##############################################################
## Server configuration
# How many Diameter peers are allowed to be connecting at the same time ?
# This parameter limits the number of incoming connections from the time
# the connection is accepted until the first CER is received.
# Default: 5 unidentified clients in parallel.
#ThreadsPerServer = 5;
##############################################################
## TLS Configuration
# TLS is managed by the GNUTLS library in the freeDiameter daemon.
# You may find more information about parameters and special behaviors
# in the relevant documentation.
# http://www.gnu.org/software/gnutls/manual/
# Credentials of the local peer
# The X509 certificate and private key file to use for the local peer.
# The files must contain PKCS-1 encoded RSA key, in PEM format.
# (These parameters are passed to gnutls_certificate_set_x509_key_file function)
# Default : NO DEFAULT
#TLS_Cred = "<x509 certif file.PEM>" , "<x509 private key file.PEM>";
#TLS_Cred = "/etc/ssl/certs/freeDiameter.pem", "/etc/ssl/private/freeDiameter.key";
TLS_Cred = "/etc/freeDiameter/pgw.cert.pem", "/etc/freeDiameter/pgw.key.pem";
# Certificate authority / trust anchors
# The file containing the list of trusted Certificate Authorities (PEM list)
# (This parameter is passed to gnutls_certificate_set_x509_trust_file function)
# The directive can appear several times to specify several files.
# Default : GNUTLS default behavior
#TLS_CA = "<file.PEM>";
TLS_CA = "/etc/freeDiameter/cacert.pem";
# Certificate Revocation List file
# The information about revoked certificates.
# The file contains a list of trusted CRLs in PEM format. They should have been verified before.
# (This parameter is passed to gnutls_certificate_set_x509_crl_file function)
# Note: openssl CRL format might have interoperability issue with GNUTLS format.
# Default : GNUTLS default behavior
#TLS_CRL = "<file.PEM>";
# GNU TLS Priority string
# This string allows to configure the behavior of GNUTLS key exchanges
# algorithms. See gnutls_priority_init function documentation for information.
# You should also refer to the Diameter required TLS support here:
# http://tools.ietf.org/html/rfc6733#section-13.1
# Default : "NORMAL"
# Example: TLS_Prio = "NONE:+VERS-TLS1.1:+AES-128-CBC:+RSA:+SHA1:+COMP-NULL";
#TLS_Prio = "NORMAL";
# Diffie-Hellman parameters size
# Set the number of bits for generated DH parameters
# Valid value should be 768, 1024, 2048, 3072 or 4096.
# (This parameter is passed to gnutls_dh_params_generate2 function,
# it usually should match RSA key size)
# Default : 1024
#TLS_DH_Bits = 1024;
# Alternatively, you can specify a file to load the PKCS#3 encoded
# DH parameters directly from. This accelerates the daemon start
# but is slightly less secure. If this file is provided, the
# TLS_DH_Bits parameters has no effect.
# Default : no default.
#TLS_DH_File = "<file.PEM>";
##############################################################
## Timers configuration
# The Tc timer of this peer.
# It is the delay before a new attempt is made to reconnect a disconnected peer.
# The value is expressed in seconds. The recommended value is 30 seconds.
# Default: 30
#TcTimer = 30;
# The Tw timer of this peer.
# It is the delay before a watchdog message is sent, as described in RFC 3539.
# The value is expressed in seconds. The default value is 30 seconds. Value must
# be greater or equal to 6 seconds. See details in the RFC.
# Default: 30
#TwTimer = 30;
##############################################################
## Applications configuration
# Disable the relaying of Diameter messages?
# For messages not handled locally, the default behavior is to forward the
# message to another peer if any is available, according to the routing
# algorithms. In addition the "0xffffff" application is advertised in CER/CEA
# exchanges.
# Default: Relaying is enabled.
#NoRelay;
# Number of server threads that can handle incoming messages at the same time.
# Default: 4
#AppServThreads = 4;
# Other applications are configured by loaded extensions.
##############################################################
## Extensions configuration
# The freeDiameter framework merely provides support for
# Diameter Base Protocol. The specific application behaviors,
# as well as advanced functions, are provided
# by loadable extensions (plug-ins).
# These extensions may in addition receive the name of a
# configuration file, the format of which is extension-specific.
#
# Format:
#LoadExtension = "/path/to/extension" [ : "/optional/configuration/file" ] ;
#
# Examples:
#LoadExtension = "extensions/sample.fdx";
#LoadExtension = "extensions/sample.fdx":"conf/sample.conf";
# Extensions are named as follow:
# dict_* for extensions that add content to the dictionary definitions.
# dbg_* for extensions useful only to retrieve more information on the framework execution.
# acl_* : Access control list, to control which peers are allowed to connect.
# rt_* : routing extensions that impact how messages are forwarded to other peers.
# app_* : applications, these extensions usually register callbacks to handle specific messages.
# test_* : dummy extensions that are useful only in testing environments.
# The dbg_msg_dump.fdx extension allows you to tweak the way freeDiameter displays some
# information about some events. This extension does not actually use a configuration file
# but receives directly a parameter in the string passed to the extension. Here are some examples:
## LoadExtension = "dbg_msg_dumps.fdx" : "0x1111"; # Removes all default hooks, very quiet even in case of errors.
## LoadExtension = "dbg_msg_dumps.fdx" : "0x2222"; # Display all events with few details.
## LoadExtension = "dbg_msg_dumps.fdx" : "0x0080"; # Dump complete information about sent and received messages.
# The four digits respectively control: connections, routing decisions, sent/received messages, errors.
# The values for each digit are:
# 0 - default - keep the default behavior
# 1 - quiet - remove any specific log
# 2 - compact - display only a summary of the information
# 4 - full - display the complete information on a single long line
# 8 - tree - display the complete information in an easier to read format spanning several lines.
LoadExtension = "/usr/lib/x86_64-linux-gnu/freeDiameter/dbg_msg_dumps.fdx" : "0x8888";
LoadExtension = "/usr/lib/x86_64-linux-gnu/freeDiameter/dict_rfc5777.fdx";
LoadExtension = "/usr/lib/x86_64-linux-gnu/freeDiameter/dict_mip6i.fdx";
LoadExtension = "/usr/lib/x86_64-linux-gnu/freeDiameter/dict_nasreq.fdx";
LoadExtension = "/usr/lib/x86_64-linux-gnu/freeDiameter/dict_nas_mipv6.fdx";
LoadExtension = "/usr/lib/x86_64-linux-gnu/freeDiameter/dict_dcca.fdx";
LoadExtension = "/usr/lib/x86_64-linux-gnu/freeDiameter/dict_dcca_3gpp.fdx";
##############################################################
## Peers configuration
# The local server listens for incoming connections. By default,
# all unknown connecting peers are rejected. Extensions can override this behavior (e.g., acl_wl).
#
# In addition to incoming connections, the local peer can
# be configured to establish and maintain connections to some
# Diameter nodes and allow connections from these nodes.
# This is achieved with the ConnectPeer directive described below.
#
# Note that the configured Diameter Identity MUST match
# the information received inside CEA, or the connection will be aborted.
#
# Format:
#ConnectPeer = "diameterid" [ { parameter1; parameter2; ...} ] ;
# Parameters that can be specified in the peer's parameter list:
# No_TCP; No_SCTP; No_IP; No_IPv6; Prefer_TCP; TLS_old_method;
# No_TLS; # assume transparent security instead of TLS. DTLS is not supported yet (will change in future versions).
# Port = 5868; # The port to connect to
# TcTimer = 30;
# TwTimer = 30;
# ConnectTo = "202.249.37.5";
# ConnectTo = "2001:200:903:2::202:1";
# TLS_Prio = "NORMAL";
# Realm = "realm.net"; # Reject the peer if it does not advertise this realm.
# Examples:
#ConnectPeer = "aaa.wide.ad.jp";
#ConnectPeer = "old.diameter.serv" { TcTimer = 60; TLS_old_method; No_SCTP; Port=3868; } ;
{% for pcrf in pcrf_hosts %}
ConnectPeer = "{{ pcrf }}" { ConnectTo = "{{ pcrf }}"; No_TLS; };
{% endfor %}
##############################################################
I’ve been working on a ePDG for VoWiFi access to my IMS core.
This has led to a bit of a deep dive into GTP (easy enough) and GTPv2 (a bit harder).
The Fully Qualified Tunnel Endpoint Identifier (F-TEID) includes an information element for the Interface Type, identified by a number.
In the end I found the answer in 3GPP TS 29.274, but thought I’d share it here.
0 – S1-U eNodeB GTP-U interface
1 – S1-U SGW GTP-U interface
2 – S12 RNC GTP-U interface
3 – S12 SGW GTP-U interface
4 – S5/S8 SGW GTP-U interface
5 – S5/S8 PGW GTP-U interface
6 – S5/S8 SGW GTP-C interface
7 – S5/S8 PGW GTP-C interface
8 – S5/S8 SGW PMIPv6 interface (the 32 bit GRE key is encoded in the 32 bit TEID field and since alternate CoA is not used the control plane and user plane addresses are the same for PMIPv6)
9 – S5/S8 PGW PMIPv6 interface (the 32 bit GRE key is encoded in the 32 bit TEID field and the control plane and user plane addresses are the same for PMIPv6)
10 – S11 MME GTP-C interface
11 – S11/S4 SGW GTP-C interface
12 – S10 MME GTP-C interface
13 – S3 MME GTP-C interface
14 – S3 SGSN GTP-C interface
15 – S4 SGSN GTP-U interface
16 – S4 SGW GTP-U interface
17 – S4 SGSN GTP-C interface
18 – S16 SGSN GTP-C interface
19 – eNodeB GTP-U interface for DL data forwarding
20 – eNodeB GTP-U interface for UL data forwarding
21 – RNC GTP-U interface for data forwarding
22 – SGSN GTP-U interface for data forwarding
23 – SGW GTP-U interface for DL data forwarding
24 – Sm MBMS GW GTP-C interface
25 – Sn MBMS GW GTP-C interface
26 – Sm MME GTP-C interface
27 – Sn SGSN GTP-C interface
28 – SGW GTP-U interface for UL data forwarding
29 – Sn SGSN GTP-U interface
30 – S2b ePDG GTP-C interface
31 – S2b-U ePDG GTP-U interface
32 – S2b PGW GTP-C interface
33 – S2b-U PGW GTP-U interface
I also found that how this data is encoded on the wire is a bit strange.
In the example above the Interface Type is 7.
Encoded in binary this gives us 111.
This is then padded to 6 bits to give us 000111.
This is prefixed by two additional bits: the first denotes whether an IPv4 address is present, the second whether an IPv6 address is present.
Bit 1 (IPv4 Address Present): 1
Bit 2 (IPv6 Address Present): 0
Bits 3-6 (Interface Type): 000111
This is then converted to hex, giving us 87.
Here’s my Python example:
interface_type = int(7)
interface_type = "{0:b}".format(interface_type).zfill(6) #Produce binary bits
ipv4ipv6 = "10" #IPv4 only
interface_type = ipv4ipv6 + interface_type #concatenate the two
interface_type = format(int(str(interface_type), 2),"x") #convert to hex
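Going the other way, here’s a quick decode sketch I’ve added (not part of the original snippet) that pulls the two address flags and the Interface Type back out of the hex value:

def decode_fteid_flags(hex_value):
    # Split the first octet of the F-TEID into the address flags and interface type
    bits = bin(int(hex_value, 16))[2:].zfill(8)
    ipv4_present = bits[0] == "1"
    ipv6_present = bits[1] == "1"
    interface_type = int(bits[2:], 2)
    return ipv4_present, ipv6_present, interface_type

print(decode_fteid_flags("87"))  # (True, False, 7) - IPv4 only, S5/S8 PGW GTP-C interface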
I’ve been working for some time on open source mobile network cores, and one feature that has been a real struggle for a lot of people (Myself included) is getting VoLTE / IMS working.
Here’s some of the issues I’ve faced, and the lessons I learned along the way,
Sadly on most UEs / handsets, there’s no “Make VoLTE work now” switch, you’ve got a satisfy a bunch of dependencies in the OS before the baseband will start sending SIP anywhere.
Get the right Hardware
Your eNB must support additional bearers (I’ve managed to get away without dedicated bearers in my testing) so the device can set up an APN for the IMS traffic.
Sadly, at the moment this rules out software defined eNodeBs like srsENB.
ISIM – When you thought you understood USIMs – Guess again
According to the 3GPP IMS docs, an ISIM (IMS SIM) is not a requirement for IMS to work.
However, in my testing I found Android didn’t offer the option to enable VoLTE unless an ISIM was present the first time.
In a weird quirk, I found that once I’d inserted an ISIM and connected to the VoLTE network, I could then put a USIM in the UE and still connect to the VoLTE network.
Obviously the parameters normally set on the ISIM, such as Domain, IMPU, IMPI & AD, are kind of “guessed” when using a USIM, but the AKAv1-MD5 algorithm does run.
Getting the APN Config Right
There’s a lot of things you’ll need to have correct on your UE before it’ll even start to think about sending SIP messaging.
I was using commercial UEs (Samsung handsets) without engineering firmware, so I had very limited info on what’s going on “under the hood”. There’s no “Make VoLTE do” tickbox; there’s a VoLTE enable option, but that won’t do anything by default.
If your P-GW doesn’t know the IP of your P-CSCF, it’s not going to be able to return it in the Protocol Configuration Options (PCO) requested by the UE along with that nice new bearer for IMS we just set up.
There’s no way around Mutual Authentication
Coming from a voice background, and pretty much having RFC 3261 tattooed on my brain, when I finally got the SIP REGISTER request sent to the Proxy CSCF I knocked something up in Kamailio to send back a 200 OK, thinking that’d be the end of it.
For any other SIP endpoint this would have been fine, but IMS Clients, nope.
Reading the specs drove home the same lesson anyone attempting to set up their own LTE network quickly learns – mutual authentication means both the network and the UE need to verify each other; while I (as the network) can say the UE is OK, the UE also needs to check that I’m on the level.
I saw my 401 response go back to the UE and then no response. Nada.
This led to my next lesson…
There’s no way around IPsec
According to the 3GPP docs, support for IPsec is optional, but I found this not to be the case on the handsets I’ve tested.
After I send back my 401 response, the UE looks for the IPsec info in it, then tries to set up an IPsec SA and sends ESP packets back to the P-CSCF address.
Even with my valid AKAv1-MD5 auth, I found my UE wasn’t responding until I added IPsec support on the P-CSCF, which is why I couldn’t see the second REGISTER with the authentication info.
After setting up IPsec support, I finally saw the UE’s REGISTER with the AKAv1-MD5 authentication, and was able to send a 200 OK.
For my LTE lab I got myself a BaiCells Neutrino. It operates on Band 3 (FDD ~1800MHz) with a maximum output power of only 24dBm, and being PoE powered it works well in a lab environment without needing a -48V DC supply, BBUs, DUs, feeders and antennas.
Setup can be done via TR-069 or via the BaiCells management server; for smaller setups the web UI makes things pretty easy.
Logging in with admin/admin to the web interface:
We’ll select Quick Settings, and load in our MME IP address, PLMN (MCC & MNC), Tracking Area Code, Cell ID and Absolute Radio Frequency No.
Once that’s done we’ll set our Sync settings to use GPS / GNSS (I’ve attached an external GPS Antenna purchased cheaply online).
Finally we’ll set the power levels, my RF blocking setup is quite small so I don’t want excess power messing around with it, so I’ve dialed the power right back:
And that’s it, it’ll now connect to my MME on 10.0.1.133 port 36412 on SCTP.
The Proxy-Call Session Control Function is the first network element a UE sends it’s SIP REGISTER message to, but how does it get there?
To begin with our UE connects as it would normally, getting a default bearer, an IP address and connectivity.
Overview
If the USIM has an ISIM application on it (or IMS is enabled on the UE using USIM for auth) and an IMS APN exists on the UE for IMS, the UE will set up another bearer in addition to the default bearer.
This bearer will carry our IMS traffic and allow QoS to be managed through the QCI values set on the bearer.
While setting up the bearer the UE requests certain parameters from the network in the Protocol Configuration Options element, including the P-CSCF address.
When setting up the bearer the network responds with this information, which if supported includes the P-CSCF IPv4 &/or IPv6 addresses.
The Message Exchange
We’ll start assuming the default bearer is in place & our UE is configured with the APN for IMS and supports IMS functionality.
The first step is to begin the establishment of an additional bearer for the IMS traffic.
This is kicked off through the Uplink NAS Transport, PDN Connectivity Request from the UE to the network. This includes the IMS APN information, and the UE’s NAS payload includes the Protocol Configuration Options (PCO) element, with a series of fields the UE wants the network to fill in, including DNS server, MTU, etc.
In the PCO the UE also includes the P-CSCF address request, so the network can tell the UE the IP of the P-CSCF to use.
If this is missing, it’s because either your APN settings for IMS are not valid, or your device doesn’t have IMS support or isn’t enabling it (that could be for a few reasons).
The MME gets this information from the P-GW, and the network responds with the E-RAB Setup Request / Activate Default EPS Bearer Context Request, which includes the Protocol Configuration Options again – this time with the fields populated with their respective values, including the P-CSCF address.
The eNB then confirms it has set up the radio resources through the E-RAB Setup Response.
Once the eNB has put the radio side of things in place, the UE confirms the bearer assignment has completed successfully through the Uplink NAS Transport, Activate Default EPS Bearer Context Accept, denoting the bearer is now in place.
Now that the UE has the IP address(es) of the P-CSCF and a bearer to send traffic over, it establishes a TCP socket with the address specified in the P-CSCF IPv4 or IPv6 address field to start communicating with the P-CSCF.
The SIP REGISTER request can now be sent and the REGISTRATION procedure can begin.
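As a rough illustration of how the P-CSCF address actually comes back inside the PCO, here’s a simplified Python sketch (my own, not taken from any particular stack): each container is a two-byte ID, a one-byte length and the contents, with the container IDs for the P-CSCF addresses coming from 3GPP TS 24.008.

import socket
import struct

P_CSCF_IPV6_ADDRESS = 0x0001  # container IDs from TS 24.008 (network to UE direction)
P_CSCF_IPV4_ADDRESS = 0x000C

def parse_pco(pco_bytes):
    # Walk the containers in a PCO payload and return any P-CSCF addresses found.
    # Assumes pco_bytes starts at the configuration protocol octet (usually 0x80).
    addresses = []
    pos = 1
    while pos + 3 <= len(pco_bytes):
        container_id, length = struct.unpack_from("!HB", pco_bytes, pos)
        contents = pco_bytes[pos + 3:pos + 3 + length]
        if container_id == P_CSCF_IPV4_ADDRESS and length == 4:
            addresses.append(socket.inet_ntop(socket.AF_INET, contents))
        elif container_id == P_CSCF_IPV6_ADDRESS and length == 16:
            addresses.append(socket.inet_ntop(socket.AF_INET6, contents))
        pos += 3 + length
    return addresses

# Example PCO contents: container 0x000C (P-CSCF IPv4 Address) carrying 10.0.1.200
print(parse_pco(bytes.fromhex("80000c040a0001c8")))  # ['10.0.1.200']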
The team at Software Radio Systems in Ireland have been working on an open source LTE stack for some time, to be used with software defined radio (SDR) hardware like the USRP, BladeRF and LimeSDR.
They’ve released SRSUE and SRSENB their open source EUTRAN UE and eNodeB, which allow your SDR hardware to function as a LTE UE and connect to a commercial eNB like a standard UE while getting all the juicy logs and debug info, or as a LTE eNB and have commercial UEs connect to a network you’re running, all on COTS hardware.
The eNB supports S1AP to connect to a 3GPP compliant EPC, like Open5Gs, but also comes bundled with a barebones EPC for testing.
The UE allows you to do performance testing and gather packet captures on the MAC & PHY layers, something you can’t do on a commercial UE. It also supports software USIMs (IMSI / K / OP variables stored in a text file) or physical USIMs using a card reader.
I’ve got a drawer full of SDR hardware, from the first RTL-SDR dongle I got years ago, to a few HackRFs, a LimeSDR, up to the BladeRF x40.
Really cool software to have a play with, I’ve been using SRSUE to get a better understanding of the lower layers of the Uu interface.
Installation
After mucking around trying to satisfy all the dependencies from source I found everything I needed could be found in Debian packages from the repos of the maintainers.
To begin with we need to install the BladeRF drivers and SoapySDR modules to abstract it to UHD:
For most Voice / Telco engineers IPsec is a VPN technology, maybe something used when backhauling over an untrusted link, etc, but voice over IP traffic is typically secured with TLS and SRTP.
IMS / Voice over LTE handles things a bit differently, it encapsulates the SIP & RTP traffic between the UE and the P-CSCF in IPsec Encapsulating Security Payload (ESP) payloads.
In this post we’ll take a look at how it works and what it looks like.
It’s worth noting that Kamailio recently added support for IPsec encapsulation on a P-CSCF, in the IMS IPSec-Register module. I’ll cover usage of this at a later date.
The Message Exchange
The exchange starts off looking like any other SIP Registration session, in this case using TCP for transport. The UE sends a REGISTER to the Proxy-CSCF which eventually forwards the request through to a Serving-CSCF.
This is where we diverge from the standard SIP REGISTER message exchange. The Serving-CSCF generates a 401 Unauthorized response containing an authentication challenge in the WWW-Authenticate header, along with a Ciphering Key & Integrity Key (ck= and ik=) in the same header.
The Serving-CSCF sends the Proxy-CSCF the 401 response it created. The Proxy-CSCF assigns an SPI for the IPsec ESP to use, a server port and a client port, indicates the encryption algorithm (ealg) and the integrity algorithm to use (in this case HMAC-SHA-1-96), and adds a new header called Security-Server to the 401 Unauthorized to share this information with the UE.
The Proxy-CSCF also strips the Ciphering Key (ck=) and Integrity Key (ik=) parameters from the SIP authentication challenge (WWW-Authenticate) and uses them as the ciphering and integrity keys for the IPsec connection.
Finally after setting up the IPsec server side of things, it forwards the 401 Unauthorized response onto the UE.
Upon receipt of the 401 response, the UE looks at the authentication challenge.
If the network is considered authenticated by the UE it generates a response to the Authentication Challenge, but it doesn’t deliver it over TCP. Using the information generated in the authentication challenge the UE encapsulates everything from the network layer (IPv4) up and sends it to the P-CSCF in an IPsec ESP.
Communication between the UE and the P-CSCF is now encapsulated in IPsec.
IPsec ESP can be used in 3 different ways on the Gm interface between the UE and the P-CSCF:
Integrity Protection – To prevent tampering
Ciphering – To prevent interception / eavesdropping
Integrity Protection & Ciphering
In Wireshark you’ll see the ESP, but you won’t see the payload contents – just the fact that it’s an Encapsulating Security Payload, its SPI and its sequence number.
By default, Kamailio’s P-CSCF only acts in Integrity Protection mode, meaning the ESP payloads aren’t actually encrypted, so with a few clicks we can get Wireshark to decode this data;
Just open up Wireshark Preferences, expand Protocols and jump to ESP
Now we can set the decoding preferences for our ESP payloads,
In our case we’ll tick the “Attempt to detect/decode NULL encrypted ESP payloads” box and close the box by clicking OK button.
Now Wireshark will scan through all the frames again, anything that’s an ESP payload it will attempt to parse.
Now if we go back to the ESP payload with SQN 1 I showed a screenshot of earlier, we can see the contents are a TCP SYN.
Now we can see what’s going on inside this ESP data between the P-CSCF and the UE!
As a matter of interest, if you can see the IK and CK values in the 401 response before they’re stripped, you can also decode encrypted ESP payloads in Wireshark – from the same Protocols -> ESP section you can load the ciphering and integrity keys used in that session to decrypt them.
Kamailio is generally thought of as a SIP router, but it can in fact handle Diameter signaling as well.
Everything to do with Diameter in Kamailio relies on the C Diameter Peer and CDP_AVP modules which abstract the handling of Diameter messages, and allow us to handle them sort of like SIP messages.
CDP on it’s own doesn’t actually allow us to send Diameter messages, but it’s relied upon by other modules, like CDP_AVP and many of the Kamailio IMS modules, to handle Diameter signaling.
Before we can start shooting Diameter messages all over the place we’ve first got to configure our Kamailio instance, to bring up other Diameter peers, and learn about their capabilities.
C Diameter Peer (Aka CDP) manages the Diameter connections, the Device Watchdog Request/Answers etc, all in the background.
We’ll need to define our Diameter peers for CDP to use so Kamailio can talk to them. This is done in an XML file which lays out our Diameter peers and all the connection information.
In our Kamailio config we’ll add the following lines:
This will load the CDP modules and instruct Kamailio to pull its CDP info from an XML config file at /etc/kamailio/diametercfg.xml
Let’s look at the basic example given when installed:
<?xml version="1.0" encoding="UTF-8"?>
<!--
DiameterPeer Parameters
- FQDN - FQDN of this peer, as it should appear in the Origin-Host AVP
- Realm - Realm of this peer, as it should appear in the Origin-Realm AVP
- Vendor_Id - Default Vendor-Id to appear in the Capabilities Exchange
- Product_Name - Product Name to appear in the Capabilities Exchange
- AcceptUnknownPeers - Whether to accept (1) or deny (0) connections from peers with FQDN
not configured below
- DropUnknownOnDisconnect - Whether to drop (1) or keep (0) and retry connections (until restart)
unknown peers in the list of peers after a disconnection.
- Tc - Value for the RFC3588 Tc timer - default 30 seconds
- Workers - Number of incoming messages processing workers forked processes.
- Queue - Length of queue of tasks for the workers:
- too small and the incoming messages will be blocked too often;
- too large and the senders of incoming messages will have a longer feedback loop to notice that
this Diameter peer is overloaded in processing incoming requests;
- a good choice is to have it about 2 times the number of workers. This will mean that each worker
will have about 2 tasks in the queue to process before new incoming messages will start to block.
- ConnectTimeout - time in seconds to wait for an outbound TCP connection to be established.
- TransactionTimeout - time in seconds after which the transaction timeout callback will be fired,
when using transactional processing.
- SessionsHashSize - size of the hash-table to use for the Diameter sessions. When searching for a
session, the time required for this operation will be that of sequential searching in a list of
NumberOfActiveSessions/SessionsHashSize. So higher the better, yet each hashslot will consume an
extra 2xsizeof(void*) bytes (typically 8 or 16 bytes extra).
- DefaultAuthSessionTimeout - default value to use when there is no Authorization Session Timeout
AVP present.
- MaxAuthSessionTimeout - maximum Authorization Session Timeout as a cut-out measure meant to
enforce session refreshes.
-->
<DiameterPeer
FQDN="pcscf.ims.smilecoms.com"
Realm="ims.smilecoms.com"
Vendor_Id="10415"
Product_Name="CDiameterPeer"
AcceptUnknownPeers="0"
DropUnknownOnDisconnect="1"
Tc="30"
Workers="4"
QueueLength="32"
ConnectTimeout="5"
TransactionTimeout="5"
SessionsHashSize="128"
DefaultAuthSessionTimeout="60"
MaxAuthSessionTimeout="300"
>
<!--
Definition of peers to connect to and accept connections from. For each peer found in here
a dedicated receiver process will be forked. All other unknown peers will share a single
receiver. NB: You must have a peer definition for each peer listed in the realm routing section
-->
<Peer FQDN="pcrf1.ims.smilecoms.com" Realm="ims.smilecoms.com" port="3868"/>
<Peer FQDN="pcrf2.ims.smilecoms.com" Realm="ims.smilecoms.com" port="3868"/>
<Peer FQDN="pcrf3.ims.smilecoms.com" Realm="ims.smilecoms.com" port="3868"/>
<Peer FQDN="pcrf4.ims.smilecoms.com" Realm="ims.smilecoms.com" port="3868"/>
<Peer FQDN="pcrf5.ims.smilecoms.com" Realm="ims.smilecoms.com" port="3868"/>
<Peer FQDN="pcrf6.ims.smilecoms.com" Realm="ims.smilecoms.com" port="3868"/>
<!--
Definition of incoming connection acceptors. If no bind is specified, the acceptor will bind
on all available interfaces.
-->
<Acceptor port="3868" />
<Acceptor port="3869" bind="127.0.0.1" />
<Acceptor port="3870" bind="192.168.1.1" />
<!--
Definition of Auth (authorization) and Acct (accounting) supported applications. This
information is sent as part of the Capabilities Exchange procedures on connecting to
peers. If no common application is found, the peers will disconnect. Messages will only
be sent to a peer if that peer actually has declared support for the application id of
the message.
-->
<Acct id="16777216" vendor="10415" />
<Acct id="16777216" vendor="0" />
<Auth id="16777216" vendor="10415"/>
<Auth id="16777216" vendor="0" />
<!--
Supported Vendor IDs - list of values which will be sent in the CER/CEA in the
Supported-Vendor-ID AVPs
-->
<SupportedVendor vendor="10415" />
<!--
Realm routing definition.
Each Realm can have a different table of peers to route towards. In case the Destination
Realm AVP contains a Realm not defined here, the DefaultRoute entries will be used.
Note: In case a message already contains a Destination-Host AVP, Realm Routeing will not be
applied.
Note: Routing will only happen towards connected and application id supporting peers.
The metric is used to order the list of preferred peers, while looking for a connected and
application id supporting peer. In the end, of course, just one peer will be selected.
-->
<Realm name="ims.smilecoms.com">
<Route FQDN="pcrf1.ims.smilecoms.com" metric="3"/>
<Route FQDN="pcrf2.ims.smilecoms.com" metric="5"/>
</Realm>
<Realm name="temp.ims.smilecoms.com">
<Route FQDN="pcrf3.ims.smilecoms.com" metric="7"/>
<Route FQDN="pcrf4.ims.smilecoms.com" metric="11"/>
</Realm>
<DefaultRoute FQDN="pcrf5.ims.smilecoms.com" metric="15"/>
<DefaultRoute FQDN="pcrf6.ims.smilecoms.com" metric="13"/>
</DiameterPeer>
First we need to start by telling CDP about the Diameter peer it’s going to be – we do this in the <DiameterPeer> section, where we define the FQDN and Diameter Realm we’re going to use, as well as some general configuration parameters.
<Peer> entries are, of course, Diameter peers. Defining them here means a connection is established to each one, capabilities are exchanged and watchdog requests/responses are managed. We define the usage of each peer further on in the config.
The Acceptor section – fairly obviously – sets the bindings for the addresses and ports we’ll listen on.
Next up we need to define the Diameter applications we support in the <Acct>, <Auth> and <SupportedVendor> elements. This can be a little unintuitive, as we could list support for every Diameter application here, but unless you’ve got a module that can handle those applications it’s of no use.
Instead of using Dispatcher to manage sending Diameter requests, CDP handles this for us. CDP keeps track of each peer’s status and capabilities, and we can group like peers together – for example, we may have a pool of PCRF NEs, so we can group them into a <Realm>. Instead of calling a peer directly we can call the realm, and CDP will dispatch the request to an up peer inside the realm, similar to Dispatcher groups.
Finally we can configure a <DefaultRoute> which will be used if we don’t specify the peer or realm the request needs to be sent to. Multiple default routes can exist, differentiated based on preference.
We can check the status of peers using Kamcmd’s cdp.list_peers command which lists the peers, their states and capabilities.
While poking around the development and debugging features on Samsung handsets I found the ability to run IMS Debugging directly from the handset.
Alas, the option is locked in the commercial version – it’s just there for carriers – and requires a One Time Password to unlock.
When tapping on the option a challenge is generated with a key.
Interestingly I noticed that the key changes each time and can reject you even in aeroplane mode, suggesting the authentication happens client side.
Some research revealed you can pull APKs off an Android phone, so I downloaded a utility called “APK Extractor” from the Play store, and used it to extract the Samsung Sysdump utility.
So now I was armed with the APK on my local machine, the next step was to see if I could decompile the APK back into source code.
Some Googling found me an online APK decompiler, which I fed the compiled APK file and got back the source code.
I did some poking around inside the source code, and then I found an interesting directory:
Here’s a screenshot of the vanilla code that came out of the app.
I’m not a Java expert, but even I could see the “CheckOTP” function and understand that that’s what validates the One Time Passwords.
The while loop threw me a little – until I read through the rest of the code; the “key” in the popup box is actually a text string representing the current UNIX timestamp down to the minute level. The correct password is an operation done on the “key”; however, the CheckOTP function doesn’t know the challenge key, only the current time, so it generates a candidate key for each timestamp a few minutes into the past and a few minutes into the future.
I modified the code slightly to allow me to enter the presented “key” and get the correct password back. It’s worth noting you need to act quickly, enter the “key” and enter the response within a minute or so.
In the end I’ve posted the code on an online Java compiler,
Samsung handsets have a feature built in to allow debugging from the handset, called Sysdump.
Entering *#9900# from the dialing screen will bring up the Sysdump app; from here you can dump logs from the device and run a variety of debugging procedures.
For private LTE operators, the two most interesting options are by far TCPDUMP START and the IMS Logger, but both are grayed out.
Tapping on them asks for a one-time password and presents a challenge key.
These options are locked in the commercial version of the OS and need to be unlocked with a one-time key generated by a tool Samsung uses for unlocking engineering firmware on handsets.
Luckily this authentication happens client side, which means we can work out the password it’s expecting.
Once you’ve entered the code and successfully unlocked the IMS Debugging tool there’s a few really cool features in the hamburger menu in the top right.
DM View
This shows the SIP / IMS messaging and the current signal strength parameters (used to determine which RAN type to use, i.e. falling back from VoLTE to UMTS / circuit switched when the LTE signal strength drops).
Tapping on the SIP messages expands them and allows you to see the contents of the SIP messages.
Interestingly, the actual nitty-gritty parameters in the SIP headers are missing, replaced with X for anything “private” or identifiable.
Luckily all this info can be found in the PCAP.
The DM View is great for getting a quick look at what’s going on, on the mobile device itself, without needing a PC.
Logging
The real power comes in the logging functions,
There’s a lot of logging options, including screen recording, TCPdump (as in Packet Captures) and Syslog logging.
From the hamburger menu we can select the logging parameters we want to change.
From the Filter Options menu we can set what info we’re going to log,
The PLMN Identifier is used to identify the radio networks in use; it’s made up of the MCC (Mobile Country Code) and the MNC (Mobile Network Code).
But sadly it’s not as simple as just concatenating MCC and MNC like in the IMSI – there’s a bit more to it.
In the example above the Tracking Area Identity includes the PLMN Identity, and Wireshark has been kind enough to split it out into MCC and MNC, but how does it get that from the value 12f410?
This one took me longer to work out than I’d like to admit, and saw me looking through the GSM spec, but here goes:
PLMN Contents: Mobile Country Code (MCC) followed by the Mobile Network Code (MNC). Coding: according to TS GSM 04.08 [14].
If storage for fewer than the maximum possible number n is required, the excess bytes shall be set to 'FF'. For instance, using 246 for the MCC and 81 for the MNC and if this is the first and only PLMN, the contents reads as follows: Bytes 1-3: '42' 'F6' '18' Bytes 4-6: 'FF' 'FF' 'FF' etc.
TS GSM 04.08 [14].
Making sense to you now? Me neither.
Here’s the Python code I wrote to encode MCC and MNCs to PLMN Identifiers and to decode PLMN into MCC and MNC, and then we’ll talk about what’s happening:
In the above example I take MCC 505 (Australia) and MNC 93 and generate the PLMN ID 05f539.
The first step in decoding is to take the first two hex digits (in our case 05) and reverse them – 50 – then we take the third and fourth digits (f5) and reverse them too, and strip the letter f, leaving just 5. We join that with what we had earlier and there’s our MCC – 505.
Next we get our MNC; for this we take digits 5 & 6 (39) and reverse them, and there’s our MNC – 93.
Together we’ve got MCC 505 and MNC 93.
The one answer I’m still looking for; why not just encode 50593? What is gained by encoding it as 05f539?
After a few quiet months I’m excited to say I’ve pushed through some improvements recently to PyHSS and it’s growing into a more usable HSS platform.
MongoDB Backend
This has a few obvious advantages – it’s more scalable, and it opens up the ability to customize more of the subscriber parameters, like GBR bearers, that simple flat text files just wouldn’t support, while avoiding the obvious issues with threading and writing to and from text files at scale.
Knock knock.
Race condition.
Who’s there?
— Threading Joke.
For now I’m using the Open5GS MongoDB schema, so the Open5GS WebUI can be used for administering the system and adding subscribers.
The CSV / text file backend is still there and still works; the MongoDB backend is only used if you enable it in the YAML file.
The documentation for setting this up is in the readme.
SQN Resync
If you’re working across multiple different HSSs, or perhaps messing with some crypto stuff on your USIM, there’s a chance you’ll get the SQN (the Sequence Number) on the USIM out of sync with what’s on the HSS.
This manifests itself as the UE rejecting the authentication challenge and the MME sending a new Authentication Information Request, this time containing a Re-Synchronization-Info AVP. I’ll talk more about how this works in another post, but in short, PyHSS now looks at this value and uses it, combined with the original RAND value sent in the first Authentication Information Answer, to work out the correct SQN value, update whichever database backend you’re using accordingly, and then send another Authentication Information Answer with authentication vectors using the correct SQN.
SQN resync is something that’s really cryptographically fiddly and confusing to implement, hence this taking so long.
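For the curious, the resync maths itself is small once you have a Milenage implementation to hand. Here’s a rough sketch of just the SQN recovery step (the recover_sqn function is illustrative only; AK* has to be computed with Milenage f5*, which isn’t included here):

```python
# Sketch of recovering the USIM's SQN from an AUTS value (per 3GPP TS 33.102).
# auts    = 14-byte value returned by the USIM after a sync failure
# ak_star = AK* = f5*(K, OPc, RAND), computed with a Milenage implementation

def recover_sqn(auts: bytes, ak_star: bytes) -> int:
    conc_sqn_ms = auts[:6]                 # AUTS = (SQN_MS xor AK*) || MAC-S
    sqn_ms = bytes(a ^ b for a, b in zip(conc_sqn_ms, ak_star))
    return int.from_bytes(sqn_ms, 'big')   # the SQN the USIM is actually up to
```

The HSS can then store this (or a value just ahead of it) as the new SQN, so the next vectors it generates are accepted by the USIM.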
What’s next? – IMS / Multimedia Auth
The next feature that’s coming soon is the Multimedia Authentication Request / Answer to allow CSCFs to query for IMS Registration and manage the Cx and Dx interfaces.
Code for this is already in place but failing some tests; I’m not sure if that’s to do with the MAA response or something on my CSCFs.
LTE has great concepts like NAS that abstract the actual transport layers, so the NAS packet is generated by the UE and then read by the MME.
One thing that’s a real headache about private LTE is the authentication side of things. You’ll probably bash your head against a SIM programmer for some time.
As you probably know, when connecting to a network the UE shares its IMSI / TMSI with the network, and the MME requests authentication information from the HSS using the Authentication Information Request over Diameter.
The HSS then returns a random value (RAND), the expected result (XRES), an authentication token (AUTN) and a KASME for generating further keys.
The RAND and AUTN values are sent to the UE, the USIM in the UE calculates the RES (result) and sends it back to the MME. If the RES value received by the MME is equal to the expected RES (XRES) then the subscriber is mutually authenticated.
Using this tool I was able to plug a USIM into my USIM reader, then use the Diameter client built into PyHSS to send an Authentication Information Request to the HSS, and I was sent back an Authentication Information Answer containing the RAND and AUTN values, as well as the XRES value.
Then I used the osmo-sim-auth app to run the RAND and AUTN values against the USIM and get back the RES.
The RES I got back matched the XRES, meaning the HSS and the USIM are in sync (SQNs match) and they mutually authenticated.
I thought I’d expand a little on how the Crypto side of things works in LTE & NR (also known as 4G & 5G).
Authentication primarily happens in two places, one at each end of the network: in the Home Subscriber Server and in the USIM card. Let’s take a look at each of them.
On the USIM
On the USIM we’ve got two values that are entered in when the USIM is provisioned: the K key – our secret key – and an OPc key (operator key).
These two keys are the basis of all the cryptography that goes on, so should never be divulged.
The only other place these two keys exist is in the HSS, which associates each K key and OPc key combination with an IMSI.
The USIM also stores the SQN (sequence number), which is used to prevent replay attacks and is incremented after each authentication challenge, starting at 1 for the first authentication challenge and counting up from there.
On the HSS
On the HSS we have the K key (Secret key), OPc key (Operator key) and SQN (Sequence Number) for each IMSI on our network.
Each time an IMSI authenticates itself we increment the SQN, so the value of the SQN on the HSS and on the USIM should (almost) always match.
Authentication Options
Let’s imagine we’re designing the authentication between the USIM and the Network; let’s look at some options for how we can authenticate everyone and why we use the process we use.
Failed Option 1 – Passwords in the Clear
The HSS could ask the USIM to send its K and OPc values, compare them to what the HSS has on record, and then either accept or reject the USIM depending on whether they match.
The obvious problem with this is that to send this information we’d broadcast our supposedly secret K and OPc keys over the air, so anyone listening would get our secret values, and they wouldn’t be so secret anymore.
This is why we don’t use this method.
Failed Option 2 – Basic Crypto
So we’ve seen that sending our keys publicly is out of the question.
The HSS could instead ask the USIM to mix its K key and OPc key in such a way that only someone holding both keys could produce the same result.
This is done with some cryptographic black magic; all you need to know is that it’s a one-way function: you put values in and get the same result every time for the same input, but you can’t work out the input from the result.
The HSS could then get the USIM to send back the result of mixing up both keys, mix the two keys it knows itself, and compare the two.
The result of mixing the keys on the USIM is called RES (Result), while the result of the HSS mixing the keys is called XRES (Expected Result). If the RES the USIM sends back matches the XRES the HSS calculated by mixing the keys in the same way, the user is authenticated.
This is a better solution, but it has a limitation: because our special mixing of the keys gives the same RES every time we put in the same OPc and K keys, the RES (result) is going to be the same every time a subscriber authenticates to the network.
This is vulnerable to replay attacks. An attacker doesn’t need to know the two secret keys (K & OPc) that went into creating the RES; they just need to know the RES itself, which is sent over the air for anyone to hear. If the attacker sends the same RES, they can still authenticate.
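To make the weakness concrete, here’s a toy sketch of this option (SHA-256 standing in for the real mixing function, and made-up keys):

```python
# Toy model of "Failed Option 2": a one-way mix of K and OPc with no randomness.
# SHA-256 here is just a stand-in, not the real 3GPP algorithm.
import hashlib

K, OPC = b'secret-k', b'operator-opc'       # made-up keys

res  = hashlib.sha256(K + OPC).hexdigest()  # what the USIM sends over the air
xres = hashlib.sha256(K + OPC).hexdigest()  # what the HSS expects

assert res == xres   # subscriber "authenticated"...
# ...but RES is identical every single time, so anyone who overhears it once
# can replay it and authenticate without ever knowing K or OPc.
```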
This is why we don’t use this method.
Failed Option 3 – Mix keys & add Random
To prevent these replay attacks we add an element of randomness, so the HSS generates a random string of garbage called RAND, and sends it to the USIM.
The USIM then mixes RAND (the random string), the K key and the OPc key, and sends back the RES (Result).
Because we introduced a RAND value, every time the RAND is different the RES is different. This prevents the replay attacks we were vulnerable to in our last example.
If the result the USIM calculated with the K key, OPc key and random data is the same as the result the HSS calculated with the same K key, OPc key and the same random data, the user is authenticated.
While an attacker could reply with the same RES, the random data (RAND) will change each time the user authenticates, meaning that response will be invalid.
The problem here is that while the network has now authenticated the USIM, the USIM hasn’t actually verified it’s talking to the real network.
This is why we don’t use this method.
GSM authentication worked like this, but in a GSM network you could set up your HLR (the GSM equivalent of a HSS) to allow in every subscriber regardless of the RES value they sent back, meaning it didn’t look at the keys at all. This meant attackers could set up fake base stations to capture users.
Option 4 – Mutual Authentication (Real World*)
So from the previous options we’ve learned:
Our network needs to authenticate our subscribers, in a way that can’t be spoofed / replayed so we know who to bill & where to route traffic.
Our subscribers need to authenticate the network so they know they can trust it to carry their traffic.
So our USIM needs to authenticate the network, in the same way the network authenticates the USIM.
To do this we introduce a new key for network authentication, called AUTN.
The AUTN key is generated by the HSS by mixing the secret keys and RAND values together, but in a different way to how we mix the keys to get RES. (Otherwise we’d get the same key).
This AUTN key is sent to the USIM along with the RAND value. The USIM runs the same mixing on its private keys and RAND that the HSS did to generate the AUTN, except this time it’s generated by the USIM – an expected AUTN (XAUTN). The USIM compares XAUTN and AUTN to make sure they match; if they do, the USIM knows the network knows its secret keys.
The USIM then does the same mixing it did in the previous option to generate the RES and sends it back.
The network has now authenticated the subscriber (the HSS has authenticated the USIM via the RES) and the subscriber has authenticated the network (the USIM has authenticated the HSS via the AUTN).
*This is a slightly simplified version of how EUTRAN / LTE authentication works between the HSS and the USIM – in reality there are a few extra values, such as the SQN, to take into consideration, and the USIM talks to the MME, not the HSS directly.
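To tie the whole flow together, here’s a toy version of it (again with SHA-256 standing in for the real mixing functions, made-up keys, and the SQN and MME left out):

```python
# Toy model of the mutual authentication flow - NOT the real Milenage algorithm,
# just a stand-in one-way mix to show how RAND, RES/XRES and AUTN fit together.
import hashlib
import os

def mix(*parts: bytes) -> str:
    # One-way function: the same inputs always give the same output,
    # but the inputs can't be recovered from the output.
    return hashlib.sha256(b''.join(parts)).hexdigest()

K, OPC = b'secret-k', b'operator-opc'   # provisioned on both the USIM and the HSS

# HSS side: generate a fresh challenge
rand = os.urandom(16)
xres = mix(K, OPC, rand, b'res')        # what the network expects back
autn = mix(K, OPC, rand, b'autn')       # proves the network knows K / OPc

# USIM side: authenticate the network first, then answer the challenge
assert mix(K, OPC, rand, b'autn') == autn   # network authenticated
res = mix(K, OPC, rand, b'res')

# Network side: authenticate the subscriber
assert res == xres                          # subscriber authenticated
```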
I’ll do a follow-up post covering the more nitty-gritty elements: the AMF and SQN fields, OP vs OPc keys, SQN resync, how this information is transferred in the Authentication Information Answer, and how KASME keys are used / distributed.
Want more telecom goodness?
I have a good old fashioned RSS feed you can subscribe to.