Category Archives: LTE

3GPP Long Term Evolution (4G)

Mobile IPv6 Tax?

Recently a Tweet from Dean Bubly got me thinking about how data is charged in cellular:

In the cellular world, subscribers are charged for data at the IP, transport and application layers; this means you pay for the IP header, you pay for the TCP/UDP header, and you pay for the contents (the cat videos themselves).

This also means that if an operator moves mobile subscribers from IPv4 to IPv6, there's an extra 20 bytes for every packet sent / received that the customer is charged for – this is because the IPv6 header (40 bytes) is longer than the IPv4 header (20 bytes).

Source: ServerFault - https://serverfault.com/questions/547768/ipv4-header-vs-ipv6-header-size

In most cases, mobile subs don't get a choice as to whether their connection is IPv4 or IPv6, but on a like-for-like basis we can say that if a customer is on IPv6, every packet sent/received will consume an extra 20 bytes of data compared to IPv4.

This means subscribers use more data on IPv6, and therefore get charged for more data on IPv6.

For IoT applications, light users and PAYG users, this extra 20 bytes per packet could add up to something significant – But how much?

We can quantify this, but we’d need to know the number of packets sent on average, and the quantity of the data transferred, because the number of packets is the multiplier here.

So for starters I left a phone on the desk, registered to the network but just sitting in idle mode. But this is an engineering phone from an OEM, used purely for testing – it has no apps loaded, isn't signed into anything and isn't checking in with any services in the background – so I thought I'd try something more realistic.

So to get a clearer picture, I chucked a SIM in my regular everyday phone I use personally, registered it to the cellular lab I have here. For the next hour I sniffed the GTP traffic for the phone while it was sitting on my desk, not touching the phone, and here’s what I’ve got:

Overall the PCAP includes 6,417,732 bytes of data, but this includes the transport and GTP headers, meaning we can drop everything outside the GTP payload in our traffic calculations.

Everything except the data encapsulated in GTP can be dropped

For this I've got 14 bytes of Ethernet, 20 bytes of IP, 8 bytes of UDP and 5 bytes of TZSP (this is to copy the traffic from the eNB to my local machine), then we've got the transport from the eNB to the SGW: 14 bytes of Ethernet again, 20 bytes of IP, 8 bytes of UDP and 8 bytes of GTP, then the payload itself. Phew.
All this means we can drop 97 bytes off every packet.

We have 16,889 packets and 6,417,732 bytes in total; subtracting 97 bytes from each packet gives us 1,638,233 bytes of headers to drop (~1.6MB), leaving roughly 4.78 MB (4,779,499 bytes) of traffic to/from the phone itself.

This means my Android phone consumes just under 4.8 MB of cellular data in an hour while sitting on the desk, with 16,889 packets in/out.

Okay, now we’re getting somewhere!

So now we can answer the question: if each of these 16,889 packets was IPv6 rather than IPv4, we'd be adding another 20 bytes to each of them – 20 bytes x 16,889 packets gives 337,780 bytes (~0.3MB) to add to the total.

That extra ~0.3MB works out to roughly 7% overhead compared to carrying the same traffic over IPv4.
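If you want to run the numbers yourself, here's the back-of-the-envelope maths as a quick Python sketch – the packet and byte counts are the ones from my capture above, and the 97 bytes of per-packet overhead is specific to my TZSP + GTP capture setup:

# Back-of-the-envelope IPv6 "tax" calculation using the capture figures above
packets = 16_889
pcap_bytes = 6_417_732
# Eth + IP + UDP + TZSP (capture copy), then Eth + IP + UDP + GTP (eNB to SGW transport)
outer_headers = (14 + 20 + 8 + 5) + (14 + 20 + 8 + 8)   # = 97 bytes per packet

payload_bytes = pcap_bytes - (packets * outer_headers)  # traffic to/from the phone itself
ipv6_extra = packets * 20                                # extra 20 bytes per packet on IPv6

print(f"Payload:       {payload_bytes / 1e6:.2f} MB")    # ~4.78 MB
print(f"IPv6 overhead: {ipv6_extra / 1e6:.2f} MB "
      f"(~{100 * ipv6_extra / payload_bytes:.0f}% extra)")  # ~0.34 MB, ~7% extra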

But before you declare it an outrage that IPv6 transport means being charged for those extra bytes, that's only one part of the picture.

There’s a reason operators are finally embracing IPv6, and it’s not to put an extra 7% of traffic on the network (I think if you asked most capacity planners, they’d say they want data savings, not growth).

IPv6 is, for lack of a better term, less rubbish than IPv4.

There’s a lot of drivers for IPv6, and some of these will reduce data consumption.
With IPv6 your stuff talks directly to the remote stuff, which means we don't need to rely on NAT – so no NAT keepalives and no constantly re-opening sessions, which is going to save you data. If you're running apps that need to keep a connection alive to somewhere, these data savings could negate your IPv6 overhead costs.

Will these potential data savings when using IPv6 outweigh the costs?

That’s going to depend on your use case.

If you're extremely bandwidth / data constrained – for example, you have an IoT device on an NTN / satellite connection that was having to push data every X hours via IPv4 because you couldn't pull data from it (it had no public IP) – then moving it to IPv6 so you can pull the data on demand from its public IP will save you data. That's a win with IPv6.

If you're a mobile user watching YouTube, getting push notifications and using your phone like a normal human? Probably not. If you're using data like a normal user, you've probably got a sizable data allowance that you don't fully consume anyway, and the extra 20 bytes per packet will be nothing in comparison to the data used to watch a 2K video on your small phone screen.

SMS Transport Wars?

There's an old joke about standards: the great thing about standards is that there are so many to choose from.

SMS wasn’t there from the start of GSM, but within a year of the inception of 2G we had SMS, and we’ve had SMS, almost totally unchanged, ever since.

In a recent Twitter exchange, I was asked, what’s the best way to transport SMS?
As always the answer is “it depends” so let’s take a look together at where we’ve come from, where we are now, and how we should move forward.

How we got Here

Between 2G and 3G SMS didn’t change at all, but the introduction of 4G (LTE) caused a bit of a rethink regarding SMS transport.

Early builders of LTE (4G) networks launched their 4G offerings without 4G Voice support (VoLTE), with the idea that networks would “fall back” to using 2G/3G for voice calls.

This meant users got fast data, but to make or receive a call they relied on falling back to the circuit switched (2G/3G) network – Hence the name Circuit Switched Fallback.

Falling back to the 2G/3G network for a call was one thing, but some smart minds realised that if a phone had to fall back to a 2G/3G network every time a subscriber sent a text (not just for calls) – and keep in mind this was ~2010, when SMS traffic was crazy high – it would put a huge amount of strain on the 2G/3G layers as subs constantly flip-flopped between them.

To address this, the SGs-AP interface was introduced, linking the 4G core (MME) with the 2G/3G core (MSC) to support this stage where 4G/LTE was data-only, and SMS and calls still relied on the 2G/3G core (MSC).

The SGs-AP interface has two purposes: one, it can tell a phone on 4G to fall back to 2G/3G when it's got an incoming call, and two, it can send and receive SMS.

SMS traffic over this interface is sometimes described as SMS-over-NAS, as it’s transported over a signaling channel to the UE.

This also worked when roaming, as the MSC from the 2G/3G network was still used, so SMS delivery worked the same when roaming as if you were in the home 2G/3G network.

Enter VoLTE & IMS

Of course when VoLTE entered the scene, it also came with its own option for delivering SMS to users, using IP rather than NAS signaling. This removed the reliance on a link to a 2G/3G core (MSC) to make calls and send texts.

This was great because it allowed operators to build networks without any 2G/3G network elements and build a fully standalone LTE only network, like Jio, Rakuten, etc.

VoLTE didn't change anything about the GSM 2G/3G SMS PDU, it just bundled it up in a SIP message body – this is often referred to as SMS-over-IP.

SMS-over-IP doesn't address any of the limitations from 2G/3G – including needing multipart messages to send payloads above 160 characters – and carries all the same limitations in order to be backward compatible, but it is over IP, and it doesn't need 2G or 3G.

In roaming scenarios, S8 Home Routing for VoLTE enabled SMS to be handled when roaming the same way as voice calls, which made SMS roaming a doddle.

4G SMS: SMS over IP vs SMS over NAS

So if you’re operating a 4G network, should you deliver your SMS traffic using SMS-over-IP or SMS-over-NAS?

Generally, if you've been evolving your network over the years, you've got an MSC and a 2G/3G network, and you may still do CSFB, so you've probably ended up using SMS over NAS via the SGs-AP interface.
This method still relies on "the old ways" to work, which is fine until a discussion starts around sunsetting the 2G/3G networks – at which point you'd need to move calling to VoLTE – and SMS over NAS is a bit of a mess when it comes to roaming.

Greenfield operators generally opt for SMS over IP from the start, but this has its own limitations: SMS over IP has awful efficiency, which makes it unsuitable for bandwidth-constrained NB-IoT applications; support for SMS over IP is generally limited to more expensive chipsets, so the bargain-basement chips used for IoT often don't support it either; and enabling VoLTE / IMS comes with its own set of integration challenges.

5G enters the scene (Nsmsf_SMService)

5G rolled onto the scene with the opportunity to remove the SMS over NAS option and rely purely on SMS over IP (IMS), forcing the industry to standardise on one option – alas, this did not happen.

Instead, 5GC introduces yet another delivery mechanism for SMS, just for 5GC without VoNR: the SMSF, which can still send messages over 5G NAS signaling.

This added another option for SMS delivery dependent on the access network used, and the Nsmsf_SMService interface does not support roaming.

Of course if you are using Voice over NR (VoNR) then like VoLTE, SMS is carried in a SIP message to the IMS, so this negates the need for the Nsmsf_SMService.

2G/3G Shutdown – Diameter to replace SGs-AP (SGd)

With the 2G/3G shutdowns in the US, operators who had up until this point been relying on SMS-over-NAS using the SGs-AP interface back to their MSCs were forced to decide how to route SMS traffic once the MSCs were shut down.

This landed with SMS-over-Diameter, where the 4G core (MME) communicates over Diameter with the SMSc.

The advantage of this approach is the Diameter protocol stack is the backbone of 4G roaming, and it’s not a stretch to get existing Diameter Routing Agents to start flicking SMS over Diameter messages between operators.

This has been adopted by all the US operators, but we're not seeing it so widely deployed in the rest of the world.

State of Play

Option: MAP
Conditions: 2G/3G only
Notes: Relies on SS7 signaling and is very old. Supports roaming.

Option: SGs-AP (SMS-over-NAS)
Conditions: 4G only, relies on 2G/3G
Notes: Needs an MSC to be present in the network (generally because you have a 2G/3G network and have not deployed VoLTE). Supports limited roaming.

Option: SMS over IP (IMS)
Conditions: 4G / 5G
Notes: Not supported on 2G/3G networks. Relies on an IMS enabled handset and network. Supports roaming in all S8 Home Routed scenarios. Device support limited, especially for IoT devices.

Option: Diameter SGd
Conditions: 4G only / 5G NSA
Notes: Only works on 4G or 5G NSA. Better device support than SMS over IP (IMS). Supports roaming in some scenarios.

Option: Nsmsf_SMService
Conditions: 5G standalone only
Notes: Only works on 5GC. Doesn't support roaming.
The convoluted world of SMS delivery options

A Way Forward:

While the SMS payload hasn't changed in the past 31 years, the way it is transported has opened up a lot of options for operators, with no clear winner – all while SMS revenues and traffic volumes have continued to fall.

For better or worse, the industry needs to accept that SMS over NAS is only an option where there is no IMS, and that in order to decommission 2G/3G networks, IMS needs to be embraced. Supporting SMS over IP (IMS) in all future networks seems like the simple, logical way forward.

And with that clear path forward, we add in another wildcard…

Direct to device Satellite messes everything up…

Remember way back in this post when I said SMS over IP using IMS is a really, really inefficient way of moving data? Well, that hasn't been a problem as we progressed up the generations of cellular tech, because with each "G" we had more and more bandwidth than the last.

To throw a spanner in the works, let’s introduce NB-IoT and Non-Terrestrial Networks which rely on Non-IP-Data-Delivery.

These offer the ability to cover the globe with a low bandwidth / high latency service that ensures a subscriber is always just a message away, and we're already seeing real-world examples of these networks being deployed for messaging applications.

But when you've only got a finite resource of bandwidth, and massive latencies to contend with, the all-IP architecture of IMS (VoLTE / VoNR) and its woeful inefficiency starts to really sting.

Of course there are potential workarounds here – Robust Header Compression (ROHC) can shrink this down – but it still relies on the TCP three-way handshake, TCP keepalive timers and IMS registrations, which in turn can starve the radio resources of the satellite link.

For NTN (satellite) networks, the case is being heavily made to rely on Non-IP-Data-Delivery, so the logical answer for these applications is to move the traffic back to SMS over NAS.

End Note

Even with SMS over 30 years old, we can still expect it to be a part of networks for years to come, even as WhatsApp / iMessage, etc, offer enhanced services. As to how it’s transported and the myriad of options here, I’m expecting that we’ll keep seeing a multi-transport mix long into the future.

For a simple, cut-and-dried 4G/5G-only network, IMS and SMS over IP makes the most sense, but for anything outside of that, you've got a toolbox of options to build a solution that best meets your needs.

What’s the maximum speed for LTE and 5G?

Even before 5G was released, there was an arms race to claim the "fastest" speeds on LTE, NSA and SA networks, with pretty much every operator claiming a "first" or a "fastest".

I myself have the fastest 5G network available*, but I thought I'd look at how big the values we can put in for speed actually are – these are the Maximum Bitrate values (like AMBR) we can set on an APN/DNN, or on a Charging Rule.

*Measurement is of the fastest 5G network in an eastward facing office, operated by a person named Nick, in a town in Australia. Other networks operated by people other than those named Nick in eastward facing office outside of Australia were not compared.

The answer for Release 8 LTE is 4294967294 bits per second, aka 4,295 Mbps or 4.295 Gbps.

Not bad, but why this number?

The Max-Requested-Bandwidth-DL AVP tells the PGW the max throughput allowed in bits per second. It's an Unsigned32, so the max value is 4294967294, hence the value.

But come Release 15, some bright spark thought we might break this barrier in the not too distant future, so how do we go above it?

The answer was to bolt on another AVP – the "Extended-Max-Requested-BW-DL" AVP (554) was introduced. You might think that means the max speed now becomes 2x 4.295 Gbps, but that's not quite right – the units were shifted.

This AVP isn't measuring bits per second, it's measuring kilobits per second.

So the standard Max-Requested-Bandwidth-DL AVP gives us 4.3 Gbps, while the Extended-Max-Requested-BW-DL AVP gives us 4,295 Gbps.

We add the Extended-Max-Requested-BW-DL AVP (4,295 Gbps) onto the Max-Requested-Bandwidth-DL AVP (4.3 Gbps), giving us a total of roughly 4,299.3 Gbps.
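If you want to sanity check that maths, here's the quick sum as a Python sketch (nothing more than the arithmetic above):

# Max bitrate maths for the AVP values above
MAX_AVP_VALUE = 4294967294

max_requested_bw_dl = MAX_AVP_VALUE                  # bits per second   -> ~4.295 Gbps
extended_max_requested_bw_dl = MAX_AVP_VALUE * 1000  # kilobits per second, converted to bits per second

total_bps = max_requested_bw_dl + extended_max_requested_bw_dl
print(f"Pre Release 15:  {max_requested_bw_dl / 1e9:.3f} Gbps")   # 4.295 Gbps
print(f"Post Release 15: {total_bps / 1e9:.1f} Gbps")             # ~4299.3 Gbps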

So the short answer:

Pre release 15: 4.3 Gbps

Post release 15: 4,299.3 Gbps

Huawei BBU 3900 Architecture

Huawei BTS3900 eNB Configuration

Last year I purchased a cheap second hand Huawei macro base station – there’s lots of these on the market at the moment due to the fact they’re being replaced in many countries.

I’m using it in my lab environment, and as such the config I’ve got is very “bare bones” and basic. Keep in mind if you’re looking to deploy a Macro eNodeB in production, you may need more than just a blog post to get everything tuned and functioning properly…

In this post we’ll cover setting up a Huawei BTS3900 eNodeB from scratch, using the MML interface, without relying on the U2020 management tool.

Obviously the details I set up (IP addressing, PLMN and RF parameters) are going to be different to what you're configuring, so keep that in mind – where I've got my MME addresses, site IDs, TACs, IP addresses, RFUs, etc, you'll need to substitute your own values.

A word on Cabinets

Typically these eNodeBs are shipped in cabinets that contain the power supplies, alarm / environmental monitoring, power distribution, etc.

Early on in the setup process we’ll be setting the cabinet types we’ve got, and then later on we’ll tell the system what we have installed in which slots.

This is fine if you have a cabinet and know the type, but in my case at least I don’t have a cabinet manufactured by Huawei, just a rack with some kit mounted in it.

This is OK, but it leads to a few gotchas: I need to add a cabinet (even though it doesn't physically exist), and when I set up my RRUs I need to define which cabinet, slot and subrack they're in, even though they aren't in any. Keep this in mind as we go along and define the position of the equipment – if you're not using a real-world cabinet, the values mean nothing, but they need to be kept consistent.

The Basics

Before we get started, familiarise yourself with the Huawei MML we’ll use for configuring the unit, and log into the Web UI and bring up an MML shell.

To begin we'll need to set up the basics, by disabling DHCP and setting a local IP address for the unit.

 SET DHCPSW: SWITCH=DISABLE;
 SET LOCALIP: IP="192.168.5.234", MASK="255.255.248.0";

Obviously your IP address details will be different.
Next we’ll add an eNodeB function, the LMPT / UMPT can have multiple functions and multiple eNodeBs hosted on the same hardware, but in our case we’re just going to configure one:

 ADD ENODEBFUNCTION: eNodeBFunctionName="LTE", ApplicationRef=1, eNodeBId=9527;
 SET NE: NENAME="HUAWEI", LOCATION="NewSite", DID="NewSite12345", SITENAME="NewSite1", USERLABEL="NewInitSite";
 ADD LOCATION: LOCATIONNAME="NewSite", GCDF=Degree, LATITUDEDEGFORMAT=0, LONGITUDEDEGFORMAT=0; 

Again, your eNodeB ID, location, site name, etc, are all going to be different.

Next we'll set the system to maintenance mode (MNTMODE) so we can make changes on the fly (this takes the eNB off the air, but we're already off the air). You'll need to adjust the start time to reflect the current time, and the end time to be some point after you're done setting all this up.

 SET MNTMODE: MNTMode=INSTALL, ST=2013&09&20&15&00&00, ET=2013&09&25&15&00&00, MMSetRemark="NewSite Install";

Next we’ll set the operator details, this is the PLMN of the eNodeB, and create a new tracking area.

 ADD CNOPERATOR: CnOperatorId=0, CnOperatorName="NickTest", CnOperatorType=CNOPERATOR_PRIMARY, Mcc="001", Mnc="01";
ADD CNOPERATORTA: TrackingAreaId=0, CnOperatorId=0, Tac=1;

Next we'll set up and populate the cabinets I mentioned earlier. I'll be telling the unit it's inside an APM30 (Cabinet 0), and that in Cabinet Number 0, Subrack 0, there is a BBU3900.

 //To modify the cabinet type, run the following command:
ADD CABINET:CN=0,TYPE=APM30;
//Add a BBU3900 subrack, run the following command:
ADD SUBRACK:CN=0,SRN=0,TYPE=BBU3900;
//To configure boards and RF data, run the following commands:

And inside the BBU3900 there are some cards of course, and each card has a slot, as per the drawing below.

In my environment I've got an LMPT in slot 7, and an LBBP in slot 3. There's a fan and a UPEU too, so:
We'll add a board in Slot No. 7, of type LMPT,
We'll add a board in Slot No. 3, of type LBBP working in TDD mode,
We'll add a fan board in Slot No. 16, and a UPEU in Slot No. 18.

 ADD BRD:SN=7,BT=LMPT;
 ADD BRD:CN=0,SRN=0,SN=3,BT=LBBP,WM=TDD;
 ADD BRD:CN=0,SRN=0,SN=16,BT=FAN;
 ADD BRD:CN=0,SRN=0,SN=18,BT=UPEU;

Huawei publish design guides for which cards should be in which slots, the general rule is that your LMPT / UMPT card goes in Slot 7, with your BBP cards (UBBP or LBBP) in slots 3, then 2, then 1, then 0. Fans and UPEUs can only go in the slots designed to fit them, so that makes it a bit easier.

Next we'll need to set up our RRUs. For this we'll need to set up an RRU chain, which is the Huawei term for the CPRI links, and add an RRU onto it:

ADD RRUCHAIN:RCN=10,TT=CHAIN,BM=COLD,HSRN=70,HSN=0,HPN=0;

ADD RRU:CN=0,SRN=60,SN=0,TP=BRANCH,RCN=10,PS=0,RT=MPMU,RS=TDL,RXNUM=0,TXNUM=0;

With our RRU chains defined, we’ll need to setup our transport network to get the traffic back to the S-GW / MME:

SET ETHPORT: SN=7, SBT=BASE_BOARD, PA=COPPER, SPEED=AUTO, DUPLEX=AUTO;
ADD DEVIP: SN=7, SBT=BASE_BOARD, PT=ETH, PN=0, IP="10.10.10.67", MASK="255.255.255.0";
ADD IPRT: RTIDX=0, SN=7, SBT=BASE_BOARD, DSTIP="10.166.1.251", DSTMASK="255.255.255.255", RTTYPE=NEXTHOP, NEXTHOP="10.10.10.1"; 
ADD IPRT: RTIDX=1, SN=7, SBT=BASE_BOARD, DSTIP="10.4.3.3", DSTMASK="255.255.255.255", RTTYPE=NEXTHOP, NEXTHOP="10.10.10.1"; 
ADD IPRT: RTIDX=2, SN=7, SBT=BASE_BOARD, DSTIP="10.3.3.3", DSTMASK="255.255.255.255", RTTYPE=NEXTHOP, NEXTHOP="10.10.10.1";
ADD IPRT: RTIDX=3, SN=7, SBT=BASE_BOARD, DSTIP="10.60.60.60", DSTMASK="255.255.255.255", RTTYPE=NEXTHOP, NEXTHOP="10.10.10.1";
ADD OMCH: IP="10.10.10.67", MASK="255.255.255.0", PEERIP="10.166.1.251", PEERMASK="255.255.255.255", BEAR=IPV4, BRT=YES, RTIDX=0, BINDSECONDARYRT=NO, CHECKTYPE=NONE;
ADD VLANMAP: NEXTHOPIP="10.10.10.1", MASK="255.255.248.0", VLANMODE=SINGLEVLAN, VLANID=3721, SETPRIO=DISABLE; 
ADD SCTPTEMPLATE: SCTPTEMPLATEID=0, SWITCHBACKFLAG=ENABLE;
ADD SCTPHOST: SCTPHOSTID=0, IPVERSION=IPv4, SIGIP1V4="10.10.10.67", SIGIP1SECSWITCH=DISABLE, SIGIP2SECSWITCH=DISABLE, PN=2000, SCTPTEMPLATEID=0;
ADD SCTPPEER: SCTPPEERID=0, IPVERSION=IPv4, SIGIP1V4="10.3.3.3", SIGIP1SECSWITCH=DISABLE, SIGIP2SECSWITCH=DISABLE, PN=2000;
ADD USERPLANEHOST: UPHOSTID=0, IPVERSION=IPv4, LOCIPV4="10.10.10.67", IPSECSWITCH=DISABLE;
ADD EPGROUP: EPGROUPID=0;
ADD SCTPHOST2EPGRP: EPGROUPID=0, SCTPHOSTID=0; 
ADD SCTPPEER2EPGRP: EPGROUPID=0, SCTPPEERID=0;
ADD UPHOST2EPGRP: EPGROUPID=0, UPHOSTID=0;
ADD S1: S1Id=0, CnOperatorId=0, EpGroupCfgFlag=CP_UP_CFG, CpEpGroupId=0, UpEpGroupId=0;


We’ll need clocking and time as well, we’ll use NTP and GPS:

SET TIMESRC: TIMESRC=NTP; 
ADD NTPC: MODE=IPV4, IP="10.166.1.251", PORT=123, SYNCCYCLE=60, AUTHMODE=PLAIN; 
SET MASTERNTPS: MODE=IPV4, IP="10.166.1.251"; 
SET TZ: ZONET=GMT+0800, DST=NO;

ADD GPS: SRN=0, SN=7;
SET CLKMODE: MODE=MANUAL, CLKSRC=GPS, SRCNO=0;
SET CLKSYNCMODE:CLKSYNCMODE=TIME;

Next we’ll need to define a sector, sector equipment & cell, then link it to a sector equipment group:

ADD SECTOR:SECTORID=0,ANTNUM=2,ANT1CN=0,ANT1SRN=60,ANT1SN=255, ANT1N=R0A,ANT2CN=0,ANT2SRN=60,ANT2SN=255,ANT2N=R0B,CREATESECTOREQM=FALSE;

ADD SECTOREQM:SECTOREQMID=0,SECTORID=0,ANTNUM=2,ANT1CN=0, ANT1SRN=60,ANT1SN=255,ANT1N=R0A,ANTTYPE1=RXTX_MODE,ANT2CN=0,ANT2SRN=60,ANT2SN=255,ANT2N=R0B,ANTTYPE2=RXTX_MODE;

ADD CELL:LOCALCELLID=1,CELLNAME="CELL1",FREQBAND=41,ULEARFCNCFGIND=NOT_CFG,DLEARFCN=40340,ULBANDWIDTH=CELL_BW_N100,DLBANDWIDTH=CELL_BW_N100,CELLID=1,PHYCELLID=1,FDDTDDIND=CELL_TDD,SUBFRAMEASSIGNMENT=SA2,SPECIALSUBFRAMEPATTERNS=SSP5,ROOTSEQUENCEIDX=0,CUSTOMIZEDBANDWIDTHCFGIND=NOT_CFG,EMERGENCYAREAIDCFGIND=NOT_CFG,UEPOWERMAXCFGIND=NOT_CFG,MULTIRRUCELLFLAG=BOOLEAN_TRUE,MULTIRRUCELLMODE=MPRU_AGGREGATION, CPRICOMPRESSION=NORMAL_COMPRESSION,TXRXMODE=2T2R;

ADD EUSECTOREQMGROUP:LOCALCELLID=1,SECTOREQMGROUPID=1;
ADD EUSECTOREQMID2GROUP:LOCALCELLID=1,SECTOREQMGROUPID=1, SECTOREQMID=0;

Alright, now we can activate it:

//Modify the reference signal power.
MOD PDSCHCFG: LocalCellId=1, ReferenceSignalPwr=-81;

//Add an operator for the cell.
ADD CELLOP: LocalCellId=0, TrackingAreaId=0;

//Activate the cell.
ACT CELL: LocalCellId=1;

And lastly we can define some neighboring cells:

//Configure neighboring cells. 
ADD EUTRANINTERNFREQ: LocalCellId=1, DlEarfcn=3100, UlEarfcnCfgInd=NOT_CFG, CellReselPriorityCfgInd=NOT_CFG, SpeedDependSPCfgInd=NOT_CFG, MeasBandWidth=MBW100, PmaxCfgInd=NOT_CFG, QqualMinCfgInd=NOT_CFG;
ADD EUTRANEXTERNALCELL: Mcc="460", Mnc="02", eNodeBId=236, CellId=0, DlEarfcn=3100, UlEarfcnCfgInd=NOT_CFG, PhyCellId=236, Tac=33;
ADD EUTRANINTERFREQNCELL: LocalCellId=1, Mcc="460", Mnc="02", eNodeBId=236, CellId=0;

BSF Addresses

The Bootstrapping Server Function (BSF) is used in 4G and 5G networks to allow applications to authenticate against the network; it's what we use to authenticate for XCAP and for an Entitlement Server.

Rather irritatingly, there are two BSF addresses in use:

If the ISIM is used for bootstrapping the FQDN to use is:

bsf.ims.mncXXX.mccYYY.pub.3gppnetwork.org

But if the USIM is used for bootstrapping the FQDN is

bsf.mncXXX.mccYYY.pub.3gppnetwork.org

You can override this by setting the 6FDA EF_GBANL (GBA NAF List) on the USIM, or the equivalent on the ISIM; however, from my testing, not all devices honour this.
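For reference, here's a tiny helper (my own sketch, not from any spec or SDK) that builds the two FQDN formats, zero-padding the MNC to three digits as the .pub.3gppnetwork.org domains expect:

# Build the BSF FQDN for a given PLMN, for ISIM or USIM based bootstrapping
def bsf_fqdn(mcc: str, mnc: str, isim_bootstrapping: bool) -> str:
    prefix = "bsf.ims" if isim_bootstrapping else "bsf"
    return f"{prefix}.mnc{mnc.zfill(3)}.mcc{mcc.zfill(3)}.pub.3gppnetwork.org"

print(bsf_fqdn("001", "01", isim_bootstrapping=True))   # bsf.ims.mnc001.mcc001.pub.3gppnetwork.org
print(bsf_fqdn("001", "01", isim_bootstrapping=False))  # bsf.mnc001.mcc001.pub.3gppnetwork.org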

Will 5GC be used in Wireline Access? No. Here’s why.

One of the hyped benefits of a 5G Core Network is that 5GC can be used for wired networks (think DSL or GPON) – in marketing terms this is called "Wireless Wireline Convergence" (5G WWC), meaning DSL operators, cable operators and fibre network operators can all get in on this sweet 5GC action and use this sexy 5G Core Network tech.

This is something that’s in the standards, and that the big kit vendors are pushing heavily in their marketing materials. But will it take off? And should operators of wireline networks (fixed networks) be looking to embrace 5GC?

Comparing 5GC with current wireline network technologies isn’t comparing apples to apples, it’s apples to oranges, and they’re different fruits.

At its heart, the 3GPP Core Networks (including 5G Core) address one particular use case of the cellular industry: subscriber mobility – allowing a customer to move around the network, being served by different kit (gNodeBs), while keeping the same IP address.

The most important function of 5GC is subscriber mobility.

This is achieved by encapsulating all the subscriber's IP data in GTP (a protocol that's been around since data was first added to 2G).

Do I need a 5GC for my Fixed Network?

Wireline networks are fixed. Subscribers don't constantly move around the network. A GPON customer doesn't need to move their ONT to a new location every 30 minutes.

Encapsulating a fixed subscriber’s traffic in GTP adds significant processing overhead, for almost no gain – The needs of a wireline network operator, are vastly different to the needs of a cellular core.

Today, you can take a /24 IPv4 block, route it to a DSLAM, OLT or CMTS, and give an IP to 254 customers – No cellular core needed, just a router and your access device and you’re done, and this has been possible for decades.
Because there’s no mobility the GTP encapsulation that is the bedrock for cellular, is not needed.

Rather than routing directly to access network kit, most fixed operators deploy BRAS systems for fixed access. Like the cellular packet core, the BRAS has been around for a very long time, with a massive install base and a sea of in-house engineering experience. It meets the needs of the wireline industry, which defined its functions and roles along with the wireline kit vendors – the fixed industry working groups defined the BRAS in the same way the 3GPP and the cellular industry working groups defined the 5G Core.

I don't foresee large-scale replacement of BRAS by 5GC, for the same reason a wireless operator won't replace their mobile core with a BRAS and PPPoE – they're designed to meet different needs.

All the other features that have been added to the 3GPP Core Network functionality, like limiting speed, guaranteed throughput bearers, 5QI / QCI values, etc, are addons – nice-to-haves. All of these capabilities could be implemented in wireline networks today – if the business case and customer demand was there.

But what about slicing?

With dropping ARPUs across the board, additional services relating to QoS (“Network Slicing”) are being held up as the saving grace of revenues for cellular operators and 5G as a whole, however this has yet to be realized and early indications suggest this is not going to be anywhere near as lucrative as previously hoped.

What about cost savings?

In terms of cost-per-bit of throughput, wireline operators' existing install base of heavy-metal kit capable of terabit switching and routing has been around for some time in the fixed world, and it's what most 5G Cores will connect to as their upstream anyway, so there are no significant savings on equipment, power consumption or footprint to be gained.

Fixed networks transport the majority of the world's data today – wireline access still accounts for the majority of traffic volumes, so wireline kit already handles far more throughput than its Packet Core / 5GC cousins.

Cutting down the number of parts in the network is good though right?

If you're operating both a packet core for cellular and a fixed network today, then you might think that if you moved from the traditional BRAS architecture for the wired network to 5GC, you could drop all those pesky routers and switches clogging up your COs, exchanges and data centers.

The problem is that you still need all of those downstream of the 5GC to get the traffic anywhere users want to go. So the 5GC will still need all of that kit: your border routers and peering routers will remain unchanged, as will domestic transmission, MPLS and transport.

The set of parts required for operating a fixed network is actually pretty darn small in comparison to that of a 5GC.

TL;DR?

While cellular vendors would love to sell their 5GC platforms to fixed operators, the premise that fixed operators will replace existing BRAS architectures with 5GC is, in my view, as unlikely as 5GC being replaced by BRAS.

Inside a 32×32 MIMO Antenna

For the past few months I’ve had a Band 78 NR active antenna unit sitting next to my desk.

It’s a very cool bit of kit that doesn’t get enough love, but I thought I’d pop open the radome and take a peek inside.

Individual antenna elements

What I found very interesting is that it’s not all antennas in there!

… 29, 30, 31, 32. Yup. Checks out.

There are the expected number of antennas (I mean, if I opened it up and found 31 antennas I'd have been surprised), but they don't take up the whole volume of the unit – only about half.

AAU with Radome reinstalled

Well, after that strip show, back to sitting in my office until I need to test something 5G SA again…

Getting to know the PCRF for traffic Policy, Rules & Rating

Misunderstood, under appreciated and more capable than people give it credit for, is our PCRF.

But what does it do?

Most folks describe the PCRF in hand-wavy terms – "it does policy and charging" is the answer you'll get, but that doesn't really tell you anything.

So let's answer it in a way that hopefully makes some practical sense, starting with the acronym "PCRF" itself. It stands for Policy and Charging Rules Function, which is really two things in one – policy, and charging rules – so let's take a look at both.

Policy

In cellular world, as in law, policy is the rules.

For us, some examples of policy could be a "fair use policy" to limit customer usage to acceptable levels, but policy can also be promotional packages – services like "free Spotify", "voice call priority" or "unmetered access to Nick's Blog and maximum priority" packages that can be offered to customers.

All of these are examples of policy, and to make them work we need to target which subscribers and traffic we want to apply the policy to, and then apply the policy.

Charging Rules

Charging Rules are where the policy actually gets applied and the magic happens.

It’s where we take our policy and turn it into actionable stuff for the cellular world.

Let's take an example of "unmetered access to Nick's Blog and maximum priority" as something we want to offer in all our cellular plans, to provide access that doesn't come out of your regular usage, as well as provide QCI 4 (a guaranteed bit rate QCI) to this traffic.

To achieve this we need to do 3 things:

  • Profile the traffic going to this website (so we capture this traffic and not regular other internet traffic)
  • Charge it differently – So it’s not coming from the subscriber’s regular balance
  • Up the QoS (QCI) on this traffic to ensure it’s high priority compared to the other traffic on the network

So how do we do that?

Profiling Traffic

So the first step we need to take in providing free access to this website is to separate the traffic going to this website from the traffic that isn't.

Let’s imagine that this website is hosted on a single machine with the IP 1.2.3.4, and it serves traffic on TCP port 443. This is where IPFilterRules (aka TFTs or “Traffic Flow Templates”) and the Flow-Description AVP come into play. We’ve covered this in the past here, but let’s recap:

IPFilterRules are defined in the Diameter Base Protocol (IETF RFC 6733), which is where we can learn the basics of encoding them.

They take the format:

action dir proto from src to dst

The action is fairly simple: for all our Dedicated Bearer needs and the Flow-Description AVP, the action is going to be permit – we're not blocking here.

The direction (dir) in our case is either in or out, from the perspective of the UE.

Next up is the protocol number (proto), as defined by IANA, but chances are you’ll be using 17 (UDP) or 6 (TCP).

The from value is followed by an IP address with an optional subnet mask in CIDR format, for example from 10.45.0.0/16 would match everything in the 10.45.0.0/16 network.

Following from you can also specify the port you want the rule to apply to, or, a range of ports.

Like the from, the to is encoded in the same way, with either a single IP, or a subnet, and optional ports specified.

And that’s it!

So let’s create a rule that matches all traffic to our website hosted on 1.2.3.4 TCP port 443,

permit out 6 from 1.2.3.4 443 to any 1-65535
permit out 6 from any 1-65535 to 1.2.3.4 443

All this info gets put into the Flow-Information AVPs:

With the above, any traffic going to/from 1.2.3.4 on port 443 will match this rule (unless there's another rule with a higher precedence value).
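If it helps to see the encoding spelled out, here's a little Python sketch (purely illustrative – the helper function is my own) that assembles the two IPFilterRule strings above from their parts:

# Assemble an IPFilterRule string: action dir proto from src ports to dst ports
def ip_filter_rule(action, direction, proto, src, src_ports, dst, dst_ports):
    return f"{action} {direction} {proto} from {src} {src_ports} to {dst} {dst_ports}"

# Traffic from our server (1.2.3.4:443) and traffic towards it
print(ip_filter_rule("permit", "out", 6, "1.2.3.4", "443", "any", "1-65535"))
print(ip_filter_rule("permit", "out", 6, "any", "1-65535", "1.2.3.4", "443"))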

Charging Actions

So with our traffic profiled, the next question is what actions we're going to take. Well, there are two: we're going to provide unmetered access to the profiled traffic, and we're going to use QCI 4 for the traffic (because you'll need a guaranteed bit rate bearer to access it!).

Charging-Group for Profiled Traffic

To allow for Zero Rating for traffic matching this rule, we’ll need to use a different Rating Group.

Let’s imagine our default rating group for data is 10000, then any normal traffic going to the OCS will use rating group 10000, and the OCS will apply the specific rates and policies based on that.

Rating Groups are defined in the OCS, which dictates what rates get applied to each Rating Group.

For us, our default rating group will be charged at the normal rates, but we can define a rating group value of 4000, and set the OCS to provide unlimited traffic to any Credit-Control-Requests that come in with Rating Group 4000.

This is how operators provide services like “Unlimited Facebook” for example, a Charging Rule matches the traffic to Facebook based on TFTs, and then the Rating Group is set differently to the default rating group, and the OCS just allows all traffic on that rating group, regardless of how much is consumed.

Inside our Charging-Rule-Definition, we populate the Rating-Group AVP to define what Rating Group we’re going to use.

Setting QoS for Profiled Traffic

The QoS Description AVP defines which QoS parameters (QCI / ARP / Guaranteed & Maximum Bandwidth) should be applied to the traffic that matches the rules we just defined.

As mentioned at the start, we’ll use QCI 4 for this traffic, and allocate MBR/GBR values for this traffic.

Putting it Together – The Charging Rule

So with our TFTs defined to match the traffic, our Rating Group to charge the traffic and our QoS to apply to the traffic, we’re ready to put the whole thing together.

So here it is, our “Free NVN” rule:
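In text form, the structure looks roughly like this – a sketch only: the AVP names are the standard Gx ones, but the nesting is simplified and the bandwidth values are made-up example numbers rather than anything from the rule shown above:

# Rough, illustrative map of the grouped AVPs in our "Free NVN" charging rule
# (not a wire-format encoding; bandwidth values are example numbers only)
free_nvn_rule = {
    "Charging-Rule-Definition": {
        "Charging-Rule-Name": "Free_NVN",
        "Rating-Group": 4000,   # the zero-rated group our OCS allows unlimited usage on
        "Flow-Information": [
            {"Flow-Description": "permit out 6 from 1.2.3.4 443 to any 1-65535"},
            {"Flow-Description": "permit out 6 from any 1-65535 to 1.2.3.4 443"},
        ],
        "QoS-Information": {
            "QoS-Class-Identifier": 4,               # QCI 4 - GBR bearer
            "Max-Requested-Bandwidth-UL": 1000000,   # bits per second
            "Max-Requested-Bandwidth-DL": 1000000,
            "Guaranteed-Bitrate-UL": 500000,
            "Guaranteed-Bitrate-DL": 500000,
        },
        "Precedence": 100,
    }
}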

I’ve attached a PCAP of the flow to this post.

In our next post we’ll talk about how the PGW handles the installation of this rule.

Ericsson & Nokia RRU Power Connectors – Wiring and Tricks

Something that’s kind of great is that the current generation of Ericsson RRUs and Nokia RRUs, use the same power connector – The Amphenol “Amphe-OBTS” series connector.

Construction and wiring of these connectors is the same for both, and with one little trick, we can use the connector for both Ericsson and Nokia RRUs (Airscale and later).

This pin stops the connector from being "universal", but is easily removed.

The connectors are not quite universal: in order to use one with both vendors, you need to knock off a small pin on the connector. I'd suggest doing this before you assemble it – put the connector on its back, facing upwards, hit the pin with a screwdriver / chisel and it'll pop off with very little effort.

Assembling the connectors starts by working out the diameter of the grommet you need to fit your cable, the connector comes with the grommet for 9-14mm, but in the bag you’ll usually get grommets for 6-9mm cable and 14-18mm cable.

Grab the correct one for your cable diameter, and pop it into the black fingered cage ('gland adapter') shown in the bottom right of the below photo.

Grommets and gland adapter

Next we line all the parts up along the cable and screw it all together:

The end-cap is actually very useful for stopping the female end of the connector from spinning when you’re assembling the cable, so don’t throw it away!

The finished product

Diameter Routing Agents – Part 5 – AVP Transformations with FreeDiameter and Python in rt_pyform

In our last post we talked about why we’d want to perform Diameter AVP translations / rewriting on our Diameter Routing Agent.

Now let's look at how we can actually achieve this using the rt_pyform extension for FreeDiameter and some simple Python code.

Before we build we’ll need to make sure we have the python3-devel package (I’m using python3-devel-3.10) installed.

Then we'll build FreeDiameter with rt_pyform – this branch contains the rt_pyform extension already, or you can clone just the extension from this repo.

Now once FreeDiameter is installed we can load the extension in our freeDiameter.conf file:

LoadExtension = "rt_pyform.fdx" : "<Your config filename>.conf";

Next we’ll need to define our rt_pyform config, this is a super simple 3 line config file that specifies the path of what we’re doing:

DirectoryPath = "."        # Directory to search
ModuleName = "script"      # Name of python file. Note there is no .py extension
FunctionName = "transform" # Python function to call

The DirectoryPath directive specifies where we should search for the Python code, ModuleName is the name of the Python script, and lastly FunctionName is the name of the Python function that does the rewriting.

Now let’s write our Python function for the transformation.

The Python function must have the correct number of parameters, must return a string, and must use the name specified in the config.

The following is an example of a function that prints out all the values it receives:

def transform(appId, flags, cmdCode, HBH_ID, E2E_ID, AVP_Code, vendorID, value):
    print('[PYTHON]')
    print(f'|-> appId: {appId}')
    print(f'|-> flags: {hex(flags)}')
    print(f'|-> cmdCode: {cmdCode}')
    print(f'|-> HBH_ID: {hex(HBH_ID)}')
    print(f'|-> E2E_ID: {hex(E2E_ID)}')
    print(f'|-> AVP_Code: {AVP_Code}')
    print(f'|-> vendorID: {vendorID}')
    print(f'|-> value: {value}')
    
    return value

Note the order of the arguments and that return is of the same type as the AVP value (string).

We can expand upon this and add conditionals, let’s take a look at some more complex examples:

def transform(appId, flags, cmdCode, HBH_ID, E2E_ID, AVP_Code, vendorID, value):
    print('[PYTHON]')
    print(f'|-> appId: {appId}')
    print(f'|-> flags: {hex(flags)}')
    print(f'|-> cmdCode: {cmdCode}')
    print(f'|-> HBH_ID: {hex(HBH_ID)}')
    print(f'|-> E2E_ID: {hex(E2E_ID)}')
    print(f'|-> AVP_Code: {AVP_Code}')
    print(f'|-> vendorID: {vendorID}')
    print(f'|-> value: {value}')
    #IMSI Translation - if App ID = 16777251 and the AVP being evaluated is the Username
    if (int(appId) == 16777251) and int(AVP_Code) == 1:
        print("This is IMSI '" + str(value) + "' - Evaluating transformation")
        print("Original value: " + str(value))
        value = str(value[::-1]).zfill(15)
    #Return the (possibly modified) value back to freeDiameter
    return value

The above looks at whether the App ID is S6a (16777251) and the AVP being checked is AVP Code 1 (User-Name / IMSI), and if so reverses the username – so IMSI 1234567 becomes 7654321; the zfill is just to pad with leading 0s if required.
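As a quick sanity check, here's that same transformation run as plain Python outside of freeDiameter:

# Standalone demo of the reversal + zfill used in the transform above
value = "1234567"
print(str(value[::-1]).zfill(15))   # prints 000000007654321 - reversed, padded to 15 digits with leading 0s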

Now let’s do another one for a Realm Rewrite:

def transform(appId, flags, cmdCode, HBH_ID, E2E_ID, AVP_Code, vendorID, value):

    #Print Debug Info
    print('[PYTHON]')
    print(f'|-> appId: {appId}')
    print(f'|-> flags: {hex(flags)}')
    print(f'|-> cmdCode: {cmdCode}')
    print(f'|-> HBH_ID: {hex(HBH_ID)}')
    print(f'|-> E2E_ID: {hex(E2E_ID)}')
    print(f'|-> AVP_Code: {AVP_Code}')
    print(f'|-> vendorID: {vendorID}')
    print(f'|-> value: {value}')
    #Realm Translation
    if int(AVP_Code) == 283:
        print("This is Destination Realm '" + str(value) + "' - Evaluating transformation")
        if value == "epc.mnc001.mcc001.3gppnetwork.org":
            new_realm = "epc.mnc999.mcc999.3gppnetwork.org"
            print("translating from " + str(value) + " to " + str(new_realm))
            value = new_realm
        else:
            #If the Realm doesn't match the above conditions, then don't change anything
            print("No modification made to Realm as conditions not met")
        print("Updated Value: " + str(value))
    #Return the (possibly modified) value back to freeDiameter
    return value

In the above block, if the Realm is set to epc.mnc001.mcc001.3gppnetwork.org it is rewritten to epc.mnc999.mcc999.3gppnetwork.org. Hopefully this gives you a handle on the sorts of transformations we can do – we can translate any string-type AVP, which allows hostnames, realms, IMSIs, Sh-User-Data, Location-Info, etc, to be rewritten.

NB-IoT NIDD Basics

NB-IoT introduces support for Non-IP Data Delivery (NIDD), which is one of the cool features of NB-IoT that's gaining more widespread adoption.

Let’s take a deep dive into NIDD.

The case against IP for IoT

In the over 40 years since IP was standardized, we’ve shoehorned many things onto IP, but IP was never designed or optimized for low power, low throughput applications.

For the battery life of an IoT device to be measured in years, it has to be very selective about what power hungry operations it does. Transmitting data over the air is one of the most power-intensive operations an IoT device can perform, so we need to do everything we can to limit how much data is sent, and how frequently.

Use Case – NB-IoT Tap

Let’s imagine we’re launching an IoT tap that transmits information about water used, as part of our revolutionary new “Water as a Service” model (WaaS) which removes the capex for residents building their own water treatment plant in their homes, and instead allows dynamic scaling of waterloads as they move to our new opex model.

If I turn on the tap and use 12L of water, when I turn off the tap, our IoT tap encodes the usage onto a single byte and sends the usage information to our rain-cloud service provider.

So that we're not constantly changing the batteries in our taps, we need to send this one byte of data as efficiently as possible, to maximize battery life.

If we were to transport our data on TCP, well we’d need a 3 way handshake and several messages just to transmit the data we want to send.

Let’s see how our one byte of data would look if we transported it on TCP.

That sliver of blue in the diagram is our usage component, the rest is overhead used to get it there. Seems wasteful huh?

"Sure, TCP isn't great for this," you say, "you should use UDP!" But even if we moved from TCP to UDP, we've still got the IPv4 header and the UDP header wasting 28 bytes.

For efficiency’s sake (To keep our batteries lasting as long as possible) we want to send as few messages as possible, and where we do have to send messages, keep them very short, so IP is not a great fit here.

Enter NIDD – Non-IP Data Delivery.

Through NIDD we can just send the single hex byte, only be charged for the single hex byte, and only stay transmitting long enough to send this single byte of hex (plus the NB-IoT overheads / headers).

Compared to UDP transport, NIDD provides us a reduction of 28 bytes of overhead for each message, or a 96% reduction in message size, which translates to real power savings for our IoT device.
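To put rough numbers on that (just the header sizes discussed above, nothing network-specific):

# Rough overhead comparison for sending 1 byte of usage data
payload = 1                 # our single byte of water usage
ipv4_udp_headers = 20 + 8   # IPv4 header + UDP header = 28 bytes

udp_message = payload + ipv4_udp_headers             # 29 bytes before any layer 2 framing
reduction = 100 * ipv4_udp_headers / udp_message     # ~96.6% of the UDP message is headers

print(f"UDP/IP message: {udp_message} bytes, NIDD payload: {payload} byte")
print(f"Size reduction with NIDD: ~{reduction:.1f}%")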

In summary – the more sending your device has to do, the more battery it consumes.
So in a scenario where you're trying to maximize power efficiency to keep your battery-powered device running as long as possible, needing to transmit 28 bytes of wasted data to transport 1 byte of usable data is a real waste.

Delivering the Payload

NIDD traffic is transported as raw hex data end to end; this means for our 1 byte of water usage data, the device just sends the hex value to be transferred and it pops out the other end.

To support this we introduce a new network element called the SCEF – the Service Capability Exposure Function.

From a developer’s perspective, the SCEF is the gateway to our IoT devices. Through the RESTful API on the SCEF (T8 API), we can send and receive raw hex data to any of our IoT devices.

When one of our Water-as-a-Service Taps sends usage data as a hex byte, it’s the software talking on the T8 API to the SCEF that receives this data.

Data of course needs to be addressed, so we know where it’s coming from / going to, and T8 handles this, as well as message reliability, etc, etc.

This is a telco blog, so we should probably cover the MME connection, the MME talks via Diameter to the SCEF. In our next post we’ll go into these signaling flows in more detail.

If you’re wondering what the status of Open Source SCEF implementations are, then you may have already guessed I’m working on one!

Hopefully by now you’ve got a bit of an idea of how NIDD works in NB-IoT, and in our next posts we’ll dig deeper into the flows and look at some PCAPs together.

Diameter Routing Agents – Part 5 – AVP Transformations

Having a central pair of Diameter routing agents allows us to drastically simplify our network, but what if we want to perform some translations on AVPs?

For starters, what is an AVP transformation? Well it’s simply rewriting the value of an AVP as the Diameter Request/Response passes through the DRA. A request may come into the DRA with IMSI xxxxxx and leave with IMSI yyyyyy if a translation is applied.

So why would we want to do this?

Well, what if we purchased another operator who uses Realm X, while we use Realm Y, and we want to link the two networks? Then we'd need to rewrite Realm Y to Realm X, and Realm X to Realm Y, as they communicate – AVP transformations allow for this.

If we’re an MVNO with hosted IMSIs from an MNO, but want to keep just the one IMSI in our HSS/OCS, we can translate from the MNO hosted IMSI to our internal IMSI, using AVP transformations.

If our OCS supports only one rating group, and we want to rewrite all rating groups to that one value, AVP transformations cover this too.

There are lots of uses for this, and if you’ve worked with a bit of signaling before you’ll know that quite often these sorts of use-cases come up.

So how do we do this with freeDiameter?

To handle this I developed a module for passing each AVP to a Python function, which can then apply any transformation to a text based value, using every tool available to you in Python.

In the next post I’ll introduce rt_pyform and how we can use it with Python to translate Diameter AVPs.

Diameter Routing Agents – Part 4 – Advanced FreeDiameter DRA Routing

Way back in part 2 we discussed the basic routing logic a DRA handles, but what if we want to do something a bit outside of the box in terms of how we route?

For me, one of the most useful use cases for a DRA is to route traffic based on IMSI / Username.
This means I can route all the traffic for MVNO X to MVNO X's HSS, or route staging / test subs to the test HSS environment.

FreeDiameter has a bunch of built in logic that handles routing based on a weight, but we can override this, using the rt_default module.

In our last post we had this module commented out, but let’s uncomment it and start playing with it:

#Basic Diameter config for this box
Identity = "dra.mnc001.mcc001.3gppnetwork.org";
Realm = "mnc001.mcc001.3gppnetwork.org";
Port = 3868;

LoadExtension = "dbg_msg_dumps.fdx" : "0x8888";
LoadExtension = "rt_redirect.fdx":"0x0080";
LoadExtension = "rt_default.fdx":"rt_default.conf";

TLS_Cred = "/etc/freeDiameter/cert.pem", "/etc/freeDiameter/privkey.pem";
TLS_CA = "/etc/freeDiameter/cert.pem";
TLS_DH_File = "/etc/freeDiameter/dh.pem";

ConnectPeer = "mme01.mnc001.mcc001.3gppnetwork.org" { ConnectTo = "10.98.0.10"; No_TLS; };
ConnectPeer = "hss01" { ConnectTo = "10.0.1.252"; No_TLS; Port = 3868; Realm = "mnc001.mcc001.3gppnetwork.org";};
ConnectPeer = "hss02" { ConnectTo = "10.0.1.253"; No_TLS; Port = 3868; Realm = "mnc001.mcc001.3gppnetwork.org";};
ConnectPeer = "hss-mvno-x" { ConnectTo = "10.98.0.22"; No_TLS; Port = 3868; Realm = "mnc001.mcc001.3gppnetwork.org";};
ConnectPeer = "hss-lab" { ConnectTo = "10.0.2.2"; No_TLS; Port = 3868; Realm = "mnc001.mcc001.3gppnetwork.org";};

In the above code we’ve uncommented rt_default and rt_redirect.

You’ll notice that rt_default references a config file, so we’ll create a new file in our /etc/freeDiameter directory called rt_default.conf, and this is where the magic will happen.

A few points before we get started:

  • This overrides the default routing priorities, but in order for a peer to be selected, it has to be in an Open (active) state
  • The peer still has to have advertised support for the requested application in the CER/CEA dialog
  • The peers will still need to have all been defined in the freeDiameter.conf file in order to be selected

So with that in mind, and the 5 peers we have defined in our config above (assuming all are connected), let’s look at some rules we can setup using rt_default.

Intro to rt_default Rules

The rt_default.conf file contains a list of rules; each rule has criteria that, if matched, result in the specified action being taken. The actions all revolve around how to route the traffic.

So what can these criteria match on?
Here’s the options:

  • Any: *
  • Origin-Host: oh="STR/REG"
  • Origin-Realm: or="STR/REG"
  • Destination-Host: dh="STR/REG"
  • Destination-Realm: dr="STR/REG"
  • User-Name: un="STR/REG"
  • Session-Id: si="STR/REG"
rt_default Matching Criteria

We can either match based on a string or a regex, for example, if we want to match anything where the Destination-Realm is “mnc001.mcc001.3gppnetwork.org” we’d use something like:

#Low score to HSS02
dr="mnc001.mcc001.3gppnetwork.org" : dh="hss02" += -70 ;

Now you’ll notice there is some stuff after this, let’s look at that.

We're matching anything where the Destination-Realm is set to mnc001.mcc001.3gppnetwork.org (that's the bit before the colon), but what's the bit after that?

Well, if we imagine that all our Diameter peers are up, when a message comes in with Destination-Realm "mnc001.mcc001.3gppnetwork.org" looking for an HSS, then in our example setup we have 4 HSS instances to choose from (assuming they're all online).

In default Diameter routing, all of these peers are in the same realm, and as they’re all HSS instances, they all support the same applications – Our request could go to any of them.

But what we set in the above example is simply the following:

If the Destination-Realm is set to mnc001.mcc001.3gppnetwork.org, then set the priority for routing to hss02 to the lowest possible value.

So that leaves the 3 other Diameter peers with a higher score than HSS02, so HSS02 won’t be used.

Let’s steer this a little more,

Let’s specify that we want to use HSS01 to handle all the requests (if it’s available), we can do that by adding a rule like this:

#Low score to HSS02
dr="mnc001.mcc001.3gppnetwork.org" : dh="hss02" += -70 ;
#High score to HSS01
dr="mnc001.mcc001.3gppnetwork.org" : dh="hss01" += 100 ;

But what if we want to route to hss-lab if the IMSI matches a specific value? Well, we can do that too.

#Low score to HSS02
dr="mnc001.mcc001.3gppnetwork.org" : dh="hss02" += -70 ;
#High score to HSS01
dr="mnc001.mcc001.3gppnetwork.org" : dh="hss01" += 100 ;
#Route traffic for IMSI to Lab HSS
un="001019999999999999" : dh="hss-lab" += 200 ;

Now that we’ve set an entry with a higher score than hss01 that will be matched if the username (IMSI) equals 001019999999999999, the traffic will get routed to hss-lab.

But that’s a whole IMSI, what if we want to match only part of a field?

Well, we can use regex in the Criteria as well, so let’s look at using some Regex, let’s say for example all our MVNO SIMs start with 001012xxxxxxx, let’s setup a rule to match that, and route to the MVNO HSS with a higher priority than our normal HSS:

#Low score to HSS02
dr="mnc001.mcc001.3gppnetwork.org" : dh="hss02" += -70 ;
#High score to HSS01
dr="mnc001.mcc001.3gppnetwork.org" : dh="hss01" += 100 ;
#Route traffic for IMSI to Lab HSS
un="001019999999999999" : dh="hss-lab" += 200 ;
#Route traffic where IMSI starts with 001012 to MVNO HSS
un=["^001012.*"] : dh="hss-mvno-x" += 200 ;

Let's imagine that down the line we introduce HSS03 and HSS04, and we only want to use HSS01 if HSS03 and HSS04 are unavailable, only use HSS02 if no other HSSes are available, and we want to split the traffic 50/50 across HSS03 and HSS04.

Firstly we’d need to add HSS03 and HSS04 to our FreeDiameter.conf file:

...
ConnectPeer = "hss02" { ConnectTo = "10.0.1.253"; No_TLS; Port = 3868; Realm = "mnc001.mcc001.3gppnetwork.org";};
ConnectPeer = "hss03" { ConnectTo = "10.0.3.3"; No_TLS; Port = 3868; Realm = "mnc001.mcc001.3gppnetwork.org";};
ConnectPeer = "hss04" { ConnectTo = "10.0.4.4"; No_TLS; Port = 3868; Realm = "mnc001.mcc001.3gppnetwork.org";};
...

Then in our rt_default.conf we’d need to tweak our scores again:

#Low score to HSS02
dr="mnc001.mcc001.3gppnetwork.org" : dh="hss02" += 10 ;
#Medium score to HSS01
dr="mnc001.mcc001.3gppnetwork.org" : dh="hss01" += 20 ;
#Route traffic for IMSI to Lab HSS
un="001019999999999999" : dh="hss-lab" += 200 ;
#Route traffic where IMSI starts with 001012 to MVNO HSS
un=["^001012.*"] : dh="hss-mvno-x" += 200 ;
#High Score for HSS03 and HSS04
dr="mnc001.mcc001.3gppnetwork.org" : dh="hss02" += 100 ;
dr="mnc001.mcc001.3gppnetwork.org" : dh="hss04" += 100 ;

One quick tip to keep your logic a bit simpler, is that we can set a variety of different values based on keywords (listed below) rather than on a weight/score:

  • NO_DELIVERY (score -70) – Do not deliver to peer (set lowest priority)
  • DEFAULT (score 5) – The peer is a default route for all messages
  • DEFAULT_REALM (score 10) – The peer is a default route for this realm
  • REALM (score 15)
  • FINALDEST (score 100) – Route to the specified Host with highest priority
Rather than manually specifying the score, you can use keywords like the above to set the value.

In our next post we’ll look at using FreeDiameter based DRA in roaming scenarios where we route messages across Diameter Realms.

Diameter Routing Agents – Part 3 – Building a DRA with FreeDiameter

I’ve covered the basics of Diameter Routing Agents (DRAs) in the past, and even shared an unstable DRA built using Kamailio, but today I thought I’d cover building something a little more “production ready”.

FreeDiameter has been around for a while, and we’ve covered configuring the FreeDiameter components in Open5GS when it comes to the S6a interface, so you may have already come across FreeDiameter in the past, but been left a bit baffled as to how to get it to actually do something.

FreeDiameter is a FOSS implementation of the Diameter protocol stack, and is predominantly used as a building block for developers to build Diameter applications on top of.

But for our scenario, we’ll just be using plain FreeDiameter.

So let’s get into it,

You’ll need FreeDiameter installed, and you’ll need a certificate for your FreeDiameter instance, more on that in this post.

Once that’s setup we’ll need to define some basics,

Inside freeDiameter.conf we’ll need to include the identity of our DRA, load the extensions and reference the certificate files:

#Basic Diameter config for this box
Identity = "dra.mnc001.mcc001.3gppnetwork.org";
Realm = "mnc001.mcc001.3gppnetwork.org";
Port = 3868;

LoadExtension = "dbg_msg_dumps.fdx" : "0x8888";
#LoadExtension = "rt_redirect.fdx":"0x0080";
#LoadExtension = "rt_default.fdx":"rt_default.conf";

TLS_Cred = "/etc/freeDiameter/cert.pem", "/etc/freeDiameter/privkey.pem";
TLS_CA = "/etc/freeDiameter/cert.pem";
TLS_DH_File = "/etc/freeDiameter/dh.pem";

Next up we’ll need to define the Diameter peers we’ll be routing between.

We’ll need an entry here for each connection / peer / host:

ConnectPeer = "mme01.mnc001.mcc001.3gppnetwork.org" { ConnectTo = "10.98.0.10"; No_TLS; };
ConnectPeer = "hss01" { ConnectTo = "10.0.1.252"; No_TLS; Port = 3868; Realm = "mnc001.mcc001.3gppnetwork.org";};

And we’ll configure the HSS and MME defined in those ConnectPeer entries to connect to / accept connections from dra.mnc001.mcc001.3gppnetwork.org.
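
On the MME side, that just means its freeDiameter peer entry points at the DRA rather than at an HSS directly – something along these lines (the IP here is just an example from my lab):

#On the MME – peer with the DRA instead of each HSS
ConnectPeer = "dra.mnc001.mcc001.3gppnetwork.org" { ConnectTo = "10.98.0.1"; No_TLS; Port = 3868; Realm = "mnc001.mcc001.3gppnetwork.org";};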

Now if we start freeDiameter, we can start routing between the hosts – no extra routing config needed.

If we define another HSS in the ConnectPeers, any S6a requests from the MME may get routed to that as well (50/50 split).
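
For example, adding a second HSS is just another ConnectPeer entry on the DRA, in the same format as before:

ConnectPeer = "hss02" { ConnectTo = "10.0.1.253"; No_TLS; Port = 3868; Realm = "mnc001.mcc001.3gppnetwork.org";};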

In our next post, we’ll look at using the rt_default extension to control how we route and look at some more advanced use cases.

Diameter Routing Agents (Why you need them, and how to build them) – Part 2 – Routing

What I typically refer to as Diameter interfaces / reference points, such as S6a, Sh, Sx, Sy, Gx, Gy, Zh, etc, etc, are also known as Applications.

Diameter Application Support

If you look inside the Capabilities Exchange Request / Answer dialog, what you’ll see is each side advertising the Applications (interfaces) that they support, each one being identified by an Application ID.

CER showing support for the 3GPP Zh Application-ID (Interface)

If two peers share a common Application-Id, then they can communicate using that Application / Interface.

For example, the above screenshot shows a peer with support for the Zh Interface (Spoiler alert, XCAP Gateway / BSF coming soon!). If two Diameter peers both have support for the Zh interface, then they can use that to send requests / responses to each other.

This is the basis of Diameter Routing.

Diameter Routing Tables

Like any router, our DRA needs to have logic to select which peer to route each message to.

For each Diameter connection to our DRA, it builds up its Diameter Routing table with information on that peer, including the realm and the applications it advertises support for.

Then, based on the logic defined in the DRA, it selects which Diameter peer to route each request to.

In its simplest form, Diameter routing is based on a few things:

  1. Look at the DestinationRealm, and see if we have any peers at that realm
  2. If we do, then look at the DestinationHost – if that’s set, the host is connected, and it supports the specified Application-Id, then route it to that host
  3. If no DestinationHost is specified, look at the peers we have available and find the one that supports the specified Application-Id, then route it to that host
Simplified Diameter Routing Table used by DRAs
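
To make that selection logic concrete, here’s a rough sketch of it in Python – purely illustrative, not how FreeDiameter actually implements it, and the peers and Application-Ids are just examples:

import random

# Illustrative routing table: each connected peer advertises a realm and the
# Application-Ids it supports (16777251 = S6a)
PEERS = {
    "hss01": {"realm": "mnc001.mcc001.3gppnetwork.org", "apps": {16777251}},
    "hss02": {"realm": "mnc001.mcc001.3gppnetwork.org", "apps": {16777251}},
    "mme01": {"realm": "mnc001.mcc001.3gppnetwork.org", "apps": {16777251}},
}

def select_peer(dest_realm, app_id, dest_host=None, origin_host=None):
    # 1. Find connected peers in the Destination-Realm (excluding the sender)
    candidates = [name for name, p in PEERS.items()
                  if p["realm"] == dest_realm and name != origin_host]
    # 2. If a Destination-Host is set, connected and supports the app, use it
    if dest_host in candidates and app_id in PEERS[dest_host]["apps"]:
        return dest_host
    # 3. Otherwise pick any remaining peer that supports the Application-Id
    capable = [name for name in candidates if app_id in PEERS[name]["apps"]]
    return random.choice(capable) if capable else None

# An S6a request from mme01 with no Destination-Host lands on hss01 or hss02
print(select_peer("mnc001.mcc001.3gppnetwork.org", 16777251, origin_host="mme01"))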

With this in mind, we can go back to looking at how our DRA may route a request from a connected MME towards an HSS.

Let’s look at some examples of this at play.

The request from MME02 is for DestinationRealm mnc001.mcc001.3gppnetwork.org, a realm in which our DRA has 4 connected peers (3 if we exclude the source of the request, as we don’t want to route it back to itself of course).

So we have 3 contenders still for who could get the request, but wait! We have a DestinationHost specified, so the DRA confirms the host is available, and that it supports the requested ApplicationId and routes it to HSS02.

So just because we are going through a DRA does not mean we can’t specify which destination host we need, just like we would if we had a direct link between each Diameter peer.

Conversely, if we sent another S6a request from MME01 but with no DestinationHost set, let’s see how that would look.

Again, the request from MME01 is for DestinationRealm mnc001.mcc001.3gppnetwork.org, for which our DRA has 3 other peers it could route to. But only two of those peers support the S6a Application, so requests would be split evenly between those two.

Clever Routing with DRAs

So with our DRA in place we can simplify the network – we don’t need to build peer links from every Diameter device to every other device. But let’s look at some other ways DRAs can help us.

Load Control

We may want to always send requests to HSS01 and only use HSS02 if HSS01 is not available – we can do this with a DRA.

Or we may want to split load 75% on one HSS and 25% on the other.

Both are great use cases for a DRA.
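
As a quick sketch, the failover case can be expressed with the rt_default scores we cover elsewhere in this series:

#Prefer HSS01 for this realm
dr="mnc001.mcc001.3gppnetwork.org" : dh="hss01" += 100 ;
#Only fall back to HSS02 if HSS01 is unavailable
dr="mnc001.mcc001.3gppnetwork.org" : dh="hss02" += 10 ;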

Routing based on Username

We may want to route requests in the DRA based on other factors, such as the IMSI.

Our IMSIs may start with 001010001xxx, but if we introduced an MVNO with IMSIs starting with 001010002xxx, we’d need to route all traffic for home network IMSIs to the home network HSS, and all the MVNO IMSI traffic to the MVNO’s HSS – DRAs handle this.
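
In rt_default terms that might look something like this (hss-mvno here is a made-up peer name):

#MVNO IMSIs (starting 001010002) go to the MVNO's HSS
un=["^001010002.*"] : dh="hss-mvno" += 200 ;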

Inter-Realm Routing

One of the main use cases you’ll see for DRAs is in Roaming scenarios.

For example, if we have a roaming agreement with an operator whose IMSIs start with 90170, we can route all the traffic for their subs towards their HSS.

But wait, their Realm will be mnc070.mcc901.3gppnetwork.org, so in that scenario we’ll need to add a rule that routes the request to a different realm.

DRAs handle this also.
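
A rough sketch of such a rule, again in rt_default syntax (the peer name for the roaming partner’s Diameter edge is made up here):

#Anything addressed to the roaming partner's realm heads out via their edge peer
dr="mnc070.mcc901.3gppnetwork.org" : dh="dea01.mnc070.mcc901.3gppnetwork.org" += 100 ;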

In our next post we’ll start actually setting up a DRA with a default route table, and then look at some more advanced options for Diameter routing like we’ve just discussed.

One slight caveat: mutual support does not always mean what you may expect.
For example an MME and an HSS both support S6a, which is identified by Auth-Application-Id 16777251 (Vendor ID 10415), but one is a client and one is a server.
Keep this in mind!

Diameter Routing Agents (Why you need them, and how to build them) – Part 1

Answer to Question 1 (why you need them): Because they make things simpler and more flexible for your Diameter traffic.
Answer to Question 2 (how to build them): With free software, of course!

All about DRAs

But let’s dive a little deeper. Let’s look at the connection between an MME and an HSS (the S6a interface).

Direct Diameter link between two Diameter Peers

We configure the Diameter peers on MME01 and HSS01 so they know about each other and how to communicate, the link comes up, and presto – away we go.

But we’re building networks here! N+1 redundancy and all that, so now we have two HSSes and two MMEs.

Direct Diameter link between 4 Diameter peers

Okay, bit messy, but that’s okay…

But then our network grows to 10 MMEs and 3 HSSes, and you can probably see where this is going – but let’s drive the point home.

Direct Diameter connections for a network with 10x MME and 3x HSS

Now imagine once you’ve set all this up you need to do some maintenance work on HSS03, so you need to shut down the Diameter peer on 10 different MMEs to isolate it, and then bring them all back up again afterwards.

The problem here is pretty evident, all those links are messy, cumbersome and they just don’t scale.

If you’re someone with a bit of networking experience (and let’s face it, you’re here after all), then you’re probably thinking “What if we just had a central system to route all the Diameter messages?”

An Agent that could Route Diameter, a Diameter Routing Agent perhaps…

By introducing a DRA we build Diameter peer links between each of our Diameter devices (MME / HSS, etc) and the DRA, rather than directly between each peer.

Then from the DRA we can route Diameter requests and responses between them.

Let’s go back to our 10x MME and 3x HSS network and see how it looks with a DRA instead.

So much cleaner!

Not only does this look better, but it makes our life operating the network a whole lot easier.

Each MME sends its S6a traffic to the DRA, which finds a healthy HSS out of the 3, forwards the requests to it, and relays the responses back as well.

We can do clever load balancing now as well.

Plus if a peer goes down, the DRA detects the failure and just routes to one of the others.

If we were to introduce a new HSS, we wouldn’t need to configure anything on the MMEs, just add HSS04 to the DRA and it’ll start getting traffic.

Plus from an operations standpoint, now if we want to take an HSS offline for maintenance, we just shut down the link on that HSS and all HSS traffic will get routed to the other two HSS instances.

In our next post we’ll talk about the Routing part of the DRA, how the decisions are made and all the nuances, and then in the following post we’ll actually build a DRA and start routing some traffic around!

Filtering for 3GPP DNS in Wireshark

If you work with IMS or Packet Core, there’s a good chance you need DNS to work, and it doesn’t always.

When I run traces, I’ve always found I get swamped with DNS traffic – UE traffic, OS monitoring, updates, etc, all combine into a big firehose. While my Wireshark filter for finding EPC and IMS traffic is pretty good, my Achilles heel has always been filtering the DNS traffic down to just the queries and responses I want.

Well, today I made that a bit better.

By adding this to your Wireshark filter:

dns contains 33:67:70:70:6e:65:74:77:6f:72:6b:03:6f:72:67:00

You’ll only see DNS Queries and Responses for domains under 3gppnetwork.org.

This makes my traces much easier to read, and hopefully will do the same for you!
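
The hex string is just the DNS label encoding of the domain (each label’s length byte followed by its ASCII bytes, ending with 0x00 for the root), so if you want the same style of filter for a different suffix, here’s a little Python helper to generate it – my own sketch, the function name is made up:

def dns_contains(domain: str) -> str:
    """Build a Wireshark 'dns contains' hex string for a domain suffix."""
    out = bytearray()
    for i, label in enumerate(domain.split(".")):
        if i > 0:
            out.append(len(label))    # length byte before each label...
        out += label.encode("ascii")  # ...skipped for the first label, as in the filter above
    out.append(0x00)                  # the root label terminates the name
    return "dns contains " + ":".join(f"{b:02x}" for b in out)

print(dns_contains("3gppnetwork.org"))
# dns contains 33:67:70:70:6e:65:74:77:6f:72:6b:03:6f:72:67:00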

Bonus, here’s my current Wireshark filter for working EPC/IMS:

(diameter and diameter.cmd.code != 280) or  (sip and !(sip.Method == "OPTIONS") and !(sip.CSeq.method == "OPTIONS")) or (smpp and (smpp.command_id != 0x00000015 and smpp.command_id != 0x80000015)) or (mgcp and !(mgcp.req.verb == "AUEP") and !(mgcp.rsp.rspcode == 500)) or isup or sccp or rtpevent or s1ap or gtpv2 or pfcp or (dns contains 33:67:70:70:6e:65:74:77:6f:72:6b:03:6f:72:67:00)

FreeDiameter – Generating Certificates

Even if you’re not using TLS in your FreeDiameter instance, you’ll still need a certificate in order to start the stack.

Luckily, creating a self-signed certificate is pretty simple,

Firstly we generate a private key and public certificate for our required domain – in the below example I’m using dra01.epc.mnc001.mcc001.3gppnetwork.org, but you’ll need to replace that with the domain name of your freeDiameter instance.

openssl req -new -batch -x509 -days 3650 -nodes     \
   -newkey rsa:1024 -out /etc/freeDiameter/cert.pem -keyout /etc/freeDiameter/privkey.pem \
   -subj /CN=dra01.epc.mnc001.mcc001.3gppnetwork.org

Next we generate a Diffie-Hellman parameter set using OpenSSL:

openssl dhparam -out /etc/freeDiameter/dh.pem 1024 
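
If you want to double check what you’ve just generated, OpenSSL can show the subject and validity dates of the certificate:

openssl x509 -in /etc/freeDiameter/cert.pem -noout -subject -dates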

Lastly we’ll put all this config into the freeDiameter config file:

TLS_Cred = "/etc/freeDiameter/cert.pem", "/etc/freeDiameter/privkey.pem";
TLS_CA = "/etc/freeDiameter/cert.pem";
TLS_DH_File = "/etc/freeDiameter/dh.pem";

If you’re using freeDiameter as part of another software stack (such as Open5GS), the below filenames contain the freeDiameter config for that particular component of the stack:

  • freeDiameter.conf – Vanilla freeDiameter
  • mme.conf – Open5Gs MME
  • pcrf.conf – Open5Gs PCRF
  • smf.conf – Open5Gs SMF / P-GW-C
  • hss.conf – Open5Gs HSS

Testing Mobile Networks with Remote Test Phones

I build phone networks, and unfortunately, I’m not able to be everywhere at once.

This means sometimes I have to test things in networks I may not be within the coverage of.

To get around this, I’ve setup something pretty simple, but also pretty powerful – Remote test phones.

Using a Raspberry Pi, Intel NUC, or any old computer, I’m able to remotely control Android handsets out in the field, in the coverage footprint of whatever network I need.

This means I can make test calls, run speed tests and take signal strength measurements on real phones out in the network, without leaving my office.

Base OS

Because of some particularities with Wayland and X11, for this I’d steer clear of Ubuntu distributions, and suggest using Debian if you’re using x86 hardware, and Raspbian if you’re using a Pi.

Setup Android Debug Bridge (adb)

The base of this whole system is ADB, the Android Debug Bridge, which exposes the ability to remotely control an Android phone over USB.

You can also do this over WiFi, but I find for device testing, wired allows me to airplane mode a device or disable data, which I can’t do if the device is connected to ADB via WiFi.

There’s a lot of info online about setting up the Android Debug Bridge on your device, unlocking the Developer Mode settings, etc – if you’ve not done this before I’ll just refer you to the official docs.

Before we plug in the phones we’ll need to setup the software on our remote testing machine, which is simple enough:

sudo apt install android-tools-adb
sudo apt install android-tools-fastboot

Now we can plug in each of the remote phones we want to use for testing and run the command "adb devices", which should list the phones connected to the machine with ADB enabled:

adb devices
List of devices attached
ABCDEFGHIJK	unauthorized
LMNOPQRSTUV	unauthorized

You’ll get a popup on each device asking if you want to allow USB debugging – If this is going to be a set-and-forget deployment, make sure you tick “Always allow from this Computer” so you don’t have to drive out and repeat this step, and away you go.

How to Access Developer Options and Enable USB Debugging on Android

Lastly, we can run adb devices again to confirm everything is in the connected state.

Scrcpy

scrcpy is an open-source screen mirroring / remote control tool that allows us to control Android devices from a computer.

In our case we’re going to install with Snap (if you hate snaps as many folks do, you can also compile from source):

snap install scrcpy

Remote Access

If you’re a regular Linux user, the last bit is the easiest.

We’re just going to use SSH to access the Linux machine, but with X11 forwarding.

If you’ve not come across X11 forwarding before, from a Linux machine just add the -X option to your SSH command – for example, from my laptop I run:

nick@oldfaithful:~$ ssh nick@10.0.1.4 -X

Where 10.0.1.4 is the remote tester device.

After SSHing into the box, we can just run scrcpy and boom, there’s the window we can interact with.

If you’ve got multiple phones connected to the same machine, you’ll need to specify the ADB device ID – and of course, you can have multiple sessions open at the same time.

scrcpy -s 61771fe5

That’s it, as simple as that.

Tweaking

A few settings you may need to set:

I like to enable the "Show taps" option so I can see where my mouse is on the touchscreen and what I’ve tapped – it also makes it much easier for anyone watching a recording of the screen to follow along.

You’ll probably also want to disable the lock screen and keep the screen awake.
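
Depending on your scrcpy version, some of this can also be set from the scrcpy side when you launch it – check scrcpy --help for what your build supports:

scrcpy -s 61771fe5 --show-touches --stay-awake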

Some OEMs have an additional tick box if you want to be able to interact with the device (rather than just view the screen), which often requires signing into an account – if you see this toggle, you’ll need to turn it on:

Ansible Playbook

I’ve had to build a few of these, so I’ve put an Ansible Playbook on Github so you can create your own.

You can grab it from here.