Tag Archives: 5G

Uncomfortable Questions to ask about 5G Standalone at MWC – Part 2 – Has this Cash cow got Milk?

This is the second post of 3 presenting the argument against introducing 5G-SA.

There’s an old adage that businesses spend money for one of three reasons:

  • To Save Money (Which I covered yesterday)
  • To make more Money (This post, congratulations, you’re reading it!)
  • Because they have to (Regulatory compliance, insurance, taxes, etc) – That’s the next post

So let’s look at SA in this context.

5G-SA can drive new revenue streams

We (as an industry) suck at this.

Last year on the Telecoms.com podcast, Scott Bicheno made the point that if operators took all the money they’d gambled (and lost) on trying to play in sports rights, involvement in media companies, building their own streaming apps, attempts at bundling other utilities, digital identity, etc, and had instead left the cash in the bank and just operated the network, they’d be better off.

Uber, Spotify, “OTTs”, etc, utilize MNOs to enable their services, but operators don’t see this extra revenue.
While some operators may talk of “fair share” the truth is, these companies add value to our product (connectivity) which as an industry, we’ve failed to add ourselves.

Last year at MWC we saw vendors were still beating the drum about 5G being critical for the “Metaverse”, just weeks before Meta announced they were moving away from the Metaverse.

Today the only device getting any attention from consumers is Apple’s Vision Pro, a very pricey, currently niche offering, which has no SIM card or cellular connectivity.

If the Metaverse does turn out to be a cash cow, it is unlikely the telecommunications industry will be the ones milking it.

Claim: Customers are willing to pay more for 5G-SA

This myth seems to be fairly persistent, but there is minimal data to support the claim.

While BSS vendors talk about “5G Monetization”, the truth is, people use their MNO to provide them connectivity. If the coverage is adequate, and the speed enough to do what they need to do, few would be willing to pay additional cash each month to see higher numbers on a speedtest result (enabled by 5G-NSA), and even fewer would pay extra for, well, whatever those features only enabled by 5G-Standalone turn out to be.

With most consumers now holding onto their mobile devices for longer, and with interest rates reining in consumer spending across the board, we are seeing the rise of a more cost-conscious consumer than ever before. If we want to see higher ARPUs, we need to give the consumer a compelling reason to care and spend their cash, beyond a speed test result.

We talk a little about APIs lower down in the post.

Claim: Users want Ultra-Low Latency / High Reliability Comms that only 5G-SA delivers

Wanting to offer a product to the market is not the same as the market wanting a product to consume.

Telecom operators want customers to want these services, but customer take-up rates tell a different story. For a product like this to be viable, it must have a wide enough addressable market to justify the investment.

Reliability

The URLLC standards focus on preventing packet loss, but the world has moved on from needing zero packet loss.

The telecom industry has a habit of deciding what customers want without actually listening.
When a customer talks about wanting “reliable” comms, they aren’t saying they want zero packet loss, but rather fewer dropouts or service flaps.
For us to give the customer what they are actually asking for involves us expanding RAN footprint and adding transmission diversity, not 5G-SA.

The “protocols of the internet” (TCP/IP) have been around for more than 50 years now.

These protocols have always flowed over transport links with varied reliability and levels of packet loss.

Thanks to the error correction and retransmission techniques built into these protocols, a lost packet will not interrupt the stream. If your nuclear command and control network were carried over TCP/IP over the public internet (please don’t do this), a missing packet wouldn’t lead to worldwide annihilation; the sender would see the receiver never acknowledged receipt of the packet, and resend it. End of.

If you walk into a hospital today, you’ll find patient monitoring devices, tracking the vital signs for patients and alerting hospital staff if a patient’s vital signs change. It is hard to think of more important services for reliability than this.

And yet they use WiFi, and have done for a long time. If a packet is lost on WiFi (as happens regularly), it’s just retransmitted, and the end user never knows.

Autonomous cars are unlikely to ever rely on a 5G connection to operate, for the simple reason that coverage will never be 100%. If your car stops because you’re in a not-spot, you won’t be a happy customer. Plenty of cars have cellular modems in them, but they are used to upload telemetry data back to the manufacturer, not to drive the car.

One example of wireless controlled vehicles in the wild is autonomous haul trucks in mines. Historically, these have used WiFi for their comms. Mine sites are often a good fit for Private LTE, but there’s nothing inherent in the 5G Standalone standard that means it’s the only tool for the job here.

Slicing

Slicing is available in LTE (4G), with an architecture designed to allow others access to slices of the network. It failed to gain traction, but it is in networks today.

See: Pre-5G Network Slicing.

What is different this time?

Low Latency

The RAN is a piece of the latency puzzle here, but it is just one piece of the puzzle.

If we look at the flow a packet takes from the user’s device to the server they want to talk to we’ve got:

  1. Time it takes the UE to craft the packet
  2. Time it takes for the packet to be transmitted over the air to the base station
  3. Time it takes for the packet to get through the RAN transmission network to the core
  4. Time it takes the packet to traverse the packet core
  5. Time it takes for the packet to get out to transit/peering
  6. Time it takes to get the packet from the edge of the operators network to the edge of the network hosting the server
  7. Time it takes the packet through the network the server is on
  8. Time it takes the server to process the request

The “low latency” bit of the 5G puzzle only involves two of those eight steps – the radio-side ones (2 and 3).

If you’ve got to get from point A to point B along a series of roads, and the speed limit on two of the roads you traverse (short sections already) is increased, the overall travel time is not drastically reduced.
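
To put rough numbers on that, here’s a back-of-the-envelope sketch in Python. Every figure below is an illustrative assumption (not a measurement), but it shows why halving only the radio-side legs barely moves the end-to-end total:

    # Illustrative end-to-end latency budget, in milliseconds.
    # All values are assumptions for the sake of the arithmetic.
    legs_ms = {
        "1. UE crafts packet": 2.0,
        "2. Air interface (UE -> base station)": 10.0,  # improved by 5G NR
        "3. RAN transmission network to core": 5.0,     # improved by 5G NR
        "4. Packet core traversal": 2.0,
        "5. Out to transit / peering": 2.0,
        "6. Operator edge to server network edge": 15.0,
        "7. Across the server's network": 2.0,
        "8. Server processing": 10.0,
    }

    total = sum(legs_ms.values())

    # Halve just the two radio-side legs and leave everything else alone
    improved = dict(legs_ms)
    improved["2. Air interface (UE -> base station)"] /= 2
    improved["3. RAN transmission network to core"] /= 2

    print(f"Before: {total:.1f} ms end-to-end")
    print(f"After:  {sum(improved.values()):.1f} ms end-to-end")
    # ~48 ms -> ~40.5 ms: a ~16% improvement, not an order of magnitude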

I’m lucky, I have access to a well kitted out lab which allows me to put all of these latency figures to the test and provide side by side metrics. If this is of interest to anyone, let me know. Otherwise in the meantime you’ll just have to accept some conjecture and opinion.

You could rebut this by talking about Edge Compute and having the datacenter at the base of the tower, but for a number of fairly well documented reasons, I think this is unlikely to attract widespread deployment in established carrier networks – Intel’s recent yearly earnings call specifically called this out.


Claim: Customers want APIs and these need 5G SA

Companies like Twilio have made it easy to interact with the carrier network via their APIs, but yet again, it’s these companies producing the additional value on a service operated by the MNOs.

My coffee machine does not have an API, and I’m OK with this, because I don’t have a want or need to interact with it programmatically.

By far, the most common telco APIs used by businesses are those for sending an SMS to a user.

These have been around for a long time, the A2P market is pretty well established, and the good news is operators already get a chunk of this pie, by charging for the SMS.

Imagine a company that makes medical booking software. They’re a tech company, so they want their stack to work anywhere in the world, and they want to be able to send reminder SMS to end users.

They could get an account manager with each of the telcos in each of the markets they work in, and onboard and integrate the arcane complexities of each operator’s wholesale SMS system – or they could use Twilio or a similar service, which gives them global reach.

Often services like Twilio cost less than working directly with the carriers in each market, and even if they are marginally more expensive, the savings from not having to deal with dozens of carriers or integrate into dozens of systems far outweigh the difference.
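
To make it concrete, here’s roughly what that reminder SMS looks like against Twilio’s Messages API (endpoint per Twilio’s public docs; the SID, token and phone numbers are placeholders):

    import requests

    # Placeholders - these come from your Twilio console
    ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    AUTH_TOKEN = "your_auth_token"

    # One API call, and Twilio worries about reaching the destination
    # subscriber's carrier, whichever market they're in.
    resp = requests.post(
        f"https://api.twilio.com/2010-04-01/Accounts/{ACCOUNT_SID}/Messages.json",
        auth=(ACCOUNT_SID, AUTH_TOKEN),
        data={
            "From": "+15005550006",
            "To": "+61400000001",
            "Body": "Reminder: your appointment is at 10am tomorrow.",
        },
    )
    print(resp.status_code, resp.json().get("sid"))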

GSMA’s OpenGateway Initiative has sought to rectify this, but it lacks support for the use case we just discussed.

While it’s a great idea, in the context of 5G Standalone and APIs, it’s worth noting that none of the use cases in OpenGateway require 5G Standalone (Except possibly Edge discovery, but it is debatable).

Even Slicing existed before in LTE.

Critically, from a developer experience perspective:

I can sign up to services like Twilio without a credit card and start using the service right away, with examples in my programming language of choice – the developer experience is fantastic.

Jump on the OpenGateway website today and see if you can even find a way to sign up to use the service?

Claim: Fixed Wireless works best with 5G-SA

Of all the touted use cases and applications for 5G, Fixed Wireless (FWA) has been the most successful.

The great thing about FWA on Cellular networks is you can use the same infrastructure you use for your mobile customers, and then sell excess capacity in the network to deliver Fixed Wireless Access services, better utilizing an asset (great!).

But again, this does not require Standalone 5G. If you deploy your FWA network using 5G SA, then you won’t be able to sweat that same asset for both mobile subscribers and FWA subscribers.

Today at least, very few handsets outside the current generation of flagship phones support 5G SA. Even the phones sold as supporting 5G over the past few years almost all only support 5G-NSA, so if you rolled out your FWA network as Standalone, you can’t better utilize the asset by sharing it with your existing LTE/5G-NSA customers.

Claim: The Killer App is coming for 5G and it needs 5G SA

This space is reserved for the killer app that requires 5G Standalone.

Whenever that comes?

Anyone?

I’m not paying to build a marina berth for my mega yacht, mostly because I don’t have one. Ditto this.

Could you explain to everyone on an investor call that you’re investing in something where the vessel of the payoff isn’t even known to exist? Telecom is “blue chip”, hardly speculative.

The Future for Revenue Growth?

Maybe there isn’t one.

I know it’s an unthinkable thought for a lot of operators, but let’s look at it rationally; in the developed world, everyone who wants a mobile service already has one.

This leaves operators with two options: gaining market share from their competitors, or selling more / higher priced services to existing customers.

You don’t steal away customers from other operators by offering a higher priced product, and with reduced consumer spending people aren’t queuing up to spend more each month.

But there is a silver lining: if you can’t grow revenues, you can still shrink expenditure, which still gets the same result at the end of the quarter – more cash.

Simplify your operations, focus on what you do really well (mobile services), the whole 80/20 rule, get better at self service, all that guff.

There’s no shortage of pain points for consumers telecom operators could address, to make the customer experience better, but few that include the word Slicing.

Uncomfortable Questions to ask about 5G Standalone at MWC – Part 1 – Does $tandalone save $$$?

No one spends marketing dollars talking about the problems with a tech and vendors aren’t out there promoting sweating existing assets. But understanding your options as an operator is more important now than ever before.

Sidebar: This post got really long, so I’m splitting it into 3…

We’re often asked to help define a 5G strategy for operators; while every case is different, there are a lot of vendors pushing MNOs to move towards 5G Standalone (5G-SA).

I’m always a fan of playing “devil’s advocate“, and with so many articles and press releases singing the praises of standalone 5G/5G-SA, in this post I’ll be making the case against the narrative presented to operators by vendors: that the “right” way to do 5G is to introduce 5G Standalone, and that they should all be “upgrading” to Standalone 5G.

With Mobile World Congress around the corner, now seems like a good time to put forward the argument against introducing 5G Standalone, rebutting some of the common claims operators will be told about it, and to put forward the case for not jumping onto the 5G-SA bandwagon – just yet.

On a personal note, I do like 5G SA; it has some real advantages and some cool features, which are well documented, including on this blog. I’m not looking to beat up on any vendors, marketing hype or events, but just to provide the “other side” of the equation that operators should consider when making decisions, and may not be aware of otherwise. It’s also all opinion, of course (cited where possible), but if you’re going to build your network based on a blog post (even one as good as this) you should probably reconsider your life choices.

Some Arcane Detail: 5G Non-Standalone (NSA) vs Standalone (SA)

5G NSA (Non Standalone) uses LTE (4G) with an additional layer “bolted on” that uses 5G on the radio interface to provide “5G” speeds to users, while reusing the existing LTE (Evolved Packet Core) core and VoLTE for voice / SMS.

Image source: Samsung

From an operator perspective there is almost no change required in the network to support NSA 5G, other than in the RAN, and almost all the 5G networks in commercial use today use 5G NSA.

5G NSA is great: it gives users with phones that support it 5G speeds, with no change to the rest of the network needed.

Standalone 5G on the other hand requires a completely new core network with all the trimmings.

While it is possible to handover / interwork with LTE/4G (Inter-RAT Handovers), this is like 3G/4G interworking, where each has a different core network. Introducing 5G standalone touches every element of the network; you need new nodes supporting the new standards for charging, policy, user plane, IMS, etc.

Scope

There’s an old adage that businesses spend money for one of three reasons:

  • To Save Money (Which we’ll cover in this post)
  • To make more Money (Covered next – Will link when published)
  • Because they have to (Regulatory compliance, insurance, taxes, etc)

Let’s look at 5G Standalone in each of these contexts:

5G Cost Savings – Counterpoint: The cost-benefit doesn’t stack up

As an operator with an existing deployed 4G LTE network, deploying a new 5G standalone network will not save you money.

From a capital perspective this is pretty obvious: you’re going to need to invest in a new RAN and a new core to support this. But what about from an opex perspective?

Claim: 5G RAN is more efficient than 4G (LTE) RAN

Spectrum is both finite and expensive, so MNOs must find the most efficient way to use that spectrum, to squeeze the most possible value out of it.

Let’s look at some numbers:

In the case of 3G vs 4G (LTE) there was a strong cost saving case to be made; a single 5MHz UMTS (3G) cell could carry a total of 14Mbps, while if that same 5MHz channel was refarmed / shifted to a 4×4 LTE (4G) carrier we hit 75Mbps of downlink data.

In rough numbers, we can say we get around 5x the spectral efficiency by moving from 3G to 4G. This means we can carry over 5x more data with the same spectrum on 4G than we can on 3G – A very compelling reason to upgrade.

The like-for-like spectral efficiency of 5G is not significantly greater than that of LTE.

In numbers, the same 5MHz of spectrum we refarmed from UMTS (3G) to 4G (LTE) provided a 5x gain in efficiency to deliver 75Mbps on LTE. The same configuration refarmed to 5G-NR would provide 80Mbps.

Refarming spectrum from 4G (LTE) to 5G (NR) only provides a roughly 6-7% increase in spectral efficiency.
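
Running the figures quoted above through the arithmetic:

    # Downlink throughput in the same 5 MHz channel, numbers from above
    umts_mbps = 14  # 5 MHz UMTS (3G) cell
    lte_mbps = 75   # same 5 MHz refarmed to 4x4 LTE
    nr_mbps = 80    # same configuration refarmed to 5G-NR

    print(f"3G -> 4G: {lte_mbps / umts_mbps:.1f}x the throughput")  # ~5.4x
    print(f"4G -> 5G: {(nr_mbps / lte_mbps - 1) * 100:.1f}% more")  # ~6.7%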

While 6-7% is not nothing, if refarmed to a 5G standalone network, the spectrum can no longer be used by LTE-only devices (unless Dynamic Spectrum Sharing is used, which itself leads to efficiency losses), which reduces the overall efficiency and adds additional load to the other layers.

The crazy speeds demonstrated by 5G are not due to meaningful increases in efficiency, but rather the ability to use more spectrum, spectrum that operators need to purchase at auction, purchase equipment to utilize and pay to run.

Claim: 5G Standalone Core is Cheaper to operate as it is “Cloud Native”

It has been widely claimed that the shift for the 5G Core Architecture to being “Cloud Native” can provide cost savings.

Operators should regard this claim with skepticism; after all, we’ve been here before.

Did moving from big-iron to VNFs provide the promised cost savings to operators?

For many operators the shift from hardware to software added additional complexity to the network and increased the headcount to support this.

What were once big-iron appliances dedicated to one job, that sat in the corner and chugged away, are now virtual machines (VNFs).
Many operators have naturally found themselves needing a larger team to manage the virtual environment, compared to the size of the team they needed just to plug power and data into a big box in an exchange before everything was virtualized.

Introducing a “Cloud Native” Kubernetes layer on top of the VNF / virtualization layer, on top of the compute layer, leaves us with a whole lot of layers, all of which require resources to be maintained, troubleshot and kept running; each layer has associated costs for staffing, licensing and support.

Many mid-size enterprises rushed into “the cloud” for the promised cost savings, only to sheepishly admit it cost more than expected.

Almost none of the operators are talking about running these workloads in the public cloud, but rather “Private Clouds” built on-premises, using “Cloud Native” best practices.

One of the central arguments for cloud revolves around “elastic scaling”, where the network can automatically scale to match demand; think extra instances spun up at times of peak demand and shut down when the demand drops.

I explain elastic scaling to clients as having to move people from one place to another. Most of the time, I’m just moving myself, a push bike is fine, or I’ve got a 4 seater car, but occasionally I’ll need to move 25 people and for that I’d need a bus.

If I provide the transportation myself, I need to own a bike, a car and a bus.

But if I use the cloud, I can start with the push bike, and as I need to move more people, the “cloud” will provide me the vehicle I need to move the people I need to move at that moment. I’ll just pay for the time I need the bus, and when I’m done needing it, I drop back to the (cheaper) push bike.

This is a really compelling argument, and telecom operators regularly announce partnerships with the hyperscalers – except they’re always for non-core-network workloads.

Because telecom operators are going to provide the servers to run this in an “on-prem cloud”, they need to dimension for the maximum possible load. This means they need to own a bike, a car and a bus, even if most of the time they’re not all being used – and there are really no cost savings to having a bus but not using it when you’re not paying by the hour to hire it.

Infrastructure aside, introducing a Standalone 5G Core adds another core network to maintain. Alongside the legacy core (MSC/SGSN/GGSN) serving 2G/3G subscribers and the Evolved Packet Core serving 4G (LTE) and 5G-NSA subscribers, adding a 5G Standalone Core for the 5G-SA subscribers served by the 5G SA cells is going to be more work (and therefore cost).

While the majority of operators have yet to turn off their 2G/3G core networks, introducing another core network to run in parallel is unlikely to lead to any cost savings.

Claim: Upgrading now can save money in the Future / Future Proofing

Life cycles in telecommunications are twofold: one is the equipment/platform life cycle (like the RAN components or core network software being used to deliver the service), the other is the technology life cycle (the generation of technology being used).

The technology lifecycles in telecommunications are vastly longer than that for regular tech.

GSM (2G) was introduced into the UK in 1991, and will be phased out starting in 2033, a 42 year long technology life cycle.

No vendor today could reasonably expect the 5G hardware you deploy in 2024 to still be in production in 2066 – The platform/equipment life cycle is a lot shorter than the technology life cycle.

Operators will continue to rely on LTE (4G) well into the late 2030s.

I’d wager that there is not a single piece of equipment in the Vodafone UK GSM network today, that was there in 1991.
I’d go even further to say that any piece of equipment in the network today, didn’t even replace the 1991 equipment, but was probably 3 or 4 generations removed from the network built in 1991.

For most operators, RAN replacements happen every 4 to 7 years, often with targeted augmentation / expansion as needed, in the form of adding extra layers / sectors between these times.

The question operators should be asking is therefore not what will I need to get me through to 2066, but rather what will I need to get to 2030?

The majority of operators outside the US today still operate a 2G or 3G network, generally with minimal bandwidth to support legacy handsets and devices, while the 4G (LTE) network does most of the heavy lifting for carrying user traffic. This is often with the aid of an additional 5G-NSA (Non-Standalone) layer to provide additional capacity.

Is there a cost saving angle to adding support for 5G-Standalone in addition to 2G/3G/4G (LTE) and 5G (Non-Standalone) into your RAN?

A logical stance would be that removing layers / technologies (such as 2G/3G sunsetting) would lead to cost savings, and adding a 5G Standalone layer would increase cost.

All of the RAN solutions on the market today from the major vendors include support for both Standalone and Non-Standalone 5G, but the feature licensing for Non-Standalone 5G is generally cheaper than that for Standalone 5G.

The question operators should be asking is on what timescale do I need Standalone 5G?

If you’ve rolled out 5G-NSA today, then when are you looking to sunset your LTE network?
If the answer is “I hope to have long since retired by that time”, then you’ve just answered that question and you don’t need to licence / deploy 5G-SA in this hardware refresh cycle.

Other Cost Factors

Roaming: The majority of roaming traffic today relies on 2G/3G for voice. VoLTE roaming is (finally) starting to establish a foothold, but we are a long way from ubiquitous global roaming for LTE and VoLTE, and even further away for 5G-SA roaming. Focusing on 5G roaming will enable your network for roaming use by a minuscule number of operators, compared to LTE/VoLTE roaming, which covers the majority of operators in the developed world who can utilize your service.

I decided to split this into 3 posts; next I’ll post the “5G can make us more money” post, and finally the “5G because we have to” post. I’ll post them on LinkedIn / Twitter / the mailing list, so stick around, and feel free to trash me in the comments.

How 5G “Slices” are purchased and activated in Android

Slicing has long been held up as one of the monetization opportunities for residential customers, but few seem to be familiar with it beyond the concept, so I thought I’d take a look at how it actually works in Android, and how an end user would interact with it.

For starters, there’s a little-used hook in Android’s TelephonyManager called purchasePremiumCapability; this method can be called by a carrier’s self-care app.

You can pass it the type of “Slice” (capability) to purchase, for example PREMIUM_CAPABILITY_PRIORITIZE_LATENCY for the slice.

Operators would need the Telephony permission for their app, and a function in the app to activate this, but it doesn’t require Android Carrier Privileges and a matching signature on the SIM card, although there are a lot of good reasons to include these in your Android Manifest for a carrier self-care app.

We’ve made a little test app we use for things like enabling VoLTE, setting the APNs, setting carrier config, etc. I added the Purchase Slice capability to it and gave it a shot.

Android Studio Carrier Privileges

And the hook works, I was able to “purchase” a Slice.

App running on a Samsung phone shown with SCRCPY

I did some sleuthing to find out if any self-care apps from carriers have implemented this functionality for standards-based slicing, but I couldn’t find any. I’m curious to see if it takes off – as I’ve written about previously, slicing capabilities are not new in cellular, but the attempt to monetise them is.

More info in Telephony Manager – purchasePremiumCapability – Android Developers

What’s the maximum speed for LTE and 5G?

Even before 5G was released, there was an arms race to claim the “fastest” speeds, and it has continued across LTE, NSA and SA networks, with pretty much every operator claiming a “first” or “fastest”.

I myself have the fastest 5G network available*, but I thought I’d look at how big the values we can put in for speed actually get – these are the Maximum Bitrate Values (like AMBR) we can set on an APN/DNN, or on a Charging Rule.

*Measurement is of the fastest 5G network in an eastward-facing office, operated by a person named Nick, in a town in Australia. Other networks operated by people not named Nick in eastward-facing offices outside of Australia were not compared.

The answer for Release 8 LTE is 4294967294 bits per second, aka 4295 Mbps, aka 4.295 Gbps.

Not bad, but why this number?

The Max-Requested-Bandwidth-DL AVP tells the PGW the max throughput allowed in bits per second. It’s an Unsigned32, so the max value is 4294967294, hence the value.

But come Release 15, some bright spark thought we may, in the not too distant future, break this barrier. So how do we go above this?

The answer was to bolt on another AVP – the “Extended-Max-Requested-BW-DL” AVP (554) was introduced. You might think that means the max speed now becomes 2x 4.295 Gbps, but that’s not quite right – the units were shifted.

This AVP isn’t measuring bits per second, it’s measuring kilobits per second.

So the standard Max-Requested-Bandwidth-DL AVP gives us 4.3 Gbps, while the Extended-Max-Requested-Bandwidth gives us 4,295 Gbps.

We add the Extended-Max-Requested-Bandwidth AVP (4,295 Gbps) onto the Max-Requested-Bandwidth AVP (4.3 Gbps), giving us a total of 4,299.3 Gbps.
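
As a quick sanity check on that arithmetic:

    # Max value quoted above for both AVPs (each is an Unsigned32)
    MAX_AVP_VALUE = 4294967294

    plain_bps = MAX_AVP_VALUE            # Max-Requested-Bandwidth-DL: bits/sec
    extended_bps = MAX_AVP_VALUE * 1000  # Extended-Max-Requested-BW-DL: kbits/sec

    print(f"Pre Release 15:  {plain_bps / 1e9:.3f} Gbps")                   # 4.295 Gbps
    print(f"Post Release 15: {(plain_bps + extended_bps) / 1e9:.1f} Gbps")  # 4299.3 Gbps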

So the short answer:

Pre release 15: 4.3 Gbps

Post release 15: 4,299.3 Gbps

BSF Addresses

The Bootstrapping Server Function (BSF) is used in 4G and 5G networks to allow applications to authenticate against the network; it’s what we use to authenticate for XCAP and for an Entitlement Server.

Rather irritatingly, there are two BSF addresses in use:

If the ISIM is used for bootstrapping the FQDN to use is:

bsf.ims.mncXXX.mccYYY.pub.3gppnetwork.org

But if the USIM is used for bootstrapping, the FQDN is:

bsf.mncXXX.mccYYY.pub.3gppnetwork.org

You can override this by setting EF 6FDA (EF_GBANL – the GBA NAF List) on the USIM, or the equivalent on the ISIM; however, in my testing not all devices honour this.
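
A tiny helper makes the difference clear (note that 3gppnetwork.org FQDNs zero-pad the MNC to 3 digits):

    def bsf_fqdn(mcc: str, mnc: str, isim_bootstrapping: bool) -> str:
        """Build the BSF FQDN for a PLMN, per the two patterns above."""
        domain = f"mnc{mnc.zfill(3)}.mcc{mcc}.pub.3gppnetwork.org"
        return f"bsf.ims.{domain}" if isim_bootstrapping else f"bsf.{domain}"

    print(bsf_fqdn("505", "1", isim_bootstrapping=True))
    # bsf.ims.mnc001.mcc505.pub.3gppnetwork.org
    print(bsf_fqdn("505", "1", isim_bootstrapping=False))
    # bsf.mnc001.mcc505.pub.3gppnetwork.org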

Inside a 32×32 MIMO Antenna

For the past few months I’ve had a Band 78 NR active antenna unit sitting next to my desk.

It’s a very cool bit of kit that doesn’t get enough love, but I thought I’d pop open the radome and take a peek inside.

Individual antenna elements

What I found very interesting is that it’s not all antennas in there!

… 29, 30, 31, 32. Yup. Checks out.

There are the expected number of antenna elements (I mean, if I opened it up and found 31 antennas I’d have been surprised), but they don’t take up the whole volume of the unit – only about half.

AAU with Radome reinstalled

Well, after that strip show, back to sitting in my office until I need to test something 5G SA again…

Some thoughts on NRF Security in 5G Core

So I’ve been waxing lyrical about how cool the NRF is, but what about how it’s secured?

A matchmaking service for service-consuming NFs to find service-producing NFs makes integration between them a doddle, but also opens up all sorts of attack vectors.

Theoretical Nasty Attacks (PoC or GTFO)

Sniffing Signaling Traffic:
A malicious actor could register a fake UDR service with a higher priority with the NRF. This would mean UDR service consumers (Like the AUSF or UDM) would send everything to our fake UDR, which could then proxy all the requests to the real UDR which has a lower priority, all while sniffing all the traffic.

Stealing SIM Credentials:
Brute forcing the SUPI/IMSI range on a UDR would allow the SIM Card Crypto values (K/OP/Private Keys) to be extracted.

Sniffing User Traffic:
A dodgy SMF could select an attacker-controlled / run UPF to sniff all the user traffic that flows through it.

Obviously there’s a lot more scope for attack by putting nefarious data into the NRF, or querying it for data gathering, and I’ll see if I can put together some examples in the future, but you get the idea of the mischief that could be managed through the NRF.

This means it’s pretty important to secure it.

OAuth2

3GPP opted to use common industry standards for HTTP auth, including OAuth2 (clearly lessons were learned from COMP128 all those years ago); however, OAuth2 is optional, and not integrated as you might expect. There’s a little bit to it, but you can expect to see a post on the topic in the next few weeks.

3GPP Security Recommendations

So how do we secure the NRF from bad actors?

Well, there’s 3 options according to 3GPP:

Option 1 – Mutual TLS

Where the Client (NF) and the Server (NRF) mutually authenticate using TLS certificates in order to communicate.

This is a pretty standard mechanism for securing communications, but the issuing and distribution of certificates is often done poorly, and there is no way to ensure the person with the certificate is the person the certificate was issued to.

3GPP have not specified a mechanism for issuing and securely distributing certificates to NFs.

Option 2 – Network Domain Security (NDS)

Split the network traffic on a logical level (VLANs / VRFs, etc) so only NFs can access the NRF.

Essentially it’s logical network segregation.

Option 3 – Physical Security

Split the network like in NDS, but at a physical layer, so the physical cables essentially run point-to-point from NF to NRF.

Thoughts?

What’s interesting is that these are presented as 3 options, rather than as layers of a single approach.

OAuth2 is used on top, but as noted above, it’s optional and not integrated as you might expect.

Summary


NRF and NF shall authenticate each other during discovery, registration, and access token request. If the PLMN uses protection at the transport layer as described in clause 13.1, authentication provided by the transport layer protection solution shall be used for mutual authentication of the NRF and NF.

If the PLMN does not use protection at the transport layer, mutual authentication of NRF and NF may be implicit by NDS/IP or physical security (see clause 13.1).

When NRF receives message from unauthenticated NF, NRF shall support error handling, and may send back an error message. The same procedure shall be applied vice versa.

After successful authentication between NRF and NF, the NRF shall decide whether the NF is authorized to perform discovery and registration.

In the non-roaming scenario, the NRF authorizes the Nnrf_NFDiscovery_Request based on the profile of the expected NF/NF service and the type of the NF service consumer, as described in clause 4.17.4 of TS 23.502 [8]. In the roaming scenario, the NRF of the NF Service Provider shall authorize the Nnrf_NFDiscovery_Request based on the profile of the expected NF/NF Service, the type of the NF service consumer and the serving network ID.

If the NRF finds NF service consumer is not allowed to discover the expected NF instances(s) as described in clause 4.17.4 of TS 23.502 [8], NRF shall support error handling, and may send back an error message.

NOTE 1: When a NF accesses any services (i.e. register, discover or request access token) provided by the NRF, the OAuth 2.0 access token for authorization between the NF and the NRF is not needed.

TS 133 501 – 13.3.1 Authentication and authorization between network functions and the NRF

If you like Pina Coladas, and service the control plane – Intro to NRF in 5GC

The Network Repository Function plays matchmaker to all the elements in our 5G Core.

For our 5G Service-Based-Architecture (SBA) we use Service Based Interfaces (SBIs) to communicate between Network Functions. Sometimes a Network Function acts as a server for these interfaces (aka “Service Producer”) and sometimes it acts as a client on these interfaces (aka “Service Consumer”).

For service consumers to be able to find service producers (clients to be able to find servers), we need a directory mechanism for clients to find the servers that can serve their needs. This is the role of the NRF.

With every Service Producer registering to the NRF, the NRF has knowledge of all the available Service Producers in the network, so when a Service Consumer NF comes along (like an AMF looking for a UDM), it just queries the NRF to get the details of who can serve it.

Basic Process – NRF Registration

In order to be found, a service producer NF has to register with the NRF, so the NRF has enough info on the service-producer to be able to recommend it to service-consumers.

This is all the basic info: the Service Based Interfaces (SBIs) that this NF serves, the PLMN, and the type of NF.

The NRF then stores this information in a database, ready to be found by SBI Service Consumers.

This is achieved by the Service Producing NF sending an HTTP/2 PUT to the NRF, with the message body containing all the particulars about the services it offers.

Simplified example of an SMSc registering with the NRF in a 5G Core
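
As a rough sketch of that PUT (resource path per TS 29.510; the profile below is trimmed right down for illustration – a real NFProfile carries a lot more):

    import uuid
    import requests

    NRF = "http://[::1]:7777"  # same NRF address as the discovery example below

    # A heavily trimmed NFProfile for an SMSF - illustrative, not complete
    nf_instance_id = str(uuid.uuid4())
    profile = {
        "nfInstanceId": nf_instance_id,
        "nfType": "SMSF",
        "nfStatus": "REGISTERED",
        "plmnList": [{"mcc": "505", "mnc": "01"}],
        "ipv4Addresses": ["10.0.0.10"],
        "nfServices": [{
            "serviceInstanceId": "nsmsf-sms-01",
            "serviceName": "nsmsf-sms",
            "versions": [{"apiVersionInUri": "v2", "apiFullVersion": "2.0.0"}],
            "scheme": "http",
            "nfServiceStatus": "REGISTERED",
        }],
    }

    # Register (or replace) our profile with the NRF
    resp = requests.put(f"{NRF}/nnrf-nfm/v1/nf-instances/{nf_instance_id}", json=profile)
    print(resp.status_code)  # expect 201 Created on first registration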

Basic Process – NRF Discovery

With an NRF that has a few SBI Service Producers registered in it, we can now start querying it from SBI Service Consumers, to find SBI Service Producers.

The SBI Service Consumer looking for an SBI Service Producer queries the NRF with a little information about itself, and about the SBI Service Producer it’s looking for.

For example, an SMF looking for a UDM sends a request like:

http://[::1]:7777/nnrf-disc/v1/nf-instances?requester-nf-type=SMF&target-nf-type=UDM

To the NRF, and the NRF responds with the SBI Service Producing NFs that match, in the JSON body of the response.

SMSF being found by the AMF using the NRF
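
In Python, that discovery query is just:

    import requests

    # An SMF asking the NRF for available UDMs - the query from above
    resp = requests.get(
        "http://[::1]:7777/nnrf-disc/v1/nf-instances",
        params={"requester-nf-type": "SMF", "target-nf-type": "UDM"},
    )

    # The NRF returns a SearchResult; each entry is a candidate Service Producer
    for nf in resp.json().get("nfInstances", []):
        print(nf["nfInstanceId"], nf["nfType"], nf.get("ipv4Addresses"))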

More Info

I’ve written about the NRF in more technical detail in this post, you can learn about setting up the Open5Gs NRF in this post, and keep tuned for a lot more content on 5GC!

GTP Extension Headers (PDU session user plane protocol) in 5GC

The GPRS Tunneling Protocol is one of the last common bits of signaling seen in 5G networks, having existed since GPRS was standardized in 1998, and 23 years later, it’s still in use on the user plane.

But networks evolve, and 5G networks required some extensions to GTP on the N9 and N3 reference points (UPF to UPF, and UPF to gNodeB / Access Network).

3GPP TS 38.415 outlines the PDU session user plane protocol used in 5GC.

The Need for GTP Header Extensions

As increasingly complex QoS capabilities are introduced into 5GC, there is a need to signal certain information on a per-packet basis.

In previous generations of mobile network, traffic could be differentiated with different Tunnel Endpoint Identifiers (TEIDs), but not on a per-packet basis.

The expansion of QoS in 5GC means the UPF or gNodeB may need to set the QoS Flow Identifier, include delay measurements, or signal that Reflective QoS is being used, all on a per-packet basis. For this, you need to extend GTP.

Fortunately GTP has support for Extension Headers and this has been leveraged to add the PDU Session Container in the Extension Header of a GTP packet.

In here you can set, on a per-packet basis (there’s a byte-level sketch of the container after this list):

  • QoS Flow Identifier (QFI) – Used to identify the QoS flow to be used (Pretty self explanatory)
  • Reflective QoS Indicator (RQI) – To indicate reflective QoS is supported for the encapsulated packet
  • Paging Policy Presence (PPP) – To indicate support for Paging Policy Indicator (PPI)
  • Paging Policy Indicator (PPI) – Sets parameters of paging policy differentiation to be applied
  • QoS Monitoring Packet – Indicates packet is used for QoS Monitoring and DL & UL Timestamps to come
  • UL/DL Sending Time Stamps – 64 bit timestamp generated at the time the UPF or UE encodes the packet
  • UL/DL Received Time Stamps – 64 bit timestamp generated at the time the UPF or UE received the packet
  • UL/DL Delay Indicators – Indicates Delay Results to come
  • UL/DL Delay Results – Delay measurement results
  • Sequence Number Presence – Indicates if QFI sequence number to come
  • UL/DL QFI Sequence Number – Sequence number as assigned by the UPF or gNodeB
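
To make the framing concrete, here’s a minimal sketch based on my reading of TS 29.281 and TS 38.415 (double-check the bit positions against the specs before relying on it) that builds the 4-octet DL PDU Session Information extension header carrying a QFI and the RQI bit. In the preceding GTP-U header, the “next extension header type” field would be set to 0x85, the PDU Session Container:

    import struct

    def dl_pdu_session_container(qfi: int, rqi: bool = False, ppp: bool = False) -> bytes:
        """Minimal DL PDU Session Information extension header (TS 38.415).

        Octet 1: extension header length, in 4-octet units (TS 29.281)
        Octet 2: PDU Type (0 = downlink) in the high nibble, QMP/SNP/spare zeroed
        Octet 3: PPP | RQI | QFI (6 bits)
        Octet 4: next extension header type (0x00 = no more extension headers)
        """
        octet2 = (0 & 0x0F) << 4  # PDU Type 0 = DL
        octet3 = (int(ppp) << 7) | (int(rqi) << 6) | (qfi & 0x3F)
        return struct.pack("!BBBB", 1, octet2, octet3, 0x00)

    # QFI 9 with Reflective QoS indicated
    print(dl_pdu_session_container(qfi=9, rqi=True).hex())  # 01004900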

Framed Routing in 5G

Previous generations of mobile core networks would only allocate a single IP address per UE (well, two if running dual-stack IPv4/IPv6, if you want to be technical). But one of the cool features in 5GC is native support for Framed Routing.

You could do this on several EPC platforms on LTE, but its support was always a bit shoe-horned in, and the UE was not informed of the framed addresses.

If you’ve worked in a wireline ISP, you’re probably familiar with the concept of framed routing already; in short, it’s one or more static routes, typically returned from a AAA server (normally RADIUS), that are then routed to the subscriber.

Each subscriber gets allocated an IP by the network, but other IPs can also be routed to the subscriber, based on the network and CIDR mask.

So let’s say we allocate a public IP of 1.2.3.4/32 to our subscriber, but our subscriber is a fixed-wireless user running a business, and they want extra public IP addresses.

How do we do this? With Framed Routing.

Now in our UDM we can add a “Framed IP”, and when the SMF sets up a session for our subscriber, the extra networks specified in the framed routes will get routed to that UE.

If we add 203.176.196.0/30 in our UDM for a subscriber, when the subscriber attaches, the UPF will be set up to forward traffic to 1.2.3.4/32 and also traffic to 203.176.196.0/30 to the UE.

Update: I previously claimed:
Best of all, this is signaled to the UE during the attach, so if the UE is, say, a router, it becomes aware of the Framed IPs allocated to it.
This is incorrect! Thanks to Anonymous Telco Engineer from an Anonymous Nordic Country for pointing this out; it is not signaled to the UE.

More info in 3GPP TS 23.501 section 5.6.14 Support of Framed Routing.

Reflective QoS in 5G

Reflective QoS is a clever new concept introduced in 5G SA networks.

The concept is rather simple: apply QoS in the downlink, and let the UE reply using the same QoS in the uplink.

So what is Reflective QoS?
If I send an ICMP ping request to a UE with a particular QoS Flow setup on the downlink, if Reflective QoS is enabled, the ICMP reply will have the same QoS applied on the uplink. Simple as that.

The UE looks at the QoS applied on the downlink traffic, and applies the same to the uplink traffic.

Let’s take another example: if a user starts playing an online game, and the traffic to the user (downlink) has certain QoS parameters set, then if Reflective QoS is enabled, the UE builds rules based on the incoming traffic – the source IP / port / protocol of the traffic received, and the QoS used on the downlink – and applies the same on the uplink.

But actually getting Reflective QoS enabled requires a few more steps…

Reflective QoS is enabled on a per-packet basis, and is indicated by the UPF setting the Reflective QoS Indication (RQI) bit in the encapsulation header next to the QFI (This is set in the GTP header, as an extension header, used on the N3 and N9 reference points).

But before this is honored, a few other parameters have to be setup.

  • A Reflective QoS Timer (RQ Timer) has to be set, this can be done during the PDU Session Establishment, PDU Session Modification procedure, or set to a default value.
  • SMF has to set Reflective QoS Attribute (RQA) on the QoS profile for this traffic on the N2 reference point towards gNodeB
  • SMF must instruct UPF to use uplink reflective QoS by generating a new UL PDR for this SDF via the N4 reference point

When these requirements have been met, the traffic from the UPF to the gNodeB (N3 reference point) has the Reflective QoS Indication (RQI) bit in the encapsulation header, which is encapsulated and signaled down to the UE, which builds a rule based on the received IP source / port / protocol, and sends responses using the same QoS attributes.
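
Conceptually, the UE side boils down to a table of uplink rules learned from downlink packets, keyed on the reversed 5-tuple. A toy sketch of the idea (purely illustrative – not how a real modem implements it):

    from typing import Dict, Tuple

    # (src_ip, dst_ip, src_port, dst_port, protocol)
    FlowKey = Tuple[str, str, int, int, int]

    class ReflectiveQosTable:
        def __init__(self, default_qfi: int = 9):
            self.default_qfi = default_qfi
            self.rules: Dict[FlowKey, int] = {}

        def on_downlink(self, src, dst, sport, dport, proto, qfi: int, rqi: bool):
            # Only learn a rule when the UPF set the RQI bit for this packet;
            # key it on the reversed 5-tuple, i.e. what our reply will look like.
            # (Rule expiry per the RQ Timer is omitted for brevity.)
            if rqi:
                self.rules[(dst, src, dport, sport, proto)] = qfi

        def uplink_qfi(self, src, dst, sport, dport, proto) -> int:
            return self.rules.get((src, dst, sport, dport, proto), self.default_qfi)

    table = ReflectiveQosTable()
    table.on_downlink("1.1.1.1", "10.0.0.5", 443, 50000, 6, qfi=5, rqi=True)
    print(table.uplink_qfi("10.0.0.5", "1.1.1.1", 50000, 443, 6))  # 5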

N20 5G SBI for Nsmsf for SMS over 5GC

SMS in 5GC

Like in EPS / LTE, there are two ways to send SMS in Standalone 5G Core networks.

SMS over IMS or SMS over NAS – Both can be used on the same network, or just one, depending on operator preferences.

SMS over IMS in 5G

SMS over IMS uses the IMS network to send SMS; SIP MESSAGE methods are used to deliver SMS between users. While most operators deployed IMS for 4G/LTE subscribers to use VoLTE some time ago, there are some changes required to the IMS architecture to support VoNR (Voice over New Radio) on the carrier side, and support for VoNR in commercial devices is currently in its early stages. Because of this, many 5G devices and networks do not yet support SMS over IMS.

I’ve read in some places that RCS – the GSMA’s Rich Communications Service – will replace SMS in 5GC. If this is the case, it isn’t reflected in any of the 3GPP standards.

SMS over NAS

To make a voice call on a device or network that does not support VoNR, EPS (VoLTE) fallback is used.
This means when making or receiving a call, the UE drops from the 5G RAN to a 4G (LTE) based RAN, and then uses VoLTE to make the call the same as it would when connected to 4G (LTE) networks – because it is connected to a 4G network.
This works technically, but it is not the preferred option, as it adds extra signaling and complexity to the network, plus delays in the call setup; it’s expected operators will eventually move to VoNR, but it works as a stop-gap measure.

But mobile networks see a lot of SMS traffic. If every time an SMS was sent the UE had to rely on EPS fallback to access IMS, this would see users ping-ponging between 4G and 5G every time they sent or received an SMS.

This isn’t a new problem; in fact, SMS-over-NAS was initially added to 4G (LTE) to allow devices to stay connected to the EPC (4G Core network) but still send and receive SMS, even if the network or device relied on “Circuit-Switched fallback” (a mechanism to drop from 4G to 2G / 3G for voice calls).

5GC reintroduces the SMS-over-NAS feature, allowing the SMS messages to be carried over NAS messaging on the N1 interface. Voice calls may still require fallback to EPS (4G) to make calls over VoLTE, but SMS can be carried over NAS messaging, minimizing the amount of Inter-RAT handovers required.

The Nsmsf_SMService

For this a new Service Based Interface is introduced between the AMF and the SMSF (SMS Function, typically built into an SMSc), via the N20 / Nsmsf SBI to offer the Nsmsf_SMService service.

There are 3 operations supported for the Nsmsf_SMService:

  • Activate – Initiated by the AMF – Used to activate the SMS service for a given subscriber.
  • Deactivate – Initiated by the AMF – Used to deactivate the SMS over NAS service for a given subscriber.
  • UplinkSMS – Initiated by the AMF to transfer the SMS payload towards the SMSF.

The UplinkSMS is an HTTP POST from the AMF, with the SUPI in the Request URI and the request body containing a JSON encoded SmsRecordData.
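
As a very rough sketch of the shape of that call – the resource path here is my reading of TS 29.540, and the body is heavily elided, so treat both as assumptions to verify against the spec:

    import requests

    SMSF = "http://smsf.example.com"  # hypothetical SMSF address
    SUPI = "imsi-505010123456789"     # example subscriber

    # SUPI in the Request URI, JSON SmsRecordData in the body, as described
    # above. A real SmsRecordData carries the actual SMS payload and more.
    resp = requests.post(
        f"{SMSF}/nsmsf-sms/v2/ue-contexts/{SUPI}/sendsms",
        json={"smsRecordId": "rec-0001"},
    )
    print(resp.status_code)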

Astute readers may notice that’s all well and good, but it only covers Mobile Originated (MO) SMS – what about Mobile Terminated (MT) SMS?

Well, that’s actually handled by a totally different SBI: the Namf_Communication action “N1N2MessageTransfer” is reused for sending MT SMS, as that interface already exists for use by the SMF, LMF and PCF, and 5GC attempts to reuse interfaces as much as possible.

5G Online Charging with the Nchf_ConvergedCharging SBI

There’s no such thing as a free lunch, and 5G is the same – services running through a 5G Standalone core need to be billed.

In 5G Core Networks, the SMF (Session Management Function) reaches out to the CHF (Charging Function) to perform online charging, via the Nchf_ConvergedCharging Service Based Interface (aka reference point).

Like in other generations of core mobile networks, Credit Control in 5G networks is based on 3 functions:

  • Requesting a quota for a subscriber from an online charging service, which, if granted, permits the subscriber to use a certain number of units (in this case data transferred in/out).
  • Just before those units are exhausted, sending an update to request more units from the online charging service to allow the service to continue.
  • When the session has ended or the subscriber has disconnected, sending a termination to inform the online charging service to stop billing and refund any unused credit / units (data).

Initial Service Creation (ConvergedCharging_Create)

When the SMF needs to set up a session (for example, when the AMF sends the SMF a Nsmf_PDU_SessionCreate request), the CTF (Charging Trigger Function) built into the SMF sends a Nchf_ConvergedCharging_Create (Initial, Quota Requested) to the Charging Function (CHF).

Because the Nchf_ConvergedCharging interface is a Service Based Interface, this is carried over HTTP; in practice, this means the SMF sends an HTTP POST to http://yourchargingfunction/Nchf_ConvergedCharging/v1/chargingdata/

Obviously there’s some additional information to be shared rather than just a bare HTTP POST, so the POST includes the ChargingDataRequest as the request body. If you’ve dealt with Diameter Credit Control, you may be expecting the ChargingDataRequest information to be a huge jumble of nested AVPs, but it’s actually a fairly short list:

  • The subscriberIdentifier (SUPI) is included to identify the subscriber so the CHF knows which subscriber to charge
  • The nfConsumerIdentification identifies the SMF generating the request (The SBI Consumer)
  • The invocationTimeStamp and invocationSequenceNumber are both pretty self explanatory; the time the request is sent and the sequence number from the SBI consumer
  • The notifyUri identifies which URI should receive subsequent notifications from the CHF (for example, if the CHF wants to terminate the session, this is where it tells the SMF)
  • The multipleUnitUsage defines the service-specific parameters for the quota being requested.
  • The triggers identifies the events that trigger the request

Each of those fields should be pretty self-explanatory as to their purpose.
The multipleUnitUsage data is used like the Service-Information AVP in Diameter-based Credit Control, in that it defines the specifics of the service we’re requesting a quota for. Inside, it contains a mandatory ratingGroup specifying which rating group the CHF should use, and optionally a requestedUnit, which can either define the amount of service units being requested (for us, this is data in/out) or simply tell the CHF that units are needed. Typically it is used to define the amount of units to be requested.
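
Pulling those fields together, a minimal sketch of the Create call (field names from the list above; exact shapes per TS 32.291, so treat the values as illustrative):

    import datetime
    import requests

    CHF = "http://yourchargingfunction"  # the CHF base URL from above

    charging_data_request = {
        "subscriberIdentifier": "imsi-505010123456789",
        "nfConsumerIdentification": {
            "nodeFunctionality": "SMF",
            "nFName": "smf01.example.com",  # illustrative identifier
        },
        "invocationTimeStamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "invocationSequenceNumber": 1,
        "notifyUri": "http://smf01.example.com/callbacks/charging",
        "multipleUnitUsage": [{
            "ratingGroup": 10,
            "requestedUnit": {"totalVolume": 500_000_000},  # ask for ~500 MB
        }],
        "triggers": [],  # trigger definitions elided for brevity
    }

    resp = requests.post(f"{CHF}/Nchf_ConvergedCharging/v1/chargingdata/",
                         json=charging_data_request)
    print(resp.status_code)  # expect 201 CREATED if the quota is granted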

On the amount of units requested, we have a bit of a chicken-and-egg scenario: we don’t know how many units (in our case, data in/out transferred) to request. If we request too much, we’ll tie up all the customer’s credit, potentially prohibiting them from accessing other services; request too little, and we’ll constantly slam the CHF with requests for more credit.
In practice this value is somewhere between the two, and will vary quite a bit.

Based on the service details the SMF has put in the Nchf_ConvergedCharging_Create request, the Charging Function (CHF) takes into account the subscriber’s current balance, credit control policies, etc, and uses this to determine if the subscriber has the required balance to be granted the service. If so, it sends a 201 CREATED response to the Nchf_ConvergedCharging_Create request sent by the CTF inside the SMF.

This 201 CREATED response is again fairly clean and simple; the key information is in the multipleQuotaInformation, nested within the ChargingDataResponse, which contains the finalUnitIndication defining the maximum units to be granted for the session, and the triggers defining when to check in with the CHF again, for time, volume and quota thresholds.

And with that, the service is granted, the SMF can instruct the UPF to start allowing traffic through.

Update (ConvergedCharging_Update)

Once the granted units / quota have been exhausted, the Update (ConvergedCharging_Update) request is used for requesting subsequent usage / quota units. For example, our subscriber has used up all the data initially allocated but is still consuming data, so the SMF sends a Nchf_ConvergedCharging_Update request to the CHF via another HTTP POST, with the requested service unit in the request body in the form of a ChargingDataRequest, as we saw in the initial ConvergedCharging_Create.

If the subscriber still has credit and the CHF is OK to allow their service to continue, the CHF returns a 200 OK with the ChargingDataResponse, again, detailing the units to be granted.

This procedure repeats over and over as the subscriber uses their allocated units.

Release (ConvergedCharging_Release)

Eventually when our subscriber disconnects, the SMF will generate a Nchf_ConvergedCharging_Release request, detailing the data the subscriber used in the ChargingDataRequest in the body, to the CHF, so it can refund any unused credits.

The CHF sends back a 204 No Content response, and the procedure is completed.

More Info

If you’ve had experience with Diameter credit control, this simple procedure will be a breath of fresh air; it’s clean and easy to comprehend.
If you’d like to learn more, the 3GPP specification docs on the topic are clear and comprehensible. I’d suggest:

  • TS 132 290 – Short overview of charging mechanisms
  • TS 132 291 – Specifics of the Nchf_ConvergedCharging interface
  • The common 3GPP charging architecture is specified in TS 32.240

EIR in 5G Networks (N5g-eir_EquipmentIdentityCheck)

Today, we’re going to look at one of the simplest Service Based Interfaces in the 5G Core, the Equipment Identity Register (EIR).

The purpose of the EIR is very simple: when a subscriber connects to the network, its Permanent Equipment Identifier (PEI) can be queried against an EIR to determine if that device should be allowed onto the network or not.

The PEI is the IMEI of a phone / device, with the idea being that stolen phones’ IMEIs are added to a forbidden list on the EIR and prohibited from connecting to the network, making them useless, in turn making stolen phones harder to resell, deterring mobile phone theft.

In reality these forbidden-lists are typically either country specific or carrier specific, meaning if the phone is used in a different country, or in some cases a different carrier, the phone’s IMEI is not in the forbidden-list of the overseas operator and can be freely used.

The dialog goes something like this:

AMF: Hey EIR, can PEI 49-015420-323751-8 connect to the network?
EIR: (checks if 49-015420-323751-8 in forbidden list - It's not) Yes.

or

AMF: Hey EIR, can PEI 58-241992-991142-3 connect to the network?
EIR: (checks if 58-241992-991142-3 is in forbidden list - It is) No.

(Optionally the SUPI can be included in the query as well, to lock an IMSI to an IMEI, which is a requirement in some jurisdictions)

As we saw in the above script, the AMF queries the EIR using the N5g-eir_EquipmentIdentityCheck service.

The N5g-eir_EquipmentIdentityCheck service only offers one operation – CheckEquipmentIdentity.

It’s called by sending an HTTP GET to:

http://{apiRoot}/n5g-eir-eic/v1/equipment-status

Obviously we’ll need to include the PEI (IMEI) in the HTTP GET, which, if you remember back to basic HTTP, means you have to add ?attribute=value&attribute=value… for each attribute / value you want to share.

For the CheckEquipmentIdentity operation, the PEI is a mandatory parameter, and optionally the SUPI can be included. This means to query our PEI (the IMEI of the phone) against our EIR, we’d simply send an HTTP GET to:

AMF: HTTP GET http://{apiRoot}/n5g-eir-eic/v1/equipment-status?pei=490154203237518
EIR: 200 (Body EirResponseData: status "WHITELISTED")

And how it would look for a blacklisted IMEI:

AMF: HTTP GET http://{apiRoot}/n5g-eir-eic/v1/equipment-status?pei=582419929911423
EIR: 404 (Body EirResponseData: status "BLACKLISTED")
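
In Python, the whole client side fits in a few lines:

    import requests

    EIR = "http://eir.example.com"  # stand-in for {apiRoot}

    resp = requests.get(
        f"{EIR}/n5g-eir-eic/v1/equipment-status",
        params={"pei": "490154203237518"},  # optionally add "supi" to lock IMSI to IMEI
    )

    if resp.status_code == 200:
        print("Allowed:", resp.json())  # e.g. {"status": "WHITELISTED"}
    else:
        print("Blocked or unknown PEI:", resp.status_code)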

Because it’s so simple, the N5g-eir_EquipmentIdentityCheck service is a great starting point for learning about 5G’s Service Based Interfaces.

You can find all the specifics in 3GPP TS 29.511 – Equipment Identity Register Services; Stage 3

PS Data Off

Imagine a not-too-distant future, one without flying cars – just one where 2G and 3G networks have been switched off.

And then imagine a teenage phone user who has almost run out of their prepaid mobile data allocation and so has switched mobile data off, or a roaming scenario where the user doesn’t want to get stung by an unexpectedly large bill.

In 2G/3G networks the Circuit Switched (Voice & SMS) traffic was separate to the Packet Switched (Mobile Data).

This allowed users to turn off mobile data (GPRS/HSDPA, etc) but still be able to receive phone calls and send SMS.

With LTE, everything is packet switched, so turning off mobile data would cut off VoLTE connectivity, meaning users wouldn’t be able to make/receive calls or SMS.

3GPP Release 14 (2017) introduced the PS Data Off feature.

This feature is primarily implemented on the UE side, and simply blocks uplink user traffic from the UE, while leaving other background IP services, such as IMS/VoLTE and MMS, to continue working, even if mobile data is switched off.

The UE can signal to the core that it is turning off PS Data, but it’s not required to, so from a core perspective you may not know if your subscriber has PS Data off or not – the default APN is still active, and in the implementations I’ve tried, it still responds to ICMP pings.

IMS registration stays in place, and SMS and MMS still work; the UE just drops the requests from the applications on the device (in this case I’m testing with an Android device).

What’s interesting about this is that a user may still find themselves consuming data, even if data services are turned off. A good example of this would be push notifications, which are sent to the phone (downlink data). The push notification will make it to the UE (or at least the TCP SYN will); after all, downlink services are not blocked. However, the response (for example, the SYN-ACK for TCP) will not be sent. Most TCP stacks, when ignored, try again, so you’ll find that even if you have PS Data off, you may still use some of your downlink data allowance, although not much.

The SIM EF 3GPPPSDATAOFF defines the services allowed to continue flowing when PS Data is off, and the 3GPPPSDATAOFFservicelist EF lists which IMS services are allowed when PS Data is off.

Usually at this point, I’d include a packet capture and break down the flow of how this all looks in signaling, but when I run this in my lab, I can’t differentiate between a PS Data Off on the UE and just a regular bearer idle timeout… So have an irritating blinking screenshot instead…

The PLMN Problem for Private LTE / 5G

So it’s the not-too-distant future, and the pundits’ vision of private LTE and 5G networks has proved correct – private networks are plentiful.

But what PLMN do they use?

The PLMN (Public Land Mobile Network) ID is made up of a Mobile Country Code + Mobile Network Code. MCCs are 3 digits and MNCs are 2-3 digits. It’s how your phone knows to connect to a tower belonging to your carrier, and not one of their competitors.

For example, in Australia (Mobile Country Code 505) the three operators each have their own MNC. Telstra, as the first licenced mobile network, was assigned 505/01, Optus got 505/02 and VHA / TPG got 505/03.

Each carrier was assigned a PLMN when they started operating their network. But the problem is, there’s not much space in this range.

The PLMN can be thought of as the SSID in WiFi terms, but with such a restricted pool of PLMNs available, we’re facing an IPv4-style exhaustion problem from the start if there’s an explosion of growth in the space.

Let’s look at some ways this could be approached.

Everyone gets a PLMN

If every private network were to be assigned a PLMN, we’d very quickly run out of space in the range. Best case, you’ve got 3 digits of MNC, so only space for 1,000 networks under each country code.

In certain countries this might work, but in other areas these PLMNs may get gobbled up fast, and when they do, there’s no more. New operators will be locked out of the market.

Loaner PLMNs

Carriers already have their own PLMNs that they’ve been using for years, and some kit vendors have been assigned their own as well.

If you’re buying a private network from an existing carrier, they may permit you to use their PLMN, or if you’re buying kit from an existing vendor, you may be able to use their PLMN too.

But what happens then if you want to move to a different kit vendor or another service provider? Do you have to rebuild your towers, reconfigure your SIMs?

Are you contractually allowed to continue using the PLMN of a third party like a hardware vendor, even if you’re no longer purchasing hardware from them? What happens if they change their mind and no longer want others to use their PLMN?

Everyone uses 999 / 99

The ITU has tried to preempt this problem by assigning MCC 999 for internal use within private networks, with any MNC under it (typically 99) free for local use.

The problem here is that if you’ve got multiple private networks in close proximity – especially if you’re using shared spectrum like CBRS – you may find your devices attempting to attach to another network that broadcasts the same PLMN but isn’t part of your network.

Mobile Country or Geographical Area Codes
Note from TSB
Following the agreement on the Appendix to Recommendation ITU-T E.212 on “shared E.212 MCC 999 for internal use within a private network” at the closing plenary of ITU-T SG2 meeting of 4 to 13 July 2018, upon the advice of ITU-T Study Group 2, the Director of TSB has assigned the Mobile Country Code (MCC) “999” for internal use within a private network. 

Mobile Network Codes (MNCs) under this MCC are not subject to assignment and therefore may not be globally unique. No interaction with ITU is required for using a MNC value under this MCC for internal use within a private network. Any MNC value under this MCC used in a network has significance only within that network.

The MNCs under this MCC are not routable between networks. The MNCs under this MCC shall not be used for roaming. For purposes of testing and examples using this MCC, it is encouraged to use MNC value 99 or 999. MNCs under this MCC cannot be used outside of the network for which they apply. MNCs under this MCC may be 2- or 3-digit.

(Recommendation ITU-T E.212 (09/2016))

The Crystal Ball?

My bet is we’ll see the ITU allocate an MCC – or a range of MCCs – for private networks, allowing for a pool of PLMNs to use.

When deploying a network, private network operators could then try to pick something that’s not in use in the area, from a pool of a few thousand options.

The major problem here is that there still won’t be an easy way to identify the operator of a particular network; the SPN is local only to the SIM and the Network Name is only present in the NAS messaging on an attach, and only after authentication.

If you’ve got a problem network, there’s no easy way to identify who’s operating it.

But as eSIMs become more prevalent, BIP / RFM on SIMs will hopefully allow operators to shift PLMNs without too much headache.

Pre-5G Network Slicing

Network Slicing is a new 5G technology. Or is it?

Pre 3GPP Release 16, the capability to “slice” a network already existed; in fact, the functionality was introduced way back at the advent of GPRS. So what is so new about 5G’s Network Slicing?

Network Slice: A logical network that provides specific network capabilities and network characteristics

3GPP TS 23.501 / 3 Definitions and Abbreviations

Let’s look at the old and the new ways of slicing up networks, pre Release 16, on LTE, UMTS and GSM.

Old Ways: APN Separation

The APN or “Access Point Name” is used so the SGSN / MME knows which gateway the subscriber’s traffic should be terminated on when setting up the session.

APN separation is used heavily by MVNOs where the MVNO operates their own P-GW / GGSN.
This allows the MVNO to handle their own rating / billing / subscriber management when it comes to data.
A network operator just needs to set up their SGSN / MME to point all requests to set up a bearer on the MVNO’s APN to the MVNO’s gateways, and presto, it’s no longer their problem.
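
Under the hood, that “pointing” is usually just DNS: the SGSN / MME builds an APN-FQDN from the APN and the PLMN (3GPP TS 23.003) and resolves it to find the gateway to use, so handing an APN to an MVNO is often as simple as publishing DNS records for it. A rough sketch of the FQDN construction (the APN name here is made up):

```python
def apn_fqdn(apn_ni: str, mcc: str, mnc: str) -> str:
    """Build the APN-FQDN an MME / SGSN resolves (via DNS / S-NAPTR)
    to find the P-GW / GGSN serving an APN (3GPP TS 23.003).
    The MNC is always zero-padded to 3 digits in the FQDN."""
    return f"{apn_ni}.apn.epc.mnc{int(mnc):03d}.mcc{int(mcc):03d}.3gppnetwork.org"

# The MNO just needs this name to resolve to the MVNO's gateways:
print(apn_fqdn("mvno-internet", "505", "01"))
# -> mvno-internet.apn.epc.mnc001.mcc505.3gppnetwork.org
```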

Later as customers wanted MPLS solutions extended over mobile (Typically LTE), MNOs were able to offer “private APNs”.
An enterprise could be allocated an APN by the MNO that would ensure traffic on that APN would be routed into the enterprise’s MPLS VRF.
The MNO handles the P-GW / GGSN side of things, adding the APN configuration onto it and ensuring the traffic on that APN is routed into the enterprise’s VRF.

Different QCI values can be assigned to each APN, to allow some to have higher priority than others, but by slicing at an APN level you lock all traffic to those QoS characteristics (typically mobile devices only support one primary APN used for routing all traffic), and you don’t have the flexibility to steer which traffic from a subscriber goes to which network.

It’s not really practical for everyone to have their own APN: the namespace is limited, the architecture of how this is usually done limits scale, and having everyone provision an APN unique to them would be a real headache.

5G replaces APNs with “DNNs” – Data Network Names, but the functionality is otherwise the same.

In Summary:
APN separation slices all traffic from a subscriber using a special APN, and provides a bearer with QoS/QCI values set for that APN, but it does not allow granular slicing of individual traffic flows; it’s an all-or-nothing approach, and all traffic in the APN is treated equally.

Old Ways: Dedicated Bearers

Dedicated bearers allow traffic matching a set rule to be given a different QCI value to that of the default bearer. This allows certain traffic to/from a UE to use GBR or non-GBR bearers for traffic matching the rule.

The rule itself is known as a “TFT” (Traffic Flow Template) and is made up of a 5-value tuple consisting of IP source, IP destination, source port, destination port & protocol number. Both the UE and the core network need to be aware of these TFTs, so the traffic matching the TFT can get the QCI allocated to it.
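
To make the matching concrete, here’s a toy model of a TFT packet filter in Python. Real TFTs are encoded per 3GPP TS 24.008 and the IMS address below is made up; this just shows the match logic:

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class TFT:
    """Toy model of a TFT packet filter: the 5-value tuple matched by
    both the UE and the core to steer traffic onto a dedicated bearer."""
    src_net: str          # e.g. "0.0.0.0/0" = any source
    dst_net: str
    src_port: int | None  # None = wildcard
    dst_port: int | None
    protocol: int | None  # IP protocol number (6 = TCP, 17 = UDP)

    def matches(self, src, dst, sport, dport, proto) -> bool:
        return (
            ip_address(src) in ip_network(self.src_net)
            and ip_address(dst) in ip_network(self.dst_net)
            and self.src_port in (None, sport)
            and self.dst_port in (None, dport)
            and self.protocol in (None, proto)
        )

# e.g. steer RTP media towards an IMS core (made-up IP) onto a GBR bearer
voice_tft = TFT("0.0.0.0/0", "203.0.113.10/32", None, None, 17)
print(voice_tft.matches("10.45.0.2", "203.0.113.10", 40000, 40002, 17))  # True
```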

This can be done in a variety of ways; in LTE this ranges from rules defined in a PCRF, to an external application, like an IMS network, using the Rx interface to request dedicated bearers matching specified TFTs via the PCRF.

Unlike with 5G network slicing, dedicated bearers still traverse the same network elements: the same MME, S-GW & P-GW are used for this traffic. This means you can’t “locally break out” certain traffic.

In Summary:
Dedicated bearers allow you to treat certain traffic to/from subscribers with different precedence & priority, but the traffic still takes the same path to its ultimate destination.

Old Ways: MOCN

Multi-Operator Core Network (MOCN) allows multiple MNOs to share the same active (tower) infrastructure.

This means one eNodeB can broadcast more than one PLMN and serve more than one mobile network.

This slicing is very coarse – it allows two operators to share the same eNodeBs, but going beyond a handful of PLMNs on one eNB isn’t practical, and the PLMN space is quite limited (1000 PLMNs per country code max).

In Summary:
MOCN allows slicing of the RAN at a very coarse level, separating traffic from the different operators/PLMNs sharing the same RAN.

Its use is focused on sharing RAN rather than slicing traffic for users.

5Gethernet? – Transporting Non-IP data in 5G

I wrote not too long ago about how LTE access is not like WiFi, after a lot of confusion amongst new Open5Gs users coming to LTE for the first time and expecting it to act like a Layer 2 network.

But 5G brings a new feature that changes that:

PDU Session Type: The type of PDU Session which can be IPv4, IPv6, IPv4v6, Ethernet or Unstructured

ETSI TS 123 501 – System Architecture for the 5G System

No longer are we limited to just IP transport, meaning at long last I can transport my Token Ring traffic over 5G. Or, in reality, customers can extend Layer 2 networks (Ethernet) over 3GPP technologies without resorting to overlay networking, and, much more importantly, fixed line networks, which typically run at Layer 2, can leverage the 5G core architecture.

How does this work?

With TFTs and the N6 interface relying on the 5-value tuple of IPs / ports / protocol numbers to make decisions, transporting Ethernet or non-IP data over 5G networks presents a problem.

But with fixed (aka Wireline) networks being able to leverage the 5G core (“Wireline Convergence”), we need a mechanism to handle Ethernet.

For starters, in the PDU Session Establishment Request the UE indicates which PDU types it supports; historically this was IPv4/IPv6, but now, if supported by the UE, Ethernet and Unstructured are available as PDU types.

We’ll focus on Ethernet, as that’s the most defined so far.

Once an Ethernet PDU session has been set up, the N6 interface looks a bit different; for starters, how does it know where, or how, to route this non-IP traffic?

As far as 3GPP is concerned, that’s your problem:

Regardless of addressing scheme used from the UPF to the DN, the UPF shall be able to map the address used between the UPF and the DN to the PDU Session.

ETSI TS 123 501 – 5.6.10.3 Support of Unstructured PDU Session type

In short, the UPF will need to be able to make the routing decisions to support this, and that’s up to the implementer of the UPF.

In the Ethernet scenario, the UPF would need to learn the MAC addresses behind the UE, handle ARP, and use this to determine which traffic to send to which UE, encapsulate it into trusty old GTP, fill in the correct TEID, and then send it to the gNodeB serving that user (if they are indeed on a RAN and not a fixed network).
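
As the spec leaves this up to the implementer, here’s a deliberately simplified sketch of the bridging state a UPF would need: learn which MACs sit behind which PDU session on uplink, then pick the right GTP tunnel for downlink frames. A real implementation would also need ARP handling, entry aging and the actual GTP-U encapsulation, all omitted here:

```python
class EthernetUPF:
    """Toy model of UPF bridging for Ethernet PDU sessions:
    a MAC learning table keyed to each session's GTP-U TEID."""

    def __init__(self) -> None:
        self.mac_to_teid: dict[str, int] = {}

    def learn_uplink(self, src_mac: str, teid: int) -> None:
        # Uplink frame seen on a session: remember the MAC behind that UE
        self.mac_to_teid[src_mac] = teid

    def route_downlink(self, dst_mac: str) -> int | None:
        # Which GTP tunnel (TEID) should carry this downlink frame?
        return self.mac_to_teid.get(dst_mac)

upf = EthernetUPF()
upf.learn_uplink("02:00:5e:00:53:01", teid=0x1001)
print(upf.route_downlink("02:00:5e:00:53:01"))  # 4097 (0x1001)
```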

So where does this leave QoS? Without IPs to apply TFTs and Packet Filter Sets to, how is this handled? In short, it’s not – only the default QoS rule exists for a PDU session of type Unstructured. QoS control for Unstructured PDUs is performed at the PDU session level, meaning you can set the QFI when the PDU session is set up, but not vary it based on the traffic through that session.

Does this mean 5G RAN can transport Ethernet?

Well, it remains to be seen.

The specifications don’t cover whether this is just for wireline scenarios or whether it can be used over the RAN.

The 5G PDU Creation signaling has a field to indicate if the traffic is Ethernet, but to work over a RAN we would need UE support as well as support on the Core.

And for E-UTRAN?

For the foreseeable future we’re going to be relying on LTE/E-UTRAN as well as 5G. So if you’re mobile with a non-IP PDU session and you enter an area only served by LTE, what happens?

PDU Session types “Ethernet” and “Unstructured” are transferred to EPC as “non-IP” PDN type (when supported by UE and network).

It is assumed that if a UE supports Ethernet PDU Session type and/or Unstructured PDU Session type in 5GS it will also support non-IP PDN type in EPS.

ETSI TS 123 501 – 5.17.2 Interworking with EPC

If you were not aware of support in the EPC for non-IP PDNs, I don’t blame you – so far, support for the non-IP PDN type has come from the CIoT EPS optimizations, used by NB-IoT to support Non-IP Data Delivery (NIDD) for lightweight LwM2M traffic.

So why is this? Well, it may have to do with WO 2017/032399 A1, a patent held by Ericsson regarding “COMMUNICATION OF NON-IP DATA OVER PACKET DATA NETWORKS”, which could be restricting wide-scale deployment of this.

Open5Gs Database Schema Change

Open5Gs has introduced network slicing, which led to a change in the database schema used.

Alas, many users had subscribers provisioned in the old DB schema, with no way to migrate the SDM data between the old and new schemas.

If you created subscribers on the old schema, and after the update your subscriber authentication is failing, check out this tool I put together to migrate your data over.

The Open5Gs Python library I wrote has also been updated to support the new schema.
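
If you’re not sure whether you’re affected, a quick check against the MongoDB instance will tell you. The database and collection names below are the stock Open5Gs defaults, and the “slice” field is what the new schema introduced (on the old schema the session data lived in a flat list instead) – adjust to suit your install:

```python
from pymongo import MongoClient

# Stock Open5Gs install: database "open5gs", collection "subscribers"
db = MongoClient("mongodb://localhost:27017")["open5gs"]

# Subscribers without a "slice" array are still on the pre-slicing schema
old_style = db.subscribers.count_documents({"slice": {"$exists": False}})
print(f"{old_style} subscriber(s) still on the old schema")
```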

MTU in LTE & 5G Transmission Networks – Part 1

Every now and then, when looking into a problem, I have to really stop and think about how things work down at a low level – things I haven’t thought about for a long time – and MTU is one of those things.

Faced with an LTE MTU issue recently, I thought I’d go back, brush up on my MTU knowhow, and do some experimenting.

Note: This is an IPv4 discussion; in IPv6, routers do not fragment packets in transit.

The very, very basics

MTU is the Maximum Transmission Unit.

In practice this is the largest datagram the layer can handle, and more often than not, this is based on a physical layer constraint, in that different physical layers can only stuff so much into a frame.

  • “The Internet” from a consumer perspective typically has an MTU of 1500 bytes, or perhaps a bit under depending on the carrier, such as 1472 bytes.
  • SANs in data centers typically use an MTU of around 9000 bytes.
  • Out of the box, most devices, if you don’t specify otherwise, will use an MTU of 1500 bytes.

As a general rule, service providers typically try to offer an MTU as close to 1500 as possible.

Messages that are longer than the Maximum Transmission Unit need to be broken up in a process known as “Fragmenting”.
Fragmenting allows large frames to be split into smaller frames to make their way across hops with a lower MTU.

All about Fragmentation

So we can break up larger packets into smaller ones by Fragmenting them, so case closed on MTU right? Sadly not.

Fragmentation leads to reduced efficiency – Fragmenting frames takes up precious CPU cycles on the router performing it, and each time a frame is broken up, additional overhead is added by the device breaking it up, and by the receiver to reassemble it.

Fragmentation can happen multiple times across a path (multi-stage fragmentation).
For example, if a frame is sent with a length of 9000 bytes and needs to traverse a hop with an MTU of 4000, it would need to be fragmented (broken up) into 3 frames (frames 1 and 2 would be ~4000 bytes long and frame 3 would be ~1000 bytes long).
If it then needs to traverse another hop with an MTU of 1500, the 3 fragmented frames would each need to be further fragmented, with each ~4000 byte frame being split up into 3 more fragments.
Lost track of what just happened? Spare a thought for the routers having to do the fragmentation and the recipient having to reassemble the packets.
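
You can watch this multi-stage fragmentation happen without any routers at all, using Scapy. A quick sketch, approximating the hop MTUs above (fragsize is the payload carried per fragment and must be a multiple of 8):

```python
from scapy.all import IP, ICMP, fragment

# A ~9000 byte ICMP packet
pkt = IP(dst="192.0.2.1") / ICMP() / ("x" * 8950)

# First hop: ~4000 byte MTU
stage1 = fragment(pkt, fragsize=3976)
# Second hop: 1500 byte MTU - re-fragment each fragment
stage2 = [f for frag in stage1 for f in fragment(frag, fragsize=1480)]

print(len(stage1))  # 3 fragments after the first hop
print(len(stage2))  # 7 fragments after the second (3 + 3 + the small tail)
```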

Fragmented frames are reassembled by the end recipient, other devices along the transmission path don’t reassemble packets.

In the end it boils down to this trade-off:
The larger the packet, the more user data we can stuff into each one as a percentage of the overall data, and we want that percentage to be as high as possible.
This means we want to use the largest MTU possible, without having to fragment packets.

Overhead eats into our MTU

A 1500 byte MTU that has to be encapsulated in IPsec, GTP or PPP, is no longer a 1500 byte MTU as far as the customer is concerned.

Any of these encapsulation techniques add overhead, which shrinks the MTU available to the end customer.

Keep in mind we’re going to be encapsulating our subscriber’s data in GTP before it’s transmitted across LTE/NR, and this means we’ll be adding:

  • 8 bytes for the GTP header
  • 8 bytes for the transport UDP header
  • 20 bytes for the transport IPv4 header
  • 14 bytes if our transport is using Ethernet

This means we’ve got 50 bytes of transmission / transport overhead. This will be important later on!
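
To put numbers on that (a back-of-the-envelope calculation; IPsec or GTP extension headers would eat further into this):

```python
TRANSPORT_MTU = 1500   # IP MTU of the transport network

GTP_HEADER = 8         # basic GTPv1-U header
UDP_HEADER = 8
IPV4_HEADER = 20       # transport IPv4 header, no options

# The 14-byte Ethernet header sits below the IP layer, so it doesn't
# count against the 1500 byte IP MTU - but it's still on the wire.
overhead = GTP_HEADER + UDP_HEADER + IPV4_HEADER

print(TRANSPORT_MTU - overhead)  # 1464: the most we can offer subscribers
```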

How do subscribers know what to use as MTU?

Typically when a subscriber buys a DSL service or HFC connection, they’ll either get a preconfigured router from their carrier, or they will be given a list of values to use that includes MTU.

LTE and 5G networks, on the other hand, tell us the value we should use.

Inside the Protocol Configuration Options (PCO) in the NAS PDU, the UE requests the MTU and DNS server to be used, and the network provides them in its response.

This MTU value is actually set on the MME, not the P-GW. As the MME doesn’t actually know the maximum MTU of the network, it’s up to the operator to configure it to a value that represents the network.
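
If you’re digging through these in Wireshark, the value lives in the “IPv4 Link MTU” PCO container (container ID 0x0010, 3GPP TS 24.008, 10.5.6.3). A rough decoder sketch, assuming you’ve already pulled the PCO contents out of the NAS message:

```python
import struct

IPV4_LINK_MTU = 0x0010  # PCO container ID (3GPP TS 24.008, 10.5.6.3)

def pco_link_mtu(pco: bytes) -> int | None:
    """Walk a PCO IE (contents only) and return the IPv4 Link MTU, if present.
    Layout: 1 config-protocol octet, then repeated
    [container ID (2 octets) | length (1 octet) | contents]."""
    i = 1  # skip the configuration-protocol octet
    while i + 3 <= len(pco):
        container_id, length = struct.unpack_from("!HB", pco, i)
        if container_id == IPV4_LINK_MTU and length == 2:
            return struct.unpack_from("!H", pco, i + 3)[0]
        i += 3 + length
    return None

# A PCO carrying an IPv4 Link MTU of 1464 (0x05B8)
print(pco_link_mtu(bytes.fromhex("80 0010 02 05b8")))  # 1464
```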

Why this Matters for LTE & 5G Transmission

As we covered earlier, fragmentation is costly. If we’re fragmenting packets we are:

  • Wasting resources on our transmission network / core networks – as we fragment Subscriber packets it’s taking up compute resources and therefore limiting throughput
  • Wasting radio resources as additional overhead is introduced for fragmented packets, and additional RBs need to be scheduled to handle the fragmented packets

To test this I’ve set up a scenario in the lab; we’ll look at the packet captures to see how the MTU is advertised, and see how big we can make our MTU on the subscriber side.