
Seeing the Airwaves – QCSuper

Recently we were on a project where our RAN engineer was watching UEs ping-pong between one layer and another, over and over. The hysteresis and handover parameters looked correct, but we needed a way to see what was actually going on: what the eNB was advertising and what the UE was reporting back.

In a past life I had access to expensive, complicated, dedicated tooling that could view this information as the eNB transmitted it, but now all I need is a cellphone or a modem with a Qualcomm chip.

QCSuper is a super handy tool that gives access to the Diag interface on Qualcomm chipsets. That’s the same interface QXDM uses, but without the massive headache and usability issues that come with QXDM – plus it puts everything into Wireshark, making it super easy to view.

You can see RRC, NAS and even the user plane traffic, all from Wireshark.

I’ve tested this with a rooted Xiaomi 12 Pro and with a Quectel modem in an M.2 to USB adapter.

This is really cool and I’m looking forward to using it more in the field – or just scanning the airwaves when I’m bored on the train!

Mobile IPv6 Tax?

Recently a Tweet from Dean Bubley got me thinking about how data is charged in cellular:

In the cellular world, subscribers are charged for data from the IP, transport and application layers; this means you pay for the IP header, you pay for the TCP/UDP header, and you pay for the payload (the cat videos it contains).

This also means that if an operator moves mobile subscribers from IPv4 to IPv6, there’s an extra 20 bytes the customer is charged for on every packet sent or received – this is because the IPv6 header (40 bytes) is 20 bytes longer than the IPv4 header (20 bytes).

Source: ServerFault - https://serverfault.com/questions/547768/ipv4-header-vs-ipv6-header-size

In most cases, mobile subs don’t get a choice as to whether their connection is IPv4 or IPv6, but on a like-for-like basis, we can say that a customer on IPv6 will consume an extra 20 bytes for every packet sent or received compared to IPv4.

This means subscribers use more data on IPv6, and therefore get charged for more data on IPv6.

For IoT applications, light users and PAYG users, this extra 20 bytes per packet could add up to something significant – but how much?

We can quantify this, but we’d need to know the average number of packets sent as well as the quantity of data transferred, because the number of packets is the multiplier here.

So for starters, I left a phone on the desk, registered to the network but just sitting in idle mode. But this is an engineering phone from an OEM, used purely for testing: it doesn’t have any apps loaded, it’s not signed into anything, and nothing is checking in in the background – so I thought I’d try something more realistic.

So to get a clearer picture, I chucked a SIM into my regular everyday phone and registered it to the cellular lab I have here. For the next hour I sniffed the GTP traffic for the phone while it sat untouched on my desk, and here’s what I got:

Overall the PCAP contains 6,417,732 bytes of data, but this includes the transport and GTP headers, which the subscriber isn’t charged for, so we can drop everything outside the GTP payload from our traffic calculations.

Everything except the data encapsulated in GTP can be dropped

For this capture I’ve got 14 bytes of Ethernet, 20 bytes of IP, 8 bytes of UDP and 5 bytes of TZSP (used to copy the traffic from the eNB to my local machine), then the transport from the eNB to the S-GW: 14 bytes of Ethernet again, 20 bytes of IP, 8 bytes of UDP and 8 bytes of GTP, then the payload itself. Phew.
All this means we can drop 97 bytes off every packet.
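
As a sanity check, here’s that header stack tallied up in a few lines of Python (the byte counts are the same ones listed above):

```python
# Per-packet headers in the capture that the subscriber isn't charged for.
capture_headers = {
    "Ethernet (mirror to capture host)": 14,
    "IP (TZSP transport)": 20,
    "UDP (TZSP transport)": 8,
    "TZSP": 5,
    "Ethernet (eNB to S-GW)": 14,
    "IP (GTP transport)": 20,
    "UDP (GTP transport)": 8,
    "GTP-U": 8,
}

overhead_per_packet = sum(capture_headers.values())
print(overhead_per_packet)  # 97 bytes to strip from every packet
```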

We have 16,889 packets and 6,417,732 bytes in total; dropping 97 bytes from each packet gives us 1,638,233 bytes of headers to discard (~1.6 MB), leaving a total of ~4.56 MB of traffic to/from the phone itself.

This means my Android phone consumes 4.5 MB of cellular data in an hour while sitting on the desk, with 16,889 packets in/out.

Okay, now we’re getting somewhere!

So now we can answer the question: if each of these 16,889 packets had been IPv6 rather than IPv4, each would have carried an extra 20 bytes. 20 bytes × 16,889 packets gives 337,780 bytes (~0.3 MB) added to the total – around 7% overhead compared to IPv4.
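
If you want to check my maths, the whole back-of-envelope fits in a few lines of Python (byte counts straight from the capture above; I’m using 1 MB = 1,048,576 bytes to match the figures in this post):

```python
total_capture_bytes = 6_417_732  # everything in the PCAP
packet_count = 16_889            # packets to/from the phone in the hour
overhead_per_packet = 97         # capture + transport headers (see above)
ipv6_extra_per_packet = 20       # IPv6 header (40 B) minus IPv4 header (20 B)

MB = 1024 * 1024

# Strip the headers the subscriber isn't charged for
charged = total_capture_bytes - packet_count * overhead_per_packet
print(f"Charged traffic: {charged} B ({charged / MB:.2f} MB)")

# Model the same hour over IPv6: 20 extra bytes on every packet
extra = packet_count * ipv6_extra_per_packet
print(f"IPv6 overhead:   {extra} B ({extra / MB:.2f} MB, {extra / charged:.1%})")
```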

But before you declare it an outrage that you’re charged for those extra bytes of IPv6 header, that’s only one part of the picture.

There’s a reason operators are finally embracing IPv6, and it’s not to put an extra 7% of traffic on the network (I think if you asked most capacity planners, they’d say they want data savings, not growth).

IPv6 is, for lack of a better term, less rubbish than IPv4.

There are a lot of drivers for IPv6, and some of them will actually reduce data consumption.
With IPv6, your stuff talks directly to the remote stuff: there’s no need to rely on NAT, which means no NAT keepalives and no constant re-opening of sessions, and that’s going to save you data. If you’re running apps that need to keep a connection to somewhere alive, these savings could negate the IPv6 overhead, as the rough numbers below show.
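
To put some numbers on that – and these are entirely made-up, illustrative figures, since keepalive intervals and packet sizes vary wildly between apps and NAT deployments – a sketch like this shows how quickly keepalives can dwarf the 20-byte-per-packet IPv6 cost for a light user:

```python
# All figures here are illustrative assumptions, not measurements.
keepalive_interval_s = 30      # assumed NAT binding refresh interval
keepalive_packet_b = 60        # assumed size of one keepalive, headers included
useful_packets_per_hour = 100  # assumed real traffic from a light IoT device

# Keeping one NAT binding alive for an hour over IPv4:
# one keepalive out plus one response back per interval.
keepalive_packets = (3600 // keepalive_interval_s) * 2
keepalive_bytes = keepalive_packets * keepalive_packet_b

# Extra cost of the same hour's useful traffic over IPv6 instead of IPv4
ipv6_overhead_bytes = useful_packets_per_hour * 20

print(f"NAT keepalives (IPv4): {keepalive_bytes} B/hour")     # 14,400 B
print(f"IPv6 header overhead:  {ipv6_overhead_bytes} B/hour")  # 2,000 B
```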

Will these potential data savings when using IPv6 outweigh the costs?

That’s going to depend on your use case.

If you’re extremely bandwidth / data constrained – for example, an IoT device on an NTN / satellite connection that had to push data every X hours via IPv4, because with no public IP there was no way to pull data from it – then moving it to IPv6, so you can pull the data on demand from its public IP, will save you data. That’s a win with IPv6.

If you’re a mobile user watching YouTube, getting push notifications and using your phone like a normal human? Probably not. But if you’re using data like a normal user, you’ve probably got a sizeable data allowance that you don’t fully consume anyway, and the extra 20 bytes per packet will be nothing compared to the data used to watch a 2K video on your small phone screen.

Inside a 32×32 MIMO Antenna

For the past few months I’ve had a Band 78 NR active antenna unit sitting next to my desk.

It’s a very cool bit of kit that doesn’t get enough love, so I thought I’d pop open the radome and take a peek inside.

Individual antenna elements

What I found very interesting is that it’s not all antennas in there!

… 29, 30, 31, 32. Yup. Checks out.

There is the expected number of antenna elements (I mean, if I’d opened it up and found 31 antennas I’d have been surprised), but they don’t take up the whole volume of the unit – only about half.

AAU with Radome reinstalled

Well, after that strip show, it’s back to sitting in my office until I need to test something on 5G SA again…

CUPS – Control and User Plane Separation in LTE & NR with PFCP (Sx & N4)

3GPP Release 14 introduced the concept of CUPS – Control & User Plane Separation – and the Sx interface. This allows the control plane (GTP-C) functionality and the user plane (GTP-U) functionality to be separated and run in a distributed fashion, so the node a user’s GTP-U traffic flows through can be in a different location to the node their control / signalling (GTP-C) traffic flows through.

In practice, for an LTE EPC this means we split our P-GW and S-GW into a minimum of two network elements each.

A P-GW is split into a P-GW-C, which handles the P-GW control plane traffic (GTPv2-C), and a P-GW-U, speaking GTP-U, for our user plane traffic. But the split doesn’t need to stop there: one P-GW-C could control multiple P-GW-Us, routing the user plane traffic between them. The same goes for the S-GW, which splits into an S-GW-C and S-GW-U.

This means we could have a P-GW-U located closer to an eNB / user to reduce latency, by allowing GTP-U traffic to break out on a node closer to the user.

It also means we can scale better: if we need to handle more data traffic, but not necessarily more control plane traffic, we can just add more P-GW-U nodes to handle it.

To manage this, a new protocol was defined – PFCP, the Packet Forwarding Control Protocol. For LTE this is referred to as the Sx reference point; it’s also reused in 5G SA as the N4 reference point.

When a GTP-C “Create Session Request” comes into a P-GW-C, for example from an S-GW, the P-GW-C sends a PFCP “Session Establishment Request” to the P-GW-U, carrying much the same information that was in the GTP-C request, to set up the session.
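
To make that relationship a bit more concrete, here’s a loose conceptual sketch in Python of the translation. The class and field names are simplified stand-ins I’ve made up for illustration – the real IEs are defined in 3GPP TS 29.274 (GTPv2-C) and TS 29.244 (PFCP) – but it captures the shape of it: the user plane gets Packet Detection Rules (PDRs) saying which packets to match, and Forwarding Action Rules (FARs) saying what to do with them:

```python
from dataclasses import dataclass

# Simplified stand-ins for the real messages; the actual IEs live in
# 3GPP TS 29.274 (GTPv2-C) and TS 29.244 (PFCP).

@dataclass
class CreateSessionRequest:          # GTPv2-C, arriving from the S-GW
    imsi: str
    apn: str
    ue_ip: str                       # IP address allocated to the subscriber
    sgw_fteid: tuple                 # (S-GW IP, TEID) for downlink GTP-U

@dataclass
class PacketDetectionRule:           # PDR: which packets to match
    match: dict

@dataclass
class ForwardingActionRule:          # FAR: what to do with matched packets
    action: dict

@dataclass
class SessionEstablishmentRequest:   # PFCP, sent P-GW-C -> P-GW-U
    seid: int                        # Session Endpoint Identifier
    pdrs: list
    fars: list

def on_create_session(req: CreateSessionRequest, seid: int, local_teid: int):
    """Sketch of the GTP-C to PFCP translation inside a P-GW-C."""
    # Uplink: match GTP-U traffic arriving from the S-GW on our TEID,
    # decapsulate it and route it out towards the internet.
    ul_pdr = PacketDetectionRule(match={"gtpu_teid": local_teid})
    ul_far = ForwardingActionRule(action={"forward_to": "core/internet"})

    # Downlink: match plain IP traffic addressed to the UE,
    # wrap it in GTP-U and send it to the S-GW's F-TEID.
    dl_pdr = PacketDetectionRule(match={"dst_ip": req.ue_ip})
    dl_far = ForwardingActionRule(action={"encap_gtpu_to": req.sgw_fteid})

    return SessionEstablishmentRequest(
        seid=seid, pdrs=[ul_pdr, dl_pdr], fars=[ul_far, dl_far])
```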

So why split the Control and User Plane traffic if you’re going to just relay the GTP-C traffic in a different format?

That was my first question. The answer is that keeping the GTP-C interface ensures backwards compatibility with older MMEs, PCRFs and charging systems, and allows a staged rollout and bolting on extra capacity.

GTP-C disappears entirely in the 5G Standalone architecture and is replaced by the N4 interface, which uses PFCP for the control plane, with GTP-U again carrying the user plane.

Here’s a capture from an Open5GS core showing GTPv2-C and PFCP in play.