The other day I found myself banging my head on the table trying to diagnose an issue with Ringback Tone (RBT) between an SS7 link and the IMS.
On the IMS side, no RBT was heard, but I could see the Media Gateway was sending RTP packets to the TAS, and the TAS was sending them on to the UE. But was there actual content in those RTP packets, or was it just silence?
If this was PCM / G.711 we’d be able to just play it back in Wireshark, but alas we can’t do this for the AMR codec.
Filter the RTP stream out in Wireshark
Inside Wireshark I filtered each of the audio streams in one direction (one for the A-Party audio and one for the B-Party audio)
Then I needed to save each of the streams as a separate PCAP file (Not PCAPng).
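If you’d rather script this step than click through the GUI, here’s a rough Python sketch using scapy (assumed installed); the capture file name and UDP port are placeholders, so grab the port from the stream you identified in Wireshark:

```python
# Minimal sketch using scapy to pull one RTP stream out of a capture and write it
# out as a classic pcap. The input file name and UDP port are placeholders.
from scapy.all import rdpcap, wrpcap, UDP

INPUT_PCAP = "full_capture.pcap"   # placeholder: the original capture
RTP_DST_PORT = 20000               # placeholder: destination port of the stream you want

packets = rdpcap(INPUT_PCAP)
one_stream = [p for p in packets if UDP in p and p[UDP].dport == RTP_DST_PORT]

# wrpcap writes classic libpcap format (not pcapng), which is what we need here
wrpcap("a_party_stream.pcap", one_stream)
print(f"Wrote {len(one_stream)} packets")
```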
Advanced Mobile Location (AML) is being rolled out by a large number of mobile network operators to provide accurate caller location to emergency services, so how does it work, what’s going on and what do you need to know?
Recently we’ve been doing a lot of work on emergency calling in IMS, and meeting requirements for NG-112 / e911, etc.
This led me to seeing my first Advanced Mobile Location (AML) SMS in the wild.
For those unfamiliar, AML is a fancy text message that contains the caller’s location, accuracy, etc., and is passed to emergency services when you call them in some countries.
It’s sent automatically by your handset (if enabled) when making a call to an emergency number, and it provides the dispatch operator with your location information, including extra metadata like the accuracy of the location information, height / floor if known, and level of confidence.
Google has their own version of AML called ELS, which they claim is supported on more than 99% of Android phones (I’m unclear on what this means for HarmonyOS or other non-Google backed forks of Android), and Apple support for AML starts from iOS 11 onwards, meaning it’s supported on iPhones from the iPhone 5S onwards.
Call Flow
When a call is made to the PSAP based on the Emergency Calling Codes set on the SIM card or set in the OS, the handset starts collecting location information. The phone can pull this from a variety of sources, such as the WiFi SSIDs visible, but the best is going to be GPS or one of its siblings (GLONASS / Galileo).
Once the handset has a good “lock” on a location (or 20 seconds have passed since the call started), it bundles all of this information into an SMS and sends it to the PSAP as a regular old SMS.
The routing from the operator’s SMSc to the PSAP, and the routing from the PSAP to the dispatcher screen of the operator taking the call, is all up to implementation. For the most part the SMS destination is the emergency number (911 / 112) but again, this is dependent on the country.
Inside the SMS
The AML SMS is never seen by the user; in fact, the standard forbids it from showing in the “sent” items list of the SMS client.
On the wire, the SMS looks like any regular SMS, it can use GSM7 bit encoding as it doesn’t require any special characters.
Each attribute is a key / value pair, with semicolons (;) delineating the individual attributes, and = separating the key and the value.
If you’ve got a few years of staring at Wireshark traces in hex under your belt, it’ll be pretty easy to get the gist of what’s going on: we’ve got the header (A"ML=1") which denotes that this is AML, version 1.
After that we have the latitude (lt=), longitude (lg=), radius (rd=), time of positioning (top=), level of confidence (lc=), positioning method (pm=) with G for GNSS, W for WiFi signal, C for Cell, or N if a position was not available, and so on.
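If you want to pull these fields apart programmatically, a parser only takes a few lines. Here’s a rough Python sketch, using an invented sample payload rather than one from a real capture:

```python
# Minimal sketch of splitting the semicolon-delimited key=value attributes of an
# AML message body into a dict. The sample string and its values are made up.
sample = 'A"ML=1";lt=+54.76397;lg=-0.18305;rd=50;top=20230912094532;lc=68;pm=G'

def parse_aml(body: str) -> dict:
    """Split the semicolon-delimited key=value attributes into a dict."""
    attributes = {}
    for field in body.split(";"):
        if "=" in field:
            key, _, value = field.partition("=")
            attributes[key] = value
    return attributes

fields = parse_aml(sample)
print(fields["lt"], fields["lg"], fields["pm"])   # latitude, longitude, positioning method
```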
AML outside the ordinary
Roaming Scenarios
If an emergency occurs inside my house, there’s a good chance I know the address, and even if I don’t know my own address, it’s probably linked to the account holder information from my telco anyway.
AML and location reporting for emergency calls is primarily relied upon in scenarios where the caller doesn’t know where they’re calling from, and a good example of this would be a call made while roaming.
If I were in a different country, there’s a much higher likelihood that I wouldn’t know my exact address, however AML does not currently work across borders.
The standard suggests disabling AML SMS when roaming, which is not that surprising considering the current state of SMS transport between networks.
Without a SIM?
Without a SIM in the phone, calls can still be made to emergency services, however SMS cannot be sent.
That’s because the emergency calling standards for unauthenticated (SIM-less) emergency calls only cater for voice calls, not SMS.
This is a limitation, however it could be addressed by 3GPP in future releases if there is sufficient need.
HTTPS Delivery
The standard was revised to allow HTTPS as the delivery method for AML, with the same data carried in the body of an HTTP POST instead of an SMS.
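As a rough illustration only (the endpoint URL, content type and sample values here are assumptions, not taken from the standard), posting the same key/value payload over HTTPS might look something like this:

```python
# Rough illustration: the endpoint, content type and payload values are assumptions.
# The point is simply that the same key/value string rides in an HTTP POST body.
import requests   # third-party library, assumed installed

aml_payload = 'A"ML=1";lt=+54.76397;lg=-0.18305;rd=50;top=20230912094532;lc=68;pm=G'

response = requests.post(
    "https://aml.example.org/endpoint",   # placeholder FQDN; in reality it differs per country
    data=aml_payload,
    headers={"Content-Type": "text/plain"},
    timeout=10,
)
print(response.status_code)
```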
Implementation of this approach is however more complex, and leads to little benefit.
The operator must zero-rate the DNS, to allow the FQDN for this to be resolved (it resolves to a different domain in each country), and allow traffic to this endpoint even if the customer has data disabled (see what happens when your handset has PS Data Off), or has run out of data.
Due to the EU’s stance on Net Neutrality, “Zero Rating” is a controversial topic, which means most operators have limited implementations of it, so most fall back to SMS.
Other methods for sharing location of emergency calls?
In some upcoming posts we’ll look at the GMLC used for E911 Phase 2, and how the network can request the location from the handset.
A few years ago, I was out with a friend (who knows telecom history like no one else) who pointed at a patch of grass and some concrete and said “There’s an underground exchange under there”.
Being the telecommunications nerd that I am, I had a lot of follow up questions, and a very strong desire to see inside, but first, I’m going to bore you with some history.
I’ve written about RIMs – Remote Integrated Multiplexers before, but here’s the summary:
In the early ’90s, Australia was growing. Areas that had been agricultural or farmland were now being converted into housing estates and industrial parks, and they all wanted phone lines. While the planners at Telecom Australia had generally been able to cater for growth, suddenly plonking 400 homes in what once was a paddock presented a problem.
There were traditional ways to solve this of course; expanding the capacity at the exchange in the nearest town, trenching larger conduits, running 600 pair cables from the exchange to the housing estate, and distributing this around the estate, but this was the go-go-nineties, and Alcatel had a solution, the Remote Integrated Multiplexer, or RIM.
A RIM is essentially a stack of line cards in a cabinet by the side of the road, typically fed by one or more E1 circuits. Now Telecom Australia didn’t need to upgrade exchanges, trench new conduits or lay vast quantities of costly copper – Instead they could meet this demand with a green cabinet on the nature strip.
This was a practical and quick solution to increase capacity in these areas, and it actually worked quite well; RIMs served many Australian housing estates until the copper switch-off, many having been upgraded with “top-hats” to provide DSLAM services for these subscribers, or replaced with the evolved CMUX version. There are still RIMs alive in the CAN today; in areas serviced by NBN’s Fixed Wireless product, it’s not uncommon to see them still whirring away.
A typical RIM cabinet
But planning engineers realised some locations may not be suitable for a big green cabinet, so for these they developed the “Underground CAN Equipment Housing” (UCEH). It was designed as a solution for sensitive areas or locations where above-ground housing of RIMs would not be suitable – which translated to areas where councils would not let them put their big green boxes on the nature strips.
So in Narre Warren in Melbourne’s outer suburbs Telecom Research Labs staff built the first underground bunker to house the exchange equipment, line cards, a distribution frame and batteries – a scaled down exchange capable of serving 480 lines, built underground.
Naturally, an underground enclosure faced some issues, cooling and humidity being the big two.
The AC systems used to address this were kind of clunky, and while the underground exchanges were not as visually noisy as a street cabinet, they were audibly noisy, to the point you probably wouldn’t want to live next to one.
Sadly, for underground exchange enthusiasts such as myself, by 1996, OH&S classified these spaces as “Confined Spaces”, which made accessing them onerous, and it was decided that new facilities like this one would only be dug if there were no other options.
This wasn’t Telecom Australia’s first foray into underground equipment shelters; some of the microwave sites Telecom built in the desert put the active equipment in underground enclosures, covered over by a sea freight container holding all the passive gear.
Some of these sites still exist today, and I was lucky enough to see inside one, and let’s face it, if you’ve read this far you want to see what it looks like!
A large steel plate sunk into a concrete plinth doesn’t give away what sits below it.
A gentle pull and the door lifts open with a satisfying “woosh” – assisted by hydraulics that still seem to be working.
The power to the site has clearly been off for some time, but the sealed underground exchange is in surprisingly good condition, except for the musty smell of old electronics, which to be honest goes for any network site.
There’s an exhaust fan with a vent hose that hogs a good chunk of the ladder space, which feels very much like an afterthought.
Inside is pretty dark, to be expected I guess what with being underground, and not powered.
Inside is the power system (well, the rectifiers – the batteries were housed in a pit at the end of the UCEH entrance hatch, so inside there are no batteries), a distribution frame (MDF / IDF), and the Alcatel cabinets that are the heart of the RIM.
From the log books it appeared no one had accessed this in a very long time, but no water had leaked in, and all the equipment was still there, albeit powered off.
I’ve no idea how many time capsules like this still exist in the network today, but keep your eyes peeled and you might just spot one yourself!
I am connected on a VDSL line, not by choice, but here we are. DSL is many things, but consistent is not one of them, so I thought it’d be interesting to graph out the SNR and the line rate of the connection.
This is an NBN FTTN circuit, I run Mikrotiks for the routing, but I have a Draytek Vigor 130 that acts as a dumb modem and connects to the Tik.
Draytek exposes this info via SNMP, but the OIDs / MIBs are not part of the standard Prometheus snmp_exporter, so I’ve added them into snmp_exporter.yaml and restarted the snmp_exporter service.
On July 9, 1970 a $10 million program to link Australia from East to West via microwave was officially opened. Spanning over 2,400 kilometres, it connected Northam (to the east of Perth) to Port Pirie (north of Adelaide) and thus connected the automated telephone networks of Australia’s Eastern States and Western States together, to enable users to dial each other and share video live, across the country, for the first time.
In 1877, long before road and rail lines, the first telegraph line – a single iron wire – was spanned across the Nullarbor to link Australia’s Eastern states with Western Australia.
By 1930 an open-wire voice link had been established between the two sides of the continent. This open-wire circuit was upgraded and rebuilt several times, finally topping out at 140 channels, but by the 1960s Australian Post Office (APO) engineers knew a higher bandwidth (broadband carrier) system was required if Subscriber Trunk Dialling (STD) was ever to be implemented, so someone in Perth could dial someone in Sydney without going via an operator.
A few years earlier Melbourne and Sydney were linked via a 600 kilometre long coaxial cable route, so APO engineers spent months in the Nullarbor desert surveying the soil conditions and came to the conclusion that a coaxial cable (like the recently opened Melbourne to Sydney coaxial cable) was possible, but would be very difficult to achieve.
Instead, in 1966, Alan Hume, the Postmaster-General, announced that the decision had been made to construct a network of Microwave relay stations to span from South Australia to Western Australia.
In the 1930s microwave communications had spanned the English Channel, and by 1951 AT&T’s Long Lines microwave network had opened, spanning the continental United States. So by the 1960s microwave transmission networks were commonplace throughout Europe and the US, and the technology was thought to be fairly well understood.
But APO engineers soon realised that the unique terrain of the desert and the weather conditions of the Nullarbor had significant impacts on the transmission of radio waves. Again Research Labs staff went back to spend months in the desert measuring signal strength between test sites, to better understand how the harsh desert environment would impact the transmission and how to overcome these impediments.
The length of the link was one of the longest ever attempted, longer than the distance from London to Moscow.
In the end it was decided that 59 towers with heights from 22 meters to 76 meters were to be built, topped off with 3.6m tall microwave dishes for relaying the messages between towers.
The towers themselves were to be built in a zig-zag pattern, to prevent overshooting microwave signals from interfering with signals for the next station in the chain.
Due to the remote nature of the repeater sites, 43 of the 59 had to be fully self-sufficient in terms of power.
Initial planning saw the power requirements of the repeater sites limited to 500 watts. APO engineers looked at the prevailing wind patterns and determined that, combined with batteries, wind generators could keep these sites online year round without the need for additional power sources. Unfortunately this 500 watt power consumption target quickly tripled, and diesel generators were added to make up any shortfall on calm days.
The addition of the diesel gensets did not in any way reduce the need to conserve power – the more diesel consumed, the more trips across the desert to refuel the generators would be required, so the constant need to keep power to a minimum was one of the key constraints of the project.
The designs of these huts were reused after the project for extreme-temperature equipment housings, including one reused by Broadcast Australia seen in Marble Bar – the hottest town in Australia.
Active cooling systems (like air conditioning) were out of the question due to being too power hungry. APO engineers knew that the more efficient the equipment they could use, the less heat it would produce, and the more efficient the system would be, so solid state (transistorised) devices were selected for the 2GHz transmission equipment, instead of valves, which would have been more power-hungry and produced more heat.
The reduced power requirement of the fully transistorized radio equipment meant that wind-driven generators could provide satisfactory amounts of power provided that the wind characteristics of the site were suitable.
THE TELECOMMUNICATION JOURNAL OF AUSTRALIA / Volume 21 / Issue 21 / February 1971
So forced to use passive cooling methods, the engineers on the project designed the repeater huts to cleverly utilize ventilation and the orientation of the huts to keep them as cool as possible.
Construction was rough, but in just under 2 years the teams had constructed all 59 towers and the associated equipment huts to span the desert.
Average time to construct a tower was 6 days
Menzies, built in the 1980s as an additional spur
Tower in Kalgoorlie
When the system first opened for service in July 1970, live TV programs could be simulcast on both sides of the country, for the first time, and someone in Perth could pick up the phone and call someone in Melbourne directly (previously this would have gone through an operator).
PMG Engineers designed a case to transport the fragile equipment spares – That resided in the back of a Falcon XR Station Wagon
The system offered 1+1 redundancy, and capacity for 600 circuits, split across up to 6 radio bearers, and a bearer could be dedicated at times to support TV transmissions, carried on 5 watt (2 watt when modulated) carriers, operating at 1.9 to 2.3GHz.
By linking the two sides of Australia, Telecom opened up the ability to have a single time source distributed across the country: the station at Lyndhurst in Victoria generated the VNG time signal, which was carried across the link.
Looking down one of the towers
Unlike AT&T’s Long Lines network, which lasted until after MCI, deregulation and the breakup of the Bell System, the East-West link didn’t last all that long.
By 1981, Telecom Australia (No longer APO) had installed their first experimental optic fibre cable between Clayton and Springvale, and fibre quickly became the preferred method for broadband carrier circuits between exchanges.
By 1987, Melbourne and Sydney were linked by fibre, and the benefits of fibre were starting to be seen more broadly, and by 1989, just under 20 years since the original East-West Microwave system opened, Telecom Australia completed a 2373 kilometre long / 14 fibre cable from Perth to Adelaide, and Optus followed in 1993.
This effectively made the microwave system redundant. Fibre provided a higher bandwidth, more reliable service, that was far cheaper to operate due to decreased power requirements. And so piece by piece microwave hops were replaced with fibre optic cables.
I’m not clear on which was the last link to be switched off (If you do know please leave a comment or drop me a message), but eventually at some point in the late 1980s or early 1990s, the system was decommissioned.
Many of the towers still stand today and carry microwave equipment on them, but it is a far cry from what was installed in the late 1960s.
In the cellular world, subscribers are charged for data from the IP, transport and applications layers; this means you pay for the IP header, you pay for the TCP/UDP header, and you pay for the contents (the cat videos it contains).
This also means that if an operator moves mobile subscribers from IPv4 to IPv6, there’s an extra 20 bytes on every packet sent / received, which the customer is charged for – this is because the IPv6 header is 20 bytes longer than the IPv4 header.
In most cases, mobile subs don’t get a choice as to whether their connection is IPv4 or IPv6, but on a like for like basis, we can say that if a customer is on IPv6, every packet sent/received will consume an extra 20 bytes of data compared to IPv4.
This means subscribers use more data on IPv6, and this means they get charged for more data on IPv6.
For IoT applications, light users and PAYG users, this extra 20 bytes per packet could add up to something significant – But how much?
We can quantify this, but we’d need to know the number of packets sent on average, and the quantity of the data transferred, because the number of packets is the multiplier here.
So for starters I left a phone on the desk, registered to the network but just sitting in idle mode. This was an engineering phone from an OEM, used only for testing, so it doesn’t have any apps loaded, isn’t signed into any applications and isn’t checking anything in the background – so I thought I’d try something more realistic.
So to get a clearer picture, I chucked a SIM in my regular everyday phone I use personally, registered it to the cellular lab I have here. For the next hour I sniffed the GTP traffic for the phone while it was sitting on my desk, not touching the phone, and here’s what I’ve got:
Overall the PCAP includes 6,417,732 bytes of data, but this includes the transport and GTP headers, meaning we can drop everything above it in our traffic calculations.
For this I’ve got 14 bytes of Ethernet, 20 bytes of IP, 8 bytes of UDP and 5 bytes for TZSP (this is to copy the traffic from the eNB to my local machine); then we’ve got the transport from the eNB to the SGW: 14 bytes of Ethernet again, 20 bytes of IP, 8 bytes of UDP and 8 bytes of GTP, then the payload itself. Phew. All this means we can drop 97 bytes off every packet.
We have 16,889 packets and 6,417,732 bytes in total; minus 97 bytes from each packet gives us 1,638,233 bytes of headers to drop (~1.6MB), leaving a total of ~4.56 MB of traffic to/from the phone itself.
This means my Android phone consumes 4.5 MB of cellular data in an hour while sitting on the desk, with 16,889 packets in/out.
Okay, now we’re getting somewhere!
So now we can answer the question: if each of these 16,889 packets had been IPv6 rather than IPv4, we’d be adding another 20 bytes to each of them. 20 bytes x 16,889 packets gives 337,780 bytes (~0.3MB) of extra data, or about 7% overhead compared to IPv4.
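For anyone wanting to check the arithmetic, here’s the same calculation as a few lines of Python:

```python
# Re-running the numbers from the capture above.
packets = 16_889
total_capture_bytes = 6_417_732
capture_overhead_per_packet = 97          # Ethernet/IP/UDP/TZSP + Ethernet/IP/UDP/GTP

subscriber_bytes = total_capture_bytes - packets * capture_overhead_per_packet
extra_ipv6_bytes = packets * 20           # IPv6 header is 20 bytes longer than IPv4

print(f"Traffic to/from the phone: {subscriber_bytes / 1024**2:.2f} MB")        # ~4.56 MB
print(f"Extra if every packet were IPv6: {extra_ipv6_bytes / 1024**2:.2f} MB")  # ~0.32 MB
print(f"Relative overhead: {extra_ipv6_bytes / subscriber_bytes:.1%}")          # ~7%
```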
But before you go on about what an outrage this IPv6 transport is, being charged for those extra bytes, that’s only one part of the picture.
There’s a reason operators are finally embracing IPv6, and it’s not to put an extra 7% of traffic on the network (I think if you asked most capacity planners, they’d say they want data savings, not growth).
IPv6 is, for lack of a better term, less rubbish than IPv4.
There’s a lot of drivers for IPv6, and some of these will reduce data consumption. With IPv6 your stuff talks directly to the remote stuff; this means we don’t need to rely on NAT, so there’s no need for NAT keepalives or constantly re-opening sessions, which is going to save you data. If you’re running apps that need to keep a connection to somewhere alive, these data savings could negate your IPv6 overhead costs.
Will these potential data savings when using IPv6 outweigh the costs?
That’s going to depend on your use case.
If you’re extremely bandwidth / data constrained – for example, you have an IoT device on an NTN / satellite connection that was having to push data every X hours via IPv4 (because it had no public IP, so you couldn’t pull data from it) – then moving it to IPv6 so you can pull the data on demand will save you data. That’s a win for IPv6.
If you’re a mobile user watching YouTube, getting push notifications and using your phone like a normal human, probably not. If you’re using data like a normal user, you’ve probably got a sizeable data allowance that you don’t fully consume anyway, and the extra 20 bytes per packet will be nothing compared to the data used to watch a 2K video on your small phone screen.
Ask someone with headphones and a lanyard in the halls of a datacenter what transport DNS uses, and there’s a good chance the answer you’ll get back is UDP port 53.
But not always!
In scenarios where the DNS response is large (beyond 512 bytes) a DNS query will shift over to TCP for delivery.
So how does the client know when to shift the request to TCP? After all, the DNS server knows how big the response is, but the client doesn’t.
The answer is the Truncated flag, in the response.
The DNS server sends back a response, but with the Truncated bit set, as per RFC 1035:
TC TrunCation – specifies that this message was truncated due to length greater than that permitted on the transmission channel.
RFC 1035
Here’s an example of the truncated bit being set in the DNS response.
The DNS client, upon receiving a response with the truncated bit set, should run the query again, this time using TCP for the transport.
One prime example of this is the DNS NAPTR records used for DNS in roaming scenarios, where the response can often be quite large.
If these responses didn’t move to TCP, you’d run the risk of MTU mismatches dropping DNS. Given that half of my life has been spent debugging DNS issues and the other half debugging MTU issues, if I had to face MTU and DNS issues together, I’d be looking for a career change…
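If you want to see the UDP-to-TCP fallback in action, here’s a rough sketch using the dnspython library (assumed installed). The resolver and query name are placeholders, and keep in mind the 3GPP roaming domains only resolve against the GRX/IPX DNS, not the public internet:

```python
# Sketch of the UDP-then-TCP behaviour driven by the Truncated (TC) bit, using dnspython.
import dns.flags
import dns.message
import dns.query

server = "8.8.8.8"                             # placeholder resolver
qname = "epc.mnc001.mcc505.3gppnetwork.org"    # placeholder 3GPP-style name (GRX/IPX only)
query = dns.message.make_query(qname, "NAPTR")

response = dns.query.udp(query, server, timeout=3)
if response.flags & dns.flags.TC:
    # Truncated bit set: the answer didn't fit in UDP, so re-ask the same query over TCP
    response = dns.query.tcp(query, server, timeout=3)

print(response.answer)
```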
I had a question recently on LinkedIn regarding how to preference Voice over WiFi traffic so that a network engineer operating the WiFi network can ensure the best quality of experience for Voice over WiFi.
Voice over WiFi is underpinned by the ePDG – Evolved Packet Data Gateway (this is a fancy IPsec tunnel we authenticate to using the SIM to drop our traffic into the P-CSCF over an unsecured connection). To someone operating a WiFi network, the question is how do we prioritise the traffic to the ePDGs and profile it?
ePDGs can be easily discovered through a simple DNS lookup, once you know the Mobile Network Code and Mobile Country code of the operators you want to prioritise, you can find the IPs really easily.
ePDG addresses take the form epdg.epc.mncXXX.mccYYY.pub.3gppnetwork.org so let’s look at finding the IPs for each of these for the operators in a country:
The first step is nailing down the mobile network codes and mobile country codes of the operators you want to target; Wikipedia is a great source for this information. Here in Australia we have the Mobile Country Code 505, and the big 3 operators all support Voice over WiFi, so let’s look at how we’d find the IPs for each. Telstra has Mobile Network Code (MNC) 01; in 3GPP DNS we always pad network codes to 3 digits, so that’ll be 001, and the Mobile Country Code (MCC) for Australia is 505. So to find the IPs for Telstra we’d run an nslookup for epdg.epc.mnc001.mcc505.pub.3gppnetwork.org – the list of IPs that are returned are the IPs you’ll see Voice over WiFi traffic going to, and the IPs you should provide higher priority to:
The same rules apply in other countries, you’d just need to update the MNC/MCC to match the operators in your country, do an nslookup and prioritise those IPs.
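If you’d rather script this than run nslookup by hand, here’s a rough sketch using only the Python standard library. Telstra’s MNC is from above; the other operators’ MNC values are taken from public MNC/MCC listings and are worth double-checking for your own country:

```python
# Build and resolve the ePDG FQDNs for a few operators. MNC/MCC values are the
# Australian examples discussed above; swap in the operators relevant to you.
import socket

operators = {
    "Telstra":  ("001", "505"),
    "Optus":    ("002", "505"),   # assumed from public MNC listings
    "Vodafone": ("003", "505"),   # assumed from public MNC listings
}

for name, (mnc, mcc) in operators.items():
    fqdn = f"epdg.epc.mnc{mnc}.mcc{mcc}.pub.3gppnetwork.org"
    try:
        results = socket.getaddrinfo(fqdn, None)
        ips = sorted({r[4][0] for r in results})
        print(f"{name}: {fqdn} -> {', '.join(ips)}")
    except socket.gaierror:
        print(f"{name}: {fqdn} did not resolve")
```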
Generally these IPs are pretty static, but a certain level of maintenance is required to keep this list up to date by rechecking periodically.
There’s an old joke that the great thing about standards is that there are so many to choose from.
SMS wasn’t there from the start of GSM, but within a year of the inception of 2G we had SMS, and we’ve had SMS, almost totally unchanged, ever since.
In a recent Twitter exchange, I was asked, what’s the best way to transport SMS? As always the answer is “it depends” so let’s take a look together at where we’ve come from, where we are now, and how we should move forward.
How we got Here
Between 2G and 3G SMS didn’t change at all, but the introduction of 4G (LTE) caused a bit of a rethink regarding SMS transport.
Early builders of LTE (4G) networks launched their 4G offerings without 4G Voice support (VoLTE), with the idea that networks would “fall back” to using 2G/3G for voice calls.
This meant users got fast data, but to make or receive a call they relied on falling back to the circuit switched (2G/3G) network – Hence the name Circuit Switched Fallback.
Falling back to the 2G/3G network for a call was one thing, but some smart minds realised that if a phone had to fall back to a 2G/3G network every time a subscriber sent a text (not just calls) – And keep in mind this was ~2010 when SMS traffic was crazy high; then that would put a huge amount of strain on the 2G/3G layers as subs constantly flip-flopped between them.
To get around this, the SGs-AP interface was introduced, linking the LTE core (MME) to the 2G/3G core (MSC). The SGs-AP interface has two purposes: one, it can tell a phone on 4G to fall back to 2G/3G when it’s got an incoming call; and two, it can send and receive SMS without the phone leaving 4G.
SMS traffic over this interface is sometimes described as SMS-over-NAS, as it’s transported over a signaling channel to the UE.
This also worked when roaming, as the MSC from the 2G/3G network was still used, so SMS delivery worked the same when roaming as if you were in the home 2G/3G network.
Enter VoLTE & IMS
Of course when VoLTE entered the scene, it also came with its own option for delivering SMS to users, using IP rather than the NAS signaling. This removed the reliance on a link to a 2G/3G core (MSC) to make calls and send texts.
This was great because it allowed operators to build networks without any 2G/3G network elements and build a fully standalone LTE only network, like Jio, Rakuten, etc.
In roaming scenarios, S8 Home Routing for VoLTE enabled SMS to be handled when roaming the same way as voice calls, which made SMS roaming a doddle.
4G SMS: SMS over IP vs SMS over NAS
So if you’re operating a 4G network, should you deliver your SMS traffic using SMS-over-IP or SMS-over-NAS?
Generally, if you’ve been evolving your network over the years, you’ve got an MSC and a 2G/3G network, and you may still do CSFB, so you’ve probably ended up using SMS over NAS via the SGs-AP interface. This method still relies on “the old ways” to work, which is fine until a discussion starts around sunsetting the 2G/3G networks, at which point you’d need to move calling to VoLTE, and SMS over NAS is a bit of a mess when it comes to roaming.
Greenfield operators generally opt for SMS over IP from the start, but this has its own limitations: SMS over IP has awful efficiency, which makes it unsuitable for bandwidth-constrained NB-IoT applications; support for SMS over IP is generally limited to more expensive chipsets, so the bargain-basement chips used for IoT often don’t support it either; and enabling VoLTE comes with its own set of integration challenges.
5G enters the scene (Nsmsf_SMService)
5G rolled onto the scene with the opportunity to remove the SMS over NAS option and rely purely on SMS over IP (IMS), forcing the industry to standardise on one option; alas, this did not happen.
This added another option for SMS delivery dependent on the access network used, and the Nsmsf_SMService interface does not support roaming.
Of course if you are using Voice over NR (VoNR) then like VoLTE, SMS is carried in a SIP message to the IMS, so this negates the need for the Nsmsf_SMService.
2G/3G Shutdown – Diameter to replace SGs-AP (SGd)
With the 2G/3G shutdown in the US, operators who had up until this point been relying on SMS-over-NAS using the SGs-AP interface back to their MSCs were forced to decide how to route SMS traffic after the MSCs were shut down.
This landed with SMS-over-Diameter, where the 4G core (MME) communicates over Diameter with the SMSc.
This has been adopted by all the US operators, but we’re not seeing it widely deployed in the rest of the world.
State of Play
| Option | Conditions | Notes |
|---|---|---|
| MAP | 2G/3G only | Relies on SS7 signaling and is very old. Supports roaming. |
| SGs-AP (SMS-over-NAS) | 4G only, relies on 2G/3G | Needs an MSC to be present in the network (generally because you have a 2G/3G network and have not deployed VoLTE). Supports limited roaming. |
| SMS over IP (IMS) | 4G / 5G | Not supported on 2G/3G networks. Relies on an IMS-enabled handset and network. Supports roaming in all S8 Home Routed scenarios. Device support limited, especially for IoT devices. |
| Diameter SGd | 4G only / 5G NSA | Only works on 4G or 5G NSA. Better device support than SMS over IP (IMS). Supports roaming in some scenarios. |
| Nsmsf_SMService | 5G standalone only | Only works on 5GC. Doesn’t support roaming. |
The convoluted world of SMS delivery options
A Way Forward:
While the SMS payload hasn’t changed in the past 31 years, how it is transported has opened up a lot of potential options for operators to use, with no clear winner, while SMS revenues and traffic volumes have continued to fall.
For better or worse, the industry needs to accept that SMS over NAS is only an option to use when there is no IMS, and that in order to decommission 2G/3G networks, IMS needs to be embraced; so supporting SMS over IP (IMS) in all future networks seems like the simple, logical way forward.
And with that clear path forward, we add in another wildcard…
But when you’ve only got a finite amount of bandwidth and massive latencies to contend with (think NTN / satellite links), the all-IP architecture of IMS (VoLTE / VoNR) and its woeful inefficiency starts to really sting.
Of course there are potential workarounds here. Robust Header Compression (ROHC) can shrink this down, but it’s still going to rely on the TCP three-way handshake, TCP keepalive timers and IMS registrations, which in turn can starve the radio resources of the satellite link.
Even with SMS over 30 years old, we can still expect it to be a part of networks for years to come, even as WhatsApp / iMessage, etc, offer enhanced services. As to how it’s transported and the myriad of options here, I’m expecting that we’ll keep seeing a multi-transport mix long into the future.
For a simple, cut-and-dried 4G/5G-only network, IMS and SMS over IP makes the most sense, but for anything outside of that, you’ve got a toolbox of options to build a solution that best meets your needs.
I needed to have both legs of the B2BUA bridge call through FreeSWITCH using the same Call-ID (long story), and went down the rabbit hole of looking for how to do this.
A post from 15 years ago on the mailing list from Anthony Minessale said he’d added a “sip_outgoing_call_id” variable for this, and I found the commit, but it doesn’t work – more digging shows this variable disappeared somewhere in history.
But by looking at what it changed I found sip_invite_call_id does the same thing now, so if you want to make both legs use the same Call-ID here ya go:
Even before 5G was released, there was an arms race to claim the “fastest” speeds on LTE, NSA and SA networks, and it has continued, with pretty much every operator claiming a “first” or a “fastest”.
I myself have the fastest 5G network available*, but I thought I’d look at how big the values we can put in for speed actually are: these are the Maximum Bitrate values (like AMBR) we can set on an APN/DNN, or on a Charging Rule.
*Measurement is of the fastest 5G network in an eastward facing office, operated by a person named Nick, in a town in Australia. Other networks operated by people other than those named Nick in eastward facing office outside of Australia were not compared.
The answer for Release 8 LTE is 4,294,967,294 bits per second, aka ~4,295 Mbps, or 4.295 Gbps.
Not bad, but why this number?
The Max-Requested-Bandwidth-DL AVP tells the PGW the max throughput allowed in bits per second. It’s an Unsigned32, so the maximum value is 4,294,967,294, hence the value.
But come Release 15, some bright spark thought we may in the not too distant future break this barrier, so how do we go above it?
The answer was to bolt on another AVP – the “Extended-Max-Requested-BW-DL” AVP (554) was introduced. You might think that means the max speed now becomes 2x 4.295 Gbps, but that’s not quite right – the units were shifted.
This AVP isn’t measuring bits per second; it’s measuring kilobits per second.
So the standard Max-Requested-Bandwidth-DL AVP gives us 4.295 Gbps, while the Extended-Max-Requested-BW-DL gives us 4,295 Gbps.
We add the Extended-Max-Requested-BW-DL AVP (4,295 Gbps) onto the Max-Requested-Bandwidth-DL AVP (4.295 Gbps), giving us a total of 4,299.3 Gbps.
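Here’s that maths as a quick Python check, using the Unsigned32 value quoted above:

```python
# Checking the maths on the two AVPs.
MAX_AVP_VALUE = 4_294_967_294   # the value quoted above for these AVPs

max_requested_bw_dl = MAX_AVP_VALUE            # bits per second
extended_max_requested_bw_dl = MAX_AVP_VALUE   # kilobits per second

total_bps = max_requested_bw_dl + extended_max_requested_bw_dl * 1000

print(f"Max-Requested-Bandwidth-DL:   {max_requested_bw_dl / 1e9:.3f} Gbps")               # 4.295 Gbps
print(f"Extended-Max-Requested-BW-DL: {extended_max_requested_bw_dl * 1000 / 1e9:,.0f} Gbps")  # ~4,295 Gbps
print(f"Combined:                     {total_bps / 1e9:,.1f} Gbps")                        # ~4,299.3 Gbps
```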
I started off just updating the SPN, OPN, etc, etc, but I had a suspicion there were still references to the old brand name elsewhere.
I confirmed this pretty easily with Wireshark. First I started a trace of the APDUs by enabling capture on a USB interface:
modprobe usbmon
Then we need to find where our card reader is connected; running 'lsusb' lists all the USB devices, and you can see mine is on Bus 1, Device 49.
Then I fired up Wireshark and selected USB Bus 01 to capture all the USB traffic on the bus.
Then I ran the “export” command in PySIM to read the contents of all the files on the SIM, and jumped back over to Wireshark. (PySIM decodes most files but not all – Whereas this method just looks for the bytes containing the string)
From the search menu in Wireshark I searched the packet bytes for the string containing the old brand name, and found two more EFs I’d missed.
For anyone playing along at home, using this method I found references to the old brand name in SMSP (which contains the network name) and ADN (Which had the customer support number as a contact with the old brand name).
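If you want to do the same hunt outside of Wireshark, a rough Python sketch like this will find plain-ASCII occurrences of a string anywhere in the capture file. The file name and brand string are placeholders, and unlike Wireshark’s search it reports byte offsets rather than packets or EFs:

```python
# Search the raw capture file for a byte string. Only catches strings stored as
# plain ASCII (not GSM7-packed text), so it's a rough check rather than a decoder.
from pathlib import Path

CAPTURE = Path("sim_usb_capture.pcap")   # placeholder: the USB/APDU capture from Wireshark
NEEDLE = b"OldBrand"                     # placeholder: the string you're hunting for

data = CAPTURE.read_bytes()
offset = data.find(NEEDLE)
while offset != -1:
    print(f"Found '{NEEDLE.decode()}' at byte offset {offset}")
    offset = data.find(NEEDLE, offset + 1)
```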
If you’re typing on a full size keyboard there’s a good chance that to your right, there’s a number pad.
The number 5 is in the middle – That’s to be expected, but is 1 in the top left or bottom left?
Being derived from an adding machine keypad, the number pad on a keyboard has the 1 in the bottom left. However, in the 1950s when telephone keypads were being introduced, only folks who worked in accounting had adding machines.
So when it came time to work out the best layout, the result we have today was determined through a stack of research and testing by the Human Factors Engineering department at Bell Labs, who studied the most efficient layout of keys and tested focus groups to find the layout that provided the best speed and accuracy.
That landed with the 1 in the top left, and that’s what we still have today.
Oddly, ATM and card terminals opted to use the telephone layout, while computer number pads stuck with the adding machine layout.
Note: All information contained here is sourced from: Photos provided by NBNco’s press pages, Googling part numbers from these photos, and public costing information.
This post covers the specifics and capabilities of NBNco’s FTTN solution, and is the result of some internet sleuthing.
If some of the info in here is now out of date, I’d love to know, let me know in the comments or drop me an email and I’ll update it.
FTTN in Numbers
A total of 24,544 nodes were deployed upon completion of the rollout. Each node is provisioned with 384 subscriber ports.
The hardware has 10Gbps of capacity to each 48-port line card, equating to a theoretical maximum of 208Mbps per subscriber.
Construction costs were $2.311 billion and hardware costs were $1.513 billion.
For the hardware this equates to $61,644 per node, or $160 per subscriber line connected (each node is provisioned with 384 ports).
Full cost for node including hardware, construction and provisioning is $244,150 per node, which is $635 per port.
To operate the FTTN infrastructure costs $709 million per year (Made up of costs such as power, equipment servicing and spares). This equates to $28k per node per annum, or $75 per subscriber. (This does not take into account other costs such as access to the copper, transmission network, etc, just the costs to have the unit powered on the footpath.)
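Here are those per-node and per-port figures reproduced as a quick Python calculation, using the publicly reported totals above:

```python
# The per-node and per-port figures above, reproduced from the publicly reported totals.
nodes = 24_544
ports_per_node = 384

hardware_cost = 1.513e9        # total hardware spend
full_cost_per_node = 244_150   # hardware + construction + provisioning, per node
annual_opex = 709e6            # power, servicing, spares etc. per year

print(f"Hardware per node:      ${hardware_cost / nodes:,.0f}")                   # ~$61,644
print(f"Hardware per port:      ${hardware_cost / nodes / ports_per_node:,.2f}")  # ~$160
print(f"Full cost per port:     ${full_cost_per_node / ports_per_node:,.2f}")     # ~$635
print(f"Opex per node per year: ${annual_opex / nodes:,.0f}")                     # ~$28k
print(f"Opex per port per year: ${annual_opex / nodes / ports_per_node:,.2f}")    # ~$75
```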
Overview
Inside the FTTN cabinets is an Alcatel Lucent (now Nokia) ISAM 7330 mounted on its side.
On the inside left of the door is an optic fibre tray where the transmission links come into the cabinet.
On the extreme left is a custom panel. It contains I/Os that are fed to the 7330, such as door open sensor, battery monitoring, AC power in, SPD and breaker.
Connection to subscriber lines happens on a frame at the end of the cabinet.
Alcatel Lucent ISAM 7330 FTTN
NBNco’s nodes are made up of an Alcatel Lucent (now Nokia) ISAM (Intelligent Services Access Manager) 7330 FTTN rack, mounted on its side.
| Slot | Type | Function |
|---|---|---|
| 1 | GFC (General Facilities Card) | Power and alarm management |
| 2 | NT Slot (NANT-E Card) | Main processing and transmission |
| 3 | NTIO Slot (NDPS-C Card) | VDSL vectoring number-crunching |
| 4 | NT Slot (Free) | Optional (unused) backup main processing and transmission |
| 5-12 | LT (NDLT-F) | 48 port VDSL subscriber DSLAM interfaces |

Slot numbering here is just counting left to right; the ALU documentation uses different numbering.
First up is the GFC (General Facilities Card) which handles alarm input / output, and power distribution. This connects to the custom IO panel on the far left of the cabinet, meaning the on-board IO ports aren’t all populated as it’s handled by the IO panel. (More on that later)
Next up is the first NT slot; there are two on the 7330, but in NBN’s case only one is used. The second can be used for redundancy if one of the cards were to fail, but it seems this option has not been selected. In the first and only populated NT slot is an NANT-E card (Combined NT Unit E), which handles transmission and main processing.
All the ISAM NANT cards support RIPv2! But only the NANT-E card also supports BGP – interestingly, BGP isn’t available on all the NANT cards.
To the right of that is the NTIO slot, which has a NDPS-C card, which handles the vector processing for VDSL.
Brief overview of vectoring: adding vectoring to DSL signals allows the noise on subscriber loops to be modelled and then cancelled out with an anti-phase signal matching that of the noise.
The vectoring in VDSL relies on pretty complex number crunching, as the DSLAM has to constantly process the vectoring coefficients, which are different for each line and can change based on the conditions of the subscriber loop. To do this the NDPS-C has two roles: the Vectoring Control Entity performs non-real-time calculations of vectoring coefficients and handles the joining and leaving of vectored VDSL2 lines, while the Vectoring Processor performs the real-time matrix calculations based on crosstalk correction samples for the VDSL symbols collected from the subscriber lines. The NDPS-C has a Twinax connection to every second LT card.
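As a toy illustration of the anti-phase idea (this is not how the NDPS-C actually implements it, just the principle with made-up numbers):

```python
# Toy illustration: if you know the crosstalk coupling coefficient, you can subtract
# a modelled copy of the disturber's signal from what the victim line received.
import numpy as np

rng = np.random.default_rng(0)
samples = 1000

victim_signal = rng.standard_normal(samples)     # what the victim line is carrying
disturber_signal = rng.standard_normal(samples)  # the neighbouring line's signal
coupling = 0.3                                   # crosstalk coefficient (assumed known)

received = victim_signal + coupling * disturber_signal    # victim plus crosstalk
cancelled = received - coupling * disturber_signal        # subtract the modelled crosstalk

print("Residual crosstalk power:", np.mean((cancelled - victim_signal) ** 2))  # ~0
```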
After the NTIO slot is the unused NT slot.
Finally we have the 8 LT slots for line cards, which for FTTN are NDLT-F 48 port line cards.
The 8 card slots allows 384 subscriber lines per node.
These are the cards the actual subscriber lines ultimately connect to. With 10Gbps available from the NT to each LT card of 48 subscribers, that works out to a maximum theoretical throughput of 208 Mbps per subscriber.
POTS overlay is supported; this allowed VF services to coexist on the same copper during the rollout. M / X pairs are no longer added inline on new connections. (More on that in Cabling.)
Power & Environment
The 7330 has a 40 amp draw at -48v, meaning the unit consumes up to 1,920 watts.
The -48v supply is provided by 2x Eltek Flatpack2 rectifiers, each providing 1kW.
These can be configured to provide 1kW with redundancy (to protect against the failure of one of the Flatpack2 units), or 2kW with no redundancy, which is what is used here.
On the extreme left is a custom panel. It contains alarm I/Os that are fed to the 7330, such as door open sensor, battery monitoring, etc.
It is also the input for AC power, the surge protection device and breakers.
I did have some additional information on the batteries used and the power calculations, however NBNco’s security team have asked that this be removed.
Cabling
Incoming transmission fiber comes in on NBNco’s green ribbon fibre, which terminates on a break out tray on the left hand side wall of the cabinet. Spliced inside the tray is a duplex LC pigtail for connecting the SMF to the 7330. I don’t have the specifics on the optics used.
Subscriber lines come in via an IDC distribution frame (Quante IDS) on the right hand side end of the cabinet, accessed through a separate door.
This frame is referred to as the CCF – Cross Connect Frame.
There are two sets of blocks on the CCF, termination of ‘X’ and ‘C’ Pairs.
‘X’ pairs are the VF pairs (PSTN lines) connecting to the pillar, where they are jumpered onto the ‘M’ pairs that run back to the serving exchange.
‘C’ pairs carry the combined VDSL & VF services to the pillar, where they are jumpered onto the ‘O’ pairs which run out to the customer’s premises.
Short one: the other day I needed to add a Network Appearance on an SS7 M3UA linkset.
Network Appearances on M3UA links are kinda like a port number, in that they allow you to distinguish traffic to the same point code, but handled by different logical entities.
When I added the NA parameter on the Linkset nothing happened.
If you’re facing the same you’ll need to set:
cs7 multi-instance
In the global config (this is the part I missed).
Then select the M3UA linkset you want to change and add the network-appearance parameter:
network-appearance 10
And bingo, you’ll start seeing it in your M3UA traffic: