Category Archives: Python

CGrateS – Actions & Action Plans

In our last post we added a series of different balances to an account; these were actions we took via the API specifically to add a balance.

But there are a lot more actions we may want to take beyond just adding balance.

CGrateS has the concept of “Actions” which are, as the name suggests, things we want to do to the system.

Some example Actions would be:

  • Adding / Deducting / Resetting a balance
  • Adding a CDR log
  • Enabling / Disabling an account
  • Sending an HTTP POST request or email notification
  • Deleting / Suspending an account
  • Transferring balances

We can run these Actions on a timed basis, or when an event is triggered, and we can group Actions together to run several at once. With an ActionTrigger we can fire these Actions not just by sending an API request, but based on the state of the subscriber / account.

Let’s look at some examples,

We can define an Action named “Action_Monthly_Fee” to debit $12 from the monetary balance of an account, and add a CDR with the name “Monthly Account Fee” when it does so.
We can use ActionTriggers to run this every month on the account automatically.

We can define an Action named “Usage_Warning_10GB” to send an email to the Account owner to inform them they’ve used 10GB of usage, and use ActionTriggers to send this when the customer has used 10GB of their *data balance.

Using Actions

Note: The Python script I’ve used with all the examples in this post is available on GitHub here.

Let’s start by defining an Account, just as we have before:

# Create the Account object inside CGrateS
Account = "Nick_Test_123"
Create_Account_JSON = {
    "method": "ApierV2.SetAccount",
    "params": [
        {
            "Tenant": "cgrates.org",
            "Account": str(Account)
        }
    ]
}
print(CGRateS_Obj.SendData(Create_Account_JSON))

Let’s start basic; to sweeten the deal for new Accounts, we’ll give them $99 of balance to use in the first month they have the service. Rather than hitting the AddBalance API, we’ll define an Action named “Action_Add_Signup_Bonus” to credit $99 of monetary balance to an account.

If you go back to our last post, you should know what we’d need to do to add this balance manually with the AddBalance API, but let’s look at how we can create the same balance add functionality using Actions:

#Add a Signup Bonus of $99 to the account with type *monetary expiring a month after it's added
Action_Signup_Bonus = {
    "id": "0",
    "method": "ApierV1.SetActions",
    "params": [
        {
          "ActionsId": "Action_Add_Signup_Bonus",
          "Actions": [
              {
                  "Identifier": "*topup",
                  "BalanceId": "Balance_Signup_Bonus",
                  "BalanceUuid": "",
                  "BalanceType": "*monetary",
                  "Directions": "*out",
                  "Units": 99,
                  "ExpiryTime": "*month",
                  "Filter": "",
                  "TimingTags": "",
                  "DestinationIds": "",
                  "RatingSubject": "",
                  "Categories": "",
                  "SharedGroups": "",
                  "BalanceWeight": 1200,
                  "ExtraParameters": "",
                  "BalanceBlocker": "false",
                  "BalanceDisabled": "false",
                  "Weight": 10
              }
]}]}
pprint.pprint(CGRateS_Obj.SendData(Action_Signup_Bonus))

Alright, this should look pretty familiar if you’ve just come from Account Balances.
You’ll notice we’re no longer calling SetBalance; we’re now calling SetActions to create the ActionsId with the name “Action_Add_Signup_Bonus”.
Inside “Action_Add_Signup_Bonus” is the list of actions we’ll perform when “Action_Add_Signup_Bonus” is called.
We can define multiple actions, but for now we’ve only got one action defined; its Identifier (which defines what the action does) is set to *topup to add balance.
As you probably guessed, we’re triggering a top up, and setting the BalanceId, BalanceType, Units, ExpiryTime and BalanceWeight just as we would using SetBalance to add a balance.

So how do we use the Action we just created? Well, there’s a lot of options, but let’s start with the most basic – Via the API:

# Trigger ExecuteAction
Account_Action_trigger_JSON = {"method": "APIerSv1.ExecuteAction", "params": [
    {"Tenant": "cgrates.org", "Account": str(Account), "ActionsId": "Action_Add_Signup_Bonus"}]}
pprint.pprint(CGRateS_Obj.SendData(Account_Action_trigger_JSON))

Boom, we’ve called the ExecuteAction API call, to execute the Action named “Action_Add_Signup_Bonus“.

We can call GetAccount again and check the results:

# Get Account Info
pprint.pprint(CGRateS_Obj.SendData({'method': 'ApierV2.GetAccount', 'params': [
              {"Tenant": "cgrates.org", "Account": str(Account)}]}))
{'method': 'ApierV2.GetAccount', 'params': [{'Tenant': 'cgrates.org', 'Account': 'Nick_Test_123'}]}
{'error': None,
 'id': None,
 'result': {'ActionTriggers': None,
            'AllowNegative': False,
            'BalanceMap': {'*monetary': [{'Blocker': False,
                                          'Categories': {},
                                          'DestinationIDs': {},
                                          'Disabled': False,
                                          'ExpirationDate': '2023-11-15T10:27:52.865119544+11:00',
                                          'Factor': None,
                                          'ID': 'Balance_Signup_Bonus',
                                          'RatingSubject': '',
                                          'SharedGroups': {},
                                          'TimingIDs': {},
                                          'Timings': None,
                                          'Uuid': '01cfb471-ba38-453a-b0e2-8ddb397dfe9c',
                                          'Value': 99,
                                          'Weight': 1200}]},
            'Disabled': False,
            'ID': 'cgrates.org:Nick_Test_123',
            'UnitCounters': None,
            'UpdateTime': '2023-10-15T10:27:52.865144268+11:00'}}

Great start!

Making Actions Useful

Well congratulations, we took something we previously did with one API call (SetBalance), and we did it with two (SetActions and ExecuteAction)!

But now let’s start paying efficiency dividends.

When we add a balance, let’s also add a CDR log event so we’ll know the account was credited with the balance when we call the GetCDRs API call.

We’d just modify our SetActions to include an extra step:

Action_Signup_Bonus = {
    "id": "0",
    "method": "ApierV1.SetActions",
    "params": [
        {
          "ActionsId": "Action_Add_Signup_Bonus",
          "Actions": [
              {
                  "Identifier": "*topup",
                  "BalanceId": "Balance_Signup_Bonus",
...
              }, 
              {
                  "Identifier": "*cdrlog",
                  "BalanceId": "",
                  "BalanceUuid": "",
                  "BalanceType": "*monetary",
                  "Directions": "*out",
                  "Units": 0,
                  "ExpiryTime": "",
                  "Filter": "",
                  "TimingTags": "",
                  "DestinationIds": "",
                  "RatingSubject": "",
                  "Categories": "",
                  "SharedGroups": "",
                  "BalanceWeight": 0,
                  "ExtraParameters": "{\"Category\":\"^activation\",\"Destination\":\"Your sign up Bonus\"}",
                  "BalanceBlocker": "false",
                  "BalanceDisabled": "false",
                  "Weight": 10
              }
]}]}
pprint.pprint(CGRateS_Obj.SendData(Action_Signup_Bonus))

Boom, now we’ll get a CDR created when the Action is triggered.

But let’s push this a bit more and add some more steps in the Action:

As well as adding balance and putting in a CDR to record what we did, let’s also send a notification to our customer via an HTTP API (BYO customer push notification system) and log to Syslog what’s going on.

# Add a Signup Bonus of $99 to the account with type *monetary expiring a month after it's added
Action_Signup_Bonus = {
    "id": "0",
    "method": "ApierV1.SetActions",
    "params": [
        {
          "ActionsId": "Action_Add_Signup_Bonus",
          "Actions": [
              {
                  "Identifier": "*topup",
                  "BalanceId": "Balance_Signup_Bonus",
                  "BalanceUuid": "",
                  "BalanceType": "*monetary",
                  "Directions": "*out",
                  "Units": 99,
                  "ExpiryTime": "*month",
                  "Filter": "",
                  "TimingTags": "",
                  "DestinationIds": "",
                  "RatingSubject": "",
                  "Categories": "",
                  "SharedGroups": "",
                  "BalanceWeight": 1200,
                  "ExtraParameters": "",
                  "BalanceBlocker": "false",
                  "BalanceDisabled": "false",
                  "Weight": 90
              },
              {
                  "Identifier": "*cdrlog",
                  "BalanceId": "",
                  "BalanceUuid": "",
                  "BalanceType": "*monetary",
                  "Directions": "*out",
                  "Units": 0,
                  "ExpiryTime": "",
                  "Filter": "",
                  "TimingTags": "",
                  "DestinationIds": "",
                  "RatingSubject": "",
                  "Categories": "",
                  "SharedGroups": "",
                  "BalanceWeight": 0,
                  "ExtraParameters": "{\"Category\":\"^activation\",\"Destination\":\"Your sign up Bonus\"}",
                  "BalanceBlocker": "false",
                  "BalanceDisabled": "false",
                  "Weight": 80
              },
              {
                  "Identifier": "*http_post_async",
                  "ExtraParameters": "http://10.177.2.135/example_endpoint",
                  "ExpiryTime": "*unlimited",
                  "Weight": 70
              },
              {
                  "Identifier": "*log",
                  "Weight": 60
              }
          ]}]}
pprint.pprint(CGRateS_Obj.SendData(Action_Signup_Bonus))

Phew! That’s a big action, but if we execute the action again using ExecuteAction, we’ll get all these things happening at once.

Okay, now we’re getting somewhere!

ActionPlans

Having an Action we can trigger manually via the API is one thing, but being able to trigger it automatically is where it really comes into its own.

Let’s define an ActionPlan, that is going to call our Action named “Action_Add_Signup_Bonus” as soon as the ActionPlan is assigned to an Account.

# Create ActionPlan using SetActionPlan to trigger the Action_Signup_Bonus ASAP
SetActionPlan_Signup_Bonus_JSON = {
    "method": "ApierV1.SetActionPlan",
    "params": [{
        "Id": "ActionPlan_Signup_Bonus",
        "ActionPlan": [{
            "ActionsId": "Action_Add_Signup_Bonus",
            "Years": "*any",
            "Months": "*any",
            "MonthDays": "*any",
            "WeekDays": "*any",
            "Time": "*asap",
            "Weight": 10
        }],
        "Overwrite": True,
        "ReloadScheduler": True
    }]
}
pprint.pprint(CGRateS_Obj.SendData(SetActionPlan_Signup_Bonus_JSON))

So what have we done here? We’ve made an ActionPlan named “ActionPlan_Signup_Bonus”, which, when associated with an account, will run the Action “Action_Add_Signup_Bonus” as soon as it’s tied to the account, thanks to the Time value of “*asap”.

Now if we create or update an Account using the SetAccount method, we can set the ActionPlanIds to reference our “ActionPlan_Signup_Bonus” and it’ll be triggered straight away.

# Create the Account object inside CGrateS
Create_Account_JSON = {
    "method": "ApierV2.SetAccount",
    "params": [
        {
            "Tenant": "cgrates.org",
            "Account": str(Account),
            "ActionPlanIds": ["ActionPlan_Signup_Bonus"],
            "ActionPlansOverwrite": True,
            "ReloadScheduler":True
        }
    ]
}
print(CGRateS_Obj.SendData(Create_Account_JSON))

Now if we were to run a GetAccount API call, we’ll see the balance on the Account that was created by the Action “Action_Add_Signup_Bonus”, which in turn was triggered by the ActionPlan assigned to the account:

{'method': 'ApierV2.GetAccount', 'params': [{'Tenant': 'cgrates.org', 'Account': 'Nick_Test_123'}]}
{'error': None,
 'id': None,
 'result': {'ActionTriggers': None,
            'AllowNegative': False,
            'BalanceMap': {'*monetary': [{'Blocker': False,
                                          'Categories': {},
                                          'DestinationIDs': {},
                                          'Disabled': False,
                                          'ExpirationDate': '2023-11-16T12:41:02.530985381+11:00',
                                          'Factor': None,
                                          'ID': 'Balance_Signup_Bonus',
                                          'RatingSubject': '',
                                          'SharedGroups': {},
                                          'TimingIDs': {},
                                          'Timings': None,
                                          'Uuid': '7bdbee5c-0888-4da2-b42f-5d6b8966ee2d',
                                          'Value': 99,
                                          'Weight': 1200}]},
            'Disabled': False,
            'ID': 'cgrates.org:Nick_Test_123',
            'UnitCounters': None,
            'UpdateTime': '2023-10-16T12:41:12.7236096+11:00'}}

But here’s where it gets interesting: in the ActionPlan we just defined, the Time was set to “*asap”, which means the Action is triggered as soon as the plan is assigned to the account. But if we set the Time value to “*monthly”, the Action would get triggered every month, or *every_minute to trigger every minute, or *month_end to trigger at the end of every month.
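As a sketch, here’s what the “Action_Monthly_Fee” example from the start of this post might look like as an ActionPlan (this assumes you’ve already created an Action named “Action_Monthly_Fee” with SetActions; everything else mirrors the ActionPlan we just defined, with only the Time value changed):

# Run Action_Monthly_Fee every month on any account this ActionPlan is assigned to
SetActionPlan_Monthly_Fee_JSON = {
    "method": "ApierV1.SetActionPlan",
    "params": [{
        "Id": "ActionPlan_Monthly_Fee",
        "ActionPlan": [{
            "ActionsId": "Action_Monthly_Fee",
            "Years": "*any",
            "Months": "*any",
            "MonthDays": "*any",
            "WeekDays": "*any",
            "Time": "*monthly",
            "Weight": 10
        }],
        "Overwrite": True,
        "ReloadScheduler": True
    }]
}
pprint.pprint(CGRateS_Obj.SendData(SetActionPlan_Monthly_Fee_JSON))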

Code for these examples is available here.

I’m trying to keep these posts shorter as there’s a lot to cover. Stick around for our next post, we’ll look at some more ActionTriggers to keep decreasing the balance of the account, and setting up ActionTriggers to send a notification to the customer to tell them when their balance is getting low, or any other event based Action you can think of!

Diameter Routing Agents – Part 5 – AVP Transformations with FreeDiameter and Python in rt_pyform

In our last post we talked about why we’d want to perform Diameter AVP translations / rewriting on our Diameter Routing Agent.

Now let’s look at how we can actually achieve this using the rt_pyform extension for FreeDiameter and some simple Python code.

Before we build we’ll need to make sure we have the python3-devel package (I’m using python3-devel-3.10) installed.

Then we’ll build FreeDiameter with rt_pyform; this branch contains the rt_pyform extension in it already, or you can clone the extension only from this repo.

Now once FreeDiameter is installed we can load the extension in our freeDiameter.conf file:

LoadExtension = "rt_pyform.fdx" : "<Your config filename>.conf";

Next we’ll need to define our rt_pyform config; this is a super simple 3 line config file that tells rt_pyform what Python to run:

DirectoryPath = "."        # Directory to search
ModuleName = "script"      # Name of python file. Note there is no .py extension
FunctionName = "transform" # Python function to call

The DirectoryPath directive specifies where we should search for the Python code, ModuleName is the name of the Python script, and lastly FunctionName is the name of the Python function that does the rewriting.

Now let’s write our Python function for the transformation.

The Python function must have the correct number of parameters, must return a string, and must use the name specified in the config.

The following is an example of a function that prints out all the values it receives:

def transform(appId, flags, cmdCode, HBH_ID, E2E_ID, AVP_Code, vendorID, value):
    print('[PYTHON]')
    print(f'|-> appId: {appId}')
    print(f'|-> flags: {hex(flags)}')
    print(f'|-> cmdCode: {cmdCode}')
    print(f'|-> HBH_ID: {hex(HBH_ID)}')
    print(f'|-> E2E_ID: {hex(E2E_ID)}')
    print(f'|-> AVP_Code: {AVP_Code}')
    print(f'|-> vendorID: {vendorID}')
    print(f'|-> value: {value}')
    
    return value

Note the order of the arguments and that return is of the same type as the AVP value (string).

We can expand upon this and add conditionals, let’s take a look at some more complex examples:

def transform(appId, flags, cmdCode, HBH_ID, E2E_ID, AVP_Code, vendorID, value):
    print('[PYTHON]')
    print(f'|-> appId: {appId}')
    print(f'|-> flags: {hex(flags)}')
    print(f'|-> cmdCode: {cmdCode}')
    print(f'|-> HBH_ID: {hex(HBH_ID)}')
    print(f'|-> E2E_ID: {hex(E2E_ID)}')
    print(f'|-> AVP_Code: {AVP_Code}')
    print(f'|-> vendorID: {vendorID}')
    print(f'|-> value: {value}')
    #IMSI Translation - if App ID = 16777251 and the AVP being evaluated is the Username
    if (int(appId) == 16777251) and int(AVP_Code) == 1:
        print("This is IMSI '" + str(value) + "' - Evaluating transformation")
        print("Original value: " + str(value))
        value = str(value[::-1]).zfill(15)

    #The function must always hand the (possibly modified) value back as a string
    return value

The above checks if the App ID is S6a (16777251) and the AVP being evaluated is AVP Code 1 (User-Name / IMSI); if so, it reverses the value, so IMSI 1234567 becomes 7654321 – the zfill is just to pad with leading 0s if required. Whether we modified it or not, we return the value at the end, as the function must always return a string.

Now let’s do another one for a Realm Rewrite:

def transform(appId, flags, cmdCode, HBH_ID, E2E_ID, AVP_Code, vendorID, value):

    #Print Debug Info
    print('[PYTHON]')
    print(f'|-> appId: {appId}')
    print(f'|-> flags: {hex(flags)}')
    print(f'|-> cmdCode: {cmdCode}')
    print(f'|-> HBH_ID: {hex(HBH_ID)}')
    print(f'|-> E2E_ID: {hex(E2E_ID)}')
    print(f'|-> AVP_Code: {AVP_Code}')
    print(f'|-> vendorID: {vendorID}')
    print(f'|-> value: {value}')
    #Realm Translation
    if int(AVP_Code) == 283:
        print("This is Destination Realm '" + str(value) + "' - Evaluating transformation")
        if value == "epc.mnc001.mcc001.3gppnetwork.org":
            new_realm = "epc.mnc999.mcc999.3gppnetwork.org"
            print("translating from " + str(value) + " to " + str(new_realm))
            value = new_realm
        else:
            #If the Realm doesn't match the above conditions, then don't change anything
            print("No modification made to Realm as conditions not met")
        print("Updated Value: " + str(value))

    #Whether modified or not, hand the value back to rt_pyform
    return value

In the above block, if the Destination Realm is set to epc.mnc001.mcc001.3gppnetwork.org it is rewritten to epc.mnc999.mcc999.3gppnetwork.org. Hopefully this gives you a handle on the sorts of transformations we can do – we can translate any string-type AVP, which allows hostnames, realms, IMSIs, Sh-User-Data, Location-Info, etc., to be rewritten.

FreeSWITCH mod_python3 – Python Dialplans

Sometimes the FreeSWITCH XML dialplan is a bit cumbersome for the more complex stuff, particularly interacting with APIs, etc. But we have the option of using scripts written in Python3 to achieve our outcomes, pass variables to/from the dialplan and perform actions as if we were in the dialplan.

This is different to the Event Socket interface for Python I’ve covered in the past.

For starters we’ll need to install the module and enable it; here’s the StackOverflow thread that got me looking at it, where I share the setup steps.

Here is a very simple example I’ve put together to show how we interact with Python3 in FreeSWITCH:

We’ll create a script in /usr/share/freeswitch/scripts/ and call it “CallerName.py”

from freeswitch import *
import sys 
def handler(session,args):

    #Get Variables from FreeSWITCH
    user_name = str(session.getVariable("user_name"))
    session.execute("log", "Call from Username: " + str(user_name))

    #Check if Username is equal to Nick
    if user_name == "Nick":
        session.execute("log", "Username is Nick!")
        #Convert the Username to Uppercase
        session.execute("set", "user_name=" + str(user_name).upper())
        #And return to the dialplan
        return
    else:
        #If no matches then log the error
        session.execute("log", "CRIT Username is not Nick - Hanging up the call")
        #And reject the call
        session.execute("hangup", "CALL_REJECTED")

Once we’ve created and saved the file, we’ll need to ensure it is owned by, and executable by, the service user:

chown freeswitch:freeswitch CallerName.py
chmod 777 CallerName.py

In our dialplan we’ll need to add the below to our logic so it gets called at the point we want it:

<action application="system" data="export PYTHONPATH=$PYTHONPATH:/usr/share/freeswitch/scripts/"/>
<action application="python" data="CallerName"/>
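For context, those two actions might sit inside a dialplan extension something like the below sketch (the extension name, the condition and the bridge step after our script are illustrative, not from a real deployment):

<extension name="caller_name_check">
  <condition field="destination_number" expression="^(\d+)$">
    <action application="system" data="export PYTHONPATH=$PYTHONPATH:/usr/share/freeswitch/scripts/"/>
    <action application="python" data="CallerName"/>
    <!-- If the script returned (rather than hanging up) the call continues down the dialplan -->
    <action application="bridge" data="user/${destination_number}"/>
  </condition>
</extension>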

After adding this to the dialplan, we’ll need to run a “reloadxml” to reload the dialplan. Now when these actions are hit, the Python script we created will be called, and if the user_name variable is set to “Nick” it will be changed to “NICK”, and if it isn’t, the call will be hung up with a “CALL_REJECTED” response.

Obviously this is a very basic scenario, but I’m using it for things like ACLs from an API, and dynamic call routing, using the familiar and easy to work with Python interpreter.

Scratch’n’Sniff – An easy tool for remote Packet Captures

A lesson learned a long time ago in Net Eng is that packet captures (seldom) lie, and the answers are almost always in the packets.

The issue is just getting those packets.

The Problem

But if you’re anything like me, you’re working on remote systems from your workstation, and trying to see what’s going on.

For me this goes like this:

  1. SSH into machine in question
  2. Start TCPdump
  3. Hope that I have run it for long enough to capture the event of interest
  4. Stop TCPdump
  5. Change permissions on PCAP file created so I can copy it
  6. SFTP into the machine in question
  7. Transfer the PCAP to my local machine
  8. View the PCAP in Wireshark
  9. Discover I had not run the PCAP for long enough and repeat

Being a Mikrotik user I fell in love with the remote packet sniffer functionality built into them, where the switch/router will copy packets matching a filter and just stream them to the IP of my workstation.

If only there was something I could use to get this same functionality on remote machines – without named pipes, X11 forwarding or any of the other “hacky” solutions…

The Solution

Introducing Scratch’n’Sniff, a simple tcpdump front end that encapsulates all the filtered traffic of interest in TZSP (the same as Mikrotiks do) and streams it in real time to your local machine for viewing in Wireshark.

Using it is very simple:

Capture all traffic on port 5060 on interface enp0s25 and send it to 10.0.1.252
python3 scratchnsniff.py --dstip 10.0.1.252 --packetfilter 'port 5060' --interface enp0s25

Capture all sctp and icmp traffic on interface lo and send it to 10.98.1.2:
python3 scratchnsniff.py --dstip 10.98.1.2 --packetfilter 'sctp or icmp' --interface lo

If you’re keen to try it out you can grab it from GitHub – Scratch’n’Sniff and start streaming packets remotely.

Enjoy!

Querying CouchDB with Python

If you’re trying to wean yourself away from SQL for everything database related, NoSQL based options like CouchDB are fantastic, but require some rewiring of the brain.

Searching in these databases, without SQL, was a learning curve.

I’m interacting with CouchDB using Python, and while you can create views to help find data, sometimes I knew one of the keys in the document and its value, and wanted to find all documents matching that.

In SQL world I’d do a SELECT WHERE and find the data I wanted, but in CouchDB it’s a bit harder to find.

Using the db.find() function with the {selector} as an input you can filter for all records that have the key value pair you’re looking for.

for doc in db.find({'selector': {'keyyouwanttofind': 'valueofkeyyouwanttofind'}}):
     print(doc)
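Putting that together with the connection setup, and showing that Mango selectors also support comparison operators like $gt, $lt and $regex (the server URL, database name and field names below are illustrative):

import couchdb

#Connect to CouchDB and open a database
couch = couchdb.Server('http://admin:password@localhost:5984/')
db = couch['mydatabase']

#Exact-match selector, as above
for doc in db.find({'selector': {'keyyouwanttofind': 'valueofkeyyouwanttofind'}}):
    print(doc)

#Operator-based selector - all docs where 'year' is greater than 2010
for doc in db.find({'selector': {'year': {'$gt': 2010}}}):
    print(doc)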

Forsk Atoll – Importing Antennas

I recently had a bunch of antenna profiles in .msi format, which is the Planet format for storing antenna radiation patterns, but I’m working in Forsk Atoll, so I needed to convert them.

To load these into Atoll, you need to create a .txt index file listing each of the MSI files in the directory. I could do this by hand, but instead I put together a simple Python script: you point it at the folder full of your MSI files, and it creates the index .txt file (named after the directory) containing the list of files. Just replace path with the path to your folder full of MSI files.

#Atoll Index Generator
import os

#Use a raw string so the backslashes in the Windows path aren't treated as escape sequences
path = r"C:\Users\Nick\Desktop\Antennas\ODV-065R15E-G"
antenna_folder = path.split('\\')[-1]
f = open(path + '\\' + 'index_' + str(antenna_folder) + '.txt', 'w+')
files = os.listdir(path)
for individual_file in files:
    if individual_file[-4:] == ".msi":
        print(individual_file)
        f.write(individual_file + "\n")

f.close()

Which you can then import into Atoll, easy!

Telephony binary-coded decimal (TBCD) in Python with Examples

Chances are if you’re reading this, you’re trying to work out what Telephony Binary-Coded Decimal encoding is. I got you.

Again I found myself staring at encoding trying to guess how it worked, reading references that looped into other references; in this case I was encoding MSISDN AVPs in Diameter.

How to Encode a number using Telephony Binary-Coded Decimal encoding?

First, group all the numbers into pairs, and reverse each pair.

So a phone number of 123456, becomes:

214365

Because 1 & 2 are swapped to become 21, 3 & 4 are swapped to become 34, 5 & 6 become 65, that’s how we get that result.

TBCD Encoding of numbers with an Odd Length?

If we’ve got an odd number of digits, we add an F on the end and still flip the digits.

For example 789, we add the F to the end to pad it to an even length, and then flip each pair of digits, so it becomes:

87F9

That’s the abbreviated version of it. If you’re only encoding numbers that’s all you’ll need to know.

Detail Overload

Because the numbers 0-9 can be encoded using only 4 bits, the need for a whole 8 bit byte to store this information is considered excessive.

For example 1 represented as a binary 8-bit byte would be 00000001, while 9 would be 00001001, so even with our largest number, the first 4 bits are always going to be 0000 – we’d only use half the available space.

So TBCD encoding stores two numbers in each Byte (1 number in the first 4 bits, one number in the second 4 bits).

To go back to our previous example, 1 represented as a binary 4-bit word would be 0001, while 9 would be 1001. These are then swapped and concatenated, so the number 19 becomes 1001 0001 which is hex 0x91.

Let’s do another example, 82, so 8 represented as a 4-bit word is 1000 and 2 as a 4-bit word is 0010. We then swap the order and concatenate to get 00101000 which is hex 0x28 from our inputted 82.

Final example will be a 3 digit number, 123. As we saw earlier we’ll add an F to the end for padding, and then encode as we would any other number,

F is encoded as 1111.

1 becomes 0001, 2 becomes 0010, 3 becomes 0011 and F becomes 1111. Reverse each pair and concatenate 00100001 11110011 or hex 0x21 0xF3.
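If you want to sanity check the nibble math yourself, here’s a quick sketch (the function is mine, purely for illustration):

def tbcd_encode_pair(pair):
    #Pack a two-digit pair (e.g. '19') into a single TBCD byte
    swapped = pair[::-1]                             # '19' -> '91'
    return (int(swapped[0], 16) << 4) | int(swapped[1], 16)

print(hex(tbcd_encode_pair('19')))   # 0x91
print(hex(tbcd_encode_pair('82')))   # 0x28
print(hex(tbcd_encode_pair('3f')))   # 0xf3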

Special Symbols (#, * and friends)

Because TBCD Encoding was designed for use in Telephony networks, the # and * symbols are also present, as they are on a telephone keypad.

Astute readers may have noticed that so far we’ve covered 0-9 and F, which still doesn’t use all the available space in the 4 bit area.

The extended DTMF keys of A, B & C are also valid in TBCD (The D key was sacrificed to get the F in).

Symbol | 4-Bit Word
*      | 1010
#      | 1011
a      | 1100
b      | 1101
c      | 1110

So let’s run through some more examples,

*21 is an odd length, so we’ll slap an F on the end (*21F), and then encode each pair of values into bytes. * becomes 1010 and 2 becomes 0010; swap them and concatenate for our first byte of 00101010 (Hex 0x2A). For our second byte, 1F: 1 becomes 0001 and F becomes 1111; swap and concatenate to get 11110001 (Hex 0xF1). So *21 becomes 0x2A 0xF1.

And as promised, some Python code from PyHSS that does it for you:

    def TBCD_special_chars(self, input):
        if input == "*":
            return "1010"
        elif input == "#":
            return "1011"
        elif input == "a":
            return "1100"
        elif input == "b":
            return "1101"
        elif input == "c":
            return "1110"   #c is 1110 as per the table above
        else:
            print("input " + str(input) + " is not a special char, converting to bin ")
            return ("{:04b}".format(int(input)))


    def TBCD_encode(self, input):
        print("TBCD_encode input value is " + str(input))
        offset = 0
        output = ''
        matches = ['*', '#', 'a', 'b', 'c']
        while offset < len(input):
            if len(input[offset:offset+2]) == 2:
                bit = input[offset:offset+2]    #Get two digits at a time
                bit = bit[::-1]                 #Reverse them
                #Check if *, #, a, b or c
                if any(x in bit for x in matches):
                    new_bit = ''
                    new_bit = new_bit + str(self.TBCD_special_chars(bit[0]))
                    new_bit = new_bit + str(self.TBCD_special_chars(bit[1]))
                    bit = str(int(new_bit, 2))
                output = output + bit
                offset = offset + 2
            else:
                #Odd-length input - pad the final digit with an f
                bit = "f" + str(input[offset:offset+2])
                output = output + bit
                offset = offset + 2
        print("TBCD_encode output value is " + str(output))
        return output
    

    def TBCD_decode(self, input):
        print("TBCD_decode Input value is " + str(input))
        offset = 0
        output = ''
        while offset < len(input):
            if "f" not in input[offset:offset+2]:
                bit = input[offset:offset+2]    #Get two digits at a time
                bit = bit[::-1]                 #Reverse them
                output = output + bit
                offset = offset + 2
            else:   #If f in bit, strip it - it's the padding nibble
                bit = input[offset:offset+2]
                output = output + bit[1]
                offset = offset + 2
        print("TBCD_decode output value is " + str(output))
        return output
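And a hypothetical usage example to tie it together (assuming the two methods above live on a class, here called TBCD purely for illustration):

tbcd = TBCD()
print(tbcd.TBCD_encode("123456"))   # 214365
print(tbcd.TBCD_encode("789"))      # 87f9
print(tbcd.TBCD_decode("87f9"))     # 789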

PyHSS Update – YAML Config Files

One feature I’m pretty excited to share is the addition of a single config file for defining how PyHSS functions.

In the past you’d set variables in the code or comment out sections to change behaviour, which, let’s face it, isn’t great.

Instead, the config.yaml file defines the PLMN, transport type (TCP or SCTP), and the origin host and realm.

We can also set the logging parameters, SNMP info and the database backend to be used:

## HSS Parameters
hss:
  transport: "SCTP"
  #IP Addresses to bind on (List) - For TCP only the first IP is used, for SCTP all are used for Transport (Multihomed).
  bind_ip: ["10.0.1.252"]
  #Port to listen on (Same for TCP & SCTP)
  bind_port: 3868
  #Value to populate as the OriginHost in Diameter responses
  OriginHost: "hss.localdomain"
  #Value to populate as the OriginRealm in Diameter responses
  OriginRealm: "localdomain"
  #Value to populate as the Product name in Diameter responses
  ProductName: "pyHSS"
  #Your Home Mobile Country Code (Used for PLMN calculation)
  MCC: "999"
  #Your Home Mobile Network Code (Used for PLMN calculation)
  MNC: "99"
  #Enable GMLC / SLh Interface
  SLh_enabled: True

logging:
  level: DEBUG
  logfiles:
    hss_logging_file: log/hss.log
    diameter_logging_file: log/diameter.log
    database_logging_file: log/db.log
  log_to_terminal: true

database:
  mongodb:
    mongodb_server: 127.0.0.1
    mongodb_username: root
    mongodb_password: password
    mongodb_port: 27017

## Stats Parameters
redis:
  enabled: True
  clear_stats_on_boot: False
  host: localhost
  port: 6379
snmp:
  port: 1161
  listen_address: 127.0.0.1

PyHSS Update – SCTP Support

Pleased to announce that PyHSS now supports SCTP for transport.

If you’re not already aware, SCTP is the surprisingly attractive cousin of TCP that addresses head-of-line blocking and enables multi-homing.

The fantastic PySCTP library from P1sec made adding this feature a snap. If you’re looking to add SCTP to a Python project, it’s surprisingly easy.
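As a rough sketch of just how easy, here’s a minimal TCP-style SCTP server using pysctp (the addresses are examples; it’s the bindx() call that gives you multi-homing):

import socket
import sctp

sk = sctp.sctpsocket_tcp(socket.AF_INET)
#Bind across multiple local IPs for a multi-homed association
sk.bindx([("10.0.1.252", 3868), ("10.0.2.252", 3868)])
sk.listen(5)
conn, addr = sk.accept()
data = conn.recv(1024)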

A separate server (hss_sctp.py) is run to handle SCTP connections, and if you’re looking for multihoming, we got you dawg – just edit the config file and set the bind_ip list to include each of the IPs you want to listen on (multi-homed).

PyHSS New Features

Thanks to some recent developments, PyHSS has had a major overhaul recently, and is getting better than ever.

Some features that are almost ready for public release are:

Config File

Instead of having everything defined all over the place a single YAML config file is used to define how the HSS should function.

SCTP Support

No longer just limited to TCP, PyHSS now supports SCTP for transport as well.

SLh Interface for Location Services

So the GMLC can query the HSS as to the serving MME of a subscriber.

Additional Database Backends (MSSQL & MySQL)

No longer limited to just MongoDB; simple functions let you add additional backends, flexible enough to fit your existing database schema.

All these features will be merged into the mainline soon, and documented even sooner. I’ll share some posts on each of these features as I go.

SIM / Smart Card Deep Dive – Part 4 – Interacting with Cards IRL

This is part 4 of an n part tutorial series on working with SIM cards.

So in our last post we took a whirlwind tour of what an APDU does, is, and contains.

Interacting with a card involves sending the APDU data to the card as hex, which luckily isn’t as complicated as it seems.

While reading what the hex should look like on the screen is all well and good, actually interacting with cards is the name of the game, so that’s what we’ll be doing today, and we’ll start to abstract some of the complexity away.

Getting Started

To follow along you will need:

  • A Smart Card reader – SIM card / Smart Card readers are baked into some laptops, some of those multi-card readers that read flash/SD/CF cards, or if you don’t have either of these, they can be found online very cheaply ($2-3 USD).
  • A SIM card – No need to worry about ADM keys or anything fancy, one of those old SIM cards you kept in the drawer because you didn’t know what to do with them is fine, or the SIM in your phone if you can find the pokey pin thing. We won’t go breaking anything, promise.

You may end up fiddling around with the plastic adapters to change the SIM form factor between regular smart card, SIM card (standard), micro and nano.

USB SIM / Smart Card reader supports all the standard form factors makes life a lot easier!

To keep it simple, we’re not going to concern ourselves too much with the physical layer side of things for interfacing with the card, so we’ll start with sending raw APDUs to the cards, and then we’ll use some handy libraries to make life easier.

PCSC Interface

To abstract away some complexity we’re going to use the industry-standard PCSC (PC – Smart Card) interface to communicate with our SIM card. Throughout this series we’ll be using a few Python libraries to interface with the Smart Cards, but under the hood all will be using PCSC to communicate.

pyscard

I’m going to use Python3 to interface with these cards, but keep in mind you can find similar smart card libraries in most common programming languages.

At this stage as we’re just interfacing with Smart Cards, our library won’t have anything SIM-specific (yet).

We’ll use pyscard to interface with the PCSC interface. pyscard supports Windows and Linux and you can install it using PIP with:

pip install pyscard

So let’s get started by getting pyscard to list the readers we have available on our system:

#!/usr/bin/env python3
from smartcard.System import *
print(readers())

Running this will output a list of the readers on the system:

Here we can see the two readers that are present on my system (To add some confusion I have two readers connected – One built in Smart Card reader and one USB SIM reader):

(If your device doesn’t show up in this list, double check it’s PCSC compatible, and you can see it in your OS.)

So we can see when we run readers() we’re returned a list of readers on the system.

I want to use my USB SIM reader (The one identified by Identiv SCR35xx USB Smart Card Reader CCID Interface 00 00), so the next step will be to start a connection with this reader, which is the first in the list.

So to make life a bit easier we’ll store the list of smart card readers and access the one we want from the list;

#!/usr/bin/env python3
from smartcard.System import *
r = readers()
connection = r[0].createConnection()
connection.connect()

So now we have an object for interfacing with our smart card reader, let’s try sending an APDU to it.

Actually Doing something Useful

Today we’ll select the EF that contains the ICCID of the card, and then we will read that file’s binary contents.

This means we’ll need to create two APDUs, one to SELECT the file, and the other to READ BINARY to get the file’s contents.

We’ll set the instruction byte to A4 to SELECT, and B0 to READ BINARY.

Table of Instruction bytes from TS 102 221

APDU to select EF ICCID

The APDU we’ll send will SELECT (using the INS byte value of A4 as per the above table) the file that contains the ICCID.

Each file on a smart card has been pre-created and in the case of SIM cards at least, is defined in a specification.

For this post we’ll be selecting the EF ICCID, which is defined in TS 102 221.

Information about EF-ICCID from TS 102 221

To select it we will need its identifier aka File ID (FID); for us the FID of the ICCID EF is 2FE2, so we’ll SELECT file 2FE2.

Going back to what we learned in the last post about structuring APDUs, let’s create the APDU to SELECT 2FE2.

Code | Meaning                                                | Value
CLA  | Class bytes – Coding options                           | 00 (ISO 7816-4 coding)
INS  | Instruction (Command) to be called                     | A4 (SELECT)
P1   | Parameter 1 – Selection Control (Limit search options) | 00 (Select by File ID)
P2   | Parameter 2 – More selection options                   | 04 (No data returned)
Lc   | Length of Data to come                                 | 02 (2 bytes of data to come)
Data | File ID of the file to Select                          | 2FE2 (File ID of ICCID EF)

So that’s our APDU encoded; its final value will be 00 A4 00 04 02 2FE2

So let’s send that to the card, building on our code from before:

#!/usr/bin/env python3
from smartcard.System import *
from smartcard.util import *
r = readers()
connection = r[0].createConnection()
connection.connect()

print("Selecting ICCID File")
data, sw1, sw2 = connection.transmit(toBytes('00a40004022fe2'))
print("Returned data: " + str(data))
print("Returned Status Word 1: " + str(sw1))
print("Returned Status Word 2: " + str(sw2))

If we run this, let’s have a look at the output we get.

We got back:

Selecting ICCID File
Returned data: []
Returned Status Word 1: 97
Returned Status Word 2: 33

So what does this all mean?

Well for starters, no data has been returned, and we’ve got two status words back, with values of 97 and 33.

We can look up what these status words mean, but there’s a bit of a catch: the values we’re seeing are in integer format, and typically we work in hex, so let’s change the code to render them as hex:

#!/usr/bin/env python3
from smartcard.System import *
from smartcard.util import *
r = readers()
connection = r[0].createConnection()
connection.connect()

print("Selecting ICCID File")
data, sw1, sw2 = connection.transmit(toBytes('00a40004022fe2'))
print("Returned data: " + str(data))
print("Returned Status Word 1: " + str(hex(sw1)))
print("Returned Status Word 2: " + str(hex(sw2)))

Now we’ll get this as the output:

Selecting ICCID File
Returned data: []
Returned Status Word 1: 0x61
Returned Status Word 2: 0x1e

So what does this all mean?

Well, there’s this handy website with a table to help work this out, but in short we can see that Status Word 1 has a value of 61, which means the command was successfully executed.

Status Word 2 contains a value of 1e which tells us that there are 30 bytes of extra data available with additional info about the file. (We’ll cover this in a later post).

So now we’ve successfully selected the ICCID file.

Keeping in mind that with smart cards we have to select a file before we can read it, let’s now read the binary contents of the file we selected;

The READ BINARY command is used to read the binary contents of a selected file, and as we’ve already selected the file 2FE2 that contains our ICCID, if we run it, it should return our ICCID.

If we consult the table of values for the INS (Instruction) byte we can see that the READ BINARY instruction byte value is B0, and so let’s refer to the spec to find out how we should format a READ BINARY instruction:

Code | Meaning                             | Value
CLA  | Class bytes – Coding options        | 00 (ISO 7816-4 coding)
INS  | Instruction (Command) to be called  | B0 (READ BINARY)
P1   | Parameter 1 – Coding / Offset       | 00 (No Offset)
P2   | Parameter 2 – Offset Low            | 00
Le   | How many bytes to read              | 0A (10 bytes to read)

We know the ICCID file is 10 bytes from the specification, so the length of the data to return will be 0A (10 bytes).

Let’s add this new APDU into our code and print the output:

#!/usr/bin/env python3
from smartcard.System import *
from smartcard.util import *
r = readers()
connection = r[0].createConnection()
connection.connect()

print("Selecting ICCID File")
data, sw1, sw2 = connection.transmit(toBytes('00a40004022fe2'))
print("Returned data: " + str(data))
print("Returned Status Word 1: " + str(hex(sw1)))
print("Returned Status Word 2: " + str(hex(sw2)))

print("Reading Binary Contents of Selected File")
data, sw1, sw2 = connection.transmit(toBytes('00b000000a'))
print("Returned data: " + str(data))
print("Returned Status Word 1: " + str(hex(sw1)))
print("Returned Status Word 2: " + str(hex(sw2)))

And we have read the ICCID of the card.

Phew.

That’s the hardest part done.

From now on we’ll be building on the concepts we covered here to create other APDUs that get our cards to do useful things. Now you’ve got the basics of how to structure an APDU down, the rest is just changing values here and there to get what you want.

In our next post we’ll read a few more files, write some files and delve a bit deeper into exactly what it is we are doing.


Adding SNMP to anything with Redis and Python

I’ve been adding SNMP support to an open source project I’ve been working on (PyHSS) to generate metrics / performance statistics from it, and this meant staring down SNMP again, but this time I’ve come up with a novel way to handle SNMP that made it much less painful than normal.

The requirement was simple enough: I already had a piece of software I’d written in Python, but I needed to add an SNMP server to get information about that bit of software.

For a little more detail – PyHSS handles Device Watchdog Requests already, but I needed a count of how many it had handled, made accessible via SNMP. So inside the logic that does this I just increment a counter in Redis;

#Device Watchdog Answer
    def Answer_280(self, packet_vars, avps):                                                      
        self.redis_store.incr('Answer_280_attempt_count')

In the code example above I just add 1 to (increment) the Redis key ‘Answer_280_attempt_count’.

The beauty is that this required minimal changes to the rest of my code – I just sprinkled in these statements to increment Redis keys throughout my code.

Now when that existing function is run, the Redis key “Answer_280_attempt_count” is incremented.

So I ran my software and the function I just added the increment to was called a few times, so I jumped into redis-cli to check on the values;

GIF showing using Redis-CLI to get a value
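For reference, the check from redis-cli is just a GET on the key (the value shown will depend on how many times the function has run):

127.0.0.1:6379> GET Answer_280_attempt_count
"4"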

And just like that we’ve done all the heavy lifting to add SNMP to our software.

For anything else we want counters on, add the increment to your code to store a counter in Redis with that information.

So next up we need to expose our Redis keys via SNMP,

For this, I took a simple SNMP server example from Stack Overflow that sets the output of a MIB tree, and simply bolted in getting a bit of data from Redis; code below:

#Pulled from https://stackoverflow.com/questions/58909285/how-to-add-variable-in-the-mib-tree

from pysnmp.entity import engine, config
from pysnmp.entity.rfc3413 import cmdrsp, context
from pysnmp.carrier.asyncore.dgram import udp
from pysnmp.smi import instrum, builder
from pysnmp.proto.api import v2c
import datetime
import redis

redis_store = redis.Redis(host='localhost', port=6379, db=0)
# Create SNMP engine
snmpEngine = engine.SnmpEngine()

# Transport setup

# UDP over IPv4
config.addTransport(
    snmpEngine,
    udp.domainName,
    udp.UdpTransport().openServerMode(('127.0.0.1', 1161))
)

# SNMPv3/USM setup

# user: usr-md5-none, auth: MD5, priv NONE
config.addV3User(
    snmpEngine, 'usr-md5-none',
    config.usmHMACMD5AuthProtocol, 'authkey1'
)
# Allow full MIB access for each user at VACM
config.addVacmUser(snmpEngine, 3, 'usr-md5-none', 'authNoPriv', (1, 3, 6, 1, 2, 1), (1, 3, 6, 1, 2, 1))


# SNMPv2c setup

# SecurityName <-> CommunityName mapping.
config.addV1System(snmpEngine, 'my-area', 'public')

# Allow full MIB access for this user / securityModels at VACM
config.addVacmUser(snmpEngine, 2, 'my-area', 'noAuthNoPriv', (1, 3, 6, 1, 2, 1), (1, 3, 6, 1, 2, 1))

# Get the default SNMP context this SNMP engine serves (default ContextEngineId, same as the SNMP engine ID)
snmpContext = context.SnmpContext(snmpEngine)

# Create multiple independent trees of MIB managed objects (empty so far)
mibTreeA = instrum.MibInstrumController(builder.MibBuilder())
mibTreeB = instrum.MibInstrumController(builder.MibBuilder())

# Register MIB trees at distinct SNMP Context names
snmpContext.registerContextName(v2c.OctetString('context-a'), mibTreeA)
snmpContext.registerContextName(v2c.OctetString('context-b'), mibTreeB)

mibBuilder = snmpContext.getMibInstrum().getMibBuilder()

MibScalar, MibScalarInstance = mibBuilder.importSymbols(
    'SNMPv2-SMI', 'MibScalar', 'MibScalarInstance'
)
class MyStaticMibScalarInstance(MibScalarInstance):
    def getValue(self, name, idx):
        currentDT = datetime.datetime.now()
        return self.getSyntax().clone(
            'Hello World!! It\'s currently: ' + str(currentDT)
        )

class AnotherStaticMibScalarInstance(MibScalarInstance):
    def getValue(self, name, idx):
        return self.getSyntax().clone('Ahoy hoy?')

class Answer_280_attempt_count(MibScalarInstance):
    def getValue(self, name, idx):
        return self.getSyntax().clone(redis_store.get('Answer_280_attempt_count'))


mibBuilder.exportSymbols(
    '__MY_MIB', MibScalar((1, 3, 6, 1, 2, 1, 1, 1), v2c.OctetString()),
    MyStaticMibScalarInstance((1, 3, 6, 1, 2, 1, 1, 1), (0,), v2c.OctetString()),
    AnotherStaticMibScalarInstance((1, 3, 6, 1, 2, 1, 1, 1), (0,1), v2c.OctetString()),
    Answer_280_attempt_count((1, 3, 6, 1, 2, 1, 1, 1), (0,2), v2c.Integer32())
)

# Register SNMP Applications at the SNMP engine for particular SNMP context
cmdrsp.GetCommandResponder(snmpEngine, snmpContext)
cmdrsp.SetCommandResponder(snmpEngine, snmpContext)
cmdrsp.NextCommandResponder(snmpEngine, snmpContext)
cmdrsp.BulkCommandResponder(snmpEngine, snmpContext)

# Register an imaginary never-ending job to keep I/O dispatcher running forever
snmpEngine.transportDispatcher.jobStarted(1)

# Run I/O dispatcher which would receive queries and send responses
try:
    snmpEngine.transportDispatcher.runDispatcher()

except:
    snmpEngine.transportDispatcher.closeDispatcher()
    raise

While PySNMP can be a bit much to wrap your head around, all you need to know is this:

The SNMPv2 community string is set in:

config.addV1System(snmpEngine, 'my-area', 'public')

Create an additional class from the template below for each of the Redis keys you wish to expose:

class something_else_from_Redis(MibScalarInstance):
    def getValue(self, name, idx):
        return self.getSyntax().clone(redis_store.get('something_else_from_Redis'))

Rename the class and replace the redis_store.get() value with the Redis key you want to pull.

And finally, inside mibBuilder.exportSymbols(), add each of your new classes and the OID for each:

    Answer_280_attempt_count((1, 3, 6, 1, 2, 1, 1, 1), (0,2), v2c.Integer32())
    something_else_from_Redis((1, 3, 6, 1, 2, 1, 1, 1), (0,3), v2c.Integer32())

Then when you run it, presto, you’re exposing that data via SNMP.

You can verify it through an SNMP walk or start integrating it into your NMS. In the above example, OID 1.3.6.1.2.1.1.1.0.2 contains the value of Answer_280_attempt_count from Redis, and with that, you’re exposing info via SNMP, all while not really having to think about SNMP.

*Ok, you still have to sort out which OIDs you assign for what, but you get the idea.
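As a quick test, you can poll that OID directly with the net-snmp tools (assuming the example community string and port from the code above):

snmpget -v2c -c public 127.0.0.1:1161 1.3.6.1.2.1.1.1.0.2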

Dr StrangeEncoding or: How I learned to stop worrying and love ASN.1

Australia is a strange country; As a kid I was scared of dogs, and in response, our family got a dog.

This year started off with adventures working with ASN.1 encoded data, and after a week of banging my head against the table, I was scared of ASN.1 encoding.

But now I love dogs, and slowly, I’m learning to embrace ASN.1 encoding.

What is ASN.1?

ASN.1 is an encoding scheme.

The best analogy I can give is to imagine a sheet of paper with a form on it; the form has fields for all the different bits of data it needs.

Each of the fields on the form has a data type, the box is sized to restrict input, and some fields are mandatory.

Now imagine you take this form and cut a hole where each of the text boxes would be.

We’ve made a key that can be laid on top of a blank sheet of paper, then we can fill the details through the key onto the blank paper and reuse the key over and over again to fill the data out many times.

When we remove the key from the top of our paper, what we have left on the paper below is the data from the form. Without the key on top this data doesn’t make much sense, but we can always add the key back and presto, it’s back to making sense.

While this may seem kind of pointless, let’s look at the advantages of this method:

The data is validated by the key – People can’t put a name wherever, and country code anywhere, it’s got to be structured as per our requirements. And if we tried to enter a birthday through the key form onto the paper below, we couldn’t.

The data is as small as can be – Without all the metadata on the key above, such as the name of the field, the paper below contains only the pertinent information, and if a field is left blank it doesn’t take up any space at all on the paper.

It’s these two things, rigidly defined data structures (no room for errors or misinterpretation) and the minimal size on the wire (saves bandwidth), that led to 3GPP selecting ASN.1 encoding for many of its protocols, such as S1, NAS, SBc, X2, etc.

It’s also these two things that make ASN.1 kind of a jerk; if the data structure you’re feeding into your ASN.1 compiler does not match, it will flat-out refuse to compile, and there’s no way to make sense of the data in its raw form.

I wrote a post covering the very basics of working with ASN.1 in Python here.

But working with a super simple ASN.1 definition you’ve created is one thing; using the 3GPP-defined ASN.1 definitions is another.

This is where the fantastic PyCrate library comes in – it’s where the real magic happens, and this was the nut I cracked this week: compiling a 3GPP ASN.1 definition and speaking a standards-based protocol with it.
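To give you a taste, here’s a sketch of decoding an S1AP message with PyCrate’s bundled pre-compiled 3GPP definitions (raw_bytes stands in for an APER-encoded S1AP PDU you’ve captured; module paths as per the PyCrate docs):

from pycrate_asn1dir import S1AP

#The top-level S1AP PDU definition from the compiled 3GPP spec
pdu = S1AP.S1AP_PDU_Descriptions.S1AP_PDU

raw_bytes = b'...'          #An APER-encoded S1AP message off the wire
pdu.from_aper(raw_bytes)    #Decode it against the spec
print(pdu.to_asn1())        #Human-readable ASN.1 representation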

Watch this space for more fun with ASN.1!

Ansible – Timeout on Become

I’ve written a playbook that provisions some server infrastructure, however one of the steps is to change the hostname.

A common headache when changing the hostname on a Linux machine is that if the hostname you set isn’t in the machine’s /etc/hosts file, then when you run sudo su or su, it takes a really long time before you get a prompt, as the machine struggles to do a DNS lookup for its own hostname and fails.

This becomes an even bigger problem when you’re using Ansible to set up these machines: Ansible times out when changing the hostname.

Simple fix: edit the /etc/ansible/ansible.cfg file and include:

# SSH timeout
timeout = 300

And that’s it.

Kamailio Bytes – Docker and Containers

I previously wrote about using Ansible to automate Kamailio config management. Ansible is great at managing VMs or bare-metal deployments, but for containers, using Docker to build and manage the deployments is where it’s at.

I’m going to assume you’ve got Docker in place, if not there’s heaps of info online about getting started with Docker.

The Dockerfile

The Kamailio team publish a Docker image for use; there’s no master branch at the moment, so you’ve got to specify the version – in this case kamailio:5.3.3-stretch.

Once we’ve got that we can start on the Dockerfile.

For this example I’m going to use the following Dockerfile:

#Kamailio Test Stuff
FROM kamailio/kamailio:5.3.3-stretch

#Copy the config file onto the Filesystem of the Docker instance
COPY kamailio.cfg /etc/kamailio/

#Print out the current IP Address info
RUN ip add

#Expose port 5060 (SIP) for TCP and UDP
EXPOSE 5060
EXPOSE 5060/udp

Once the dockerfile is created we can build an image,

docker image build -t kamtest:0.1 .

And then run it,

docker run kamtest:0.1

Boom, now Kamailio is running, with the config file I pushed to it from my Dockerfile directory.
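One thing to keep in mind: depending on your Docker network setup, you may need to publish the SIP ports when starting the container so it’s reachable from your machine, something like:

docker run -p 5060:5060 -p 5060:5060/udp kamtest:0.1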

Now I can setup a Softphone on my local machine and point it to the IP of the Docker instance and away we go,

Where the real power comes in is that I can run that docker run command another 10 times and have another 10 Kamailio instances running.

Tie this in with Kubernetes or a similar platform and you’ve got a way to scale and manage upgrades unlike anything you’d get on Bare Metal or VMs.

I’ve uploaded a copy of my Dockerfile for reference, you can find it on my GitHub.

Ansible for Scaling and Deployment of Evolved Packet Core NEs

There’s always lots of talk of Network Function Virtualization (NFV) in the Telco space, but replacing custom hardware with computing resources is only going to get you so far if every machine has to be configured manually.

Ansible is a topic I’ve written a little bit about in terms of network automation / orchestration.

I wanted to test the limits of the Open5GS EPC, which led me to creating a lot of Packet Gateways, so I thought I’d share a little Ansible playbook I wrote for deploying P-GWs.

It dynamically sets the binding addresses and DNS servers, and points to each PCRF in the defined pool.

You can obviously build upon this too; creating another playbook to deploy PCRFs, MMEs and S-GWs would allow you to reference the hosts in each group to populate the references.
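As a sketch, the inventory behind this might look something like the below (hostnames are illustrative), with each P-GW a member of the epc group the playbook targets:

[epc]
pgw1.nickvsnetworking.com
pgw2.nickvsnetworking.com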

The Playbook

---
- name: Install & configure Open5GS EPC (P-GW)

  hosts: epc

  become: yes
  become_method: sudo

  vars:
    dns_servers: ['8.8.8.8','1.1.1.1']
    diameter_realm: 'nickvsnetworking.com'
    pcrf_hosts: ['pcrf1.nickvsnetworking.com', 'pcrf2.nickvsnetworking.com']
  tasks:
    - name: Set Hostname to {{inventory_hostname}}
      hostname:
        name: "{{inventory_hostname}}"
      register: rebootrequired

    - name: Updating hostnamectl
      command: "hostnamectl set-hostname {{inventory_hostname}}"
      become: True
      when: rebootrequired.changed

    - name: Reboot VM due to Hostname Change
      reboot:
        reboot_timeout: 180
      when: rebootrequired.changed

    - name: Add Software common
      apt:
        name: software-properties-common

    - name: Add Repository
      apt_repository:
        repo: ppa:open5gs/latest

    - name: Install Open5gs P-GW
      apt:
        update_cache: true
        name: open5gs-pgw

    - name: Fill in GTP Addresses in Config
      template:
        src: pgw.yaml.j2
        dest: /etc/open5gs/pgw.yaml
        backup: true
      register: config_changed

    - name: Fill in P-GW Diameter Config
      template:
        src: pgw.conf.j2
        dest: /etc/freeDiameter/pgw.conf
        backup: true
      register: config_changed


    - name: Restart P-GW Service if Config Changed
      service:
        name: open5gs-pgwd
        state: restarted
      when: config_changed.changed

Jinja2 Template

P-GW Config (pgw.yaml.j2)

logger:
    file: /var/log/open5gs/pgw.log

parameter:

pgw:
    freeDiameter: /etc/freeDiameter/pgw.conf
    gtpc:
      - addr: {{hostvars[inventory_hostname]['ansible_default_ipv4']['address']}}

    gtpu:
      - addr: {{hostvars[inventory_hostname]['ansible_default_ipv4']['address']}}

    ue_pool:
      - addr: 45.45.0.1/16
      - addr: cafe::1/64
    dns:
{% for dns in dns_servers %}
      - {{ dns }}
{% endfor %}
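With the dns_servers variable from the playbook, that for loop renders one entry per server, so the bottom of the generated pgw.yaml comes out as:

    dns:
      - 8.8.8.8
      - 1.1.1.1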

Diameter Config (pgw.conf.j2)

# This is a sample configuration file for freeDiameter daemon.

# Most of the options can be omitted, as they default to reasonable values.
# Only TLS-related options must be configured properly in usual setups.

# It is possible to use "include" keyword to import additional files
# e.g.: include "/etc/freeDiameter.d/*.conf"
# This is exactly equivalent as copy & paste the content of the included file(s)
# where the "include" keyword is found.


##############################################################
##  Peer identity and realm

# The Diameter Identity of this daemon.
# This must be a valid FQDN that resolves to the local host.
# Default: hostname's FQDN
#Identity = "aaa.koganei.freediameter.net";
Identity = "{{ inventory_hostname  }}.{{ diameter_realm }}";

# The Diameter Realm of this daemon.
# Default: the domain part of Identity (after the first dot).
#Realm = "koganei.freediameter.net";
Realm = "{{ diameter_realm }}";
##############################################################
##  Transport protocol configuration

# The port this peer is listening on for incoming connections (TCP and SCTP).
# Default: 3868. Use 0 to disable.
#Port = 3868;

# The port this peer is listening on for incoming TLS-protected connections (TCP and SCTP).
# See TLS_old_method for more information about TLS flavours.
# Note: we use TLS/SCTP instead of DTLS/SCTP at the moment. This will change in future version of freeDiameter.
# Default: 5868. Use 0 to disable.
#SecPort = 5868;

# Use RFC3588 method for TLS protection, where TLS is negotiated after CER/CEA exchange is completed
# on the unsecure connection. The alternative is RFC6733 mechanism, where TLS protects also the
# CER/CEA exchange on a dedicated secure port.
# This parameter only affects outgoing connections.
# The setting can be also defined per-peer (see Peers configuration section).
# Default: use RFC6733 method with separate port for TLS.
#TLS_old_method;

# Disable use of TCP protocol (only listen and connect over SCTP)
# Default : TCP enabled
#No_TCP;

# Disable use of SCTP protocol (only listen and connect over TCP)
# Default : SCTP enabled
#No_SCTP;
# This option is ignored if freeDiameter is compiled with DISABLE_SCTP option.

# Prefer TCP instead of SCTP for establishing new connections.
# This setting may be overwritten per peer in peer configuration blocks.
# Default : SCTP is attempted first.
#Prefer_TCP;

# Default number of streams per SCTP associations.
# This setting may be overwritten per peer basis.
# Default : 30 streams
#SCTP_streams = 30;

##############################################################
##  Endpoint configuration

# Disable use of IP addresses (only IPv6)
# Default : IP enabled
#No_IP;

# Disable use of IPv6 addresses (only IP)
# Default : IPv6 enabled
#No_IPv6;

# Specify local addresses the server must bind to
# Default : listen on all addresses available.
#ListenOn = "202.249.37.5";
#ListenOn = "2001:200:903:2::202:1";
#ListenOn = "fe80::21c:5ff:fe98:7d62%eth0";
ListenOn = "{{hostvars[inventory_hostname]['ansible_default_ipv4']['address']}}";


##############################################################
##  Server configuration

# How many Diameter peers are allowed to be connecting at the same time ?
# This parameter limits the number of incoming connections from the time
# the connection is accepted until the first CER is received.
# Default: 5 unidentified clients in parallel.
#ThreadsPerServer = 5;

##############################################################
##  TLS Configuration

# TLS is managed by the GNUTLS library in the freeDiameter daemon.
# You may find more information about parameters and special behaviors
# in the relevant documentation.
# http://www.gnu.org/software/gnutls/manual/

# Credentials of the local peer
# The X509 certificate and private key file to use for the local peer.
# The files must contain PKCS-1 encoded RSA key, in PEM format.
# (These parameters are passed to gnutls_certificate_set_x509_key_file function)
# Default : NO DEFAULT
#TLS_Cred = "<x509 certif file.PEM>" , "<x509 private key file.PEM>";
#TLS_Cred = "/etc/ssl/certs/freeDiameter.pem", "/etc/ssl/private/freeDiameter.key";
TLS_Cred = "/etc/freeDiameter/pgw.cert.pem", "/etc/freeDiameter/pgw.key.pem";

# Certificate authority / trust anchors
# The file containing the list of trusted Certificate Authorities (PEM list)
# (This parameter is passed to gnutls_certificate_set_x509_trust_file function)
# The directive can appear several times to specify several files.
# Default : GNUTLS default behavior
#TLS_CA = "<file.PEM>";
TLS_CA = "/etc/freeDiameter/cacert.pem";

# Certificate Revocation List file
# The information about revoked certificates.
# The file contains a list of trusted CRLs in PEM format. They should have been verified before.
# (This parameter is passed to gnutls_certificate_set_x509_crl_file function)
# Note: openssl CRL format might have interoperability issue with GNUTLS format.
# Default : GNUTLS default behavior
#TLS_CRL = "<file.PEM>";

# GNU TLS Priority string
# This string allows to configure the behavior of GNUTLS key exchanges
# algorithms. See gnutls_priority_init function documentation for information.
# You should also refer to the Diameter required TLS support here:
#   http://tools.ietf.org/html/rfc6733#section-13.1
# Default : "NORMAL"
# Example: TLS_Prio = "NONE:+VERS-TLS1.1:+AES-128-CBC:+RSA:+SHA1:+COMP-NULL";
#TLS_Prio = "NORMAL";

# Diffie-Hellman parameters size
# Set the number of bits for generated DH parameters
# Valid value should be 768, 1024, 2048, 3072 or 4096.
# (This parameter is passed to gnutls_dh_params_generate2 function,
# it usually should match RSA key size)
# Default : 1024
#TLS_DH_Bits = 1024;

# Alternatively, you can specify a file to load the PKCS#3 encoded
# DH parameters directly from. This accelerates the daemon start
# but is slightly less secure. If this file is provided, the
# TLS_DH_Bits parameters has no effect.
# Default : no default.
#TLS_DH_File = "<file.PEM>";


##############################################################
##  Timers configuration

# The Tc timer of this peer.
# It is the delay before a new attempt is made to reconnect a disconnected peer.
# The value is expressed in seconds. The recommended value is 30 seconds.
# Default: 30
#TcTimer = 30;

# The Tw timer of this peer.
# It is the delay before a watchdog message is sent, as described in RFC 3539.
# The value is expressed in seconds. The default value is 30 seconds. Value must
# be greater or equal to 6 seconds. See details in the RFC.
# Default: 30
#TwTimer = 30;

##############################################################
##  Applications configuration

# Disable the relaying of Diameter messages?
# For messages not handled locally, the default behavior is to forward the
# message to another peer if any is available, according to the routing
# algorithms. In addition the "0xffffff" application is advertised in CER/CEA
# exchanges.
# Default: Relaying is enabled.
#NoRelay;

# Number of server threads that can handle incoming messages at the same time.
# Default: 4
#AppServThreads = 4;

# Other applications are configured by loaded extensions.

##############################################################
##  Extensions configuration

#  The freeDiameter framework merely provides support for
# Diameter Base Protocol. The specific application behaviors,
# as well as advanced functions, are provided
# by loadable extensions (plug-ins).
#  These extensions may in addition receive the name of a
# configuration file, the format of which is extension-specific.
#
# Format:
#LoadExtension = "/path/to/extension" [ : "/optional/configuration/file" ] ;
#
# Examples:
#LoadExtension = "extensions/sample.fdx";
#LoadExtension = "extensions/sample.fdx":"conf/sample.conf";

# Extensions are named as follow:
# dict_* for extensions that add content to the dictionary definitions.
# dbg_*  for extensions useful only to retrieve more information on the framework execution.
# acl_*  : Access control list, to control which peers are allowed to connect.
# rt_*   : routing extensions that impact how messages are forwarded to other peers.
# app_*  : applications, these extensions usually register callbacks to handle specific messages.
# test_* : dummy extensions that are useful only in testing environments.


# The dbg_msg_dump.fdx extension allows you to tweak the way freeDiameter displays some
# information about some events. This extension does not actually use a configuration file
# but receives directly a parameter in the string passed to the extension. Here are some examples:
## LoadExtension = "dbg_msg_dumps.fdx" : "0x1111"; # Removes all default hooks, very quiet even in case of errors.
## LoadExtension = "dbg_msg_dumps.fdx" : "0x2222"; # Display all events with few details.
## LoadExtension = "dbg_msg_dumps.fdx" : "0x0080"; # Dump complete information about sent and received messages.
# The four digits respectively control: connections, routing decisions, sent/received messages, errors.
# The values for each digit are:
#  0 - default - keep the default behavior
#  1 - quiet   - remove any specific log
#  2 - compact - display only a summary of the information
#  4 - full    - display the complete information on a single long line
#  8 - tree    - display the complete information in an easier to read format spanning several lines.

LoadExtension = "/usr/lib/x86_64-linux-gnu/freeDiameter/dbg_msg_dumps.fdx" : "0x8888";
LoadExtension = "/usr/lib/x86_64-linux-gnu/freeDiameter/dict_rfc5777.fdx";
LoadExtension = "/usr/lib/x86_64-linux-gnu/freeDiameter/dict_mip6i.fdx";
LoadExtension = "/usr/lib/x86_64-linux-gnu/freeDiameter/dict_nasreq.fdx";
LoadExtension = "/usr/lib/x86_64-linux-gnu/freeDiameter/dict_nas_mipv6.fdx";
LoadExtension = "/usr/lib/x86_64-linux-gnu/freeDiameter/dict_dcca.fdx";
LoadExtension = "/usr/lib/x86_64-linux-gnu/freeDiameter/dict_dcca_3gpp.fdx";


##############################################################
##  Peers configuration

#  The local server listens for incoming connections. By default,
# all unknown connecting peers are rejected. Extensions can override this behavior (e.g., acl_wl).
#
#  In addition to incoming connections, the local peer can
# be configured to establish and maintain connections to some
# Diameter nodes and allow connections from these nodes.
#  This is achieved with the ConnectPeer directive described below.
#
# Note that the configured Diameter Identity MUST match
# the information received inside CEA, or the connection will be aborted.
#
# Format:
#ConnectPeer = "diameterid" [ { parameter1; parameter2; ...} ] ;
# Parameters that can be specified in the peer's parameter list:
#  No_TCP; No_SCTP; No_IP; No_IPv6; Prefer_TCP; TLS_old_method;
#  No_TLS;       # assume transparent security instead of TLS. DTLS is not supported yet (will change in future versions).
#  Port = 5868;  # The port to connect to
#  TcTimer = 30;
#  TwTimer = 30;
#  ConnectTo = "202.249.37.5";
#  ConnectTo = "2001:200:903:2::202:1";
#  TLS_Prio = "NORMAL";
#  Realm = "realm.net"; # Reject the peer if it does not advertise this realm.
# Examples:
#ConnectPeer = "aaa.wide.ad.jp";
#ConnectPeer = "old.diameter.serv" { TcTimer = 60; TLS_old_method; No_SCTP; Port=3868; } ;
{% for pcrf in pcrf_hosts %}
ConnectPeer = "{{ pcrf }}" { ConnectTo = "{{ pcrf }}"; No_TLS; };
{% endfor %}



##############################################################
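With the pcrf_hosts variable from the playbook, that loop at the end of the template renders a ConnectPeer entry for each PCRF in the pool:

ConnectPeer = "pcrf1.nickvsnetworking.com" { ConnectTo = "pcrf1.nickvsnetworking.com"; No_TLS; };
ConnectPeer = "pcrf2.nickvsnetworking.com" { ConnectTo = "pcrf2.nickvsnetworking.com"; No_TLS; };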

GTPv2 – F-TEID Interface Types

I’ve been working on a ePDG for VoWiFi access to my IMS core.

This has led to a bit of a deep dive into GTP (easy enough) and GTPv2 (a bit harder).

The Fully Qualified Tunnel Endpoint Identifier (F-TEID) includes an information element for the Interface Type, identified by a number (6 bits on the wire).

Here we see S2b is 32

In the end I found the answer in 3GPP TS 29.274, but thought I’d share it here.

0 – S1-U eNodeB GTP-U interface
1 – S1-U SGW GTP-U interface
2 – S12 RNC GTP-U interface
3 – S12 SGW GTP-U interface
4 – S5/S8 SGW GTP-U interface
5 – S5/S8 PGW GTP-U interface
6 – S5/S8 SGW GTP-C interface
7 – S5/S8 PGW GTP-C interface
8 – S5/S8 SGW PMIPv6 interface (the 32 bit GRE key is encoded in the 32 bit TEID field and since alternate CoA is not used the control plane and user plane addresses are the same for PMIPv6)
9 – S5/S8 PGW PMIPv6 interface (the 32 bit GRE key is encoded in the 32 bit TEID field and the control plane and user plane addresses are the same for PMIPv6)
10 – S11 MME GTP-C interface
11 – S11/S4 SGW GTP-C interface
12 – S10 MME GTP-C interface
13 – S3 MME GTP-C interface
14 – S3 SGSN GTP-C interface
15 – S4 SGSN GTP-U interface
16 – S4 SGW GTP-U interface
17 – S4 SGSN GTP-C interface
18 – S16 SGSN GTP-C interface
19 – eNodeB GTP-U interface for DL data forwarding
20 – eNodeB GTP-U interface for UL data forwarding
21 – RNC GTP-U interface for data forwarding
22 – SGSN GTP-U interface for data forwarding
23 – SGW GTP-U interface for DL data forwarding
24 – Sm MBMS GW GTP-C interface
25 – Sn MBMS GW GTP-C interface
26 – Sm MME GTP-C interface
27 – Sn SGSN GTP-C interface
28 – SGW GTP-U interface for UL data forwarding
29 – Sn SGSN GTP-U interface
30 – S2b ePDG GTP-C interface
31 – S2b-U ePDG GTP-U interface
32 – S2b PGW GTP-C interface
33 – S2b-U PGW GTP-U interface

I also found the way this data is encoded on the wire a bit strange,

In the example above the Interface Type is 7.

Encoded as binary this gives us 111, which is then padded to 6 bits to give us 000111.

This is prefixed by two additional bits: the first denotes whether an IPv4 address is present, the second whether an IPv6 address is present.

Bit 1 – IPv4 Address Present – 1
Bit 2 – IPv6 Address Present – 0
Bits 3-8 – Interface Type – 000111

Concatenated this gives us 10000111, which encoded as hex gives us 0x87.

Here’s my Python example;

interface_type = 7
interface_type_bits = "{0:b}".format(interface_type).zfill(6)  # 6-bit binary: '000111'
ipv4ipv6_flags = "10"                                          # IPv4 present, IPv6 absent
first_octet = ipv4ipv6_flags + interface_type_bits             # concatenate: '10000111'
first_octet_hex = format(int(first_octet, 2), "x")             # convert to hex: '87'
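The same octet can also be built with bitwise operations rather than string handling; a minimal sketch (the function name is my own):

def encode_fteid_flags(interface_type, ipv4=True, ipv6=False):
    # V4 flag in the top bit, V6 flag next, Interface Type in the low 6 bits
    return (int(ipv4) << 7) | (int(ipv6) << 6) | (interface_type & 0x3F)

print(hex(encode_fteid_flags(7)))   # 0x87 - S5/S8 PGW GTP-C interface, IPv4 only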

Kamailio Bytes – Ansible for Automating Deployments

Despite the fact it’s 2020 there’s still a lot of folks in the world manually configuring boxes,

Ansible is a topic I could talk all day about, but in essence it’s a kind of automation framework; tell Ansible what to do once and it can spin you up two boxes, or two thousand boxes, and manage the config on them.

I talked about DMQ, the Distributed Message Queue, in a Kamailio Bytes post some time ago, so as an example I’ll share a playbook that installs Kamailio the lazy way from the repos, and loads the DMQ config with the IP address and DMQ address pulled from variables based on the host itself.

There’s a huge number of posts on installing and the basics of Ansible online, if you’re not familiar with Ansible already I’d suggest starting by learning the basics and then rejoining us.

The Hosts

Whether your hosts are running on bare metal, VMware VMs or in the cloud doesn’t much matter, but I’m going to assume you’re working with a Debian system.

I’ve already got 3 servers ready to go, they’ve got sequential IP Addresses so I’ve added the range to my /etc/ansible/hosts file:

I’ve created the group kamailio and put the IP Address range 10.0.1.193 to 10.0.1.195 in there.
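In INI inventory format that range can be expressed on a single line; a sketch of what the /etc/ansible/hosts entry might look like:

[kamailio]
10.0.1.[193:195]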

You will probably need to add the authentication info, such as passwords, private keys and privilege escalation details, but I’m going to assume you’ve already done that and you can run the ping module on each one:

ansible kamailio -m ping

Assuming that comes back OK and you can get into each one, let’s move on to the Playbook.

The Playbook

There’s a few tasks we’ll get out of the way before we configure Kamailio,

The first of which is adding the Debian repo and the keys,

Next we’ll load a Kamailio config from a template that fills in our IP Address and Kamailio version, then we’ll install Kamailio,

Rather than talk you through each of the plays here’s a copy of my playbook:

---
- name: Configure Kamailio


  hosts: kamailio
  become: yes

  vars:
    kamailio_version: "53"
    debian_sources_dir: "/etc/apt/sources.list.d"

  tasks:



    - name: Add keys for Kamailio repo
      apt_key:
        url: http://deb.kamailio.org/kamailiodebkey.gpg
        state: present


    - name: Add repo to sources.list
      #The full list of Debian repos can be found at http://deb.kamailio.org/
      #The version is based off the versions listed there and the release is based on the codename of the Debian / Ubuntu release.
      apt_repository:
        repo: deb http://deb.kamailio.org/kamailio{{kamailio_version}} {{hostvars[inventory_hostname]['ansible_lsb']['codename']}} main
        state: present


    - name: Copy Config Template
      #Copies config from the template, fills in variables and uploads to the server
      template:
        src: kamailio.cfg.j2
        dest: /etc/kamailio/kamailio.cfg
        owner: root
        group: root
        backup: yes
      register: config_changed

    - name: Install Kamailio
      #Updates cache (apt-get update) and then installs Kamailio
      apt:
        name: kamailio
        update_cache: yes
        state: present
      register: kamailio_installed_firstrun

    - name: Restart Kamailio if config changed
      service:
        name: kamailio
        state: restarted
      when: config_changed.changed

    - name: Start Kamailio if installed for the first time
      service:
        name: kamailio
        state: started
      when: kamailio_installed_firstrun.changed

Should be pretty straightforward to anyone who’s used Ansible before, but the real magic happens in the template module. Let’s take a look;

Kamailio config in Jinja2 template

Pushing out static config is one thing, but things like IP Addresses, FQDNs and SSL certs may differ from machine to machine, so instead of just pushing one config, I’ve created a config and added some variables in Jinja2 format that will be filled in with the values from each target when pushed out.

In the template module of the playbook you can see I’ve specified the file kamailio.cfg.j2; this is just a regular Kamailio config file to which I’ve added some variables, so let’s look at how that’s done.

On the machine 10.0.1.194 we want it to listen on 10.0.1.194; we could listen on 0.0.0.0, but that can lead to security concerns, so instead let’s specify the IP in the Jinja config,

listen=udp:{{ ansible_default_ipv4.address }}:5060
listen=tcp:{{ ansible_default_ipv4.address }}:5060
listen=udp:{{ ansible_default_ipv4.address }}:5090

Putting ansible_default_ipv4.address inside two sets of curly brackets tells Ansible to fill in these values from the template with the IPv4 address of the target machine.

Let’s take a look at 10.0.1.194’s actual kamailio.cfg file:
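With the host’s default IPv4 address being 10.0.1.194, the rendered listen lines come out as:

listen=udp:10.0.1.194:5060
listen=tcp:10.0.1.194:5060
listen=udp:10.0.1.194:5090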

Let’s take another example,

To keep DMQ humming it makes sense to have different DMQ domains for different versions of Kamailio, so in the Kamailio config file template I’ve used the kamailio_version variable in the DMQ address.
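A sketch of what those template lines might look like (the FQDN pattern is inferred from the rendered output below):

modparam("dmq", "server_address", "sip:{{ ansible_default_ipv4.address }}:5090")
modparam("dmq", "notification_address", "sip:dmq-{{ kamailio_version }}.nickvsnetworking.com:5090")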

This means on a Kamailio 5.3 box this URL looks like this in the rendered config:

# ---- dmq params ----
modparam("dmq", "server_address", "sip:10.0.1.194:5090")
modparam("dmq", "notification_address", "sip:dmq-53.nickvsnetworking.com:5090")

Running It

Running it is just a simple matter of calling ansible-playbook and pointing it at the playbook we created; here’s how it looks setting up the 3 hosts from their vanilla state:
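Something along these lines (assuming the playbook is saved as configure_kamailio.yml, per the GitHub copy):

ansible-playbook configure_kamailio.yml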

The great thing about Ansible is that it’s idempotent – this means it will detect if it needs to do each of the tasks specified in the playbook.

So if we run this again it won’t try and add the repo, GPG keys, install Kamailio and load the template, it’ll look and see each of those steps have already been done and skip each of them.

But what if someone makes some local changes on one of the boxes? Let’s look at what happens:

Likewise, if we now decide to change our config we only need to update the template file and Ansible will push it out to all our machines. I’ve added a comment into the Kamailio template, so let’s run it again and see the config pushed out to all the Kamailio instances and them restarting.

Hopefully this gives you a bit more of an idea of how to manage a large number of Kamailio instances at scale. As always, I’ve put my running code on GitHub: the Ansible Playbook (configure_kamailio.yml) and the Kamailio Jinja config template (kamailio.cfg.j2).

PyHSS Update – IMS Cx Support!

As I’ve been doing more and more work with IMS / VoLTE, the requirements / features of PyHSS have grown.

Some key features I’ve added recently:

IMS HSS Features

IMS Cx Server Assignment Request / Answer

IMS Cx Multimedia Authentication Request / Answer

IMS Cx User Authentication Request / Answer

IMS Cx Location Information Request / Answer

General HSS Features

Better logging (IPs instead of Diameter hostnames)

Better Resync Support (For USIMs with different sync windows)

ToDo

There are still some functions in the 3GPP Cx interface description I need to implement:

IMS Cx Registration-Termination Request / Answer

IMS Cx Push-Profile-Request / Answer

Support for Resync in IMS Cx Multimedia Authentication Answer

Keep an eye on the GitLab repo where I’m pushing the changes.

If you’re learning about VoLTE & IMS networks, or building your own, I’d suggest checking out my other posts on the topic.