
SessionS in CGrateS

In a scenario where we don’t know how long an event will be (for example, at the start of a voice call we don’t know how long it’s going to go for, or at the start of a data session we don’t know how much data will be used) but we still need to
A) charge for it and
B) apply some credit control to make sure the subscriber doesn’t consume more than their allowed balance,
that’s when we start to use SessionS.

SessionS is what powers online charging, and it’s done with Unit Reservation – I’ve written about this in painful detail here.

For a voice call, for example, we reserve talk time in advance, before the user actually consumes it: when the call starts we reserve 30 seconds of credit from the user’s balance, then when the user has consumed that first 30 seconds of credit, we go back and request another 30 seconds.
If there’s credit available, we grant it and the call is allowed to continue for another 30 seconds, and the process repeats until either the call ends, or we go back for more credit and there’s none available, at which point we terminate the call.

Why is this important?
We may have multiple sources drawing down on an account at the same time: if you’re on a call while browsing, you’re doing two events that are charged, possibly from the same balance, and we don’t want to give you free calls or data just because you’re able to walk and chew gum at the same time.

CGrateS Agents such as the Asterisk, Kamailio, FreeSWITCH, RADIUS and Diameter Agents handle most of the heavy lifting for us, but understanding how SessionS works made working with these modules much easier, for me at least.

So let’s set the scene: we’re going to create an Account with 10 units of *generic balance (I’m using *generic because if we use time the numbers end up kinda big and it gets confusing to look at) and then consume it over several transactions until all the balance is gone.

In the config we’ve disabled the debit_interval in sessions – usually this is handled by the Agents, but for our demo we’re going to do it manually, so it’s off.
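
For reference, the relevant part of the cgrates.json looks something like this – a minimal sketch, the full config I used is linked at the end of the post:

"sessions": {
    "enabled": true,
    "debit_interval": "0s",
    "chargers_conns": ["*internal"],
    "rals_conns": ["*internal"],
    "cdrs_conns": ["*internal"]
},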

Let’s get setup, we’ll define a charger, and create an account and allocate some balance to it.

#Define default Charger
print(CGRateS_Obj.SendData({
    "method": "APIerSv1.SetChargerProfile",
    "params": [
        {
            "Tenant": "cgrates.org",
            "ID": "Charger_API_Default",
            "RunID": "*Charger_API_Default_RunID",  #Arbitrary String
            "FilterIDs": [],
            "AttributeIDs": ["*none"],
            "Weight": 999,
        }
    ]}))

#Add a balance to the account with type *generic with 10 units of balance
Create_Voice_Balance_JSON = {
    "method": "ApierV1.SetBalance",
    "params": [
        {
            "Tenant": "cgrates.org",
            "Account": "Nick_Test_123",
            "BalanceType": "*generic",
            "Categories": "*any",
            "Balance": {
                "ID": "10_units_generic_balance",
                "Value": "10",
                "Weight": 25,
                "Blocker": "true",       #This stops the Monetary Balance from being used
            }
        }
    ]
}
print(CGRateS_Obj.SendData(Create_Voice_Balance_JSON))

Alright, with that out of the way let’s start a session using SessionSv1.UpdateSession. We’re going to define a CGrateS event to pass to it, and we’ll call it multiple times, changing the usage as we go.

To make our demo easier, I’ve added a little loop, so we can keep deducting balance.

import datetime
import pprint
import uuid

#CGRateS_Obj is the same JSON RPC helper used in the earlier posts in this series

now = datetime.datetime.now()
OriginID = str(uuid.uuid4())
call_event = {
    "RequestType": "*prepaid",
    "ToR": "*generic",
    "Tenant": "cgrates.org",
    "Account": "Nick_Test_123",
    "AnswerTime": "*now",
    "OriginID": str(uuid.uuid1()),
    "OriginHost": "ScratchPad",
}

while input("Enter to continue or q to quit") != "q":
    call_event['Usage'] = str(input("Usage: "))
    result = CGRateS_Obj.SendData(
        {"method": "SessionSv1.UpdateSession", "params": [
            {
                "GetAttributes": False,
                "UpdateSession": True,
                "Subsystem" : "sessions",
                "Tenant": "cgrates.org",
                "ID": OriginID,
                "Context": None,
                "Time": datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S.%fZ"),
                "Event":
                    call_event}
        ]})
    pprint.pprint(result)
print("Quit")

So now with this all in place: the first block of code defines the default charger and adds balance to an account (as the account doesn’t exist yet, this step creates the account too), and the second block of code defines the event.

By running these together, we can start our session.

When you run it you’ll be prompted to press enter to continue or input q to quit; let’s press enter to continue, then you’ll be asked for the usage – I’ve put 1 in the below example.

Enter to continue or q to quit
Usage: 1
Sending Request with Body:
{'method': 'SessionSv1.UpdateSession', 'params': [{'GetAttributes': False, 'UpdateSession': True, 'Subsystem': 'sessions', 'Tenant': 'cgrates.org', 'ID': '8e43c5e4-0b9b-4aaf-8d01-5143677d6a8a', 'Context': None, 'Time': '2024-06-14T22:22:52.279155Z', 'Event': {'RequestType': '*prepaid', 'ToR': '*generic', 'Tenant': 'cgrates.org', 'Account': 'Nick_Test_123', 'AnswerTime': '*now', 'OriginID': 'c86e7f54-2a48-11ef-9862-072e6d04df9b', 'OriginHost': 'ScratchPad', 'Usage': '1'}}]}
{'error': None, 'id': None, 'result': {'MaxUsage': 1}}

Alright, now let’s take a quick sidebar, and check in with cgr-console in a different tab, what do we think is going to show as our balance?

Well, if we run the accounts command from within cgr-console we can see our account which had a balance of 10 before, now has a balance of 9, as we’ve deducted 1 from the balance by inputting it as our usage:

And if we run the active_sessions command in the same console, we see the active sessions, where we can see where that one unit of balance went.

A few things to call out here:

  • The DebitInterval is how often this balance will be deducted. For our test scenario we’ve turned off automatic debiting, but Agents like FreeSWITCH and Kamailio leave this on and automatically tick off time as it passes (obviously this doesn’t work for data, so we’d leave it off)
  • The LoopIndex is how many UpdateSession events the API has handled for this session (the unique session is identified by the ID / CGRID field)
  • SetupTime is blank because we didn’t set it in our UpdateSession
  • The Usage in cgr-console is sometimes shown as nanoseconds, that’s because 1ns is equal to 1 generic unit.

So let’s go back to our Python script, go through the loop again but this time set the usage to 7.

Now if we flip back to cgr-console and check again, we’ll see, as expected that our account balance is now 2, and the active session has 8 of usage.

That’s because we started with 10, then we deducted 1, then we deducted 7, which gives us 2 remaining. If we run active_sessions again at cgr-console we’ll see the Usage of the session is now 8.

And lastly let’s try and take another 7 of balance, knowing we’ve only got 2 units left.

No dice – 7 is greater than 2 of course, so CGrateS stops us there. It’s done its job of making sure we didn’t allocate more of the credit than we were allowed, and told us we have insufficient credit and that this balance is a blocker.

In this little demo we had one service drawing on the balance, but imagine if you’d fired up two copies of the script: you could have those two sources both consuming at the same time, and this is where CGrateS shines; CGrateS can do all the heavy lifting to make sure that the resources are never over-allocated, and that we don’t end up with a negative balance.

When it comes time to terminate the session, there’s a trick to this.

Unit reservation is all about allocating resources in advance, which means we’ve generally taken more from the balance than the user actually ended up consuming, so we have to give this back to the customer.

If we include the Usage field in the TerminateSession request, this must be the total usage for the entire session (start-to-finish), not just since the last UpdateSession API call.

For example, if we allocated 30 seconds of balance at the start of a call, then another 30 seconds as that was consumed, and another 30 seconds when the call got 60 seconds in, we’ve allocated 90 seconds (3x 30 seconds). But if the call ends at a total of 70 seconds, we’d be over billing the customer. This is where we set Usage to 70, and CGrateS will refund the 20 seconds of balance we over charged them (90 seconds allocated – 70 seconds used = 20 seconds refunded to the Account balance).

That’s one way of doing it, but the other option, if we’ve just tracked usage since the last update, is to set LastUsed instead: for our 70 second call with 3x 30 second Session Updates, we set LastUsed to 10 seconds (as we only used 10 seconds of the 30 seconds allocated in the last Update), which will also refund the 20 seconds.
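
To round the demo out, a terminate would look something like this – a sketch following the same pattern as the UpdateSession above, where you’d set either Usage (total for the whole session) or LastUsed (usage since the last update), depending on which way you’re tracking it:

#Terminate the session, refunding anything we over-reserved
call_event['Usage'] = "70"   #Total usage for the whole session (alternatively set 'LastUsed' instead)
result = CGRateS_Obj.SendData(
    {"method": "SessionSv1.TerminateSession", "params": [
        {
            "TerminateSession": True,
            "Tenant": "cgrates.org",
            "ID": OriginID,
            "Event": call_event
        }
    ]})
pprint.pprint(result)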

In practice, you’ll probably use CGrateS Agents like the FreeSWITCH Agent, Asterisk Agent or Kamailio Agent to handle the charging in those applications. The premade CGrateS Agents handle generating the UpdateSession calls and all of this logic under the hood, but it’s super useful to know how it all works.

I’ve put the example cgrates.json file I used and the script for debiting on the Github repo for this post.

Importing building footprints into Forsk Atoll from OpenStreetMap data

Having building footprints inside Atoll is super-duper valuable – it means you can calculate your percentage of homes / buildings covered; after all, geographic coverage and population coverage are two very different things.

Download the data from OSM – if you only need a small area you can use the Export OSM page, or if you need a wider area Geofabrik provides country level exports of the data, or if you’re really keen you can download all the OSM data.

Once you’ve got the export, we’ll load the .gpkg file (or files) into GlobalMapper

Select one layer at a time that you want to export into Atoll. (This also works for roads, geographic boundaries, POIs, etc)

Export the selected layer from Export -> Export Vector / Lidar Format

Set output type to “Shapefile”

Set output filename in “Export Areas” (This will be the output file). If you want to limit the export to a given area you can do that in Export Bounds.

Now we can import this data into Atoll.

File -> Import

Select the exported Shapefile we just created.

Set the projection and import

Bingo – now we’ve got our building footprints.

We can change the style of the layer and the labels as needed.

Now we can use the buildings as the Focus Zone / Compute Zone and then run reports and predictions based on those areas.

For example I can run Automatic Cell Planning with the building layers as the Focus zones, to optimize azimuths, tilts and powers to provide coverage to where people live, not just vacant land.

Importing Global Clutter data into Forsk Atoll

Clutter data describes real world things on the planet’s surface that attenuate signals, for example trees, shrubs, buildings, bodies of water, etc, etc. There’s also different types of trees, some types of trees attenuate signals more than others, different types of buildings are the same.

Getting clutter data used to be crazy expensive, and done on a per country or even per region basis, until the European Space Agency dropped a global dataset free of charge for anyone to use, that covered the entire planet in a single source of data.

So we can use this inside Forsk Atoll for making our predictions.

First things first we’ll need to create an account with the ESA (This is not where they take astronaut applications unfortunately, it just gives you access to the datasets).

Then you can select the areas (tiles) you want to download after clicking the “Download” tab on the right.

We get a confirmation of the tiles we’re downloading, and we’ll get a ZIP file containing the data.

We can load the whole ZIP file (Without needing to extract anything) into GlobalMapper which loads all the layers.

I found the _Map.tif files to be the highest resolution, so I’m only exporting these.

Then we need to export the data to GeoTiff for use in Atoll (The specific GeoTiff format ESA produces them in is not compatible with Atoll hence the need to convert), so we export the layers as Raster / Image format.

Atoll requires square pixels, and we need them in meters, so we select “Calculate Spacing in Other Units”.

Then set the spacing to meters (I use 1m to match everything else, but the data is actually only 10m accurate, so you could set this to 10m).

You probably want to set the Export Bounds to just the areas you’re interested in, otherwise the data gets really big, really quickly and takes forever to crunch.

Now for the fancy part, we need to import it into Atoll.

When we import the data we import it as Raster data (Clutter Classes) with a pixel size of 1m.

Alas, when we exported the data we lost the positioning information, so while we’ve got the clutter data, it’s just there somewhere on the planet – which, with the planet being the size it is, is probably not where you need it.

So I cheat: I start by putting the West and North values to match the values from a Cell Site I’ve already got on the map (I put one in the top left and bottom right corners of the map) and use that as the initial value.

Then – and stick with me, this is very technical – I mess with the values until the maps line up into the correct position. Increase X, decrease Y, dialing it in until the clutter map lines up with the other maps I’ve got.

Right, now we’ve got the data but we don’t have any values.

Each color represents a clutter class, but we haven’t set any actual height or losses for that material.

To know what each colour means we need to RTFM – ESA WorldCover 2020 User Manual.

Which has a table:

Alas the Map Code does not match with the table in the manual, but the colours do, here’s what mine map to:

Which means when hovering over a layer of clutter I can see the type:

Next we need to populate the heights, indoor and outdoor losses for that given clutter. This is a little more tricky as it’s going to vary geography to geography, but there’s indicative loss numbers available online pretty easily.

Once you’ve got that plugged in you can run your predictions and off you go!

Legacy BTS Site manager on Linux

Another post in the “vendors thought Java would last forever but the web would be just a fad” series, this one on getting Nokia BTS Site Manager (which is used to administer the pre-Airscale Nokia base stations) running on a modern Linux distro.

For starters we get the installers (you’ll need to get these from Nokia), and install openjdk-8-jre using whichever package manager your distro supports.

Once that’s installed, then extract the installer folder (Like BTS Site Manager FL18_BTSSM_0000_000434_000000-20250323T000206Z-001.zip).

Inside the extracted folder we’ve got a path like:

BTS Site Manager FL18_BTSSM_0000_000434_000000-20250323T000206Z-001/BTS Site Manager FL18_BTSSM_0000_000434_000000/C_Element/SE_UICA/Setup

The Setup folder contains a bunch of binaries.

We make these executable:

chmod +x BTSSiteEM-FL18-0000_000434_000000*

Then run the binary:

sudo ./BTSSiteEM-FL18-0000_000434_000000_x64.bin

By default it installs to /opt/Nokia/Managers/BTS\ Site/BTS\ Site\ Manager

And we’re done. Your OS may or may not have built a link to the app in your “start menu” / launcher.

You can use one BTS manager to manage several different versions of software, but you need the definitions for those software loaded.

If you want to load the Releases for other versions (like other FLF or FL releases), the simplest way is just to install the BTS Site Manager for each of those versions and use the latest one – you’ll then get the table of installed versions in the “About” section that you can administer.

Reverse Engineering dead protocols – Strand ShowNet

As a kid I did lighting for stage shows.

Turns out it was great training for a career in telecom; I learned basic rigging, working at heights, electrical work, patching rats nests of cables and the shared camaraderie that comes from having stayed up all night working on something no one would ever notice, unless you hadn’t done it.

At the time the Strand 520i lighting console was the coolest thing ever, it had support for 4 DMX universes (2048 channels – Who could ever need more than that!?) and cost more than a new car.

One late Friday night browsing online I found one for sale by a University in the state I live in, for $100 or best offer. You better believe I smashed the buy now button so hard my mouse almost broke.
I was going to own my very own Strand 520.

I spent the weekend reading through the old manuals, remembering how to use all the features, then dragged my partner for the road trip the following Monday morning to pick it up and bring it home.

But before I could do anything fun I had to find a PS/2 keyboard and a VGA screen, which took me a few more days (a visit to the tip was the easiest way to get a hold of them).

Then I needed something to receive DMX – I found everything now uses ArtNet (DMX over IP) and there are visualisers for simulating an arena / stage lighting setup, but they all take ArtNet now, so I ordered a DMX to ArtNet converter.

Inside the unit is pretty much a standard PC (An “OG” Pentium in the Strand 520) with an ISA card for all the lighting control stuff.

The clicking hard drive managed to boot, but I didn’t think it’d last long, having been made more than 25 years prior. So I created a disk image and copied the file system onto a CF card using a CF to IDE adapter. This trick meant it booted faster than ever before.

Clicky HDD before it became a CF

One thing I’d read about online was the VARTA battery had a tendency to leak battery acid all over the PCB.
This one had yet to fully spill its guts, but was looking a bit bulgey and had started to leak a little already.

The battery (I’d read) was only for storing info in a power loss scenario, and if the battery isn’t present it just slows the boot time as everything has to be read from disk, so I took the leap of faith and cut the battery out, and lo, it still all boots.

Slightly furry VARTA battery

So now I was OK to get the desk online properly, but was getting semi-regular lock ups where DMX would stop outputting and inputs on the console were not read, but the underlying PC was still working.

I spent a lot of time debugging this. BIOS settings, interrupts, I dived into how ISA works, replaced the battery I removed with a brand new one, and at one point I broke out the oscilloscope, but nothing worked.

Around the same time I noticed the Ethernet port (BNC!) would work if I just ran plain DOS (could ping from DOS but when the application started the NIC would go dead), which made me think I may be facing a hardware fault with the “CS” – the show processor for the console which is what the motherboard connects to via ISA.

The desk itself with the face off, and the CF adapter being mounted

Alas, being almost 30 years old (this unit was made in 1996), there aren’t a great number of them around to test with, so I couldn’t just try swapping out the “CS” board. What I did find was another complete console on eBay for $50, but it was in the UK, and they weigh a lot.
Shipping this thing was not an option.

But for a bit of extra cash the seller was willing to crack the case open, strip out the two main boards and post me just those. This had the added bonus that the motherboard and CPU of the board sent from the UK were from a 520i, meaning it has the Pentium II processor – this Strand 520 was now going to be a Strand 520i.

A month later a box appeared at my door containing the boards, but the battery on the CS board from the UK had well and truly spilled its guts, leaving some toxic sludge around all the components nearby. A can of PCB cleaner and a toothbrush (which I will not be using to brush teeth with anymore) and I’d cleaned it up as best I could, but the fan output from the board was well and truly dead, with some of the SMD components just eaten by the acid.

So I put everything back into the case and wired it up. The mounts for the Motherboard were slightly different, and the software that is used for the 520i is different from the 520 (without the i).

The HDD from the UK was unable to boot, but I was able to get it to spin up enough to copy off the ~5Mb of files I needed, then I did a fresh install of MS DOS and copied the installer for the StrandOS.

Finally fully functional

Finally I had a stable working console. Not just that but the Strand Networker application was now available to me. So I plugged into the 10Mbps connection and set the console to output to Network as well as DMX.

Enabling the “networker” for network DMX transmission

I cranked open Wireshark and there was a mystery signal sent to the broadcast address on UDP…

I patched a single DMX channel and changed the value and when I viewed the data in Wireshark I could see a hex representation of the DMX 0-255 value.

Easy I thought to myself, it’ll just be a grid of channels, each with their value as hex. Ha! I was wrong.

Turns out Strand ShowNet used a conditional form of “Run Length Encoding” compression, where if you’ve got channels 1 through 5 at 50%, rather than encoding this as 5 bytes each showing 0x80, it uses 2 bytes: one to indicate 5 sequential channels (the run length) and then the value (0x80). Then there’s another bit to denote how many places to move forward and whether the next channel is using RLE or not.
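
If that’s hard to picture, here’s the general run-length idea in a few lines of Python – purely illustrative of the concept, not the actual ShowNet wire format (the code below is what deals with that):

#Each (run_length, value) pair expands out to run_length channels at that level
def rle_decode(pairs):
    channels = []
    for run_length, value in pairs:
        channels.extend([value] * run_length)
    return channels

#Channels 1-5 at 50% (0x80) then one channel at full (0xFF)
print(rle_decode([(5, 0x80), (1, 0xFF)]))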

The code got messy; it’s not the best thing I’ve ever written but it works for 2 full universes of DMX (I need to spend more time to understand where the channel encoding overflow happens as I end up a few channels ahead of where I should be on universe 3 and above).

The code is available on Github and I’d love to know if anyone’s using it with these old dinosaurs!

https://github.com/nickvsnetworking/PyShowNet

Before there was Grafana – How Telcos did metrics & observability at scale before computers

I love Grafana. I love metrics and observability. Nothing is more powerful than being able to see what’s going on inside your network/application/solar setup/weather station – you name it.

It’s never been easier to see what’s going on.

If I wanted to monitor my web app as I onboard more customers, Grafana is the go-to tool, but how was it done before the computer age? Let’s go back to the 1940s and look at how the telephone network handled observability and metrics…

This starts with introducing the “Call Meter”, “Subscriber Meter” or “Subs Meter” for short.

Detail of mechanical call meter Strathfield South Exchange
Source – field, field, field and chang’s brilliantly beautiful “That Exchange Project

The concept is pretty simple. Each telephone service (“subscriber” in telecom parlance) provided by the local telephone exchange gets a subscriber meter or “subs meter”.

When the subscriber (customer) makes a call, and the call is answered, a reverse of polarity on the line ticks the subscriber meter over by one digit.

Each of the meters on the left is a single telephone subscriber, each time they make a call, the meter ticks up by one position. Source – field, field, field and chang’s brilliantly beautiful “That Exchange Project

As you can imagine if you’ve got a telephone exchange that serves 10,000 customers, well you need 10,000 subscriber meters…

You need a lot of meters… Source – field, field, field and chang’s brilliantly beautiful “That Exchange Project

At the end of the month, someone takes a photo of all the meters on a film camera, sends it off to a billing center where they develop the photo, then calculate the difference in values from last month’s meter reading photo and this month’s meter reading photo, and bingo – there’s the number of calls the person made. You tabulate the cost on an adding machine and send off the invoice.

Each of the little blocks is a single subscriber to meter and the weird cone thing held is a hood for the camera to photograph the values – Source The Communications Museum Trust

Today we’d just use PromQL:

delta(subscriber_meter{phone_number="123456"}[30d])

Optional Sidebar for those asking “but what about Long Distance calls where you pay per minute?” – In a world where you pay per local call, regardless of length, this works just fine, but more complicated scenarios like long distance calling presented a challenge. This was solved by reversing the line polarity at predefined intervals, to keep ticking up the subscriber meter during the call. Exchange Clocks provided a number of pulse outputs, like 1 pulse per second, 1 pulse per minute, etc, and the 1 pulse per minute signal could be hooked up to the line reversal circuit for long distance calls, to trigger a line reversal every minute. This meant if a local call was $0.40 untimed, and you made long distance calls at $0.40 per minute, you just needed the exchange to reverse the line every minute to pulse the meter – 10 increments on the meter could mean 10 x $0.40 local calls or 10 minutes of $0.40 per minute long distance.

These meters were originally just for metering traffic, but engineers in the telephone network realised they could be used as generic “counters” for just about anything in the telephone network.

Let’s imagine you want to know how often a trunk line to another exchange runs out of capacity – well, you simply wire a meter to be triggered each time that condition happens, and now you’ve got a counter for each time that event occurs.

Now let’s say you want to know how often you run out of final selectors – well, throw another counter on it.

These same meters, can be wired to count fault conditions.

Mechanical fault meters on old step-by-step test desk, Queanbeyan Exchange
Source – field, field, field and chang’s brilliantly beautiful “That Exchange Project

A pencil and a logbook is how you keep track of frequency of the event being triggered, and if you want to graph it out, graph paper, not Grafana.

As telephone systems increased in complexity more and more meters were used to track what’s going on, up until the time that computers could start to handle that process, when “Electronic Customer Metering” came into play with the early Stored Program Control exchanges.

Metering and charging equipment in Blakehurst Exchange
Source – field, field, field and chang’s brilliantly beautiful “That Exchange Project

Observability and Metrics are so important for making software, but every time I define a “counter” in software for an event, I’m always reminded of clicking meters in a telephone exchange, knowing this is how it used to be done.

It’s not Rocket Science – Tracking performance of OneWeb terminals

Last year we deployed some Hughes HL1120W OneWeb terminals in one of the remote cellular networks we support.

Unfortunately it was failing to meet our expectations in terms of performance and reliability – We were seeing multiple dropouts every few hours, for between 30 seconds and ~3 minutes at a time, and while our reseller was great, we weren’t really getting anywhere with Eutelsat in terms of understanding why it wasn’t working.

Luckily for us, Hughes (who manufacture the OneWeb terminals) have an unprotected API (*facepalm*) from which we can scrape all the information about what the terminal sees.

As that data is in an API we have to query, I knocked up a quick Python script to poll the API and convert the data into Prometheus metrics, so we could put it into Grafana and visualise what’s going on with the terminals and the constellation.
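
The script is only a few dozen lines – the skeleton looks roughly like this (a sketch: the endpoint path and JSON field names here are placeholders, the real ones are in the repo linked further down):

import time
import requests
from prometheus_client import Gauge, start_http_server

#Placeholder URL and field names - the real ones are specific to the Hughes terminal API
TERMINAL_API = "http://192.168.0.1/api/status"

sinr_gauge = Gauge('terminal_sinr', 'SINR reported by the terminal')
online_gauge = Gauge('terminal_online', '1 if the terminal reports it is online')

start_http_server(8000)   #Port Prometheus scrapes us on
while True:
    data = requests.get(TERMINAL_API, timeout=5).json()
    sinr_gauge.set(data.get('sinr', 0))
    online_gauge.set(1 if data.get('state') == 'online' else 0)
    time.sleep(10)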

After getting all this into Grafana and combining it with the ICMP Blackbox exporter (we configured Blackbox to send HTTP requests and ICMP pings out of each of the different satellite terminals we had (a mix of OneWeb and others)) we could see a pattern emerging where certain “birds” (satellites) that passed overhead would come with packet loss and dropouts.

It was the same satellites each time that led to the drops, which allowed us to pinpoint to say when we see this satellite coming over the horizon, we know there’s going to be some packet loss.

In the end Eutelsat acknowledged they had two faulty satellites in the orbit we are using, hence seeing the dropouts, and they are currently working on resolving this (but that actually does require rockets, so we’re left without a usable service for the time being) but it was a fun problem to diagnose and a good chance to learn more about space.

Packet loss on the two OneWeb terminals (Not seen on other constellation) correlated with a given satellite pass

I’ve put the source code for the Hughes terminal Prometheus Exporter onto Github for anyone to use.

The repo has instructions for use and the Grafana templates we used.

At one point I started playing with the OneWeb Ephemeris data so I could calculate the azimuth and elevation of each of the birds from our relative position, and work out distances and angles from the terminal. The maths was kinda fun, but oddly the datetimes in the OneWeb ephemeris data set seem to be about 10 years and 10 days behind the current datetime – possibly this gives an insight into OneWeb’s two day outage at the start of the year due to their software not handling leap years.

Despite all these teething issues I’m still optimistic about OneWeb, Kuiper and Qianfan (Thousand Sails) opening up the LEO market and covering more people in more places.

Update: Thanks to Scott via email who sent this:
One note, there’s a difference between GPS time and Unix time of about 10 years 5 days. This is due to a) the Unix epoch starting 1970-01-01 and the gps epoch starting 1980-01-05 and b) gps time is not adjusted for leap seconds, and ends up being offset by an integer number of seconds. 

Update: clarkzjw has published an open source tool for visualizing the pass data https://github.com/clarkzjw/LEOViz

Automatic Cell Planning with Atoll: Site Selection

One of the really neat features about using automated RF planning tools like Forsk Atoll is you’re able to get it to automatically try out tweaks and look at how that impacts performance.

In the past you’d adjust something, run the simulation again, look at the results and compare to what you had before.

Atoll’s ACP (Automatic Cell Planning) module allows you to automate this, and in most cases, it does a better job than I would!

Today we’ll look at Cell Site Selection in Atoll.

To begin with, we’ll limit the computation area down to a polygon we draw around the area in question.

In the Geo tab we’ll select Zones -> Computation Zone and select Edit

We’ll create a new Polygon and draw around the area we are going to analyze. You can automate this step based on population levels, etc, if you’ve got that data present.

So now we’ve set our computation area to the selection, but if we didn’t do this, we’d be computing for the whole world, and that might take a while…

Generating Candidate Sites

Atoll sucks at this – I’ve found if your computation zone is set and it’s not a rectangle, bad things happen, so I’ve written a little script to generate candidates for me.
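
The script is nothing special – something along these lines spits out a grid of candidate points as a CSV for Atoll to import (a sketch: the bounding box coordinates and spacing are just example values):

import csv
import math

#Bounding box of the target area (decimal degrees) and candidate spacing in meters
min_lat, max_lat = -37.90, -37.80
min_lon, max_lon = 144.90, 145.00
spacing_m = 500

lat_step = spacing_m / 111320.0
lon_step = spacing_m / (111320.0 * math.cos(math.radians((min_lat + max_lat) / 2)))

with open('candidates.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['Name', 'Latitude', 'Longitude'])
    i = 0
    lat = min_lat
    while lat <= max_lat:
        lon = min_lon
        while lon <= max_lon:
            writer.writerow(['Candidate_%s' % i, round(lat, 6), round(lon, 6)])
            i += 1
            lon += lon_step
        lat += lat_step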

Creating a new ACP Job

From the Network tab, right click on ACP Automatic Cell Planning and select New

Optimization Tab

Before we can define all the specifics of what we’re looking to plan / improve, we need to set some limits on the software itself and tell it what we’re looking to improve.

The resolution defines how precise the results should be, and the iterations defines how many changes the software should run through.

The higher the number of iterations, the better the results, but it’s not linear – the improvement between 1000 iterations and 1,000,000,000 iterations is typically pretty minor, and this is because ACP works on a kind of “getting warmer” philosophy, where it changes a value up or down, looks at the overall result, and if the result was better, changes the value again until it stops getting better.

As I’m working in a fairly small area I’m going to set 100 iterations and a 50m resolution.

In the optimization tab we can also set constraints. For example, we’re looking at where to place cell sites in an area, and as far as Atoll is concerned, if we just throw hundreds of sites at an area we’ll have pretty good results, but the economics of that don’t work, so we can set constraints – for site selection we may want to set the max number of cell sites. As we are importing ~5k candidate locations, we probably don’t want to build 5k cell sites 20m apart, so set this to be a reasonable number for your geography.

When using ACP for optimization, as we’ll see later on, we can also set cost constraints regarding the cost to make changes, but for now this is just going to pick the best cell site locations for us.

Objectives Tab

Next up we’ll need to set up Automatic Cell Planning’s objectives.

For ACP to be an effective tool we need to define what we’re looking for in terms of success, you can’t just throw it some values and say “Make it better” – we need to define what parameters we’re looking to improve. We do this by setting Objectives.

Your objectives are going to be based on your needs and wants, but for this example we’re building a greenfield network, so we want to offer coverage over an area, as well as good RSRP and RSRQ. So we’ll set the objective to coverage of 95% of the Computation Zone for this post, with a secondary objective of improving RSRP and RSRQ.

But today I’m modeling for coverage, so let’s set that:

As we’re planning for LTE we need to set the UE parameters, as I’m planning for a mobile network, I’ll need to set the service type and terminal.

Reconfiguration

Now we’ve defined the Objectives, it’s time to define what values ACP can mess with to try and achieve them. For some ACP runs you may be adjusting tilts or azimuths, swapping out antennas, etc, but today we’re looking for where we can put cell sites to most effectively serve our target area.

Now we import our candidate list. This might be a list of potential towers you can use, or in my case, for something greenfield, I’m just importing a list of points on a map every X meters to find the best locations to place towers.

From the “Reconfiguration” tab, we’ll select “Setup” to add the sites we want to evaluate.

Atoll has “Automatic Candidate Positioning” which allows it to generate pins on the map, but I’ve not had any luck with it, instead I’m importing a list of candidates I’ve generated via a little Python script, so I’ll select “Import from File”.

Pick my file and set the parameters for importing the data like so.

Now we’ve got candidates for cell sites defined, we set the station template to populate and then we’re good to go.

Running ACP

Once you’ve tweaked all your ACP values as required, we can run the ACP job.

As ACP runs you’ll see a graph showing the objectives and the levels it needs to reach to satisfy them, this step can take a super dooper long time – Especially if your computation zone is large or your number of candidates is large.

But eventually we’ll be a lot older and wearier, ACP will have completed, and we can check out the Optimization it’s created.

In my case the objectives failed to be met, but that’s OK for me.

Once it’s completed, the Changes tab outlines the recommended changes, and the Objectives tab outlines how this has performed against the criteria we outlined at the start. If we’re happy with the result, we can Commit the changes to put them on the map from the Commit tab.

With that done I weed out the sites in impractical locations, like the ones in the sea…

Now we’ve got the sites plugged in, the next thing we’ll start doing is optimizing them.

When we’re dealing with greenfield builds like we are today, the “Move to highest location within X Meters” function is super useful – if there’s a high point on a property, we want to build our tower on that highest point, and this function moves the candidate there for us.

One thing to note is this just plans our grid. It won’t adjust azimuths, downtilts, etc, in one operation. We need to use another ACP operation to achieve that, and that’s the content of a different post!

StatS in CGrateS

The StatS subsystem allows us to calculate statistics based on CGrateS events.

Each StatS object contains one or more “metrics”, which are things like Average call duration, Total call duration, Average call cost, or totals and averages of other fields.

The first thing we’ll need to do is enable stats in our JSON config file:

"stats": {
"enabled": true,
"string_indexed_fields": ["*req.Account","*req.RunID","*req.Destination"],
},

With that done we’re ready to create our first StatS entry, this one is pretty much a burger-with-the-lot, so let’s take a look:

{
    "method": "APIerSv1.SetStatQueueProfile",
    "params": [
        {
            "ID" : "StatQueueProfile_VoiceStats",
            "QueueLength": 10000000,
            "TTL": -1,
            "MinItems": 0,
            "FilterIDs": [],
            "Metrics": [
                {"FilterIDs": [],"MetricID": "*tcd"},
                {"FilterIDs": [],"MetricID": "*tcc"},
                {"FilterIDs": [],"MetricID": "*asr"},
                {"FilterIDs": [],"MetricID": "*acd"},
                {"FilterIDs": [],"MetricID": "*ddc"}
            ],
            "Stored": True,
        }
    ]
}

So what have we just done?

Well, we’ve created a StatQueueProfile named StatQueueProfile_VoiceStats, in which we’ll store a maximum of 10000000 datapoints (this is important, because to calculate an average we need to know all the previous datapoints), for a maximum of forever (because TTL is -1 – if we wanted to store for 1 hour we’d set TTL to 1h).

We’re not matching any FilterIDs, but based on what we covered in the post on FilterS, you can imagine using this to match calls from a given Account / customer, or to a specific group of destinations, or maybe from a given supplier, etc, etc.

What we do have that’s interesting is we have defined a series of metrics.

The docs page of CGrateS explains all the available metrics and what they mean (we’ve also mapped them in the CGrateS UI), but the ones I’ve included above are Total Call Duration (*tcd), Total Call Cost (*tcc), Answer Seizure Ratio (*asr), Average Call Duration (*acd) and Distinct Destination Count (*ddc).

So what happens if we now generate a bunch of calls? Well, for starters as we’ve got no FilterS defined here, every call will match this StatQueueProfile, and so we’ll collect data for each.

The example code I’ve provided in the repo for this post generates a bunch of calls, and we can check the values of all our Metrics with GetQueueStringMetrics for our StatQueue:

{'method': 'StatSv1.GetQueueStringMetrics', 'params': [{'Tenant': '', 'ID': 'StatQueueProfile_TalkTime', 'APIOpts': {}}], 'id': 11}
{'error': None,
'id': 11,
'result': {'*acd': '8m4.4s',
'*asr': '100%',
'*ddc': '50',
'*tcc': '5396',
'*tcd': '6h43m40s'}}

We can now see the values of each metric.

If we’ve got a TTL set, old values that have existed in the QueueProfile longer than the TTL are removed, but we can also manually clear the values by using the ResetStatQueue endpoint:

{"method":"StatSv1.ResetStatQueue","params":[{"Tenant":"cgrates.org","ID":"StatQueueProfile_TalkTime"}],"id":4}

Which resets all the values back to zero / null.

One thing to keep in mind is you can’t modify a StatQueue object via the API without resetting the values.

string_indexed_fields in the config file

Sidebar on this – specifying string_indexed_fields means that CGrateS will not evaluate every field against Filter rules, but only those defined here. If you’ve got an event with, say, 20 fields (AnswerTime, Account, Subject, Destination, RunID, SetupTime, extra fields, etc), each of these gets evaluated against a filter, which is pretty processor intensive if your FilterS only ever look at Account and Destination. By specifying which fields are indexed here to only the fields you use in your filters, you can boost your performance. On the flip side, you can leave this blank to evaluate all fields, but you’ll take a performance hit by doing so.

CGrateS time Metas

There are so many ways you can format time for things like Expiry or ActionPlans in CGrateS, this is mostly just a quick reference for me:

  • *asap (Now)
  • *now
  • *every_minute
  • *hourly
  • *monthly
  • *monthly_estimated
  • *yearly
  • *daily
  • *weekly
  • mo+1h2m
  • *always (What?)
  • *month_end
  • *month_end+1h2m
  • +20s
  • 1375212790
  • +24h
  • 2016-09-14T19:37:43.665+0000
  • 20160419210007.037
  • 31/05/2015 14:46:00
  • 08.04.2014 22:14:29
  • 20131023215149
  • “2013-12-30 15:00:01 +0430 +0430”
  • “2013-12-30 15:00:01 +0000 UTC”
  • “2013-12-30 15:00:01”

Stolen from: https://github.com/cgrates/cgrates/blob/8fec8dbca1f28436f8658dbcb8be9d03ee7ab9ee/utils/coreutils_test.go#L242
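
As an example of where these get used, here’s roughly what scheduling an ActionPlan to fire at the end of every month looks like (a sketch – the Action_Monthly_Topup actions are assumed to already exist, and CGRateS_Obj is the same helper from the other posts):

print(CGRateS_Obj.SendData({
    "method": "APIerSv1.SetActionPlan",
    "params": [
        {
            "Id": "ActionPlan_Monthly_Topup",
            "ActionPlan": [
                {"ActionsId": "Action_Monthly_Topup", "Time": "*month_end", "Weight": 10}
            ],
            "Overwrite": True,
            "ReloadScheduler": True
        }
    ]}))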

CGrateS in Baby Steps – Part 5 – Events, Agents & Subsystems

Up until this point in the series, I’ve tried to hide all the complexity of CGrateS, so people following along can see some progress and feel like they’re making it somewhere with CGrateS, but it’s time to tear off the plaster and talk about the actual concepts – what’s under the hood, and how all the components interact – as it’ll make it much easier for us to learn more about how to use CGrateS.

This will be the last post in the “CGrateS in Baby Steps” series (which I started in 2022). If you’ve made it this far, congratulations – all the future posts will be on specific topics and build upon the concepts we’ve covered here.

This took me a while to grasp – CGrateS is both crazy complex and beautifully simple, but getting to the stage where you can “see through the matrix” on CGrateS and see the beautiful simplicity involves a bit of understanding how everything fits together.

Once you can see the pattern and understand the building blocks, everything else CGrateS related becomes super simple.

Agents

In CGrateS, Agents are consumers of the services. That’s a super generic answer, but let’s take a closer look at what that actually means with some examples:

Diameter is a protocol that can be used for Online Charging.
CGrateS has a common interface for API calls that can perform Online Charging.
The CGrateS Diameter Agent translates between Diameter on one side, and CGrateS API calls on the other.

Likewise, if we want to speak RADIUS, we can use the CGrateS RADIUS Agent, which translates between RADIUS and the CGrateS API calls.

FreeSWITCH, Asterisk and Kamailio don’t use specific protocols like Diameter or Radius, but rather modules or plugins to connect that application to a CGrateS Agent, and they all just end up talking the same CGrateS API calls.

Lastly, there’s even an HTTP agent so you could define your own agent to talk another protocol if you wanted to use CGrateS for anything else (We’ve been playing with CAMEL based charging with CGrateS and 5GC charging).

The config for each of the CGrateS Agents happens in the cgrates.json config file (Typically in /etc/cgrates).

Because the Agents just translate everything into API calls, logic for billing a call from FreeSWITCH is the same as for Diameter, the same as for RADIUS, the same as for SIP, the same as for Asterisk.

The Agents just translate all the domain-specific stuff into the common CGrateS RPC API, which we’ve been working with up to this point.

This is a key part to understand, because once you understand how to do the CGrateS part, moving from Asterisk to FreeSWITCH, to DNS, to RADIUS, to any other Agent, it’s all the same to you.

The Agents just translate domain-specific stuff (Diameter requests, CSV files, Asterisk Calls, FreeSWITCH calls, etc, etc) and act as a translator to translate these requests into CGrateS RPC API calls.

On the left side of the image below are the Agents, and on the right side, the Subsystems that do stuff with things.

Subsystems

So with these API calls, where do they go, what do they do?

Well, it’s the Subsystems that do the things.

What things?

Well, everything of use.

Each subsystem has a purpose, AttributeS transforms stuff, EeS exports CDRs, RALs applies our charging logic, CDRs writes CDRs to StorDB, etc, etc.

In each event we can set flags to denote which subsystems it should be routed to, and we can set the links between components in our cgrates.json file.

Based on the flags, we pass events between these subsystems.

Events

So our Agents create the API calls, which contain Events, which are JSON RPC calls.

They look like all the API examples we’ve played with, because that’s exactly what they are.

We can access them via the JSON RPC API, but when you start a call on Kamailio, the Kamailio Agent generates a JSON RPC API call containing an Event into CGrateS for that call on Kamailio.

When you send a DNS request, the DNS Agent translates this DNS request into a CGrateS JSON RPC API call containing the event for the DNS request.

Let’s take an example, we’re going to use the ErS as it’s the simplest to demonstrate with.

I’ve already written about ERS, the Event Reader Service, which reads text files / CSVs so we can import them into CGrateS.

So if you set up your environment per the tutorial above (but don’t load the CSV yet), we’ll start running some experiments…

Anatomy of an Event

We can “sniff” the events bouncing around between the Agent and the various Subsystems in real time, by using ngrep:

sudo ngrep -t -W byline port 2012 or port 2080 or port 8021 or port 2014 or port 2053 -d any

So now we’ve got ngrep running, we can move our CSV file in to be processed in another tab.

Plonking the CSV file into the path ERS is monitoring will mean the ERS Agent generates a CGrateS JSON RPC “event” for each row in the file – it’ll look something like this:

#############
T 2024/12/22 09:09:47.357151 127.0.0.1:50456 -> 127.0.0.1:2012 [AP] #404
{"method":"CDRsV1.ProcessEvent","params":[{"Flags":["*rals"],"Tenant":"cgrates.org","ID":"c2ce33d","Time":"2024-12-22T09:09:47.356294838+11:00","Event":{"Account":"61412341234","Animal":"Dog","CGRID":"6330859b7c38c1d508f9e5e0043950079e54fef1","Category":"call","Destination":"61812341234","OriginID":"1lklkjfds","RequestType":"*rated","SetupTime":"2024-01-01 01:00:00","Source":"*sessions","Subject":"61812341234","ToR":"*voice","Usage":"60","Value0":"2024-01-01 01:00:00","Value1":"2024-01-01 01:01:00","Value2":"Nick","Value3":"60","Value4":"61412341234","Value5":"61812341234","Value6":"Dog","Value7":"1lklkjfds"},"APIOpts":{}}],"id":1}

##
T 2024/12/22 09:09:47.357427 127.0.0.1:50456 -> 127.0.0.1:2012 [AP] #406
{"method":"AttributeSv1.ProcessEvent","params":[{"Tenant":"cgrates.org","ID":"c2ce33d","Time":"2024-12-22T09:09:47.356294838+11:00","Event":{"Account":"61412341234","Animal":"Dog","CGRID":"6330859b7c38c1d508f9e5e0043950079e54fef1","Category":"call","Destination":"61812341234","OriginID":"1lklkjfds","RequestType":"*rated","SetupTime":"2024-01-01 01:00:00","Source":"*sessions","Subject":"61812341234","ToR":"*voice","Usage":"60","Value0":"2024-01-01 01:00:00","Value1":"2024-01-01 01:01:00","Value2":"Nick","Value3":"60","Value4":"61412341234","Value5":"61812341234","Value6":"Dog","Value7":"1lklkjfds"},"APIOpts":{"*context":"*cdrs","*subsys":"*cdrs"}}],"id":2}

##
T 2024/12/22 09:09:47.422623 127.0.0.1:2012 -> 127.0.0.1:50456 [AP] #408
{"id":2,"result":null,"error":"NOT_FOUND"}

##
T 2024/12/22 09:09:47.422788 127.0.0.1:50456 -> 127.0.0.1:2012 [AP] #410
{"method":"ChargerSv1.ProcessEvent","params":[{"Tenant":"cgrates.org","ID":"c2ce33d","Time":"2024-12-22T09:09:47.356294838+11:00","Event":{"Account":"61412341234","Animal":"Dog","CGRID":"6330859b7c38c1d508f9e5e0043950079e54fef1","Category":"call","Destination":"61812341234","OriginID":"1lklkjfds","RequestType":"*rated","SetupTime":"2024-01-01 01:00:00","Source":"*sessions","Subject":"61812341234","ToR":"*voice","Usage":"60","Value0":"2024-01-01 01:00:00","Value1":"2024-01-01 01:01:00","Value2":"Nick","Value3":"60","Value4":"61412341234","Value5":"61812341234","Value6":"Dog","Value7":"1lklkjfds"},"APIOpts":{"*context":"*cdrs","*subsys":"*cdrs"}}],"id":3}

##
T 2024/12/22 09:09:47.451702 127.0.0.1:2012 -> 127.0.0.1:50456 [AP] #412
{"id":3,"result":[{"ChargerSProfile":"DEFAULT","AttributeSProfiles":null,"AlteredFields":["*req.RunID"],"CGREvent":{"Tenant":"cgrates.org","ID":"c2ce33d","Time":"2024-12-22T09:09:47.356294838+11:00","Event":{"Account":"61412341234","Animal":"Dog","CGRID":"6330859b7c38c1d508f9e5e0043950079e54fef1","Category":"call","Destination":"61812341234","OriginID":"1lklkjfds","RequestType":"*rated","RunID":"DEFAULT","SetupTime":"2024-01-01 01:00:00","Source":"*sessions","Subject":"61812341234","ToR":"*voice","Usage":"60","Value0":"2024-01-01 01:00:00","Value1":"2024-01-01 01:01:00","Value2":"Nick","Value3":"60","Value4":"61412341234","Value5":"61812341234","Value6":"Dog","Value7":"1lklkjfds"},"APIOpts":{"*context":"*cdrs","*subsys":"*chargers"}}}],"error":null}

#
T 2024/12/22 09:09:47.452021 127.0.0.1:50456 -> 127.0.0.1:2012 [AP] #413
{"method":"Responder.GetCost","params":[{"Category":"call","Tenant":"cgrates.org","Subject":"61812341234","Account":"61412341234","Destination":"61812341234","TimeStart":"2024-01-01T01:00:00+11:00","TimeEnd":"2024-01-01T01:00:00.00000006+11:00","LoopIndex":0,"DurationIndex":60,"FallbackSubject":"","RatingInfos":null,"Increments":null,"ToR":"*voice","ExtraFields":{"Animal":"Dog","Value0":"2024-01-01 01:00:00","Value1":"2024-01-01 01:01:00","Value2":"Nick","Value3":"60","Value4":"61412341234","Value5":"61812341234","Value6":"Dog","Value7":"1lklkjfds"},"MaxRate":0,"MaxRateUnit":0,"MaxCostSoFar":0,"CgrID":"","RunID":"","ForceDuration":false,"PerformRounding":true,"DenyNegativeAccount":false,"DryRun":false,"APIOpts":{"*context":"*cdrs","*subsys":"*chargers"}}],"id":4}

#
T 2024/12/22 09:09:47.465711 127.0.0.1:2012 -> 127.0.0.1:50456 [AP] #414
{"id":4,"result":{"Category":"call","Tenant":"cgrates.org","Subject":"61812341234","Account":"61412341234","Destination":"61812341234","ToR":"*voice","Cost":14,"Timespans":[{"TimeStart":"2024-01-01T01:00:00+11:00","TimeEnd":"2024-01-01T01:01:00+11:00","Cost":14,"RateInterval":{"Timing":{"ID":"*any","Years":[],"Months":[],"MonthDays":[],"WeekDays":[],"StartTime":"00:00:00","EndTime":""},"Rating":{"ConnectFee":0,"RoundingMethod":"*up","RoundingDecimals":4,"MaxCost":0,"MaxCostStrategy":"","Rates":[{"GroupIntervalStart":0,"Value":14,"RateIncrement":60000000000,"RateUnit":60000000000}]},"Weight":10},"DurationIndex":60000000000,"Increments":[{"Duration":0,"Cost":0,"BalanceInfo":{"Unit":null,"Monetary":null,"AccountID":""},"CompressFactor":1},{"Duration":60000000000,"Cost":14,"BalanceInfo":{"Unit":null,"Monetary":null,"AccountID":""},"CompressFactor":1}],"RoundIncrement":null,"MatchedSubject":"*out:cgrates.org:call:*any","MatchedPrefix":"618","MatchedDestId":"Dest_AU_Fixed","RatingPlanId":"RatingPlan_VoiceCalls","CompressFactor":1}],"RatedUsage":60000000000,"AccountSummary":null},"error":null}

#
T 2024/12/22 09:09:47.470035 127.0.0.1:2012 -> 127.0.0.1:50456 [AP] #415
{"id":1,"result":"OK","error":null}

Sidebar – you’re going to spend a lot of time with `ngrep`.

Alright, that event probably looks familiar, after all, it’s the same structure as the API requests we’ve made to CGrateS so far, to set rates and handle accounts.

But what we’re witnessing here isn’t us making an API request to the JSON RPC interface from a Python script, it’s the ERS Agent inside CGrateS, calling CGrateS.

The ERS Agent inside CGrateS reads the CSV file we dropped in, and based on what we had set in the ERS section of the CGrateS config file (cgrates.json), the ERS Agent creates JSON RPC events and sends them to CGrateS for processing.

You may be thinking “Wow, the ERS Agent is really dumb, it just sends an API request (events)”, and you’d be right.

We could replace the ERS Agent with a Python script to read the CSV and send the same request, and we’d get the exact same outcome, but CGrateS is mostly “batteries included” so we don’t have to.
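
If you wanted to prove that to yourself, a rough equivalent would look something like this (a sketch – the column positions match the capture above, and 2080 is the default JSON RPC HTTP port):

import csv
import requests

#Read each row of the CDR CSV and fire it at CGrateS as a CDRsV1.ProcessEvent,
#much like the ERS Agent does
with open('cdrs.csv') as f:
    for row in csv.reader(f):
        print(requests.post('http://localhost:2080/jsonrpc', json={
            "method": "CDRsV1.ProcessEvent",
            "params": [{
                "Flags": ["*rals"],
                "Tenant": "cgrates.org",
                "Event": {
                    "RequestType": "*rated",
                    "ToR": "*voice",
                    "Category": "call",
                    "Source": "*sessions",
                    "SetupTime": row[0],
                    "Usage": row[3],
                    "Account": row[4],
                    "Destination": row[5],
                    "Subject": row[5],
                    "OriginID": row[7]
                },
                "APIOpts": {}
            }],
            "id": 1
        }).json())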

Ok, so you’ve heard me drum in the fact that Agents are pretty simple, and all they do is make JSON RPC requests for the events which are sent to CGrateS. So now what happens?

Well, the event is calling CDRsV1.ProcessEvent, so that means the Event is passed by the cgr-engine to the CDRs subsystem.

What does the CDRs subsystem do with it? Well, that’s going to depend on what’s in our cgrates.json config file.

In the above example, CDRs is set up with connections to the different subsystems – AttributeS, ChargerS and RALs are all the subsystems linked from here.

Having these links here does not force the Event to always route to these Subsystems, but unless we’ve got the links there, the Event won’t be able to get routed from CDRs to that subsystem if we want it to.

But we can see what’s going to happen with this request based on our CDRsV1.ProcessEvent event: it’s got Flags set to *rals, so we know it wants RALs to be called.

Let’s take a closer look at the API call:

{
    "method": "CDRsV1.ProcessEvent",
    "params": [
        {
            "Flags": ["*rals"],
            "Tenant": "cgrates.org",
            "ID": "c2ce33d",
            "Time": "2024-12-22T09:09:47.356294838+11:00",
            "Event": {
                "Account": "61412341234",
                "Animal": "Dog",
                "CGRID": "6330859b7c38c1d508f9e5e0043950079e54fef1",
                "Category": "call",
                "Destination": "61812341234",
                "OriginID": "1lklkjfds",
                "RequestType": "*rated",
                "SetupTime": "2024-01-01 01:00:00",
                "Source": "*sessions",
                "Subject": "61812341234",
                "ToR": "*voice",
                "Usage": "60"
            },
            "APIOpts": {}
        }
    ],
    "id": 1
}

So looking in ngrep we see our CDRsV1.ProcessEvent request makes it to the CDRs module with ID 1.

The API call has Flags set to *rals so CDRs will call RALs, and inside our config the CDRs section has a link (rals_conns) to RALs – if we didn’t have that link, CGrateS wouldn’t know how to connect to RALs, and the event would fail.
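
That part of the config looks something along these lines – a minimal sketch of the cdrs section using internal connections:

"cdrs": {
    "enabled": true,
    "rals_conns": ["*internal"],
    "attributes_conns": ["*internal"],
    "chargers_conns": ["*internal"]
},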

We’ve also got connections to AttributeS configured in the config and we can see the RPC call to AttributeS.ProcessEvent which is the same as if we were to call it directly via the AttributeS API.

{
    "method": "AttributeSv1.ProcessEvent",
    "params": [
        {
            "Tenant": "cgrates.org",
            "ID": "c2ce33d",
            "Time": "2024-12-22T09:09:47.356294838+11:00",
            "Event": {
                "Account": "61412341234",
                "Animal": "Dog",
                "CGRID": "6330859b7c38c1d508f9e5e0043950079e54fef1",
                "Category": "call",
                "Destination": "61812341234",
                "OriginID": "1lklkjfds",
                "RequestType": "*rated",
                "SetupTime": "2024-01-01 01:00:00",
                "Source": "*sessions",
                "Subject": "61812341234",
                "ToR": "*voice",
                "Usage": "60"
            },
            "APIOpts": {
                "*context": "*cdrs",
                "*subsys": "*cdrs"
            }
        }
    ],
    "id": 2
}

Note at the bottom the APIOpts section tells us this API call was made by the *cdrs subsystem and the ID is 2 (this is a different request to the original CDRsV1.ProcessEvent request which had ID 1 – we can use this to match responses to requests).

Again, because our config also includes links to the ChargerS and RALs subsystems, we’ll see requests to (you guessed it) ChargerS (ChargerSv1.ProcessEvent) and RALs (Responder.GetCost).

T 2024/12/22 09:09:47.452021 127.0.0.1:50456 -> 127.0.0.1:2012 [AP] #413
{"method":"Responder.GetCost","params":[{"Category":"call","Tenant":"cgrates.org","Subject":"61812341234","Account":"61412341234","Destination":"61812341234","TimeStart":"2024-01-01T01:00:00+11:00","TimeEnd":"2024-01-01T01:00:00.00000006+11:00","LoopIndex":0,"DurationIndex":60,"FallbackSubject":"","RatingInfos":null,"Increments":null,"ToR":"*voice","ExtraFields":{"Animal":"Dog","Value0":"2024-01-01 01:00:00","Value1":"2024-01-01 01:01:00","Value2":"Nick","Value3":"60","Value4":"61412341234","Value5":"61812341234","Value6":"Dog","Value7":"1lklkjfds"},"MaxRate":0,"MaxRateUnit":0,"MaxCostSoFar":0,"CgrID":"","RunID":"","ForceDuration":false,"PerformRounding":true,"DenyNegativeAccount":false,"DryRun":false,"APIOpts":{"*context":"*cdrs","*subsys":"*chargers"}}],"id":4}

#
T 2024/12/22 09:09:47.465711 127.0.0.1:2012 -> 127.0.0.1:50456 [AP] #414
{"id":4,"result":{"Category":"call","Tenant":"cgrates.org","Subject":"61812341234","Account":"61412341234","Destination":"61812341234","ToR":"*voice","Cost":14,"Timespans":[{"TimeStart":"2024-01-01T01:00:00+11:00","TimeEnd":"2024-01-01T01:01:00+11:00","Cost":14,"RateInterval":{"Timing":{"ID":"*any","Years":[],"Months":[],"MonthDays":[],"WeekDays":[],"StartTime":"00:00:00","EndTime":""},"Rating":{"ConnectFee":0,"RoundingMethod":"*up","RoundingDecimals":4,"MaxCost":0,"MaxCostStrategy":"","Rates":[{"GroupIntervalStart":0,"Value":14,"RateIncrement":60000000000,"RateUnit":60000000000}]},"Weight":10},"DurationIndex":60000000000,"Increments":[{"Duration":0,"Cost":0,"BalanceInfo":{"Unit":null,"Monetary":null,"AccountID":""},"CompressFactor":1},{"Duration":60000000000,"Cost":14,"BalanceInfo":{"Unit":null,"Monetary":null,"AccountID":""},"CompressFactor":1}],"RoundIncrement":null,"MatchedSubject":"*out:cgrates.org:call:*any","MatchedPrefix":"618","MatchedDestId":"Dest_AU_Fixed","RatingPlanId":"RatingPlan_VoiceCalls","CompressFactor":1}],"RatedUsage":60000000000,"AccountSummary":null},"error":null}

What we’re seeing is the CDRs module, calling RALs, to get the cost information for this event.

Finally, the CDRsV1.ProcessEvent request that was initially sent by ERS gets a result (we can match the result to the request, as it’ll have the same id parameter).

So that’s it, that’s the secret sauce – CGrateS is just a bunch of little APIs we combo together to create something great.

Recap

Agents translate data sources into API calls.

Each little API belongs to a Subsystem, like ChargerS, AttributeS or RALs, and we can chain them together in our config file or through the flags in the API request.

Once you’ve got your head wrapped around this, everything in CGrateS becomes way easier.

From now on I’ll pivot to talking about specific modules, and how we use them, starting with AttributeS (which I wrote last year while still drafting this), and diving into how to use each module in more detail.

Installing Calix CMS Java tool on Ubuntu in 2025

Ah, another post in my “how to make software work that was made with Java in the 1990s” series, except Calix last updated this software in 2022 – make of that what you will…

This time it’s the Calix Management System (CMS), the Java app for managing equipment in exchanges / COs from Calix.

On Ubuntu 24.04 LTS it requires JRE version 8:

sudo apt install openjdk-8-jre
sudo apt install execstack

With that installed I could install CMS:

./install.bin LAX_VM /usr/lib/jvm/java-8-openjdk-amd64/bin/java

Then it came time to run it, I chose to install in my home directory in a folder named “Calix” (default).

First you’ve got to make their startup script executable:

~/Calix$ chmod +x Start\ CMS

Then we need to modify it to point to the openjdk Java 8 binary, the simplest way is to just add the LAX_VM on startup:

~/Calix$ ./Start\ CMS LAX_VM /usr/lib/jvm/java-8-openjdk-amd64/bin/java

And you’re in.

Creating a Fixed Line IMS subscriber in PyHSS

I generally do this with Python or via the Swagger UI for the Web UI, but here’s how we can create a fixed-line IMS subscriber in PyHSS, so we can register it with a softphone, without using EAP-AKA.


Firstly we create the AuC object for this password combo.

curl -X 'PUT' \
'http://10.97.0.36:8080/auc/' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"ki": "yoursippassword123",
"opc": "",
"amf": "8000",
"sqn": "1"
}'

Get back the AuC ID from the JSON body, we’ll use this to provision the Sub:

 "auc_id": 110,

Next we create the subscriber. The speeds will be 0 as there is no data on the service, but we will add a default APN so the validation passes:

curl -X 'PUT' \
'http://10.97.0.36:8080/subscriber/' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"default_apn": 1,
"roaming_rule_list": null,
"apn_list": "0",
"subscribed_rau_tau_timer": 600,
"msisdn": "123451000001",
"ue_ambr_dl": 0,
"ue_ambr_ul": 0,
"imsi": "001001000090001",
"nam": 2,
"enabled": true,
"roaming_enabled": null,
"auc_id": 110
}'

Alright, that’s the basics done, now we’ll create the IMS subscriber.

Provision it:

curl -X 'PUT' \
'http://10.97.0.36:8080/ims_subscriber/' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"pcscf_realm": null,
"scscf_realm": null,
"pcscf_active_session": null,
"scscf_peer": null,
"msisdn": "123451000001",
"pcscf_timestamp": null,
"sh_template_path": "default_sh_user_data.xml",
"msisdn_list": "123451000001",
"pcscf_peer": null,
"last_modified": "2024-04-25T00:33:37Z",
"imsi": "001001000090001",
"xcap_profile": null,
"ifc_path": "default_ifc.xml",
"sh_profile": "\n<!-- This container for the XCAP Data for the Subscriber -->\n<RepositoryData>\n <ServiceIndication>ApplicationServer</ServiceIndication>\n <SequenceNumber>0</SequenceNumber>\n <ServiceData>\n <!-- This is the actual XCAP Data for the Subscriber -->\n \n <!-- XCAP Default Template (no XCAP Data stored in Database) -->\n <simservs xmlns=\"http://uri.etsi.org/ngn/params/xml/simservs/xcap\" xmlns:cp=\"urn:ietf:params:xml:ns:common-policy\">\n <originating-identity-presentation active=\"true\" />\n <originating-identity-presentation-restriction active=\"true\">\n <default-behaviour>presentation-not-restricted</default-behaviour>\n </originating-identity-presentation-restriction>\n <communication-diversion active=\"true\">\n <!-- No Answer Time -->\n <NoReplyTimer>20</NoReplyTimer>\n <cp:ruleset>\n <!-- Call Forward All Rule -->\n <cp:rule id=\"rule0\">\n <cp:conditions>\n <communication-diverted />\n </cp:conditions>\n <cp:actions>\n <forward-to>\n <target>sip:[email protected]</target>\n </forward-to>\n </cp:actions>\n </cp:rule>\n <!-- Call Forward Not Registered Rule -->\n <cp:rule id=\"rule1\">\n <cp:conditions>\n <not-registered />\n </cp:conditions>\n <cp:actions>\n <forward-to>\n <target>sip:[email protected]</target>\n </forward-to>\n </cp:actions>\n </cp:rule>\n <!-- Call Forward No Answer Rule -->\n <cp:rule id=\"rule2\">\n <cp:conditions>\n <no-answer />\n </cp:conditions>\n <cp:actions>\n <forward-to>\n <target>sip:[email protected]</target>\n </forward-to>\n </cp:actions>\n </cp:rule>\n <!-- Call Forward Busy Rule -->\n <cp:rule id=\"rule3\">\n <cp:conditions>\n <busy />\n </cp:conditions>\n <cp:actions>\n <forward-to>\n <target>sip:[email protected]</target>\n </forward-to>\n </cp:actions>\n </cp:rule>\n <!-- Call Forward Unreachable Rule -->\n <cp:rule id=\"rule4\">\n <cp:conditions>\n <not-reachable />\n </cp:conditions>\n <cp:actions>\n <forward-to>\n <target>sip:[email protected]</target>\n </forward-to>\n </cp:actions>\n </cp:rule>\n </cp:ruleset>\n </communication-diversion>\n \n <incoming-communication-barring active=\"true\">\n <cp:ruleset>\n <cp:rule id=\"rule0\">\n <cp:conditions />\n <cp:actions>\n <allow>true</allow>\n </cp:actions>\n </cp:rule>\n </cp:ruleset>\n </incoming-communication-barring>\n\n <outgoing-communication-barring active=\"false\">\n </outgoing-communication-barring>\n </simservs>\n\n </ServiceData>\n \n</RepositoryData>\n",
"pcscf": null,
"scscf": null,
"scscf_timestamp": null
}'

And with that we’re done.

We can now register 001001000090001 at our IMS with the password yoursippassword123, and it has the MSISDN / phone number 123451000001.
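
If you’re provisioning more than a handful of subscribers, the same calls are easy to script – here’s a rough Python sketch using requests against the same PyHSS endpoints and payloads as the curl examples above (the host, port and values are the same assumptions as above, so adjust to your deployment):

import requests

PYHSS_API = "http://10.97.0.36:8080"    #Same PyHSS host/port as the curl examples above

#Step 1 - Create the AuC entry (the Ki doubles as the SIP password for our fixed-line sub)
auc = requests.put(PYHSS_API + "/auc/", json={
    "ki": "yoursippassword123",
    "opc": "",
    "amf": "8000",
    "sqn": "1"
}).json()
print("Created AuC:", auc)
auc_id = auc["auc_id"]      #We feed this into the subscriber we create next

#Step 2 - Create the subscriber, referencing the auc_id we just got back
subscriber = requests.put(PYHSS_API + "/subscriber/", json={
    "default_apn": 1,
    "apn_list": "0",
    "subscribed_rau_tau_timer": 600,
    "msisdn": "123451000001",
    "ue_ambr_dl": 0,
    "ue_ambr_ul": 0,
    "imsi": "001001000090001",
    "nam": 2,
    "enabled": True,
    "auc_id": auc_id
}).json()
print("Created subscriber:", subscriber)

#Step 3 - The /ims_subscriber/ PUT follows exactly the same pattern,
#using the JSON body from the curl example above (including the sh_profile XML)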

Easy!

CGrateS – SERVER_ERROR: unexpected end of JSON input

I started seeing this error the other day when running CDRsV1.GetCDRs on the CGrateS API:

SERVER_ERROR: unexpected end of JSON input

It seemed related to certain CDRs in the cdrs table of StorDB.

After some digging, I found the stupid simple problem:

I’d written too much data to extra_fields, leading MySQL to cut off the data midway through, meaning it couldn’t be reconstructed as JSON by CGrateS again.

Like the rounding issue I had, this wasn’t an issue with CGrateS but with MySQL.

Quick fix:

sudo mysql cgrates -e "ALTER TABLE cdrs MODIFY extra_fields LONGTEXT;"

And new records can exceed the old length limit without being cut off.
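
If you want to find which CDRs had already been truncated (and were the ones breaking GetCDRs), a quick and dirty check is to see which rows no longer parse as JSON – a rough sketch, assuming pymysql and the default cgrates MySQL credentials (swap in your own):

import json
import pymysql      #Assumption: pip install pymysql - adjust host / credentials to your StorDB

db = pymysql.connect(host="localhost", user="cgrates", password="CGRateS.org", database="cgrates")
with db.cursor() as cursor:
    cursor.execute("SELECT id, cgrid, extra_fields FROM cdrs")
    for cdr_id, cgrid, extra_fields in cursor.fetchall():
        if not extra_fields:
            continue        #Nothing stored for this CDR
        try:
            json.loads(extra_fields)
        except ValueError:
            #This row's extra_fields got cut off by MySQL and is no longer valid JSON
            print("Truncated extra_fields on CDR id=%s cgrid=%s" % (cdr_id, cgrid))
db.close()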

Virtualizing VMware to run inside Proxmox

Like a lot of companies, we’re moving away from VMware, and in our case, shifting to Proxmox.

But that doesn’t mean we can get away from VMware entirely – it’s more that it’s not our hypervisor of choice anymore, and this means shifting our dev environments and lab off VMware to Proxmox first.

So today I sat down to try and shift everything to Proxmox, while keeping the VMware based VMs accessible until they can slowly die of bitrot.

A sane person would probably utilize Proxmox’s fancy new tool for migrating VMs from VMware to Proxmox, and it’s great, but in our case at least, it required logging into each VM and remapping NICs, etc, which is tricky on boxes I don’t have access to – Plus we need to keep some VMware capability for testing / labbing stuff up.

So I decided to install Proxmox onto the bare metal servers, and then create a virtual machine inside the Proxmox stack to host a VMware ESXi instance.

I started off inside VMware (Before installing any Proxmox) by moving all the VMs onto a single physical disk, which I then removed from the server, so as to not accidentally format the one disk I didn’t want to format.

Next I nuked the server and setup the new stack with Proxmox, which is a doddle, and not something I’ll cover.

Then I loaded a VMware ISO into Proxmox and started setting up the VM.

Now, nested virtualization is a real pain in the behind.

VMware doesn’t like not being run on bare metal, and it took me a good long amount of time to find the hardware config that I could setup in Proxmox that VMware would accept.

Create the VM in the Web UI; I found using a SATA drive worked while SCSI failed, so create a SATA based LVM image to use, and mount the ESXi installer ISO.

Then edit /etc/pve/qemu-server/your_id.conf and replace the netX, args, boot and ostype to match the below:

args: -cpu host,+invtsc,-vmx
boot: order=ide2;net0
cores: 32
cpu: host
ide2: local:iso/VMware-VMvisor-Installer-8.0U2b-23305546.x86_64.iso,media=cdrom,size=620274K
memory: 128000
meta: creation-qemu=8.1.5,ctime=1717231453
name: esxi
net0: vmxnet3=BC:24:11:95:8A:50,bridge=vmbr0
numa: 0
ostype: l26
sata0: ssd-3600gb:vm-100-disk-1,size=32G
scsihw: pvscsi
smbios1: uuid=6066a9cd-6910-4902-abc7-dfb223042630
sockets: 1
vmgenid: 6fde194f-9932-463e-b38a-82d2e7e1f2dd

Now you can go and start the VM, but once you’ve got the VMware splash screen, you’ll need to press Shift + O to enter the boot options.

At the boot options line (which will read runweasel cdromBoot), append allowLegacyCPU=true after it – this will allow ESXi to use our (virtual) CPU.

Next up you’ll install VMware ESXi just like you’ve probably done 100 times before (is this the last time?), and once it’s done installing, power off – we’ll have to make a few changes to the VM definition file.

Then after install we need to change the boot order, by updating:

boot: order=sata0

And unmount the ISO:

ide2: none,media=cdrom

Now remember how I’d pulled the hard disk containing all the VMware VMs out so I couldn’t break it? Well, don’t drop that, because now we’re going to map that physical drive into the VM for VMware, so I can boot all those VMs.

I plugged in the drive and I used this to find the drive I’d just inserted:

fdisk -l

Which showed the drive I’d just added, with its VMware file system.

So next we need to map this through the VM we just created inside Proxmox, so VMware inside Proxmox can access the VMware file system on the disk filled with all our old VMware VMs.

VM. VM. VM. The word has lost all meaning to me at this stage.

We can see the mount point of our physical disk; in our case it’s /dev/sdc, so that’s what we’ll pass through to the VM.

Here’s my final .conf file for the Proxmox VM:

args: -cpu host
balloon: 1024
boot: order=sata0
cores: 32
ide2: none,media=cdrom
kvm: 1
memory: 128000
meta: creation-qemu=8.1.5,ctime=1717231453
name: esxi
net0: vmxnet3=BC:24:11:95:8A:50,bridge=vmbr0
numa: 0
ostype: l26
sata0: ssd-3600gb:vm-100-disk-1,size=32G
sata1: /dev/sda
scsihw: pvscsi
smbios1: uuid=6066a9cd-6910-4902-abc7-dfb223042630
sockets: 1
vmgenid: 6fde194f-9932-463e-b38a-82d2e7e1f2dd

Now I can boot up the VM, log into VMware and behold, our disk is visible:

Last thing we need to do is get it mounted as a VMware Datastore so we can boot all those VMs.

For that, we’ll enable SSH on the ESXi box and SSH into it.

We’ll use the esxcfg-volume -l command to get the UUID of our drive

[root@localhost:~] esxcfg-volume -l
Scanning for VMFS-6 host activity (4096 bytes/HB, 1024 HBs).
VMFS UUID/label: 6513c05a-43db8c20-db0f-c81f66ea471a/FatBoy
Can mount: Yes
Can resignature: Yes
Extent name: t10.ATA_____QEMU_HARDDISK___________________________QM00007_____________:1 range: 0 - 1716735 (MB)

In my case the drive is aptly named “FatBoy”, so we’ll grab the UUID up to the slash, and then use that as the Mount parameter:

[root@localhost:~] esxcfg-volume -M 6513c05a-43db8c20-db0f-c81f66ea471a
Persistently mounting volume 6513c05a-43db8c20-db0f-c81f66ea471a

And now, if everything has gone well, after logging into the Web UI, you’ll see this:

Then the last step is going to be re-registering all the VMs; you can do this by hand, by selecting each .vmx file and adding it.

Alternately, if you’re lazy like me, I wrote a little script to do the same thing:

[root@localhost:~] cat load_vms3.sh 


#!/bin/bash

# Datastore name
DATASTORE="FatBoi/"

# Log file to store the output
LOG_FILE="/var/log/register_vms.log"

# Clear the log file
> $LOG_FILE

echo "Starting VM registration process on datastore: $DATASTORE" | tee -a $LOG_FILE

# Check if datastore directory exists
if [ ! -d "/vmfs/volumes/$DATASTORE" ]; then
    echo "Datastore $DATASTORE does not exist!" | tee -a $LOG_FILE
    exit 1
fi

# Find all .vmx files in the datastore and register them
find /vmfs/volumes/$DATASTORE -type f -name "*.vmx" | while read VMX_PATH; do
    echo "Registering VM: $VMX_PATH" | tee -a $LOG_FILE
    vim-cmd solo/registervm "$VMX_PATH" | tee -a $LOG_FILE
done

echo "VM registration process completed." | tee -a $LOG_FILE

[root@localhost:~] sh load_vms3.sh

Now with all your VMs loaded, you should almost be ready to roll and power them all back on.

And here’s where I ran into another issue – nested virtualization needed to be enabled on the Proxmox host.

Luckily the Proxmox wiki had the answer:

For an Intel CPU here’s what you run on the Proxmox hypervisor:

echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf

But before we reboot the Hypervisor (Proxmox) we’ll have to reboot the VMware hypervisor too, because here’s something else to make you punch the screen:

Luckily we can fix this one globally.

SSH into the VMware box, edit /etc/vmware/config.xml file and add:

vhv.enable = "FALSE"

Which will disable the performance counters.

Now power off the VMware VM and reboot the Proxmox hypervisor. When it powers on again, Proxmox will allow nested virtualization, and when you power the VMware VM back on, you’ll have performance counters disabled – and then you will be done.

Yeah, not a great use of my Saturday, but here we are…

Grafana and CGrateS

I’m a really big fan of CGrateS, and I’m a fan of Grafana,

So what if you combined the two?

CGrateS uses a StorDB – in my case MySQL, but it could be Postgres or MongoDB, etc – and Grafana can get data from these sources too.

So let’s join them together!

For starters, I’ve got a bunch of CDRs in my cgrates.cdrs table inside MySQL.

Setting it up is a doddle – firstly, inside Grafana, we add MySQL as a data source:

Next up we create a dashboard and add a panel.

For this instance I’m metering data usage, so I’ve set the units to Bytes/SI (but if you’re using Voice you’ll need to adjust this to time).

Here’s my Query to find the Usage:

SELECT
  DATE(setup_time) AS time,
  SUM(`usage`) AS total_usage
FROM
  cgrates.cdrs
GROUP BY
  DATE(setup_time)
ORDER BY
  time ASC

And I’ve created another one for Cost:

Keep in mind it’s up to you what the units are – dollars, cents, 1/10th of a cent, etc – in my case 1 in CGrateS equates to 1 cent:

SELECT
  DATE(setup_time) AS time,
  SUM(`cost`)/100 AS cost
FROM
  cgrates.cdrs
GROUP BY
  DATE(setup_time)
ORDER BY
  time ASC

Lastly I’ve added a board to show the usage per Account, which I get with the below query:

SELECT
  account AS label,
  SUM(`usage`) AS value
FROM
  cgrates.cdrs
GROUP BY
  account
ORDER BY
  value DESC

Not complete by any means, but shows what you can do with CGrateS and Grafana.

Importing CDRs into CGrateS with Event Reader Service

After we set up CGrateS, the next thing we’d generally want to do is rate some traffic.

Of course, that could be realtime traffic, from Diameter, Radius, Kamailio, FreeSWITCH, Asterisk or whatever your case may be, but it could just as easily be CSV files, records from a database or a text file.

We’re going to be rating CDRs from simple CSV files with the date of the event, calling party, called party, and talk time, but of course your CDR exports will have a different format, and that’s to be expected – we tailor the Event Reader Service to match the format of the files we need.

The Event Reader Service, like everything inside CGrateS, is modular.
ERS is a module we load that parses files using the rules we define, and creates Events that CGrateS can process and charge for.

But before I can tell you that story, I have to tell you this story…

Nick’s imaginary CSV factory

In the repo I’ve added a DummyCSV.csv, it’s (as you might have guessed) a CSV file.

This CSV file is like a million other CSV formats out there – We’ve got a CSV file with Start Time, End Time, Customer, Talk Time, Calling Party, Called Party, Animal (for reasons) and CallID to uniquely identify this CDR.

Protip: The Rainbow CSV VScode extension makes viewing/editing/querying CSV files in VScode much easier.

File Format:

Call Start Time – Row 0
Call End Time – Row 1
Customer – Row 2
Talk Time – Row 3
Calling Party – Row 4
Called Party – Row 5
Animal – Row 6
CallID – Row 7

Next we need to feed this into CGrateS, and for that we’ll be using the Event Reader Service.

JSON config files don’t make for riveting blog posts, but you’ve made it this far, so let’s power through.

ERS is set up in CGrateS’ JSON config file, where we’ll need to define one or more readers – the logic we define inside CGrateS to tell it what fields are what, where to find the files we need to import, and all the parameters for the imports.

This means if we have a CSV file type we get from one of our suppliers with CDRs in it, we’d define a reader to parse that type of file.
Likewise, if we’ve got a CSV of SMS traffic out of our SMSc, we’d need to define another reader to parse the CDRs in that format – Generally we’ll do a Reader for each file type we want to parse.

So let’s define a reader for this CSV spec we’ve just defined:


"ers": {
	"enabled": true,
	"readers": [
		{
			"id": "blog_example_csv_parser",
			"enabled": true,
			"run_delay":  "-1",
			"type": "*file_csv",
			"opts": {
				"csvFieldSeparator":",",
				"csvLazyQuotes": true,
				//csvLazyQuotes allows rows containing stray quote characters to be parsed without error
				//csvRowLength counts the row length and if a row does not match this value declares an error
				//-1 means to look at the first row and use that as the row length
				"csvRowLength": -1
			},
			"source_path": "/var/spool/cgrates/blog_example_csv_parser/in",
			"processed_path": "/var/spool/cgrates/blog_example_csv_parser/out",
			"concurrent_requests": 1024,	//How many files to process at the same time
			"flags": [
				"*cdrs",
				"*log"
			],
			"tenant": "cgrates.org",
			"filters": [
				"*string:~*req.2:Nick", //Only process CDRs where Customer column == "Nick"
			],
			"fields":[]
}]}

This should hopefully be relatively simple (I’ve commented it as best I can).

The ID of the ERS object is just the name of this reader – you can name it anything you like, keeping in mind we can have multiple readers defined for different file formats we may want to read, and setting the ID just helps to differentiate them.

The run_delay of -1 means ERS will run as soon as a file is moved into the source_path directory, and the type is a CSV file. Note that’s moved, not copied – we’ve got to move the file, not just copy it, as CGrateS waits for the inotify notification.

In the opts section we set the specifics for the CSV we’re reading: the field separator is how the values in our CSV are delimited – in our case commas – but if your file used semicolons or another delimiter, you’d adjust this.

Then we’ve got the paths: the source_path is the directory we move files into to get them processed, and the processed_path is where the processed files end up.

For now I’ve set the flags to *log and *cdrs – *log makes our lives a bit easier for debugging, and *cdrs sends the event to the CDRs module to generate a rated CDR in CGrateS, which we could then use to bill a customer, a supplier, etc, and access via the API or export using the Event Exporter Service.

Lastly, under FilterS we’re able to define the filters that determine whether we should process a row or not.
You don’t know how much you need this feature until you need this feature.
The filter rule I’ve included will only process lines where the Customer field in the CSV (row #2) is equal to “Nick”. You could use this to also filter only calls that have been answered, only calls to off-net, etc, etc – FilterS needs a blog post all on its own (and if you’re reading this in the future I may have already written one).

Alright, so far so good – we’ve just defined the metadata we need to read the file, but how do we actually get down to parsing the lines in the file?

Well, that’s where the data in Fields: [] comes in.

If you’ve been following along the CgrateS in baby steps series, you’ll have rated a CDR using the API, that looked something like this:

{"method": "CDRsV1.ProcessExternalCDR", "params": [ {
"Category": "call",
"RequestType": "*raw",
"ToR": "*monetary",
"Tenant": "cgrates.org",
"Account": "1002",
"Subject": "1002",
"Destination": "6141111124211",
"AnswerTime": "2022-02-15 13:07:39",
"SetupTime": "2022-02-15 13:07:30",
"Usage": "181s",
"OriginID": "API Function Example"
}], "id": 0}

ERS is going to call more-or-less the same API to rate a CDR, so we’re going to set the parameters that go into it from the CSV contents, inside the fields:

			"fields":[

				//Type of Record (Voice)
				{"tag": "ToR", "path": "*cgreq.ToR", "type": "*constant", "value": "*voice"},

				//Category set to "call" to match RatingProfile_VoiceCalls from our RatingProfile
				{"tag": "Category", "path": "*cgreq.Category", "type": "*constant", "value": "call"},

				//RequestType is *rated as we won't be deducting from an account balance
				{"tag": "RequestType", "path": "*cgreq.RequestType", "type": "*constant", "value": "*rated"},
				
]

That’s the static values out of the way; next up we’ll define the values we pluck from the CSV. We can get the value of each column from “~*req.ColumnNumber”, where ColumnNumber is the column number starting from 0.


				//Unique ID for this call - We get this from the CallID field in the CSV
				{"tag": "OriginID", "path": "*cgreq.OriginID", "type": "*variable","value":"~*req.7"},
				
				//Account is the Source of the call 
				{"tag": "Account", "path": "*cgreq.Account", "type": "*variable", "value": "~*req.4"},
				
				//Destination is B Party Number - We use 'Called Party Number'
				{"tag": "Destination", "path": "*cgreq.Destination", "type": "*variable", "value": "~*req.5"},
				{"tag": "Subject", "path": "*cgreq.Subject", "type": "*variable", "value": "~*req.5"},

				//Call Setup Time (In this case, CGrateS can already process this as a datetime object)
				{"tag": "SetupTime", "path": "*cgreq.SetupTime", "type": "*variable", "value": "~*req.0"},

				//Usage in seconds - We use 'Call duration'
				{"tag": "Usage", "path": "*cgreq.Usage", "type": "*variable", "value": "~*req.3"},

				//We can include extra columns with extra data - Like this one:
				{"tag": "Animal", "path": "*cgreq.Animal", "type": "*variable", "value": "~*req.6"},
]

Perfect,

If you’re struggling to get your JSON file right, that’s OK, I’ve included the JSON CGrateS.config file here.

You’ll need to restart CGrateS after putting the config changes in, but your instance will probably fail to start until we create the directories we specified for CGrateS to monitor for incoming CSV files:

mkdir /var/spool/cgrates/blog_example_csv_parser/
mkdir /var/spool/cgrates/blog_example_csv_parser/in
mkdir /var/spool/cgrates/blog_example_csv_parser/out

Right, now if we start CGrateS it should run.

But before we can put this all into play, we’ll need to setup some rates. My previous posts have covered how to do this, so for that I’ve included a Python script to setup all the rates, which you can run once you’ve restarted CGrateS.

Alright, with that out of the way, we can test it out, move our Dummy.csv file to /var/spool/cgrates/blog_example_csv_parser/in and see what happens.

mv Dummy.csv /var/spool/cgrates/blog_example_csv_parser/in/

All going well in your CGrateS log you’ll see all the events flying past for each row.

Then either via the CGrateS API, or just looking into the MySQL “cdrs” table you should see the records we just created.
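
Or, rather than poking at MySQL directly, we can pull the records back through the API – a quick sketch with the same cgrateshttpapi wrapper (an empty filter should return everything, or you can narrow it down by Accounts, etc.):

import cgrateshttpapi
import pprint

CGRateS_Obj = cgrateshttpapi.CGRateS('localhost', 2080)

#Pull back the CDRs we just imported via ERS (empty filter = no filtering)
result = CGRateS_Obj.SendData({
    "method": "CDRsV1.GetCDRs",
    "params": [{}]
})
pprint.pprint(result)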

And with that, you’ve rated CDRs from a CSV file and put them into CGrateS.

Libpython3.11 problems with Kamailio on Ubuntu 22.04

I run Ubuntu on my desktop and I mess with Kamailio a lot.

Recently I was doing some work with KEMI using Python, but installing the latest versions of Kamailio from the Kamailio Debian repos wasn’t working.

The following packages have unmet dependencies:
 kamailio-python3-modules : Depends: libpython3.11 (>= 3.11.0) but 3.11.0~rc1-1~22.04 is to be installed

Kamailio’s Python modules expect libpython3.11 version 3.11.0 or higher, but the Ubuntu 22.04 repos only contain the release candidate – not the final version:

root@amanaki:/home/nick# apt-cache policy libpython3.11
libpython3.11:
  Installed: 3.11.0~rc1-1~22.04
  Candidate: 3.11.0~rc1-1~22.04
  Version table:
 *** 3.11.0~rc1-1~22.04 500
        500 http://au.archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages
        100 /var/lib/dpkg/status

Luckily the deadsnakes PPA to the rescue!

sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update
sudo apt-get purge kamailio
sudo apt --fix-broken install
sudo apt-get upgrade
sudo apt-get install kamailio kamailio-python3-modules

And done!

vCenter – Partition Folder Isolation

While vCenter doesn’t really do contexts / multi-tenants / VPCs like the hyperscalers, there are simple (ish) ways to do context separation inside VMware vCenter.

This means you can have a user who only has access to, say, a folder of VMs, and isn’t able to see VMs outside of that folder.

Create a new Role inside vCenter from Administration -> Roles -> Add

Give the role all Virtual Machine privileges:

Create a new account for the user (this can be an AD account if you’re not using local accounts; this is just our lab, so I’ve created it as a local account). We’re using this account for Ansible, so we’ve used “Demo-Ansible-User” as the username.

Now create a folder for the group of VMs, or pick an existing folder you want to give access to.

Right click on the folder and select “Add Permission”.

We give that user permission on the folder with that role, and make sure to tick “Propagate to children” (I missed this step before and had to repeat it):

If you are using templates, make sure the template is either in the folder, or apply the same permission to the template, by right clicking on it, Add Permission, same as this.

Finally, you should be able to log in as that user, see the template, clone it from the web UI, and create VMs – but only within that folder.

CGrateS – Exporting CDRs

Having rated CDRs in CGrateS is great, but in reality, you probably want to get them into a billing system, CSV file, S3 bucket, CRM, invoice, Grafana, SQL table, etc, etc.

The Event Exporter Service (EES, previously called CDRe) handles exporting CDRs from CGrateS.

Like everything in CGrateS, it’s highly configurable, and, again, like everything in CGrateS, supports every combination of services you can think of, plus a stack you haven’t thought of.

CDRs can be exported one of two ways, in real time, as the CDR is generated (online), or after the fact, exporting from the database containing the CDRs (offline).

Exporting in realtime (online) is a great option if you don’t want (or need) to store the CDRs in CGrateS; if you’re just using CGrateS to rate calls and spit them into a separate system, this is a fantastic option, as it allows your CGrateS instances to remain light and not get clogged up with lots of old CDRs. That said, you can of course export the CDRs in realtime and still store them in CGrateS – that’s also a totally valid approach.

The more traditional approach is offline CDR export, where periodically or when an event is triggered, you scrape up a pile of CDRs and send them to your external systems.

For both options, we’ll need to define at least one exporter in our cgrates.json config file. For this example we’ll define an HTTP POST that we will trigger for realtime (online) CDR exporting, and a CSV file we dump to periodically when called from the API.

So first things first, we enable the EES module in the config:

"ees": {
		"enabled": true,
		"exporters": [
		]
	}

We’ll start with defining one exporter, named CSVExporter, that will output files to a folder named “testCSV” in the /tmp/ directory, but you can plonk these files wherever you like:

"ees": {
		"enabled": true,
		"synchronous": true,
		"exporters": [
			{
				"id": "CSVExporter",
				"type": "*file_csv",
				"export_path": "/tmp/testCSV",
				"flags": ["*log"],
				"attempts": 1,
				"synchronous": true,
				"field_separator": ",",
			},
		]
	}

We’ve got a lot of different types of export available to us, but type *file_csv is the easiest, so that’s where we’ll start.

Setting synchronous to true means we’ll only run one export job at a time, but it also means we’ll get back the result via the API, which will allow us to keep track of the ID of the last record we exported, so we don’t export the same record multiple times – more on this later.

Flags allows us to, if we wanted, bounce the event through AttributeS, for example, by adding *attributes to the flags, but in this case, it’s just logging to syslog.

Of course, just enabling ees won’t actually send anything to it; we’ll need to add "ees_conns": ["*localhost"] to the "apiers" and "cdrs" sections so they know to bounce the events through it:

	"apiers": {
		"enabled": true,
                ...
		"ees_conns": ["*localhost"],
	},

	"cdrs": {
		"enabled": true,
		...
		"ees_conns": ["*localhost"],
	},

Okay, enough talk, let’s get exporting some CDRs!

If you’ve already got CDRs on your system from our previous tutorial, fantastic, but if not, let’s get up and running with a quick and dirty script to define some destinations, a charger, an account balance and then use some of the balance to generate a CDR:

import cgrateshttpapi
import pprint
import uuid
import datetime
now = datetime.datetime.now()
CGRateS_Obj = cgrateshttpapi.CGRateS('localhost', 2080)

#Define Destinations
CGRateS_Obj.SendData({'method':'ApierV2.SetTPDestination','params':[{"TPid":'cgrates.org',"ID":"Dest_AU_Mobile","Prefixes":["614"]}]})

#Load TariffPlan we just defined from StorDB to DataDB
CGRateS_Obj.SendData({"method":"APIerSv1.LoadTariffPlanFromStorDb","params":[{"TPid":'cgrates.org',"DryRun":False,"Validate":True,"APIOpts":None,"Caching":None}],"id":0})

#Define default Charger
print(CGRateS_Obj.SendData({"method": "APIerSv1.SetChargerProfile","params": [{"Tenant": "cgrates.org","ID": "DEFAULT",'FilterIDs': [],'AttributeIDs' : ['*none'],'Weight': 0,}]}))

account = "Nick_Test_123"

#Add a balance to the account with type *sms with 100 sms events
pprint.pprint(CGRateS_Obj.SendData({"method": "ApierV1.SetBalance","params": [{"Tenant": "cgrates.org","Account": account,"BalanceType": "*sms","DestinationIDs": 'Dest_NZ_Mobile;Dest_AU_Mobile',"Categories": "*any","Balance": {"ID": "100_SMS_Bundle_AU_NZ_Mobile","Value": 100,"Weight": 25}}]}))

#Process CDR Event for a single SMS
pprint.pprint(CGRateS_Obj.SendData({"method": "CDRsV2.ProcessExternalCDR","params": [{"OriginID": str(uuid.uuid1()),"ToR": "*sms","RequestType": "*pseudoprepaid","AnswerTime": now.strftime("%Y-%m-%d %H:%M:%S"),"SetupTime": now.strftime("%Y-%m-%d %H:%M:%S"),"Tenant": "cgrates.org","Account": account,"Destination" : "61412345678","Usage": "1",}]}))

Right, with that out of the way, we should now have something in our CDRs table, a quick SQL query confirms this is the case:

Bingo, there we go.

So let’s try an offline export via the API:

result = CGRateS_Obj.SendData({
	"method": "APIerSv1.ExportCDRs",
	"params": [
		{
			"ExporterIDs": [
				"CSVExporter"
			],
			"Verbose": True,
			"Accounts": [account
			]
		}
	]
})
pprint.pprint(result)

So, as you may have guessed, we’ve called the ExportCDRs API endpoint and specified which ExporterIDs we want to reference (these link back to the objects in the config, and the one we have defined currently is named CSVExporter).

Setting Verbose: True means that CGrateS gives us back a lot of info from the API call, here’s what we get back:

{"error": None,
 "id": None,
 "result": {"CSVExporter": {"ExportPath": "/tmp/testCSV/CSVExporter_21e9bc2.csv",
                            "FirstEventATime": "2024-01-02T18:09:29+11:00",
                            "FirstExpOrderID": 14,
                            "LastEventATime": "2024-01-02T18:40:53+11:00",
                            "LastExpOrderID": 25,
                            "NegativeExports": [],
                            "NumberOfEvents": 12,
                            "PositiveExports": ["f45dd29",
                                                ...
                                                "6163255"
            ],
                            "TimeNow": "2024-01-02T18:40:53.791517662+11:00",
                            "TotalCost": 0,
                            "TotalSMSUsage": 12
        }
    }
}

Now that looks pretty positive, we got 12 events of SMS usage exported, which we can see in the file /tmp/testCSV/CSVExporter_21e9bc2.csv – and if we cat out the file, yeap, there’s all the CDRs.

But it’s a bit of a mess, there’s a lot of fields in there, so let’s adjust what goes into the CSV.

Let’s start by filtering what goes into the exporter, to only give us SMS events. Of course, you could adjust the filters here to export only the records you want, based on anything you can define with FilterS (and there’s a lot you can define with filters).

	"ees": {
		"enabled": true,
		"exporters": [
			{
				"id": "CSVExporter",
				"type": "*file_csv",
				"export_path": "/tmp/testCSV",
				"flags": ["*log"],
				"attempts": 1,
				"filters": ["*string:~*req.ToR:*sms"],
				"synchronous": true,
				"field_separator": ",",
				...

Now we’re only exporting SMS records, so let’s clean up the output of the CSV to just give us the data we want, which is the CDR ID, time, account, destination and usage.

	"ees": {
		"enabled": true,
		"exporters": [
			{
				"id": "CSVExporter",
				"type": "*file_csv",
				"export_path": "/tmp/testCSV",
				"flags": ["*log"],
				"attempts": 1,
				"filters": ["*string:~*req.ToR:*sms"],
				"synchronous": true,
				"field_separator": ",",
				"fields":[
					//Headers
					{"tag": "CGRID", "path": "*hdr.CGRID", "type": "*constant", "value": "CGRID"},
					{"tag": "AnswerTime", "path": "*hdr.AnswerTime", "type": "*constant", "value": "AnswerTime"},
					{"tag": "Account", "path": "*hdr.Account", "type": "*constant", "value": "Account"},
					{"tag": "Destination", "path": "*hdr.Destination", "type": "*constant", "value": "Destination"},
					{"tag": "Usage", "path": "*hdr.Usage", "type": "*constant", "value": "Usage"},
					//Values
					{"tag": "CGRID", "path": "*exp.CGRID", "type": "*variable", "value": "~*req.CGRID"},
					{"tag": "AnswerTime", "path": "*exp.AnswerTime", "type": "*variable", "value": "~*req.AnswerTime{*time_string:2006-01-02T15:04:05Z}"},
					{"tag": "Account", "path": "*exp.Account", "type": "*variable", "value": "~*req.Account"},
					{"tag": "Destination", "path": "*exp.Destination", "type": "*variable", "value": "~*req.Destination"},
					{"tag": "Usage", "path": "*exp.Usage", "type": "*variable", "value": "~*req.Usage"},

				],
			},
...

Now after a restart of CGrateS, our exports look like this:

Stunning, truly beautiful, look at that output!

Right, well you may at this point have noticed a problem if you’ve run this more than once: every time we run this, we get all the CDRs since the beginning of time.

Now there’s a few ways we can handle this. If we only want CDRs generated in the past day, for example, we can filter on that as an input to the ExportCDRs API call, using the multitude of filters available to us, as documented in the API docs.

But where filtering by date/time falls down is that if an offline CDR for a call made on Monday only got ingested on Tuesday, it would be missed by the export.

But setting Verbose: True on the ExportCDRs API call gives us a handy trick: the response from the API tells us the highest OrderID we just exported, in the LastExpOrderID field.

If we jump over to the SQL database we use for StorDB, we can see that 33 is the ID of the highest CDR in the system.

So let’s try something, let’s run the exporter again, but this time let’s get all the CDRs where the ID is higher than 33:

#Process CDR Event for a single SMS
pprint.pprint(CGRateS_Obj.SendData({"method": "CDRsV2.ProcessExternalCDR","params": [{"OriginID": str(uuid.uuid1()),"ToR": "*sms","RequestType": "*pseudoprepaid","AnswerTime": now.strftime("%Y-%m-%d %H:%M:%S"),"SetupTime": now.strftime("%Y-%m-%d %H:%M:%S"),"Tenant": "cgrates.org","Account": account,"Destination" : "61412345678","Usage": "1",}]}))

#Trigger export where the OrderID is above 33
result = CGRateS_Obj.SendData({"method":"APIerSv1.ExportCDRs","params":[
    {"ExporterIDs": ["CSVExporter"],
     "Verbose" : True,
     "ExtraArgs" : {
        "OrderIDStart" : int(33),
     },
     "Accounts" : [account]}
]})
pprint.pprint(result)

Boom, now if we have a look at the output we can see the export covered two records, and the last ID was 35.

{'method': 'APIerSv1.ExportCDRs', 'params': [{'ExporterIDs': ['CSVExporter'], 'Verbose': True, 'ExtraArgs': {'OrderIDStart': 33}, 'run_id': 'carrier_interconnect', 'Accounts': ['Nick_Test_123']}]}
{'error': None,
 'id': None,
 'result': {'CSVExporter': {'ExportPath': '/tmp/testCSV/CSVExporter_c444cd9.csv',
                            'FirstEventATime': '2024-01-02T19:19:59+11:00',
                            'FirstExpOrderID': 34,
                            'LastEventATime': '2024-01-02T19:20:08+11:00',
                            'LastExpOrderID': 35,
                            'NegativeExports': [],
                            'NumberOfEvents': 2,
                            'PositiveExports': ['034aba2', '22e4fa7'],
                            'TimeNow': '2024-01-02T19:20:08.355664133+11:00',
                            'TotalCost': 0,
                            'TotalSMSUsage': 2}}}

So as long as we keep track of the LastExpOrderID value, and feed that as an input every time we run ExportCDRs, we can ensure we never miss a CDR, and never get the same CDR twice.
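
To close the loop, here’s a rough sketch of that pattern – stashing LastExpOrderID between runs (in a text file here, but it could just as easily be a database row) and feeding it back in as OrderIDStart on the next run:

import os
import pprint
import cgrateshttpapi

CGRateS_Obj = cgrateshttpapi.CGRateS('localhost', 2080)
STATE_FILE = "/tmp/last_exported_order_id.txt"      #Where we stash the high water mark between runs

#Load the last OrderID we exported (0 if this is the first run)
last_order_id = 0
if os.path.exists(STATE_FILE):
    with open(STATE_FILE) as f:
        last_order_id = int(f.read().strip() or 0)

#Trigger the export for everything above the OrderID we've already exported
result = CGRateS_Obj.SendData({
    "method": "APIerSv1.ExportCDRs",
    "params": [{
        "ExporterIDs": ["CSVExporter"],
        "Verbose": True,
        "ExtraArgs": {"OrderIDStart": last_order_id}
    }]
})
pprint.pprint(result)

#Stash the new high water mark ready for the next run
exporter_result = (result.get("result") or {}).get("CSVExporter")
if exporter_result:
    with open(STATE_FILE, "w") as f:
        f.write(str(exporter_result["LastExpOrderID"]))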