Kamailio World was an online event this year, but all the videos have now been posted and you can find them here.

Sometimes you need Kamailio to act as a User Agent Client. We've covered using the UAC module to send SIP REGISTER messages and respond with authentication info, but if you find you're getting 401 or 407 responses back when sending an INVITE, you'll need the UAC module – specifically uac_auth() – to authenticate the INVITE.
When Kamailio relays an INVITE to a destination, typically any replies / responses that are part of that dialog will go back to the originator using the Via headers.
This would be fine, except the originator doesn't know the username and password requested by the carrier – but Kamailio does.
Instead, what we need Kamailio to do is: if the response to the INVITE is a 401 Unauthorized or a 407 Proxy Authentication Required, intercept the response, generate an answer to the authentication challenge, and send the authenticated INVITE on to the carrier.
To do this we’ll need to use the UAC module in Kamailio and set some basic params:
loadmodule "uac.so"
modparam("uac", "reg_contact_addr", "10.0.1.252:5060")
modparam("uac", "reg_db_url", DBURL)
modparam("uac","auth_username_avp","$avp(auser)")
modparam("uac","auth_password_avp","$avp(apass)")
modparam("uac","auth_realm_avp","$avp(arealm)")
Next up we'll relay the INVITE using the Transaction module (we need to handle the response, so the relay has to be transaction stateful).
Before we call t_relay(), we need to specify a failure route to be called if a negative response code comes back; we'll use one called TRUNKAUTH and tell the transaction module that's the one to use by adding t_on_failure("TRUNKAUTH");
$du = "sip:sip.nickvsnetworking.com:5060";
if(is_method("INVITE")) {
t_on_failure("TRUNKAUTH");
t_relay();
exit;
}
What we've done is rewritten the destination URI to sip.nickvsnetworking.com; if the request is an INVITE, we arm a failure route called TRUNKAUTH and relay the request with the transaction module to sip.nickvsnetworking.com.
What we get back is a 401 response from our imaginary carrier, which includes a WWW-Authenticate header carrying the authentication challenge.
To catch this we'll create a failure route named "TRUNKAUTH":
failure_route[TRUNKAUTH] {
xlog("trunk auth");
}
We’ll make sure the transaction hasn’t been cancelled, and if it has bail out (no point processing subsequent requests on a cancelled dialog).
failure_route[TRUNKAUTH] {
xlog("trunk auth");
if (t_is_canceled()) {
exit;
}
}
Next we'll determine whether the response code is a 401 Unauthorized or a 407 Proxy Authentication Required (an authentication challenge from our upstream carrier):
failure_route[TRUNKAUTH] {
xlog("trunk auth");
if (t_is_canceled()) {
exit;
}
xlog("Checking status code");
if(t_check_status("401|407")) {
xlog("status code is valid auth challenge");
}
}
Next we'll define the username and password we want to use for this challenge, and generate an authentication response based on these values using the uac_auth() function:
failure_route[TRUNKAUTH] {
xlog("trunk auth");
if (t_is_canceled()) {
exit;
}
xlog("Checking status code");
if(t_check_status("401|407")) {
xlog("status code is valid auth challenge");
$avp(auser) = "test";
$avp(apass) = "test";
uac_auth();
}
}
Then finally we'll relay that back to the carrier, with the Authorization / Proxy-Authorization header populated with our response to the challenge:
failure_route[TRUNKAUTH] {
xlog("trunk auth");
if (t_is_canceled()) {
exit;
}
xlog("Checking status code");
if(t_check_status("401|407")) {
xlog("status code is valid auth challenge");
$avp(auser) = "test";
$avp(apass) = "test";
uac_auth();
xlog("after uac_auth");
t_relay();
exit;
}
}
And done!
I wrote about using Ansible to automate Kamailio config management; Ansible is great at managing VMs or bare-metal deployments, but for containers, using Docker to build and manage the deployments is where it's at.
I’m going to assume you’ve got Docker in place, if not there’s heaps of info online about getting started with Docker.
The Kamailio team publish a Docker image for use; there's no master / latest tag at the moment, so you've got to specify the version – in this case kamailio:5.3.3-stretch.
Once we’ve got that we can start on the Dockerfile,
For this example I'm going to include my own kamailio.cfg and expose the SIP ports:
#Kamailio Test Stuff
FROM kamailio/kamailio:5.3.3-stretch
#Copy the config file onto the Filesystem of the Docker instance
COPY kamailio.cfg /etc/kamailio/
#Print out the current IP Address info
RUN ip add
#Expose port 5060 (SIP) for TCP and UDP
EXPOSE 5060
EXPOSE 5060/udp
Once the dockerfile is created we can build an image,
docker image build -t kamtest:0.1 .
And then run it,
docker run kamtest:0.1
Boom, now Kamailio is running with the config file I pushed to it from my Dockerfile directory.
Now I can setup a Softphone on my local machine and point it to the IP of the Docker instance and away we go,
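If you're not sure what IP the container ended up with, a quick way to check (assuming the default bridge network; the container ID comes from docker ps) is something like:
docker ps
docker inspect -f '{{ .NetworkSettings.IPAddress }}' <container_id>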
Where the real power comes in is that I can run that docker run command another 10 times and have another 10 Kamailio instances running.
Tie this in with Kubernetes or a similar platform and you’ve got a way to scale and manage upgrades unlike anything you’d get on Bare Metal or VMs.
I’ve uploaded a copy of my Dockerfile for reference, you can find it on my GitHub.
I had a few headaches getting the example P-CSCF configs from the Kamailio team to run; recent improvements to the IPsec support and general code evolution meant that the example config just didn't run.
So, after finally working out the changes I needed to make to get Kamailio to function as a P-CSCF, I took the plunge and made my first pull request on the Kamailio project.
And here it is!
https://github.com/kamailio/kamailio/pull/2203
It’s now in the master branch, so if you want to setup a P-CSCF using Kamailio, give it a shot, as the example config finally works!
I’ve touched on the http_client module in Kamailio in the past, and I’ve talked about using Kamailio as an HTTP server.
Today I thought I’d cover a simple use case – running an HTTP get from Kamailio and doing something with the output.
The http_client module does what it sounds like – it acts as an HTTP client to send HTTP GET and POST requests.
The use cases become clear quite quickly: you could use http_client to request credit from an accounting server via its API, get the latest rate to a destination from a supplier, pull weather data, etc, etc.
Let’s take a very simple example, we’ll load http_client by adding a loadmodule line:
... loadmodule "http_client.so" ...
Next I've put together a very simple request_route block that requests a page from a web server and sends the response back to the SIP client:
####### Routing Logic ########
/* Main SIP request routing logic
* - processing of any incoming SIP request starts with this route
* - note: this is the same as route { ... } */
request_route {
xlog("Got request");
http_client_query("https://nickvsnetworking.com/utils/curl.html", "", "$var(result)");
xlog("Result is $var(result)");
sl_reply("200", "Result from HTTP server was $var(result)");
}
Using the http_client_query() function we're able to query an HTTP server.
We’ll query the URL https://nickvsnetworking.com/utils/curl.html and store the output to a variable called result.
If you visit the URL you’ll just get a message that says “Hello there“, that’s what Kamailio will get when it runs the http_client function.
Next we print the result using an xlog() function so we can see the result in syslog,
Finally we send a stateless reply with the result code 200 and the body set to the result we got back from the server.
We can make this a bit more advanced: using HTTP POST we can send user variables and get back responses.
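As a rough sketch (the URL and form body here are just placeholders), the same http_client_query() function accepts a POST body as its second parameter – the one we left empty in the GET example above:
http_client_query("https://nickvsnetworking.com/utils/curl.html", "caller=$fU&callee=$tU", "$var(result)");
xlog("POST result was $var(result)");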
The http_client module is based on the ubiquitous cURL tool, that many users will already be familiar with.
For whatever reason you might want to run multiple Kamailio instances on the same machine.
In my case I was working on an all-in-one IMS box, which needed the P-CSCF, I-CSCF and S-CSCF all in the one box.
As you probably already know, all the startup scripts for each service/daemon live in the /etc/init.d directory.
We’ll start by copying the existing init.d file for kamailio:
cp /etc/init.d/kamailio /etc/init.d/kamailio1
Next up we’ll edit it to reflect the changes we want made.
You only really need to change the DEFAULTS= parameter, but you may also want to update the description, etc.
DEFAULTS=/etc/default/kamailio1
The CFGFILE parameter we can update later in the defaults file or specify here.
Next up we’ll need to create a defaults file where we specify how our instance will be loaded,
Again, we’ll copy an existing defaults file and then just edit it.
cp /etc/default/kamailio /etc/default/kamailio1
The file we just created from the copy will need to match the filename we specified in the init.d file for DEFAULTS=
In my case the filename is kamailio1
In here I’ll need to at minimum change the CFGFILE= parameter to point to the config file for the Kamailio instance we’re adding.
In this case the file is called kamailio1.cfg in /etc/kamailio/
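So, as a minimal sketch, the relevant line in /etc/default/kamailio1 ends up looking like this:
CFGFILE=/etc/kamailio/kamailio1.cfg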
For some Ubuntu systems you’re expected to reload the daemons:
systemctl daemon-reload
Once you’ve done all this you can now try and start your instance using /etc/init.d/kamailio1 start
For my example, startup failed as I hadn't yet created the config file kamailio1.cfg.
So I quickly created a config file and tried to start my service:
/etc/init.d/kamailio1 restart
And presto, my service is running,
I can verify all is running through ps aux:
ps aux | grep kamailio1
Just keep in mind if you want to run multiple instances of Kamailio, you can’t have them all bound to the same address / port.
This also extends to tools like kamcmd which communicate with Kamailio over a socket, again you’d need to specify unique ports for each instance.
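As a sketch (the socket paths here are just examples), you can give each instance its own ctl socket in its config and point kamcmd at it:
# in kamailio1.cfg
modparam("ctl", "binrpc", "unix:/var/run/kamailio1/kamailio1_ctl")
# then query that specific instance
kamcmd -s unix:/var/run/kamailio1/kamailio1_ctl core.uptime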
It's probably pretty evident to most why you'd want to use TLS these days; SIP Secure – aka sips – has been around for a long time and is supported by most SIP endpoints now.
Kamailio supports TLS and its setup is very straightforward.
I’ve got a private key and certificate file for the domain nickvsnetworking.com so I’ll use those to secure my Kamailio instance by using TLS.
I’ll start by copying both the certificate (in my case it’s cert.pem) and the private key (privkey.pem) into the Kamailio directory. (If you’ve already got the private key and certificate on your server for another application – say a web server, you can just reference that location so long as the permissions are in place for Kamailio to access)
Next up I'll open my Kamailio config (kamailio.cfg). I'll be working with an existing config and just adding the TLS capabilities, so I'll add this line to the config:
#!define WITH_TLS
That's pretty much the end of the configuration in kamailio.cfg; if we have a look at what's in place we can see that the TLS module loads its parameters from a separate file:
#!ifdef WITH_TLS
# ----- tls params -----
modparam("tls", "config", "/etc/kamailio/tls.cfg")
#!endif
So let’s next jump over to the tls.cfg file and add our certificate and private key;
[server:default]
method = TLSv1
verify_certificate = yes
require_certificate = yes
certificate = fullchain.pem
private_key = privkey.pem
Boom, as simple as that,
After restarting Kamailio subscribers can now contact us via TLS using sips.
You may wish to disable TCP & UDP transport in favor of only TLS.
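For example (with a placeholder IP address), you could comment out the UDP and TCP listen lines and keep only a TLS one, so Kamailio only accepts sips connections:
#listen=udp:203.0.113.10:5060
#listen=tcp:203.0.113.10:5060
listen=tls:203.0.113.10:5061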
If you’re planning on rolling out SIP over TLS (sips) to existing IP phones it’s worth looking at what Certificate Authorities (CAs) are recognised by the IP phones.
As TLS relies on a trust model where a CA acts kind of like a guarantor of the validity of the certificate, if the IP phone doesn't recognise the CA it may see the certificate as invalid.
Some IP phones for example won’t recognize Let’s Encrypt certificates as valid, while others may not recognize any of the newer CAs.
Installing from source can be a headache,
If you’re running a Debian system, the Kamailio team provide nightly development builds as Debian packages that can be installed on Debian or Ubuntu systems using the apt package manager.
Installing is a breeze, first we just add the GPG key for the repo:
wget -O- http://deb.kamailio.org/kamailiodebkey.gpg | sudo apt-key add -
Then it’s just a matter of adding the release to your /etc/apt/sources.list file.
I’m running Bionic, so I’ll add:
deb http://deb.kamailio.org/kamailiodev-nightly bionic main
deb-src http://deb.kamailio.org/kamailiodev-nightly bionic main
Then just update and install the packages you require:
apt-get update
apt-get install kamailio*
For a full list of the published packages check out the Debian package list, where you can find the nightly builds and stable builds for each of the releases.
Enjoy!
Despite the fact it's 2020, there's still a lot of folks in the world manually configuring boxes.
Ansible is a topic I could talk all day about, but in essence it's an automation framework: tell Ansible what to do once and it can spin you up two boxes, or two thousand boxes, and manage the config on them.
I talked about DMQ, the Distributed Message Queue, in a Kamailio Bytes post some time ago, so as an example I'll share a playbook that installs Kamailio the lazy way from the repos and loads a DMQ config, with the IP address and DMQ address pulled from variables based on the host itself.
There’s a huge number of posts on installing and the basics of Ansible online, if you’re not familiar with Ansible already I’d suggest starting by learning the basics and then rejoining us.
Your hosts might be running on bare metal, VMware VMs or in the cloud – it doesn't matter much, but I'm going to assume you're working with a Debian-based system.
I’ve already got 3 servers ready to go, they’ve got sequential IP Addresses so I’ve added the range to my /etc/ansible/hosts file:
I’ve created the group kamailio and put the IP Address range 10.0.1.193 to 10.0.1.195 in there.
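The inventory entry looks something like this (Ansible's bracketed range syntax expands to .193, .194 and .195):
[kamailio]
10.0.1.[193:195]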
You will probably need to add the authentication info, such as passwords, private keys and privilege escalation details, but I’m going to assume you’ve already done that and you can run the ping module on each one:
ansible kamailio -m ping
Assuming that comes back OK and you can get into each one let’s move onto the Playbook.
There's a few tasks we'll get out of the way before we configure Kamailio.
The first of these is adding the Debian repo and the keys.
Next we'll load a Kamailio config from a template that fills in our IP address and Kamailio version, and then we'll install Kamailio.
Rather than talk you through each of the plays here’s a copy of my playbook:
---
- name: Configure Kamailio
  hosts: kamailio
  become: yes
  vars:
    kamailio_version: "53"
    debian_sources_dir: "/etc/apt/sources.list.d"
  tasks:
    - name: Add keys for Kamailio repo
      apt_key:
        url: http://deb.kamailio.org/kamailiodebkey.gpg
        state: present
    - name: Add repo to sources.list
      apt_repository:
        repo: deb http://deb.kamailio.org/kamailio{{kamailio_version}} {{hostvars[inventory_hostname]['ansible_lsb']['codename']}} main
        #The full list of Debian repos can be found at http://deb.kamailio.org/
        #The version is based off the versions listed there and the release is based on the codename of the Debian / Ubuntu release.
        state: present
    - name: Copy Config Template
      #Copies config from the template, fills in variables and uploads to the server
      template:
        src: kamailio.cfg.j2
        dest: /etc/kamailio/kamailio.cfg
        owner: root
        group: root
        backup: yes
      register: config_changed
    - name: Install Kamailio
      #Updates cache (apt-get update) and then installs Kamailio
      apt:
        name: kamailio
        update_cache: yes
        state: present
      register: kamailio_installed_firstrun
    - name: Restart Kamailio if config changed
      service:
        name: kamailio
        state: restarted
      when: config_changed.changed
    - name: Start Kamailio if installed for the first time
      service:
        name: kamailio
        state: started
      when: kamailio_installed_firstrun.changed
This should be pretty straightforward to anyone who's used Ansible before, but the real magic happens in the template module. Let's take a look.
Pushing out static config is one thing, but things like IP addresses, FQDNs and SSL certs may differ from machine to machine. So instead of just pushing one config, I've created a config and added some variables in Jinja2 format that will be filled in with values from the target when it's pushed out.
In the template module of the playbook you can see I’ve specified the file kamailio.cfg.j2 this is just a regular Kamailio config file but I’ve added some variables, let’s look at how that’s done.
On the machine 10.0.1.194 we want it to listen on 10.0.1.194. We could listen on 0.0.0.0, but this can lead to security concerns, so instead let's specify the IP using a Jinja2 variable:
listen=udp:{{ ansible_default_ipv4.address }}:5060
listen=tcp:{{ ansible_default_ipv4.address }}:5060
listen=udp:{{ ansible_default_ipv4.address }}:5090
By putting ansible_default_ipv4.address in two sets of curly brackets, we tell Ansible to fill in these values in the template with the IPv4 address of the target machine.
Let's take a look at what ends up in 10.0.1.194's actual kamailio.cfg file.
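With ansible_default_ipv4.address filled in, those template lines render out as:
listen=udp:10.0.1.194:5060
listen=tcp:10.0.1.194:5060
listen=udp:10.0.1.194:5090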
Let’s take another example,
To keep DMQ humming it makes sense to have different DMQ domains for different versions of Kamailio, so in the Kamailio config file template I've used the kamailio_version variable in the DMQ address, as shown below.
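In the template that line looks something like this (the hostname pattern is reconstructed from the rendered output below):
modparam("dmq", "notification_address", "sip:dmq-{{ kamailio_version }}.nickvsnetworking.com:5090")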
This means on a Kamailio 5.3 box the rendered config ends up looking like this:
# ---- dmq params ----
modparam("dmq", "server_address", "sip:10.0.1.194:5090")
modparam("dmq", "notification_address", "sip:dmq-53.nickvsnetworking.com:5090")
Running it is just a simple matter of calling ansible-playbook and pointing it at the playbook we created, here’s how it looks setting up the 3 hosts from their vanilla state:
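Assuming the playbook is saved as configure_kamailio.yml (as in the GitHub copy linked at the end of this post), that's simply:
ansible-playbook configure_kamailio.yml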
The great thing about Ansible is that it's idempotent – it will detect whether it needs to perform each of the tasks specified in the playbook.
So if we run this again it won’t try and add the repo, GPG keys, install Kamailio and load the template, it’ll look and see each of those steps have already been done and skip each of them.
But what if someone makes some local changes on one of the boxes? Let's look at what happens:
Likewise, if we decide to change our config we only need to update the template file and Ansible will push it out to all our machines. I've added a comment into the Kamailio template, so let's run it again and see the config pushed out to all the Kamailio instances and watch them restart.
Hopefully this gives you a bit more of an idea of how to manage a large number of Kamailio instances at scale. As always, I've put my running code on GitHub: the Ansible playbook (configure_kamailio.yml) and the Kamailio Jinja2 config template (kamailio.cfg.j2).
I’ve been working for some time on open source mobile network cores, and one feature that has been a real struggle for a lot of people (Myself included) is getting VoLTE / IMS working.
Here’s some of the issues I’ve faced, and the lessons I learned along the way,
Sadly on most UEs / handsets there's no "Make VoLTE work now" switch; you've got to satisfy a bunch of dependencies in the OS before the baseband will start sending SIP anywhere.
Your eNB must support additional bearers (I've managed to get away without dedicated bearers in my testing) so the device can setup an APN for the IMS traffic.
Sadly, at the moment this rules out software-defined eNodeBs like srsENB.
In the end I opted for a commercial eNB which has support for dedicated bearers.
According to the 3GPP IMS docs, an ISIM (IMS SIM) is not a requirement for IMS to work.
However in my testing I found Android didn’t have the option to enable VoLTE unless an ISIM was present the first time.
In a weird quirk I found once I’d inserted an ISIM and connected to the VoLTE network, I could put a USIM in the UE and also connect to the VoLTE network.
Obviously the parameters you'd normally set on the ISIM, such as Domain, IMPU, IMPI & AD, are kind of "guessed" by the UE when using a USIM, but the AKAv1-MD5 algorithm does run.
There’s a lot of things you’ll need to have correct on your UE before it’ll even start to think about sending SIP messaging.
I was using commercial UEs (Samsung handsets) without engineering firmware, so I had very limited info on what was going on "under the hood". There's no "Make VoLTE do" tickbox; there's a VoLTE enable setting, but that won't do anything by default.
In the end I found adding a new APN called ims with type ims and enabling VoLTE in the settings finally saw the UE setup an IMS dedicated bearer, and request the P-CSCF address in the Protocol Configuration Options.
If your P-GW doesn’t know the IP of your P-CSCF, it’s not going to be able to respond to it in the Protocol Configuration Options (PCO) request sent by the UE with that nice new bearer for IMS we just setup.
Coming from a voice background, and pretty much having RFC 3261 tattooed on my brain, when I finally got the SIP REGISTER request sent to the Proxy CSCF I knocked something up in Kamailio to send back a 200 OK, thinking that’d be the end of it.
For any other SIP endpoint this would have been fine, but IMS Clients, nope.
Reading the specs drove home the same lesson anyone attempting to setup their own LTE network quickly learns – Mutual authentication means both the network and the UE need to verify each other, while I (as the network) can say the UE is OK, the UE needs to check I’m on the level.
In the end I added Multimedia Authentication support to PyHSS, and responded with a Crypto challenge using the AKAv1-MD5 auth,
I saw my 401 response go back to the UE and then no response. Nada.
This led to my next lesson…
According to the 3GPP docs, support for IPsec is optional, but I found this not to be the case on the handsets I’ve tested.
After sending back my 401 response the UE looks for the IPsec info in the 401 response, then tries to setup an IPsec SA and sends ESP packets back to the P-CSCF address.
Even with my valid AKAv1-MD5 auth, I found my UE wasn’t responding until I added IPsec support on the P-CSCF, hence why I couldn’t see the second REGISTER with the Authentication Info.
After setting up IPsec support, I finally saw the UE’s REGISTER with the AKAv1-MD5 authentication, and was able to send a 200 OK.
To learn all these lessons took a long time,
One thing I worked out a bit late but would have been invaluable was cracking into the Engineering Debug options on the UEs I was testing with.
Samsung UEs feature a Sysdump utility that has an IMS Debugging tool; sadly it's only there for carriers doing IMS interop testing.
After a bit of work I detailed in this post – Reverse Engineering Samsung Sysdump Utils to Unlock IMS Debug & TCPdump on Samsung Phones – I managed to create a One-Time-Password generator for this to generate valid Samsung OTP keys to unlock the IMS Debugging feature on these handsets.
I outlined turning on these features in this post.
This means without engineering firmware you’re able to pull a bunch of debugging info off the UE.
If you’ve recently gone through this, are going through this or thinking about it, I’d love to hear your experiences.
I’ll be continuing to share my adventures here and elsewhere to help others get their own VoLTE networks happening.
If you’re leaning about VoLTE & IMS networks, or building your own, I’d suggest checking out my other posts on the topic.
In my last post I talked about using KEMI in Kamailio and how you can integrate in a different programming language to handle your SIP request handling in a language you already know – Like Python!
So in this post I'll cover the basics of how we can manage requests and responses from Kamailio in Python. If you haven't already read it, go back to last week's post and get that running – it's where we'll start off.
Before we get too excited, there's some boilerplate we've got to add to our Python script: we need to create a class called kamailio and populate it by defining some functions. We'll define an __init__ to handle loading of the class, a child_init for handling child processes, and a ksr_request_route to handle incoming requests. We'll also need to define a mod_init function – outside of the kamailio class – to instantiate the class.
import sys
import Router.Logger as Logger
import KSR as KSR
import requests

# global function to instantiate a kamailio class object
# -- executed when kamailio app_python module is initialized
def mod_init():
    KSR.info("===== from Python mod init\n");
    return kamailio();

# -- {start defining kamailio class}
class kamailio:

    def __init__(self):
        KSR.info('===== kamailio.__init__\n');

    # executed when kamailio child processes are initialized
    def child_init(self, rank):
        KSR.info('===== kamailio.child_init(%d)\n' % rank);
        return 0;

    # SIP request routing
    # -- equivalent of request_route{}
    def ksr_request_route(self, msg):
        KSR.info("===== request - from kamailio python script\n");
        KSR.dbg("method " + KSR.pv.get("$rm") + " r-uri " + KSR.pv.get("$ru"))
Most of these should be pretty self-explanatory for anyone who's done a bit more in-depth Python programming, but it's no big deal if you don't understand all of it – the only part you need to understand is the ksr_request_route function.
ksr_request_route: translates to our request_route{} in the Kamailio native scripting language, all requests that come in will start off in this part.
So let’s start to build upon this, so we’ll blindly accept all SIP registrations;
...
    # SIP request routing
    # -- equivalent of request_route{}
    def ksr_request_route(self, msg):
        KSR.info("===== request - from kamailio python script\n");
        KSR.dbg("method " + KSR.pv.get("$rm") + " r-uri " + KSR.pv.get("$ru"))

        if KSR.is_method("REGISTER"):
            KSR.sl.send_reply(200, "Sure")
Here you'll see we've added an if statement, just as we would anywhere else in Python; in this case we're asking if KSR.is_method("REGISTER") is true, and if it is, we'll send back a 200 OK response.
All the Kamailio bits we’ll use in Python will have the KSR. prefix, so let’s take a quick break here to talk about KSR. The KSR. functions are the KEMI functions we’ve exposed to Python.
Without them we're just writing plain Python, and we'd have to re-implement everything Kamailio provides natively ourselves, which would be crazy.
So we leverage the Kamailio modules you know and love from Python using Python’s logic / programming syntax, as well as opening up the ability to pull in other libraries from Python.
There’s a full (ish) list of the KEMI functions here, but let’s talk about the basics.
Let’s look at how we might send a stateless reply,
There’s a module function to send a stateless reply;
KSR.sl.send_reply(200, "OK")
The vast majority of functions are abstracted as module functions, like the example above, but not all of them.
Not every function needs its own module wrapper though – there's also a way to call any function that you'd call from the native scripting language, kind of like an exec command:
KSR.x.modf("sl_send_reply", "200", "OK");
So thanks to this we can call any Kamailio function from Python, even if it’s not explicitly in the KEMI abstraction.
So earlier we managed REGISTER requests and sent back a 200 OK response.
What about forwarding a SIP Request to another proxy? Let’s follow on with an elif statement to test if the method is an INVITE and statelessly forward it.
        elif KSR.is_method("INVITE"):
            #Set host IP to 10.1.1.1
            KSR.sethost("10.1.1.1");
            #Forward the request on
            KSR.forward()
Now an incoming SIP invite will be proxied / forwarded to 10.1.1.1, all from Python.
But so far we’ve only done things in KEMI / Python that we could do in our native Kamailio scripting language, so let’s use some Python in our Python!
I utterly love the Python Requests library, so let’s use that to look up our public IP address and add it as a header to our forwarded SIP INVITE;
        elif KSR.is_method("INVITE"):
            #Lookup our public IP address
            try:
                ip = requests.get('https://api.ipify.org').text
            except:
                ip = "Failed to resolve"
            #Add that as a header
            KSR.hdr.append("X-KEMI: I came from KEMI at " + str(ip) + "\r\n");
            #Set host IP to 10.1.1.1
            KSR.sethost("10.1.1.1");
            #Forward the request on
            KSR.forward()
(For anyone pedantic out there, Kamailio does have an HTTP client module that could do this too, but Requests is awesome)
So let’s have a look at our forwarded request:
So let’s wrap this up a bit and handle any other request that’s not an INVITE or a REGISTER, with a 500 error code.
    # SIP request routing
    # -- equivalent of request_route{}
    def ksr_request_route(self, msg):
        KSR.dbg("method " + KSR.pv.get("$rm") + " r-uri " + KSR.pv.get("$ru"))

        if KSR.is_method("REGISTER"):
            KSR.sl.send_reply(200, "OK")

        elif KSR.is_method("INVITE"):
            #Lookup our public IP address
            try:
                ip = requests.get('https://api.ipify.org').text
            except:
                ip = "Failed to resolve"
            #Add that as a header
            KSR.hdr.append("X-KEMI: I came from KEMI at " + str(ip) + "\r\n");
            #Set host IP to 10.1.1.1
            KSR.sethost("10.1.1.1");
            #Forward the request on
            KSR.forward()

        else:
            KSR.sl.send_reply(500, "Got no idea...")
I’ve talked about using the UAC module, but as promised, here’s how we can use the UAC module to send SIP REGISTER requests to another SIP server so we can register to another SIP proxy.
Let’s say we’re using Kamailio to talk to a SIP Trunk that requires us to register with them so they know where to send the calls. We’d need to use Kamailio UAC module to manage SIP Registration with our remote SIP Trunk.
But Kamailio’s a proxy, why are we sending requests from it? A proxy just handles messages, right?
Proxies don’t originate messages, it’s true, and Kamailio can be a proxy, but with the UAC module we can use Kamailio as a Client instead of a server. Keep in mind Kamailio is what we tell it to be.
Before we can go spewing registrations out all over the internet we need to start by getting a few things in place;
First of which is configuring UAC module, which is something I covered off in my last post,
Once we've got that done we'll need to tell the UAC module the address to use in our Contact header, and the database URL of the database we've setup.
modparam("uac", "reg_contact_addr", "192.168.1.99:5060")
modparam("uac", "reg_db_url", "mysql://kamailio:kamailiorw@localhost/kamailio")
I haven’t used a variable like DBURL for the database information, but you could.
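If you did want to use a define for it, a minimal sketch (with the same credentials as above) would be:
#!define DBURL "mysql://kamailio:kamailiorw@localhost/kamailio"
modparam("uac", "reg_db_url", DBURL)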
Finally a restart will see these changes pushed into Kamailio.
/etc/init.d/kamailio restart
This is the end of the Kamailio config side of things, which you can find on my GitHub here.
Once we’ve got a database connection in place and UAC module loaded, then we can configure an entry in the uacreg table in the database, in my example I’m going to be registering to an Asterisk box on 192.168.1.205, so I’ll insert that into my database:
mysql> INSERT INTO `uacreg` VALUES (NULL,'myusername','myusername','192.168.1.205','myusername','192.168.1.205','asterisk','myusername','mypassword','','sip:192.168.1.205:5060',60,0,0);
Note: If you’re using a later version of Kamailio (5.4+) then the DB schema changes and you may want something like this:
insert into uacreg values ('', 'myusername', 'myusername', 'mydomain', 'myusername', 'mydomain', 'asteriskrealm', 'myusername', 'mypassword', '', 'sip:remoteproxy.com:5060', 60, 0, 0, 0)
Having a look at the fields in our table makes it a bit clearer as to what we’ve got in place, setting flags to 0 will see Kamailio attempt registration. Make sure the auth_proxy is a SIP URI (Starts with sip:) and leave the auth_ha1 password empty as we haven’t calculated it.
mysql> SELECT * FROM `uacreg` \G
*************************** 1. row ***************************
           id: 2
       l_uuid: myusername
   l_username: myusername
     l_domain: 192.168.1.205
   r_username: myusername
     r_domain: 192.168.1.205
        realm: asterisk
auth_username: myusername
auth_password: mypassword
     auth_ha1:
   auth_proxy: sip:192.168.1.205:5060
      expires: 60
        flags: 0
    reg_delay: 0
After we've got our database connection in place, the UAC module configured and our database entries added, it's time to put it into play; we'll use kamcmd to check its status:
kamcmd> uac.reg_reload
kamcmd> uac.reg_dump
Unfortunately from Kamcmd we’re not able to see registration status, but Sngrep will show us what’s going on:
From sngrep we can see the REGISTER going out, the authentication challenge and the 200 OK at the end.
Make sure you’ve got your Realm correct, otherwise you may see an error like this:
ERROR: {2 10 REGISTER [email protected]} uac [uac_reg.c:946]: uac_reg_tm_callback(): realms do not match. requested realm: [localhost]
When learning to use Kamailio you might find yourself wondering whether you really want to learn to write a Kamailio configuration file – yet another scripting language to learn just to achieve a task.
Enter KEMI – the Kamailio Embedded Interface. KEMI allows you to abstract the routing logic to another programming language. In layman's terms this means you can write your routing blocks, like request_route{}, reply_route{}, etc, in languages you already know – like Lua, JavaScript, Ruby – and my favorite – Python!
You don't need to learn how to write complex routing logic in Kamailio's native scripting language; instead you can write your routing blocks in a language you're already familiar with.
Writing the routing logic in KEMI also allows you to change your routing blocks without having to restart Kamailio – something you can't do with the "native" scripting language. This means you can change your routing live.
Note: This isn’t yet in place for all languages – Some still require a restart.
While Kamailio’s got a huge list of modules to interface with a vast number of different things, the ~200 Kamailio modules don’t compare with the thousands of premade libraries that exist for languages like Python, Ruby, JavaScript, etc.
We’ll obviously need Kamailio installed, but we’ll also need the programming language we want to leverage setup (fairly obvious).
KEMI only takes care of the routing of SIP messages inside our routing blocks – So we’ve still got the Kamailio cfg file (kamailio.cfg) that we use to bind and setup the service as required, load the modules we want and configure them.
Essentially we need to load the app for the language we use, in this example we’ll use app_python3.so and use that as our Config Engine.
loadmodule "app_python3.so"
modparam("app_python3", "load", "/etc/kamailio/kemi.py")
cfgengine "python"
After that we just need to remove all our routing blocks and create a basic Python3 script to handle it,
We’ll create a new python file called kemi.py
## Kamailio - equivalent of routing blocks in Python

import sys
import Router.Logger as Logger
import KSR as KSR

# global function to instantiate a kamailio class object
# -- executed when kamailio app_python module is initialized
def mod_init():
    KSR.info("===== from Python mod init\n");
    return kamailio();

# -- {start defining kamailio class}
class kamailio:

    def __init__(self):
        KSR.info('===== kamailio.__init__\n');

    # executed when kamailio child processes are initialized
    def child_init(self, rank):
        KSR.info('===== kamailio.child_init(%d)\n' % rank);
        return 0;

    # SIP request routing
    # -- equivalent of request_route{}
    def ksr_request_route(self, msg):
        KSR.info("===== request - from kamailio python script\n");
        KSR.info("===== method [%s] r-uri [%s]\n" % (KSR.pv.get("$rm"),KSR.pv.get("$ru")));
So that’s it! We’re running,
Running code for kamailio.cfg (Kamailio config) and kemi.py (Python3 script).
I’ve talked a little about my adventures with Diameter in the past, the basics of Diameter, the packet structure and the Python HSS I put together.
Kamailio is generally thought of as a SIP router, but it can in fact handle Diameter signaling as well.
Everything to do with Diameter in Kamailio relies on the C Diameter Peer and CDP_AVP modules which abstract the handling of Diameter messages, and allow us to handle them sort of like SIP messages.
CDP on its own doesn't actually allow us to send Diameter messages; it's relied upon by other modules, like CDP_AVP and many of the Kamailio IMS modules, to handle Diameter signaling.
Before we can start shooting Diameter messages all over the place we’ve first got to configure our Kamailio instance, to bring up other Diameter peers, and learn about their capabilities.
C Diameter Peer (Aka CDP) manages the Diameter connections, the Device Watchdog Request/Answers etc, all in the background.
We’ll need to define our Diameter peers for CDP to use so Kamailio can talk to them. This is done in an XML file which lays out our Diameter peers and all the connection information.
In our Kamailio config we’ll add the following lines:
loadmodule "cdp.so"
modparam("cdp", "config_file", "/etc/kamailio/diametercfg.xml")
loadmodule "cdp_avp.so"
This will load the CDP modules and instruct Kamailio to pull its CDP config from an XML file at /etc/kamailio/diametercfg.xml
Let’s look at the basic example given when installed:
<?xml version="1.0" encoding="UTF-8"?>
<!--
 DiameterPeer Parameters
  - FQDN - FQDN of this peer, as it should appear in the Origin-Host AVP
  - Realm - Realm of this peer, as it should appear in the Origin-Realm AVP
  - Vendor_Id - Default Vendor-Id to appear in the Capabilities Exchange
  - Product_Name - Product Name to appear in the Capabilities Exchange
  - AcceptUnknownPeers - Whether to accept (1) or deny (0) connections from peers with FQDN not configured below
  - DropUnknownOnDisconnect - Whether to drop (1) or keep (0) and retry connections (until restart) unknown peers in the list of peers after a disconnection.
  - Tc - Value for the RFC3588 Tc timer - default 30 seconds
  - Workers - Number of incoming messages processing workers forked processes.
  - Queue - Length of queue of tasks for the workers:
     - too small and the incoming messages will be blocked too often;
     - too large and the senders of incoming messages will have a longer feedback loop to notice that this Diameter peer is overloaded in processing incoming requests;
     - a good choice is to have it about 2 times the number of workers. This will mean that each worker will have about 2 tasks in the queue to process before new incoming messages will start to block.
  - ConnectTimeout - time in seconds to wait for an outbound TCP connection to be established.
  - TransactionTimeout - time in seconds after which the transaction timeout callback will be fired, when using transactional processing.
  - SessionsHashSize - size of the hash-table to use for the Diameter sessions. When searching for a session, the time required for this operation will be that of sequential searching in a list of NumberOfActiveSessions/SessionsHashSize. So higher the better, yet each hashslot will consume an extra 2xsizeof(void*) bytes (typically 8 or 16 bytes extra).
  - DefaultAuthSessionTimeout - default value to use when there is no Authorization Session Timeout AVP present.
  - MaxAuthSessionTimeout - maximum Authorization Session Timeout as a cut-out measure meant to enforce session refreshes.
-->
<DiameterPeer
        FQDN="pcscf.ims.smilecoms.com"
        Realm="ims.smilecoms.com"
        Vendor_Id="10415"
        Product_Name="CDiameterPeer"
        AcceptUnknownPeers="0"
        DropUnknownOnDisconnect="1"
        Tc="30"
        Workers="4"
        QueueLength="32"
        ConnectTimeout="5"
        TransactionTimeout="5"
        SessionsHashSize="128"
        DefaultAuthSessionTimeout="60"
        MaxAuthSessionTimeout="300"
>
        <!--
         Definition of peers to connect to and accept connections from. For each peer found in here
         a dedicated receiver process will be forked. All other unknown peers will share a single receiver.
         NB: You must have a peer definition for each peer listed in the realm routing section
        -->
        <Peer FQDN="pcrf1.ims.smilecoms.com" Realm="ims.smilecoms.com" port="3868"/>
        <Peer FQDN="pcrf2.ims.smilecoms.com" Realm="ims.smilecoms.com" port="3868"/>
        <Peer FQDN="pcrf3.ims.smilecoms.com" Realm="ims.smilecoms.com" port="3868"/>
        <Peer FQDN="pcrf4.ims.smilecoms.com" Realm="ims.smilecoms.com" port="3868"/>
        <Peer FQDN="pcrf5.ims.smilecoms.com" Realm="ims.smilecoms.com" port="3868"/>
        <Peer FQDN="pcrf6.ims.smilecoms.com" Realm="ims.smilecoms.com" port="3868"/>

        <!--
         Definition of incoming connection acceptors. If no bind is specified, the acceptor will bind
         on all available interfaces.
        -->
        <Acceptor port="3868" />
        <Acceptor port="3869" bind="127.0.0.1" />
        <Acceptor port="3870" bind="192.168.1.1" />

        <!--
         Definition of Auth (authorization) and Acct (accounting) supported applications. This
         information is sent as part of the Capabilities Exchange procedures on connecting to peers.
         If no common application is found, the peers will disconnect. Messages will only be sent to
         a peer if that peer actually has declared support for the application id of the message.
        -->
        <Acct id="16777216" vendor="10415" />
        <Acct id="16777216" vendor="0" />
        <Auth id="16777216" vendor="10415"/>
        <Auth id="16777216" vendor="0" />

        <!-- Supported Vendor IDs - list of values which will be sent in the CER/CEA in the Supported-Vendor-ID AVPs -->
        <SupportedVendor vendor="10415" />

        <!--
         Realm routing definition. Each Realm can have a different table of peers to route towards.
         In case the Destination Realm AVP contains a Realm not defined here, the DefaultRoute entries
         will be used.
         Note: In case a message already contains a Destination-Host AVP, Realm Routing will not be applied.
         Note: Routing will only happen towards connected and application id supporting peers.
         The metric is used to order the list of preferred peers, while looking for a connected and
         application id supporting peer. In the end, of course, just one peer will be selected.
        -->
        <Realm name="ims.smilecoms.com">
                <Route FQDN="pcrf1.ims.smilecoms.com" metric="3"/>
                <Route FQDN="pcrf2.ims.smilecoms.com" metric="5"/>
        </Realm>
        <Realm name="temp.ims.smilecoms.com">
                <Route FQDN="pcrf3.ims.smilecoms.com" metric="7"/>
                <Route FQDN="pcrf4.ims.smilecoms.com" metric="11"/>
        </Realm>
        <DefaultRoute FQDN="pcrf5.ims.smilecoms.com" metric="15"/>
        <DefaultRoute FQDN="pcrf6.ims.smilecoms.com" metric="13"/>
</DiameterPeer>
First we need to start by telling CDP about the Diameter peer it's going to be – we do this in the <DiameterPeer> section, where we define the FQDN and Diameter Realm we're going to use, as well as some general configuration parameters.
The <Peer> entries are, of course, Diameter peers. Defining them here means a connection is established to each one, capabilities are exchanged and Watchdog requests/responses are managed. We define the usage of each peer further on in the config.
The Acceptor section – fairly obviously – sets the bindings for the addresses and ports we’ll listen on.
Next up we need to define the Diameter applications we support in the <Acct>, <Auth> and <SupportedVendor> elements. This can be a little unintuitive, as we could list support for every Diameter application here, but unless you've got a module that can handle those applications it's of no use.
Instead of using Dispatcher to manage sending Diameter requests, CDP handles this for us. CDP keeps track of each peer's status and capabilities, and we can group like peers together; for example we may have a pool of PCRF NEs, so we can group them together into a <Realm>. Instead of calling a peer directly we can call the realm, and CDP will dispatch the request to an up peer inside that realm, similar to Dispatcher groups.
Finally we can configure a <DefaultRoute> which will be used if we don’t specify the peer or realm the request needs to be sent to. Multiple default routes can exist, differentiated based on preference.
We can check the status of peers using Kamcmd’s cdp.list_peers command which lists the peers, their states and capabilities.
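For example, from the shell:
kamcmd cdp.list_peers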
You may already be familiar with Kamailio’s Disptacher module, if you’re not, you can learn all about it in my Kamailio Bytes – Dispatcher Module post.
One question that’s not as obvious as it perhaps should be is the different states shown with kamcmd dispatcher.list command;
So what do the flags for state mean?
The first letter in the flag is the current state: Active (A), Inactive (I) or Disabled (D).
The second letter in the flag is the monitoring status: Probing (P), meaning the destination is actively checked with SIP OPTIONS pings, or Not Set (X), denoting the destination isn't actively checked with SIP OPTIONS pings.
AP – Actively Probing – SIP OPTIONS are getting a response, routing to this destination is possible, and it’s “Up” for all intents and purposes.
IP – Inactively Probing – Destination is not meeting the threshold of SIP OPTIONS request responses it needs to be considered active. The destination is either down or not responding to all SIP OPTIONS pings. Often this is due to needing X number of positive responses before considering the destination as “Up”.
DX – Disabled & Not Probing – This device is disabled, no SIP OPTIONS are sent.
AX – Active & Not Probing – No SIP OPTIONS are sent to check state, but it is treated as "Up" even though the remote end may not be reachable.
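The probing behaviour itself comes from the dispatcher module parameters; as a rough sketch (the values here are just examples):
modparam("dispatcher", "ds_ping_interval", 10)     # send SIP OPTIONS pings every 10 seconds
modparam("dispatcher", "ds_probing_mode", 1)       # probe all destinations, not only inactive ones
modparam("dispatcher", "ds_probing_threshold", 3)  # failed probes before a destination is marked inactive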
Back to basics today,
In the third part of the Kamailio 101 series I briefly touched upon pseudovariables, but let’s look into what exactly they are and how we can manipulate them to change headers.
The term “pseudo-variable” is used for special tokens that can be given as parameters to different script functions and they will be replaced with a value before the execution of the function.
https://www.kamailio.org/wiki/cookbooks/devel/pseudovariables
You’ve probably seen in any number of the previous Kamailio Bytes posts me use pseudovariables, often in xlog or in if statements, they’re generally short strings prefixed with a $ sign like $fU, $tU, $ua, etc.
When Kamailio gets a SIP message it explodes it into a pile of variables, getting the To URI and putting it into a pseudovariable called $tU, and so on.
We can update the value of, say, $tU and then forward the SIP message on, and the To URI will now use our updated value.
When it comes to rewriting caller ID, changing domains, manipulating specific headers and so on, pseudovariables are where it mostly happens.
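For example, you can dump a few of them to syslog with xlog:
xlog("From user: $fU - To user: $tU - User-Agent: $ua\n");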
Kamailio allows us to read these variables and for most of them rewrite them – But there’s a catch. We can mess with the headers which could result in our traffic being considered invalid by the next SIP proxy / device in the chain, or we could mess with the routing headers like Route, Via, etc, and find that our responses never get where they need to go.
So be careful! Headers exist for a reason, some are informational for end users, others are functional so other SIP proxies and UACs can know what’s going on.
When Kamailio's SIP parser receives a SIP request/response it decodes the vast majority of the SIP headers into a variety of pseudovariables, which we can then reference from our routing logic.
Let’s pause here and go back to the Stateless SIP Proxy Example, as we’ll build directly on that.
Follow the instructions in that post to get your stateless SIP proxy up and running, and we’ll make this simple change:
####### Routing Logic ########
/* Main SIP request routing logic
* - processing of any incoming SIP request starts with this route
* - note: this is the same as route { ... } */
request_route {
xlog("Received $rm to $ru - Forwarding");
$fU = "Nick Blog Example"; #Set From Username to this value
#Forward to new IP
forward("192.168.1.110");
}
Now when our traffic is proxied the From Username will show “Nick Blog Example” instead of what it previously showed.
Pretty simple, but very powerful.
As you’ve made it this far might be worth familiarising yourself with the different types of SIP proxy – Stateless, Transaction Stateful and Dialog Stateful.
I recently wrote a post on software-based transcoding limits on common virtualisation hardware.
To do this I needed to make a lot of calls, consistently, so as to generate some pretty graphs and stats.
To do this, I used SIPp (a performance testing tool for SIP) to simulate many concurrent calls leading to many concurrent transcoding sessions.
I built SIPp on Ubuntu 18.04:
apt-get install make autoconf pkg-config automake libncurses5-dev libpcap*
git clone https://github.com/SIPp/sipp.git
cd sipp/
./build.sh --with-rtpstream
cp sipp /usr/local/bin/
Next I setup RTPengine and setup Kamailio to use it.
I modified the Kamailio config to allow transcoding, as I talked about in the post on setting up transcoding in RTPengine with Kamailio.
Now I had a working Kamailio instance with RTPengine that was transcoding.
So the next step was testing that the transcoding was working; for this I had two SIPp instances, one to make the calls and one to answer them.
Instance 1 made calls to the IP of the Kamailio / RTPengine instance. For this I modified the uac_pcap scenario to play back an RTP stream of a PCMA (G.711 a-law) call to the called party (stored in a pcap file), and made it call the Kamailio instance multiple times based on how many concurrent transcoding sessions I wanted:
sipp -m 120 -r 200 -sf uac_pcap.xml rtpenginetranscode.nickvsnetworking.com
Instance 2 acted as a simple SIP UAS, the call came in, it answered and echoed back the RTP stream it received.
sipp -rtp_echo -sf uas.xml