We’ve talked a bit in the Kamailio Bytes series about different modules we can use, but I thought it’d be useful to talk about setting up a SIP Proxy using Kamailio, and what’s involved in routing a message from Host A to Host B.
When we talk about proxying, for the most part we're talking about forwarding. Let's look at the steps involved:
Our Kamailio instance receives a SIP request (for simplicity we’ll assume an INVITE).
Kamailio consults its routing logic to look up where to forward the request to. You could determine the destination from a lot of different sources: you could consult the Dialplan module or Dispatcher module, perform an SQL lookup, consult the usrloc table to find an AoR, etc.
Add its own Via header (more on that later)
Forward the Request (aka Proxy it) to the destination selected
Let’s take a look at a very simple way we can do this with two lines in Kamailio to forward any requests to 192.168.1.110:
####### Routing Logic ########
/* Main SIP request routing logic
* - processing of any incoming SIP request starts with this route
* - note: this is the same as route { ... } */
request_route {
xlog("Received $rm to $ru - Forwarding");
#Forward to new IP
forward("192.168.1.110");
}
After we restart Kamailio and send a call (INVITE) to it, let's see how it handles it:
Let’s make a small modification, we’ll add a header called “X-Proxied” to the request before we forward it.
####### Routing Logic ########
/* Main SIP request routing logic
* - processing of any incoming SIP request starts with this route
* - note: this is the same as route { ... } */
request_route {
xlog("Received $rm to $ru - Forwarding");
append_hf("X-Proxied: You betcha\r\n");
#Forward to new IP
forward("192.168.1.110");
}
On the wire the packets still come from the requester, to the Proxy (Kamailio) before being forwarded to the forward destination (192.168.1.110):
We've now got a basic proxy that takes all requests sent to the proxy address and forwards them to an IP address.
If you're very perceptive you might have picked up on the fact that the in-dialog responses, like the 100 Trying, the 180 Ringing and the 200 OK, also all went through the proxy, but if you look at syslog you'll only see the initial request.
/usr/sbin/kamailio: Received INVITE to sip:[email protected]:5060 - Forwarding
So why didn’t we hit that xlog() route and generate a log entry for the replies?
But before we can talk too much about managing replies, let’s talk about Via…
It’s all about the Via
Before we can answer that question let’s take a look at Via headers.
The SIP Via header is added by a proxy when it forwards a SIP message on to another destination.
When a response is sent the reverse is done, each SIP proxy removes their details from the Via header and forwards to the next Via header along.
SIP Via headers in action
As we can see in the example above, each proxy adds its own address as a Via header before it uses its internal logic to work out where to forward the request to, and then forwards on the INVITE.
Now because all our routing information is stored in Via headers, when we need to route a response back each proxy doesn't need to consult its internal logic to work out where to route it; it can instead just strip its own address out of the Via headers and forward the response to the IP address in the next Via header down the list.
Via headers are also used to detect looping: when a proxy receives a SIP message it can check whether its own IP address is already in a Via header, and if it is, there's a loop.
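The push-and-pop behaviour of the Via stack can be sketched in a few lines of Python. This is just an illustration of the mechanism, not real SIP parsing, and the hostnames are made up:

```python
def proxy_request(vias, own_addr):
    """Forwarding a request: the proxy pushes its own Via on top of the stack."""
    return ["SIP/2.0/UDP " + own_addr] + vias

def route_response(vias, own_addr):
    """Routing a response: strip our own (topmost) Via; the next Via down
    is where the response gets forwarded. No routing logic consulted."""
    remaining = vias[1:]
    next_hop = remaining[0] if remaining else None
    return remaining, next_hop

def is_looping(vias, own_addr):
    """Loop detection: our own address already appears in the Via stack."""
    return any(own_addr in via for via in vias)

# An INVITE from a UA passes through two proxies:
vias = ["SIP/2.0/UDP ua.example.com:5060"]
vias = proxy_request(vias, "proxy1.example.com:5060")
vias = proxy_request(vias, "proxy2.example.com:5060")
print(vias[0])  # topmost Via is the last proxy the request passed through

# The response travels back: proxy2 pops itself off and forwards to proxy1
vias, next_hop = route_response(vias, "proxy2.example.com:5060")
print(next_hop)
```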
Managing Responses in Kamailio
By default Kamailio manages responses by looking at the Via headers: if the top Via header is its own IP address, it strips its own Via header and forwards the response on to the next destination in the Via headers.
We can add our own logic into this by adding a new route called onreply_route{}
onreply_route{
xlog("Got a reply $rs");
append_hf("X-Proxied: For the reply\r\n");
}
Now we’ll create a log entry with the response code in syslog for each response we receive, and we’ll add a header on the replies too:
Recap
A simple proxy to forward INVITEs is easy to implement in Kamailio; the really tricky question is the logic involved in making the routing decision.
Now we’ll put both together to create something functional you could use in your own deployments. (You’d often find it’s faster to use HTable to store and retrieve data like this, but that’s a conversation for another day)
The Project
We’ll build a SIP honeypot using Kamailio. It’ll listen on a Public IP address for SIP connections from people scanning the internet with malicious intent and log their IPs, so our real SIP softswitches know to ignore them.
We’ll use GeoIP2 to lookup the location of the IP and then store that data into a MySQL database.
Lastly we'll create a routing block we can use on another Kamailio instance to verify that the IP address of the received SIP message is not in our blacklist, by searching the MySQL database for the source IP.
The Database
In this example I’m going to create a database called “blacklist” with one table called “baddies”, in MySQL I’ll run:
CREATE database blacklist;
CREATE TABLE `baddies` (
`id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
`ip_address` INT unsigned UNIQUE,
`hits` INT,
`last_seen` DATETIME,
`ua` TEXT,
`country` TEXT,
`city` TEXT
);
I’ll setup a MySQL user to INSERT/UPDATE/SELECT data from the MySQL database.
For storing IP addresses in the database we'll store them as unsigned integers, using the INET_ATON('127.0.0.1') MySQL function to encode them from dotted-decimal format, and INET_NTOA(2130706433) to turn them back into dotted-decimal.
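The same conversion MySQL does can be reproduced with Python's standard library, which is handy for sanity-checking what's in the table:

```python
import ipaddress

def inet_aton(dotted):
    # Equivalent of MySQL's INET_ATON(): dotted-decimal -> unsigned int
    return int(ipaddress.IPv4Address(dotted))

def inet_ntoa(number):
    # Equivalent of MySQL's INET_NTOA(): unsigned int -> dotted-decimal
    return str(ipaddress.IPv4Address(number))

print(inet_aton("127.0.0.1"))  # 2130706433
print(inet_ntoa(2130706433))   # 127.0.0.1
```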
Modparams
Now we'll need to configure Kamailio. I'll continue on from where we left off in the last post on GeoIP2, as we'll use that to add geographic data about each IP, before adding the MySQL and SQLOps modules:
# ----- SQL params -----
loadmodule "db_mysql.so"
loadmodule "sqlops.so"
#Create a new MySQL database connection called blacklist_db
modparam("sqlops","sqlcon","blacklist_db=>mysql://root:yourpassword@localhost/blacklist")
#Set timeouts for MySQL Connections
modparam("db_mysql", "ping_interval", 60)
modparam("db_mysql", "auto_reconnect", 1)
modparam("db_mysql", "timeout_interval", 2)
After loading db_mysql and sqlops we create a new object / connection called blacklist_db with our MySQL Database parameters.
Now after a restart we’ll be connected to our MySQL database.
Honeypot Routing Logic
Now we’ll create a route to log the traffic:
####### Routing Logic ########
/* Main SIP request routing logic
* - processing of any incoming SIP request starts with this route
* - note: this is the same as route { ... } */
request_route {
route(AddToBlacklist);
sl_reply("200", "Sure thing boss!");
}
route[AddToBlacklist]{
xlog("Packet received from IP $si");
sql_xquery("blacklist_db", "insert into baddies (ip_address, hits, last_seen, ua, country, city) values (2130706433, 10, NOW(), 'testua2', 'Australia', 'Hobart');");
}
Now for each SIP message received a new record will be inserted into the database:
root@ip-172-31-8-156:/etc/kamailio# mysql -u root -p blacklist -e "select * from baddies;"
Enter password:
+----+------------+------+---------------------+---------+-----------+--------+
| id | ip_address | hits | last_seen | ua | country | city |
+----+------------+------+---------------------+---------+-----------+--------+
| 1 | 2130706433 | 10 | 2019-08-13 02:52:57 | testua2 | Australia | Hobart |
| 2 | 2130706433 | 10 | 2019-08-13 02:53:01 | testua2 | Australia | Hobart |
| 3 | 2130706433 | 10 | 2019-08-13 02:53:05 | testua2 | Australia | Hobart |
+----+------------+------+---------------------+---------+-----------+--------+
This is great but we’re not actually putting the call variables in here, and we’ve got a lot of duplicates, let’s modify our sql_xquery() to include the call variables:
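Based on the description below, the modified route would look something along these lines (the country and city fields are left out until we add GeoIP2 shortly):

```
route[AddToBlacklist]{
    xlog("Packet received from IP $si");
    #Insert the source IP, User Agent and current time instead of hardcoded values
    sql_xquery("blacklist_db", "insert into baddies (ip_address, hits, last_seen, ua) values (INET_ATON('$si'), 1, NOW(), '$ua');", "r_sql");
}
```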
Now we're setting the IP address value to the source IP pseudovariable ($si) and formatting it using the INET_ATON function in MySQL, setting the last_seen to the current timestamp and setting the user agent to the User Agent pseudovariable ($ua).
Let’s restart Kamailio, truncate the data that’s currently in the DB, send some SIP traffic to it and then check the contents:
mysql -u root -p blacklist -e "select *, INET_NTOA(ip_address) from baddies;"
Here you can see we’re starting to get somewhere, the IP, UA and last_seen values are all now correct.
We're getting multiple entries from the same IP though; instead we just want to increment the hits counter and set the last_seen to the current time. For that we'll update the SQL query to set the time to NOW(), and if that IP is already in the database, update the last_seen value and increment the hits counter:
route[AddToBlacklist]{
xlog("Packet received from IP $si");
geoip2_match("$si", "src");
sql_xquery("blacklist_db", "insert into baddies (ip_address, hits, last_seen, ua, country, city) values (INET_ATON('$si'), 1, NOW(), '$ua', '$gip2(src=>cc)', '$gip2(src=>city)') ON DUPLICATE KEY UPDATE last_seen = NOW(), hits = hits + 1;", "r_sql");
}
The only issue with this is if GeoIP2 doesn’t have a match, no record will be added in the database, so we’ll add a handler for that:
route[AddToBlacklist]{
xlog("Packet received from IP $si");
if(geoip2_match("$si", "src")){
sql_xquery("blacklist_db", "insert into baddies (ip_address, hits, last_seen, ua, country, city) values (INET_ATON('$si'), 1, NOW(), '$ua', '$gip2(src=>cc)', '$gip2(src=>city)') ON DUPLICATE KEY UPDATE last_seen = NOW(), hits = hits + 1;", "r_sql");
}else{ ##If no match in GeoIP2 leave Country & City fields blank
sql_xquery("blacklist_db", "insert into baddies (ip_address, hits, last_seen, ua, country, city) values (INET_ATON('$si'), 1, NOW(), '$ua', '', '') ON DUPLICATE KEY UPDATE last_seen = NOW(), hits = hits + 1;", "r_sql");
}
}
Now let’s check our database again and see how the data looks:
mysql -u root -p blacklist -e "select *, INET_NTOA(ip_address) from baddies;"
Perfect! Now we’re inserting data into our blacklist from our honeypot. Now we’ll configure a new routing block we can use on another Kamailio instance to see if an IP is in the blacklist.
I left this running on my AWS box for a few hours, and lots of dodgy UAs dropped in to say hello, one of which was very insistent on making calls to Poland…
Querying the Data
Now we've got a blacklist, but it's only useful if we block the traffic from the malicious actors we've profiled in the database.
You could feed this into BGP to null route the traffic, or hook this into your firewall’s API, but we’re going to do this in Kamailio, so we’ll create a new routing block we can use on a different Kamailio instance – Like a production one – to see if the IP it just received traffic from is in the blacklist.
We've already spoken about querying databases in the SQLOps Kamailio Bytes post. This routing block will query the blacklist database for the sender's IP; if the sender is in the database one or more records will be returned, so we know they're bad and will drop their traffic:
route[CheckBlacklist]{
xlog("Checking blacklist for ip $si");
#Define a variable containing the SQL query we'll run
$var(sql) = "select ip_address from baddies where ip_address = INET_ATON('" + $si + "');";
#Log the SQL query we're going to run to syslog for easy debugging
xlog("Query to run is $var(sql)");
#Query blacklist_db running the query stored in $var(sql) and store the result of the query to result_sql
sql_query("blacklist_db", "$var(sql)", "result_sql");
#If more than 0 records were returned from the database, drop the traffic
if($dbr(result_sql=>rows)>0){
xlog("This guy is bad news. Dropping traffic from $si");
exit;
}else{
xlog("No criminal record for $si - Allowing to progress");
}
}
This honeypot use case just puts those elements together.
In reality a far better implementation of this would use HTable to store this data, but hopefully this gives you a better understanding of how to actually work with data.
Final Note
I wrote this post about a week ago, and left the config running on an AWS box. I was getting hits to it within the hour, and in the past week I've had 172 IPs come and say hello, and some, like the FriendlyScanner instance at 159.65.220.215, have sent over 93,000 requests:
RTPengine has an API / control protocol called the ng Control Protocol, which is what Kamailio / OpenSER uses to interact with RTPengine.
Communication is based on bencode-encoded data sent over a UDP socket.
I wrote a simple Python script to pull active calls from RTPengine, code below:
#Quick Python script for interfacing with Sipwise's fantastic rtpengine - https://github.com/sipwise/rtpengine
#Bencode library from https://pypi.org/project/bencode.py/ (Had to download files from webpage (PIP was out of date))
import bencode
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server_address = ('188.0.169.13', 2224) #Your server address

#Each ng message is a cookie, a single space, then a bencoded dictionary
cookie = "0_2393_6"
data = bencode.encode({'command': 'list'})
message = cookie.encode('utf-8') + b" " + data
print(message)
sent = sock.sendto(message, server_address)

print('waiting to receive')
data, server = sock.recvfrom(4096)
print('received "%s"' % data)
reply_cookie, payload = data.split(b" ", 1) #Only split on first space
print("Cookie is: " + reply_cookie.decode('utf-8'))
print("Data is: " + str(bencode.decode(payload)))
calls = bencode.decode(payload)['calls']
print("There are " + str(len(calls)) + " calls up on RTPengine at " + str(server_address[0]))
for call_id in calls:
    print(call_id)
    cookie = "1_2393_6"
    data = bencode.encode({'command': 'query', 'call-id': str(call_id)})
    message = cookie.encode('utf-8') + b" " + data
    sent = sock.sendto(message, server_address)
    print('\n\nwaiting to receive')
    data, server = sock.recvfrom(8192)
    reply_cookie, payload = data.split(b" ", 1) #Only split on first space
    bencoded_data = bencode.decode(payload)
    for key in bencoded_data:
        print(key)
        print("\t" + str(bencoded_data[key]))
sock.close()
As anyone who’s setup a private LTE network can generally attest, APNs can be a real headache.
SIM/USIM cards don't store any APN details. In the past you may remember having to plug all these settings into your new phone when you upgraded, so you could get online again.
Today when you insert a USIM belonging to a commercial operator, you generally don’t need to put APN settings in, this is because Android OS has its own index of APNs. When the USIM is inserted into the baseband module, the handset’s OS looks at the MCC & MNC in the IMSI and gets the APN settings automatically from Android’s database of APN details.
There is an option for the network to send the connectivity details to the UE in a special type of SMS, but we won’t go into that.
All this info is stored on the Android OS in apns-full-conf.xml which for non-rooted (stock) devices is not editable.
This file can override the user's APN configuration, which can lead to some really confusing times: your EPC rejects the connection due to an unrecognised APN that isn't what you configured in the UE's operating system, because the handset is instead using the APN details from its own database.
The only way around this is to change the apns-full-conf.xml file, either by modifying it per handset or submitting a patch to the Android Open Source Project with your updated settings.
(I’ve only tried the former with rooted devices)
The XML file itself is fairly self explanatory, taking the MCC and MNC and the APN details for your network:
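An entry for a private network might look something like this (the MCC/MNC of 001/01, the carrier name and the APN name "internet" are placeholders for your own values):

```xml
<apn carrier="Our Private LTE Network"
     mcc="001"
     mnc="01"
     apn="internet"
     type="default,supl"
/>
```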
Once you’ve added yours to the file, inserting the USIM, rebooting the handset or restarting the carrier app is all that’s required for it to be re-read and auto provision APN settings from the XML file.
We’ve touched a tiny bit on basic database functionality in Kamailio, using MySQL to store User Data for authentication, ACLs or Dispatcher entries.
But with all those we were using Databases to load the config / dynamic data for a module.
We’ll build upon that, to connect to a Database that we can INSERT, UPDATE and SELECT data from within the dialplan.
For today's example we'll look up the To address from a SIP INVITE and send back a reply based on what we find in the database.
Heads Up
There’s a lot of different use cases for reading and writing data from a database, but Kamailio also has a lot of native modules that handle this better, for example:
You might want to store a record of each INVITE and BYE you receive for accounting; a better option is to use the Accounting Module in Kamailio.
You might want to authenticate users based on ACLs stored in a database; a better option would be to use the Permissions Module.
User authentication info is best handled by Auth DB module.
The Dialplan module handles number translation really well and is lightning quick.
Just keep this in mind before jumping in that a lot of use cases have already been covered by a Kamailio module.
The Architecture
For today's example we'll be using MySQL as the database backend (db_mysql), but the db_mysql module simply connects us to a database, a bit like ODBC.
The real MVP is the SQLops module, that does all the heavy lifting by running the queries and managing the responses.
The majority of this config would work fine with other database backends, like Postgres, MongoDB, Oracle, etc.
I’ll demonstrate this same setup using different database backends in future posts.
MySQL Database
Before we get too excited we'll need to set up a database to query. I'll create a database called phonebook with a table called contacts, storing names against source numbers.
CREATE DATABASE phonebook;
USE phonebook;
CREATE TABLE contacts (
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
source TEXT,
name TEXT
);
INSERT INTO contacts VALUES (1, '200', 'Nick Deskphone');
I’ll setup a MySQL user to INSERT/UPDATE/SELECT data from the MySQL database the normal way.
Modparam
The module parameters for connecting to a database backend are fairly straight forward, but we’ll go into a bit of depth here to drive home the point.
# ----- SQL params -----
loadmodule "db_mysql.so"
loadmodule "sqlops.so"
#Create a new MySQL database connection called contacts_db
modparam("sqlops","sqlcon","contacts_db=>mysql://root:youshouldntrealluseroot@localhost/phonebook")
#Set timeouts for MySQL Connections
modparam("db_mysql", "ping_interval", 60)
modparam("db_mysql", "auto_reconnect", 1)
modparam("db_mysql", "timeout_interval", 2)
First off we load the two modules we need for this, db_mysql and sqlops. This is fairly self-explanatory; if we were using db_postgres, db_mongodb or even db_text we'd load them instead of db_mysql.
The sqlops “sqlcon” modparam is where we define our MySQL database connection.
In this case we create a new database connection object called contacts_db; we can have connections to multiple databases, hence requiring a name.
The MySQL URL is fairly straightforward, database type, username, password, host and database:
mysql://root:password@localhost/phonebook
In production obviously you shouldn’t use root as the user account to log into the database, and lock it down to only the source IP of your Kamailio instance and with only the permissions it needs. (If it’s just selecting data there’s no need for GRANT…)
Basic Query
Now we’ve created a database connection it’s time to start using it.
request_route {
if(method=="INVITE"){
xlog("New request to $tU");
#Query database object called "contacts_db", run the below query and store the output to a variable called result_sql
sql_query("contacts_db", "select * from contacts;", "result_sql");
#output number of rows in database returned
xlog("number of rows in table is $dbr(result_sql=>rows)\n");
}
}
If the method is an INVITE we’ll query the database object called “contacts_db” to run the query select * from contacts;
We’ll then output the number of rows in the table to xlog.
The query actually happens in the sql_query() command, which takes the name of the database object (contacts_db), the query itself (select * from contacts;) and the name of a variable to store the result in (result_sql).
Finally xlog references the variable we stored our result in (result_sql) using the $dbr() handler to output the number of rows in the table.
If you save this and send an INVITE to any destination and watch the syslog you should see something along the lines of this:
/usr/sbin/kamailio[7815]: ERROR: : New request to 200
/usr/sbin/kamailio[7815]: ERROR: <script>: number of rows in table is 1
This means we’ve got a connection to the database and we can run queries.
Accessing the Output
Now we’ve got the data back from the database and stored it in result_sql we probably want to do something with the outputted data.
By wrapping the result_sql variable in the $dbr() tags we can access its juicy insides; let's take a look:
#output number of columns
xlog("Result has $dbr(result_sql=>cols) Columns");
#output number of rows
xlog("Result has $dbr(result_sql=>rows) rows");
#output contents of row 0, column 2
xlog("Contents of row 0 col 2 is $dbr(result_sql=>[0,2]) ");
#output name of column 2
xlog("name of column 2 is $dbr(result_sql=>colname[2]) ");
If we add this after our last xlog line, restart Kamailio and view syslog it should look something like this:
/usr/sbin/kamailio[8249]: ERROR: <script>: New request to 200
/usr/sbin/kamailio[8249]: ERROR: <script>: number of rows in table is 1
/usr/sbin/kamailio[8249]: ERROR: <script>: Result has 3 Columns
/usr/sbin/kamailio[8249]: ERROR: <script>: Result has 1 rows
/usr/sbin/kamailio[8249]: ERROR: <script>: Contents of row 0 column 2 is Nick Deskphone
Now we can see the data in the result, we'll start to refine this down a bit, beginning by limiting the SQL query to search for the called number.
For this we'll update the sql_query() call to:
sql_query("contacts_db", "select * from contacts where source = '$tU';", "result_sql");
This will include the To URI Username pseudovariable in our query, so it will only return results if the number we dial has one or more matching "source" entries in the database.
If we dial 200 the query that’s actually run against the database will look like this:
select * from contacts where source = '200';
Now once we save and try again our traffic will look the same, except it’ll only return data if we dial 200, if we dial 201 the SQL server won’t have any matching results to return:
/usr/sbin/kamailio[9069]: ERROR: : New request to 201
/usr/sbin/kamailio[9069]: ERROR: number of rows in table is 0
/usr/sbin/kamailio[9069]: ERROR: Result has 0 Columns
So that's all well and good, but we haven't really got at the data yet; while we're outputting the contents of row 0 col 2 to syslog, that won't handle multiple results being returned, or 0 results being returned, so we need a better way to handle this.
We’ll use a for loop to loop through all the results returned and output the second column of each (the “name” field in the database).
#Loop through results
#Create variable i to use as the counter
$var(i) = 0;
#While the contents of row i, position 2, is not null:
while ($dbr(result_sql=>[$var(i),2]) != $null) {
#Output row i, position 2 (name)
xlog("name is $dbr(result_sql=>[$var(i),2])");
#increment i by 1
$var(i) = $var(i) + 1;
}
So while the contents of row i, position 2, is not null, we’ll output the contents and increment i to get the next row in the database until there are none left.
Now we can give our code a bit of a clean up:
request_route {
if(method=="INVITE"){
xlog("New request from $tU");
#Query database object called "contacts_db", run the below query and store the output to a variable called result_sql
sql_query("contacts_db", "select * from contacts where source = '$tU';", "result_sql");
#Loop through results
#Create variable i to use as the counter
$var(i) = 0;
#While the contents of row i, position 2, is not null:
while ($dbr(result_sql=>[$var(i),2]) != $null) {
#Output row i, position 2 (name)
xlog("name $dbr(result_sql=>[$var(i),2])");
#increment i by 1
$var(i) = $var(i) + 1;
}
}
if(method=="REGISTER"){ sl_reply("200", "OK"); }
}
I’ve removed many of our xlog entries we put in for debugging and also added a handler to handle REGISTER requests to keep my IP phone happy.
Now if we make a call to number 200:
/usr/sbin/kamailio[9686]: ERROR: New request from 200
/usr/sbin/kamailio[9686]: ERROR: name Nick Deskphone
And for comparison a call to 201 (no matching database entry):
/usr/sbin/kamailio[9686]: ERROR: New request from 201
Using the Resulting Output
Now we’ve got access to the data from the database let’s do something with it.
Inside our loop we’ll send a reply to the SIP requester, with a 410 “Gone” response with the body containing the data returned from the database:
#Loop through results
#Create variable i to use as the counter
$var(i) = 0;
#While the contents of row i, position 2, is not null:
while ($dbr(result_sql=>[$var(i),2]) != $null) {
#Output row i, position 2 (name)
xlog("name $dbr(result_sql=>[$var(i),2])");
$var(name) = $dbr(result_sql=>[$var(i),2]);
#increment i by 1
$var(i) = $var(i) + 1;
#Reply with a 410 (User Gone) response with the name returned from the database
sl_reply("410", "Sorry $var(name) has gone home");
exit;
}
Now calls to 200 will get the message "Sorry Nick Deskphone has gone home".
Lastly we probably want to only loop through the output if there's at least one row returned from the database, so we'll wrap the looping code in an if statement that checks whether the number of returned rows is 1 or more, and if not, just send a 404 response:
#if one or more results are returned from the database
if($dbr(result_sql=>rows)>0){
#Loop through results
#Create variable i to use as the counter
$var(i) = 0;
#While the contents of row i, position 2, is not null:
while ($dbr(result_sql=>[$var(i),2]) != $null) {
#Output row i, position 2 (name)
xlog("name $dbr(result_sql=>[$var(i),2])");
$var(name) = $dbr(result_sql=>[$var(i),2]);
#increment i by 1
$var(i) = $var(i) + 1;
#Reply with a 410 (User Gone) response with the name returned from the database
sl_reply("410", "Sorry $var(name) has gone home");
exit;
}
}else{
#if 0 results are returned from database
sl_reply("404", "Never heard of them");
}
INSERT, DELETE, UPDATE, etc
Although we only covered SELECT, queries that don't return data, like INSERT, UPDATE and DELETE, can all be run the same way; we just don't need to worry about managing the returned data.
For example we could delete a record using:
sql_query("contacts_db", "delete from contacts where source = '$tU';");
We don’t even need to store the output unless we need to.
Summary
Hopefully you've now got an idea of how to query data from a database and view / manipulate the returned data.
Generally Kamailio functions as a SIP router, receiving SIP messages and then responding with SIP.
Sometimes we may have a use case where we need to interact with Kamailio but with a request that isn’t a SIP message.
You’ve got a motion activated IP Camera, and you want to send an alert to a SIP phone if it detects motion.
The problem? The IP camera doesn't speak SIP, but it can send an HTTP request if it detects motion.
Enter xHTTP!
We’ll get Kamailio to listen for HTTP requests and send an instant message using method “MESSAGE” to a SIP phone to let someone know there’s been motion.
Use Case
The sending of the message is fairly straightforward: we'll use the UAC module to perform a uac_req_send() and put it in its own routing block called "SEND_MESSAGE", which we'll add after the request_route{} block:
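A minimal sketch of such a route might look like this (the phone's address and the From URI here are examples; the full version we end up with is shown further down):

```
route[SEND_MESSAGE]{
    $uac_req(method)="MESSAGE";
    $uac_req(ruri)="sip:10.0.1.5:5060";  #Address of the SIP phone (example)
    $uac_req(furi)="sip:kamailio";       #From URI (example)
    $uac_req(turi)="sip:thisphone";
    $uac_req(body)="Motion detected!";
    uac_req_send();
}
```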
Now when we call route(SEND_MESSAGE); the SIP phone at 10.0.1.5 will get a message that pops up on the screen.
So the next step is getting something other than a SIP request to call our SEND_MESSAGE route.
For this we’ll use the xHTTP module, a barebones HTTP server for Kamailio.
Its requirements are pretty easy: sl_reply needs to be loaded before xHTTP, and you may need to add tcp_accept_no_cl=yes to your core settings (aka Global Parameters at the top).
The two lines we’ll need to load and configure the module are equally as easy:
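Assuming the module defaults, they'd look something like:

```
loadmodule "xhttp.so"
modparam("xhttp", "url_match", "^/sip/")
```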
The url_match modparam just means that any HTTP request has to start with /sip/ in the URL.
Finally we’ll define an event route to handle any xHTTP requests after our request_route{} block:
event_route[xhttp:request] {
xlog("Received HTTP request with request $hu"); #Write to log the URL of http request.
xhttp_reply("200", "OK", "text/html", "<html><body>OK</body></html>"); #Send HTTP response
}
Now if we restart Kamailio and open a browser pointed to the IP of your Kamailio server, on port 5060, with the path /sip/Hello World, you should get a response of "OK" in the browser:
Perfect, we now have an HTTP server in Kamailio, and we can read the HTTP request URL into a variable.
Next up, we can call route(SEND_MESSAGE) and our SIP phone will get a message:
event_route[xhttp:request] {
xlog("Received HTTP request with request $hu"); #Write to log the URL of http request.
xhttp_reply("200", "OK", "text/html", "<html><body>OK</body></html>"); #Send HTTP response
route(SEND_MESSAGE);
}
Presto! When we call that URL (http://your-kamailio-ip:5060/sip/whatever) a SIP SIMPLE MESSAGE is sent.
But why stop there? Let's make this a bit prettier: we'll set the message to equal the part of the HTTP request after the /sip/, so we can send custom data, replace the %20s with underscores and send it:
route[SEND_MESSAGE]{
$uac_req(method)="MESSAGE";
$uac_req(ruri)="sip:192.168.3.227:5060";
$uac_req(furi)="sip:nickvsnetworking.com";
$uac_req(turi)="sip:thisphone";
$uac_req(callid)=$(mb{s.md5});
$uac_req(hdrs)="Subject: Test\r\n";
$uac_req(hdrs)=$uac_req(hdrs) + "Content-Type: text/plain\r\n";
$uac_req(body)=$var(message);
uac_req_send();
}
event_route[xhttp:request] {
xlog("Received HTTP request with request $hu"); #Write to log the URL of http request.
$var(message) = $hu; #Set variable $var(message) to equal the URL of http request.
$var(message) = $(var(message){s.substr,5,0}); #Cut off first 5 chars to exclude the /sip/ prefix from the HTTP request
$var(message) = $(var(message){s.replace,%20,_}); #Replace %20 of space in HTTP request with Underscore _ for spaces
xlog("var message is $var(message)"); #Write to log the http request minus the /sip/ prefix
xhttp_reply("200", "OK", "text/html", "<html><body>OK</body></html>"); #Send HTTP response
route(SEND_MESSAGE); #Call the SEND_MESSAGE route (See above)
}
We'll also set our core settings / global parameters to listen on TCP port 80 as well as UDP port 5060, so we don't need to specify the port in the browser:
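That just means adding listen directives along these lines to the Global Parameters (the bind addresses are examples; you'd normally bind to a specific interface):

```
listen=udp:0.0.0.0:5060
listen=tcp:0.0.0.0:80
```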
Hopefully by now you can see some of the cool things you can do with the HTTP module. Kamailio is so much more than just a SIP router / proxy, and these external modules being able to interface with it give you so many options.
Want to offer webhooks to customers to control their calls? xHTTP can do that!
Want to play a message to all users on calls announcing to them lunch is ready? RTPengine and xHTTP can do that too!
If you’re planning on using this in production you probably want to automate the pulling of this data on a regular basis and keep it in a different directory.
I’ve made a very simple example Kamailio config that shows off some of the features of GeoIP2’s logic and what can be shown, so let’s look at the basics of the module:
if(geoip2_match("$si", "src")){
xlog("Packet received from IP $si");
xlog("Country is: $gip2(src=>cc)\n");
}
If we put this at the top of our request_route block, every time we receive a new request we can see the country the packet came from.
Let’s take a look at the output of syslog (with my IP removed):
#> tail -f /var/log/syslog
ERROR: <script>: Packet received from IP 203.###.###.###
ERROR: <script>: Country is: AU
ERROR: <script>: City is: Melbourne
ERROR: <script>: ZIP is: 3004
ERROR: <script>: Regc is: VIC
ERROR: <script>: Regn is: Victoria
ERROR: <script>: Metro Code is: <null>
We can add a bunch more smarts to this and get back a bunch more variables, including city, ZIP code, Lat & Long (Approx), timezone, etc.
if(geoip2_match("$si", "src")){
xlog("Packet received from IP $si");
xlog("Country is: $gip2(src=>cc)\n");
xlog("City is: $gip2(src=>city)");
xlog("ZIP is: $gip2(src=>zip)");
xlog("Regc is: $gip2(src=>regc)");
xlog("Regn is: $gip2(src=>regn)");
xlog("Metro Code is: $gip2(src=>metro)");
if($gip2(src=>cc)=="AU"){
xlog("Traffic is from Australia");
}
}else{
xlog("No GeoIP Match for $si");
}
#> tail -f /var/log/syslog
ERROR: <script>: Packet received from IP ###.###.###.###
ERROR: <script>: Country is: AU
ERROR: <script>: City is: Melbourne
ERROR: <script>: ZIP is: 3004
ERROR: <script>: Regc is: VIC
ERROR: <script>: Regn is: Victoria
ERROR: <script>: Metro Code is: <null>
Using GeoIP2 you could apply different rate limits for domestic users vs overseas users, guess the dialling rules based on the location of the caller, or generate alerts when accounts are used outside their standard areas.
We’ll touch upon this again in our next post on RTPengine, where we’ll use the RTPengine instance closest to the area from which the traffic originates.
Diameter is used extensively in 3GPP networks (Especially LTE) to provide the AAA services.
The Diameter protocol is great, and I’ve sung its praises before, but one issue operators start to face is that there are a lot of Diameter peers, each of which needs a connection to the other Diameter peers.
This diagram is an “Overview” showing one of each network element – In reality almost all network elements will exist more than once for redundancy and scalability.
What you end up with is a rat’s nest of connections: lines drawn everywhere, lots of manual work, and plenty of room for human error when it comes to setting up the Diameter peer relationships.
Let’s say you’ve got 5x MME, 5x PCRF, 2x HSS, 5x S-SCSF and 5x Packet Gateways, each needing Diameter peer relationships setup, it starts to get really messy really quickly.
Enter the Diameter Routing Agent – DRA.
Now each device only needs a connection to the DRA, which in turn has a connection to each Diameter peer. Adding a new MME doesn’t mean you need to reconfigure your HSS, just connect the MME to the DRA and away you go.
I’ll cover using Kamailio to act as a Diameter routing agent in a future post.
ENUM was going to change telephone routing. No longer would you need to pay a carrier to take your calls across the PSTN, but rather through the use of DNS your handset would look up a destination and route in a peer to peer fashion.
Number porting would just be a matter of updating NAPTR records, almost all calls would be free as there’s no way/need to charge and media would flow directly from the calling party to the called party.
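As a quick illustration of how this lookup works: the digits of an E.164 number are reversed, dotted, and suffixed with e164.arpa (so +61 numbers fall under the 1.6.e164.arpa zone), and a NAPTR query then returns the SIP URI. The number and domain below are made up:

```
# +61 3 9999 9999 -> reverse the digits and query:
$ dig NAPTR 9.9.9.9.9.9.9.9.3.1.6.e164.arpa

;; ANSWER SECTION:
9.9.9.9.9.9.9.9.3.1.6.e164.arpa. 3600 IN NAPTR 100 10 "u" "E2U+sip" "!^.*$!sip:+61399999999@example.com!" .
```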
In 2005 ACMA became the Tier 1 provider from RIPE for the Australian ENUM zone 1.6.e164.arpa
A trial was run and Tier 2 providers were sought to administer the system and to verify ownership of services before adding NAPTR records for individual services and referral records for ranges / delegation.
In 2007 the trial ended, with only two CSPs having signed up and half a dozen test calls made between them.
Now, over a decade later, as we prepare for the ISDN switch-off, with the NBN rollout almost finished and the Comms Alliance porting specs as rigid as ever, it might be time to look again at ENUM in Australia…
I recently started working on an issue that appeared to be related to the HSS response to the MME in an Update Location Answer.
I took some Wireshark traces of a connection from the MME to the HSS, and compared that to a trace from a different HSS. (Amarisoft EPC/HSS)
The Update Location Answer sent by the Amarisoft HSS to the MME over the S6a (Diameter) interface includes an AVP for “Multiple APN Configuration” which has the dedicated bearer for IMS, while the HSS in the software I was working on didn’t.
After a bit of bashing trying to modify the S6a responses, I decided I’d just implement my own Home Subscriber Server.
I’m a big fan of RTPengine, and I’ve written a bit about it in the past.
Let’s say we’re building an Australia wide VoIP network. It’s a big country with a lot of nothing in the middle. We’ve got a POP in each of Australia’s capital cities, and two core softswitch clusters, one in Melbourne and one in Sydney.
These two cores will work fine, but a call from a customer in Perth, WA to another customer in Perth, WA would mean their RTP stream has to go across your inter-capital links to Sydney or Melbourne, only to route back to Perth.
That’s 3,500Km each way, which is going to lead to higher latency, wasted bandwidth and decreased customer experience.
What if we could have an RTPengine instance in our Perth POP, handling RTP proxying for our Perth customers? Another in Brisbane, Canberra etc, all while keeping our complex expensive core signalling in just the two locations?
RTPengine to the rescue!
Preparing our RTPEngine Instances
In each of our POPs we’ll spin up a box with RTPengine,
The only thing we’d do differently is set the listen-ng value to be 0.0.0.0:2223 and the interface to be the IP of the box.
By setting the listen-ng value to 0.0.0.0:2223, RTPengine’s management port will be bound on any IP, so we can manage it remotely via its ng-control protocol using the rtpengine Kamailio module.
Naturally you’d limit access to port 2223 only to allowed devices inside your network.
Next we’ll need to add the details of each of our RTPengine instances to MySQL. I’ve used a different setid for each of the RTPengines, choosing the first digit of the postcode for that state (WA’s postcodes are in the format 6xxx, while NSW postcodes look like 2xxx); we’ll use this later when we select which RTPengine instance to use.
I’ve also added localhost with a setid of 0; we’ll use this as our fallback route if the traffic isn’t coming from Australia.
Bingo, we’re connected to three RTPengine instances.
Next up we’ll use the GeoIP2 module to determine the source of the traffic and route to the correct RTPengine instance. I’ve touched upon the GeoIP2 module’s basic usage in the past, so if you’re not already familiar with it, read up on its usage and we’ll build upon that.
We’ll load GeoIP2 and run some checks in the initial request_route{} block to select the correct RTPengine instance:
if(geoip2_match("$si", "src")){
if($gip2(src=>cc)=="AU"){
$var(zip) = $gip2(src=>zip);
$avp(setid) = $(var(zip){s.substr,0,1});
xlog("rtpengine setID is $avp(setid)");
}else{
xlog("GeoIP not in Australia - Using default RTPengine instance");
set_rtpengine_set("0");
}
}else{
xlog("No GeoIP Match - Using default RTPengine instance");
set_rtpengine_set("0");
}
In the above example if we have a match on source, and the Country code is Australia, the first digit of the ZIP / Postcode is extracted and assigned to the AVP “setid” so RTPengine knows which set ID to use.
In practice an INVITE from an IP in WA returns setID 6, and uses our RTPengine in WA, while one from NSW returns 2 and uses one in NSW. In production we’d need to setup rules for all the other states / territories, and generally have more than one RTPengine instance in each location (we can have multiple instances with the same setid).
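The selection logic above boils down to a tiny function; here's a sketch of the same mapping in Python (the function name and the fallback setid of 0 are just illustrative, following the config above):

```python
def rtpengine_setid(country_code: str, zipcode: str) -> int:
    """Pick an RTPengine set: the first digit of an Australian postcode,
    or set 0 as the fallback for non-AU (or unknown) traffic."""
    if country_code == "AU" and zipcode and zipcode[0].isdigit():
        return int(zipcode[0])
    return 0

print(rtpengine_setid("AU", "6000"))   # 6 -> the WA instance
print(rtpengine_setid("US", "90210"))  # 0 -> the default instance
```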
Hopefully you’re starting to get an idea of the fun and relatively painless things you can achieve with RTPengine and Kamailio!
Forsk Atoll is software for wireless network planning, simulation and optimization.
Atoll can do some amazingly powerful things, especially when you start feeding real world data and results back into it, but for today we’ll be touching upon the basics.
As I’m learning it myself I thought I’d write up a basic tutorial on setting up the environment, importing some data, adding some sites and transmitters to your network and then simulating it.
We’ll be using Christmas Island, a small island in the Indian Ocean that’s part of Australia, as its size makes the example easy and the files small.
The Environment (Geographic Data)
The more data we can feed into Atoll the more accurate the predictions that come out of it.
Factors like terrain, obstructions, population density, land usage (residential, agricultural, etc) will all need to be modeled to produce accurate results, so getting your geographic data correct is imperative.
Starting a new Document
We’ll start by creating a new document:
We’ll simulate an LTE network, so we’ll create it using the LTE project template.
Coordinate Reference
Before we can get to that we’re going to have to tell Atoll where we are and what datum we’re working in.
The data sets we’re working with were provided by the Government, who use the Australian Geodetic Datum, and Christmas Island is in Zone 48.
We’ll select Document -> Properties
We’ll set the projection first.
Once that’s set we’ll set our display coordinates; this is what we’ll actually work in.
I’m using WGS 84 in the -xx.xxxxxx format, aka Lat & Long in decimal format.
Elevation
Elevation data is hugely important when network planning, your point-to-point links need LOS, and if your modeling / simulation doesn’t know there’s a hill or obstruction between the two sites, it’s not going to work.
There’s plenty of online sources for this data, some of which is paid, but others are provided free by Government agencies.
In this case the Digital Elevation Model data for Christmas Island can be downloaded from Geoscience Australia.
We’ll download the 5m DEM GDA94 UTM zone 48 Christmas Island.
The real reason I picked Christmas Island is that its DEM data is 16MB instead of many gigabytes and I didn’t want to wait for the download…
After a lot of messing around I found I couldn’t import the multi layered TIF provided by Geo Science Australia, Atoll gave me this error:
I found I could open the TIFF-formatted DEM files in a package called VTBuilder, export them as a PNG and then import that into Atoll.
Using VTBuilder to convert DEMs in TIFF to PNG for importing into Atoll
To save some steps I’ve attached a copy of the converted file here.
You can then import the files straight into Atoll,
We’ll need to define what this dataset is; in our case our Digital Elevation Models (aka Digital Terrain Models) contain altitude information, so we’ll select Altitude (DTM).
We know from the metadata on the Geoscience Australia site we got the files from that the resolution is 5m, so we’ll set the pixel size to 5m (each pixel represents 5 metres).
We’ll also need to set the geographic coordinates of the dataset, the west and north edges in UTM Zone 48. The values are:
West
557999.9999999991
North
8849000
All going well you should see the imported topography showing up in Atoll.
I’ve noticed some weirdness when zoomed out on the version I’m on: if you zoom in to more than 1:10,000 you should see the terrain data. I’m not sure why this is, but I’ve attached a copy of my Atoll config so far in case you get stuck with this.
We’ll download real world sites from the ACMA’s database,
I’ll use the cheat way by just looking it up on their map and exporting the data.
We’ll download the CSV file from the Map.
One thing we’ll need to change in the CSV: when no altitude is set for a site, ACMA puts “undefined”, which Atoll won’t be able to parse, so I’ve just opened it up in N++ and replaced undefined with 0.
I’ve attached a copy here for you to import / skip this step. Mastering messing with CSV is a super useful skill to have anyways, but that’s a topic for another day.
Next we’ll import the sites into Atoll. To define our sites, we’ll jump to the Network tab and double-click on Sites.
Now we’ll import our CSV file
Next we’ll need to define the fields for the import
All going well you’ll now have a populated site list.
Now if we go back to view we should see these points plotted.
Clutter
Forested areas, large bodies of water, urban sprawl, farmland, etc, all have different characteristics and will cause different interference patterns, refraction, shadow fading, etc.
Clutter Data is the classification of land use or land cover which impacts on RF propagation.
However this dataset doesn’t include Christmas Island. Really shot myself in the foot there, huh?
For examples’ sake we’ll import the terrain data again as clutter.
We’d normally define terrain classes, for example, this area is residential low rise etc, but as we don’t have areas set out we’ll skip that for now.
You can set different layer visibility by enabling and disabling layers in the Geo tab, in this case I’ve disabled my Digital Terrain Model layer and just left the Clutter Heights we just imported.
I got hit with the same zoom bug here (not sure if it’s still loading in the background or something): the clutter data is only visible when zoomed to 1:10,000 or more, but after doing so you should see it:
So now we’ve got our environment set up, we can start to add some cell sites and model the propagation and expected signal levels throughout the island in the next post.
The History-Info extension defined in RFC 7044 provides a way for an INVITE to include information about where the session (call) has been before.
For example a call may be made to a desk phone, which is forwarded (302) to a home phone. The History Info extension would add a History Info header to the INVITE to the home phone, denoting the call had come to it via the desk phone.
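For illustration, here's a sketch of what those headers might look like on the INVITE that arrives at the home phone (the URIs are made up; the escaped Reason parameter records the 302 redirect):

```
INVITE sip:home@example.com SIP/2.0
History-Info: <sip:desk@example.com?Reason=SIP%3Bcause%3D302>;index=1
History-Info: <sip:home@example.com>;index=1.1
```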
Each Diameter packet has at a minimum the following headers:
Version
This 1 byte field is always (as of 2019) 0x01 (1)
Length
3 bytes containing the total length of the Diameter packet and all its contained AVPs.
This allows the receiver to know when the packet has ended: by comparing the length against the bytes received so far, it knows where that packet ends.
Flags
Flags allow particular parameters to be set, defining options for how the packet is to be handled by setting one of the 8 bits in the flags byte, for example Request, Proxyable, Error, and Potentially Re-transmitted Message.
Command Code
Each Diameter packet has a 3 byte command code that defines the method of the request.
The IETF have defined the basic command codes in the Diameter Base Protocol RFC, but many vendors have defined their own command codes, and users are free to create and define their own, and even register them for public use.
To allow vendors to define their own command codes, each command code is accompanied by an Application ID. For example, command code 257 in the base Diameter protocol translates to Capabilities Exchange Request, used to specify the capabilities of each Diameter peer; but 257 is only a Capabilities Exchange Request if the Application ID is set to 0 (Diameter Base Protocol).
If we start developing our own applications, we would start with getting an Application ID, and then could define our own command codes. So 257 with Application ID 0 is Capabilities Exchange Request, but command code 257 with Application ID 1234 could be a totally different request.
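One way to picture this is as a lookup keyed on the (Application ID, Command Code) pair; a hypothetical sketch in Python (the table entries for the base protocol and 3GPP S6a are real assignments, the dispatch itself is purely illustrative):

```python
# The meaning of a command depends on (Application-ID, Command-Code) together.
COMMANDS = {
    (0, 257): "Capabilities-Exchange",   # Diameter Base Protocol
    (0, 280): "Device-Watchdog",         # Diameter Base Protocol
    (16777251, 316): "Update-Location",  # 3GPP S6a
}

def command_name(app_id: int, code: int) -> str:
    return COMMANDS.get((app_id, code), "Unknown")

print(command_name(0, 257))         # Capabilities-Exchange
print(command_name(16777251, 316))  # Update-Location
```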
Hop-By-Hop Identifier
The Hop-by-Hop Identifier is a unique identifier that helps stateful Diameter proxies route messages to and fro. A Diameter proxy records the source address and Hop-by-Hop Identifier of a received packet, replaces the Hop-by-Hop Identifier with a new one it assigns, and keeps a mapping between the new identifier and the original identifier and source, so that answers can be routed back.
End-to-End Identifier
Unlike the Hop-by-Hop Identifier, the End-to-End Identifier does not change and must not be modified; it’s used, along with the Origin-Host AVP, to detect duplicate messages.
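Putting the header fields together, here's a minimal sketch in Python that packs and unpacks the fixed 20-byte Diameter header described above (field layout per RFC 6733; the helper names are my own):

```python
import struct

HDR_LEN = 20  # fixed Diameter header size in bytes

def pack_header(flags: int, code: int, app_id: int,
                hop_by_hop: int, end_to_end: int, msg_len: int) -> bytes:
    """Version(1 byte) + Length(3 bytes) | Flags(1 byte) + Command-Code(3 bytes) |
    then three 32-bit words: Application-ID, Hop-by-Hop, End-to-End.
    All fields are network byte order."""
    return struct.pack("!IIIII",
                       (1 << 24) | msg_len,   # Version is always 0x01
                       (flags << 24) | code,
                       app_id, hop_by_hop, end_to_end)

def unpack_header(data: bytes) -> dict:
    w1, w2, app_id, hbh, e2e = struct.unpack("!IIIII", data[:HDR_LEN])
    return {"version": w1 >> 24, "length": w1 & 0xFFFFFF,
            "flags": w2 >> 24, "code": w2 & 0xFFFFFF,
            "app_id": app_id, "hop_by_hop": hbh, "end_to_end": e2e}

# A Capabilities-Exchange-Request: Request flag (0x80), command code 257, app 0
hdr = pack_header(flags=0x80, code=257, app_id=0,
                  hop_by_hop=0x1234, end_to_end=0x5678, msg_len=HDR_LEN)
print(unpack_header(hdr)["code"])  # 257
```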
AVPs
The real power of Diameter comes from AVPs. The base protocol defines how to structure a Diameter packet, but can’t convey any specific data or requests; that information goes inside our Attribute Value Pairs.
Let’s take a look at a simple Diameter request, it’s got all the boilerplate headers we talked about, and contains an AVP with the username.
Here we can see we’ve got an AVP with AVP Code 1, containing a username
Let’s break this down a bit more.
AVP Codes are very similar to the Diameter Command Codes/ApplicationIDs we just talked about.
Combined with an AVP Vendor ID they define the information type of the AVP, some examples would be Username, Session-ID, Destination Realm, Authentication-Info, Result Code, etc.
AVP Flags, like the Diameter flags, are made up of a series of bits denoting whether a parameter is set. At this stage only the first two bits are used: the first is Vendor-Specific, which defines whether the AVP Code is specific to an AVP Vendor ID, and the second is Mandatory, which specifies that the receiver must be able to interpret this AVP or reject the entire Diameter request.
AVP Length defines the length of the AVP; like the Diameter length field, it’s used to delineate the end of one AVP.
AVP Vendor ID
If the AVP Vendor Specific flag is set this optional field specifies the vendor ID of the AVP Code used.
AVP Data
The payload containing the actual AVP data: this could be a username (as in this example), a session ID, a domain, or any other value the vendor defines.
AVP Padding
AVPs have to end on a 32-bit boundary, so padding bytes are added to the end of an AVP if required to reach the next 32-bit boundary.
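The padding calculation is just rounding the AVP length up to the next multiple of 4 bytes; a one-line sketch in Python:

```python
def padded_length(avp_length: int) -> int:
    """Round an AVP's length up to the next multiple of 4 bytes (32 bits)."""
    return (avp_length + 3) & ~3

print(padded_length(13))  # 16 -> 3 padding bytes are appended
print(padded_length(16))  # 16 -> already aligned, no padding needed
```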
3GPP selected the Diameter protocol to take care of Authentication, Authorization, and Accounting (AAA).
It’s typically used to authenticate users on a network, authorize them to use services they’re allowed to use and account for how much of the services they used.
In an EPC scenario the Authentication function takes the form of verifying that the subscriber is valid and knows the K & OP/OPc keys for their specific IMSI.
The Authorization function checks to find out which features, APNs, QCI values and services the subscriber is allowed to use.
The Accounting function records a subscriber’s session usage, for example minutes of talk time, megabytes of data transferred, etc.
Diameter Packets are pretty simple in structure, there’s the packet itself, containing the basic information in the headers you’d expect, and then a series of one or more Attribute Value Pairs or “AVPs”.
These AVPs are exactly as they sound, there’s an attribute name, for example username, and a value, for example, “Nick”.
This could just as easily be for ordering food; we could send a Diameter packet with an imaginary command code for Food Order Request, containing a series of AVPs describing what we want. The AVPs could be like Food: Hawaiian Pizza, Food: Garlic Bread, Drink: Milkshake, Address: MyHouse. The Diameter server could then verify we’re allowed to order this food (Authorization), charge us for the food (Accounting), and send back a Food Order Response containing a series of AVPs such as Delivery Time: 30 minutes, Price: $30.00, etc.
Diameter packets generally take the form of a request – response, for example a Capabilities Exchange Request contains a series of AVPs denoting the features supported by the requester, which is sent to a Diameter peer. The Diameter peer then sends back a Capabilities Exchange Response, containing a series of AVPs denoting the features that it supports.
Diameter is designed to be extensible, allowing vendors to define their own type of AVP and Diameter requests/responses and 3GPP have defined their own types of messages (Diameter Command Codes) and types of data to be transferred (AVP Codes).
LTE/EPC relies on Diameter and the 3GPP/ETSI defined AVPs and Diameter requests/responses to form the S6a interface between an MME and an HSS, the Gx interface between the PCEF and the PCRF, the Cx interface between the HSS and the CSCF, and many more interfaces used for authentication in 3GPP networks.
ARP in LTE is not the Ethernet standard for address resolution, but rather the Allocation and Retention Policy.
A scenario may arise where on a congested cell another bearer is requested to be setup.
The P-GW, S-GW or eNB have to make a decision to either drop an existing bearer, or to refuse the request to setup a new bearer.
The ARP value is used to determine the priority of the bearer to be established compared to the others.
For example, a call to an emergency services number on a congested cell should take precedence over any other bearers so the call can be made. The request for the bearer for the VoLTE call would therefore carry a higher-priority ARP than the other bearers, and the P-GW, S-GW or eNB would drop an existing bearer with a lower-priority ARP to accommodate the new bearer.
ARP is only used when setting up a new bearer, not to determine how much priority is given to that bearer once it’s established (that’s defined by the QCI).
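A toy sketch of that admission decision in Python. Note that in the 3GPP encoding the ARP priority-level field runs from 1 to 15 with 1 as the highest priority, which is the convention used here; the function is purely illustrative:

```python
def admit_bearer(new_priority: int, existing: list,
                 cell_congested: bool):
    """Decide whether to admit a new bearer on a (possibly congested) cell.
    Priority levels follow the 3GPP ARP encoding: 1 is highest, 15 lowest.
    Returns (admitted, priority_of_bearer_to_drop_or_None)."""
    if not cell_congested:
        return True, None
    worst = max(existing)        # numerically largest = lowest priority
    if new_priority < worst:     # the new bearer outranks the worst one
        return True, worst       # admit it, drop the worst existing bearer
    return False, None           # refuse the new bearer

# An emergency call (priority 1) preempts a best-effort bearer (priority 9)
print(admit_bearer(1, [6, 9, 8], cell_congested=True))  # (True, 9)
```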
MBR stands for Maximum Bit Rate, and it defines the maximum rate traffic can flow between a UE and the network.
It can be defined on several levels:
MBR per Bearer
This is the maximum bit rate per bearer. The rate can be exceeded, but any traffic peaking above the MBR loses its QoS (QCI) guarantees and is treated as best-effort.
AMBR
Aggregate Maximum Bit Rate – Maximum bit rate of all Service Data Flows / Bearers to and from the network from a single UE.
APN-MBR
The APN-MBR allows the operator to set a maximum bit rate per APN; for example, an operator may choose to limit the MBR for subscribers on an MVNO’s APN to give its direct customers a higher speed.
The QCI (QoS Class Identifier) is a value from 1 to 9 that denotes the service type and the maximum delay, packet loss and throughput the service requires.
Different data flows have different service requirements, let’s look at some examples:
A VoLTE call requires low latency and low packet loss, without low latency it’ll be impossible to hold a conversation with long delays, and with high packet loss you won’t be able to hear each other.
On the other hand, an HTTP (web) browsing session is fairly tolerant of high latency or packet loss: the only perceived change would be slightly longer page load times, as lost packets are resent and a few hundred ms of delay is added to the page load.
So now we understand the different requirements of data flows, let’s look at the columns in the table above so we can understand what they actually signify:
GBR
Guaranteed Bit Rate means our eNB reserves resource blocks to carry this data no matter what; it’ll have those resource blocks ready to transport the data.
Even if no data is flowing, a GBR bearer means the resources stay reserved regardless of whether anything is going through them.
This means those resource blocks can’t be shared by other users on the network. The Uu interface in the E-UTRAN is shared between UEs in time and frequency, but with GBR bearers parts of this can be reserved exclusively for use by that traffic.
Non-GBR
With a Non-GBR bearer this means there is no guaranteed bit rate, and it’s just best effort.
Non-GBR traffic is scheduled onto resource blocks when they’re not in use by other non-GBR traffic or by GBR traffic.
Priority
The priority value is used for pre-emption by the PCRF.
The lower the value the more quickly it’ll be processed and scheduled onto the Uu interface.
Packet Delay Budget
Maximum allowable packet delay as measured from P-GW to UE.
Most of the budget relates to the over-the-air scheduling delays.
The eNB uses the QCI information to make its scheduling decisions and packet prioritisation to ensure that the QoS requirements are met on a per-EPS-bearer basis.
(20ms is typically subtracted from this value to account for the radio propagation delay on the Uu interface)
Packet Error Loss Rate (PELR)
This is the rate of packets lost on the Uu interface: packets that have been sent but never confirmed as received.
The PELR is an upper boundary on how high this can go, based on the SDUs (IP packets) that have been processed by the sender at the RLC layer but not delivered up to the next layer (PDCP) by the receiver.
(Any traffic bursting above the GBR is not counted toward the PELR)