
Generating Blog Content with ChatGPT

So, ChatGPT has been released… and I am fairly excited to see how it can be incorporated into our tooling as network engineers. Maybe you haven’t heard of it?

I haven’t blogged here in a while, but ChatGPT is good… like really good. It can do all sorts of useful things, such as write code, generate content and even write music:

Write a song about a lonely Wi-Fi access point with lots of CCI

Once upon a lonely street, There sat a Wi-Fi access point, With lots of CCI, It longed for a connection to keep.

Chorus: Oh, lonely Wi-Fi access point, With all your CCI, Why must you be so alone, With no one to connect to thee?

It sat and waited all day, For a device to come its way, But none would connect, And the access point’s heart began to break.

Chorus: Oh, lonely Wi-Fi access point, With all your CCI, Why must you be so alone, With no one to connect to thee?

It tried to broadcast its signal strong, But still, no one came along, It was doomed to be alone, A lonely Wi-Fi access point.

Chorus: Oh, lonely Wi-Fi access point, With all your CCI, Why must you be so alone, With no one to connect to thee?

It can even write code…

Write a simple python 3 script to check my website beaconsandwich.co.uk is up

Here is a simple Python 3 script that can check whether the website beaconsandwich.co.uk is up:

import requests

def check_website():
    url = "https://beaconsandwich.co.uk"
    try:
        response = requests.get(url)
        if response.status_code == 200:
            print("Website is up and running")
        else:
            print("Website is not responding")
    except requests.ConnectionError:
        print("Failed to connect to website")

check_website()

To use this script, save it to a file and run it using Python 3. The script will check the website and print a message indicating whether it is up and running or not. Note that this script will only check the website’s availability and not its content, so even if the website is up and running, it may not be displaying the expected content.
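ChatGPT’s own caveat is worth acting on: the script never checks the page content, and it also has no request timeout. Here is a sketch of a slightly hardened variant – standard library only rather than requests, with a hypothetical expected_text check; these are my assumptions, not ChatGPT’s output:

```python
from urllib.request import urlopen
from urllib.error import URLError

def classify(status, body, expected_text=None):
    """Turn an HTTP status and page body into a human-readable verdict."""
    if status != 200:
        return "Website is not responding (HTTP %d)" % status
    if expected_text is not None and expected_text not in body:
        return "Website is up, but expected content is missing"
    return "Website is up and running"

def check_website(url, expected_text=None, timeout=5):
    """Fetch the URL with a timeout and classify the result."""
    try:
        with urlopen(url, timeout=timeout) as response:
            body = response.read().decode("utf-8", errors="replace")
            return classify(response.status, body, expected_text)
    except URLError:
        return "Failed to connect to website"
```

Calling `check_website("https://beaconsandwich.co.uk", expected_text="some known phrase")` would then flag a page that responds with 200 but serves the wrong content.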

Anyway, I was thinking: I haven’t updated this blog in a while, so could I just do a bit of mild plagiarism to generate said content? Then the question of ownership came up – something not covered in the FAQs, and something the bot won’t answer – so it felt like rocky ground. Instead I thought I’d provide a couple of relevant examples.

Question 1:
Can you write a blog on the benefits of software engineering for a Wi-Fi professional, including relevant tools and qualifications

Response:
As a wifi professional, learning software engineering can be incredibly beneficial for a number of reasons. First and foremost, having a strong understanding of software engineering principles can help you design and implement more efficient and effective wifi networks. By knowing how to code and build software, you’ll be able to automate tasks and processes, allowing you to work more efficiently and effectively.

In addition to improving your day-to-day work, learning software engineering can also open up new career opportunities. As the world becomes increasingly digital, there is a growing demand for skilled software engineers who can design and build the technology that powers our modern world. With a background in software engineering, you’ll be well-positioned to take advantage of these new opportunities and potentially earn a higher salary.

Another benefit of learning software engineering is that it can help you better understand the technology that you work with on a daily basis. As a wifi professional, you likely deal with a variety of hardware and software systems, and having a strong understanding of how these systems work can help you troubleshoot and solve problems more effectively. Additionally, understanding software engineering principles can also help you communicate more effectively with other members of your team, as well as with your clients and customers.

Furthermore, learning software engineering can also be a great way to stay up-to-date with the latest developments in the field. The technology landscape is constantly evolving, and staying current with the latest trends and best practices in software engineering can help ensure that you are providing the best possible service to your clients and customers.

Some relevant software engineering skills and tools for wifi professionals include:

  • Network programming and networking protocols: As a wifi professional, you’ll need to be familiar with network programming languages like C and Python, as well as networking protocols like TCP/IP, HTTP, and UDP.
  • Wireless security: Security is a critical aspect of wifi networks, so it’s important for wifi professionals to be knowledgeable about wireless security protocols and tools like WPA2, HTTPS, and SSL/TLS.
  • Network simulation and testing: In order to design and implement effective wifi networks, you’ll need to be able to simulate and test network behavior using tools like Wireshark, Mininet, and GNS3.
  • Cloud computing: As more and more organizations move their infrastructure to the cloud, wifi professionals need to be familiar with cloud computing platforms like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform.
  • DevOps: In order to effectively manage and maintain wifi networks, it’s important to have a strong understanding of DevOps principles and tools like Git, Jenkins, and Ansible.

By having a strong understanding of these skills and tools, wifi professionals can design and implement more effective and efficient wifi networks, and stay current with the latest developments in the field.

Overall, learning software engineering can provide a range of benefits for wifi professionals. From improving your day-to-day work to opening up new career opportunities and staying current with the latest trends, there are many reasons why it can be valuable to learn this skill. Whether you decide to take an online course or enroll in a formal education program, investing in software engineering knowledge can be a smart move for anyone looking to advance their career in the wifi industry.

Question 2:
Write a blog christmas decoration based wi-fi interference

Response:
As the holiday season approaches, many of us are getting ready to deck the halls with festive decorations. But while twinkling lights and mistletoe may add to the holiday cheer, they can also cause interference with your wifi signal.

One of the most common culprits of wifi interference during the holiday season is the Christmas tree. The combination of metal and water in the tree, along with the hundreds of twinkling lights, can create interference that disrupts your wifi signal. To avoid this issue, it’s a good idea to keep your wifi router as far away from the tree as possible. Additionally, try to avoid placing the tree near windows or doors, as these areas can also cause interference.

Another common source of wifi interference during the holiday season is wireless speakers. Many of us like to play holiday music throughout the house, and wireless speakers are a convenient way to do this. However, the wireless signals used by these speakers can interfere with your wifi signal, causing slowdowns and dropped connections. To avoid this issue, try to keep your wireless speakers as far away from your wifi router as possible, and consider using wired speakers instead.

Tinsel can also cause wifi interference, as the metal in the tinsel can create interference that disrupts your wifi signal. To avoid this issue, it’s a good idea to keep your wifi router as far away from tinsel as possible. Additionally, try to avoid placing tinsel near windows or doors, as these areas can also cause interference. If you’re using tinsel on your Christmas tree, try to keep it towards the outer edges of the tree, away from the trunk and branches where your wifi router is likely to be located.

Finally, holiday decorations like string lights and garlands can also cause wifi interference. The metal in the wires and the electrical currents running through them can create interference that disrupts your wifi signal. To avoid this issue, try to keep these decorations as far away from your wifi router as possible. Additionally, consider using LED lights, as they produce less interference than traditional incandescent bulbs.

In conclusion, while holiday decorations can add to the festive atmosphere, they can also cause interference with your wifi signal. By keeping your wifi router away from metal objects, avoiding wireless speakers, and using LED lights, you can minimize interference and ensure that your wifi stays strong and stable during the holiday season. Happy holidays!

Conclusion
Are we out of a job? Well, me perhaps. The answers, whilst high level enough for a mainstream tech article and a mainstream audience, don’t give the level of detail you all desire – and look at me, writing a blog about writing blogs without actually writing a blog. Job done, thanks AI.
It is useful for generating quick and easy code though, and as mentioned at the top, it’s a tool you should all add to your belt – given it can give fairly detailed responses it may sometimes be a quicker alternative to googling, and it’s probably a glimpse into the search engines of the future.

Shut Your Pi-hole


Recently, within the wireless community there has been a lot of noise around the ad-blocking project pi-hole – and why not? It’s a lightweight DNS and DHCP server which blackholes ads – it’s highly customisable and will run on pretty much anything that runs linux (or in a docker container).

I’ve personally used pi-hole for a number of years, and thought it would be a good idea to show some tips, tricks and highlight some of the drawbacks of such tools.

So, I don’t intend to write a blog on how pi-hole works in any sort of detail, instead I wanted to share my own experiences.

Which Blocklist?

There is only one blocklist you should be using: https://dbl.oisd.nl
This is a list of around one million unique entries, and is basically a combination of all other known blocklists; it will block pretty much any ad that can be blocked. There is a support page here: https://redd.it/dwxgld

A very important discussion point is the 33.9% blocked figure… that doesn’t mean a third of my traffic is ads; it means that some services hate being blocked and will repeatedly make requests forever, and ever, and ever. It’s fine, let them – they want to call home. And remember, these are DNS requests, not traffic.

Roku hates pi-hole

Usability

Some websites work better with ads, which is a clever trick. For tech savvy users, it isn’t an issue, but one thing I quickly found is that my family hated using Google with blocks, here’s why:

Google clearly marks ads, but sticks them at the top of the search results. I can ignore them easily, but my wife can’t – sometimes the result she wants is an ad. That’s fine, but by default pi-hole will drop the DNS requests behind those results (due to a redirect), and if those are dropped you get ‘Page cannot be displayed’ and an annoyed wife. Do you know what I like much less than ads? Annoying my wife – so that was whitelisted straight away. Even using Twitter can be frustrating, as links are forced through their tracking platform.

In addition, some video streaming sites won’t play content if you use the default block lists, or sometimes the default block lists don’t block the ads – this is where pi-hole’s logs are super handy. Filter on your device and refresh to see the entries come in, and with a bit of trial and error you can blacklist/whitelist domains until you find the sweet spot – or just admit defeat and allow the ads.

One other very important feature is the temporary disable – it will allow you to quickly prove whether or not the pi-hole is to blame:

Some services, like YouTube and Amazon Prime, are clever and stream their ads from the same source as the content – pi-hole can’t block those, I’m afraid.

REGEX

You can use regex in blacklist entries, which is really useful. I use it to capture some of the more complex ad services which have numerous sources for ads (Roku is a prime example), or, if you don’t trust your cheap IoT devices, you can blanket-block by TLD – for example, a Chinese domain block is (^|\.)cn$ (note the escaped dot, so it only matches a literal ‘.’).
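Before adding a TLD-blocking pattern, it can be worth sanity-checking it against some hostnames. Python’s regex syntax is close enough to pi-hole’s for this simple case, so a quick illustrative check might look like:

```python
import re

# TLD-blocking pattern: the dot is escaped so it matches a literal "."
# (an unescaped dot would match any character, e.g. "falcn").
CN_BLOCK = re.compile(r"(^|\.)cn$")

def is_blocked(domain):
    """Return True if the domain would match the blanket .cn block."""
    return CN_BLOCK.search(domain) is not None
```

For example, `is_blocked("example.cn")` is True, while `is_blocked("example.com")` and `is_blocked("falcn")` are False.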

Control those cheap IP cameras!

DHCP

When you first set pi-hole up you will see that all entries in the logs are from your router, and that’s because your router is forwarding the requests. The easy way to sort that is to disable your router’s DHCP and enable it on pi-hole – you get much richer reporting without losing any functionality.

BIND it with unbound

DNS by default isn’t very secure, and when your pi-hole doesn’t know the answer, it forwards the query to an upstream DNS server – can you trust them? Why would you, if you don’t control them?
One way of improving things is pairing pi-hole with unbound, which gives you a recursive DNS server: each request is resolved by walking down from the root servers to the authoritative DNS server for that domain. Instead of boring you directly, I’ll bore you indirectly – read this, install unbound and enable DNSSEC: https://docs.pi-hole.net/guides/unbound/

Pi-hole Version 5 (Beta)

If you’re feeling brave, jump to v5 – it has plenty of features, such as adding custom DNS entries (previously done by editing the hosts file), creating groups of clients (not blocking ads for the kids, for example), CNAME inspection, plus much more, and it’s very stable.
https://pi-hole.net/2020/01/19/announcing-a-beta-test-of-pi-hole-5-0/

echo "release/v5.0" | sudo tee /etc/pihole/ftlbranch
pihole checkout core release/v5.0
pihole checkout web release/v5.0
Example of v5 feature

WLAN Monitoring – Splunking on Pi


The Caveat/Disclaimer

I’m going to start with some caveats that completely undermine myself and this blog, but I feel they are needed and I want to be completely honest:

  1. I am by no means an expert in Splunk
  2. I am not an expert in Wi-Fi
  3. I am not an expert in any of the tools we talk about here, including Linux

Why did I feel that was necessary? Well, some of the things I talk about here are simply my own way of discovering how to use these tools and get them to interface with each other, and they probably do not follow any industry standards or best practices.
This, for me, is a learning journey, and some of the documentation is lacking or out of date, so I have found my own ways to overcome these challenges. To make things easier for myself, where the documentation is up to scratch I will simply refer to it, but I have had to deviate from some standard instructions and ‘bodge’ things.

Finally, this is evolving, I’ve only just got it working!

Now for the blog…

Ever since I started playing with Splunk at work I have become quite obsessed with it, as have most of my team. Seeing data displayed in different ways helps provoke new thoughts and ideas, and also helps identify issues that you never knew existed – it is an awesome bit of software and I encourage all WLAN professionals to have a go. On top of that, there are some great innuendos you can use when playing with Splunk, which helps keep the work day light-hearted.


I’ve also been playing a bit with a WLANPi, and some of the tools like Horst and Kismet, and have been really impressed with the wealth of data that you can export – the WLANPi is one of the best collaboration efforts of the industry.

Over a year ago I had a vision of both these worlds colliding, using a remote Pi to stream data back to Splunk, but I had numerous problems with the hardware and software that was available. For example, I could never get the Splunk forwarding software installed on a WLANPi, and there were some problems with the RPI platforms that I couldn’t get around (until now).

When the RPI 4 came out, I decided to give it another go – here’s how you can too.

What You Need

  1. RPI
  2. Server/Laptop/Desktop
  3. NICs capable of Monitor mode (I use Comfast CF-912AC)
  4. SD Card

Getting Installed

  1. Install Splunk on a server/laptop/desktop. By default it will install Splunk Enterprise, you can convert to Splunk Free (500MB/day data cap) once installed.
    1. Link to Download: https://www.splunk.com/en_us/download/splunk-enterprise.html
    2. Link to Install Manual: https://docs.splunk.com/Documentation/Splunk/7.2.4/Installation/Whatsinthismanual
  2. Install Raspbian Lite on RPI
    1. Link to Download: https://www.raspberrypi.org/downloads/raspbian/
    2. Link to Install Manual: https://www.raspberrypi.org/documentation/installation/installing-images/README.md
    3. Important, before putting the SD Card into the Pi, create a file in the boot folder of the media called ssh (no extension). This will allow SSH access as soon as it is connected to a network.
  3. Insert the SD Card, plug into a network and power.
  4. SSH onto the RPI (you may need to find its IP from a network scanning tool) and install the Re4son Kernel. This Kernel has the support for a heck of a lot of adapters and makes things easier:
    1. Download and Install Manual: https://re4son-kernel.com/re4son-pi-kernel/
  5. Install the Splunk Universal Forwarder on the RPI (ARMv6) – this is the software used to forward data into Splunk:
    1.  Link to Download: https://www.splunk.com/en_us/download/universal-forwarder.html (hint, once you  start the download you can view a wget link on the right – you can copy this and download directly onto your RPI)
    2. This is the most problematic install, but this got it working for me – just run these commands:
      1. sudo tar xvzf splunkforwarder.....64.tgz -C /opt
      2. Set the environment variable as documented here: http://dev.splunk.com/view/quickstart/SP-CAAAFDH
      3. ln -s /lib/arm-linux-gnueabihf/ld-linux.so.3 /lib/ld-linux.so.3
      4. Run sudo ./splunk start from $SPLUNK_HOME/bin
      5. Hopefully, it should ask you to accept an EULA and set a username and password, if that works, you’re in!
  6. Disable Predictable Network Names – this gets around an annoyance where the first adapter Raspbian sees becomes “wlan0”, so without disabling this, it can sometimes assign the wrong interface as wlan0 with an external NIC:
    1. Run sudo raspi-config
    2. Go to Option 2 – Network Options
    3. Go to Option 3 – Network interface Names
    4. Change to disable
    5. Exit the wizard and reboot (nb, the Splunk Forwarder doesn’t start automatically so will need starting again)
  7. Back on your main Splunk Instance, you need to now add a Receiving Port:
    1. Click Settings > Forwarding & Receiving > under Configure Receiving click Add New, and enter a port number (I used the suggested port, 9997)

Getting Data In

Right, that should be your Splunk instances up and running, and now you need some data. You can import data in many formats, but initially I’m going to focus on horst and its ability to export to CSV.

  1. On your RPI, install horst (sudo apt-get install horst)
  2. Create a folder called SplunkFiles (mkdir /home/pi/SplunkFiles)
  3. Run cd $SPLUNK_HOME/etc/system/local
  4. Create a file called inputs.conf (sudo nano inputs.conf)
  5. Enter the following details to enable the Splunk Forwarding software to monitor:
    [default]
    host = raspberrypi
    [monitor:///home/pi/SplunkFiles/horst.csv]
    index=main
    sourcetype=csv
  6. Press ctrl-x and then y at the prompt to save
  7. Create the file outputs.conf (sudo nano outputs.conf) in the same location as inputs.conf. If the file already exists, you can overwrite it:
    [tcpout]
    defaultGroup=my_indexers
    [tcpout:my_indexers]
    # this is the ip of your Splunk Enterprise
    server=xx.xx.xx.xx:9997
  8. Restart Splunk Forwarder (run sudo ./splunk restart from $SPLUNK_HOME/bin)
  9. Set Splunk Forwarder to run at boot (if you wish) – run sudo ./splunk enable boot-start from $SPLUNK_HOME/bin
  10. Now it’s time to start Horst! First, get your interface using ifconfig
  11. Now run the following command: sudo horst -i [interface] -o /home/pi/SplunkFiles/horst.csv (eg, sudo horst -i wlx40a5ef47aa11 -o /home/pi/SplunkFiles/horst.csv)
  12. Horst should run, and all being well it will start writing to horst.csv.
  13. If everything is working, Splunk Forwarder should fire that into Splunk – that’s easy to check! On your Splunk Server, click Search and Reporting and simply search for *
    Splunk Data
  14. If you don’t see data, it might be due to the date and time being wrong on your RPI – you can set this in raspi-config: https://howchoo.com/g/njnlzjmyyjg/how-to-set-the-timezone-on-your-raspberry-pi
Once you’ve got your data in, you can simplify Horst by modifying the config file: sudo nano /etc/horst.conf
Then to start horst, you just run sudo horst.


To run horst automatically, you can run sudo nano /etc/rc.local and add sudo horst & prior to exit 0

A note about data inputs:

To add additional inputs, you need to edit inputs.conf as above:

[monitor:///path/to/files/csv.csv]
index=main
sourcetype=nameofsource

You can make up the source name, but one thing I have found tricky is the ingestion of headers. There are a couple of ways to tell Splunk which headers to use; I assume there is a bug, as some methods work some of the time, but none work all of the time.

On your UF, you can create a file in $SPLUNK_HOME/etc/system/local called props.conf. The format of the file is as follows:

[nameofsource]
CHARSET = UTF-8
INDEXED_EXTRACTIONS = csv
description = Comma-separated value format. Set header and other settings in "Delimited Settings"
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
disabled = false
FIELD_NAMES = example,header,in,csv,format

You can create the same file on your Splunk server too, each time you edit the file you will need to restart Splunk on both the forwarder and server.
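For the avoidance of doubt, FIELD_NAMES is conceptually just applying your own header to headerless rows – the sort of thing this purely illustrative Python does (the field names here are the placeholder ones from the stanza above, not real horst columns):

```python
import csv
import io

# Placeholder header, matching the FIELD_NAMES example in props.conf.
FIELD_NAMES = ["example", "header", "in", "csv", "format"]

def parse_headerless_csv(text, field_names):
    """Yield one dict per CSV row, keyed by the supplied field names."""
    reader = csv.reader(io.StringIO(text))
    for row in reader:
        yield dict(zip(field_names, row))
```

Splunk does this at index time (hence INDEXED_EXTRACTIONS), which is why a restart is needed after editing the file.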

Another method is to view your data as set out below, then click Extract Fields and follow the instructions (Delimited, Comma, etc.).

Visualising Your Data

Now, I don’t intend to teach you all how to use Splunk, as it’s all really well documented, but this should give you some things to consider. Once you get used to how to handle searches, you can use them to create really cool dashboards.

To help you get started though, the easiest way to construct a search is as follows:

  • Always start with the source of the data, eg: host=”raspberrypi” source=”/home/pi/SplunkFiles/horst.csv”
  • A pipe ( | ) separates search commands – each command passes its results on to the next, a bit like a shell pipeline
  • All fields in each event are searchable (click the chevron to view them)
    Splunk Fields
  • Here are a few examples of the easiest search types you can use to begin are:
    • timechart [latest/max/min/avg](FIELD) by FIELD
      • eg. to see the maximum SIGNAL by source MAC you can run timechart max(SIGNAL) by “MAC SRC”
      • You can also run multiple, such as timechart max(SIGNAL), min(SIGNAL), avg(SIGNAL) by “MAC SRC”
      • You can give friendly names to things too.. timechart min(SIGNAL) as “Min Signal” by “MAC SRC”
      • You can throw in a span to change the time range timechart span=30m min(SIGNAL) as “Min Signal” by “MAC SRC”
    • stats [latest/max/min/avg](FIELD) by FIELD
    • chart [latest/max/min/avg](FIELD) by FIELD [over FIELD]
      • chart [latest/max/min/avg](FIELD) by _time is similar to a time chart
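Under the hood, a timechart is just a bucketed group-by: events are dropped into time spans, split by a field, and an aggregate is taken per bucket. A rough, purely illustrative Python sketch of what timechart span=... max(SIGNAL) by "MAC SRC" computes:

```python
def timechart_max(events, span, value_field, by_field):
    """events: dicts with a numeric 'time' key plus data fields.
    Returns {(bucket_start, by_value): max of value_field in that bucket}."""
    out = {}
    for e in events:
        bucket = (e["time"] // span) * span   # floor time into its span
        key = (bucket, e[by_field])
        v = e[value_field]
        if key not in out or v > out[key]:
            out[key] = v
    return out
```

Swap the max comparison for a min, sum or running average and you have the other aggregate functions listed above.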

Clicking on the Visualisation tab will allow you to display the results in various graph and chart types.

Once you’ve got used to doing searches, you can get creative and even create dashboards

Creative Charts

Kismet

It is also possible to install Kismet and export data using the well-documented Kismet API, but you need a third-party app installed on Splunk which used to be free but now costs $99.

Getting it working is quite easy: just follow the documentation on Kismet’s page and install the third-party app into Splunk – if you use Kismet you do not need to install the Universal Forwarder.

Edit: I have very quickly thrown together a dashboard which you can copy.

Example Dashboard

In the Dashboards menu, simply click Create New Dashboard and then click on XML, you can replace the entire contents with the below – remember these are examples that I have put together very quickly, it is intended to give you a feel for how searches work and how to use dashboards:

<form>
  <label>Horst</label>
  <fieldset submitButton="false">
    <input type="time" token="field1">
      <label></label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="text" token="mac" searchWhenChanged="true">
      <label>MAC Address</label>
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Frame Type By Channel</title>
      <chart>
        <search>
          <query>host="raspberrypi" source="/home/pi/SplunkFiles/horst.csv" "WLAN TYPE"="*"
| chart count("WLAN TYPE") over "CHANNEL" by "WLAN TYPE" usenull=false</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
          <refresh>10m</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
    <panel>
      <title>Top Talkers Per Channel (Bytes)</title>
      <chart>
        <search>
          <query>host="raspberrypi" source="/home/pi/SplunkFiles/horst.csv"  
| chart sum("LENGTH") as Bytes over CHANNEL by "MAC SRC"  usenull=false limit=10000</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
          <refresh>10m</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <title>Avg Bytes per Client</title>
      <chart>
        <title>SSID</title>
        <search>
          <query>host="raspberrypi" source="/home/pi/SplunkFiles/horst.csv" "MAC SRC"=$mac$
| timechart span=1m avg("LENGTH") by "MAC SRC" usenull=false useother=false limit=10000</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
          <refresh>1m</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="charting.chart">line</option>
        <option name="charting.chart.nullValueMode">connect</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
    <panel>
      <title>Avg RSSI per Client</title>
      <chart>
        <title>SSID</title>
        <search>
          <query>host="raspberrypi" source="/home/pi/SplunkFiles/horst.csv" "MAC SRC"=$mac$
| timechart span=1m avg("SIGNAL") by "MAC SRC" usenull=false useother=false limit=10000</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
          <refresh>1m</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="charting.chart">line</option>
        <option name="charting.chart.nullValueMode">connect</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <title>Frame Types</title>
      <chart>
        <search>
          <query>host="raspberrypi" source="/home/pi/SplunkFiles/horst.csv" "MAC SRC"=$mac$
| timechart span=1m count("WLAN TYPE") by "WLAN TYPE" usenull=false useother=false limit=10000</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
          <refresh>1m</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="charting.chart">area</option>
        <option name="charting.chart.nullValueMode">connect</option>
        <option name="charting.chart.stackMode">stacked100</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.legend.mode">seriesCompare</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
    <panel>
      <title>Beacon RSSI</title>
      <input type="multiselect" token="SSID" searchWhenChanged="true">
        <label>SSID</label>
        <choice value="*">All</choice>
        <fieldForLabel>ESSID</fieldForLabel>
        <fieldForValue>ESSID</fieldForValue>
        <search>
          <query>host=raspberrypi source="/home/pi/SplunkFiles/horst.csv" ESSID!=ESSID
| dedup ESSID
| table ESSID</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <default>*</default>
      </input>
      <chart>
        <title>SSID</title>
        <search>
          <query>host="raspberrypi" source="/home/pi/SplunkFiles/horst.csv" ESSID=$SSID$ "WLAN TYPE"=BEACON
| timechart span=30s avg("SIGNAL") by "MAC SRC" usenull=false</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
          <refresh>1m</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="charting.chart">line</option>
        <option name="charting.chart.nullValueMode">connect</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <title>Frame Type By Channel</title>
      <chart>
        <search>
          <query>host="raspberrypi" source="/home/pi/SplunkFiles/horst.csv" "WLAN TYPE"="*"
| chart count("WLAN TYPE") over "CHANNEL" by "WLAN TYPE" limit=10000 useother=false usenull=false</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
          <refresh>10m</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="charting.axisTitleX.visibility">collapsed</option>
        <option name="charting.axisTitleY.visibility">collapsed</option>
        <option name="charting.axisTitleY2.visibility">collapsed</option>
        <option name="charting.chart">pie</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.legend.placement">none</option>
        <option name="height">487</option>
        <option name="refresh.display">progressbar</option>
        <option name="trellis.enabled">1</option>
        <option name="trellis.splitBy">CHANNEL</option>
      </chart>
    </panel>
  </row>
</form>

Living with Wiperf

A while back I debuted a new tool that I had been helping @WiFiNigel to develop called Wiperf. You can read all about Wiperf here on Nigel’s blog or on GitHub. What I haven’t really discussed is that I have been using Wiperf privately for over six months now, and it is incredibly useful and powerful – but dealing with remote sensors isn’t always as straightforward as you might think, and sometimes it’s pretty hard keeping them online.

One of the massive benefits of ‘vendor’ sensor solutions is that they have a management platform; in its simplest form, although a brilliant piece of work, Wiperf is a collection of scripts to test the network – there is no simple way of managing remote Pis that I have yet discovered.

As you all know, the wireless community is very active, with some pretty bright and clever people, so some of today’s problems probably won’t exist in the future; I will do my best to keep this post up to date as the collective finds solutions.

Provisioning

First off, provisioning is a challenge. Whilst with a bit of attention and time you could probably code it, all the probes need configuring. First you have to get the SD card provisioned, then you have to get it on a network. If you’re local to a sensor, that’s easy, but if you’re remote, you’re going to struggle.

My method at the moment of provisioning devices is using a private GitHub repo with the required config files – as an example:

sudo apt-get update && sudo apt-get upgrade -y
curl -s 'https://raw.githubusercontent.com/zerotier/ZeroTierOne/master/doc/contact%40zerotier.com.gpg' | gpg --import && \
if z=$(curl -s 'https://install.zerotier.com/' | gpg); then echo "$z" | sudo bash; fi
sudo zerotier-cli join <ztkey>
sudo apt-get install python3 python3-pip iperf3 git watchdog unattended-upgrades msmtp msmtp-mta bsd-mailx -y
sudo pip3 install iperf3 speedtest-cli configparser
sudo pip3 install git+git://github.com/georgestarcher/Splunk-Class-httpevent.git
cd ~
sudo git clone https://github.com/wifinigel/wiperf.git
sudo curl -s https://<gitkey>@raw.githubusercontent.com/<myprofile>/<supersecretrepo>/master/config.ini -o ~/wiperf/config.ini
sudo curl -s https://<gitkey>@raw.githubusercontent.com/<myprofile>/<supersecretrepo>/master/watchdog.conf -o /etc/watchdog.conf
sudo apt-get update && sudo apt-get upgrade -y
sudo systemctl enable watchdog
sudo curl -s https://<gitkey>@raw.githubusercontent.com/<myprofile>/<supersecretrepo>/master/mycron -o mycron
sudo crontab mycron
sudo curl -s https://<gitkey>@raw.githubusercontent.com/<myprofile>/<supersecretrepo>/master/50unattended-upgrades -o /etc/apt/apt.conf.d/50unattended-upgrades     
sudo tee -a /etc/msmtprc > /dev/null <<EOT
account        mailgun
host           smtp.eu.mailgun.org
port           587
from           $HOSTNAME@<supersecretdomain>
user           postmaster@<supersecretdomain>
password       <supersecretpassword>
auth           on
tls            on
tls_starttls   on
tls_certcheck  on
logfile        ~/.msmtp.log
account default : mailgun
EOT
echo "This is the email body" > /tmp/body.txt && sudo mailx -s "This is the subject" me@<supersecretdomain> < /tmp/body.txt; rm /tmp/body.txt
sudo reboot

The above script isn’t really a script, it’s just a collection of commands – until I pick up a book and find some time, that’s how I provision. The benefit is that the only things I have to do are get a local network connection and set a hostname – then paste the commands into the Pi and we’re all good.
From top to bottom, this is what’s going on:

  1. Update and Upgrade via apt
  2. Install ZeroTier and join my Network
  3. Install the Wiperf pre-reqs, Watchdog, unattended-upgrades and an SMTP client (more on these later)
  4. Install Wiperf pre-reqs from Pip
  5. Clone the Wiperf repo
  6. Grab my Wiperf config
  7. Grab my Watchdog config
  8. Update and Upgrade via apt, again, just for good measure
  9. Enable Watchdog as a service
  10. Grab my crontab config
  11. Grab my unattended-upgrade config
  12. Configure SMTP using Mailgun
  13. Test Mailgun
  14. Reboot

All being well, your sensor is rocking and rolling by this point – it’s rough, nasty and crude but it works. Eventually I want to build an automated provision where you just feed a seed file to the Pi, but I’m not that clever.
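The seed-file idea mentioned above doesn’t actually need much code. Here’s a minimal Python sketch – the file format (one shell command per line, `#` for comments) is entirely hypothetical, just to show the shape of it:

```python
import subprocess

def run_seed_file(path, dry_run=False):
    """Run each non-empty, non-comment line of a seed file as a shell
    command, stopping at the first failure. Returns the list of
    commands that were (or would be) executed."""
    executed = []
    with open(path) as seed:
        for line in seed:
            cmd = line.strip()
            if not cmd or cmd.startswith("#"):
                continue  # skip blanks and comments
            if not dry_run:
                subprocess.run(cmd, shell=True, check=True)
            executed.append(cmd)
    return executed
```

You could drop the provisioning commands from the block above straight into such a file, with `<ztkey>` and friends substituted per sensor.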

Oh, and if you have a captive portal, you need to put additional thought into how you log in.

watchdog

Nigel has done a brilliant job of making Wiperf resilient, so Watchdog may not give you anything extra, but for me it’s ‘another line of defence’. Wiperf already reboots the sensor under certain conditions, and Watchdog does much the same: it’s a daemon which runs predefined tests and, depending on the outcome of those tests, can trigger a reboot. I use it quite simply – if a Pi cannot contact the Splunk server over ZeroTier, it reboots – but you can set many more tests, as defined here.
My config is pretty straight forward, I just amend the top section:

ping                    = zt.ip.add.here
#ping                   = 172.26.1.255
interface               = ztinterface
#file                   = /var/log/messages
#change                 = 1407

Unattended Upgrades

So, you have all your Pis around your network, across countries or wherever – how do you update them? Doing this manually would be a nightmare. Unattended-Upgrades just updates packages via apt automatically – again, it’s configurable – and this guide roughly describes it.
For me, email updates were key. I opted to use mailgun’s free tier to email me with the changelog and notify of any updates. You can do this by editing the following section of the config file:

// Send email to this address for problems or packages upgrades
// If empty or unset then no email is sent, make sure that you
// have a working mail setup on your system. A package that provides
// 'mailx' must be installed. E.g. "user@example.com"
Unattended-Upgrade::Mail "email@goes.here";

On top of that, you need the capability to send email from your Pi – that’s the msmtp msmtp-mta bsd-mailx bit of my apt-get install above. I then configure the msmtp file, which is the last section of my ‘script’ above – finally, it sends a test email using hostname@domain.

ZT MTU & CRONTAB

ZeroTier derives its MTU from the physical NIC, which you will often see is 2600 bytes. This works, most of the time, but as soon as you start tunnelling this via CAPWAP/EoIP or IPSec you may see performance problems. I haven’t found an elegant way of fixing the MTU permanently, so I do this via cron:

*/2 * * * * /usr/bin/python3 /home/user/wiperf/wi-perf.py >> /home/user/wiperf/wi-perf.log 2>&1
*/5 * * * * /sbin/ifconfig ztnfapqn4s mtu 1200
@reboot /sbin/ifconfig ztnfapqn4s mtu 1200
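If you want to confirm the cron job actually took effect, a quick sketch like this does the job – Linux-only, reading the kernel’s sysfs view of the interface, where the interface name is whatever ifconfig shows for your ZeroTier network:

```python
from pathlib import Path

def interface_mtu(iface, sysfs="/sys/class/net"):
    """Read an interface's current MTU from sysfs (Linux only)."""
    return int(Path(sysfs, iface, "mtu").read_text())

def mtu_ok(iface, limit=1200, sysfs="/sys/class/net"):
    """True if the interface MTU is at or below the tunnel-safe limit."""
    return interface_mtu(iface, sysfs=sysfs) <= limit
```

Actually setting the MTU still needs root, so that stays with ifconfig in cron; this is just a cheap health check you could bolt onto a monitoring script.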

Device types

So far, I’ve experimented with various models of Raspberry Pi – some of them never work as well as others. The most stable I have found are the RPI 4B+ 4GB model without an external NIC, and the RPI 3B+ with a Comfast CF-912ac.
I have tried 3A+s, Zeros, etc., and none of them seem as performant or stable. To put it one way, I have had 3x 3B+s with the Comfast NICs permanently online for over 6 months and they have never missed a beat – my 3A+ worked fine for a few days before it started struggling. I guess the point is you need to pick your Pi wisely and test it – when these are deployed remotely it can be difficult to rectify any errors. The below is a view of my 3A+, which is quite clearly struggling.

The other thing to consider is client capability: even the best RPi is a 1×1:1 device, so don’t expect anything better than MCS 8 (86.7 Mbps on a 20 MHz channel with SGI).
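That data-rate figure is just the standard PHY arithmetic – 52 data subcarriers × 8 bits per subcarrier (256-QAM) × 3/4 coding, per 3.6 µs symbol – which a few lines of Python confirm:

```python
def phy_rate_mbps(data_subcarriers=52, bits_per_subcarrier=8, coding=3/4,
                  symbol_us=3.2, gi_us=0.4):
    """PHY data rate in Mbps: data bits carried per OFDM symbol divided
    by the symbol duration (payload + guard interval), one spatial
    stream. Defaults are VHT MCS 8 on a 20 MHz channel with short GI."""
    data_bits = data_subcarriers * bits_per_subcarrier * coding
    return data_bits / (symbol_us + gi_us)

print(round(phy_rate_mbps(), 1))  # ≈ 86.7 Mbps
```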

Recovery Processes

If you rush into deploying these without any real recovery process you’re going to end up with some stranded Pis, and nobody likes a stranded Pi. We’ve heard of people running into every problem you can imagine – building closures, dodgy config changes, things going missing, corrupt SD cards – each of which has left people wandering around remote buildings trying to figure out where they actually put them and trying to get them back online. Even the model of device is important: if you forget you’ve deployed an RPI 4B+ and rock up with a micro-USB cable, you’re going to have a difficult time. It can be a pain. If you want to deploy Wiperf sensors in anger, make sure you document their location and have a process for manual recovery!

Cyber Security

If you’re deploying these sensors, tell your Cyber Security team, otherwise they may end up being picked up and mistaken for a Russian spying device. One approach was to run purely over a guest wireless network using ZeroTier, so no data sent actually transits the network or internet natively and the underlying wireless network almost becomes irrelevant – it’s just another client doing a network test and sending it over the internet, much like Ookla’s Speedtest.net, which also collects stats from its tests. If you are working for an organisation which is security sensitive, make sure they are involved and happy to sign off the deployment.

Conclusion

These sensors are very useful at an extremely low price point, but I guess the whole point of this blog is that you need to have a think about how you want to design your solution. I would not recommend buying tens of sensors without ensuring there is a management wrap around them, and without testing. Once you get it right, they can make a big difference to how you manage your network, from day-to-day operations to assessing the impact of change. Rushing into deploying sensors like this will cause you issues.
As mentioned in the intro, as the tooling matures we will discover new tools and, between us, design more elegant solutions to provision and manage the devices – I have tried some Linux MDM solutions but none of them really blew me away or offered much, but it is still early days and there is a lot of water to flow under the bridge before I would class Wiperf as an ‘enterprise ready’ solution.

So, who can you trust?

There’s a big misinformation problem at the moment; you see fake news being plastered everywhere by media outlets, but there’s a bigger piece of misinformation which, although well intended, puts you at significant risk – and not many people are willing to speak out against it!

Everybody knows, open wireless networks are the enemy. They offer no security, no encryption, and let bad guys capture, sniff and even trick you. Nobody likes being sniffed, especially without consent. So, don’t use them! Use anything but them! And whatever you do, stay away from them!

But, wait! What if I told you that you shouldn’t trust any network you don’t control? You probably wouldn’t listen, as it sounds like I’m scaremongering, it sounds like I’m putting more responsibility on you, right? It shifts the responsibility from the network provider to you, and you don’t like that, right?

TOUGH!

This is a real problem, with the focus being on open wireless networks being such a security risk, it actually means you’re letting your guard down elsewhere. If you do not control a network end to end, it cannot be considered secure – end of. This misinformation leads to so many bad habits, and the general public simply do not understand what is a secure network and what isn’t.

People think a PSK is secure – SPOILER: If shared to anyone but yourself, it is not.
People think a Captive Portal adds security – SPOILER: It does not!
People think a cable is more secure than wireless – SPOILER: It isn’t!
People think using a Public VPN will secure everything – SPOILER: It does not, no way, ever.

Instead of teaching the public good practice and good personal security whilst on the net, the public is taught to take precautions to avoid being exposed on the shortest transmission path of a networked conversation, ultimately leaving themselves exposed to everything beyond.
The bottom line is that unless you control every aspect of the network, end to end, you have absolutely no comfort of security and you should practice good habits to make sure you are secure wherever you go. Let me give you a little clue as to where this is heading – you don’t control the internet.

Here are some tips on how to stay secure when using Zero Trust Networks:

  1. Use HTTPS!
    Back in the day everything was HTTP, which literally sent everything unsecured – you could capture that traffic at any point in transit and see exactly what was going on. Always check for the little padlock in your browser, always ensure you’re using HTTPS, and never, not ever, ignore HTTPS error messages unless you really know what you’re doing. Better still, sites layered with HSTS are even more secure.
    Read here for more info: https://www.howtogeek.com/181767/htg-explains-what-is-https-and-why-should-i-care/
  2. Don’t trust Public VPNs!
    A Public VPN does encrypt your traffic over the local network, and it does encrypt it over some of the internet – that’s a fact, I’m not arguing with that! BUT your traffic has to pop out unencrypted somewhere, namely your VPN provider’s network. Why do you trust that it’s any more secure than your local ISP or network provider? Clue: you shouldn’t.
    I’m not regurgitating other people’s content: https://www.youtube.com/watch?v=WVDQEoe6ZWY#action=share
  3. Keep your software up to date
    Obvious advice, but bugs and vulnerabilities exist in all software. Whenever that vulnerability is exposed the software is patched. Stay up to date, silly.
  4. Don’t reuse passwords
    Chrome and iOS have built-in password managers now, so you don’t ever have to remember passwords – so don’t reuse them. Third-party password managers like 1Password also exist.
    Here are some tools which can assist you to understand if your details have been breached:
    Google: https://passwords.google.com/checkup/start?utm_source=chrome&utm_medium=ios&utm_campaign=leak_dialog
    Have I Been Pwned: https://haveibeenpwned.com/
  5. Use 2 Factor Auth
    Wherever supported, use 2 Factor Auth. This is basically a way of verifying that you are you – the 2 factors being something you know (a password) and something you have (a phone/app etc). Most popular websites allow you to use 2FA, and you can either use SMS or an app like Google Authenticator to store the codes.
  6. Use a Private VPN
    A Private VPN is a good way of protecting you from the local network and over the air, however it does not protect you from anything on the internet. You can run Private VPNs for free, and you just need a Raspberry Pi and https://www.pivpn.io/.
  7. Use Apps from verified sources
    If you know what you’re doing then that’s fine, but as a general rule don’t trust apps that have fallen outside of Apple’s, Microsoft’s or Android’s vetting regimes – if you’re unsure, don’t jailbreak, don’t root, use the app store – simple.
  8. Watch out for Phishing and dodgy emails
    If you’re unsure whether an email is genuine, delete it; if it is important they will try to contact you via other means.
    Don’t open them if you can avoid it, don’t click on links. Easy ways to spot phishing or spam emails is to inspect the content, check the ‘from’ field is the actual company you expected, hover your mouse over links to see if the URL is what you expected, check for spelling and grammar errors. If you are still unsure whether or not the email is genuine, go to their website in a browser and phone them.
  9. Always expect the worst
    This goes for all communication mediums, and is really the entire point of this blog, don’t trust! For example, if you get a phone call, letter, email or whatever from your bank, verify they are your bank – ring them on a familiar number! Remember, no company should ever ask you for your password or any pins and security codes, these are yours and only yours.
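To make tip 1 a little more concrete, here’s a small Python sketch that checks whether a response advertised HSTS – in practice you’d pass it the headers from `requests.get("https://yoursite")`; the sample values below are illustrative only:

```python
def hsts_max_age(headers):
    """Return the HSTS max-age (seconds) advertised in a response's
    headers, or None if the site doesn't send
    Strict-Transport-Security at all."""
    for name, value in headers.items():
        if name.lower() == "strict-transport-security":
            for directive in value.split(";"):
                directive = directive.strip()
                if directive.lower().startswith("max-age="):
                    return int(directive.split("=", 1)[1])
    return None
```

A site that returns a long max-age is telling browsers to refuse plain HTTP to it for that long, which is exactly the protection the padlock alone doesn’t give you.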

I could go on, but good security hygiene is crucial online. There are thousands of articles offering advice, but the key bit I wanted to be clear about is that an open wireless network is not the enemy, and any attempt to convince you that a network is secure by adding layers of encryption and the like just clouds the matter.
Of course the security of your corporate wireless network for your corporate device is probably fine, but still be careful.
As I have mentioned, if you do not control the end to end network then err on the side of caution and be aware of what you are doing – unfortunately your personal security is down to you, and nobody else.

So yes, connect to that open wireless network, enjoy the free access to the internet, just be sensible.
OFDMA Filler Blogpost

I haven’t got a great deal to share at the moment, but I have been playing around with 802.11ax using a Cisco 9120, a client with an Intel AX200 NIC, and an iPhone 11 – enjoy some OFDMA whilst they both run a speed test at the same time.

This was captured on a Sidekick on ESSl, using 160MHz wide channels.

Splunking on Pi – DIY Active Sensors [UPDATE: Now working with the WLANPi!]

Update: Now WLANPi compatible

I’d love to take the credit, but I can’t – @WifiNigel has been hard at work making his Wiperf tool work cross-platform, and therefore on the WLANPi. Please scroll down to the updated instructions; the header is nice and red so it should stand out!
It’s getting a bit cumbersome keeping this blog up to date, so for the latest read-mes head to https://github.com/wifinigel/wiperf

There are a few different flavours of sensor on the market; you have sensors which perform active tests, sensors which sniff frames, sensors which capture spectrum data, you even have sensors which sit on your wired network and try to figure out how your network is performing.. blah, blah, blah.

Don’t get me wrong, some are really good, but they all have a few things in common:
1) Fairly substantial up-front cost
2) Subscription based
3) Proprietary
4) Require substantial change, whether it be appliances, cabling, downtime, etc.

So, what if I told you that there was a way to get an active sensor without a subscription? What if I told you you could get sensors that had an upfront cost of less than £50/$50? What if I told you that it wasn’t proprietary, and could be built on by the community to bring in new features? What if I told you the leg work had already been done, and the instructions are right here, in this blog?

Huh? Have I got your attention? Good, read on!

Here’s a bit of background.. shortly after I published my latest blog I got chatting to WiFiNigel. See, Nigel had a nifty bit of code that wrote speed-test outputs to a file, however he didn’t have an elegant way of logging that data to visualise it – in my eyes, Splunk was the perfect tool for that. So Nigel did the easy bit (coding) and I did the hard bit (made pretty graphs). Very soon we saw the potential of Mr Bowden’s labour, and we realised just how beneficial this could be.

Utilising a range of tools, we ultimately designed a sensor solution that ran on Raspberry Pis, and could connect over a VPN (ZeroTier) to log data to a central Splunk server (under my desk in my home office). We tested for a while in a production environment and I introduced some additional tools, such as Watchdog (to auto-reboot devices on network or other failure) and Apache Guacamole (which can be used to remotely and securely access devices outside of the ZeroTier VPN). Nigel’s Python skills meant that we have quickly been able to implement several tests, such as:

  1. Speed test
  2. Ping Test of multiple hosts
  3. UDP iPerf Test
  4. TCP iPerf Test
  5. DNS resolution timer of multiple hosts
  6. DHCP timer
  7. Data Rate, RSSI, TX retransmits (for your Pi) and BSSID, SSID, etc

All of these tests can be turned on and off individually, and configured to test your favourite hosts, with results exportable in either JSON or CSV – and the code is on GitHub for you to pull/clone/branch/merge to your heart’s content – WiPerf.
And I guess that’s the key thing: the reason this has excited us as much as it has is that, as with the success of the WLANPi, the community has built a home-grown tool that rivals any vendor tool – and the potential of a community-built and community-driven distributed sensor network is huge.

So, what are you waiting for? Well.. here’s what you need:

  1. RPI 3B+ or 4B (older models are 2.4GHz only, but you can still use them with an external NIC)
  2. A device capable of running a Splunk Free server (https://docs.splunk.com/Documentation/Splunk/8.0.0/Installation/Systemrequirements)
  3. That’s it 🙂

New: Cross Platform Instructions for WLANPi/RPI

  1. Install Splunk on a server/laptop/desktop. By default it will install Splunk Enterprise, you can convert to Splunk Free (500MB/day data cap) once installed.
    1. Link to Download: https://www.splunk.com/en_us/download/splunk-enterprise.html
    2. Link to Install Manual: https://docs.splunk.com/Documentation/Splunk/7.2.4/Installation/Whatsinthismanual
    3. After installation, go to Settings > Data Inputs
    4. Click HTTP Event Collector
    5. Click Global Settings and configure as per the below:
      All Tokens: Enabled
      Default Source Type: _JSON
      Default Index: Default
      Default Output Group: None
      Use Deployment Server: Unticked
      Enable SSL: Ticked
      HTTP Port Number: Up to you, but default is 8088 >> This is needed for your wiperf config file
    6. Click Save, then click New Token, with the following settings on the initial page:
      Name: Anything you want, haven’t worked out what this is used for
      All Other Fields: Blank/Default
    7. Click Next for the secondary page, and configure as follows:
      Source Type: Automatic
      App Context: Searching & Reporting
      Index: Add all
      Default Index: Main
    8. Click Review, then Submit. On the next screen it will display your HEC Token >> This is needed for your wiperf config file

      Your Splunk server is now configured, you can re-use your Token for all your sensors!
  2. Install Your Pi OS: Raspberry Pi
    1. Link to Download: https://www.raspberrypi.org/downloads/raspbian/
    2. Link to Install Manual: https://www.raspberrypi.org/documentation/installation/installing-images/README.md
    3. Important, before putting the SD Card into the Pi, create a file in the boot folder of the media called ssh (no extension). This will allow SSH access as soon as it is connected to a network.
  3. Install your Pi OS: WLANPi
    1. Visit http://wlanpi.com to get the image and write to an SD card.
  4. Insert the SD Card, plug into a network and power.
  5. Go to the WiPerf github repo and follow the README to install the scripts: https://github.com/wifinigel/wiperf
  6. Set the general section of the config.ini file, set your architecture to the correct platform, set data_format to _json, set data_transport to hec, and data_host to your Splunk server IP, and data_port and splunk_token to the relevant information from your Splunk server captured above.
  7. HEC will use your device’s hostname as the default host in Splunk, so set this to something unique and meaningful, otherwise all your data will appear to come from host=raspberrypi

    That’s it, you should now be receiving data into Splunk. Scroll down past the UFW instructions for more Splunky goodness
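For the curious, the HEC side of step 6 boils down to a single HTTPS POST. Below is a minimal Python sketch of the request shape – the server address, token and event fields are placeholders, and whether you need `verify=False` depends on whether Splunk is still on its self-signed certificate:

```python
import json

def build_hec_request(server, token, event, port=8088, host=None):
    """Build the URL, headers and JSON body for a Splunk HTTP Event
    Collector POST. `server` and `token` are placeholders for your
    Splunk server IP and the HEC token captured above."""
    url = f"https://{server}:{port}/services/collector/event"
    headers = {"Authorization": f"Splunk {token}"}
    payload = {"event": event}
    if host:
        payload["host"] = host  # otherwise Splunk records the sender's hostname
    return url, headers, json.dumps(payload)

# then e.g.: requests.post(url, headers=headers, data=body, verify=False)
```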

Instructions for Splunk UFW
This is still a valid method, however it is more complex to install and does not work on the WLANPi

  1. Install Splunk on a server/laptop/desktop. By default it will install Splunk Enterprise, you can convert to Splunk Free (500MB/day data cap) once installed.
    1. Link to Download: https://www.splunk.com/en_us/download/splunk-enterprise.html
    2. Link to Install Manual: https://docs.splunk.com/Documentation/Splunk/7.2.4/Installation/Whatsinthismanual
    3. After installation, you need to add a Receiving Port: Click Settings > Forwarding & Receiving > under Configure Receiving click Add New, and enter a port number (I used the suggested port, 9997)
  2. Install Raspbian Lite on RPI
    1. Link to Download: https://www.raspberrypi.org/downloads/raspbian/
    2. Link to Install Manual: https://www.raspberrypi.org/documentation/installation/installing-images/README.md
    3. Important, before putting the SD Card into the Pi, create a file in the boot folder of the media called ssh (no extension). This will allow SSH access as soon as it is connected to a network.
  3. Insert the SD Card, plug into a network and power.
  4. Install the Splunk Universal Forwarder on the RPI (ARMv6) – this is the software used to forward data into Splunk: Link to Download: https://www.splunk.com/en_us/download/universal-forwarder.html (hint, once you start the download you can view a wget link on the right – you can copy this and download directly onto your RPI)
  5. This is the most problematic install, but this got it working for me – just run this command: sudo tar xvzf splunkforwarder.....64.tgz -C /opt
  6. Run sudo ./splunk start from /opt/splunkforwarder/bin
  7. Hopefully, it should ask you to accept an EULA and set a username and password, if that works, you’re in!
  8. Go to the WiPerf github repo and follow the README to install the scripts: https://github.com/wifinigel/wiperf
  9. Set the WiPerf config.ini to write JSON files
  10. Head over to https://github.com/Krisalexroberts/wiperf-splunk-config and copy the .conf files into /opt/splunkforwarder/etc/system/local
  11. Set a unique sensor name on row 2 of inputs.conf and insert your Splunk server IP address in outputs.conf. If you don’t set a unique name for your sensor, all the data you receive will look like it came from the same host!
  12. In /opt/splunkforwarder/bin run sudo ./splunk enable boot-start
  13. Reboot!

That should be enough to get data into Splunk. Now the interesting bit is actually displaying the data – if you head over to Splunking on Pi you can see some example searches and how to display data, these searches can be inserted in Dashboard Panels.

The great thing about this is it barely uses any data; with Splunk Free you get 500MB of indexing a day, and with 5 sensors running tests every 2 minutes I’m using 0.1% of that allocation!
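The back-of-envelope maths supports that claim – assuming roughly 150 bytes per test result, which is my illustrative guess rather than a measured figure:

```python
def daily_indexed_mb(sensors=5, interval_min=2, bytes_per_result=150):
    """Back-of-envelope daily Splunk indexing volume for a fleet of
    sensors. 150 bytes per result is an assumption for illustration."""
    results_per_day = sensors * (24 * 60 // interval_min)
    return results_per_day * bytes_per_result / 1e6

usage_pct = 100 * daily_indexed_mb() / 500  # against the 500 MB/day free cap
```

Five sensors at a two-minute interval produce 3,600 results a day, which at that event size is around half a megabyte – comfortably ~0.1% of the free tier.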

Anyway… that’s how you get one sensor talking to Splunk. How about.. I don’t know.. 100?

ZeroTier

If you haven’t used ZeroTier before, you’re missing out.. so, what is it? Well, think of it as a big switch in the sky. It’s basically an overlay network which allows nodes to talk to each other securely with minimal config, and for up to 100 nodes in a network, it’s free.

  1. Head over to https://zerotier.com/ and create an account
  2. Despite being free, you need to give your card details – but you’re not going to be charged, don’t worry!
  3. Click Networks, and Create a Network
  4. Once created, you will see it populate in the list; click on it and it will display a Network ID – that’s the important bit.
  5. Click on Download and on your Sensors and your server, download the agent. Once downloaded and installed, on each node type sudo zerotier-cli join networkID or enter it in the relevant ZeroTier GUI on your server
  6. Back on the ZeroTier website, navigate back to your networks and scroll down, you’ll see a list of all the devices that want to join, tick the box and you’re done – everything’s talking. Remember to give devices a meaningful name!
  7. On your Splunk server, run ifconfig and you should see a new interface for your VPN, take a note of its IP address – you need to head back over to outputs.conf on your Sensors and replace the previous IP with the ZT IP.
  8. Once you’ve changed outputs.conf, navigate to /opt/splunkforwarder/bin and run ./splunk restart

Again.. that’s it, that will allow up to 100 nodes to talk to each other – easy as pi! One problem I did run across was that the encapsulation overhead on some networks makes it difficult to access the sensors via CLI. I had to set up a cronjob to set the ZeroTier MTU down to 1200 bytes (sudo ifconfig zerotierinterface mtu 1200) – that’s obviously network dependent though – just watch your toxic tails!

One thing to watch out for is that the DHCP reset in WiPerf doesn’t work well with ZeroTier as it has to re-establish a tunnel afterwards.

Watchdog

One thing I spotted pretty quickly was that the sensors went offline sometimes.. and quite randomly. I quickly worked out that RPIs aren’t great at handling network disconnections, so decided to install Watchdog to automatically reboot in preconfigured conditions.

  1. On your Pi run sudo apt-get install watchdog
  2. Edit the config file (sudo nano /etc/watchdog.conf)
  3. You can set watchdog to ping a host, and time out after x failures by setting the ping field to an IP/Host, interface to your wlan interface, and retry-timeout to whatever value you want – I found that anything less than 60 caused frequent reboots.

Apache Guacamole

Apache Guacamole is one thing that gets me excited, see the project here: https://guacamole.apache.org/

What is it? Well, it’s a web server that will let you SSH/RDP/etc to hosts from a web interface. Whilst it’s not really necessary for this project, I found that having multiple devices to manage was cumbersome. You’re best spinning up a CentOS server and joining it direct to ZeroTier, then running the install script found here: https://github.com/Zer0CoolX/guacamole-install-rhel

If you configure TOTP (2FA) and LetsEncrypt (SSL/TLS) via the script, you get really easy and straightforward secure access to a number of devices.

As of today, this doesn’t work on the WLANPi – we simply can’t get Splunk installed, but there are plans to output the WiPerf data as Syslog which should allow you to output to Splunk (hence the unused transport config items in config.ini).

As the RPI runs a similar architecture to the WLANPi, you could in theory run most applications you can run on a WLANPi on an RPI, e.g. Kismet – it’s pretty much a blank canvas though, and we encourage you to share ideas and develop your own code and use cases!

Cisco Blog Awards
Vote for Nigel!

Personally, I think this is super cool, and the possibilities are endless and could quite easily rival a vendor tool – imagine kicking off a HORST capture when a threshold is breached, or mapping your sensors based on available BSSIDs using wigle.net – and they’re just the two I’ve thought of whilst writing this conclusion! I haven’t even begun to think about any integration between your WLC data and the data from the sensors!

Most of the effort here is from WiFiNigel – like I said, all I did was make pretty graphs. Nigel has been nominated in the Cisco 2019 IT Blog Awards, please repay him for not only his effort here, but his effort in all things wireless, by voting here, and of course read his blog here

Thanks!

The Big DFS Debate – Real Life Findings

Whether to use DFS channels or not is an increasing topic of debate, with a vocal few (yes, I mean you, Andrew McHale) choosing to recommend the banishment of these borrowed channels when you’re running time critical traffic over your network, such as Voice.

As a recap, DFS means Dynamic Frequency Selection, and to cut a long story short, 802.11 borrows 5GHz channels which have a primary use elsewhere, mainly radar.
There are a few rules – I won’t go into them in detail, but the main ones are:

  1. If an AP operates on a DFS channel and detects a DFS event, it isn’t welcome and must change channels. This isn’t great, but something we all have to cope with
  2. A client cannot probe on a DFS channel unless it hears a beacon or probe response on that channel – this is very important in Andrew’s anti-DFS mantra.

In the UK, we have just 4 non-DFS channels, the US has more. Wikipedia may help: https://en.wikipedia.org/wiki/List_of_WLAN_channels#5_GHz_(802.11a/h/j/n/ac/ax)

Using iOS as an example, it is widely understood that iOS devices only scan DFS channels in UNII-2e/Band B every 6 scans. Andrew goes into this in detail, but the point is that DFS channels are invisible to iOS devices on 5 out of 6 scans, which got me thinking: does this mean only 1 in 6 iOS devices are using these channels? Logically, the answer would be yes. If you had 2 access points at a minimum of -67dBm, then 5/6 of the clients would end up on the non-DFS AP, but as with anything Wi-Fi… it’s not that straightforward.
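That expectation can be sketched as a toy model – note the big assumption baked in, namely that in a two-AP cell a client lands on the DFS AP on exactly the one association event in six where its scan actually swept the DFS channels, and on the non-DFS AP otherwise:

```python
def expected_split(association_events=6000, dfs_scan_period=6):
    """Where clients land in the simple two-AP model (one DFS AP, one
    non-DFS AP, both above -67 dBm): the client joins the DFS AP only
    on the 1-in-`dfs_scan_period` association events whose scan swept
    the DFS channels. Returns (dfs_share, non_dfs_share)."""
    on_dfs = sum(1 for i in range(association_events)
                 if i % dfs_scan_period == 0)
    return on_dfs / association_events, 1 - on_dfs / association_events
```

Under that model the DFS AP should carry roughly a sixth of the clients – which is precisely the prediction the data below fails to support.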

At work, we are big iOS users – and big DFS users, every bit of frequency helps when you service 500k unique clients a month. So, I put this theory to the test, expecting to find a DFS shaped coverage hole.

I decided to export a list of all Apple iOS devices and their channels, and work out how many clients we see on each channel.

My first thought was to simply look at the ratio of Non-DFS to DFS clients:
[Screenshot: ratio of non-DFS to DFS iOS clients]

I quickly realised that this was pointless: we have far more access points on non-DFS channels, so naturally they attract more clients. I also realised that some of our sites, due to their size, may not have any access points on DFS channels – those were filtered out, as I needed to ensure the client actually had a decision to make. Similarly, I removed the DFS-only sites (of which there were 3 or 4 – damn you, RRM).
I decided it was probably worth looking at the average number of clients per AP on DFS vs non-DFS channels instead, as this removes any bias from the number of APs per channel. What I found was pretty surprising.
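The crunching itself is simple enough. Here is a sketch of the approach in plain Python – the field names and numbers are invented for illustration, not our real export:

```python
from collections import defaultdict

NON_DFS = {36, 40, 44, 48}  # UK non-DFS 5 GHz channels

# Hypothetical export rows: (site, ap_name, channel, client_count)
rows = [
    ("site-a", "ap1", 36, 14), ("site-a", "ap2", 100, 12),
    ("site-b", "ap3", 44, 9),  ("site-b", "ap4", 52, 11),
    ("site-c", "ap5", 40, 7),  # non-DFS only site -> filtered out
]

# Keep only sites where clients had a real choice (both DFS and non-DFS APs)
site_kinds = defaultdict(set)
for site, _, channel, _ in rows:
    site_kinds[site].add(channel in NON_DFS)
mixed_sites = {s for s, kinds in site_kinds.items() if kinds == {True, False}}

# Average clients per AP on each side of the DFS divide
totals = {True: [0, 0], False: [0, 0]}  # non_dfs? -> [clients, aps]
for site, _, channel, clients in rows:
    if site in mixed_sites:
        bucket = totals[channel in NON_DFS]
        bucket[0] += clients
        bucket[1] += 1

for non_dfs, (clients, aps) in totals.items():
    label = "non-DFS" if non_dfs else "DFS"
    print(f"{label}: {clients / aps:.1f} clients per AP")
```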

[Chart: average clients per AP, DFS vs non-DFS]

There’s barely any difference – certainly not enough to cause me to take any action. For clarity, we do not have 802.11k/v enabled, so they are not influencing the results.

What does this tell me?

Well, the obvious answer is that iOS devices have no bias against DFS. One thing that is often overlooked, and which I cover in my blog Ping Really Does Pong, is that devices periodically scan even when they’re not looking to roam. In the midst of a panic roam (where a device’s signal drops off a cliff and it has to find another AP quickly) it would take a while to pick up DFS; but if a device roams naturally due to a gradual degradation of its favourite metrics (RSSI, MCS, SNR, whatever – see green diamond), it probably already has a good idea of the world around it.
No vendor documents its roaming algorithms in detail, so this is just hypothesising, but if a device already knows the world around it, I would assume it doesn’t scan through the entire channel list at a natural roam point.

The other possibility is that some of our sites have coverage issues and clients are forced onto DFS because they have a limited choice of non-DFS APs – we have nearly 2,000 sites, so it’s more than likely, and we don’t run voice, so it’s probably not a big issue!

I don’t have all the answers, and I would be open to any other suggestions, but from the data I have on my own network, DFS channels aren’t causing any issue.

It’s a DF-Yes from this corner of the UK!

 

Set Your Portal Free

It’s no secret that users hate captive portals. It’s no secret that Wi-Fi engineers hate captive portals. So, why do we still have them? Well, there are a few reasons people think they need a portal; here are some:

  1. Management wants to sell stuff
  2. Management wants to email stuff
  3. Management wants to track people so they can sell stuff and email stuff
  4. Management wants to make money
  5. To satisfy legal and regulatory requirements

So, other than number 5, there’s a common theme – this problem is well above Layer 7; the majority of captive portals are used solely to exploit customers, staff, and the like. It’s a bit of a fallacy though: captive portals are deterrents, and they actually give users a false sense of security, potentially exposing user data through poor knowledge of how these networks are secured.

Satisfying legal requirements is a difficult one to crack. Whilst I’m not an expert, in the UK there are two pieces of legislation we have to abide by if we’re the ISP and hold customer data: the Investigatory Powers Act and GDPR.

Feel free to correct me, but to comply with the Investigatory Powers Act as an ISP, you need to be able to identify a user and, at the very least, log the source and destination addresses of the services they access. When the relevant authorities spot that you have a user accessing illegal content, you need to be able to offer them up on a plate.
GDPR means that, as well as offering a user up on a plate, you have to tell them everything you plan to do with their data, whether it’s email lists, browsing habits, or just a list of their names. Users have to be able to opt out (which can be as simple as not using the network), and they have the right to be given all the data that you hold on them (via a Data Subject Access Request).
Complying can be pretty costly – you have to store and process all that data, and you’ll probably need a few lawyers to help along the way.
There is a simple way around this, though it comes at a cost: don’t be the ISP! There are a few companies you could partner with to take the headache away, or you could build the capability internally – but let’s face it, who wants that burden?

I’m sure management won’t be keen on losing an advertising/revenue stream, but if people don’t use your network because of the portal, there isn’t much benefit! Just remind management that if somebody is on your wireless network they can see your store – a few eye-catching posters will get much more of a reaction than an ad on a portal.

One of the things that is almost impossible to find is data on the burning questions a WLAN engineer might have: how does it impact my network, how do I scale, and what do I need to watch out for?

Security and the bad habits

One of the first things you notice is that users have no idea how wireless security works! Now, I wouldn’t expect them to be experts, but there needs to be an education piece to tell users that captive portals do not mean security!

Most users know that https in a URL means the website is secure, and they incorrectly link a secure captive portal with a secure wireless network, potentially exposing themselves. If you’re operating an internal guest network or a BYOD network, make sure your colleagues know that to stay secure they ideally need to be using a VPN or, at the very least, checking that their webpages use HTTPS.
A security-savvy colleague is a safe one; even if you’re not thinking about kicking the portal, this is probably a good thing to do anyway.
Oh yeah, and never log onto a website with a certificate error!
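That last point is easy to sanity-check yourself. Here is a small sketch using Python’s standard library to verify a host’s certificate – the hostnames are just examples:

```python
import socket
import ssl

def cert_ok(host: str, port: int = 443) -> bool:
    """True if the host presents a certificate that passes normal
    validation: trusted CA, matching hostname, within its dates."""
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

print(cert_ok("example.com"))         # True if the certificate is valid
print(cert_ok("expired.badssl.com"))  # False – the certificate has expired
```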

You will also probably get requests to put a PSK on the network, or even hide the SSID. The analogy we have used to explain why this is stupid is that a widely shared PSK and a hidden SSID are akin to your favourite supermarket taking down all its branding and signage, locking the doors with a single key, and then posting the address and a copy of the key to all its customers. That’s not security, just the illusion of security – and again, it drives bad behaviour.

How much bandwidth?

Well, this is obviously unique to every use case, but in a carpeted office or high-street retail store, with users doing normal user stuff (Facebook, cloud services, email, browsing, etc.), you’re looking at less than 1 Mbps per user over the air. It’s very, very hard to come up with a single figure, but from experience it barely tickles your WAN links – it’s not a lot to worry about! I would just make sure you have proper protection mechanisms in place, such as a tested QoS policy.
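To put rough numbers on that, here is a quick sizing sketch – the per-user figure and concurrency ratio are assumptions for illustration, not measurements:

```python
def wan_estimate(headcount: int, per_user_mbps: float = 1.0,
                 concurrency: float = 0.3) -> float:
    """Rough peak WAN load in Mbps: not everybody pulls traffic at
    once, so apply an assumed concurrency ratio."""
    return headcount * concurrency * per_user_mbps

print(f"{wan_estimate(500):.0f} Mbps")  # 150 Mbps for a 500-person site
```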

Try not to rate limit, though, especially on the wireless infrastructure. If you are concerned, a protection mechanism such as a policy map with a capped CIR is more beneficial at pinch points within your network, but make sure you build in enough headroom – ideally, you want this traffic off your network as quickly as possible.

What kind of growth should you expect?

Again, this is unique to your use case. In an office environment, we find it works out quite well to assume that users have 2.3 devices each on average, but would struggle to use more than 2 at any one time: if you’re on the phone, you’re not listening to music, but you could be checking your emails on an iPad. Obviously there are exceptions to the rule, but a multiplier of 2.3 has given fairly accurate figures in the past.
Clearly, 2.3 is a point-in-time figure, and you have to apply the multiplier to the number of people in the location, not the number of people using the service with a portal in place! Remember, people don’t like captive portals and won’t use them unless they simply have to. Once the portal goes, it would be safe to assume growth of 300-500% in concurrent connections.
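The arithmetic behind those rules of thumb, as a sketch – the headcount and portal-era baseline below are hypothetical:

```python
def device_ceiling(headcount: int, devices_per_user: float = 2.3,
                   max_concurrent_per_user: int = 2) -> tuple[int, int]:
    """Total devices you might see on the network vs how many could
    plausibly be active at once (per the 2.3-device rule of thumb)."""
    total = round(headcount * devices_per_user)
    concurrent = headcount * max_concurrent_per_user
    return total, concurrent

total, concurrent = device_ceiling(1_000)  # 1,000 people in the building
print(total, concurrent)  # 2300 2000

# Portal-era concurrency understates real demand; scale 300-500% post-portal
baseline = 400  # hypothetical concurrent clients measured with the portal
print(f"Plan for {baseline * 3}-{baseline * 5} concurrent connections")
```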

One of the more difficult things to overcome, especially in a large organisation across multiple areas, is that some APs get pretty hot and you start exceeding a comfortable number of users per AP. There isn’t much I can say about that, except that the AP density at these sites was probably wrong to start with – use it as a basis to fight for funding and improve!

What about channel utilisation?

More users means an increase of channel utilisation, right?

Wrong. Associated clients are happy clients: they don’t probe aggressively. You will quickly realise that a client that is associated and not talking uses less airtime than a client that isn’t associated, especially in 2.4 GHz, where clients quite happily probe on overlapping channels.

One of the other overlooked points is that, again, people hate captive portals! Your users aren’t idiots; they’ll quickly learn that MiFi devices, personal hotspots, 3G/4G routers and even fixed-line broadband are quick ways of getting online without the hassle – the friction the portal adds doesn’t stop them. Once you remove the portal, that headache will start to vanish – and guess what, that’s fewer beacons and less overhead! Great.

Don’t expect miracles, though – you’re probably looking at an average decrease of 2-5% in 2.4 GHz channel utilisation. 5 GHz will more than likely stay the same.

Summary

A wise man by the name of Keith Parsons has campaigned that users want fast, free and frictionless Wi-Fi access – and as I have mentioned above, users will do whatever they need to do to get online, usually to the detriment of your network. With the advent of 5G, we will see Wi-Fi and cellular technologies move closer together than ever before, and users will simply expect a seamless shift onto Wi-Fi – if that’s not offered, your network simply won’t be used.

It is a bit of a journey, and there are a lot of unknowns – bad user behaviour will quickly come out of the woodwork and will need combatting.

In my opinion, especially in a corporate setting, a captive portal such as a sponsor portal shows zero trust: it tells users that they aren’t trusted to think or act for themselves and that somebody else needs to be accountable for their behaviour. A portal-free wireless service makes for a happy customer and a happy colleague.

The bottom line is, don’t be scared – your users will reward your hard work.