802.11k has been getting a bit of attention lately and a few weeks ago Aruba announced features in their software to deal with “sticky clients” i.e. those that seem to associate with a specific AP and then hold onto that connection for dear life even when there are significantly better options available from nearer access points.
I’ve recently been running a couple of our larger events using a new feature in the 7.4 Cisco WLAN Controller code release. These sorts of events are a great testbed for us as they provide a wide variety of device types, and a large number of potential users too.
This is by no means a controlled environment, but it is the real world and that is a lot more interesting to me than any other sort of testing.
Digging around in the Cisco documentation for their 11k implementation, I found a section which details not only how to enable 11k support on a specific WLAN profile but also how to enable an “assisted roaming” option for non-11k clients. I’d give that document a good read, as there are a couple of restrictions on the use of these features that may rule them out for a lot of environments.
There are currently no GUI controls for any of these features so you will need to go into the CLI to enable them.
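For reference, the CLI commands on the 7.4 release look roughly like the following. This is a sketch based on Cisco’s assisted roaming documentation, not a verbatim copy of my config – the WLAN ID of 1 is just an example, and you should check the command reference for your exact release before applying anything:

```
! Disable the WLAN before changing its configuration
config wlan disable 1

! Enable 802.11k neighbor list support on the WLAN,
! plus the dual-band neighbor list
config wlan assisted-roaming neighbor-list enable 1
config wlan assisted-roaming dual-list enable 1

! Enable assisted roaming prediction for non-11k clients
config wlan assisted-roaming prediction enable 1

! Re-enable the WLAN
config wlan enable 1
```

There are also global `config assisted-roaming` options (e.g. the maximum number of association denials) that are worth reviewing in the same document before turning this on.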
So why did I want to test this?
I was primarily interested in how this setup would compare to the aggressive load balancing feature offered in the WLC in a high density environment (auditorium, approx. 1000 devices).
Frequently when using aggressive load balancing we will see that APs in an auditorium are well balanced in terms of client count and distribution across bands. However, we often notice a lot of clients that seem reluctant to move to a better AP, instead preferring to sit on one on the other side of the room and refusing to move – even following some fairly savage changes in supported data rates to try and discourage this kind of behaviour.
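The data rate changes I’m referring to are the usual high-density tweaks: disabling the lowest rates so that distant clients can’t cling to a weak signal. As an illustrative example only (the exact rate set is very much environment-dependent), on the 2.4 GHz band this looks something like:

```
! Take the 802.11b/g network down before changing rates
config 802.11b disable network

! Disable the low legacy rates that encourage sticky, distant clients
config 802.11b rate disabled 1
config 802.11b rate disabled 2
config 802.11b rate disabled 5.5
config 802.11b rate disabled 11

! Make a higher rate mandatory so clients must sustain it to stay associated
config 802.11b rate mandatory 12
config 802.11b rate supported 24

config 802.11b enable network
```

Even with the basic rate pushed up like this, some clients will still hold onto an AP well past the point where a nearer one would serve them better – which is exactly the behaviour 11k is meant to address.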
Often we will see that the client loading, whilst relatively balanced, bears no resemblance to the physical distribution of clients in the room at that time – which suggests to me that there are potentially clients not being served by the best AP for them.
What did we observe in our testing?
Test setup: Cisco 5508 WLC / 3602 APs / 7.4 Code / HD WLAN Tweaks
When we enabled these features on a large public network for a recent event, what we saw was very encouraging. Client distribution across the APs remained relatively even, but the more loaded APs were now those in the more heavily occupied seating areas – not heavily overloaded, but handling 15-20 more clients per radio than the room average (60 clients) at that moment in time. As the room filled out the overall distribution was pretty even, but by then every seat was occupied.
Additionally, we were noticing very few issues with client connectivity – something that aggressive load balancing, if not carefully tuned, can often cause.
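For context, the tuning I’m talking about with aggressive load balancing comes down to a couple of knobs on the WLC. The values below are illustrative placeholders, not a recommendation – set them too aggressively and clients get repeatedly denied association, which is where the connectivity issues come from:

```
! Client count difference between APs before load balancing kicks in
config load-balancing window 5

! How many times a client can be denied before the AP lets it associate anyway
config load-balancing denial 3

! Enable load balancing on the WLAN (ID 1 used as an example)
config wlan load-balance allow enable 1
```

The 11k assisted roaming approach sidesteps this trade-off by steering clients with neighbor reports rather than refusing associations outright.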
In other areas of the building – foyers and the exhibition floor, for example – we noticed that strategically placed APs in the entryways to those areas were seeing reduced client counts, with more clients being roamed onto APs inside the rooms. Client distribution across the show floor was also much more even than we had previously been able to achieve.
Would I run this in a production network?
This was by no means a controlled test environment and there could have been any number of unseen issues as a result of implementing these features on the controller.
There are also a few caveats about the use of this configuration, such as the fact that it can’t run across more than one WLC – I’m not sure how many people out there run one WLC per site, but I can’t imagine it’s a lot.
I would welcome comments or questions on this post. Unfortunately I don’t have much quantifiable data to share – just a few screen grabs from Prime showing client loading, and nothing beyond prior experience to compare this like-for-like with aggressive load balancing.