I recently asked the maker of the Ekahau site survey tool if they could add a feature that visualises the different qualities of client transceivers. I asked because the range of Wi-Fi client ability continues to expand. Very soon the new Broadcom BCM4354 chip will deliver two-stream 802.11ac Wi-Fi to smartphones, while some very old clients remain in use, along with clients of poor design and/or build quality. Some websites report that the Samsung Galaxy S5, expected in April, will use the BCM4354, and that the iPhone 6 will use it too. So the BCM4354 will rapidly expand the range of Wi-Fi client abilities that WLANs are expected to manage. I think site survey tools should allow me to deliver reports that visualise this diversity of connection quality. If you have to provide a certain level of Wi-Fi service to a diverse set of clients, you need to know what they will experience, not just what can be obtained on the high-end equipment that WLAN professionals use.
This is a link to Cisco’s latest interesting and comprehensive set of statistics and predictions about mobile data traffic from 2013 to 2018. Note that the mobile data traffic discussed in it is traffic passing through mobile operator macrocells. This post is more broadly interested in the growth of wireless data transport, especially as it concerns Wi-Fi.
In their study Cisco note that “globally, 45 percent of total mobile data traffic was offloaded onto the fixed network through Wi-Fi or femtocell in 2013.” They add that “without offload, mobile data traffic would have grown 98 percent rather than 81 percent in 2013.” The study predicts 52% mobile offload by 2018. Nonetheless, it still predicts a 61% compound annual growth rate (CAGR) in global mobile data traffic from 2013 to 2018 (a 10.6-fold increase), from 1.5 exabytes per month at the end of 2013 to 15.9 exabytes per month by 2018. The study further predicts that “the average smartphone will generate 2.7 GB of traffic per month by 2018, a fivefold increase over the 2013 average of 529 MB per month”, i.e. roughly a 39% CAGR. In more nascent areas the study projects that “globally, M2M connections will grow from 341 million in 2013 to over 2 billion by 2018, a 43 percent CAGR”. It does not estimate traffic volumes for M2M, but it does note its overlap with wearables. The study estimates that “there will be 177 million wearable devices globally, growing eight-fold from 22 million in 2013 at a CAGR of 52 percent”, but “only 13 percent will have embedded cellular connectivity by 2018, up from 1 percent in 2013”. The study also considers that “globally, traffic from wearables will account for 0.5 percent of smartphone traffic by 2018” and “grow 36-fold from 2013 to 61 petabytes per month by 2018 (CAGR 105 percent)”. It also states that “globally, traffic from wearable devices will account for 0.4 percent of total mobile data traffic by 2018, compared to 0.1 percent at the end of 2013”. The study projects no significant change to the ordering of mobile data traffic shares by 2018: mobile video 69.1%, mobile web/data 11.7%, mobile audio 10.6%, mobile M2M 5.7%, and mobile file sharing 2.9%. It expects the following monthly data consumption per device type by 2018: laptop 5,095 MB; 4G tablet 9,183 MB; tablet 5,609 MB; 4G smartphone 5,371 MB; smartphone 2,672 MB; wearable device 345 MB; M2M module 451 MB; non-smartphone 45 MB.
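As a quick sanity check on how these figures relate, the short Python sketch below converts between the fold increases and compound annual growth rates quoted above (the only inputs are the numbers from Cisco’s study):

```python
# Convert between a fold increase over n years and a compound annual
# growth rate (CAGR), using the figures quoted from Cisco's study.

def cagr(fold_increase, years):
    """CAGR implied by growing fold_increase times over the given years."""
    return fold_increase ** (1.0 / years) - 1.0

def fold(rate, years):
    """Fold increase implied by a CAGR sustained over the given years."""
    return (1.0 + rate) ** years

# Total mobile traffic: 1.5 EB/month (2013) to 15.9 EB/month (2018).
print(f"Total traffic CAGR: {cagr(15.9 / 1.5, 5):.0%}")   # ~60%, close to the quoted 61%

# Per-smartphone traffic: 529 MB/month (2013) to 2.7 GB/month (2018).
print(f"Per-smartphone CAGR: {cagr(2700 / 529, 5):.0%}")  # ~39%, i.e. roughly fivefold

# Sanity check: a 43% CAGR over five years is roughly a sixfold increase,
# matching 341 million M2M connections growing to over 2 billion.
print(f"M2M fold increase at 43% CAGR: {fold(0.43, 5):.1f}x")
```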
As we can see from Cisco’s predictions, Wi-Fi is set to become even more significant for offloading. To be clear, in their study offloading pertains to devices enabled for both cellular and Wi-Fi connectivity, excluding laptops. The study says that “offloading occurs at the user/device level when one switches from a cellular connection to Wi-Fi/small cell access”. Eventually offloading should be substantially simplified by Hotspot 2.0-enabled equipment, although it will take years for Wi-Fi CERTIFIED Passpoint deployments to reach significant levels. The impact of Hotspot 2.0 is not mentioned in Cisco’s study.
Obviously Wi-Fi is far more than just an offloading adjunct for mobile operators. It also provides adaptability to local needs and containment of data, and it will facilitate the Internet of Things and the Internet of Everything. Ownership of wireless network infrastructure allows wireless data transport to be better matched to local needs, for example providing more control of costs, throughput, latency, and service reliability, along with competitive advantages through differentiating functionality. Ownership also allows data to remain local, circumventing the security concerns and compliance requirements associated with data passing through equipment owned by others. Finally, ownership is the only viable approach for connecting massive numbers of diverse M2M and wearable devices to the Internet of Things and the Internet of Everything. ‘Fog computing’ promotes hyper-local data processing, which its proponents argue is necessary to manage the rapid growth in transported data expected from the Internet of Everything. Naturally this makes no sense without hyper-local connectivity, which is currently dominated by Wi-Fi. Data cabling is clearly not adaptable enough to handle massive, transient, mobile, and rapidly scaling connectivity. So Wi-Fi is destined to continue its rapid growth, not just on its own merits as a general-purpose wireless data transport that will continue to gain new uses, but also as a convenient offloading platform for mobile operators and a network edge for the Internet of Things and the Internet of Everything.
Many organisations, recognising the significance of Wi-Fi, have plans to expand its abilities with improved standards, more licence-free electromagnetic spectrum, and enhanced functionality and technology. Others are developing wireless data transport systems with more specialised uses to accompany Wi-Fi, such as Bluetooth, Zigbee, WirelessHD, and NFC. However, wireless data transport for the Internet of Things and Internet of Everything needs wireless access points to be low cost so they can be deployed in large quantities. They need to handle very high numbers of transient and mobile connections and provide high throughput for uses such as video. They also need to operate at short range to make better use of their licence-free spectrum in a space. Finally, they should operate out-of-band with other transceivers to maintain service levels. These requirements are difficult to address coherently. We have previously suggested the concept of a ‘myrmidon’ access point: in essence a simple (and therefore low-cost) short-range access point, operating out-of-band to Wi-Fi, that would specialise in handling very high numbers of connections and high throughput. Myrmidons would defer all or most other functionality to proximate, far fewer, more intelligent (and so more expensive) access points and/or other specialist ‘orchestration devices’. WiGig is an obvious choice for myrmidons as it is out-of-band to Wi-Fi, has short range and high throughput, and is controlled by the Wi-Fi Alliance. Certainly Cisco’s predictions concerning the numbers of connections from M2M and wearable devices give pause for thought, especially in light of how few are predicted to have their own cellular connectivity. Not using Wi-Fi is an expensive and slow-to-deploy route. This is why we believe the myrmidon access point concept is the most natural approach, as it can be more easily integrated with Wi-Fi. Nonetheless, other approaches using Wi-Fi as it currently exists are possible, especially when more spectrum is made available.
Cisco says its vision for fog computing is to enable its devices to do non-network-specific data processing using their new IOx capability. They describe this processing as happening “at the network edge”. They argue that shortening the distance data travels over the network should reduce total network load and latency, and so ultimately the cost of transporting the increasing amounts of data arising at the network edge from developments like the Internet of Things (IoT).
Developing Cisco’s scheme is very involved for the benefits it can deliver. Much data processing involves fetching data from network-distributed sources such as databases, file servers, and the Internet, so for some data processing tasks network edge processing may even make the situation worse. Consequently, while their scheme can do what they say, it is by no means a general-purpose solution.
If Cisco’s concern is reducing network load, latency, and so data transport costs, then it is worth pointing out that much more capable network equipment than is typically used has been available for many years. The problem with it is affordability. No doubt innovation to enable processing in network equipment will find uses, but for handling increasing network load and providing acceptable latency it is simpler, cheaper, and more generally applicable to innovate around making better-performing network equipment more affordable.
Of more concern is that Cisco’s approach introduces the potential for rogue and badly written software to negatively impact network infrastructure. It will also probably lead to a fragmented market in programming for network equipment. Even if all vendors agree to support at least one particular programming language and runtime environment, vendors will inevitably provide APIs that make use of their specific equipment features. Once these are used, this will tend to lead to vendor lock-in.
Enabling general data processing in network equipment is unnecessary, a suboptimal interpretation of the otherwise good vision of fog computing, and likely to create more problems than it solves.
Processing in fog computing could instead be handled by many specialised processing assets, managed by policy and running a single standardised runtime environment, with the ability to move running processes between assets on demand as directed by automated policy-based controllers. With this approach a process is not associated with a device but with a policy; each policy selects appropriate devices from those available in the pool of assets. Such a network would significantly simplify managing processing and processing assets in complex and dynamic systems. It is important that these assets exist at a broad range of price points, particularly low ones, as that allows processing to be scaled in fine-grained increments. The need for a specialised class of device is consistent with the general trend in IT in which functionality has been progressively devolved to more specialised devices; something Cisco’s proposition goes against. For example, we are now at the beginning of a boom in wearable processing, where specialised devices aggregate into wireless personal area networks (WPANs), and intra-body processing, where specialised devices aggregate into body area networks (BANs). These kinds of devices are not just networked sensors; they do specialised data processing but are still networked. Obviously this proposition is not without concerns, but its processing mobility would address some of the aims fog computing is proposed to address. Intel’s Next Unit of Computing (NUC) could be seen as a forerunner of this kind of asset, but the ideal class of device needs more directed development into these specialised roles, rather than being sold as low-power versions of existing classes of devices like desktop computers.
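As a purely illustrative sketch of the idea (every class, field, and device name below is hypothetical, not any vendor’s API), a policy-based controller selecting processing assets from a pool might look like this:

```python
# Hypothetical sketch of policy-based process placement: a process is
# associated with a policy, not a device, and a controller picks a
# suitable asset from the pool on demand. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    cpu_free: float      # fraction of CPU currently unused
    latency_ms: float    # measured network latency to the data source
    cost_per_hour: float

@dataclass
class Policy:
    min_cpu_free: float
    max_latency_ms: float

def place(policy: Policy, pool: list[Asset]) -> Asset | None:
    """Choose the cheapest asset satisfying the policy, or None."""
    candidates = [a for a in pool
                  if a.cpu_free >= policy.min_cpu_free
                  and a.latency_ms <= policy.max_latency_ms]
    return min(candidates, key=lambda a: a.cost_per_hour, default=None)

pool = [Asset("edge-box-1", 0.7, 2.0, 0.05),
        Asset("edge-box-2", 0.2, 1.5, 0.04),
        Asset("dc-server", 0.9, 25.0, 0.02)]

# A latency-sensitive process: the policy, not the device, drives placement.
print(place(Policy(min_cpu_free=0.5, max_latency_ms=5.0), pool))  # edge-box-1
```

The same process could later be moved to the cheap data-centre asset simply by relaxing the latency bound in its policy, which is the kind of fine-grained, automated scaling the paragraph above has in mind.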
The IEEE has finally announced approval of the IEEE 802.11ac amendment to the 802.11 standard.
It seems like we waited forever.
Understandably, makers were keen to sell us products that take advantage of it before it was finalised; that created the expectation.
As long as buyers are made aware of the risks associated with early adoption, I am happy to have that choice.
802.11ac is expected to be an important step for Wi-Fi, but as it operates only in the 5 GHz band, its shorter range will increase system costs where more access points are required for coverage.
I have been disappointed for some time by the lack of progress from 1 GbE to 10 GbE. Links to APs do not serve just one client device. APs working in two RF bands are common, and extra bands are being tested to provide even more RF PHY capacity. 802.11ac makes better use of the 5 GHz band, and directional antennas and 802.11ac MU-MIMO further increase RF PHY capacity. These factors, combined with climbing throughput demands, mean 1 GbE backhaul is already too little for some. Xirrus XR6000 series APs have 4x 1 GbE ports and 1x SFP+ 10 GbE port to handle backhaul from up to 16 three-stream 11n or 11ac modules. 10 GbE needs to get a lot cheaper soon.
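A rough back-of-envelope calculation shows the pressure. The PHY rates below are nominal three-stream maxima, and the 50% efficiency factor is an assumption for MAC and protocol overhead, so treat this as a sketch rather than a measurement:

```python
# Rough illustration of why 1 GbE backhaul can already be a bottleneck
# for multi-radio APs. PHY rates are nominal maxima; real throughput is
# lower, but even at 50% efficiency a few radios exceed one 1 GbE link.
PHY_RATES_MBPS = {
    "3-stream 802.11n (40 MHz)": 450,
    "3-stream 802.11ac (80 MHz)": 1300,
}
EFFICIENCY = 0.5  # assumed fraction of PHY rate delivered as throughput

for radio, phy in PHY_RATES_MBPS.items():
    for count in (2, 4, 8):
        offered = count * phy * EFFICIENCY
        print(f"{count}x {radio}: ~{offered:.0f} Mb/s offered "
              f"vs 1000 Mb/s of 1 GbE backhaul")
```

Even two 802.11ac radios can offer around 1300 Mb/s under these assumptions, which is why multi-gigabit backhaul, whether aggregated 1 GbE or 10 GbE, is already on the table.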
There are three main concerns when designing a Wi-Fi system: coverage, throughput, and features. In this post we consider coverage.
First a note on coverage as a distinct concern: it is not possible to completely separate issues of coverage from throughput. Better coverage will very often lead to better throughput; we will find out more about this in the post on throughput. Also, some features depend on coverage; again, we will find out why in that post.
Terminology: a device that provides Wi-Fi coverage is called an access point (AP). In commercial deployments APs are usually attached to ceilings or fixed high up on walls, but they can be placed on surfaces. Most home and small business ‘routers’ now contain an AP, along with several other items of network technology. Any Wi-Fi enabled device that connects to an AP is called a station (STA). STAs include all modern laptops, tablets, mobile phones, most e-readers, many games consoles, and increasingly TVs and other consumer electronics. In fact an AP is a special kind of STA, one which provides access to a distribution network for the STAs connected to it. A Wi-Fi system is an example of a wireless local area network (WLAN). Attenuation is the weakening of a signal.
Coverage is about having a Wi-Fi signal where it is required. This has three main considerations: the number of APs that serve a location, the band of radio frequencies used, and obstacles to coverage.
The first consideration is the number of APs that serve a location. Even though most Wi-Fi systems try to ensure each location is covered by an AP, AP ranges inevitably overlap, so many locations will be covered by more than one AP. Indeed, in some cases we deliberately ensure this is so, as a kind of insurance policy against problems with APs. If your Wi-Fi coverage is critical or very important in some locations, you may wish to ensure they are covered by more than one AP. However, for Wi-Fi to work well, APs with overlapping ranges should work on different radio frequencies. For a number of technical reasons this makes Wi-Fi more efficient, but for now it is enough to say we are trying to avoid interference. The most commonly used frequencies are in what is referred to as the 2.4 GHz band, which is available for use without a radio operator’s licence. Consequently it is popular with many kinds of transmitting devices, which can be a problem; more on this later. The 2.4 GHz band is divided into 13 channels in the UK and Europe. Unfortunately each Wi-Fi channel is wider than one of these 13 channels, but four Wi-Fi channels can be well enough separated for tolerable interference levels by centring them on channels 1, 5, 9, and 13. In the US the situation is worse: they have only 11 channels in the 2.4 GHz band, so they can use only three Wi-Fi channels, centred on channels 1, 6, and 11. Much equipment is designed in America with defaults set to suit that market, so you will often see the 1, 6, 11 pattern used in the UK too. Anyway, try to arrange things so that the APs closest together are not centred on the same channel. As your neighbours will also likely have Wi-Fi, you may have to take them into account as well, so it may be best to start planning your channels by looking at what you are receiving from your neighbours at your boundary, if coverage there is important.
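To see why these channel plans work, note that 2.4 GHz channel centres sit 5 MHz apart, starting at 2412 MHz for channel 1, while a Wi-Fi channel is roughly 20 MHz wide. A short Python sketch checks the separation each plan achieves:

```python
# 2.4 GHz channel centres are 5 MHz apart, starting at 2412 MHz for
# channel 1. A roughly 20 MHz-wide Wi-Fi channel therefore needs
# neighbouring centres at least 20 MHz (four channel numbers) apart.
CHANNEL_WIDTH_MHZ = 20

def centre_mhz(channel):
    return 2407 + 5 * channel  # channel 1 -> 2412 MHz

def check_plan(plan):
    for a, b in zip(plan, plan[1:]):
        gap = centre_mhz(b) - centre_mhz(a)
        verdict = "overlaps" if gap < CHANNEL_WIDTH_MHZ else "clear"
        print(f"channels {a} and {b}: {gap} MHz apart -> {verdict}")

check_plan([1, 5, 9, 13])  # UK/Europe: every gap is exactly 20 MHz
check_plan([1, 6, 11])     # US: gaps of 25 MHz, but only three channels
```

The UK plan squeezes in a fourth non-overlapping channel by accepting the minimum 20 MHz spacing, while the US plan trades the fourth channel for 25 MHz of breathing room.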
The second consideration is the band of radio frequencies to use. Wi-Fi can in fact work in two radio frequency bands. The less commonly used higher-frequency band is referred to as the 5 GHz band, which is also available for use without a radio operator’s licence. In the UK it has 19 channels, although for arcane reasons only 16 are typically available. The 5 GHz band channels are a little complicated, so we won’t discuss them here, but importantly 5 GHz Wi-Fi channels fit into them without interfering with their neighbours. These 5 GHz channels have two properties important to us, both due to the physics of radio waves at higher frequencies. Firstly, they don’t travel as far as 2.4 GHz band radio waves. Secondly, it is easier to get them to transmit more information in a given amount of time. For many, the reduced range is the more important property: you will need to fit more APs working at 5 GHz for the same coverage, so the cost of the system is higher. If, however, higher throughput and/or stability is important, then consider using the 5 GHz band. There is another important point to make about the 5 GHz band: it is currently much less used, so it is easier to get a signal free from interference. If your 2.4 GHz band is a mess of all sorts of transmissions, you should consider the 5 GHz band. The last thing we will say about the 5 GHz band is that the latest and greatest version of Wi-Fi, so-called 5G Wi-Fi (802.11ac), only works in this band. So if you are planning to upgrade from a 2.4 GHz system to 5G Wi-Fi, unless your existing system was designed for 5 GHz, you will need more APs, requiring extra cabling, almost certainly requiring existing cables to be repositioned, and probably requiring replacement of, or additions to, the switches they connect to.
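The range difference between the bands can be illustrated with the standard free-space path loss formula. Real buildings add further losses on top, so this understates the gap indoors:

```python
# Free-space path loss: FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
# At the same distance, 5 GHz loses roughly 7 dB more than 2.4 GHz in
# free space; obstructions widen the gap further.
from math import log10

def fspl_db(distance_m, freq_mhz):
    return 20 * log10(distance_m / 1000) + 20 * log10(freq_mhz) + 32.44

for d in (10, 30):
    loss_24 = fspl_db(d, 2437)  # channel 6 centre frequency
    loss_5 = fspl_db(d, 5500)   # a mid-band 5 GHz frequency
    print(f"{d} m: 2.4 GHz {loss_24:.1f} dB, 5 GHz {loss_5:.1f} dB, "
          f"difference {loss_5 - loss_24:.1f} dB")
```

That extra ~7 dB of loss at 5 GHz is a big part of why the same coverage needs more APs in that band.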
The third consideration is that everything Wi-Fi signals pass through attenuates them. Two things are particularly problematic: water and metal. Water absorbs Wi-Fi signals, and as people are about 60% water they are a significant problem. That is why we prefer to place access points high up, so their signals pass more through air and less through people. This is particularly important where many people are close together, such as locations that host events like conferences and social gatherings. Metal reflects Wi-Fi signals, so access points are generally best positioned away from metal. Unfortunately most buildings contain many metal parts. For example, supporting columns contain steel, concrete is usually reinforced with steel, doorways may have a steel support above them, and separate rooms that have been made into one by removing a wall will have steel supporting the span. Some buildings use steel-reinforced concrete for walls and floors. Stairs in non-domestic buildings are usually made of steel. Electrical, plumbing, and air conditioning infrastructure is mostly metal. In larger buildings you may need to consider where water and metal infrastructure is, as it becomes a significantly sized obstruction. As a very rough guide, in the UK a 2.4 GHz signal will be usable in the next room but weak in the room after that, so corridors can be good locations for access points to cover multiple rooms. Distance also weakens Wi-Fi signals, so downgrade your expectations for larger-than-average rooms, and downgrade coverage expectations significantly for 5 GHz. If it is important to get coverage right, or you have doubts, get one access point and test it in possible locations. There are many test tools, from free mobile phone apps through to very expensive professional equipment. Lastly, if your job or reputation will be damaged by getting it wrong, you should probably call us. The small amount we charge for installation is partly offset by the better prices we get for equipment, the more appropriate selection of equipment we will make, our more accurate and efficient installation and configuration, and your time saved.
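To put rough numbers on that ‘next room’ guide, here is a simple link-budget sketch. The transmit power, antenna gain, and per-obstacle losses are assumed round figures for illustration, not measurements:

```python
# Illustrative link budget: received signal = transmit power + antenna
# gain - free-space loss - per-obstacle losses. The obstacle losses are
# assumed round numbers for illustration, not measured values.
from math import log10

TX_POWER_DBM = 17      # assumed typical 2.4 GHz AP transmit power
ANTENNA_GAIN_DBI = 2   # assumed modest omnidirectional antenna
LOSS_PER_WALL_DB = 10  # assumed round figure for a UK brick internal wall
LOSS_PER_CROWD_DB = 3  # assumed loss for the signal crossing a crowd

def rssi_dbm(distance_m, walls=0, crowds=0, freq_mhz=2437):
    fspl = 20 * log10(distance_m / 1000) + 20 * log10(freq_mhz) + 32.44
    return (TX_POWER_DBM + ANTENNA_GAIN_DBI - fspl
            - walls * LOSS_PER_WALL_DB - crowds * LOSS_PER_CROWD_DB)

print(f"Same room, 10 m:      {rssi_dbm(10):.0f} dBm")
print(f"Next room, 10 m:      {rssi_dbm(10, walls=1):.0f} dBm")
print(f"Two rooms away, 15 m: {rssi_dbm(15, walls=2):.0f} dBm")
print(f"Crowded hall, 30 m:   {rssi_dbm(30, crowds=2):.0f} dBm")
```

Under these assumptions the signal is strong in the same room, still usable through one wall, and noticeably weaker after two, which tracks the rough guide above; testing in place remains the only reliable check.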
To make affordable WLANs capable of handling the large numbers of connections and the high throughput most of us anticipate, I would like two classes of access point. The smart access points we already have would work with many more, much simpler, and therefore much cheaper myrmidon access points that make most of the connections and shift most of the data. These two classes of access point must occupy the same space so that the advantages of each are always available.
To be cheap enough to deploy in large numbers, myrmidons must be very simple, specialising only in high numbers of connections and/or high throughput. Ideally they should also be small and use little power, but most importantly they should require no individual human configuration or attention. As they need to coexist in the same space as smart Wi-Fi based APs, they would be advantaged by using out-of-band wireless technologies like 802.11ad/WirelessHD/WiGig, DASH7, Zigbee, and Li-Fi. The sophistication they need but lack is delegated to specialised proximate controller devices. Each controller would orchestrate the configuration and behaviour of large numbers of myrmidons according to localised conditions and usage patterns, and in anticipation of events.
Right now I would like to be installing myrmidons. Desks are an obvious place, but the lower price of this capability enables it to be installed in many more locations. For example, it would be much more affordable to fit out large venue halls, sports stadiums, and outdoor locations such as car parks and playgrounds. It would also help low-margin businesses such as mainstream hotels and high street shops to offer a better wireless connectivity experience. As the Internet of Things, low-cost robotics, WPANs, BANs, wearables, and wireless sensors become more common, so we will need this kind of WLAN.
The DASH7 Alliance promotes the ISO/IEC 18000-7 standard for wireless sensor networking.
On 2013-09-25 it announced the public release of the first version of the DASH7 Alliance Protocol.
Low implementation costs will be important in its competition with Zigbee.
Operating at a lower frequency, DASH7 has an inherent range advantage over Zigbee but lower throughput.
DASH7 also specifies lower power usage than Zigbee, but weaker security features.
Although Zigbee and DASH7 have overlapping applications, their different characteristics should allow each to find a niche.
On 2013-09-24 there were 2,449 smartphones listed by the Wi-Fi Alliance as Wi-Fi CERTIFIED.
72 were listed as 5G Wi-Fi enabled, i.e. 802.11ac.
63 were listed as Passpoint certified, i.e. 802.11u.
5G Wi-Fi is important primarily because its speed and range improvements within the less congested 5 GHz band lead to a better experience. More 5G Wi-Fi networks need to be deployed.
Passpoint (Hotspot 2.0) is important because it enables ‘Wi-Fi roaming’, automating login to diverse Wi-Fi networks. Because the rapid growth in mobile data usage congests cellular networks, the effect is generally a faster connection than a mobile carrier can provide. As a result it is sometimes called ‘mobile carrier offloading’ or ‘Wi-Fi offloading’.
The Wireless Broadband Alliance are promoting Wi-Fi roaming in their Next Generation Hotspot (NGH) project.
Many companies are developing wireless power systems, but one seems to have an advantage.
Ossia Inc. is developing technology it claims can charge devices at distances of up to 30 feet, where others claim only millimetres or centimetres.
They say they are in talks to bring their Cota system to market and think it should be in consumer products by 2015.
At Wireless Head our business is the application of wireless technology to business, so if Ossia Inc. can free us from that last wire it is great news for us.
Take a look at this presentation by the founder and CEO of Ossia for TechCrunch.