Wi-Fi Aware

The ability of Wi-Fi enabled devices to automatically discover each other and understand each other’s public Wi-Fi offerings is a powerful enabler for point-to-point Wi-Fi connectivity. Standards-based ad hoc point-to-point Wi-Fi connections currently require so much manual arrangement that they have seen little use. Attempts to initiate such connections using Bluetooth and NFC have lowered the hurdle, but the pre-emptive discovery of potential connections that Wi-Fi Aware provides will make them much easier still.
As is so often the case, the full potential of a technology is unlocked by widely adopted, ideally universal, standards, so Wi-Fi Aware promises to create new possibilities.
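Wi-Fi Aware is built around a publish/subscribe model of service discovery: devices advertise services and listen for matching advertisements in the background, before any connection is made. The sketch below illustrates that model in Python; the class and method names are hypothetical illustrations of the concept, not the actual Wi-Fi Aware API.

```python
# Minimal sketch of publish/subscribe discovery, the model Wi-Fi Aware
# uses. All names here are hypothetical, for illustration only.

class AwareDevice:
    def __init__(self, name: str):
        self.name = name
        self.published: dict[str, dict] = {}

    def publish(self, service: str, info: dict) -> None:
        # Advertise a service (e.g. a public Wi-Fi offering) so that
        # nearby devices can discover it without any manual setup.
        self.published[service] = info

    def subscribe(self, service: str, neighbours: list["AwareDevice"]) -> list:
        # Background discovery: collect matching offerings from nearby
        # devices before the user ever asks to connect.
        return [(n.name, n.published[service])
                for n in neighbours if service in n.published]

cafe = AwareDevice("cafe-ap")
cafe.publish("public-wifi", {"ssid": "CafeNet", "cost": "free"})

phone = AwareDevice("phone")
print(phone.subscribe("public-wifi", [cafe]))
# -> [('cafe-ap', {'ssid': 'CafeNet', 'cost': 'free'})]
```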

MU-MIMO soon and trends

In April Qualcomm announced their forthcoming 802.11ac MU-MIMO chipsets. These include the QCA 9990 and QCA 9992 chipsets for business grade access points, with 4- and 3-stream radios respectively. Their client device chipsets provide 1 and 2 streams. All of these MU-MIMO chipsets support channel widths up to 80 MHz, not 160 MHz. Their highest link speed is therefore 1.73 Gbps on the 4-stream access point and ‘home router’ chipsets, while the 2-stream client device chipsets have a highest link speed of 867 Mbps. So, for an all-Qualcomm setup, the upper limits for access points and ‘home routers’ are more usefully considered as aggregate capacity limits; for example, two 2-stream clients could in theory transfer at 1.73 Gbps in total. In practice, of course, it is more likely to be about half of that or less. As these chipsets were “expected to sample in the second quarter of 2014”, we can expect them in products in the second half of 2014, along with some from their competitors – Broadcom and Quantenna have made similar announcements.
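As a sanity check on the quoted figures, the peak 802.11ac PHY rate can be derived from the standard’s 80 MHz, MCS 9, short guard interval parameters. A minimal sketch in Python:

```python
# Sketch: 802.11ac (VHT) peak PHY rates at 80 MHz, MCS 9, short guard
# interval -- a back-of-the-envelope check of the figures quoted above.
# Constants are from the 802.11ac specification.

DATA_SUBCARRIERS_80MHZ = 234   # data (non-pilot) subcarriers in an 80 MHz channel
BITS_PER_SYMBOL_256QAM = 8     # 256-QAM carries 8 bits per subcarrier
CODING_RATE_MCS9 = 5 / 6       # MCS 9 coding rate
SYMBOL_TIME_SGI = 3.6e-6       # seconds per OFDM symbol with short guard interval

def vht_phy_rate(spatial_streams: int) -> float:
    """Peak PHY rate in Mbps for a given number of spatial streams."""
    bits_per_symbol = (DATA_SUBCARRIERS_80MHZ * BITS_PER_SYMBOL_256QAM
                       * CODING_RATE_MCS9 * spatial_streams)
    return bits_per_symbol / SYMBOL_TIME_SGI / 1e6

print(f"2 streams: {vht_phy_rate(2):.0f} Mbps")  # ~867 Mbps (client chipsets)
print(f"4 streams: {vht_phy_rate(4):.0f} Mbps")  # ~1733 Mbps, i.e. 1.73 Gbps
```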

With MU-MIMO, access points can serve multiple stations simultaneously, so the available streams can be more fully utilised. The most important effect of this is to increase the effective capacity of the spectrum. Obviously this is good news for WLAN owners and managers whose spectrum is operating around capacity. Although MU-MIMO does not make an individual connection faster than before, it does give clients more uncontended air time, so they should also feel the benefit as shorter transfer times.
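To make the air time argument concrete, here is a minimal sketch assuming an idealised 4-stream AP and two 2-stream clients, with all protocol overheads ignored:

```python
# Sketch: why MU-MIMO improves transfer times without raising link speed.
# Assumes an ideal 4-stream AP and two 2-stream clients, each downloading
# 1 GB at the ~867 Mbps two-stream PHY rate (overheads ignored).

LINK_RATE_MBPS = 867
DATA_MB = 1000  # 1 GB per client
transfer_s = DATA_MB * 8 / LINK_RATE_MBPS

# Single-user MIMO: the AP serves one client at a time, so each client
# also spends time waiting while the other is served.
su_completion = 2 * transfer_s

# MU-MIMO: the AP's 4 streams serve both 2-stream clients simultaneously,
# so each client sees uncontended air time for the whole transfer.
mu_completion = transfer_s

print(f"SU-MIMO: both done after ~{su_completion:.1f} s")  # ~18.5 s
print(f"MU-MIMO: both done after ~{mu_completion:.1f} s")  # ~9.2 s
```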

As MU-MIMO is computationally expensive, we are going to see more PoE+ equipment. With more channels available in the 5 GHz band, and more being added, it makes sense to deploy access points with two or more omnidirectional-antenna radios where spectrum is highly utilised. This will add further to power requirements, so we may see a growing market for mid-span PoE+ injectors.
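As a rough illustration of why PoE+ matters here, consider a hypothetical dual-radio MU-MIMO access point. The per-component wattages below are assumptions for illustration, not measurements of any product; the power budgets are from the 802.3af/at standards.

```python
# Sketch: rough power budget for a hypothetical dual-radio 802.11ac AP,
# illustrating why MU-MIMO pushes deployments from 802.3af PoE to PoE+.

POE_AF_BUDGET_W = 12.95   # max power at the powered device, 802.3af
POE_AT_BUDGET_W = 25.50   # max power at the powered device, 802.3at (PoE+)

ap_load_w = {
    "5 GHz 4-stream MU-MIMO radio": 8.0,   # assumption
    "2.4 GHz radio": 5.0,                  # assumption
    "MU-MIMO baseband / CPU": 6.0,         # assumption
}

total_w = sum(ap_load_w.values())
print(f"Estimated draw: {total_w:.1f} W")                                # 19.0 W
print(f"Within 802.3af ({POE_AF_BUDGET_W} W)? {total_w <= POE_AF_BUDGET_W}")  # False
print(f"Within PoE+   ({POE_AT_BUDGET_W} W)? {total_w <= POE_AT_BUDGET_W}")   # True
```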

802.11ac and MU-MIMO are coming at a good time, as expectations and use of Wi-Fi are soaring; a trend that will continue as the Internet of Things and wearable devices gain traction. If rumours are correct, the ever-growing bandwidth needs of static and moving images will soon be joined by the demands of holographic displays. Obviously, with all this data aggregating over Wi-Fi onto Ethernet, we need 10 GbE at a sensible price soon.

Wireless data transport growth

This is a link to Cisco’s latest interesting and comprehensive set of statistics and predictions about mobile data traffic from 2013 to 2018. Note that the mobile data traffic discussed in it is traffic passing through mobile operator macrocells. This post is more broadly interested in the growth of wireless data transport, especially as it concerns Wi-Fi.

In their study Cisco note that “globally, 45 percent of total mobile data traffic was offloaded onto the fixed network through Wi-Fi or femtocell in 2013.” They add that “without offload, mobile data traffic would have grown 98 percent rather than 81 percent in 2013.” The study predicts 52% mobile offload by 2018. Nonetheless, it still predicts a 61% compound annual growth rate (CAGR) in global mobile data traffic from 2013 to 2018 (i.e. a 10.6-fold increase), from 1.5 exabytes per month at the end of 2013 to 15.9 exabytes per month by 2018. The study further predicts that “the average smartphone will generate 2.7 GB of traffic per month by 2018, a fivefold increase over the 2013 average of 529 MB per month”, i.e. a compound annual growth rate of roughly 39%.

In more nascent areas the study projects that “globally, M2M connections will grow from 341 million in 2013 to over 2 billion by 2018, a 43 percent CAGR”. It does not estimate traffic volumes for M2M, but does note its overlap with wearables. The study estimates that “there will be 177 million wearable devices globally, growing eight-fold from 22 million in 2013 at a CAGR of 52 percent”, but “only 13 percent will have embedded cellular connectivity by 2018, up from 1 percent in 2013”. It also considers that “globally, traffic from wearables will account for 0.5 percent of smartphone traffic by 2018” and “grow 36-fold from 2013 to 61 petabytes per month by 2018 (CAGR 105 percent)”, and states that “globally, traffic from wearable devices will account for 0.4 percent of total mobile data traffic by 2018, compared to 0.1 percent at the end of 2013”.

The study projects no significant change to the ordering of mobile data traffic shares by 2018: mobile video 69.1%, mobile web/data 11.7%, mobile audio 10.6%, mobile M2M 5.7%, and mobile file sharing 2.9%. It expects the following average monthly data consumption per device type by 2018: laptop 5,095 MB; 4G tablet 9,183 MB; tablet 5,609 MB; 4G smartphone 5,371 MB; smartphone 2,672 MB; wearable device 345 MB; M2M module 451 MB; non-smartphone 45 MB.
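These growth figures can be cross-checked with a quick compound annual growth rate calculation:

```python
# Sketch: re-deriving the growth rates quoted from Cisco's study.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate: (end/start)^(1/years) - 1."""
    return (end / start) ** (1 / years) - 1

# Global mobile traffic: 1.5 EB/month (2013) -> 15.9 EB/month (2018)
print(f"Traffic growth factor: {15.9 / 1.5:.1f}x")   # 10.6x
print(f"Traffic CAGR: {cagr(1.5, 15.9, 5):.0%}")     # ~60%, quoted as 61%

# Average smartphone: 529 MB/month (2013) -> 2.7 GB/month (2018)
print(f"Smartphone CAGR: {cagr(529, 2700, 5):.0%}")  # ~39%

# M2M connections: 341 million (2013) -> ~2 billion (2018)
print(f"M2M CAGR: {cagr(341, 2000, 5):.0%}")         # ~42%, quoted as 43%
```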

As we can see from Cisco’s predictions, Wi-Fi is set to become even more significant for offloading. To be clear, in their study offloading pertains only to devices with both cellular and Wi-Fi connectivity, excluding laptops. The study says that “offloading occurs at the user/device level when one switches from a cellular connection to Wi-Fi/small cell access”. Eventually offloading should be substantially simplified by Hotspot 2.0 enabled equipment, although it will take years for deployments of Wi-Fi CERTIFIED Passpoint equipment to reach significant levels. The impact of Hotspot 2.0 is not mentioned in Cisco’s study.

Obviously Wi-Fi is far more than just an offloading adjunct for mobile operators. It also provides adaptability to local needs and containment of data, and it will facilitate the Internet of Things and the Internet of Everything. Ownership of wireless network infrastructure allows wireless data transport to be better matched to local needs, for example by providing more control over costs, throughput, latency, and service reliability, along with competitive advantages through differentiating functionality. Ownership also allows data to remain local, circumventing the security concerns and compliance requirements associated with data passing through equipment owned by others. Finally, ownership is the only viable approach for connecting massive numbers of diverse M2M and wearable devices to the Internet of Things and the Internet of Everything.

‘Fog computing’ promotes the hyper-local data processing that its proponents argue is necessary to manage the rapid growth in transported data expected from the Internet of Everything. Naturally this makes no sense without hyper-local connectivity, which is currently dominated by Wi-Fi. Data cabling is clearly not adaptable enough to handle massive, transient, mobile, and rapidly scaling connectivity. So Wi-Fi is destined to continue its rapid growth, not just on its own merits as a general purpose wireless data transport that will continue to gain new uses, but also as a convenient offloading platform for mobile operators and a network edge for the Internet of Things and the Internet of Everything.

Many organisations, recognising the significance of Wi-Fi, have plans to expand its abilities with improved standards, more license free electromagnetic spectrum, and enhanced functionality and technology. Others are developing wireless data transport systems with more specialised uses to accompany Wi-Fi, such as Bluetooth, Zigbee, WirelessHD, and NFC. However, wireless data transport for the Internet of Things and Internet of Everything needs wireless access points that are low cost, so they can be deployed in large quantities. They need to handle very high numbers of transient and mobile connections, and provide high throughput for uses such as video. They also need to operate at short range, to make better use of their license free spectrum in a space, and they should operate out-of-band with other transceivers to maintain service levels. These requirements are difficult to address coherently.

We have previously suggested the concept of a ‘myrmidon’ access point: in essence a simple (and therefore low cost) short range access point, operating out-of-band to Wi-Fi, that would specialise in handling very high numbers of connections and high throughput. Myrmidons would defer all or most other functionality to proximate, and far fewer, more intelligent (and so more expensive) access points and/or other specialist ‘orchestration devices’. WiGig is an obvious choice for myrmidons as it is out-of-band to Wi-Fi, has short range and high throughput, and is controlled by the Wi-Fi Alliance. Certainly Cisco’s predictions concerning the numbers of connections from M2M and wearable devices give pause for thought, especially in light of how few are predicted to have their own cellular connectivity. Not using Wi-Fi is an expensive and slow to deploy route. This is why we believe the myrmidon access point concept is the most natural approach, as it can be more easily integrated with Wi-Fi. Nonetheless, other approaches using Wi-Fi as it currently exists are possible, especially when more spectrum is made available.

Fog computing

Cisco says its vision for fog computing is to enable its devices to do non-network-specific data processing using their new IOx capability. They describe this processing as happening “at the network edge”. They argue that shortening the distance data travels over the network should reduce total network load and latency, and so ultimately the cost of transporting the increasing amounts of data arising at the network edge from developments like the Internet of Things (IoT).

Developing for Cisco’s scheme is very involved for the benefits it can deliver. Much data processing involves fetching data from network-distributed sources such as databases, file servers, and the Internet, so for some tasks processing at the network edge may even make the situation worse. Consequently, while their scheme can do what they say, it is by no means a general purpose solution.
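A toy latency calculation shows how this can happen. Assume a task that needs ten round trips to a central database; all the numbers below are illustrative assumptions, not measurements.

```python
# Sketch: a case where pushing processing to the network edge makes
# latency worse, as argued above. Numbers are illustrative assumptions.

RTT_CLIENT_TO_EDGE_MS = 2    # client <-> nearby edge device (assumption)
RTT_EDGE_TO_CORE_MS = 40     # edge device <-> central data sources (assumption)
RTT_CLIENT_TO_CORE_MS = 42   # client <-> central data centre (assumption)
LOCAL_LOOKUP_MS = 1          # database lookup inside the data centre (assumption)
DB_LOOKUPS = 10              # the task needs ten round trips to a central database

# Processing at the edge: every database lookup must cross the core link.
edge_ms = RTT_CLIENT_TO_EDGE_MS + DB_LOOKUPS * RTT_EDGE_TO_CORE_MS

# Processing centrally, next to the data: lookups are near-free, and only
# the request and the result cross the wide-area network.
central_ms = RTT_CLIENT_TO_CORE_MS + DB_LOOKUPS * LOCAL_LOOKUP_MS

print(f"Edge processing:    ~{edge_ms} ms")     # ~402 ms
print(f"Central processing: ~{central_ms} ms")  # ~52 ms
```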

If Cisco’s concern is reducing network load, latency, and so data transport costs, then it is worth pointing out that network equipment much more capable than is typically deployed has been available for many years; the problem is affordability. No doubt innovation enabling processing in network equipment will find uses, but for handling increasing network load and providing acceptable latency it is simpler, cheaper, and more generally applicable to innovate around making better performing network equipment more affordable.

Of more concern is that Cisco’s approach introduces the potential for rogue and badly written software to negatively impact network infrastructure. It will also probably lead to a fragmented market in programming for network equipment. Even if all vendors agree to support at least one common programming language and runtime environment, they will inevitably provide APIs that expose their specific equipment features, and once these are used they will tend to lead to vendor lock-in.

Enabling general data processing in network equipment is unnecessary, a suboptimal interpretation of the otherwise good vision of fog computing, and likely to create more problems than it solves.

Processing in fog computing could instead be handled by many specialised processing assets, managed by policy and running a single standardised runtime environment, with the ability to move running processes between assets on demand and as directed by automated policy-based controllers. With this approach a process is not associated with a device but with a policy, and each policy selects appropriate devices from those available in the pool of assets. Such a network would significantly simplify the management of processing and processing assets in complex and dynamic systems. It is important that these assets exist at a broad range of price points, particularly low ones, as that allows processing to be scaled in fine-grained increments.

The need for a specialised class of device is concomitant with the general trend in IT of functionality being progressively devolved to more specialised devices; something Cisco’s proposition goes against. For example, we are now at the beginning of a boom in wearable processing, where specialised devices aggregate into wireless personal area networks (WPANs), and intra-body processing, where specialised devices aggregate into body area networks (BANs). These kinds of devices are not just networked sensors; they do specialised data processing but are still networked. Obviously this proposition is not without concerns, but its processing mobility would address some of the aims fog computing is proposed to address. Intel’s Next Unit of Computing could be seen as a forerunner of this kind of asset, but the ideal class of device needs more directed development into the specialised roles, rather than being sold as low power versions of existing classes of device like desktop computers.
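A minimal sketch of this policy-bound placement idea follows. Everything here, names included, is a hypothetical illustration of the concept rather than any existing system.

```python
# Sketch: processes bound to policies rather than devices, as proposed
# above. A policy selects any suitable asset from the pool; if an asset
# disappears, a controller can re-place the process elsewhere.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Asset:
    name: str
    cpu_free: float      # fraction of CPU available
    latency_ms: float    # latency to the data the process serves

@dataclass
class Policy:
    max_latency_ms: float
    min_cpu_free: float

    def eligible(self, asset: Asset) -> bool:
        return (asset.latency_ms <= self.max_latency_ms
                and asset.cpu_free >= self.min_cpu_free)

def place(policy: Policy, pool: list) -> Optional[Asset]:
    """Pick the lowest-latency asset that satisfies the policy."""
    candidates = [a for a in pool if policy.eligible(a)]
    return min(candidates, key=lambda a: a.latency_ms, default=None)

pool = [
    Asset("desk-node-1", cpu_free=0.7, latency_ms=2.0),
    Asset("hall-node-3", cpu_free=0.1, latency_ms=1.0),   # busy
    Asset("dc-server-9", cpu_free=0.9, latency_ms=40.0),  # too far
]
sensor_policy = Policy(max_latency_ms=5.0, min_cpu_free=0.5)
print(place(sensor_policy, pool))  # -> desk-node-1
```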

Press release by Cisco on their vision for fog computing

Myrmidon Access Points

To make affordable WLANs capable of handling the large numbers of connections and high throughput most of us anticipate, I would like to see two classes of access point. The smart access points we already have would work alongside many more, much simpler, and therefore much cheaper ‘myrmidon’ access points that make most of the connections and shift most of the data. These two classes of access point must occupy the same space, so that the advantages of each are always available.
To be cheap enough to deploy in large numbers, myrmidons must be very simple, specialising only in high numbers of connections and/or high throughput. Ideally they should also be small and use little power, but most importantly they should require no individual human configuration or attention. As they need to coexist in the same space as smart Wi-Fi based APs, they would benefit from using out-of-band wireless technologies like 802.11ad / WirelessHD / WiGig, DASH7, Zigbee, and Li-Fi. The sophistication they need but lack would be delegated to specialised proximate controller devices, each orchestrating the configuration and behaviour of large numbers of myrmidons according to localised conditions and usage patterns, and in anticipation of events.
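Here is a minimal sketch of that controller/myrmidon split, with hypothetical names and parameters chosen purely for illustration:

```python
# Sketch: a controller orchestrating many simple myrmidon access points.
# Myrmidons hold no local policy; the controller pushes channel and
# admission settings in response to local conditions and expected events.

from dataclasses import dataclass

@dataclass
class MyrmidonConfig:
    channel: int          # e.g. a 60 GHz WiGig channel, out-of-band to Wi-Fi
    max_clients: int      # connection-count specialisation
    tx_power_dbm: float   # low power keeps range short and spectrum reusable

class Controller:
    """Orchestrates configuration for a fleet of myrmidons."""

    def __init__(self) -> None:
        self.fleet: dict[str, MyrmidonConfig] = {}

    def adopt(self, myrmidon_id: str) -> None:
        # Zero-touch: a newly powered myrmidon gets a default config,
        # with no individual human configuration required.
        self.fleet[myrmidon_id] = MyrmidonConfig(channel=2, max_clients=64,
                                                 tx_power_dbm=10.0)

    def anticipate_event(self, myrmidon_ids: list, expected_clients: int) -> None:
        # Raise admission limits ahead of a crowd (e.g. a stadium event).
        for mid in myrmidon_ids:
            self.fleet[mid].max_clients = expected_clients

ctl = Controller()
ctl.adopt("stand-A-01")
ctl.anticipate_event(["stand-A-01"], expected_clients=256)
print(ctl.fleet["stand-A-01"])
```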
Right now I would like to be installing myrmidons. Desks are an obvious place, but the lower price of this capability enables it to be installed in many more locations. For example, it would be much more affordable to fit out large venue halls, sports stadiums, and outdoor locations such as car parks and playgrounds. It would also help low margin businesses, such as mainstream hotels and high street shops, to offer a better wireless connectivity experience. As the Internet of Things, low cost robotics, WPANs, BANs, wearables, and wireless sensors become more common, we will increasingly need this kind of WLAN.

====================================

Update

This Wilocity chip sounds like a potential candidate for enabling the client-side connection to myrmidons.