I recently asked Ekahau, the site survey tool maker, whether they could add a feature that visualises the differing capabilities of client transceivers. I asked because the range of Wi-Fi client ability continues to expand. Very soon the new Broadcom BCM4354 chip will deliver two-stream 802.11ac Wi-Fi to smartphones, while some very old clients remain in use alongside clients with poor design and/or build quality. Some websites report that the Samsung Galaxy S5, due in April, will use the BCM4354, and that the iPhone 6 will also use it. The BCM4354 will therefore rapidly expand the range of Wi-Fi client ability that WLANs are expected to manage. I think site survey tools should allow me to deliver reports that visualise this diversity of connection quality. If you have to provide a certain level of Wi-Fi service to a diverse set of clients, you need to know what those clients will experience, not just what can be obtained with the high-end equipment that WLAN professionals use.
This is a link to Cisco’s latest interesting and comprehensive set of statistics and predictions about mobile data traffic from 2013 to 2018. Note that the mobile data traffic discussed in it is traffic passing through mobile operator macrocells. This post is more broadly interested in the growth of wireless data transport, especially as it concerns Wi-Fi.
In their study Cisco note that “globally, 45 percent of total mobile data traffic was offloaded onto the fixed network through Wi-Fi or femtocell in 2013.” They add that “without offload, mobile data traffic would have grown 98 percent rather than 81 percent in 2013.” The study predicts 52% mobile offload by 2018. Nonetheless, it still predicts a 61% compound annual growth rate in global mobile data traffic from 2013 to 2018 (i.e. an increase by 10.6 times), from 1.5 exabytes per month at the end of 2013 to 15.9 exabytes per month by 2018. The study further predicts that “the average smartphone will generate 2.7 GB of traffic per month by 2018, a fivefold increase over the 2013 average of 529 MB per month”, i.e. roughly a 38% compound annual growth rate.

In more nascent areas the study projects that “globally, M2M connections will grow from 341 million in 2013 to over 2 billion by 2018, a 43 percent CAGR”. It does not estimate traffic volumes for M2M, but does note its overlap with wearables. The study estimates that “there will be 177 million wearable devices globally, growing eight-fold from 22 million in 2013 at a CAGR of 52 percent”, but “only 13 percent will have embedded cellular connectivity by 2018, up from 1 percent in 2013”. The study also considers that “globally, traffic from wearables will account for 0.5 percent of smartphone traffic by 2018” and “grow 36-fold from 2013 to 61 petabytes per month by 2018 (CAGR 105 percent)”. The study also states that “globally, traffic from wearable devices will account for 0.4 percent of total mobile data traffic by 2018, compared to 0.1 percent at the end of 2013”. The study projects no significant change to the order of the share of mobile data type by 2018: mobile video 69.1%, mobile web/data 11.7%, mobile audio 10.6%, mobile M2M 5.7%, and mobile file sharing 2.9%.
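For reference, the growth multiples and CAGRs above follow directly from the study’s start and end figures. A quick check of the global traffic numbers (the `cagr` helper below is my own, not from the study):

```python
# Verifying the compound annual growth rate behind Cisco's headline figures.
# cagr(start, end, years) is a generic helper, not anything from the study.

def cagr(start, end, years):
    """Compound annual growth rate over `years` periods."""
    return (end / start) ** (1 / years) - 1

# Global mobile data traffic: 1.5 EB/month (end of 2013) -> 15.9 EB/month (2018).
growth = 15.9 / 1.5            # 10.6-fold increase
rate = cagr(1.5, 15.9, 5)      # ~0.60, close to the ~61% the study quotes

print(f"{growth:.1f}x, CAGR {rate:.1%}")   # 10.6x, CAGR 60.3%
```

The small gap between 60.3% and the quoted 61% is just rounding in the study’s exabyte figures.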
The study expects the following monthly data consumption per device type by 2018: laptop 5,095 MB; 4G tablet 9,183 MB; tablet 5,609 MB; 4G smartphone 5,371 MB; smartphone 2,672 MB; wearable device 345 MB; M2M module 451 MB; non-smartphone 45 MB.
As we can see from Cisco’s predictions, Wi-Fi is set to become even more significant for offloading. To be clear, in their study offloading pertains to devices enabled for both cellular and Wi-Fi connectivity, excluding laptops. Their study says that “offloading occurs at the user/device level when one switches from a cellular connection to Wi-Fi/small cell access”. Eventually offloading should be substantially simplified by Hotspot 2.0-enabled equipment, although it will take years for Wi-Fi CERTIFIED Passpoint equipment deployments to reach significant levels. The impact of Hotspot 2.0 is not mentioned in Cisco’s study.
Obviously Wi-Fi is far more than just an offloading adjunct for mobile operators. It also provides adaptability to local needs, containment of data, and will facilitate the Internet of Things and the Internet of Everything. Ownership of wireless network infrastructure allows wireless data transport to be better matched to local needs; for example providing more control of costs, throughput, latency, and service reliability, along with competitive advantages through differentiating functionality. Wireless network infrastructure ownership also allows data to remain local, circumventing the security concerns and compliance requirements associated with data passing through equipment owned by others. Finally, ownership is the only viable approach for connecting massive numbers of diverse M2M and wearable devices to the Internet of Things and the Internet of Everything. ‘Fog computing’ promotes hyper local data processing that it argues is necessary to manage the rapid growth in transported data that is expected from the Internet of Everything. Naturally this makes no sense without hyper local connectivity that currently is dominated by Wi-Fi. Data cabling is clearly not adaptable enough to handle massive, transient, mobile, and rapidly scaling connectivity. So Wi-Fi is destined to continue its rapid growth, not just on its own merits as a general purpose wireless data transport that will continue to gain new uses, but also as a convenient offloading platform for mobile operators and a network edge for the Internet of Things and the Internet of Everything.
Many organisations, recognising the significance of Wi-Fi, have plans to expand its abilities with improved standards, more licence-free electromagnetic spectrum, and enhanced functionality and technology. Others are developing wireless data transport systems with more specialised uses to accompany Wi-Fi, such as Bluetooth, Zigbee, WirelessHD, and NFC. However, wireless data transport for the Internet of Things and Internet of Everything needs wireless access points to be low cost so they can be deployed in large quantities. They need to handle very high numbers of transient and mobile connections, and provide high throughput for uses such as video. They also need to operate at short range to make better use of their licence-free spectrum in a space. Finally, they should operate out-of-band with other transceivers to maintain service levels. These requirements are difficult to address coherently. We have previously suggested the concept of a ‘myrmidon’ access point: in essence a simple (and therefore low cost) short-range access point, operating out-of-band to Wi-Fi, that would specialise in handling very high numbers of connections and high throughput. Myrmidons would defer all or most other functionality to proximate and far fewer, more intelligent (and so more expensive) access points and/or other specialist ‘orchestration devices’. WiGig is an obvious choice for myrmidons as it is out-of-band to Wi-Fi, has short range and high throughput, and is controlled by the Wi-Fi Alliance. Certainly Cisco’s predictions concerning the numbers of connections from M2M and wearable devices suggest pause for thought, especially in light of how few are predicted to have their own cellular connectivity. Not using Wi-Fi is an expensive and slow-to-deploy route. This is why we believe the myrmidon access point concept is the most natural approach, as it can be more easily integrated with Wi-Fi.
Nonetheless, other approaches using Wi-Fi as it currently exists are possible, especially when more spectrum is made available.
Cisco says its vision for fog computing is to enable its devices to do non-network-specific data processing using their new IOx capability. They describe this processing as “at the network edge”. They argue that reducing the distance data travels over the network should reduce total network load and latency, and so ultimately the cost of transporting the increasing amounts of data arising at the network edge from developments like the Internet of Things (IoT).
Development of Cisco’s scheme is very involved for the benefits it can deliver. Much data processing involves fetching data from network-distributed sources such as databases, file servers, and the Internet, so for some data processing tasks network edge processing may even make the situation worse. Consequently, while their scheme can do what they say, it is by no means a general purpose solution.
If Cisco’s concern is reducing network load, latency, and so data transport costs, then it is worth pointing out that much more capable network equipment than is typically being used has been available for many years. The problem with it is affordability. No doubt innovation to enable processing in network equipment will find uses, but for handling increasing network load and providing acceptable latency it is simpler, cheaper, and more generally applicable to innovate around making better performing network equipment more affordable.
Of more concern is that Cisco’s approach introduces the potential for rogue and badly written software to negatively impact network infrastructure. It will also probably lead to a fragmented market in programming for network equipment. Even if all vendors agree to support at least one particular programming language and runtime environment, vendors will inevitably provide APIs that make use of their specific equipment features. Once these are used, this will tend to lead to vendor lock-in.
Enabling general data processing in network equipment is unnecessary, a suboptimal interpretation of the otherwise good vision of fog computing, and likely to create more problems than it solves.
Processing in fog computing could instead be handled by many specialised processing assets, managed by policy, running a single standardised runtime environment, with the ability to move running processes between assets on demand and as directed by automated policy-based controllers. With this approach a process is associated not with a device but with a policy; each policy selects appropriate devices from those available in the pool of assets. Such a network would significantly simplify managing processing and processing assets in complex and dynamic systems. It is important that these assets exist at a broad range of price points, particularly low prices, as that allows processing to be scaled in fine-grained increments.

The need for a specialised class of device is concomitant with the general trend in IT of functionality progressively devolving to more specialised devices; something Cisco’s proposition goes against. For example, we are now at the beginning of a boom in wearable processing, where specialised devices aggregate into wireless personal area networks (WPANs), and intra-body processing, where specialised devices aggregate into body area networks (BANs). These kinds of devices are not just networked sensors; they do specialised data processing but are still networked.

Obviously this proposition is not without concerns, but its processing mobility would address some of the aims that fog computing is proposed to address. Intel’s Next Unit of Computing could be seen as a forerunner of this kind of asset, but the ideal class of device needs more directed development into these specialised roles rather than being sold as low-power versions of existing classes of devices such as desktop computers.
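The policy-driven placement idea above can be sketched very roughly in code. This is a hypothetical illustration only — the class names, policy fields, and selection rule are mine, not any existing fog computing API:

```python
# Hypothetical sketch: processes bound to policies, not devices.
# A policy filters and ranks the shared pool of processing assets;
# a controller would (re)place each process on the best match on demand.

from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    cpu_free: float      # fraction of CPU currently free
    latency_ms: float    # network latency to the relevant data source

@dataclass
class Policy:
    min_cpu_free: float
    max_latency_ms: float

    def pick(self, pool):
        """Return the policy-compliant asset with the lowest latency."""
        candidates = [a for a in pool
                      if a.cpu_free >= self.min_cpu_free
                      and a.latency_ms <= self.max_latency_ms]
        return min(candidates, key=lambda a: a.latency_ms, default=None)

pool = [Asset("edge-1", cpu_free=0.7, latency_ms=2.0),
        Asset("edge-2", cpu_free=0.2, latency_ms=1.0),
        Asset("core-1", cpu_free=0.9, latency_ms=25.0)]

video_policy = Policy(min_cpu_free=0.5, max_latency_ms=10.0)
print(video_policy.pick(pool).name)   # edge-1: edge-2 is too busy, core-1 too far
```

The point of the sketch is the indirection: when an asset fails or the pool changes, re-running the policy gives the controller a new placement without any process-to-device binding to unpick.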