Wi-Fi Aware

The ability for Wi-Fi enabled devices to automatically discover each other and understand each other's public Wi-Fi offerings is a powerful enabler of point-to-point Wi-Fi connectivity. Standards-based ad hoc point-to-point Wi-Fi connections currently require so much manual arrangement that they have seen little use. Initiating such connections via Bluetooth and NFC has lowered the hurdle, but potential connections discovered pre-emptively via Wi-Fi Aware will make it far easier still.
As is so often the case, the full potential of a technology is unlocked by widely adopted, ideally universal, standards; Wi-Fi Aware therefore promises to create new possibilities.

Mobile network operators using unlicensed spectrum

Obviously, MNOs using unlicensed spectrum disadvantages others operating in that spectrum. The freedom to set up wireless networks for distinct needs, without the burden of licensing, is an important right that has enabled, and will continue to enable, innovation and advances in wireless technology. If allowed, MNOs could easily subvert that resource.
The existence of the IEEE 802.19 Wireless Coexistence Working Group, which addresses coexistence between the wireless standards of unlicensed devices, and in particular its Coexistence in Unlicensed Bands Study Group, is late but welcome. Perhaps equipment operating in unlicensed spectrum will ultimately be required to conform to a coexistence protocol mandated by ETSI and its counterparts. Although an extra burden on those developing for unlicensed frequencies, it would ultimately be a benefit as we move to higher utilisation.

MU-MIMO soon and trends

In April Qualcomm announced their forthcoming 802.11ac MU-MIMO chipsets. These include the QCA 9990 and QCA 9992 chipsets for business-grade access points, with 4- and 3-stream radios respectively, while their client device chipsets provide 1 and 2 streams. All of these MU-MIMO chipsets support channel widths up to 80 MHz, not 160 MHz. The highest link speed is therefore 1.73 Gbps on the 4-stream access point and ‘home router’ chipsets, while the 2-stream client device chipsets top out at 867 Mbps. So, for an all-Qualcomm setup, the upper limits for access points and ‘home routers’ are more usefully considered as aggregate capacity limits; for example, two 2-stream clients could in theory transfer at 1.73 Gbps combined. In practice, of course, it is likely to be about half of that or less. As these chipsets were “expected to sample in the second quarter of 2014”, we can expect them in products in the second half of 2014, along with some from their competitors – Broadcom and Quantenna have made similar announcements.
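The 433/867/1733 Mbps figures above follow directly from the 802.11ac OFDM parameters. As a back-of-envelope check, here is a minimal Python sketch using the published values for an 80 MHz channel at the top modulation and coding scheme (256-QAM, rate 5/6, short guard interval):

```python
# Rough 802.11ac (VHT) PHY rate calculation for an 80 MHz channel at MCS 9.
# Values from the 802.11ac MCS tables; this is a back-of-envelope sketch.

DATA_SUBCARRIERS_80MHZ = 234   # data subcarriers in an 80 MHz VHT channel
BITS_PER_SUBCARRIER = 8        # 256-QAM carries 8 bits per subcarrier
CODING_RATE = 5 / 6            # MCS 9 coding rate
SYMBOL_TIME_US = 3.6           # OFDM symbol duration with short guard interval

def phy_rate_mbps(spatial_streams):
    """Peak PHY rate in Mbps for the given number of spatial streams."""
    bits_per_symbol = (DATA_SUBCARRIERS_80MHZ * BITS_PER_SUBCARRIER
                       * CODING_RATE * spatial_streams)
    return bits_per_symbol / SYMBOL_TIME_US  # bits per microsecond == Mbps

print(round(phy_rate_mbps(1)))  # ~433 Mbps per stream
print(round(phy_rate_mbps(2)))  # ~867 Mbps, the 2-stream client figure
print(round(phy_rate_mbps(4)))  # ~1733 Mbps, the 4-stream AP figure
```

These are raw link rates; real-world throughput is considerably lower, which is consistent with the "about half of that or less" caveat above.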

With MU-MIMO, access points can service multiple stations simultaneously, so the available streams can be more fully utilised. The most important effect of this is to increase the effective capacity of the spectrum. Obviously this is good news for WLAN owners and managers whose spectrum is operating around capacity. Although MU-MIMO does not make an individual connection faster than before, it does give clients more uncontended air time, so they should also feel the benefit as better transfer times.

As MU-MIMO is computationally expensive, we are going to see more PoE+ equipment. With more channels available in the 5 GHz band, and more being added, it makes sense to deploy access points with two or more radios and omnidirectional antennas where spectrum is highly utilised. This will add further to power requirements, so we may see a growing market for mid-span PoE+ injectors.
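A quick power-budget check shows why PoE+ rather than plain PoE. The per-device wattage for the AP below is an illustrative assumption, not a vendor figure; the PoE limits are the published 802.3af/at values:

```python
# Rough power-budget check for why dual-radio MU-MIMO access points push
# installations toward PoE+. PoE limits are the published 802.3af/at figures;
# the AP draw is an illustrative assumption.

POE_PD_WATTS = 12.95      # 802.3af power available at the powered device
POE_PLUS_PD_WATTS = 25.5  # 802.3at (PoE+) power available at the device

ap_draw = 18.0  # assumed draw for a dual-radio MU-MIMO AP (hypothetical)

print("Plain PoE sufficient:", ap_draw <= POE_PD_WATTS)   # False
print("PoE+ sufficient:", ap_draw <= POE_PLUS_PD_WATTS)   # True
```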

802.11ac and MU-MIMO are arriving at a good time, as expectations of and uses for Wi-Fi are soaring; a trend that will continue as the Internet of Things and wearable devices gain traction. If rumours are correct, the ever-growing bandwidth needs of static and moving images will soon be joined by the demands of holographic displays. Obviously, with all this data aggregating over Wi-Fi onto Ethernet, we need 10 GbE at a sensible price soon.

8x8x8 MU-MIMO Wi-Fi in 2015

Quantenna says it plans to release 8x8x8 MU-MIMO chipsets in 2015.
This will be a very important development for anyone who owns a Wi-Fi network, and of course for WLAN/LAN professionals.
8-stream MU-MIMO can provide very high aggregate throughput to the LAN, making more efficient use of the Wi-Fi infrastructure but requiring a 10 GbE LAN to make full use of it.
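The need for 10 GbE follows from simple arithmetic on the per-stream 802.11ac PHY rates (433 Mbps per stream at 80 MHz, 867 Mbps at 160 MHz, at the top MCS). A quick sketch:

```python
# Back-of-envelope check of why 8-stream MU-MIMO pushes the wired side
# towards 10 GbE, using the published per-stream 802.11ac PHY rates.

PER_STREAM_MBPS = {"80 MHz": 433, "160 MHz": 867}

for width, per_stream in PER_STREAM_MBPS.items():
    aggregate = 8 * per_stream  # eight spatial streams
    print(f"{width}: {aggregate} Mbps aggregate; "
          f"1 GbE sufficient: {aggregate <= 1000}; "
          f"10 GbE sufficient: {aggregate <= 10000}")
```

Even at 80 MHz the aggregate PHY rate (~3.5 Gbps) comfortably exceeds a single gigabit uplink, and at 160 MHz it approaches 7 Gbps.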

The balance of wired and wireless

Recently I installed a 4G LTE router at a site with a poor wired Internet service, no plans for its improvement, but a choice of proximate 4G LTE base stations. The resulting wireless throughput is better, the service more reliable, and the prospect of further improvement imminent – partly because of the increasing competition between the wireless Internet service providers (WISPs) offering 4G.

According to its ISP, the wired infrastructure is not economically viable to upgrade. This is a surprising statement given that the area is very densely populated with consumers and wired infrastructure. Perhaps what they mean is that not enough disgruntled customers are leaving for 4G yet to justify spending on upgrading their service. This is not the first area where I have come across that attitude from an ISP. The first time I was told this was also in a built-up area, but one with fewer consumers and more businesses that are probably paying for leased lines anyway, so it was easier to see why there.

This attitude made me wonder where it is economically viable to put in at least fibre to the cabinet. Obviously the WISP base stations that serve this recent site need to aggregate a lot of data, and at least one of them has no antennas for wireless backhaul, so I suspect it uses fibre. I think this is a case where wired infrastructure can more easily make money: it has the throughput advantage (at the moment) that can justify the cost of digging in a heavily developed area with strong property laws. I expect ISPs to continue to cede customers to WISPs, and wired infrastructure to retrench further and focus on highly aggregated throughput.

Now suppose that some clever researcher finds some scrap of information intrinsic to electromagnetic radiation that allows distinct transceivers, or even just groups of them, to be identified. This would make a dramatic difference to wireless communication because spectrum would become less contended. In fact something like this has already been announced in the shape of pCell. I hope for, and expect, more innovations of this kind. When they arrive they will have a profound effect on wireless communication, and wires will retrench further.

Fog computing

Cisco says its vision for fog computing is to enable its devices to do non-network-specific data processing using their new IOx capability, processing it describes as “at the network edge”. The argument is that shortening the distance data travels over the network should reduce total network load and latency, and so ultimately the cost of transporting the increasing amounts of data arising at the network edge from developments like the Internet of Things (IoT).

The development effort Cisco’s scheme requires is considerable for the benefits it can deliver. Much data processing involves fetching data from network-distributed sources such as databases, file servers, and the Internet, so for some data processing tasks network edge processing may even make the situation worse. Consequently, while the scheme can do what Cisco says, it is by no means a general-purpose solution.

If Cisco’s concern is reducing network load, latency, and so data transport costs, then it is worth pointing out that network equipment far more capable than what is typically deployed has been available for many years. The problem is affordability. No doubt innovation to enable processing in network equipment will find uses, but for handling increasing network load and providing acceptable latency it is simpler, cheaper, and more generally applicable to innovate around making better-performing network equipment more affordable.

Of more concern is that Cisco’s approach introduces the potential for rogue or badly written software to negatively impact network infrastructure. It will also probably lead to a fragmented market in programming for network equipment: even if all vendors agree to support at least one common programming language and runtime environment, they will inevitably provide APIs that expose their equipment-specific features, and once those are used the result tends to be vendor lock-in.

Enabling general data processing in network equipment is unnecessary, a suboptimal interpretation of the otherwise good vision of fog computing, and likely to create more problems than it solves.

Processing in fog computing could instead be handled by many specialised processing assets, managed by policy and running a single standardised runtime environment, with the ability to move running processes between assets on demand and as directed by automated policy-based controllers. With this approach a process is not associated with a device but with a policy, and each policy selects appropriate devices from those available in the pool of assets. Such a network would significantly simplify managing processing and processing assets in complex and dynamic systems. It is important that these assets exist at a broad range of price points, particularly low ones, as that allows processing to be scaled in fine-grained increments.

The need for a specialised class of device is concomitant with the general trend in IT of functionality being progressively devolved to more specialised devices; something Cisco’s proposition goes against. For example, we are now at the beginning of a boom in wearable processing, where specialised devices aggregate into wireless personal area networks (WPANs), and intra-body processing, where specialised devices aggregate into body area networks (BANs). These devices are not just networked sensors; they do specialised data processing while remaining networked.

Obviously this proposition is not without concerns, but its processing mobility would address some of the aims fog computing is proposed to address. Intel’s Next Unit of Computing could be seen as a forerunner of this kind of asset, but the ideal class of device needs development directed at these specialised roles, rather than being sold as low-power versions of existing classes of devices like desktop computers.
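To make the idea of policy-bound processes concrete, here is a minimal sketch of a controller selecting an asset from the pool according to a policy rather than a fixed device binding. All names, fields, and thresholds here are illustrative assumptions, not any real product's API:

```python
# Hypothetical sketch: processes are associated with a policy, not a device.
# A controller picks a suitable asset from the pool for each placement.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Asset:
    name: str
    free_cpu: float    # fraction of CPU currently available
    latency_ms: float  # network latency from the data source

@dataclass
class Policy:
    min_free_cpu: float
    max_latency_ms: float

def place(policy: Policy, pool: list) -> Optional[Asset]:
    """Select the lowest-latency asset satisfying the policy, or None."""
    eligible = [a for a in pool
                if a.free_cpu >= policy.min_free_cpu
                and a.latency_ms <= policy.max_latency_ms]
    return min(eligible, key=lambda a: a.latency_ms, default=None)

pool = [Asset("edge-1", 0.2, 2), Asset("edge-2", 0.7, 5), Asset("dc-1", 0.9, 40)]
policy = Policy(min_free_cpu=0.5, max_latency_ms=10)
print(place(policy, pool).name)  # edge-2
```

Moving a running process is then just re-running `place` against current pool state and migrating if a better asset is found; the process never cares which box it lands on.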

Press release by Cisco on their vision for fog computing

Myrmidon Access Points

To make affordable WLANs capable of handling the large numbers of connections and the high throughput most of us anticipate, I would like two classes of access point. The smart access points we already have would work alongside many more much simpler, and therefore much cheaper, myrmidon access points that make most of the connections and shift most of the data. These two classes of access point must occupy the same space so that the advantages of each are always available.
To be cheap enough to deploy in large numbers, myrmidons must be very simple, specialising only in high connection counts and/or high throughput. Ideally they should also be small and use little power, but most importantly they should require no individual human configuration or attention. As they need to coexist in the same space as smart Wi-Fi based APs, they would benefit from using out-of-band wireless technologies such as 802.11ad / WirelessHD / WiGig, DASH7, Zigbee, and Li-Fi. The sophistication they need but lack is delegated to specialised proximate controller devices. Each controller orchestrates the configuration and behaviour of large numbers of myrmidons according to localised conditions and usage patterns, and in anticipation of events.
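One of the simplest jobs such a controller would do is channel planning across its flock, so that neighbouring myrmidons avoid co-channel interference. A toy sketch (names and the greedy-colouring approach are illustrative assumptions, not a real controller's algorithm):

```python
# Illustrative sketch of a proximate controller assigning channels to simple
# myrmidon APs so that radio neighbours avoid co-channel overlap.

CHANNELS = [36, 40, 44, 48]  # example non-overlapping 5 GHz channels

def assign_channels(myrmidons, neighbours):
    """Greedy graph colouring: give each AP a channel its neighbours don't use."""
    assignment = {}
    for ap in myrmidons:
        used = {assignment[n] for n in neighbours.get(ap, []) if n in assignment}
        assignment[ap] = next(c for c in CHANNELS if c not in used)
    return assignment

aps = ["m1", "m2", "m3"]                               # three myrmidons in a row
adj = {"m1": ["m2"], "m2": ["m1", "m3"], "m3": ["m2"]}  # who can hear whom
print(assign_channels(aps, adj))  # {'m1': 36, 'm2': 40, 'm3': 36}
```

The point is that all of this logic lives in the controller; the myrmidon only has to accept a channel number it is told.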
Right now I would like to be installing myrmidons. Desks are an obvious place, but the lower price of this capability enables it to be installed in many more locations. For example, it would be much more affordable to fit out large venue halls, sports stadiums, and outdoor locations such as car parks and playgrounds. It would also help low-margin businesses such as mainstream hotels and high street shops to offer a better wireless connectivity experience. As the Internet of Things, low-cost robotics, WPANs, BANs, wearables, and wireless sensors become more common, so we will need this kind of WLAN.



This Wilocity chip sounds like a potential candidate for enabling the client-side connection to myrmidons.

Wireless sensors are coming

The DASH7 Alliance promotes the ISO 18000-7 standard for wireless sensor networking.
On 2013-09-25 it announced the public release of the first version of the DASH7 Alliance Protocol.
Low implementation costs will be important in its competition with Zigbee.
Operating at a lower frequency, DASH7 has an inherent range advantage but lower throughput.
DASH7 also specifies lower power usage than Zigbee, but offers fewer security features.
Although Zigbee and DASH7 have overlapping applications, their different characteristics should allow each to find a niche.
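The range advantage of operating at a lower frequency can be quantified with the standard free-space path loss formula. A rough comparison of DASH7's 433 MHz band against Zigbee's 2.4 GHz band (a back-of-envelope sketch ignoring antennas, obstacles, and regulatory power limits):

```python
# Free-space path loss comparison: DASH7 at 433 MHz vs Zigbee at 2.4 GHz.
# Friis FSPL in dB = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB for a given distance and frequency."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# At any fixed distance, the difference depends only on the frequency ratio.
advantage = fspl_db(0.1, 2400) - fspl_db(0.1, 433)
print(f"433 MHz sees ~{advantage:.1f} dB less path loss than 2.4 GHz")
```

Roughly 15 dB less path loss translates into substantially greater range at the same transmit power, which is the inherent advantage noted above.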

Wireless Power

Many companies are developing wireless power systems, but one seems to have an advantage.

Ossia Inc. is developing technology it claims can charge devices at distances of up to 30 feet, where others claim only millimetres or centimetres.

They say they are in talks to bring their Cota system to market and think it should be in consumer products by 2015.

At Wireless Head our business is the application of wireless technology to business, so if Ossia Inc. can free us from that last wire it is great news for us.

Take a look at this presentation by the founder and CEO of Ossia for TechCrunch.