Fog computing

Cisco says its vision for fog computing is to enable its devices to do non-network-specific data processing using its new IOx capability. Cisco describes this processing as happening “at the network edge”, arguing that the shorter distance data travels over the network should reduce total network load and latency, and so ultimately the cost of transporting the increasing amounts of data arising at the network edge from developments like the internet of things (IoT).

Developing for Cisco’s scheme is very involved for the benefits it can deliver. Much data processing involves fetching data from network-distributed sources such as databases, file servers, and the Internet, so for some data processing tasks network edge processing may even make the situation worse. Consequently, while the scheme can do what Cisco says, it is by no means a general-purpose solution.

If Cisco’s concern is reducing network load, latency, and so data transport costs, then it is worth pointing out that network equipment far more capable than that typically deployed has been available for many years; the problem with it is affordability. No doubt innovation to enable processing in network equipment will find uses, but for handling increasing network load and providing acceptable latency it is simpler, cheaper, and more generally applicable to innovate around making better-performing network equipment more affordable.

Of more concern is that Cisco’s approach introduces the potential for rogue or badly written software to negatively impact network infrastructure. It will also probably lead to a fragmented market in programming for network equipment: even if all vendors agree to support at least one particular programming language and runtime environment, they will inevitably provide APIs that expose their equipment’s specific features, and once these are used they will tend to lead to vendor lock-in.

Enabling general data processing in network equipment is unnecessary, a suboptimal interpretation of the otherwise good vision of fog computing, and likely to create more problems than it solves.

Processing in fog computing could instead be handled by many specialised processing assets, managed by policy, running a single standardised runtime environment, and with the ability to move running processes between assets on demand and as directed by automated policy-based controllers. With this approach a process is associated not with a device but with a policy, and each policy selects appropriate devices from those available in the pool of assets. Such a network would significantly simplify managing processing and processing assets in complex and dynamic systems. It is important that these assets exist at a broad range of price points, particularly low ones, as that allows processing to be scaled in fine-grained increments.

The need for a specialised class of device fits the general trend in IT of functionality being progressively devolved to more specialised devices; something Cisco’s proposition goes against. For example, we are now at the beginning of a boom in wearable processing, where specialised devices aggregate into wireless personal area networks (WPANs), and intra-body processing, where specialised devices aggregate into body area networks (BANs). These devices are not just networked sensors; they do specialised data processing while remaining networked.

Obviously this proposition is not without concerns, but its processing mobility would address some of the aims fog computing is proposed to address. Intel’s Next Unit of Computing could be seen as a forerunner of this kind of asset, but the ideal class of device needs more directed development into the specialised roles, rather than being sold as low-power versions of existing classes of device like desktop computers.
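The policy idea above can be sketched in a few lines. This is only an illustration of the concept of binding processes to policies rather than devices; the asset attributes, policy rules, and names here are all invented for the sake of the example, not part of any real scheme.

```python
# Sketch: a policy selects a device from a pool of processing assets;
# a process is bound to the policy, so re-running the selection against
# a changed pool effectively "moves" the process.
from dataclasses import dataclass


@dataclass
class Asset:
    name: str
    cpu_free: float       # fraction of CPU currently free
    latency_ms: float     # network latency from the data source
    cost_per_hour: float


@dataclass
class Policy:
    """A policy selects assets; processes associate with a policy, not a device."""
    max_latency_ms: float
    prefer: str = "cost"  # "cost" or "latency"

    def select(self, pool: list[Asset]):
        # Filter to assets that satisfy the policy's hard constraints...
        candidates = [a for a in pool
                      if a.latency_ms <= self.max_latency_ms and a.cpu_free > 0.2]
        if not candidates:
            return None
        # ...then rank the remainder by the policy's preference.
        key = (lambda a: a.cost_per_hour) if self.prefer == "cost" else (lambda a: a.latency_ms)
        return min(candidates, key=key)


def place(process: str, policy: Policy, pool: list[Asset]):
    """Re-evaluate the policy against the current pool; a controller would
    call this on demand or when conditions change, migrating the process."""
    target = policy.select(pool)
    return target.name if target else None


pool = [
    Asset("edge-box-1", cpu_free=0.5, latency_ms=5, cost_per_hour=0.10),
    Asset("dc-server-1", cpu_free=0.9, latency_ms=40, cost_per_hour=0.05),
]
policy = Policy(max_latency_ms=20)
print(place("sensor-aggregator", policy, pool))  # edge-box-1
```

The point of the sketch is the indirection: nothing about `sensor-aggregator` names a device, so adding cheap assets to the pool, or tightening a policy, changes placement without touching the process itself.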

Press release by Cisco on their vision for fog computing

Myrmidon Access Points

To make affordable WLANs capable of handling the large numbers of connections and the high throughput most of us anticipate, I would like two classes of access point: the smart access points we already have, working alongside many more, much simpler, and therefore much cheaper myrmidon access points that make most of the connections and shift most of the data. These two classes of access point must occupy the same space so that the advantages of each are always available.
To be cheap enough to deploy in large numbers, myrmidons must be very simple, specialising only in high connection counts and / or high throughput. Ideally they should also be small and use little power, but most importantly they should require no individual human configuration or attention. As they need to coexist in the same space as smart Wi-Fi based APs, they would benefit from using out-of-band wireless technologies like 802.11ad / WirelessHD / WiGig, DASH7, Zigbee, and Li-Fi. The sophistication they need but lack is delegated to specialised proximate controller devices, each of which orchestrates the configuration and behaviour of large numbers of myrmidons according to localised conditions and usage patterns, and in anticipation of events.
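To make the controller-and-myrmidon split concrete, here is a minimal sketch of the kind of orchestration logic a proximate controller might run. Everything here is hypothetical: the device names, the thresholds, and the tactic (nudging clients off crowded myrmidons by trimming transmit power) are illustrative assumptions, not a real protocol.

```python
# Sketch: the myrmidon only holds settings; all decision-making lives in
# the controller, which rebalances load across its flock of myrmidons.
from dataclasses import dataclass


@dataclass
class Myrmidon:
    ap_id: str
    clients: int
    channel: int
    tx_power_dbm: int = 10   # illustrative default


class Controller:
    def __init__(self, aps):
        self.aps = aps

    def rebalance(self):
        """Shed load from crowded myrmidons by lowering their transmit power,
        nudging clients toward quieter neighbours (and vice versa)."""
        avg = sum(ap.clients for ap in self.aps) / len(self.aps)
        for ap in self.aps:
            if ap.clients > 1.5 * avg:
                ap.tx_power_dbm = max(4, ap.tx_power_dbm - 2)   # crowded: shrink cell
            elif ap.clients < 0.5 * avg:
                ap.tx_power_dbm = min(14, ap.tx_power_dbm + 2)  # quiet: grow cell


aps = [Myrmidon("desk-01", clients=40, channel=1),
       Myrmidon("desk-02", clients=5, channel=5),
       Myrmidon("desk-03", clients=9, channel=9)]
ctl = Controller(aps)
ctl.rebalance()
print([(ap.ap_id, ap.tx_power_dbm) for ap in aps])
```

The design point is that nothing in `Myrmidon` decides anything, which is what keeps the device simple and cheap; the controller can be replaced or upgraded without touching the myrmidons at all.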
Right now I would like to be installing myrmidons. Desks are an obvious place, but the lower price of this capability enables installation in many more locations. For example, it would be much more affordable to fit out large venue halls, sports stadiums, and outdoor locations such as car parks and playgrounds. It would also help low-margin businesses such as mainstream hotels and high street shops to offer a better wireless connectivity experience. As the internet of things, low-cost robotics, WPANs, BANs, wearables, and wireless sensors become more common, so we will need this kind of WLAN.



This Wilocity chip sounds like a potential candidate to enable the client-side connection to myrmidons

Wireless sensors are coming

The DASH7 Alliance promotes the ISO 18000-7 standard for wireless sensor networking, and on 2013-09-25 it announced the public release of the first version of the DASH7 Alliance Protocol. Low implementation costs will be important in its competition with Zigbee. Operating at a lower frequency, DASH7 has an inherent range advantage but lower throughput; it also specifies lower power usage than Zigbee, but weaker security features. Although Zigbee and DASH7 have overlapping applications, their different characteristics should allow both to find a niche.