
Reality Check: The inevitable convergence of IoT, analytics and visuals

The internet and the growing use of IoT are set to reshape innovation and how people interact with the world.

Editor’s Note: The RCR Wireless News Reality Check section is where C-level executives and advisory firms from across the mobile industry share unique insights and experiences.

Have you ever tried to eat a steak with a spoon? Of course not. Nor without a knife, a tool with origins some two-and-one-half million years ago that predates both the spoon (ancient Egypt) and the fork (the Byzantine Empire era). “Aha!” you say, “chopsticks!” (1200 B.C.). But even that technology relies on the knife to cut food into bite-sized pieces. The earliest knives were weapons and preparation tools, not used in the act of eating at all.

We now take these everyday items for granted, but they do indeed represent technological advances. Advances that reshaped human life. Where would we be without the fireproof vessel, more simply, a pot, to cook food in?

These devices also reflect a certain inevitability in societal advancement. Knives were born of necessity; eating with hands and fingers could not remain an acceptable practice if civilizations were to evolve. “Miss Manners”-style advice from 1480 counsels that “it is wrong to grab your food with both hands at once; meat should be taken with three fingers and too much should not be put into the mouth at the same time.”

As in the times of the knife, spoon, fork and cooking pot, the force of inevitability remains with us today. Consider, in more recent times, the ubiquitous mobile phone, which packs more computing power into a single device than NASA had at the time of the Apollo moon landing some 48 years ago. Feature and capability advancements in mobile phones have accelerated in recent years and continue apace, with consumers still lining up to buy the newest releases.

Stone-age era daggers and cutting tools, Neues Museum, Berlin, Germany. (Photo credit: N3N)

Other consumer appliances, whose advancement had long plateaued, are reawakening. It was only a matter of time before someone connected a microwave oven, refrigerator or thermostat to the internet.

Inevitability as the driver of invention

And so, propelled by inevitability, we continue along this same curve. How we live, play and operate our businesses is, and will continue to be, shaped by technological advancements.

Consumer technology gets a lot of press, and understandably so. But business leaders too have an opportunity to harness the force of inevitability to profoundly reshape their operations for the better.

Ericsson’s 2016 mobility report projected that by 2021 there will be some 16 billion connected internet of things devices, and that by 2018 the number of connected IoT devices will surpass the number of mobile phones. Cisco’s 2017 mobile forecast projected similar developments: beginning in 2019, internet of things connections will account for more mobile additions than smartphones, tablets and personal computers. By 2021, Cisco projects, 638 million IoT modules will be added, compared with 381 million smartphones, tablets and PCs. Wireless carriers are investing billions of dollars in “5G” networks between now and 2021 to support the onslaught of devices.

So, what will these new devices be, if not smartphones, tablets and personal computers? Business operations equipment: self-checkout registers, environmental sensors, warehouse picking equipment, fleet vehicles, wastewater pumps, steel rolling equipment and on and on. And cameras.

The same forces that have placed tiny, high-resolution cameras in every smartphone and given rise to picture- and video-sharing services like Instagram are enabling greater use of video in industry and the public sector. Various estimates place the smart city public safety market, including video surveillance, at $10 billion in 2021.

Data science has been rightly criticized for failing, to date, to help large organizations profoundly reshape the way they do business. That is due, in part, to analytics solutions that do not truly let enterprise leaders see how their business operates day to day. For all the talk of “visualization” in data analytics, there has been no video.

McKinsey predicts that video-analytics applications, expected to grow at a compound annual rate of more than 50% over the next five years, will contribute significantly to the expansion of IoT applications, which it projects will have an economic impact of at least $3.9 trillion.

According to McKinsey’s analysis, “demand for video-analytics applications will be greatest in the city, retail, vehicle and worksite settings by 2020. The most common use cases will involve optimizing operations, enhancing public safety, increasing employee productivity and improving maintenance.” Optimizing operations in cities and factories represents the largest total available market.

Massive deployment of low-cost cameras beyond the edge of the network will obviously increase network traffic. Higher-capacity, low-latency 5G networks will help, as will data compression. Video analytics can help, too.

Deployed in a fog computing model, where the data center is distributed, analytics performs a video compression function: only the analyses, not the entire video stream, need be sent across the network to the business operations center. Wireless access points can be bolstered with additional compute power to perform analytics at the edge, and analysis at scale can be achieved by distributing computation across multiple access points. Data mining and correlation of video events with other data sources may still be performed at the centralized data center.
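To make the compression effect concrete, here is a minimal Python sketch of edge-side processing. The detect_events() and send_upstream() functions are hypothetical placeholders, standing in for a real inference engine and the uplink to the operations center; the point is that a multi-megabyte frame reduces to a message of a few hundred bytes.

```python
import json
import time

FRAME_BYTES = 2_000_000  # rough size of one uncompressed HD frame

def detect_events(frame):
    """Placeholder for an on-device inference model (assumption)."""
    return [{"label": "person", "confidence": 0.91}]

def send_upstream(payload: bytes):
    """Placeholder for the uplink to the business operations center."""
    print(f"sent {len(payload)} bytes upstream, not {FRAME_BYTES:,}")

def process_frame(frame):
    events = detect_events(frame)
    if events:
        # Ship only the analysis, never the raw frame: bytes, not megabytes.
        send_upstream(json.dumps({"ts": time.time(), "events": events}).encode())

process_frame(bytes(FRAME_BYTES))  # simulate one camera frame
```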

Cognitive computing enabling real use cases

This analysis, and further correlation of the data, will be enabled by application programming interfaces. Machine learning algorithms that mine the data through automated, continuous processing will be accessed via API, ultimately enabling predictions of future events based on past behavior. Likewise, video analytics functions, which process the unstructured data contained within still image frames, will be invoked and their results shared through APIs.
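As a hedged illustration of that API pattern, the sketch below posts a single frame to a video-analytics endpoint and parses a structured result. The URL and response schema are invented for illustration, not taken from any particular product.

```python
import json
import urllib.request

ANALYTICS_URL = "http://edge-node.local/v1/analyze"  # hypothetical endpoint

def analyze_frame(jpeg_bytes: bytes) -> dict:
    """POST one frame to the (assumed) analytics service, return its JSON."""
    req = urllib.request.Request(
        ANALYTICS_URL,
        data=jpeg_bytes,
        headers={"Content-Type": "image/jpeg"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        # Assumed response shape: {"events": [{"label": ..., "confidence": ...}]}
        return json.load(resp)

try:
    print(analyze_frame(b"...")["events"])  # would be real JPEG bytes
except OSError as exc:
    print("no analytics node reachable:", exc)
```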

Of course, the killer app will be the correlation of events captured and processed from video with sensor data and other sources. Conventional data science can be applied to sensor data; natural language processing and text analytics can make sense of unstructured data such as emails, social media content, call center chat logs, service notes and more. Determination of intent, when correlated with actual actions as recorded on video, can be powerful, actionable information. And there is no reason this cannot be done in real time.
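A rough sketch of that correlation, using event and sensor-reading records invented for illustration: pair each video event with readings from the same site captured within a short time window.

```python
from datetime import datetime, timedelta

video_events = [  # invented example data
    {"ts": datetime(2017, 6, 1, 14, 3, 5), "site": "pump-7", "label": "person"},
]
sensor_readings = [
    {"ts": datetime(2017, 6, 1, 14, 3, 9), "site": "pump-7", "vibration": 4.8},
]

def correlate(events, readings, window=timedelta(seconds=10)):
    """Yield each video event with same-site readings inside the window."""
    for ev in events:
        nearby = [r for r in readings
                  if r["site"] == ev["site"] and abs(r["ts"] - ev["ts"]) <= window]
        if nearby:
            yield ev, nearby

for event, readings_nearby in correlate(video_events, sensor_readings):
    print(event["label"], "at", event["site"], "->", readings_nearby)
```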

All of this, rolled up together, amounts to cognitive computing: a self-learning, automated solution able to arrive at conclusions and make decisions. Combined with the internet of things, it allows for astounding possibilities. Such a system might, for example, determine from a CCTV feed the nefarious intent of a person approaching a remote location, turn on lights and an audio feed at the location, and trigger a call to the authorities.
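Stripped to its core, that scenario is a simple event-driven rule. A toy sketch, with placeholder actuator functions standing in for real control and alerting systems:

```python
from datetime import datetime

def lights_on(site): print(f"[{site}] lights on")                      # placeholder
def open_audio_feed(site): print(f"[{site}] audio feed live")          # placeholder
def notify_authorities(site): print(f"[{site}] authorities notified")  # placeholder

def handle_detection(site: str, label: str, when: datetime):
    """React to a person detected at a remote site outside business hours."""
    after_hours = when.hour >= 20 or when.hour < 6
    if label == "person" and after_hours:
        lights_on(site)
        open_audio_feed(site)
        notify_authorities(site)

handle_detection("substation-12", "person", datetime(2017, 6, 1, 23, 40))
```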

Brick-and-mortar retailers and hospitality providers might use cognitive computing, along with facial recognition and edge-computing-based video analytics, to produce “in-the-moment” offers for customers on arrival, driven by data in trade promotion systems and loyalty program databases. This is the point at which cognitive computing intersects with edge computing and visual analytics.
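A minimal sketch of that offer flow, assuming an invented loyalty database, an invented promotion table and a placeholder recognition model:

```python
from typing import Optional

LOYALTY_DB = {"member-381": {"name": "A. Shopper", "tier": "gold"}}  # invented
PROMOTIONS = {"gold": "20% off any espresso drink today"}            # invented

def recognize_face(image: bytes) -> Optional[str]:
    """Placeholder for an edge facial-recognition model (assumption)."""
    return "member-381"

def offer_on_arrival(camera_image: bytes) -> Optional[str]:
    """Join a face match at the door against loyalty and promotion data."""
    member_id = recognize_face(camera_image)
    if member_id in LOYALTY_DB:
        return PROMOTIONS.get(LOYALTY_DB[member_id]["tier"])
    return None

print(offer_on_arrival(b""))  # -> "20% off any espresso drink today"
```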

The “news” must be seen as well as read

A long-standing axiom in the news industry is that the news must be seen as well as read; pictures are mandatory. This is as true for laypersons and businesspeople as it is for journalists.

Ask a millennial which social media app they use most on their phone: invariably, the answer is Instagram, the picture-sharing app. With the steadily increasing technological power of handheld devices, it was inevitable that visual imagery would overtake the written word when it comes to bridging cultural, temporal and spatial gaps. (Snap, the parent company of Snapchat, went public in early March 2017 with a valuation of nearly $24 billion, and shares traded as much as 40% higher the next day.)

The brain ingests visual imagery differently than the written word. Images can and must be considered as a whole, while text is read in a straight line. With text, the onus is on the reader to form a mental picture.

Hyperlinks compound the problem with text by encouraging sequential jumps from page to page and from context to context; a few minutes of browsing Wikipedia provides all the evidence of this you need. Research from Microsoft has shown that from 2000 to 2015, the average person’s attention span dropped from 12 seconds to eight seconds, attributed in large part to increased mobile device use. (For comparison, a goldfish’s attention span clocks in at nine seconds.) One hasn’t much time to form a mental picture.

Medical science tells us that reading is handled in a different part of the brain from visual processing. The parietal lobe coordinates complex behaviors like reading and cross-modal processing (e.g. listening, writing, reading notes). The occipital lobe handles visual processing, perception, discrimination and spatial skills. The act of linearly reading text shifts brain activity away from visual processing components.

Research from Tufts University and San Jose State University over the past several years suggests that reading on computer or personal-device screens, now pervasive in the business world, inhibits the brain’s “deep reading” function, the very function necessary for deep understanding of the subject matter.

Children begin learning to read with texts heavy on pictures, complemented by small words and simple sentences. As their reading comprehension skills grow, they graduate to text-heavy books. This is how the deep reading function is developed.

The business implications of all of this are profound.

When deep reading is not possible for the leaders of a large enterprise, whether due to the fast pace of competitive operations or the subtle interference of a glowing screen, we will have no choice but to rely on the visual to make the best decisions. The circle back to images will then be complete. The ubiquitous presence of whiteboards and liquid-crystal display projectors in the workplace affirms the business need for the visual.

Our futures both as individuals and as enterprises depend on being able to drill down past the “big picture” and explore the complexities of the data, meanings and visuals that lie beneath. It is those underlying visuals and the activities they represent that are agents of change to the big picture. We need to see what we otherwise cannot.

For the consumer, the enabler is the internet. For the enterprise, the enabler is that organization’s very own internet of things. The combination of technology and visual capabilities enables us to break the interwoven shackles of compressed time scales and linear, over-hyperlinked text. It is the correlation of data from the enterprise’s internet of things, and the visual mapping of those correlations into a holistic view, that facilitates action.

The convergence of rapidly advancing technologies – cognitive computing, distributed data centers, data science, low-latency telecommunications, visual analytics – is the foundation for a visually driven enterprise decision-making world. Enterprise “news” in the form of visuals can be produced, delivered and evaluated in real time on the computer screen or on the smartphone. This technological convergence represents an opportunity for forward-thinking organizations to out-execute their competition.

Today, more than ever, seeing is believing. And acting. Enterprise operations, workflows and processes can all be visually condensed for rapid consumption. The visual will increasingly drive decisions through both traditional and artificial-intelligence processing models. Executives need to begin capitalizing on this paradigm shift now, and embedding it in their organizations’ culture.
