Your Smartphone is Talking to You: Digital Assistants in the Age of Edge

Daniel Sim

Artificial intelligence in smartphones has improved significantly, with companies such as Google, Samsung, Apple, Microsoft, and Amazon incorporating capable digital assistants into their devices.

Apple’s Siri, Samsung’s Bixby, Amazon’s Alexa, Microsoft’s Cortana, and the simply named Google Assistant are becoming more intelligent, so to speak, as they tune in to users’ voices and execute commands.

Voice recognition technology is nothing new (research and development has been going on for at least three decades), but only in the last few years has it seen widespread use. What makes it more exciting is that support for voice-enabled applications is increasing and developers are making greater use of data centers; digital assistants on smartphones, for instance, are no longer tethered to the device but can use the Internet to process commands.

Digital Assistants Are Getting Smarter

Tests conducted by the website Android Authority showed that most of the digital assistants performed well on some voice-operated tasks, though they struggled somewhat with more complex ones. Still, the tests concluded that things look promising in the field of voice-prompted digital assistants, especially as support for third-party application development grows.

Moreover, digital assistants are actually “learning”: the more they are used, the better they become at performing tasks. For instance, Alexa was reported to have gained over 15,000 skills as of 2017, though many of these skills were added manually. Alexa is also expected to be installed in a variety of hardware, including wearables and home automation devices.

Meanwhile, developers at Apple published a blog post describing the speech recognition process that guides Siri in executing actions. Apple isn’t stopping there; the new iPhone X features a neural engine in the A11 processor, designed to handle future machine learning tasks.

Indeed, the general direction now seems to be making smartphones even smarter. It’s no longer just about penciling schedules into calendars, taking notes, or making phone calls, but about controlling home appliances, handling personal security, monitoring health and fitness, and even placing specific orders and making purchases online.

Processing via the Edge

With digital assistants doing more, the volume of processing sent out to data centers is expected to grow. Research firm Ovum forecasted that the global installed base of native digital assistants would exceed 7.5 billion by the end of 2021, which could mean a lot of voice commands and a lot of processing power needed in the network.

In a paper published at Carnegie Mellon University, computer science professor Mahadev Satyanarayanan said that the proximity between a digital assistant-capable mobile device and the data center affects latency: the closer the device is to the data center, the faster the voice command completes, and the less processing volume major data center hubs need to handle.
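The proximity argument can be sketched with a toy model (not taken from the paper): round-trip time is roughly two-way propagation delay plus a fixed processing time, so moving the serving site closer shaves the propagation term. The distances and 50 ms processing figure below are illustrative assumptions.

```python
# Toy latency model: round trip = two-way propagation in fiber + processing.
SPEED_IN_FIBER_KM_S = 200_000  # roughly 2/3 the speed of light, typical for optical fiber

def round_trip_latency_ms(distance_km: float, processing_ms: float = 50.0) -> float:
    """Estimate round-trip latency for a voice command sent distance_km away."""
    propagation_ms = (2 * distance_km / SPEED_IN_FIBER_KM_S) * 1000
    return propagation_ms + processing_ms

# A distant regional cloud hub vs. a nearby edge micro data center:
far = round_trip_latency_ms(3000)   # ~80 ms round trip
near = round_trip_latency_ms(30)    # ~50 ms round trip
```

Even in this simplified model the nearby site wins, and in practice extra router hops and congestion on the longer path widen the gap further.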

One needs to look at the opportunities offered by devices with installed digital assistants. There is already growing support for digital assistants and machine learning development from many technology firms, from Google and Apple to Microsoft and Intel, which also see this as a step toward realizing the Internet of Things (IoT).

Building on this thought, the creation of new types of services that utilize voice recognition will demand new ways of designing, building, and maintaining IT networks. Legacy networks might have difficulty handling growing Internet traffic and could suffer disruption as more devices get connected. Edge computing becomes a viable, if not the only, way to guarantee that IT networks remain operational amid the increasing use of online devices.

In an edge environment, commands are executed via micro data centers that sit closer to devices, at the “edge” of the network. This setup enables a command to be completed within a smaller area, freeing main data centers from having to perform many tasks across a wider range. As Prof. Satyanarayanan noted in his paper, edge computing can offer benefits especially in a growing mobile environment.
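The routing idea behind this setup can be sketched as follows: pick the micro data center nearest to the device so the command is handled at the edge rather than at a central hub. The site names and coordinates below are hypothetical, and straight-line distance stands in for real network topology.

```python
# Hypothetical edge-site directory: name -> (latitude, longitude).
from math import dist

EDGE_SITES = {
    "edge-sg-01": (1.35, 103.82),   # Singapore
    "edge-hk-01": (22.32, 114.17),  # Hong Kong
    "edge-tk-01": (35.68, 139.69),  # Tokyo
}

def nearest_edge_site(device_coords: tuple[float, float]) -> str:
    """Return the edge site with the smallest straight-line distance to the device."""
    return min(EDGE_SITES, key=lambda site: dist(EDGE_SITES[site], device_coords))

# A device in Kuala Lumpur would be served by the Singapore site:
print(nearest_edge_site((3.14, 101.69)))  # -> edge-sg-01
```

A production system would route on measured network latency rather than geographic distance, but the principle of directing each command to the closest serving site is the same.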

Increasing Demand for Edge

Demand for edge-related products and services is expected to grow, with one report claiming that the global market could be worth US$19.4 billion by the end of 2023. This would be a mix of various market segments, from mobile devices, automotive, healthcare, education, and gaming to government.

However, a recent survey by Vertiv highlighted that many companies in Asia are not yet completely familiar with what edge computing is and how it works, and are thus apprehensive about investing immediately. Nevertheless, the study also suggested that interest is high among these companies, which are gradually learning the benefits of edge computing and the tools needed to start their own edge initiatives.

There is a lot of excitement around IoT, especially as more devices infused with digital assistants go online. Having a strong edge strategy will ensure that companies are prepared for this eventuality.

Learn more about Vertiv’s Edge In Asia initiative and download our Edge Playbook to get started on your journey to the edge.
