
Four big names dominate the consumer AI Smart Speaker market. This isn’t a scenario where IBM and its Watson Cognitive Technology engine have a place, nor is it one where Google’s DeepMind can make inroads.
No, in the grand scheme of things, your device will mimic the capabilities of these globally adept machine services, just on a smaller scale. This is where Siri, Apple’s personal voice assistant, comes to the fore. Google Assistant, Cortana, and Alexa are equipped with comparable voice profiling muscle, so they all belong on the same playing field, but there are differences between the big four.
Let’s blow the opening whistle and see where this voice-activated match takes us.
A four-way virtual voice match-up:
Alexa – Tied to the Amazon ecosystem, Alexa makes any shopping experience a hassle-free one. Alexa inhabits the Echo, Tap, and Dot, and she’s also made her way into the Amazon Kindle range and the Fire TV.
Siri – Apple’s elder artificial spokesperson governs the iOS interface. This familiar voice pervades pop culture while working invisibly in the background to strengthen Apple’s grip on the mobile market.
Google Assistant – The chatty digital assistant is complemented by a two-way conversation feature and can translate between more than 100 languages. The Android operating system guarantees near-universal uptake.
Cortana – Your encounters with Microsoft’s flagship AI persona were once limited to Windows 10, but app versions of this congenial OS hostess are now available for iOS and Android.
After scanning through this list, you may be asking about Jarvis. Where is Jarvis? Well, Jarvis is Iron Man’s inbuilt sidekick, so it’s difficult to find him without owning a one-of-a-kind Iron Man suit. Curiously, though, Mark Zuckerberg conceptualised his own Jarvis AI back in 2016.
Already a major investor in the virtual reality sector, Mr Zuckerberg went on to build a primitive AI, something very few computer scientists can do. The main difference between him and them, of course, is that he possesses the means to turn his dreams into reality.
The Zuckerberg Jarvis interface can control a smart home and learn new tricks via a smartphone app. That might not seem like much, but, on closer inspection, his efforts also included a centralised server plus a facial recognition system, something that isn’t included as standard in other AI interfaces.
Well, that’s not exactly true, for a number of vendors manufacture security cameras that include a facial recognition feature. Indirect integration is possible between a virtual assistant and a security camera, but a smart hub is required to make this circuitous solution feasible. Jarvis, on the other hand, uses a direct connection between its server and the facial recognition module.
Before recommending this homebrew effort, you’ll need to wait until Jarvis uses standard Smart Home protocols, the API (Application Programming Interface) formats that enable every gadget type to talk to every network architecture. But this is definitely a technology that’s worth keeping an eye on, for when a tech innovator starts thinking about something, he tends to make it happen. Otherwise, all of those other leading manufacturers, Samsung and LG, Apple HomeKit and Google Nest, have to talk to each other in a common language. But we’ll have more on this product compatibility issue later.
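To make that “common language” idea a little more concrete, here’s a minimal sketch, assuming a purely hypothetical hub that accepts one shared JSON command format over HTTP. The hub address, device names, and field names are invented for illustration; none of this reflects the actual HomeKit, SmartThings, or Nest APIs.

```python
import json
from urllib import request

# Hypothetical hub address -- invented for this sketch, not a real
# HomeKit, SmartThings, or Nest endpoint.
HUB_URL = "http://192.168.1.10/api/command"

def send_command(device_id: str, action: str, **params) -> dict:
    """Post a command to the (hypothetical) hub in one shared JSON format."""
    payload = json.dumps({
        "device": device_id,   # e.g. "living-room-lamp"
        "action": action,      # e.g. "turn_on" or "set_brightness"
        "params": params,      # e.g. {"level": 70}
    }).encode("utf-8")
    req = request.Request(
        HUB_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as response:
        return json.loads(response.read())

# The point: the call looks identical whichever brand of bulb, lock, or
# camera sits behind the hub, because the hub translates the shared format
# into each vendor's own protocol.
# send_command("living-room-lamp", "set_brightness", level=70)
```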
As you’re still deeply immersed in a language-oriented topic, think for a moment about the algorithms and code protocols that create the faux personalities of the established virtual assistants we’ve covered here.
You’ve likely asked a question and gained a fast response from at least one of them, and you’re forced to grudgingly admit that they do each possess unique character traits. Identity creation is a mysterious digital craft. Your spoken queries are processed by geo-distributed cloud endpoints, places that use speech analysis technology to work out the context of a user’s spoken sentence.
From here, the broken-down sentences are further dissected so that syntax and sentiment are understood. Then, once the building blocks of the voiced input are processed, an action takes place. Running alongside that action, an AI-generated response is produced by your virtual attendant.
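As a rough illustration of that flow, here’s a minimal, rule-based sketch in Python. The intents, keyword lists, sentiment scoring, and canned responses are all invented for the example; the real assistants rely on large statistical models running in their vendors’ clouds, not keyword matching.

```python
# Toy pipeline: transcribed text in, intent and sentiment out, then an action
# plus a spoken-style response. Purely illustrative -- not how Amazon, Google,
# Apple, or Microsoft actually do it.

INTENT_KEYWORDS = {
    "play_music": {"play", "song", "music"},
    "get_weather": {"weather", "rain", "forecast"},
    "set_timer": {"timer", "alarm", "remind"},
}

POSITIVE_WORDS = {"please", "great", "love", "thanks"}
NEGATIVE_WORDS = {"hate", "annoying", "wrong", "stop"}

def parse_intent(utterance: str) -> str:
    """Dissect the sentence far enough to guess what the user wants."""
    words = set(utterance.lower().split())
    for intent, keywords in INTENT_KEYWORDS.items():
        if words & keywords:
            return intent
    return "unknown"

def score_sentiment(utterance: str) -> int:
    """Crude sentiment: positive words minus negative words."""
    words = set(utterance.lower().split())
    return len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)

def respond(utterance: str) -> str:
    """Pick the action and produce the response that runs alongside it."""
    actions = {
        "play_music": "Playing your music now.",
        "get_weather": "Here's today's forecast.",
        "set_timer": "Timer set.",
        "unknown": "Sorry, I didn't catch that.",
    }
    tone = "Happy to help!" if score_sentiment(utterance) >= 0 else "Let's try that again."
    return f"{actions[parse_intent(utterance)]} {tone}"

print(respond("Please play a song"))  # -> Playing your music now. Happy to help!
```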
Whether it’s Alexa, Google Assistant, Siri, or Cortana, you’ll know which one is responding to your query without ever turning your head to address the source. Maybe you’ve even started a conversation, of sorts. It’s this outsourced, cloud-processed workload that makes the natural language response possible. Indeed, Natural Language Processing (NLP) is an essential part of any voice-activated interface.
So, what nuggets of information can you take away from this post?
Siri and Cortana are the two leading voice-activated spokespersons, without a doubt, but both were born on mobile platforms.
Siri is your Apple best friend, an assistant that’s used to find directions or the location of a good pizza joint. Cortana brings this same perspective to Windows 10 computers. A “Hey Cortana” wake word conjures up the pulsing blue circle of the Microsoft assistant, but she can be a little slow to respond when a low-spec computer is her home. Never mind, Cortana apps are also available on the Apple and Android platforms.
Conversely, Alexa and Google Home are virtual assistants with a body. They’re actual gadgets, although they hide out in wireless Smart Speaker housings. It’s these stylish trimmings that are occupying thousands upon thousands of homes, and not just for playing streaming music.