One of the major announcements to come out of Microsoft’s Build 2016 developer conference today was the bet the company is making on bots. Microsoft believes that bots are the new apps.
Yesterday, Microsoft invited developers to build bots for Cortana, the company’s virtual assistant. Cortana, for you non-gamers, takes its name from an AI character in Microsoft’s blockbuster first-person shooter franchise, Halo.
Microsoft is betting on the notion of “conversational computing,” which is why the company is putting its voice-recognizing virtual assistant front and center. Cortana is the computing interface that accepts voice input and replies appropriately. It can reply appropriately because of Microsoft’s cloud computing prowess and the artificial intelligence that processes input from the Cortana interface. APIs allow developers to tap into this system.
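The pipeline described above (an utterance comes in, an intelligence layer interprets it, and a reply goes back out) can be sketched as a minimal rule-based loop. This is an illustration only: the keyword-to-reply table below is a hypothetical stand-in for the cloud AI layer, not the actual Bot Framework API.

```python
# Minimal sketch of a conversational turn: utterance in, reply out.
# The INTENTS table is a hypothetical stand-in for the AI layer that
# services like Cortana run in the cloud; real systems do far more
# than keyword matching.
INTENTS = {
    "weather": "It looks sunny today.",
    "hello": "Hi there! How can I help?",
}

def reply(utterance: str) -> str:
    """Match the utterance against known intents and return a reply."""
    text = utterance.lower()
    for keyword, response in INTENTS.items():
        if keyword in text:
            return response
    return "Sorry, I didn't catch that."
```

A real bot would swap the keyword table for calls to a language-understanding service, but the shape of the exchange, input, interpretation, reply, stays the same.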
Now all they need is content. That’s where the call to developers comes in.
Microsoft wants developers to use its framework to add features (i.e., content) that will give people compelling reasons to use Cortana. Its direct competitors are Siri and Google Now, though you could also add Amazon’s Echo (or Alexa, if you like) to the list.
Significantly, the company that did everything in its power to defend its Windows operating system near-monopoly says its Cortana Intelligence Suite with Bot Framework will work not only with Windows OS but also iOS and Android.
Here’s the full keynote address:
The Effortless Input Trend
Microsoft’s bet on conversational computing is smart but hardly a gamble. Consumers are already acclimated to talking to computers, as evidenced by the popularity of Cortana’s direct competitors. The conversational computing Microsoft refers to has been occurring in one form or another for several years.
Google’s search algorithm has long taken natural language into account and in recent years has tracked the context of search sessions to provide contextually appropriate results. For example, if you start a search at Google for “How tall is Tom Brady” and then follow it up with a second search of simply “Who is his wife?”, Google will understand that the “his” in the second search refers to Tom Brady and will give you the appropriate answer.
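That kind of session context can be sketched with a toy class that remembers the last entity it saw and uses it to resolve a possessive pronoun in the next query. The entity and pronoun rules below are crude hypothetical stand-ins; Google’s actual algorithm is vastly more sophisticated.

```python
# Toy sketch of contextual search: remember an entity from one query
# and use it to resolve a possessive pronoun in the next. Both rules
# here are simplistic stand-ins, not how Google actually does it.
PRONOUNS = {"his", "her", "its", "their"}

class SearchSession:
    def __init__(self):
        self.last_entity = None

    def search(self, query: str) -> str:
        words = query.split()
        # Replace a possessive pronoun with the remembered entity.
        resolved = [
            self.last_entity + "'s"
            if w.lower() in PRONOUNS and self.last_entity else w
            for w in words
        ]
        # Crudely treat capitalized words after the first as the entity.
        caps = [w.strip("?") for w in resolved[1:] if w.istitle()]
        if caps:
            self.last_entity = " ".join(caps)
        return " ".join(resolved)
```

Run the Tom Brady example through it and the second query comes back with the pronoun resolved, which is the whole point of carrying context across a session.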
Google Docs recently introduced a new feature called “Voice Typing” that allows you to dictate to, rather than type in, documents. And Dragon Dictate, software that does the same thing, has been around for years.
What separates today’s virtual digital assistants from older voice recognition software is the artificial intelligence and processing power that make them run. Remember when you had to train Microsoft Word to recognize your voice? No? Well, there’s the problem: no one wanted to train software. With AI, software no longer needs to be trained. It just learns on its own.
And there’s the bet. Computing is becoming increasingly effortless.
This notion of conversational computing is a micro-trend within the larger macro-trend of effortless input.
A Short History Of Computing
Way back in the Triassic Period of the computer age, information was entered into computers via paper punch cards.
Then computers graduated to magnetic tape as a means of information input and storage.
With the advent of the personal computer came the command line, where people entered information (and computer commands) directly from their keyboard.
Command lines, not surprisingly, proved an impediment to the adoption of personal computers, so the graphical user interface was invented and, along with it, the computer mouse!
In the late 1990s, computer scanners entered the market, allowing consumers to input images into their PCs.
While you can see that over the years, getting information into your computer has become easier, all of the above historic examples still require significant effort to one degree or another.
Enter the age of Ubiquitous Computing. The notion of ubiquitous computing is that the computing device essentially becomes invisible, much in the same way eyeglasses are invisible to the person wearing them. Glasses extend the power of the wearer’s sight, yet people who wear them do not focus on the device to make it work; they look through the device and it just works.
Here are some examples of how the effortless input trend expresses itself in the age of Ubiquitous Computing.
In 2010, Microsoft introduced the Kinect for Xbox 360, which turned human beings into video game controllers by using camera technology to recognize gestures in real time.
Google Glass’s promise could not overcome its dork factor. The augmented reality device used head gestures, taps and eye-winks as input mechanisms.
Yelp’s app includes an augmented reality feature called Monocle that uses your phone’s camera as an input device, scanning your location to reveal Yelp listings for local businesses.
While you need to interact with your phone before you can talk to Cortana, Siri or Google Now, Amazon Echo is always listening for you to ask Alexa a question, no hands required.
When you get a new iPhone, the setup process includes identifying your thumbprint so you can use it as a biometric password. Every smartphone is continuously inputting location data based on your movement through space so it can recognize patterns and tell you, for instance, if there will be congestion on your commute or when you can expect your next bus to arrive.
The final frontier in ubiquitous computing is using thought to transfer information to a computing device, and the technology already exists.
Whether it is a search query, a question posed to a virtual digital assistant, a hand gesture or even a thought, professional communicators need to understand the contexts in which messages are prompted and received in order to be effective. Those contexts will continue to evolve. Rapidly.