Intel sees "perceptual computing" as the next big wave for personal computing.
That vision, articulated by Executive Vice President Dadi Perlmutter in
a keynote speech at the Intel Developer Forum in San
Francisco, echoes similar futuristic landscapes recently envisioned by
IBM and Intel competitor Advanced Micro Devices.
In Perlmutter's view, we're in the process of leaving behind mice and
keyboards. Touchscreens, which have gained new life with the success of
touch-sensitive tablets and fresh attention ahead of the full release of
Microsoft's Windows 8 operating system, are "just the beginning," he said.
In Intel's view, this new perception-based interactivity will instead be
built around voice commands, facial recognition, eye tracking and
gestural controls.
More Natural and Intuitive
"People love the way" they can interact with machines using voice and
gesture, Perlmutter said, adding that the new perceptual computing "will
be more natural and intuitive."
At the conference, Perlmutter demonstrated new, more powerful
voice-recognition technology from Nuance. Intel has also been
collaborating with a company called SoftKinetic, whose aim is to deploy
gesture recognition on PCs that can distinguish each of a user's 10
fingers. Perlmutter demonstrated a catapult game in which fingers held
and moved in front of a laptop could grasp a virtual crystal ball.
Intel has also been promoting the coming of the Internet of Things, in
which all kinds of non-PC, non-smartphone devices have built-in
intelligence and Net connectivity -- raising the possibility of a natural
interface that encompasses virtually all of one's environment, not just
a desktop, phone, laptop or TV.
Interestingly, a similar view of the Next Big Thing in personal
computing was sketched at the end of last month by AMD's chief
technology officer, Mark Papermaster.
Papermaster shared his vision for what he called Surround Computing at
the Hot Chips semiconductor design conference, held in late August in
Cupertino, Calif. He said the industry has spent the last 10 to 20 years
developing the computing capability to simulate reality, and that the
next decade or two will focus on interpreting content and context in
order to deliver better experiences.
Contextual Insight
The Surround Computing Era, he said, is multi-platform, ranging from
eyeglasses to room-sized computing devices. It's also fluid, with
realistic output and natural human input, and it's intelligent,
anticipating human needs.
This kind of computing, he said, "imagines a world without keyboards
or mice, where natural user interfaces based on voice and facial
recognition redefine the PC experience," and where devices deliver
"contextual insight and value," even as they "disappear seamlessly into
the background."
In his view, natural user interfaces will go beyond Microsoft's Kinect,
the gestural and facial recognition controller, and beyond Siri, Apple's
intelligent voice agent, which by themselves already fulfill many sci-fi
visions. Papermaster intends to build systems that also employ
fingerprints and tactile information, and that can interpret facial
expressions and speech better than current technology can.
He told The Wall Street Journal that "it's going to be a tsunami" driven
by "this inexorable growth of all these sensing devices around us."
To be truly natural, computing devices will need an intelligence
approaching that of humans -- in short, a Watson, the IBM-built
supercomputer that defeated two champion Jeopardy players at their own
game on live TV.
Recently, IBM Vice President Bernie Meyerson said his company was
working on turning Watson into a back-end, intelligent voice agent for
smartphones and other devices, one that could be available from
virtually any connected device -- in effect, an anywhere version of
Apple's Siri on steroids.