Capacitive touchscreens changed
the way we interact with mobile devices, but they haven't evolved much
at a fundamental level since their introduction. Apple is trying to augment
touchscreens with 3D Touch, but Microsoft Research is looking to create a touchscreen you don’t even have to touch. The pre-touch sensing prototype smartphone
can trigger different types of interactions based on how you’re holding
the phone and where your fingers are without actually touching the
glass.
Microsoft isn’t the first to design a screen
that can register input without actually being touched. Samsung does
something similar with inductive technology in its Note styluses, and
Sony briefly offered a comparable system back in 2012. Sony's "Floating Touch"
platform shipped on only one phone, and software support was
limited; Android never had extensive support for hover actions because it's
not really a mouse-based OS. Microsoft is taking similar technology and
imagining what a platform could do if it were designed with pre-touch sensing in mind.
There are two basic types of capacitive
sensors in touchscreens: standard mutual capacitance sensors,
which you'd find in most phone screens, and self-capacitance
sensors. Microsoft's prototype screen uses self-capacitance sensors
because their extremely high sensitivity can detect a finger
hovering an inch or two above the glass. Traditionally, these sensors have
only been able to track a single input, but Microsoft appears to have addressed that
shortcoming.
The demo video shows some of the interesting interactions that are possible with Microsoft’s
test device. It can do basic things like pull up video controls or
reveal hyperlinks on a web page when you hover. Where things get
interesting is with grip sensing. Because the self-capacitance sensors
in this display can map multiple inputs, the phone can tell how you’re
holding it, and can tailor the controls it brings up on a hover event to that grip.
The standard video controls can be replaced with a reduced set that
appears on one side of the screen or the other, making the interaction
better suited to one-handed use. This system can also
combine touch and hover detection to pull up context menus at whatever
position is comfortable, rather than requiring multiple actions.
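As a rough illustration of the grip-sensing idea, a hover-capable sensor grid could be reduced to a left- or right-hand guess by comparing activation along the screen's edges, and the control layout chosen accordingly. This is a hypothetical sketch, not Microsoft's actual implementation; the grid values, function names, and control sets are all invented for illustration.

```python
# Hypothetical sketch: inferring grip side from a self-capacitance sensor map.
# The grid values, thresholds, and control names are illustrative assumptions,
# not Microsoft's API or algorithm.

def infer_grip_side(cap_grid):
    """Guess which hand holds the phone by comparing total capacitance
    along the left and right edges of the sensor grid."""
    left = sum(row[0] for row in cap_grid)
    right = sum(row[-1] for row in cap_grid)
    if left > right:
        return "left"
    if right > left:
        return "right"
    return "unknown"

def controls_for_grip(side):
    """Anchor a reduced, thumb-reachable control set on the gripping side;
    fall back to the full control bar when the grip is ambiguous."""
    reduced = ["play/pause", "seek"]
    if side in ("left", "right"):
        return {"anchor": side, "controls": reduced}
    return {"anchor": "bottom",
            "controls": ["play/pause", "seek", "volume", "fullscreen"]}

# Example: stronger signal on the right edge suggests a right-hand grip.
grid = [
    [0.1, 0.0, 0.0, 0.8],
    [0.2, 0.0, 0.0, 0.9],
    [0.1, 0.0, 0.0, 0.7],
]
print(controls_for_grip(infer_grip_side(grid)))
```

The point of the sketch is only the shape of the decision: a multi-point hover map gives the phone enough information to move the same controls to wherever the gripping thumb already is.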
Pre-touch sensing as demoed by Microsoft can
do more subtle things as well. By differentiating between rapid and
precise motion prior to a tap, the phone can figure out what you
intended to do with that tap. For example, a precise tap that happens to
land just next to a small button could be mapped to the button,
because that's likely what you were aiming for in the first place.
Likewise, precise motion prior to a swipe could be interpreted as text
selection rather than scrolling.
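The tap-refinement idea described above can be sketched in a few lines: classify the pre-touch approach as precise or rapid from the hover trajectory, and only snap a near-miss tap to a nearby button when the approach looked deliberate. The thresholds, data shapes, and function names here are assumptions made for illustration, not details from Microsoft's research.

```python
import math

# Hypothetical sketch of pre-touch tap-intent refinement.
# hover_samples are (x, y, t) finger positions recorded before the tap;
# the speed threshold and snap radius are invented for illustration.

def approach_is_precise(hover_samples, speed_threshold=0.5):
    """Classify the approach as precise when the average finger speed
    across the hover samples is below the threshold."""
    if len(hover_samples) < 2:
        return False
    total_dist = 0.0
    for (x0, y0, _), (x1, y1, _) in zip(hover_samples, hover_samples[1:]):
        total_dist += math.hypot(x1 - x0, y1 - y0)
    elapsed = hover_samples[-1][2] - hover_samples[0][2]
    return elapsed > 0 and (total_dist / elapsed) < speed_threshold

def resolve_tap(tap, buttons, hover_samples, snap_radius=20.0):
    """Return the button a near-miss tap was probably aimed at,
    but only when the pre-touch approach looked deliberate."""
    if not approach_is_precise(hover_samples):
        return None  # rapid approach: take the tap at face value
    x, y = tap
    best, best_d = None, snap_radius
    for button in buttons:
        d = math.hypot(x - button["x"], y - button["y"])
        if d <= best_d:
            best, best_d = button, d
    return best

# A slow, careful approach that lands 5 units from a small button
# gets snapped to that button; a fast flick would not.
slow = [(100.0, 100.0, 0.0), (100.1, 100.0, 0.5), (100.2, 100.0, 1.0)]
buttons = [{"id": "send", "x": 105.0, "y": 100.0}]
print(resolve_tap((110.0, 100.0), buttons, slow))
```

The same precise-versus-rapid signal is what would let a swipe preceded by careful motion be treated as text selection instead of a scroll.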
This research is being presented at the
CHI 2016 conference on human-computer interaction this week. It's still
just a neat tech demo right now, but maybe in the future someone will
use it in a real device.