In-Air Device Formations for Cross-Device Interaction via Multiple Spatially-Aware Armatures

The Ultimate Flexible Workstation? In-Air Device Formations for Cross-Device Interaction via Multiple Spatially-Aware Armatures ("AirConstellations")

"The Ultimate Flexible Workspace" is a new device concept that uses spatially aware armatures, similar to articulated lamps, to make multiple devices work better together. The user can freely adjust, orient, and juxtapose multiple devices in-air, and the applications on each device react accordingly as displays approach, re-orient, or split apart.

Also known as "AirConstellations," our prototype system supports a unique semi-fixed style of cross-device interaction via multiple spatially self-aware armatures to which users can easily attach (or detach) tablets and other devices. In particular, AirConstellations affords highly flexible and dynamic device formations: users can bring multiple devices together in-air – with 2-5 armatures, each poseable in 7DoF, within the same workspace – to suit the demands of their current task, social situation, app scenario, or mobility needs.
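The video does not go into implementation detail, but one way to picture what "spatially self-aware" could mean for an armature is forward kinematics: if every joint is instrumented with an encoder, the pose of the mounted device follows from composing the joint transforms from the base to the mounting plate. The Python sketch below is purely illustrative – the joint axes, link lengths, and all names in it (rot, trans, device_pose, CHAIN) are assumptions, not code from the AirConstellations prototype.

```python
# Hypothetical sketch: forward kinematics for a 7DoF instrumented armature.
# Joint axes and link lengths are invented; they are not the real hardware.
import numpy as np

def rot(axis: str, angle: float) -> np.ndarray:
    """4x4 homogeneous rotation about a principal axis ('x', 'y', or 'z')."""
    c, s = np.cos(angle), np.sin(angle)
    m = np.eye(4)
    if axis == "x":
        m[1:3, 1:3] = [[c, -s], [s, c]]
    elif axis == "y":
        m[0, 0], m[0, 2], m[2, 0], m[2, 2] = c, s, -s, c
    else:  # "z"
        m[0:2, 0:2] = [[c, -s], [s, c]]
    return m

def trans(x: float, y: float, z: float) -> np.ndarray:
    """4x4 homogeneous translation."""
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

def device_pose(joint_angles, chain) -> np.ndarray:
    """Compose joint rotations and link offsets from base to mounting plate."""
    pose = np.eye(4)
    for angle, (axis, link_len) in zip(joint_angles, chain):
        pose = pose @ rot(axis, angle) @ trans(0.0, 0.0, link_len)
    return pose  # pose of the mounted device in the armature's base frame

# A made-up 7-joint chain: (rotation axis, link length in metres) per joint.
CHAIN = [("z", 0.05), ("x", 0.30), ("x", 0.30), ("z", 0.05),
         ("x", 0.10), ("y", 0.0), ("z", 0.0)]
angles = np.radians([30, 45, -60, 10, 20, 5, 90])  # e.g. from joint encoders
print(device_pose(angles, CHAIN)[:3, 3])  # device position in the base frame
```

Once each armature reports its device pose in a shared base frame, the relative position and orientation between any pair of devices falls out of a single matrix inverse-multiply, which is all a system like this needs in order to detect devices approaching, re-orienting, or splitting apart.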

This affords an interaction metaphor where relative orientation, proximity, attaching (or detaching) devices, and continuous movement into and out of ad-hoc ensembles can drive context-sensitive interactions. Yet all devices remain self-stable in useful configurations even when released in mid-air.
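To make that metaphor concrete, here is a minimal, hypothetical sketch of how proximity and relative orientation between two tracked devices might be classified into formation states that drive app behavior. The thresholds, the state names, and the assumption that a screen's normal is the z-axis of its pose matrix are all invented for illustration; this is not the published AirConstellations transition logic.

```python
# Hypothetical sketch: classifying the formation of two armature-mounted devices.
# Thresholds and state names are invented for illustration only.
import numpy as np

JOIN_DISTANCE = 0.10           # metres; closer than this counts as one ensemble
FACING_ANGLE = np.radians(30)  # screens within 30 degrees count as co-planar

def formation_state(pose_a: np.ndarray, pose_b: np.ndarray) -> str:
    """Classify the relationship between two 4x4 device poses."""
    dist = np.linalg.norm(pose_a[:3, 3] - pose_b[:3, 3])
    # Assume each screen's normal is the z-axis of its pose frame.
    cos_angle = np.clip(pose_a[:3, 2] @ pose_b[:3, 2], -1.0, 1.0)
    angle = np.arccos(cos_angle)
    if dist < JOIN_DISTANCE and angle < FACING_ANGLE:
        return "tiled"    # side by side, roughly co-planar: extend one canvas
    if dist < JOIN_DISTANCE:
        return "layered"  # close but angled: e.g. overview+detail
    return "detached"     # far apart: each device acts independently

if __name__ == "__main__":
    a, b = np.eye(4), np.eye(4)
    b[:3, 3] = [0.05, 0.0, 0.0]   # 5 cm apart, screens parallel
    print(formation_state(a, b))  # -> "tiled"
```

An application could poll this state as the user moves the armatures and animate the corresponding UI transition whenever the state changes, which also gives a natural hook for the feedforward of transition options mentioned below.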

We explore flexible physical arrangement, feedforward of transition options, and layering of devices in-air across a variety of multi-device app scenarios. These include video conferencing with flexible arrangement of the person-space of multiple remote participants around a shared task-space, layered and tiled device formations with overview+detail and shared-to-personal transitions, and flexible composition of UI panels and tool palettes across devices for productivity applications.

A preliminary interview study highlights user reactions to AirConstellations, including appreciation for minimally disruptive device formations, easier physical transitions, and balancing “seeing and being seen” in remote work.

These concepts address the very real user needs of information workers to combine resources from other devices nearby, to collaborate in nuanced and socially fine-grained ways with colleagues whether local or remote, and to create new, more useful experiences that compose multiple device capabilities such as pen, touch, camera, and expanded screen real estate.

This work is part of a larger Microsoft Research project known as SurfaceFleet that explores the distributed systems and user experience implications of a “Society of Devices” in the New Future of Work.
Comments

This most definitely tickles my fancy. These kinds of dynamic multi-screen environments are what I was hoping widespread consumer adoption of tech like the Wii U might have ushered us towards about a decade ago, but sadly no one cared at the time. Hopefully there might be renewed interest now, with devices becoming more connected and with successors to the Wii U concept like the Intellivision Amico; I really think cross-device computing is the future, given how naturally it incorporates into our real-world interactions compared to something like XR. Seeing things like the moment at 3:55 also delightfully reminds me of the old Vista Surface from 2007, with the whole "placing smaller devices on a larger screen connects them" thing.

VinLAURiA

Splendid 💖 It has a most promising future ...

arahman

@3:03, that's a pretty cool use case.

hiro

It's like a scene from IRON MAN, where Tony built the Mark 2.

hiteshkatekhaye

But we are far away from holographic interfaces controlled by hand with small motion-sensing radars – that would require hardware most phones don't have. It's a different approach to discovery, beginning from a practical example rather than from theory. Please prepare those documents as your assignment – we have achieved it. Upgrading the motion tracking is a must if it's being tracked with wires, and the exoskeleton approach is where it went wrong, but maybe that's impossible due to low (~1 metre) precision.

Netryon

I'm sorry. You're telling me that spatial positioning in 2021 is not achievable with all the built-in sensors? We've had AR for a good 10 years now – you can walk around your room, and a 3D Pikachu you set on your table stays on your table, even after the table goes out of frame. And here you are, spending God knows how much time and money developing custom extension arms just so that screens know their position relative to one another? I get the software, I get the workflow – it is amazing – but my God, the way you actually did it at the hardware level makes this presentation look like it was done in the '90s, not now :/

kaczorefx