
Pages using the property "Description"

Showing 25 pages using this property.


B

Bumping: Hinckley (2003) proposes a bumping gesture for dynamically connecting two tablet computers, forming an extended screen area from the individual devices' screens when they are positioned next to each other. Removing one device from proximity reverts both screens to their previous individual state. The devices enter screen-extension mode if both are resting on a desk; the same gesture triggers information transfer if both are instead being held. Bumping both devices together results in mutual sharing of information, whereas holding one device slightly tilted during the bump results in one-way sharing of the tilted device's clipboard to the receiving device.
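
The mode-selection logic described above can be sketched as a small decision function. This is an illustrative assumption of how the bump might be resolved, not Hinckley's actual implementation; the `DeviceState` fields stand in for real sensor readings.

```python
from dataclasses import dataclass

@dataclass
class DeviceState:
    resting_on_desk: bool   # assumed derived from accelerometer: flat and stationary
    tilted: bool            # slightly tilted while held during the bump

def resolve_bump(a: DeviceState, b: DeviceState) -> str:
    """Decide what a synchronous bump between two tablets should do."""
    if a.resting_on_desk and b.resting_on_desk:
        return "extend-screens"    # both on the desk: form one larger display
    if a.tilted != b.tilted:
        # exactly one device is tilted: its clipboard is shared one-way
        return "one-way-share"
    return "mutual-share"          # both held level: exchange information
```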

C

Chucking: Hassan et al. describe Chucking, an interaction technique that allows users to share documents between private and public screens with a single-handed gesture. To trigger an interaction, the user selects the items that need to be shared and extends her forearm or flicks her wrist, similar to the way cards are chucked onto a table. The gesture is recognized from the values of the mobile device's tilt sensors.
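
A minimal sketch of what tilt-based chuck detection could look like; the window representation, thresholds, and function name are assumptions, not Hassan et al.'s recognizer.

```python
def detect_chuck(pitch_samples, threshold_deg=35.0, min_delta=25.0):
    """Return True if a short tilt trace looks like a quick forward flick.

    pitch_samples: forward pitch of the device in degrees over a short window.
    """
    if len(pitch_samples) < 2:
        return False
    delta = max(pitch_samples) - pitch_samples[0]
    # a chuck is a sharp rise in pitch that passes the threshold
    return max(pitch_samples) >= threshold_deg and delta >= min_delta

detect_chuck([2, 3, 2, 4])      # a flat hold: False
detect_chuck([5, 20, 45, 60])   # a wrist flick: True
```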
Codex: Hinckley et al. designed a set of interactions for a dual-screen mobile device called Codex. These interactions are based on the posture of the device's two screens, distinguishing portrait and landscape orientations, and on proxemic distance, supporting use cases ranging from private to personal to social and collaborative use.
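
The posture-to-use-case mapping could be sketched as a coarse classifier. The hinge-angle ranges and mode names here are purely illustrative assumptions; Hinckley et al. define their own posture vocabulary.

```python
def codex_mode(hinge_angle_deg, orientation):
    """Classify a dual-screen posture into a coarse usage mode (assumed cut-offs)."""
    if hinge_angle_deg < 60:
        return "closed"
    if orientation == "portrait":
        # book-like portrait postures suit private, focused use
        return "private-reading"
    # opened-out landscape postures expose the screens to others
    return "collaborative"
```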
Conductor: Hamilton and Wigdor describe the Conductor system, which relies on cues and duets to facilitate functionality chaining across devices. A user can broadcast a cue from a menu on the source device, which triggers a special cue widget on potential receiving devices. Cues are displayed chronologically and can be hidden if left untouched. Tapping a cue triggers the most appropriate action, for example launching an email application if the cue relates to broadcasting an email message. Duets extend cues and facilitate functionality bonding across devices; they are formed among currently running applications and can be terminated by quitting the application. A duet is formed by dragging a cue from the left side of the screen to a list of duets on the right side, which visually represents the functionality chaining that can be formed between devices. Adding devices to an actively used group, called a symphony, is achieved by scanning QR codes on the devices to be added, by touching or bumping devices, or by using a proxy for a device positioned out of reach; such a proxy can be a QR code or an NFC tag placed nearby.
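
The cue mechanism can be sketched as a chronological queue whose entries dispatch to a handler by payload type. The `Cue` structure and `HANDLERS` table are illustrative assumptions, not Conductor's real API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Cue:
    source: str          # device that broadcast the cue
    payload_type: str    # e.g. "email", "url", "image"

# assumed mapping from payload type to "most appropriate action"
HANDLERS = {"email": "mail-app", "url": "browser", "image": "gallery"}

class CueQueue:
    def __init__(self):
        self.cues: List[Cue] = []   # cues are shown chronologically

    def broadcast(self, cue: Cue):
        self.cues.append(cue)

    def tap(self, index: int) -> str:
        """Tapping a cue consumes it and launches the matching handler."""
        cue = self.cues.pop(index)
        return HANDLERS.get(cue.payload_type, "generic-viewer")
```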
ConnecTable: The ConnecTable is an interactive screen mounted on a chassis that can change shape depending on the user's position. Two ConnecTables can be moved close together to form a joint area that supports collaboration. Sharing information between the two screens is done by shuffling objects from one display to the other. These interactions are inspired by how physical artifacts are used: putting objects together to form a whole and pulling them apart to separate them.
Cross-Device Drag-and-Drop: Simeone et al. (2013) propose a drag-and-drop gesture for moving information between two connected devices, a PC and a smartphone; both devices need to support touch input. The user initiates a drag by selecting a piece of information on one device and moving it to the edge of the screen, where a special application recognizes the intent to transfer content and sends it to the connected device. The object then appears on the other screen and can be dragged onto the icon of an associated application.
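
The edge-crossing test at the heart of this handoff can be sketched as follows; the margin width and the screen dimensions are assumptions, not values from Simeone et al.

```python
def drag_target(x, y, width, height, margin=10):
    """Return 'local' while a drag stays on-screen, 'remote' once it reaches an edge."""
    at_edge = (x <= margin or x >= width - margin or
               y <= margin or y >= height - margin)
    return "remote" if at_edge else "local"

drag_target(500, 300, 1920, 1080)    # mid-screen: 'local'
drag_target(1915, 300, 1920, 1080)   # right edge: 'remote', hand off the object
```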
Cross-Device Pinch-to-Zoom: This set of techniques supports transient information management across federated devices. Tilt-to-Preview lets the sender tilt a tablet to initiate a transient sharing action; the receiver holds her tablet up to accept the shared piece of information or leaves it down to ignore the gesture. The receiver can also keep a copy of the shared object by holding it down with a finger. The Face-to-Mirror technique lets the sender tilt her tablet above 70 degrees to automatically mirror her screen's contents to the other devices in a group; tilting the tablet back stops the mirroring. Portals show a tinted edge across both screens when one tablet is tilted in proximity to another, which can then be used to transfer objects between devices with a drag gesture. Finally, cross-device pinch-to-zoom lets the user enlarge a piece of information on a tablet with a pinch gesture; when the image reaches the boundaries of the screen, it automatically expands onto other displays in proximity (Marquardt et al. 2012).
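
The tilt-driven modes could be sketched as a mapping from pitch angle to sharing state. Only the 70-degree Face-to-Mirror threshold comes from the description above; the other cut-off is an assumption.

```python
def sharing_mode(pitch_deg):
    """Map a sender tablet's pitch angle to a transient sharing mode."""
    if pitch_deg > 70:
        return "face-to-mirror"   # mirror the whole screen to the group
    if pitch_deg > 20:            # assumed threshold for a deliberate tilt
        return "tilt-to-preview"  # offer a transient preview to receivers
    return "idle"                 # tablet lying flat: nothing is shared
```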

D

DisplayStacks: Girouard et al. propose five techniques for interacting with paper-like displays. The pile loosely groups displays, letting the user freely rearrange them by moving displays around or inserting them anywhere in the pile. A stack is a set of displays neatly arranged together; individual displays can be relocated or bent for quick flicking through. Displays can be arranged as a fan, which allows the user to rotate individual displays. A linear overlap arranges displays with a partial overlap, allowing the user to partially cover or uncover certain areas. Finally, collation places displays side by side along a shared edge.
Drag-and-Pick: Drag-and-Pop extends traditional drag-and-drop by bringing the icons of applications that can accept a selected file, such as a text document, closer to the file's icon. This is useful when a user is interacting with multiple displays and the bezel interferes with the dragging gesture. Drag-and-Pick, on the other hand, temporarily moves all icons in the direction of the mouse cursor, making them easier to reach, for example with a stylus. The action is activated by simply moving the cursor over the desired icon (Baudisch et al. 2003).
Drag-and-Pop: Drag-and-Pop extends traditional drag-and-drop by bringing the icons of applications that can accept a selected file, such as a text document, closer to the file's icon. This is useful when a user is interacting with multiple displays and the bezel interferes with the dragging gesture. Drag-and-Pick, on the other hand, temporarily moves all icons in the direction of the mouse cursor, making them easier to reach, for example with a stylus. The action is activated by simply moving the cursor over the desired icon (Baudisch et al. 2003).
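
Drag-and-Pop's core idea of pulling only the compatible targets toward the drag can be sketched as a simple interpolation. The compatibility table and the pull factor are illustrative assumptions.

```python
# assumed file-type -> accepting-application table
COMPATIBLE = {"txt": {"editor", "mail"}, "png": {"viewer", "mail"}}

def pop_positions(icons, drag_xy, file_ext, pull=0.8):
    """Return temporary icon positions: compatible icons move toward the drag.

    icons: dict of app name -> (x, y) original position.
    """
    dx, dy = drag_xy
    out = {}
    for app, (x, y) in icons.items():
        if app in COMPATIBLE.get(file_ext, set()):
            # linear interpolation toward the drag location
            out[app] = (x + (dx - x) * pull, y + (dy - y) * pull)
        else:
            out[app] = (x, y)   # incompatible icons stay put
    return out
```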

E

EasyGroups: EasyGroups allows binding devices into ad hoc groups. The owner of the group launches a special application and touches the phones of the other members with his phone, which adds the devices to the group and specifies their order. Once all devices have been added, the owner sets his phone on the table and the group members engage in joint activities, such as passing around photographs. Picking up a member's device allows adding new members to the group or leaving it. A new member is added through the same touching mechanism and is positioned next in line after the person who added him. Closing a group is done by picking up the owner's device and flipping it upside down.
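
The touch-to-join ordering can be sketched as insertion into a round order. The `Group` model below is an illustrative assumption, not EasyGroups' actual data structure.

```python
class Group:
    def __init__(self, owner):
        self.members = [owner]   # the owner starts the round order

    def add(self, adder, newcomer):
        """Insert the newcomer immediately after the member who touched them in."""
        i = self.members.index(adder)
        self.members.insert(i + 1, newcomer)

    def close(self):
        """Owner flips his phone upside down: the group is dissolved."""
        self.members.clear()
```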

F

Face-to-Mirror the Full Screen: This set of techniques supports transient information management across federated devices. Tilt-to-Preview lets the sender tilt a tablet to initiate a transient sharing action; the receiver holds her tablet up to accept the shared piece of information or leaves it down to ignore the gesture. The receiver can also keep a copy of the shared object by holding it down with a finger. The Face-to-Mirror technique lets the sender tilt her tablet above 70 degrees to automatically mirror her screen's contents to the other devices in a group; tilting the tablet back stops the mirroring. Portals show a tinted edge across both screens when one tablet is tilted in proximity to another, which can then be used to transfer objects between devices with a drag gesture. Finally, cross-device pinch-to-zoom lets the user enlarge a piece of information on a tablet with a pinch gesture; when the image reaches the boundaries of the screen, it automatically expands onto other displays in proximity (Marquardt et al. 2012).

H

HandLaser: Chernicharo et al. (2013) developed four interaction techniques for controlling cursor input in a perspective-aware MDE, combining a projector with either a laser pointer or a mouse. The projector lets the user create a new dynamic screen in an environment where the displays are otherwise fixed, while the laser pointer controls cursor movement in settings where the displays are distant and hard to reach. The projector is mounted either on the user's head or held in the hand, and is paired with either a laser pointer or a mouse. Each combination has its own benefits: the HandMouse is more natural to users, the HandLaser enables faster cursor movement across displays, the HeadMouse keeps projected screens positioned in front of the user's eyes, and the HeadLaser lets the user look at the screens continuously without shifting attention to the mouse.
HandMouse: Chernicharo et al. (2013) developed four interaction techniques for controlling cursor input in a perspective-aware MDE, combining a projector with either a laser pointer or a mouse. The projector lets the user create a new dynamic screen in an environment where the displays are otherwise fixed, while the laser pointer controls cursor movement in settings where the displays are distant and hard to reach. The projector is mounted either on the user's head or held in the hand, and is paired with either a laser pointer or a mouse. Each combination has its own benefits: the HandMouse is more natural to users, the HandLaser enables faster cursor movement across displays, the HeadMouse keeps projected screens positioned in front of the user's eyes, and the HeadLaser lets the user look at the screens continuously without shifting attention to the mouse.
HeadLaser: Chernicharo et al. (2013) developed four interaction techniques for controlling cursor input in a perspective-aware MDE, combining a projector with either a laser pointer or a mouse. The projector lets the user create a new dynamic screen in an environment where the displays are otherwise fixed, while the laser pointer controls cursor movement in settings where the displays are distant and hard to reach. The projector is mounted either on the user's head or held in the hand, and is paired with either a laser pointer or a mouse. Each combination has its own benefits: the HandMouse is more natural to users, the HandLaser enables faster cursor movement across displays, the HeadMouse keeps projected screens positioned in front of the user's eyes, and the HeadLaser lets the user look at the screens continuously without shifting attention to the mouse.
HeadMouse: Chernicharo et al. (2013) developed four interaction techniques for controlling cursor input in a perspective-aware MDE, combining a projector with either a laser pointer or a mouse. The projector lets the user create a new dynamic screen in an environment where the displays are otherwise fixed, while the laser pointer controls cursor movement in settings where the displays are distant and hard to reach. The projector is mounted either on the user's head or held in the hand, and is paired with either a laser pointer or a mouse. Each combination has its own benefits: the HandMouse is more natural to users, the HandLaser enables faster cursor movement across displays, the HeadMouse keeps projected screens positioned in front of the user's eyes, and the HeadLaser lets the user look at the screens continuously without shifting attention to the mouse.
Hyperdrag: Hyperdragging facilitates information management in scenarios where a laptop computer is positioned on an interactive tabletop, with the table acting as an extension of the laptop's workspace. To share information with a colleague, a user selects an item on her laptop and uses the mouse cursor to drag it to the edge of the screen and onward onto the interactive table surface. The dragged object migrates to the table and can be positioned anywhere the user chooses. It is also possible to hyperdrag objects from the table to a digital wall and to drag objects from the table back to the laptop's screen.

I

Interface Currents: Interface Currents aid in forming group spaces, providing personal storage areas, and facilitating item sharing in tabletop interactions. The main properties of a current are its flow and the path along which information travels: a flow has a specific direction and speed, while a path gives a current its location and boundary. The two main types of currents are pools and streams. A pool draws a circular boundary around a set of objects on the tabletop so they can be moved around the surface as a group, building on the metaphor of a rotating tray. A stream, in turn, helps organize objects around the edge of a tabletop and is inspired by an airport conveyor belt.
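
The flow of a stream current can be sketched as items advancing along a closed path at a fixed speed, like a conveyor belt. Representing each item by a 1-D position along the path is an assumption for illustration.

```python
def advance(positions, path_len, speed):
    """Move each item's position along a closed current by `speed`,
    wrapping around when it passes the end of the path."""
    return [(p + speed) % path_len for p in positions]

# two items circulating around a 100-unit path, 15 units per step
advance([0, 90], 100, 15)   # -> [15, 5]: the second item wraps around
```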

L

Lift-and-Drop: Bader et al. (2010) used a motion-capture system called AirLift to explore Lift-and-Drop, an interaction technique for moving objects between displays without touch input, relying instead on cameras to record the user's gesture. An interaction is registered by recording the gesture made and the position of the hand in the environment.

M

MobiES: MobiES extends the display of a mobile device onto a public screen, using both displays to form a larger screen area. Bringing a phone next to the public screen triggers a special user interface that allows the user to browse content, or to share content between two mobile devices positioned on either side of the public display by dragging content from one phone to the other.
MultiSpace: Everitt et al. describe the design of an MDE controlled by a tabletop, with a special interface that uses portals to move information objects between devices. The portals in the upper corners of the tabletop refer to the users' personal devices, and a thin strip at the top of the table is linked to an interactive wall. Dragging objects to these parts of the screen copies and displays them on the corresponding device.
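
The portal routing can be sketched as a lookup from screen regions to target devices. The region bounds and device names below are illustrative assumptions, not MultiSpace's configuration.

```python
# assumed portal layout: named tabletop regions mapped to target devices
PORTALS = {
    "top-left":  {"rect": (0, 0, 200, 200),    "device": "laptop-A"},
    "top-strip": {"rect": (200, 0, 1720, 100), "device": "wall-display"},
}

def route_drop(x, y):
    """Return the device an object dropped at (x, y) should be copied to."""
    for portal in PORTALS.values():
        px, py, w, h = portal["rect"]
        if px <= x < px + w and py <= y < py + h:
            return portal["device"]
    return None   # not inside any portal: the object stays on the table
```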

P

PaperVideo: Lissermann et al. (2012) describe a set of techniques for interacting with videos using paper displays. A video's timeline can be navigated by positioning two displays as the beginning and end of the video and moving another display along the imaginary line between them to preview different parts. Piling several displays with different videos shows thumbnails of all videos in the pile on the top display. Putting two displays side by side shows a list of related videos; sliding the right display triggers a larger preview of each related video, and pulling the displays apart shows the original video on the left display and the selected related video on the right. Bumping two displays together links the corresponding videos. The top corners of a bottom display can be used to trim a video shown on the upper display and extract the trimmed section to the bottom display. Finally, shaking a display clears it; the cleared contents can be restored by selecting, with a stylus, a special recycle-bin icon shown in the corner of the empty display.
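
The timeline navigation maps a display's position between the two anchor displays linearly onto the video's duration. The 1-D coordinates and the clamping behavior below are assumptions for illustration.

```python
def timeline_seek(start_x, end_x, probe_x, duration_s):
    """Map the probe display's position along the start-end line to a video time."""
    t = (probe_x - start_x) / (end_x - start_x)
    t = min(max(t, 0.0), 1.0)   # clamp to the video's bounds (assumed behavior)
    return t * duration_s

# a 120 s video with anchors at x=0 and x=100: a probe at x=25 previews t=30 s
timeline_seek(0, 100, 25, 120)
```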
Perspective-Aware Interfaces: Nacenta et al. explore perspective-aware interfaces in their E-conic system, which tracks the locations of displays and of users' heads in the environment, modifying the position and perspective of windows to generate an integrated view across different displays. Each window has a set of controls for adjusting its shape and orientation. It is also possible to assign a window an owner so that the system knows for which user's view the window should be optimized.
Pick-and-Drop: Rekimoto (1997) proposed Pick-and-Drop, an interaction technique that allows users to pick up digital objects with a stylus and drop them onto another screen. It differs from traditional drag-and-drop in that selecting an object virtually attaches it to the stylus, which can then be moved without physically contacting the screen; this also facilitates transferring objects across devices.
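
The key idea, that the picked object travels with a particular stylus rather than with a screen, can be sketched as a registry keyed by pen ID. This registry model is an assumption, not Rekimoto's implementation.

```python
class PickAndDrop:
    def __init__(self):
        self.carried = {}            # pen_id -> carried object

    def pick(self, pen_id, obj):
        """Selecting an object virtually attaches it to this stylus."""
        self.carried[pen_id] = obj

    def drop(self, pen_id):
        """Release the carried object onto whatever screen the pen touches."""
        return self.carried.pop(pen_id, None)
```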
Pinch: Ohta and Tanaka describe a technique for stitching together content shown on two mobile devices. The devices are placed together along their long sides, and a pinch gesture is made across both displays until the images meet. The result is an image shown across the two devices, which form a larger screen area.
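
The pairing condition behind such stitching can be sketched as matching two near-simultaneous pinch events on facing edges. The time tolerance and edge encoding are assumptions, not Ohta and Tanaka's recognizer.

```python
def is_stitch(t_a, t_b, edge_a, edge_b, max_skew_s=0.3):
    """Pair two pinch events if they are near-simultaneous and occur on
    facing edges (e.g. the right edge of A and the left edge of B)."""
    facing = {("right", "left"), ("left", "right"),
              ("top", "bottom"), ("bottom", "top")}
    return abs(t_a - t_b) <= max_skew_s and (edge_a, edge_b) in facing
```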

IDLAB - Institute of Informatics, Tallinn University