24. UIST 2011: Santa Barbara, CA, USA
- Jeffrey S. Pierce, Maneesh Agrawala, Scott R. Klemmer:
Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, Santa Barbara, CA, USA, October 16-19, 2011. ACM 2011, ISBN 978-1-4503-0716-1
Crowdsourcing
- Jon Noronha, Eric Hysen, Haoqi Zhang, Krzysztof Z. Gajos:
Platemate: crowdsourcing nutritional analysis from food photographs. 1-12
- Jeffrey M. Rzeszotarski, Aniket Kittur:
Instrumenting the crowd: using implicit behavioral measures to predict task performance. 13-22
- Walter S. Lasecki, Kyle I. Murray, Samuel White, Robert C. Miller, Jeffrey P. Bigham:
Real-time crowd control of existing interfaces. 23-32
- Michael S. Bernstein, Joel Brandt, Robert C. Miller, David R. Karger:
Crowds in two seconds: enabling realtime crowd-powered interfaces. 33-42
- Aniket Kittur, Boris Smus, Susheel Khamkar, Robert E. Kraut:
CrowdForge: crowdsourcing complex work. 43-52
- Salman Ahmad, Alexis J. Battle, Zahan Malkani, Sepandar D. Kamvar:
The jabberwocky programming environment for structured social computing. 53-64
Social information
- Philip J. Guo, Sean Kandel, Joseph M. Hellerstein, Jeffrey Heer:
Proactive wrangling: mixed-initiative end-user programming of data transformation scripts. 65-74
- Sudheendra Hangal, Monica S. Lam, Jeffrey Heer:
MUSE: reviving memories using email archives. 75-84
- Juan David Hincapié-Ramos, Stephen Voida, Gloria Mark:
A design space analysis of availability-sharing systems. 85-96
- Yuki Takahashi, Hiroaki Kojima, Ken-ichi Okada:
Injured person information management during second triage. 97-106
- Andreas Paepcke, Bianca Soto, Leila Takayama, Frank Koenig, Blaise Gassend:
Yelling in the hall: using sidetone to address a problem with mobile remote presence systems. 107-116
- Ronit Slyper, Jill Fain Lehman, Jodi Forlizzi, Jessica K. Hodgins:
A tongue input device for creating conversations. 117-126
Social learning
- Vidya Ramesh, Charlie Hsu, Maneesh Agrawala, Björn Hartmann:
ShowMeHow: translating user interface instructions between applications. 127-134
- Suporn Pongnumkul, Mira Dontcheva, Wilmot Li, Jue Wang, Lubomir D. Bourdev, Shai Avidan, Michael F. Cohen:
Pause-and-play: automatically linking screencast video tutorials with applications. 135-144
- Tom Yeh, Tsung-Hsiang Chang, Bo Xie, Greg Walsh, Ivan Watkins, Krist Wongsuphasawat, Man Huang, Larry S. Davis, Benjamin B. Bederson:
Creating contextual help for GUIs using screenshots. 145-154
- Max Goldman, Greg Little, Robert C. Miller:
Real-time collaborative coding in a web IDE. 155-164
- Daniel Ritchie, Ankita Arvind Kejriwal, Scott R. Klemmer:
d.tour: style-based exploration of design example galleries. 165-174
With a little help
- Justin Matejka, Tovi Grossman, George W. Fitzmaurice:
IP-QAT: in-product questions, answers, & tips. 175-184
- Wei Li, Tovi Grossman, Justin Matejka, George W. Fitzmaurice:
TwitApp: in-product micro-blogging for design sharing. 185-194
- Michael D. Ekstrand, Wei Li, Tovi Grossman, Justin Matejka, George W. Fitzmaurice:
Searching for software learning resources using application context. 195-204
Keynote address
- Ge Wang:
Breaking barriers with sound. 205-206
Development
- Adam Fourney, Richard Mann, Michael A. Terry:
Query-feature graphs: bridging user vocabulary and system functionality. 207-216
- Thorsten Karrer, Jan-Peter Krämer, Jonathan Diehl, Björn Hartmann, Jan O. Borchers:
Stacksplorer: call graph navigation helps increasing code maintenance efficiency. 217-224
- James R. Eagan, Michel Beaudouin-Lafon, Wendy E. Mackay:
Cracking the cocoa nut: user interface programming at runtime. 225-234
- Julia Schwarz, Jennifer Mankoff, Scott E. Hudson:
Monte carlo methods for managing interactive state, action and feedback under uncertainty. 235-244
- Tsung-Hsiang Chang, Tom Yeh, Rob Miller:
Associating the visual representation of user interfaces with their internal structures and metadata. 245-256
- Pierre Dragicevic, Stéphane Huot, Fanny Chevalier:
Animating from markup code to rendered documents and vice versa. 257-262
Tactile/blind
- Felix Xiaozhu Lin, Daniel Ashbrook, Sean White:
RhythmLink: securely pairing I/O-constrained devices by tapping. 263-272
- Shaun K. Kane, Meredith Ringel Morris, Annuska Z. Perkins, Daniel Wigdor, Richard E. Ladner, Jacob O. Wobbrock:
Access overlays: improving non-visual access to large touch screens for blind users. 273-282
- Sean Gustafson, Christian Holz, Patrick Baudisch:
Imaginary phone: learning imaginary interfaces by transferring spatial memory from a familiar device. 283-292
- Sonja Rümelin, Enrico Rukzio, Robert Hardy:
NaviRadar: a novel tactile information display for pedestrian navigation. 293-302
- T. Scott Saponas, Chris Harrison, Hrvoje Benko:
PocketTouch: through-fabric capacitive touch input. 303-308
- Hiroyuki Manabe, Masaaki Fukumoto:
Tap control for headphones without sensors. 309-314
Tangible
- Nicolai Marquardt, Robert Diaz-Marino, Sebastian Boring, Saul Greenberg:
The proximity toolkit: prototyping proxemic interactions in ubiquitous computing ecologies. 315-326
- Jinha Lee, Rehmi Post, Hiroshi Ishii:
ZeroN: mid-air tangible interaction enabled by computer controlled magnetic levitation. 327-336
- Michelle Annett, Tovi Grossman, Daniel Wigdor, George W. Fitzmaurice:
Medusa: a proximity-aware multi-touch tabletop. 337-346
- Daniel Avrahami, Jacob O. Wobbrock, Shahram Izadi:
Portico: tangible interaction on and around a tablet. 347-356
- Daniel Vogel, Géry Casiez:
Conté: multimodal input inspired by an artist's crayon. 357-366
- Neng-Hao Yu, Sung-Sheng Tsai, I-Chun Hsiao, Dian-Je Tsai, Meng-Han Lee, Mike Y. Chen, Yi-Ping Hung:
Clip-on gadgets: expanding multi-touch interaction area with unpowered tactile controls. 367-372
Sensing form and rhythm
- Jennifer Fernquist, Tovi Grossman, George W. Fitzmaurice:
Sketch-sketch revolution: an engaging tutorial system for guided sketching and application learning. 373-382
- Yannick Thiel, Karan Singh, Ravin Balakrishnan:
Elasticurves: exploiting stroke dynamics and inertia for the real-time neatening of sketched 2D curves. 383-392
- Manolis Savva, Nicholas Kong, Arti Chhajta, Li Fei-Fei, Maneesh Agrawala, Jeffrey Heer:
ReVision: automated classification, analysis and redesign of chart images. 393-402
- David R. Flatla, Carl Gutwin, Lennart E. Nacke, Scott Bateman, Regan L. Mandryk:
Calibration games: making calibration tasks enjoyable by adding motivating game elements. 403-412
- Yusuke Yamamoto, Hideaki Uchiyama, Yasuaki Kakehi:
onNote: playing printed music scores as a musical instrument. 413-422
- Neema Moraveji, Ben Olson, Truc Nguyen, Mahmoud Saadat, Yaser Khalighi, Roy Pea, Jeffrey Heer:
Peripheral paced respiration: influencing user physiology during information work. 423-428
Keynote address 2
- Dan Jurafsky:
Sex, food, and words: the hidden meanings behind everyday language. 429-430
Mobile
- Karl D. D. Willis, Ivan Poupyrev, Scott E. Hudson, Moshe Mahler:
SideBySide: ad-hoc multi-user interaction with handheld projectors. 431-440
- Chris Harrison, Hrvoje Benko, Andrew D. Wilson:
OmniTouch: wearable multitouch interaction everywhere. 441-450
- Jessica R. Cauchard, Markus Löchtefeld, Pourang Irani, Johannes Schöning, Antonio Krüger, Mike Fraser, Sriram Subramanian:
Visual separation in mobile multi-display environments. 451-460
- Frank Chun Yat Li, Richard T. Guy, Koji Yatani, Khai N. Truong:
The 1line keyboard: a QWERTY layout in a single line. 461-470
- I. Scott MacKenzie, R. William Soukoreff, Joanna Helga:
1 thumb, 4 buttons, 20 words per minute: design and evaluation of H4-writer. 471-480
- Shu-Yang Lin, Chao-Huai Su, Kai-Yin Cheng, Rong-Hao Liang, Tzu-Hao Kuo, Bing-Yu Chen:
Pub - point upon body: exploring eyes-free interaction and methods on an arm. 481-488
Sensing
- Jan Zizka, Alex Olwal, Ramesh Raskar:
SpeckleSense: fast, precise, low-cost and compact motion sensing using laser speckle. 489-498
- Hiroshi Chigira, Atsuhiko Maeda, Minoru Kobayashi:
Area-based photo-plethysmographic sensing method for the surfaces of handheld devices. 499-508
- Yuta Sugiura, Kakehi Gota, Anusha I. Withana, Calista Lee, Daisuke Sakamoto, Maki Sugimoto, Masahiko Inami, Takeo Igarashi:
Detecting shape deformation of soft objects using directional photoreflectivity measurement. 509-516
- Raphael Wimmer, Patrick Baudisch:
Modular and deformable touch-sensitive surfaces based on time domain reflectometry. 517-526
- Sean Follmer, Micah K. Johnson, Edward H. Adelson, Hiroshi Ishii:
deForm: an interactive malleable surface for capturing 2.5D arbitrary objects, tools and touch. 527-536
- Chris Harrison, Scott E. Hudson:
A new angle on cheap LCDs: making positive use of optical distortion. 537-540
3D
- Daniel Leithinger, David Lakatos, Anthony DeVincenzi, Matthew Blackshaw, Hiroshi Ishii:
Direct and gestural interaction with relief: a 2.5D shape display. 541-548
- Robert Y. Wang, Sylvain Paris, Jovan Popovic:
6D hands: markerless hand-tracking for computer aided design. 549-558
- Shahram Izadi, David Kim, Otmar Hilliges, David Molyneaux, Richard A. Newcombe, Pushmeet Kohli, Jamie Shotton, Steve Hodges, Dustin Freeman, Andrew J. Davison, Andrew W. Fitzgibbon:
KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera. 559-568
- Alex Butler, Otmar Hilliges, Shahram Izadi, Steve Hodges, David Molyneaux, David Kim, Danny Kong:
Vermeer: direct interaction with a 360° viewable 3D display. 569-576
- Seongkook Heo, Jaehyun Han, Sangwon Choi, Seunghwan Lee, Geehyuk Lee, Hyong-Euk Lee, Sanghyun Kim, Won-Chul Bang, Do-Kyoon Kim, Changyeong Kim:
IrCube tracker: an optical 6-DOF tracker based on LED directivity. 577-586
- Martin Hachet, Benoît Bossavit, Aurélie Cohé, Jean-Baptiste de la Rivière:
Toucheo: multitouch and stereo combined in a seamless workspace. 587-592
Pointing
- Jakob Leitner, Michael Haller:
Harpoon selection: efficient selections for ungrouped content on large pen-based surfaces. 593-602
- Géry Casiez, Nicolas Roussel:
No more bricolage!: methods and tools to characterize, replicate and compare pointing transfer functions. 603-614
- Malte Weiss, Chat Wacharamanotham, Simon Voelker, Jan O. Borchers:
FingerFlux: near-surface haptic feedback on tabletops. 615-620
- Seongkook Heo, Geehyuk Lee:
Force gestures: augmenting touch screen gestures with normal and tangential forces. 621-626
- Chris Harrison, Julia Schwarz, Scott E. Hudson:
TapSense: enhancing finger interaction on touch surfaces. 627-636