Following is an excerpt from Eduardo Mercer's Master's thesis, titled "Towards a Pattern Language for Describing Ubiquitous Interactions", providing an overview of the procedure for developing a pattern language from the collection of distributed user interface design patterns available in this wiki. Each pattern in the collection was derived from an article describing a technique for interacting with distributed user interfaces (DUIs). Each article was published in an international peer-reviewed journal or conference and was collected during a literature review conducted in 2014-2015.

Creating a Pattern Language

According to Fincher and Windsor (2000), there are four principles that should guide the creation of a pattern language:

  • It should have a taxonomy to support finding patterns;
  • It should facilitate navigation to related patterns;
  • It should enable evaluation of problems from different standpoints;
  • It should be generative, allowing users to develop new solutions.

The information initially collected in this wiki was sufficient for describing individual design patterns, but it fell short of a true pattern language because the relationships between patterns were not well represented. It lacked, therefore, both a taxonomy and a means of navigating to related patterns. To enable the transition from pattern catalogue to pattern language, these tools had to be supplied.

Taxonomic Survey

In their critical review of pattern languages in HCI, Dearden & Finlay (2006) identify three possible relationships between patterns:

  • Derivation - where one pattern inherits elements from a higher-level pattern;
  • Aggregation - where one pattern is contained within another pattern;
  • Association - where one pattern uses another.

This suggests two ways in which patterns can be organised in relation to one another: a pattern either serves as a foundation for other patterns or completes them. Analysing how each article describing an interaction pattern cites, and is cited by, other articles in the same corpus reveals the same kinds of relationship: earlier articles serve as a foundation for later articles that cite them, follow-up research by the same team completes past research, and so forth. Thus, the initial pattern language taxonomy was based solely on bibliographic citations.

The resulting survey of all 595[1] possible citation pairs can be found in Figure 1. For clarity, the numbers used to label each article throughout this analysis match their numbers in the #List of Sources section.

Figure 1: Initial relationship matrix. Rows represent citing papers, columns represent cited papers; citations are ticked. Papers with no citations are omitted for brevity.
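
To make the structure of this survey concrete, the following is a minimal sketch of how such a relationship matrix can be represented programmatically. The corpus size of 35 articles is inferred from the 595 possible pairs, and the citations shown are hypothetical placeholders rather than the actual survey data.

```python
# Minimal sketch of the binary relationship matrix behind Figure 1.
# Article numbers follow #List of Sources; the citations below are placeholders.
import numpy as np

N_ARTICLES = 35  # corpus size implied by the 595 possible citation pairs
matrix = np.zeros((N_ARTICLES, N_ARTICLES), dtype=bool)

# (citing article, cited article) pairs -- hypothetical examples only
example_citations = [(3, 1), (10, 7), (10, 8)]
for citing, cited in example_citations:
    matrix[citing - 1, cited - 1] = True  # rows cite, columns are cited

recorded = int(matrix.sum())
possible = N_ARTICLES * (N_ARTICLES - 1) // 2  # handshake problem: 595
print(f"{recorded} citations recorded out of {possible} possible pairs")
```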

Refining the Taxonomic Analysis

The relationship matrix was then imported into Gephi, an open-source network analysis and visualisation tool. This allowed running several force-directed graph drawing algorithms on the dataset to better visualise the relationships between articles (a small NetworkX-based sketch of this kind of layout follows the list below). While several algorithms were tested, they still failed to provide a meaningful understanding of the taxonomy of the articles. It thus became apparent that a deeper taxonomic analysis was needed, instead of simply mapping all citations. To address this, the citations were separated into two groups according to how articles cite each other:

  • Related material: the authors based one or more characteristics of their DUI design on a previous article or their work was a continuation of research described in the previous article;
  • Citations in passing: the authors analysed existing literature on the subject, but the article described something different from what they were aiming at.
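
The following sketch illustrates the kind of force-directed analysis described above, using NetworkX's spring_layout (a Fruchterman-Reingold implementation) in place of Gephi; the citation edges are hypothetical placeholders.

```python
# Sketch: force-directed layout of the unweighted citation graph.
# NetworkX's spring_layout (Fruchterman-Reingold) stands in for the algorithms
# tested in Gephi; the edges below are illustrative, not the actual survey data.
import networkx as nx

G = nx.DiGraph()
G.add_nodes_from(range(1, 36))                # articles 1..35
G.add_edges_from([(3, 1), (10, 7), (10, 8)])  # (citing, cited) placeholder pairs

pos = nx.spring_layout(G, seed=42)            # 2-D positions from the force simulation
for article, (x, y) in sorted(pos.items()):
    print(f"article {article:2d}: x={x:+.2f}, y={y:+.2f}")
```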

These criteria satisfy two defining tenets of a pattern language: they deepen the understanding of derivation, aggregation, and association as described by Dearden & Finlay (2006), and they encompass the generative principle from Fincher & Windsor (2000) by analysing how one pattern gives rise to further patterns.

Out of the set of citations identified in the taxonomic survey (see Figure 1), 43 were considered related material and 50 were deemed citations in passing. After additional analysis, 13 instances of association without citation were also identified. These were situations where two interaction patterns were clearly related but did not cite each other or any related article in the corpus, mostly cases where research was done in parallel and reached similar results. Figure 2 illustrates the proportions of each type of citation.

Figure 2: Citations in the corpus grouped by relationship type.

These citations were then weighted by relevance (1.0 for related material, 0.5 for association without citation, and 0.3 for citation in passing) and fed into Gephi for analysis. A table with the new relationship matrix can be seen in Figure 3.

Figure 3: Rows represent citing papers, columns represent cited papers. ✓ are citations to related material with significant contributions to the citing paper, ❝ are citations in passing, and ✦ are associations without citation. For brevity, article names are suppressed. Numbers represent articles as they appear in #List of Sources.
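
As an illustration of how these weights might be attached to the citation data before importing into Gephi, here is a minimal sketch. The specific relationship assignments are placeholders, and GEXF is used simply as one format that Gephi can read; it is not necessarily how the original analysis was set up.

```python
# Sketch: attaching relevance weights to each relationship and exporting for Gephi.
# Relationship assignments below are placeholders; GEXF is one format Gephi imports.
import networkx as nx

WEIGHTS = {
    "related_material": 1.0,
    "association_without_citation": 0.5,
    "citation_in_passing": 0.3,
}

# (source article, target article, relationship type) -- hypothetical examples
relationships = [
    (3, 1, "related_material"),
    (10, 8, "citation_in_passing"),
    (26, 27, "association_without_citation"),
]

G = nx.DiGraph()
for source, target, kind in relationships:
    G.add_edge(source, target, weight=WEIGHTS[kind], kind=kind)

nx.write_gexf(G, "weighted_citations.gexf")  # Gephi reads GEXF directly
```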

A fairly straightforward grouping structure arose from running ForceAtlas2 on this weighted relationship matrix, making it possible to identify not only which articles represented central concepts, but also which ones gravitated around them. The divisions were so clear-cut that very few articles were included in more than one group, and all of those described groups of patterns rather than a single one. The result of the ForceAtlas2 analysis is illustrated in Figure 4.

Figure 4: The result of running the ForceAtlas2 algorithm on the new relationship matrix. Edge weight is reflected in arrow scale; weights lower than 0.5 are hidden for clarity.
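
The groups that ForceAtlas2 makes visible can also be approximated computationally. The sketch below uses greedy modularity maximisation from NetworkX as a rough, non-visual substitute for reading clusters off the Gephi layout; the edges and weights are again placeholders.

```python
# Sketch: recovering groups from the weighted relationship graph.
# Greedy modularity maximisation stands in for the clusters that emerge visually
# from the ForceAtlas2 layout; the data is illustrative only.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()  # an undirected view is sufficient for grouping
G.add_weighted_edges_from([
    (3, 1, 1.0), (10, 7, 1.0), (10, 8, 0.3), (26, 27, 0.5),  # placeholder weights
])

groups = greedy_modularity_communities(G, weight="weight")
for i, group in enumerate(groups, start=1):
    print(f"group {i}: articles {sorted(group)}")
```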

This new weighted approach led to a change in the data presented in the pattern catalogue, as three new fields were added to the individual pattern structure:

  • Cites - all articles in the corpus cited by the one where the pattern is described;
  • Cited by - all articles citing it, regardless of the weight;
  • Related to - citations of related material to or from the article in question, as well as any instances of association without citation.

These fields were also retroactively added to the patterns already in the wiki, thereby enabling Fincher & Windsor's (2000) principle of hyper-textual navigability between patterns in a language.
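
A minimal sketch of how the three fields could be derived from the weighted relationship graph is shown below. The field names mirror the wiki fields described above; the graph data, edge kinds, and helper function are hypothetical.

```python
# Sketch: deriving the "Cites", "Cited by" and "Related to" fields for one pattern
# from the weighted relationship graph; edges, kinds, and weights are placeholders.
import networkx as nx

G = nx.DiGraph()
G.add_edge(3, 1, weight=1.0, kind="related_material")
G.add_edge(10, 8, weight=0.3, kind="citation_in_passing")
G.add_edge(26, 27, weight=0.5, kind="association_without_citation")

CITATION_KINDS = {"related_material", "citation_in_passing"}

def pattern_fields(graph: nx.DiGraph, article: int) -> dict:
    out_edges = graph[article]                                  # article -> others
    in_edges = {u: d for u, _, d in graph.in_edges(article, data=True)}
    cites = sorted(v for v, d in out_edges.items() if d["kind"] in CITATION_KINDS)
    cited_by = sorted(u for u, d in in_edges.items() if d["kind"] in CITATION_KINDS)
    related_to = sorted(
        {v for v, d in out_edges.items() if d["weight"] >= 0.5}
        | {u for u, d in in_edges.items() if d["weight"] >= 0.5}
    )
    return {"Cites": cites, "Cited by": cited_by, "Related to": related_to}

print(pattern_fields(G, 3))   # {'Cites': [1], 'Cited by': [], 'Related to': [1]}
print(pattern_fields(G, 27))  # {'Cites': [], 'Cited by': [], 'Related to': [26]}
```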

Pattern Families

Once the patterns and their relationships were mapped, it became possible to group them into families according to the connections and similarities between them. Delineating these families gives some grammatical sense to how patterns relate to each other. In total, nine major interaction pattern families were identified, with very few patterns belonging to more than one family at the same time. While each pattern can be assigned to one or more families, patterns may also contain units of interaction that occur time and again, even in patterns of different families.

The Resulting Pattern Language

Mapping micro-patterns (the smallest units of interaction with DUIs that cannot be broken down into simpler components), their relationships with pattern groups, the patterns within groups, and the relationships between these led to a final taxonomy that appears to satisfy Alexander et al.'s (1982) definition of a pattern language and Fincher & Windsor's (2000) principles for an HCI pattern language:

  • It allows analysing a pattern from several standpoints - by motivation, usage setting, enabling technologies, family, relation to other patterns, etc.;
  • It provides for hyper-textual navigation and exploration between patterns by linking to all upstream and downstream related patterns;
  • It provides tools for expanding and building upon the language, thanks to the use of Semantic MediaWiki for collecting and organising patterns;
  • It follows a taxonomy, which can be reproduced and expanded upon.

Dividing the map into generations provides a taxonomy where all patterns are related either by derivation, aggregation, or association. This is in line with the relationship types identified by Dearden & Finlay (2006) for HCI pattern language taxonomies. The taxonomy of the resulting pattern language can be seen in Figure 5.

Figure 5: Taxonomy of the pattern language for distributed user interfaces. Pattern families are coded with colour. Individual patterns are divided into groups. The upper level includes micro-patterns and the following levels include patterns that work as combinations, derivations, or alternatives of the patterns from the previous levels.
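
The division into generations can be read as the topological levels of the relationship map. The sketch below uses NetworkX's topological_generations as one way to compute such levels, assuming the derivation links form an acyclic graph in which edges point from a foundational pattern to the patterns built on it; the edges are placeholders.

```python
# Sketch: dividing the relationship map into generations (topological levels).
# Edges point from a foundational pattern to a pattern derived from it, so the
# first generation holds the micro-patterns; the data is illustrative and assumes
# the derivation links contain no cycles.
import networkx as nx

derivations = nx.DiGraph()
derivations.add_edges_from([
    (1, 3),    # placeholder: pattern 3 builds on pattern 1
    (1, 10),
    (7, 10),
    (10, 23),
])

for level, generation in enumerate(nx.topological_generations(derivations), start=1):
    print(f"generation {level}: patterns {sorted(generation)}")
```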

List of Sources

1. Rekimoto, J. (1997). Pick-and-drop: a direct manipulation technique for multiple computer environments. In Proceedings of the 10th annual ACM symposium on User interface software and technology (pp. 31-39). ACM.

2. Geißler, J. (1998). Shuffle, throw or take it! working efficiently with an interactive wall. In Conference on Human Factors in Computing Systems: CHI 98 conference summary on Human factors in computing systems (Vol. 18, No. 23, pp. 265-266).

3. Rekimoto, J., & Saitoh, M. (1999). Augmented surfaces: a spatially continuous work space for hybrid computing environments. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems (pp. 378-385). ACM.

4. Tandler, P., Prante, T., Müller-Tomfelde, C., Streitz, N., & Steinmetz, R. (2001). ConnecTables: dynamic coupling of displays for the flexible creation of shared workspaces. In Proceedings of the 14th annual ACM symposium on User interface software and technology (pp. 11-20). ACM.

5. Swindells, C., Inkpen, K. M., Dill, J. C., & Tory, M. (2002). That one there! Pointing to establish device identity. In Proceedings of the 15th annual ACM symposium on User interface software and technology (pp. 151-160). ACM.

6. Baudisch, P., Cutrell, E., Robbins, D., Czerwinski, M., Tandler, P., Bederson, B., & Zierlinger, A. (2003). Drag-and-pop and drag-and-pick: Techniques for accessing remote screen content on touch-and pen-operated systems. In Proceedings of INTERACT (Vol. 3, pp. 57-64).

7. Hinckley, K. (2003). Synchronous gestures for multiple persons and computers. In Proceedings of the 16th annual ACM symposium on User interface software and technology (pp. 149-158). ACM.

8. Rekimoto, J., Ayatsuka, Y., & Kohno, M. (2003). SyncTap: An interaction technique for mobile networking. In Human-Computer Interaction with Mobile Devices and Services (pp. 104-115). Springer Berlin Heidelberg.

9. Iwasaki, Y., Kawaguchi, N., & Inagaki, Y. (2003). Touch-and-Connect: A connection request framework for ad-hoc networks and the pervasive computing environment. In Pervasive Computing and Communications, 2003 (PerCom 2003). Proceedings of the First IEEE International Conference on (pp. 20-29). IEEE.

10. Hinckley, K., Ramos, G., Guimbretiere, F., Baudisch, P., & Smith, M. (2004). Stitching: pen gestures that span multiple displays. In Proceedings of the working conference on Advanced visual interfaces (pp. 23-31). ACM.

11. Krötzsch, M., Vrandečić, D., & Völkel, M. (2006). Semantic mediawiki. In The Semantic Web-ISWC 2006 (pp. 935-942). Springer Berlin Heidelberg.

12. Hinrichs, U., Carpendale, S., Scott, S. D., & Pattison, E. (2005). Interface currents: Supporting fluent collaboration on tabletop displays. In Smart Graphics (pp. 185-197). Springer Berlin Heidelberg.

13. Bezerianos, A., & Balakrishnan, R. (2005). The vacuum: facilitating the manipulation of distant objects. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 361-370). ACM.

14. Ayatsuka, Y., & Rekimoto, J. (2005). tranSticks: physically manipulatable virtual connections. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 251-260). ACM.

15. Everitt, K., Shen, C., Ryall, K., & Forlines, C. (2006). MultiSpace: Enabling electronic document micro-mobility in table-centric, multi-device environments. In Horizontal Interactive Human-Computer Systems, 2006. TableTop 2006. First IEEE International Workshop on (8 pp.). IEEE.

16. Sanneblad, J., & Holmquist, L. E. (2006). Ubiquitous graphics: combining hand-held and wall-size displays to interact with large images. In Proceedings of the working conference on Advanced visual interfaces (pp. 373-377). ACM.

17. Nacenta, M. A., Sakurai, S., Yamaguchi, T., Miki, Y., Itoh, Y., Kitamura, Y., ... & Gutwin, C. (2007). E-conic: a perspective-aware interface for multi-display environments. In Proceedings of the 20th annual ACM symposium on User interface software and technology (pp. 279-288). ACM.

18. Lee, H., Jeong, H., Lee, J., Yeom, K. W., Shin, H. J., & Park, J. H. (2008). Select-and-point: a novel interface for multi-device connection and control based on simple hand gestures. In CHI'08 Extended Abstracts on Human Factors in Computing Systems (pp. 3357-3362). ACM.

19. Zigelbaum, J., Kumpf, A., Vazquez, A., & Ishii, H. (2008). Slurp: tangibility spatiality and an eyedropper. In CHI'08 Extended Abstracts on Human Factors in Computing Systems (pp. 2565-2574). ACM.

20. Hassan, N., Rahman, M. M., Irani, P., & Graham, P. (2009). Chucking: A One-Handed Document Sharing Technique. In Human-Computer Interaction–INTERACT 2009 (pp. 264-278). Springer Berlin Heidelberg.

21. Hinckley, K., Dixon, M., Sarin, R., Guimbretiere, F., & Balakrishnan, R. (2009). Codex: a dual screen tablet computer. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1933-1942). ACM.

22. Bader, T., Heck, A., & Beyerer, J. (2010). Lift-and-Drop: crossing boundaries in a multi-display environment by airlift. In Proceedings of the International Conference on Advanced Visual Interfaces (pp. 139-146). ACM.

23. Marquardt, N., Hinckley, K., & Greenberg, S. (2012). Cross-device interaction via micro-mobility and f-formations. In Proceedings of the 25th annual ACM symposium on User interface software and technology (pp. 13-22). ACM.

24. Chen, N., Guimbretiere, F., & Sellen, A. (2012). Designing a multi-slate reading environment to support active reading activities. ACM Transactions on Computer-Human Interaction (TOCHI), 19(3), 18.

25. Girouard, A., Tarun, A., & Vertegaal, R. (2012). DisplayStacks: interaction techniques for stacks of flexible thin-film displays. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2431-2440). ACM.

26. Lucero, A., Jokela, T., Palin, A., Aaltonen, V., & Nikara, J. (2012). EasyGroups: binding mobile devices for collaborative interactions. In CHI'12 Extended Abstracts on Human Factors in Computing Systems (pp. 2189-2194). ACM.

27. Lucero, A., Holopainen, J., & Jokela, T. (2012). MobiComics: collaborative use of mobile phones and large displays for public expression. In Proceedings of the 14th international conference on Human-computer interaction with mobile devices and services (pp. 383-392). ACM.

28. Schneider, D., Seifert, J., & Rukzio, E. (2012). MobiES: extending mobile interfaces using external screens. In Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia (p. 59). ACM.

29. Lissermann, R., Olberding, S., Petry, B., Mühlhäuser, M., & Steimle, J. (2012). PaperVideo: interacting with videos on multiple paper-like displays. In Proceedings of the 20th ACM international conference on Multimedia (pp. 129-138). ACM.

30. Ohta, T., & Tanaka, J. (2012). Pinch: an interface that relates applications on multiple touch-screen by ‘pinching’ gesture. In Advances in Computer Entertainment (pp. 320-335). Springer Berlin Heidelberg.

31. Baldauf, M., Lasinger, K., & Fröhlich, P. (2012). Private public screens: detached multi-user interaction with large displays through mobile augmented reality. In Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia (p. 27). ACM.

32. Chernicharo, J. H. D. S., Takashima, K., & Kitamura, Y. (2013). Seamless interaction using a portable projector in perspective corrected multi display environments. In Proceedings of the 1st symposium on Spatial user interaction (pp. 25-32). ACM.

33. Simeone, A. L., Seifert, J., Schmidt, D., Holleis, P., Rukzio, E., & Gellersen, H. (2013). Technical framework supporting a cross-device drag-and-drop technique. In Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia (p. 40). ACM.

34. Chung, H., North, C., Self, J. Z., Chu, S., & Quek, F. (2014). VisPorter: facilitating information sharing for collaborative sensemaking on multiple displays. Personal and Ubiquitous Computing, 18(5), 1169-1186.

35. Hamilton, P., & Wigdor, D. J. (2014). Conductor: enabling and understanding cross-device interaction. In Proceedings of the 32nd annual ACM conference on Human factors in computing systems (pp. 2773-2782). ACM.

36. Díez, D., Tena, S., Romero-Gomez, R., Díaz, P., & Aedo, I. (2014). Sharing your view: A distributed user interface approach for reviewing emergency plans. International Journal of Human-Computer Studies, 72(1), 126-139.

References

Alexander, C. (1982). A pattern language: Towns, buildings, construction.

Dearden, A., & Finlay, J. (2006). Pattern Languages in HCI: A Critical Review. Human–Computer Interaction, 21(1), 49–102. http://doi.org/10.1207/s15327051hci2101_3

Fincher, S., & Windsor, P. (2000). Why patterns are not enough: some suggestions concerning an organising principle for patterns of UI design. Presented at the CHI Workshop on Pattern Languages for Interaction Design: Building Momentum.

  1. The total number of possible cross-citations in a corpus of n articles is the triangular number T(n-1), which is calculated through the formula T(n-1) = n(n-1)/2. This is an instance of the handshake problem and can also be calculated by the longer arithmetic series 1 + 2 + ... + (n-1); for the 35 articles in this corpus, this yields 595.

IDLAB - Institute of Informatics, Tallinn University