Author: Hugo Huurdeman

Analyze & Experience: Towards a Research Environment for 3D Reconstructions

How to design & evaluate user interfaces as well as interaction paradigms for accessing 2D & 3D datasets involving uncertainty?

This was the question I addressed in my project stream within Virtual Interiors. My work involved designing, creating and evaluating a demonstrator for a 3D research environment, in a co-design setting with Chiara Piccoli and during embedded research at project partners the Netherlands Institute for Sound & Vision and Brill Publishers.

This post summarizes the work pertaining to the 3D research environment, points to additional resources, and looks ahead to future opportunities.

1. The challenging task of 3D interface design

It is not straightforward to create user interfaces for accessing 3D reconstructions in a historical setting. User interfaces can be seen as an “in-between device” between the historically interested user and the actual 3D reconstructions and underlying data (see e.g. [1]).

Interaction with 3D environments and assets is challenging in itself: navigation issues and disorientation are commonplace [2]. This affects the usability of a prospective interface [3]. In addition, usability is influenced by the number and complexity of interface features: the paradox is that more features might make an interface more useful to a researcher, but also harder to use. Striking a balance between functionality and usability is therefore of key importance.

Achieving this balance is made more arduous by the lack of documented best practices for interface design in a 3D context. Hence, it is hard to determine which “standard” set of features is necessary for interacting with 3D historical content [4].

2. Exploring requirements

To shed more light on these issues, I spent the first months of my work as a postdoc in the Virtual Interiors project (2019) exploring the requirements for a 3D research environment. This was done via an analysis of previous use cases for 3D and via design thinking [5] sessions with the other project members. Some requirements for a 3D research environment that came up were the ability to view the 3D space from different perspectives, to interact with objects, to add analytical layers (uncertainty & lighting), to annotate, and to view contextual information. Furthermore, a user study with digital humanists was conducted [6].

From this user study, it became apparent that historical and spatial context are of key importance for a 3D research environment. From a thematic analysis of session transcripts, we identified the spatial context dimensions of object properties, object relationships, and perception of space, pictured in Figure 1.

Figure 1. Spatial context dimensions from Huurdeman & Piccoli, 2020 (object properties, object relationships & perception of space)

More information:
📖 Hugo Huurdeman & Chiara Piccoli (2020), The Importance of Context in Search User Interfaces for 3D Content (Proceedings CHIIR 2020)

3. The Virtual Interiors 3D research environment

Based on the initial phase of analyzing requirements, a prototype research environment was designed and developed in an iterative way. The prototype addressed the requirements mentioned above. While a full discussion of its functionality, the rationale behind it, the technical structure and the reconstruction process is available elsewhere [16], I discuss some key features here.

Analytical & Experiential user interfaces

Historical 3D environments can be utilized by widely different audiences, ranging from the casually interested to topical experts. For these audiences, different user interface features can be worthwhile. Instead of creating one interface that caters for all of their needs, which may lead to the aforementioned tensions in the balance between functionality and usability, I opted for a different approach.

To cater for these different audiences and their goals, we constructed a “multi-layer interface” [7], in which two distinct interfaces use the same underlying framework and data. On the one hand, there is an analytical interface layer, aimed at researchers, and on the other hand a more experiential interface layer, which can be used by both researchers and other historically interested individuals (a code sketch of this shared setup follows the list below):

  • The Analytical interface allows researchers to take a deep dive into underlying research data about domestic interiors, to view uncertainty overlays, to explore associated archive documents and to follow leads into connected Linked Data, for instance examining artist biographies, related artworks and various historical data sources. Moreover, it is possible to explore hypotheses related to object arrangement and lighting. These analytical interfaces are best experienced via desktop computers or tablets (see Figure 2A & 2B).
  • The Experiential interface makes it possible to experience historical spaces intuitively, by e.g. utilizing Virtual Reality and the motion sensors of mobile phones (see Figure 2C & 2D). Casual users and researchers can “jump into” a 3D historical space, look around and interact with the objects around them in a more immersive way [8]. Experiential views have been explored in the context of Chiara Piccoli’s 3D reconstructions of domestic interiors, but also during my embedded research at the Netherlands Institute for Sound & Vision.
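To make the idea of a multi-layer interface more concrete, the sketch below shows one possible way to structure such a setup in TypeScript: a shared data layer that both an analytical and an experiential layer build upon. The type and class names are illustrative assumptions, not the actual Virtual Interiors codebase.

```typescript
// Illustrative sketch of a "multi-layer interface": two layers share the same
// underlying scene data but expose different features. Names are hypothetical.

interface SceneObject {
  id: string;
  label: string;
  uncertainty?: number;      // e.g. 0 (certain) .. 1 (highly uncertain)
  linkedDataUri?: string;    // e.g. an Ecartico or Wikidata URI
}

interface SceneData {
  objects: SceneObject[];
}

// Common contract for an interface layer on top of the shared data.
interface InterfaceLayer {
  activate(data: SceneData): void;
}

// Analytical layer: research-oriented features such as uncertainty overlays.
class AnalyticalLayer implements InterfaceLayer {
  activate(data: SceneData): void {
    const uncertain = data.objects.filter(o => (o.uncertainty ?? 0) > 0.5);
    console.log(`Analytical view: ${uncertain.length} objects flagged as uncertain`);
  }
}

// Experiential layer: immersive exploration (VR, device orientation), fewer panels.
class ExperientialLayer implements InterfaceLayer {
  activate(data: SceneData): void {
    console.log(`Experiential view: free exploration of ${data.objects.length} objects`);
  }
}

// The same data feeds both layers; only the layer changes.
function openEnvironment(data: SceneData, mode: "analytical" | "experiential"): void {
  const layer: InterfaceLayer =
    mode === "analytical" ? new AnalyticalLayer() : new ExperientialLayer();
  layer.activate(data);
}
```

The point of the sketch is simply that switching layers changes the feature set, not the underlying reconstruction data.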

Read more about analytical interfaces:
📖 Hugo Huurdeman, Chiara Piccoli (2021), 3D Reconstructions as Research Hubs (Open Archaeology journal)
Read more about experiential interfaces:
📖 Hugo Huurdeman, Jesse de Vos (2020). Visualizing a Radio of the Past using Technology of the Future (blogpost, Neth. Inst. for Sound & Vision, R&D blog, with link to early 3D demo)

Integrating Linked Data

Another key point of differentiation between the Virtual Interiors research environment and other 3D frameworks [9] is the pivotal role of Linked Data [10] in enriching our content. Linked Data provides a common approach to more easily interconnect data, and has also been used in the context of (digital) history [11]. It has been applied far more scarcely, however, in the context of 3D reconstructions [12].

For the Virtual Interiors 3D reconstructions, we use Linked Data to dynamically integrate biographical information from Ecartico, but also additional (and visual) resources pertaining to the reconstructions at hand. For instance, selecting Quellinus’ portraits in the entrance hall of the reconstructed house of Pieter de Graeff and Jacoba Bicker reveals a gallery of works by the same creator, and works of a similar type (from Adamlink and Wikidata). Linked Data can add value to these kinds of interfaces, although the involved SPARQL [13] queries need to be designed carefully, and it is very important to clearly display provenance.
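To give a flavour of the kind of query involved, the sketch below retrieves works (with images) by a given creator from Wikidata, roughly as a gallery component might. This is a simplified, hypothetical example: the actual Virtual Interiors queries also draw on Ecartico and Adamlink and track provenance more carefully, and the creator QID in the usage note is a placeholder.

```typescript
// Simplified example: fetch works (with images) by a given creator from Wikidata.
// The real project queries are more elaborate; this only illustrates the idea.
async function worksByCreator(creatorQid: string): Promise<{ work: string; image: string }[]> {
  const query = `
    SELECT ?work ?image WHERE {
      ?work wdt:P170 wd:${creatorQid} .   # P170 = creator
      ?work wdt:P18 ?image .              # P18  = image
    } LIMIT 20`;

  const url = "https://query.wikidata.org/sparql?format=json&query=" +
    encodeURIComponent(query);
  const response = await fetch(url, {
    headers: { Accept: "application/sparql-results+json" },
  });
  const json = await response.json();

  // Map SPARQL JSON results to a simple structure a gallery component could render.
  return json.results.bindings.map((b: any) => ({
    work: b.work.value,
    image: b.image.value,
  }));
}

// Usage (hypothetical placeholder QID for the selected painting's creator):
// worksByCreator("Q123456").then(gallery => console.log(gallery));
```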

Figure 5. Impression of Linked Data functionality in the analytical interface (prototype June ’21)

The connection between Linked Data and both the experiential and analytical interfaces is also illustrated by Figures 5 and 6. The analytical interface shows biographies, links and galleries based on Linked Data in the sidebar, while the experiential interface displays image galleries via a spatial projection surrounding and immersing the user.

Figure 6. Impression of the spatial projection of Linked Data in experiential interface (prototype June ’21)

Read more:
📖 Hugo Huurdeman, Chiara Piccoli, Leon van Wissen (2021), Linked Data in a 3D Context (DH Benelux 2021).

Analytical searching

Another important element of our research environment demonstrator is the ability to do analytical searching. As indicated in [6], 3D scenes and reconstructions can potentially consist of a vast number of objects and details, which may be hard to explore due to issues in navigating 3D spaces. Using the demonstrator’s approach, it is possible to browse a textual list of objects in the scene, but also to highlight single objects as well as categories of objects — while keeping the context of the surrounding objects. For instance, a user might want to see all furniture in the entrance hall of Pieter de Graeff’s and Jacoba Bicker’s house (Figure 5). These elements are highlighted, while the surrounding objects are still (dimly) kept visible.
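A minimal sketch of how such category highlighting could work with BabylonJS is shown below; it assumes each mesh carries a category in its metadata, which is an illustrative convention rather than the demonstrator’s actual data model or code.

```typescript
import { Scene, HighlightLayer, Color3, Mesh } from "@babylonjs/core";

// Highlight all meshes of a given category (e.g. "furniture") and dim the rest,
// so the surrounding spatial context remains visible.
// Note: HighlightLayer requires the engine to be created with stencil enabled.
function highlightCategory(scene: Scene, category: string): void {
  const highlight = new HighlightLayer("categoryHighlight", scene);

  for (const mesh of scene.meshes) {
    // "category" in mesh.metadata is an assumed convention, not a BabylonJS built-in.
    const meshCategory = (mesh.metadata as { category?: string } | null)?.category;
    if (meshCategory === category && mesh instanceof Mesh) {
      highlight.addMesh(mesh, Color3.Yellow()); // emphasize matching objects
      mesh.visibility = 1.0;
    } else {
      mesh.visibility = 0.3;                    // keep context dimly visible
    }
  }
}
```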

4. Evaluation

As mentioned, the design and creation of the research environment took place in a co-design setting with Chiara Piccoli, but also involved frequent feedback from the other team members. Besides this internal feedback, external feedback is also of key importance for resolving usability and system issues. A more formal summative study of the analytical desktop prototype, combined with Chiara’s 3D reconstructions and research about Pieter de Graeff’s entrance hall, was conducted in June 2021 (6 participants).

Participants in the study filled out the System Usability Scale (SUS) questionnaire, which measures usability. The resulting SUS score was 77.9, which means that the usability is clearly above average, with only minor improvements needed.
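For reference, the SUS score follows a standard calculation over ten items rated 1–5; the small sketch below shows that calculation. The example ratings are made up for illustration, not taken from the study.

```typescript
// Standard System Usability Scale (SUS) calculation for ten items rated 1-5.
// Odd items contribute (rating - 1), even items contribute (5 - rating);
// the summed contributions are multiplied by 2.5, yielding a 0-100 score.
function susScore(ratings: number[]): number {
  if (ratings.length !== 10) throw new Error("SUS requires exactly 10 ratings");
  const sum = ratings.reduce((acc, rating, i) => {
    const contribution = i % 2 === 0 ? rating - 1 : 5 - rating; // i = 0 is item 1 (odd)
    return acc + contribution;
  }, 0);
  return sum * 2.5;
}

// Example with made-up ratings from one hypothetical participant:
console.log(susScore([4, 2, 5, 1, 4, 2, 4, 2, 4, 2])); // 80
```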

The majority of the minor issues that came to light in the study could be fixed in the subsequent weeks. These included:

  • Navigation issues. Participants’ familiarity with 3D environments influenced how easy it was to navigate the scene. We resolved this by providing more explicit instructions and by giving more prominence to a guided tour that explains the possibilities of the 3D environment and viewer.
  • Usability issues — these could be resolved rather easily, and concerned, for instance, the selection and deselection of objects in the scene and the textual labels of features and buttons.
  • Technical issues and bugs — the majority of these smaller issues could also be fixed. More substantial issues, such as lower performance on older computers, could be partially alleviated by adding settings for simplified display and custom frame rates (see the sketch after this list).
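The sketch below illustrates how such settings could be implemented with BabylonJS: a simplified display via hardware scaling, and a capped frame rate via a throttled render loop. It is an assumption-laden sketch, not the demonstrator’s actual code.

```typescript
import { Engine, Scene } from "@babylonjs/core";

// Render at a lower internal resolution on slower machines (2 = half resolution)
// and cap the frame rate by skipping renders in the render loop.
function applyPerformanceSettings(engine: Engine, scene: Scene,
                                  simplified: boolean, maxFps: number): void {
  engine.setHardwareScalingLevel(simplified ? 2 : 1);

  const frameInterval = 1000 / maxFps;
  let lastRender = 0;

  engine.stopRenderLoop();            // replace any existing render loop
  engine.runRenderLoop(() => {
    const now = performance.now();
    if (now - lastRender >= frameInterval) {
      lastRender = now;
      scene.render();
    }
  });
}
```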

5. Preservation strategy

Generally, cutting-edge 3D technologies are of an unstable nature. Software and platforms continuously evolve, often dropping support for prior technologies. This might render apps from the past non-functional [14, 15].

To mitigate these issues, the following approach was taken in Virtual Interiors:

  1. to use only open tools for creating the 3D models (e.g. Blender),
  2. to utilize open file formats (e.g. CSV, glTF), and
  3. to use open technologies for creating our research environment (e.g. BabylonJS).

A key component is BabylonJS, an open-source 3D engine for the web. We chose a web technology to give us the flexibility to support various platforms (ranging from mobile to desktop) and to be able to use more immersive modalities such as VR via the emerging WebXR standard.
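As an impression of how these open technologies fit together, the minimal sketch below loads a glTF model into a BabylonJS scene and enables WebXR where the browser supports it. The file path is a placeholder and the setup is deliberately stripped down compared to the actual research environment.

```typescript
import { Engine, Scene, FreeCamera, HemisphericLight, Vector3, SceneLoader } from "@babylonjs/core";
import "@babylonjs/loaders/glTF"; // registers the glTF file loader

// Minimal viewer: load a glTF reconstruction and offer WebXR on supporting devices.
async function createViewer(canvas: HTMLCanvasElement): Promise<Scene> {
  const engine = new Engine(canvas, true);
  const scene = new Scene(engine);

  new FreeCamera("camera", new Vector3(0, 1.7, -3), scene).attachControl(canvas, true);
  new HemisphericLight("light", new Vector3(0, 1, 0), scene);

  // Placeholder file name; the real reconstructions are separate glTF assets.
  await SceneLoader.AppendAsync("/models/", "interior.gltf", scene);

  // Adds VR/AR support via the emerging WebXR standard, if available.
  await scene.createDefaultXRExperienceAsync();

  engine.runRenderLoop(() => scene.render());
  return scene;
}
```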

In turn, we are also publishing the underlying data and tools under an open-source license, and will make these accessible via Zenodo and DANS Easy by the end of the project. Documentation of the process, demos and code is also a key focus, augmented by screencasts and screenshots of the demonstrator at different points in time.

6. Conclusion

Although my tenure at Virtual Interiors has ended, the developments do not end here. Additional 3D reconstructions will be integrated into the research environment. Moreover, the 3D research environment will be refactored and modularized by Saan Rashid, who recently started as a software engineer in Virtual Interiors.

This blogpost discussed the creation of a demonstrator for a 3D research environment. I have outlined the challenges, our approach to tackling them, and the key importance of evaluation when creating these kinds of novel interfaces. I have also highlighted some of the numerous exciting opportunities to use 3D reconstructions in a research context, ultimately providing a form of “analytic immersion”.

Try out the experimental prototype (desktop version) with the entrance hall of the 17th-C house of Pieter de Graeff & Jacoba Bicker, reconstructed in 3D by Chiara Piccoli (prototype version: August 2021).

Footnotes*

  1. Gane & Beer (2008)
  2. See e.g. Chittaro & Ieronutti (2004); LaViola et al. (2017); Jerald (2016)
  3. See e.g. Jacko (2012); Lewis (2012); Mayhew (1999)
  4. See for instance Champion (2019); Wood et al (2019)
  5. See e.g. https://www.interaction-design.org/literature/topics/design-thinking
  6. See Huurdeman & Piccoli (2020) and this summary blogpost.
  7. See Shneiderman (2002).
  8. E.g. Dede (2009), Immersive Interfaces for Engagement and Learning.
  9. See e.g. the survey of features in current 3D frameworks by Champion & Rahaman (2020).
  10. See e.g. Antoniou & Van Harmelen (2008)
  11. See e.g. De Boer et al. (2016), Linked Data for Digital History; Meroño-Peñuela et al. (2020), Ontologies in CLARIAH; Zamborlini et al. (2017), Toward a Core Conceptual Model for (Im)material Cultural Heritage.
  12. As e.g. done by Kuroczyński et al. (2016)
  13. SPARQL is a query language commonly used for Linked Data repositories
  14. See e.g. Delve et al., 2012, The Preservation of Complex Objects—Volume 1; Lischer-Katz et al., 2017, Supporting Virtual Reality and 3D in Academic Libraries. Also, projects such as Pure3D are looking at infrastructures for 3D visualizations.
  15. More about this topic will be discussed in Huurdeman, Posthumus, Van den Heuvel, Beyond Dynamic Drawings: Restoring and Re-using Interactive 3D Visualizations (in preparation).
  16. Huurdeman & Piccoli (2021), 3D Reconstructions as Research Hubs

* Abbreviated citations correspond with Huurdeman & Piccoli (2021), 3D Reconstructions as Research Hubs; full details in the paper’s reference list.

The Importance of Context in Search User Interfaces for 3D Content

How to design user interfaces for searching and retrieving 3D models in a digital humanities context? This was the question investigated in our short paper entitled “More than just a Picture” – The Importance of Context in Search User Interfaces for Three-Dimensional Content [1] (PDF), published in the proceedings of the CHIIR conference 2020. Since the conference couldn’t physically take place due to the COVID-19 situation, we briefly present our work in this blogpost.

Background

This paper was inspired by the growing number of historical environments which are reconstructed in three dimensions, for instance the 17th century houses in Amsterdam within the Virtual Interiors project. However, search and retrieval of three-dimensional assets are so far underrepresented in research and practice. Thus, it is unclear how to support complex searches for these types of content, for instance in the context of digital humanities (DH) research.

To investigate interfaces for searching and retrieving 3D models, we looked at three questions: 

1. How do digital humanists conceptualize search in the context of historical reconstructions? 

2. Which spatial support by 2D, enhanced 2D and 3D maps do they prefer for exploratory searches?

3. Which dimensions of spatial context do digital humanists refer to when searching for 3D objects?

The questions were researched via an exploratory user study with six Digital Humanities researchers, using a so-called “think-aloud” protocol, in which participants verbalize their thoughts and actions. We evaluated interactive mockups of a prospective search engine, contextualized by 2D, enhanced 2D and 3D maps.

Figure 1: Basic (‘baseline’) search user interface (first 2 results). Result item: (A) 3D model thumbnail. From archival info.: (B) Title; (C) – (D) descriptions; (E) house address, room; (F) – (H) various metadata. (I) – (J) Degree of uncertainty in proposed 3D reconstruction.

Findings

Our findings are fully described in the paper referenced below [1], but here we summarize the main points:

Conceptualizing a 3D search engine (1)

During our user study, DH researchers imagined how a search engine for 3D objects would work, and what it would look like. They emphasized textual aspects, such as “a list of paintings that were in the house”, item descriptions, links to sources and various enrichments. Visual aspects also played a key role: for instance, the 3D search engine should be “visually attractive”, and one “should know what things looked like” and “not just get the image”. The overall spatial and historical context was emphasized as well: not just knowing “where something is”, but also how an object “relates to other objects and environments”, according to the participants.

Comparing 2D & 3D map variants (2)

In a subsequent part of our study, we looked further into which spatial support should ideally be provided in a 3D search engine. Participants tried out interactive mockups of a prospective search engine, containing static 2D, enhanced 2D and 3D maps, in a randomized order (see Figures 1 and 2). The responses to the post-task surveys can best be summarized by participants’ satisfaction ratings: these were clearly highest for the 3D maps (4.2 out of 5), as opposed to enhanced 2D (3.8) and static 2D maps (3). As one participant indicated, the 3D maps “give a better perception of space”. At the same time, a need for combining map modalities came up: the usefulness of the 2D and 3D maps depends on the type of task a researcher is conducting.

Dimensions of spatial context (3)     

Our third question focused specifically on the role of spatial context, via a thematic analysis of the transcribed think-aloud sessions, which derived the main themes and subthemes occurring across the combined set of participants. The main identified themes were:

  • Object properties, referring to the location and size of objects
  • Object relationships, referring to the relations of objects with other objects or with the environment
  • Perception of space, including proportions, structural elements, space functions and lighting

These categories are visualized in Figure 3, which depicts the subcategories of the three main themes and how many study participants referred to them. The findings represented in this diagram can help inform the design of future search interfaces for 3D objects, and help decide which kinds of spatial context are needed.

Conclusion

Summarizing, our exploratory study showed the prime role of context in retrieving historical 3D objects, such as object properties, object relationships and cues in the environment. Participants expressed a preference for 3D contextual maps when presented with different types of maps in a prospective search interface. Our findings suggest a need for hybrid visualization modalities depending on the task at hand, which can be informed by the identified spatial context dimensions. Please refer to our published paper for full details and references.

[1] Hugo Huurdeman & Chiara Piccoli (2020). “More than just a Picture” – The Importance of Context in Search User Interfaces for Three-Dimensional Content. In Proceedings CHIIR 2020. ACM. (PDF).

Geographies & enhanced publications at ECARTICO data sprint

On Thursday 12 March, a data sprint on the ECARTICO and ONSTAGE datasets took place at the University of Amsterdam, organized by Amsterdam Time Machine / Golden Agents. Since these datasets are very relevant to the Virtual Interiors project, several team members joined the sprint.

As indicated on the data sprint website, “ECARTICO contains biographical data about painters, engravers, publishers, goldsmiths, writers and other creative people working in the Low Countries” in the 16th and 17th centuries, and provides information about their networks. Additionally, ONSTAGE contains the complete performance history of the Amsterdamse Schouwburg from 1638 to 1940. Since these datasets are very extensive (ECARTICO alone contains more than 50,000 entries) and also provide numerous links to e.g. Wikidata, DBNL and RKD, it was very worthwhile to experiment with them during the sprint.

During the event, team members of Virtual Interiors joined a workshop focused on Geographies and networks of style and subject matter (Workshop 3), and one on Enhanced publications (Workshop 4). Workshop 3 looked at the opportunities to use and combine ECARTICO, Wikidata and RKD data to create geographies of style and social networks. Using “semantic queries” in SPARQL, we could retrieve painters from certain periods and get statistics on the cities they worked from. For a full overview and explanation of the methods, please refer to the page for Workshop 3. Workshop 4 focused on enhanced publications, i.e. single-screen applications which allow for simultaneously accessing biography texts, artwork images and other updated biographical information. In this case, too, the full information on project objectives, approaches and results can be found on the dedicated page for Workshop 4.
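To give a flavour of such a “semantic query”, the sketch below counts painters per work location for a given birth period, using Wikidata only. It is an illustrative stand-in rather than the workshop’s actual query, which combined ECARTICO, Wikidata and RKD data; the chosen period is arbitrary.

```typescript
// Simplified "semantic query": count painters per work location for a birth period.
const PAINTERS_PER_CITY = `
  SELECT ?cityLabel (COUNT(DISTINCT ?painter) AS ?painters) WHERE {
    ?painter wdt:P106 wd:Q1028181 .        # occupation: painter
    ?painter wdt:P569 ?born .
    FILTER(YEAR(?born) >= 1580 && YEAR(?born) <= 1620)
    ?painter wdt:P937 ?city .              # work location
    ?city rdfs:label ?cityLabel .
    FILTER(LANG(?cityLabel) = "en")
  }
  GROUP BY ?cityLabel
  ORDER BY DESC(?painters)
  LIMIT 10`;

// Run the query against the public Wikidata SPARQL endpoint and print the counts.
async function paintersPerCity(): Promise<void> {
  const url = "https://query.wikidata.org/sparql?format=json&query=" +
    encodeURIComponent(PAINTERS_PER_CITY);
  const res = await fetch(url, { headers: { Accept: "application/sparql-results+json" } });
  const data = await res.json();
  for (const row of data.results.bindings) {
    console.log(`${row.cityLabel.value}: ${row.painters.value}`);
  }
}
```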

For Virtual Interiors, the results of the data sprint are particularly interesting: they provide inspiration for our integration of Linked Data, highlight potential opportunities and difficulties, and can inform our explorations into creating enhanced publications.
