A Design Space for Visualization with Large Scale-Item Ratios


1 Introduction and Background↩︎

Designing visualizations where the smallest data items can be clearly seen from a high level is challenging. Visualizations typically have a limit to the space they can use, and yet must encode items in the limited space while retaining distinguishability. Sometimes, items may even be smaller than can be represented, for example if they require subpixel positioning.

The challenge arises in a variety of disciplines. Gillmann et al. include the challenge of designing multiscale visualizations in their ten open challenges in medical visualization [1]. They comment on how multiple scales are often not visualized simultaneously in medical imaging, and that the field needs to find techniques for integrating the different scales together, suggesting high-level ideas such as focus and context, zooming, and filtering. Similarly, Ståhlbom et al. describe working with multiscale data as a challenge for those working on DNA sequencing, who need to analyze the data at varying levels [2]. In our own work, we describe the challenge of designing a digital exhibit called DeLVE for educating museum visitors about the geological and biological history across varying scales of time, where visitors need to be able to relate the various scales to each other [3].

Visualization designers have used many techniques to address this challenge, ranging from interactive zooming to multiple simultaneously-visible scales. However, no framework exists to support designers with low-level design decisions while facing this challenge. To meet this need, we present a design space for these scenarios, motivated by the authors’ work on and challenges with designing DeLVE.

First, we must carefully describe the visualization design scenarios that are the target of this design space to ensure a feasible scope. We define value as the magnitude of data items in data space. We can then construct a mapping, a transformation from a value to a discretized display space position. A scale is a region of the discretized display space that depicts a mapping, and it ranges from minimum to maximum positions in display space. The size of a scale or item is the difference between its minimum and maximum positions in display space. Finally, we define scale-item ratio to be the ratio between the size of the largest scale and the size of the smallest item, both in display space. This paper covers scenarios in which the scale-item ratio is large.

We identify large scale-item ratio scenarios by assessing whether we face one or both of two challenges when visualizing the full dataset on a single linear scale that fits fully within a human range of vision. The first challenge is when the smallest item is not visible, for example if its size, as determined by the linear scale, results in it being smaller than a pixel, or the smallest manufacturable detail size for physicalizations. The second challenge is when the smallest items are not clearly separable, for example when multiple items must fit within a single pixel or smallest manufacturable detail size. If one or both of these challenges arise under these conditions, then we consider the visualization design scenario to be one with large scale-item ratio. This approach for identifying large scale-item ratio scenarios depends heavily on context, however; differences in factors like screen pixel density or physical material used to display the visualization will directly impact the quantitative limit between large and small scale-item ratios.
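These definitions lend themselves to a small numeric check. The sketch below is ours, not from the paper: the function names, the 2000-pixel scale, the one-pixel detail threshold, and the timeline scenario are all illustrative assumptions. It computes an item's display-space size under a single linear mapping and tests whether the first challenge (the smallest item not being visible) arises.

```python
def item_size_px(item_span, data_span, scale_size_px):
    """Display-space size of an item under a single linear mapping
    from a data range of data_span onto a scale of scale_size_px pixels."""
    return scale_size_px * item_span / data_span

def scale_item_ratio(scale_size_px, smallest_item_px):
    """Ratio between the size of the largest scale and the size of the
    smallest item, both in display space."""
    return scale_size_px / smallest_item_px

# Hypothetical scenario: a 2000 px timeline of Earth's ~4.5-billion-year
# history, where the smallest item spans 100 years.
smallest = item_size_px(100, 4.5e9, 2000)  # far below one pixel
is_large = smallest < 1.0                  # challenge 1: item not visible
ratio = scale_item_ratio(2000, smallest)   # the scale-item ratio itself
```

As the comments note, the one-pixel threshold is context-dependent; on a high-density screen or for a physicalization, the smallest resolvable detail size would differ.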

The two primary contributions of this paper are a design space for visualization scenarios with large scale-item ratios, and strategies that partition the example set according to shared approaches for design space choices. We also provide the secondary contribution of a corpus of 54 examples of visualizations developed by both researchers and practitioners, coded according to the design space and the strategies.

2 Related Work↩︎

We now discuss related work, first covering work that analyzes collections of large scale-item ratio visualizations and then discussing design spaces.

2.1 Existing Frameworks↩︎

Existing work has investigated visualizations that help users understand scales, as well as the use of multiscale visualization. In their work on concrete scales, Chevalier et al. provide a framework for the varied use of the technique in visualization [4]. While we consider the use of concrete scales as a dimension of our design space, we do not break down its use further, and the other dimensions are independent from concrete scales.

Garrison et al. conduct a survey on and construct an overview of visualization in physiology that focuses on multiscale problems [5]. While their work covers many visualization examples, they focus on breaking down their findings by different parts of physiology research and do not provide a model or guidance for visualization design elements.

Jakobsen et al. investigate the relationship between display size, information space, and scale. They conduct two user experiments, varying the visualization technique, the display size, and the mapping between the information space and the display size. They report their findings and discuss how the varying factors impact a user’s ability to complete a task. In contrast to our discussion of the scale-item ratio, they do not name any of the relationships between factors. Further, they do not provide lower-level guidance for visualization design as we aim to do.

The most closely related work to our own is the structured literature analysis of design practices in multiscale visualization research by Cakmak et al. [6]. Their work differs from ours: where they describe the state of research on multiscale visualization, we aim to provide a framework of lower-level design components used in large scale-item ratio visualizations. Their coding scheme is very different from the dimensions and choices in our own design space: where they describe high-level idiom and interaction choices, we provide lower-level design components that are independent from idiom. Further, their scope is far larger than our own, covering many aspects of multiscale visualization that extend beyond the narrower context of large scale-item ratios that we investigate. However, their coverage is narrower, because their analysis covers only the academic literature; they exclude non-academic examples of real-world use by practitioners, which we do include. This difference leads many of their dimensions and choices to fall far outside the scope of our work.

Several of their design considerations do touch on our concerns. The most relevant are Understand relations across different scales, Guide users during multiscale navigations, Visualize abstraction measurements across scales, and Design tailored multiscale domain visualizations. Our design space can be seen as a response to these calls for action, providing a structure for analysis to address these very questions. Similarly, our design space responds to one of the open research questions they identified, their call for further quantification of visual scalability. We use their paper as a seed paper for examples from academic literature, and we incorporate the search terms they used in our systematic literature search.

2.2 Design Spaces↩︎

Design spaces impose systematic structure on a set of possibilities for a specific problem, capturing the key variables at play. They provide an actionable structure for systematically reasoning about solutions [7]. Describing and analyzing portions of a design space allows us to understand differences among designs and suggest new possibilities [8]. Design spaces also increase cognitive efficiency and support inferences by grouping similar instances together, facilitating reasoning about classes rather than individual instances [9].

Visualization researchers have developed design spaces for a variety of topics. Goffin et al. create a design space for word-scale visualizations [10], which, similar to our design space, focuses on the design components of the visualizations within their scope. However, it is useful to consider design spaces for concepts beyond design components. Schulz et al. use a design space to describe abstract visualization tasks [11]. Elliot et al. construct a design space to describe methods, specifically for vision science research on visualizations [7]. Kim et al. provide a design space around accessible visualization which encompasses a combination of these concepts, including design components and abstract tasks as well as types of users and technologies [12]. No previous work provides a design space for visualization with large scale-item ratios; this paper addresses that gap.

3 Methods↩︎

The development and validation of our design space took place in three stages: Initialize, Expand, and Refine. We iteratively created a corpus of examples to guide the creation and assess the value of the design space, adding new relevant examples and removing those that were no longer in scope at each stage. While collecting examples, we iteratively analyzed the corpus to construct the eight dimensions of the design space. We then analyzed the dimension choices these examples use to construct a set of five strategies, and used the design space choices to identify missed opportunities in certain examples.

3.1 Corpus Collection and Design Space Construction↩︎

We now discuss the process of each stage of corpus collection and dimension iteration.

3.1.1 Initialize↩︎

In the Initialize stage, we constructed an initial set of 21 examples. We began with a small number of examples that we were already aware of or that were provided to us by domain expert co-authors with whom we had collaborated on other projects. After collecting the initial set, we paused collection to construct an initial version of the design space dimensions, so that we could collect further examples in a later stage to validate those choices. Our initial design space fully described the initial set.

3.1.2 Expand↩︎

In the Expand stage, we focused on increasing the coverage of the example corpus to validate our initial design space. We collected 16 more examples, called the expansion set, primarily through forward and backward chaining on our existing academic examples but also from Google searches for practitioner work, additional suggestions, and author memory, resulting in a total of 37 examples in the corpus. The additional examples led to refinements which involved both introducing new dimensions to capture more differences and eliminating or merging uninformative dimensions in an effort to improve distinguishability between examples. After modifying the design space, we re-coded the initial set in the new dimensions. Since this stage resulted in changes to the design space, we sought to collect further examples, again to validate the newly-refined dimensions.

3.1.3 Refine↩︎

By the final Refine stage, we had exhausted all examples from author memory and domain expert suggestion, the majority of which were in real-world use rather than from academic literature. We chose to conduct a systematic literature search to collect relevant examples from academic literature, validate the design space, and finalize the dimensions. We began by reviewing each of the 122 articles included in Cakmak et al.’s collection [6], and retained only papers that discussed visualization systems with large scale-item ratio, leaving 19 papers. The result was 20 new examples, as one paper contributed two systems. We then conducted a search for multiple scales visualization examples since 2021, the year of Cakmak et al.’s submission. We used the Bielefeld Academic Search Engine (BASE), which meets quality requirements for academic literature searches [13]. We filtered the search to only include journal and conference articles since 2021 (inclusive), and used the same keywords used by Cakmak et al., namely the keyword “visualization” paired with one of the following terms: “multiple scales”, “multiresolution”, “multiple levels of detail”, “multi-level”, and “multiscale” [6]. This initial search returned 112 results. We then reviewed each article, included only ones that covered large scale-item ratio visualization systems, and removed duplicates, leaving six articles. In total, data collection in this stage, including both the backward chaining from Cakmak et al. and the keyword search, led to 26 new examples which we call the systematic set. We coded these new examples in the design space. We also removed 9 previously-collected examples as they no longer met our inclusion criteria of covering a scenario with large scale-item ratio. At the end of the refine stage, our corpus contained 54 examples; 14 examples remained from the initial set, 14 from the expansion set, and 26 from the systematic set.

While coding the systematic set in our design space dimensions, we did not identify any meaningful differences between examples that were not already described by our design space, validating it. However, our reflection on the entire corpus at the end of this stage led to refinements to the design space to improve understandability. Finally, we re-coded all 54 examples in the corpus into our final set of dimensions and choices. We present the full and final design space in detail in Section 4.

3.2 Strategies and Missed Opportunities Analysis↩︎

We also sought to identify implications for design using our design space. To this end, we conducted our analysis in two ways. First, we analyzed coded examples within the design space to find meaningful groups through iterative coding in three steps. We began by coding similar examples into groups that differed by only one or two dimensions. Then, we determined which of the dimensions remained consistent within each group. We used these sets of consistent dimensions to give meaningful names to the groups and to define precisely what makes an example fit within a group. We would then begin this process again, but in the first step would code the groups rather than the individual examples, leading us to merge similar groups into single, larger groups. Once we were no longer able to merge groups together, we applied the definitions of the groups to the entire corpus again to ensure that we had the correct examples in each group and that each example fit into exactly one group. This process led us to identify five groups, which we call strategies: shared approaches with respect to design space choices. We discuss the results of this analysis in Section 5.

We also analyzed missed opportunities in our corpus, identified through use of the design space in general and through the interaction between the choices for strategy and for the dimension pertaining to purpose. We noted large discrepancies, such as heavily used or underused choices, across the entire corpus and within specific strategies. We identified examples using the discrepancies and analyzed whether alternative choices could have improved each design. We discuss our findings from this analysis in Section 6.1.

4 The Design Space↩︎

The design space contains three independent dimensions, each with between one and four subdimensions, for a total of eight subdimensions. Figure [fig:dimensions] shows an overview of the dimension hierarchy. In this section, we describe each dimension in detail (bold), and explain the choices within it (italics) by referring to examples in the corpus. We also provide four synthetic illustrations that are simplified and stylized evocations of examples found within the corpus, shown in Figure 1. We designed these illustrations to, together, cover the most complex visual concepts described in the design space’s dimensions, enabling us to describe them more clearly. Table 1 shows all examples and their codes for each dimension. See Table 2 for a description of each example in the corpus.

Figure 1: The four synthetic illustrations. a) Static Multilevel is a static visualization with two scales, one higher-level and one lower-level. The two scales use different encodings, and the point of focus for the lower-level scale is determined by existing structure in the data. The two scales are connected by connection line marks. b) Interactive Multilevel is a visualization with two scales. The user chooses the zoom level by selecting one of four discrete options in the upper scale to zoom in to in the lower scale. c) Interactive Zoom is a line chart that the user can both zoom into using a scroll wheel and pan across by dragging with a mouse, based on Multiscale Trace [14]. d) Auto Zoom is a visualization that employs automatic zooming between objects at a constant rate, based on Powers of Ten [15].

Table 1: The 54 corpus examples grouped according to the 5 strategies, and the 4 synthetic illustrations, coded by the hierarchical dimensions of the design space. The Count subdimension uses the format total:simultaneous:separate to display its three components. Abbreviations: init. = initialize; assoc. = association; cont. = continuous; disc. = discrete; inf. = infinite.
Scales Navigation Familiarity
Example (short title) Citation Source Stage count step type encodings assoc. type mode visceral time concrete
Single-View Pan and Zoom
Zoom Line Chart [16] academic init. :1:1 user cont. same both digital no no
Cuttlefish (fig 6) [17] academic expand :1:1 user cont. same both digital no no
EVEVis [18] academic expand :1:1 user mixed different both digital no no
Multilevel Poetry [19] academic expand :1:1 user disc. different both digital no no
Multiscale Trace [14] academic expand :1:1 user cont. different both digital no no
Europe OSM [20] academic refine :1:1 user cont. different both digital no no
Zoomable Treemaps [21] academic refine :1:1 user disc. same both digital no no
Chameleon [22] academic refine :1:1 user cont. same both digital no no
Hierarchical Route Maps [23] academic refine :1:1 user cont. same both digital no no
Large Viewing Vis [24] academic refine :1:1 user cont. same both physical no no
Kyrix-S [25] academic refine :1:1 user cont. same both digital no no
Membrane Mapping [26] academic refine :1:1 user cont. same both digital no no
Execution Trace Vis [27] academic refine :1:1 user cont. same both digital no no
MuSE [28] academic refine :1:1 user cont. same both digital no no
ScaleTrotter [29] academic refine :1:1 user cont. same both digital no no
SpaceFold [30] academic refine :1:1 user cont. same both digital no no
TagNetLens [31] academic refine :1:1 user disc. same both digital no no
Hierarchy Vis [32] academic refine :1:1 user disc. same both digital no no
Chemical Vis [33] academic refine :1:1 user cont. different both digital no no
Simultaneous Occluding Embed
Melange [34] practitioner init. :2:1 user cont. same none both digital no no
FingerGlass [35] academic expand :2:1 user cont. same none both digital no no
Tabletop Gestures [36] academic refine :1:1 user cont. same marks both digital no no
Gimlenses [37] academic refine :3:1 user cont. same marks both digital no no
GrouseFlocks [38] academic refine :2:2 user cont. same marks both digital no no
Digital Earth [39] academic refine :3:1 user cont. same marks both digital no no
AdvEx [40] academic refine :2:1 user disc. same marks both digital no no
Scalable Insets [41] academic refine :2:1 user cont. same marks both digital no no
Multi-Foci COVID Vis [42] academic refine :2:1 user cont. different marks both digital no no
PhysicLenses [30] academic refine :2:1 user cont. same marks both digital no no
TissUUmaps [43] academic refine :2:1 user cont. same marks both digital no no
TrailMap [44] academic refine :2:1 user cont. same none both digital no no
Simultaneous Separate Multilevel
Multiscale Unfolding [45] academic init. :6:6 user cont. same none both digital no no
Rivet (MTSC) [46] academic init. :3:3 user cont. same marks both digital no no
Temp Earth [47] practitioner init. :7:7 data driven same none none no
TraXplorer (fig 2) [48] academic init. :5:5 user cont. same channels both digital no no
Mandelbrot Explorer [49] practitioner expand inf:6:6 constant same none both digital no no
MizBee [50] academic expand :3:3 user disc. different channels both digital no no
PolyZoom [51] academic expand :3:3 user cont. same marks both digital no no
Chromoscope [52] academic refine :2:2 user cont. different none both digital no no
TimeNotes [53] academic refine :4:4 user cont. same both both digital no no
Familiar Zoom
DeLVE [3] academic init. :10:10 data driven same both zoom digital no yes
Here is Today [54] practitioner init. :1:1 data driven same zoom digital no yes
Powers of Ten [15] practitioner init. :1:1 constant same zoom digital yes yes
XKCD Money [55] practitioner init. :5:5 constant same marks none digital no yes
Cell Size and Scale [56] practitioner expand :1:1 user cont. same zoom digital no yes
Scale of the Universe 2 [57] practitioner expand :1:1 user cont. same zoom digital no yes
The Size of Space [58] practitioner expand :1:1 data driven same zoom digital no yes
Universcale [59] practitioner expand :1:1 user cont. same zoom digital no yes
US Debt [60] practitioner expand :3:3 data driven same none zoom digital no yes
Lengthy Pan
Science Museum Timeline [61] practitioner init. :1:1 pan physical yes no
The Deep Sea [62] practitioner init. :1:1 pan digital yes yes
Trail of Time [63] academic init. :1:1 pan physical yes no
University Timeline Walk [64] practitioner init. :1:1 pan physical yes no
Wealth Shown to Scale [65] practitioner expand :1:1 pan digital yes yes
Synthetic Illustrations Set
Static Multilevel :2:2 data driven different marks none no no
Interactive Multilevel :2:2 user disc. same channels both digital no no
Interactive Zoom :1:1 user cont. same both digital no no
Auto Zoom :1:1 constant same zoom digital no yes

4.1 Scales↩︎

The Scales dimension describes a visualization’s encodings in terms of the number of different scales, how the mappings of those scales differ, whether the scales share low-level encoding choices, and how the scales are associated with each other. It includes four subdimensions: count, step type, encodings, and association.

4.1.1 Count↩︎

The count subdimension includes three quantitative components: total, simultaneous, and separate.

The total component represents the total number of scales, using the definition from Section 1, accessible in a visualization. In the case that the number of possible unique scales is discrete, it is straightforward to count. For example, in the synthetic illustration Static Multilevel shown in Figure 1a, we can simply count the two scales. In the corpus example Multiscale Unfolding [45], which is a visualization of multiple levels of DNA, there are six unique scales so the total component is six.

In the case of multiple scales being chosen from a continuous scale, we count the orders of magnitude using the expression \(\operatorname{round}(\log_{10}(max) - \log_{10}(min) + 1)\). In the synthetic illustration Auto Zoom, shown in Figure 1d, the visualization automatically zooms out through different objects encoded by coloured shapes. In this synthetic illustration, there are four different orders of magnitude, so the total component is four. An example of the continuous case from our example corpus is Scale of the Universe 2 [57], a visualization of various objects in the universe from beach balls to the universe itself to quantum foam one Planck length wide. In this example, the user navigates through scales from \(10^{27}\) to \(10^{-35}\) of the original scale. Using these values, we calculate 63 orders of magnitude.
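This count can be computed directly from the expression above; a minimal sketch (the function name is ours, not from the paper):

```python
import math

def total_scale_count(max_scale, min_scale):
    """Total count for scales chosen from a continuous range:
    round(log10(max) - log10(min) + 1), the orders of magnitude spanned."""
    return round(math.log10(max_scale) - math.log10(min_scale) + 1)

# Scale of the Universe 2 spans 1e27 down to 1e-35 of the original scale:
total_scale_count(1e27, 1e-35)  # 63 orders of magnitude
```

A single fixed scale degenerates to a count of one, since max and min coincide.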

The corpus example Mandelbrot Explorer, a web page where users can explore Mandelbrot sets by zooming in to areas of their choosing, stands out from other examples as the user can zoom an infinite number of times due to its recursive nature [49]. We code this example as having an infinite total count.

The simultaneous component represents the number of different scales that are visible from a viewpoint or on a screen at once. We define visible as meaning the user can still see the finest detail, whether that is an individual item, the smallest trend, or something else. Auto Zoom only shows a single scale at a time, and the user has to wait for it to zoom out to show a different scale. A corpus example with only one scale visible at once is the large physical timeline of Earth’s biological history at a science museum local to the authors (Science Museum Timeline) [61].

Static Multilevel shows two different scales on screen at once. MizBee [50], a visualization tool for analysing genomic data at the genome, chromosome, and block levels, shows three different scales on screen simultaneously.

The number of simultaneous scales cannot be larger than the total component, as any scales which are different and visualized simultaneously must be counted towards the total count.

The separate component represents the number of simultaneously visible scales that do not occlude each other in any way. The corpus example FingerGlass, which is a technique for zooming on multitouch screens, creates a zoomed lens on top of the original view rather than in a separate area of the screen when a user zooms [35].

The synthetic illustrations Static Multilevel and Interactive Multilevel, shown in Figures 1a and b, both have two separate scales, as in neither case does one scale occlude another. The corpus example Rivet [46], which includes a multi-tier strip chart that shows multiple zoomed levels of computer systems data, shows three separate scales at once without occlusion, so its separate component is three.

The number of separate scales cannot be larger than the simultaneous component, as any scales which are different and visualized simultaneously without occlusion must be counted towards the simultaneous count.
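The two ordering constraints on the count components can be expressed as a single invariant over the total:simultaneous:separate triple used in Table 1. This sketch is ours, not from the paper:

```python
import math

def counts_are_consistent(total, simultaneous, separate):
    """Invariants from the count subdimension: every separate scale is also
    simultaneous, and every simultaneous scale counts toward the total,
    so separate <= simultaneous <= total must hold."""
    return separate <= simultaneous <= total

counts_are_consistent(3, 3, 3)         # e.g. Rivet: three separate scales
counts_are_consistent(math.inf, 6, 6)  # Mandelbrot Explorer: infinite total
```

Representing an infinite total count as `math.inf` keeps the comparison well-defined for recursive examples like Mandelbrot Explorer.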

4.1.2 Step Size Type↩︎

Step size type is a subdimension that describes the relationship between steps from scale to scale in multiscale examples, and it has five choices: constant, data driven, user continuous, user discrete, and user mixed. For a visualization to have steps between scales, it must have multiple scales, so we left the step size type cell blank for examples with a total count of one.

When this dimension is constant, it means that the difference between scales, which is typically multiplicative, does not change. Auto Zoom zooms out at a constant rate, so the step size is constant. A corpus example is XKCD’s Money webcomic (XKCD Money) [55], a unit chart of different quantities of money on different scales with multiplicative increases of 1000 between each scale.

The step size type being data driven means that the scales’ mappings change by a data-based amount. Static Multilevel uses a data driven step size as it is not chosen by the user and it is based on an aspect of the data. Multiscale Unfolding [45] visualizes genomic data on multiple scales, and the scales are based on different levels of structure from the chromosome to the base pair. The differences between these scales, then, come directly from that pre-existing structure.

The user continuous option describes when the user chooses the step size from a range. The synthetic illustration Interactive Zoom, shown in Figure 1c, uses this choice as the user decides on the specific zoom level themself. An example is the zoomable timeline view (Multiscale Trace) from Ezzati-Jivan et al. [14], a visualization of large quantities of trace data that the user can zoom in to.

In some visualizations, designers allow users to choose how to navigate from a discrete set of options, a choice we call user discrete. The synthetic illustration of Interactive Multilevel, shown in Figure 1b, uses this choice as the user chooses the step size by selecting an area to zoom in to from a set of four discrete options. Figure 6 of Waldin et al.’s Cuttlefish paper [17] illustrates this idea with a hierarchical treemap where the user can click on a node and have it expanded out, effectively zooming in.

Finally, some visualizations use both user continuous and user discrete, which together form the final option for this dimension: user mixed. EVEVis from Miller et al. [18], a visualization for evolution data at multiple scales, uses user mixed as the user can first select a region from a continuous space, then further zooming is done via discrete selections.

4.1.3 Encodings↩︎

Encodings is a subdimension that describes whether different scales use the same or different visual encodings. This subdimension relies on a visualization using more than one scale, so it is left blank for corpus examples with a total of one.

Static Multilevel uses different encodings on each scale, with a line chart at the higher level and a bar chart at the lower level. Mittman et al.’s multi-level visualization scheme for poetry (Multilevel Poetry) [19], which visualizes poetry at the four levels of phoneme, full poem, small set of poems, and large set of poems, uses different encodings on each of these four different scales.

In contrast, an example that uses the same encoding on every scale is Auto Zoom. The same is true for Zoom Line Chart from FusionCharts [16], a line chart where the user can choose regions to zoom in to.

4.1.4 Association↩︎

Association is a subdimension that describes how marks representing the same item can be visually linked across simultaneously visible scales. It relies on a visualization including multiple scales visible at once, and was left blank for examples with a simultaneous count of one.

Chromoscope, a visualization of genomic data at multiple scales [52], does not include any marks or channels to show association between its different scales, so we code this subdimension as none.

Association can be done by marks, often using connection marks. An example of this approach is Static Multilevel, where lines show how the lower level and the focused point of the higher level are associated.

The other option we identified in our corpus was association by visual channels such as colour. Interactive Multilevel uses the colour channel to show that the lower-level scale is zoomed in to the blue part of the higher-level scale. Colour is used to show association in TraXplorer [48], an implementation of stack zooming where there are multiple levels and branches of zoom, as the colour of a selected region’s background matches the colour of its zoomed-in counterpart’s border.

TimeNotes [53], a multiscale visualization technique for time-oriented data inspired by stack zooming, uses both marks and channels to show association.

4.2 Navigation↩︎

The Navigation dimension covers the interaction capabilities of the design. It includes three subdimensions: type, mode, and visceral time.

4.2.1 Type↩︎

Type is a subdimension that describes the ways in which users can navigate between and within scales.

Some visualizations, such as Static Multilevel, use the none choice, meaning they do not have any interaction. The corpus example Temp Earth visualizes the Earth’s temperature on a set of increasingly-zoomed scales towards the more recent past [47], and also has no user navigation.

Zooming is navigation that changes the mapping of a scale or adds another scale with a different mapping without changing the point of focus. Auto Zoom uses zooming to gradually change the mapping. Cell Size and Scale [56], a digital visualization similar to Scale of the Universe 2 [57] where the user zooms through objects of different scales, is an example of zooming, as the scale changes but the point of focus does not.

Panning is navigation that changes the point of focus without changing the mapping. The Deep Sea [62] is a digital visualization where the user gradually pans to traverse the range of ocean depths from the surface to the bottom of the ocean. The mapping of the scale does not change during the traversal, but the point of focus on the scale does.

Some visualizations incorporate both zooming and panning. Rivet [46], a visualization tool for the analysis of computer system data, has a multi-tier strip chart where the user can select a point of focus to zoom in on, effectively panning and zooming at once. The synthetic illustration Interactive Zoom also uses both, but rather than having the user drag the mouse to select the new zoom window in one step, it has the user place the mouse on the desired point of focus and use the scroll wheel to zoom until they are satisfied.

When visualizations incorporated both panning and zooming, we found that they were intended to have open-ended navigation, where the navigation is driven fully by the user. When only one of zooming or panning was in use, the designs had an intended path to follow through the data, either by panning along a large scale or by zooming along many scales.
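The type distinctions above can be made concrete with a small sketch of a linear mapping from data value to display position. This is our own hypothetical illustration (the class and names are not from any corpus example): zooming changes the mapping while holding the point of focus fixed in display space, whereas panning shifts the point of focus while leaving the mapping unchanged.

```python
class Mapping:
    """A linear mapping: position = (value - origin) * pixels_per_unit."""

    def __init__(self, origin, pixels_per_unit):
        self.origin = origin                  # data value at display position 0
        self.pixels_per_unit = pixels_per_unit

    def to_display(self, value):
        return (value - self.origin) * self.pixels_per_unit

    def zoom(self, factor, focus_value):
        # Zooming changes the mapping (pixels per data unit) while
        # keeping the focused value at the same display position.
        focus_pos = self.to_display(focus_value)
        self.pixels_per_unit *= factor
        self.origin = focus_value - focus_pos / self.pixels_per_unit

    def pan(self, delta_value):
        # Panning shifts the point of focus (the origin) while leaving
        # the mapping (pixels per data unit) unchanged.
        self.origin += delta_value


m = Mapping(origin=0.0, pixels_per_unit=1.0)
m.zoom(factor=10.0, focus_value=50.0)  # 10x zoom centred on value 50
assert m.to_display(50.0) == 50.0      # focus keeps its display position
m.pan(5.0)                             # focus shifts; mapping unchanged
assert m.pixels_per_unit == 10.0
```

Under this framing, the open-ended designs above simply expose both operations to the user, while single-path designs expose only one.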

4.2.2 Mode↩︎

Mode is a subdimension that describes how navigation is done, whether through physically moving oneself or by digitally navigating via computer input devices. If there is no navigation in an example, we left the mode cell blank.

In some navigable visualizations, users control the navigation by physically moving their body. Visitors to the University Timeline Walk [64], a timeline of Earth’s history embedded in the ground at a university local to the authors, pan across the timeline by physically moving so they can view the next pieces of information. Large Viewing Vis, a technique that uses a large screen with visible features at many levels and that requires users to move closer to see smaller features, has users both pan and zoom physically.

Virtual visualizations are controlled by input devices such as computer mice, keyboards, or touch screens. Interactive Multilevel is an example of a visualization where navigation is controlled by a mouse. FingerGlass [35], a technique for digital geographic maps with pinch-to-zoom mechanics, uses touch screens for navigation. Universcale [59], a visualization where the user scrolls through different objects on different scales, relies on a device that can scroll, such as a mouse with a scroll wheel.

4.2.3 Visceral Time↩︎

The visceral time subdimension describes whether a visualization relies on the user’s experience of time passing while navigating. If there is no navigation in an example, we left the visceral time cell blank.

Most of our corpus examples do not rely on the experience of time. For example, in Here is Today, a timeline visualization with a single total scale that starts with a single day and zooms out to the age of the universe [54], the user navigates by clicking and the animations are fast, meaning that users can fully navigate through the visualization very quickly.

Some visualizations of fully unfamiliar datasets do rely on visceral time. Trail of Time, a large physical timeline that people hike along in the Grand Canyon where each meter represents one million years [63], is an example of using the significant amount of time it takes for a visitor to complete the hike to help them conceptualize the multi-billion-year timeline. Another corpus example of this is Powers of Ten [15], a video documentary that gradually zooms between different scales. Video playback at a standard speed takes nearly ten minutes, so the sense of the time required to zoom at a constant rate between the scales aids the viewer’s conceptualization of the difference between the scales.

4.3 Familiarity↩︎

The familiarity dimension has a single subdimension: concrete. It describes whether visualizations compare familiar objects to unfamiliar ones to help users understand the scale of the unfamiliar ones, and is related to the concept of concrete scales [4]. Examples of familiar objects from our corpus are meters, days, and dollars.

Execution Trace Vis [27], a visualization of complex program traces, is an example of a visualization that does not incorporate familiar objects, as all items in the dataset are events in a trace log, which occur over time spans much smaller than those humans are familiar with.

Universcale [59] begins focused on familiar objects near a meter in size, such as turtles or dogs which are familiar to many, but zooms to show very small or very large objects whose sizes humans do not directly experience.

5 Strategies↩︎

We identified five groups of examples, which we call strategies, from our iterative coding of examples with respect to their dimension choices: Single-View Pan and Zoom, Simultaneous Occluding Embed, Simultaneous Separate Multilevel, Familiar Zoom, and Lengthy Pan. The strategies are both concise, in that they are simple to reason about due to their lack of complexity, and disjoint, in that they do not overlap. While each strategy constrains certain dimensions to particular choices, other choices, unmentioned below, are unrestricted. We now describe and discuss each strategy.

5.1 Single-View Pan and Zoom↩︎

19 of our corpus examples use the Single-View Pan and Zoom strategy, which involves multiple total scales but only one simultaneous scale and both zooming and panning. Given the large number of examples using this strategy, there are many variations. While all of the examples which use this strategy have only a single simultaneous scale, the number of total scales can vary significantly depending on the number of scales or levels within the data.

Some examples that use this strategy rely on pointing and scrolling with the mouse, while others allow the user to click and drag to choose the next viewing window. Others force the user to choose a discrete zoom option from a set, rather than allowing continuous zooming, such as Multilevel Poetry [19]. Also related to navigation, Large Viewing Vis stands out within this group for its use of physical navigation, although the overall strategy is the same.

Many of these visualizations use the same encodings at each scale, but some have different encodings at different scales. Europe OSM, a visualization of map data that the user can view at different levels [20], uses pie charts to summarize data when zoomed out but encodes finer-grain detail using overlaid grids and point marks when zoomed in.

5.2 Simultaneous Occluding Embed↩︎

12 of our corpus examples use the Simultaneous Occluding Embed strategy, meaning that they have multiple simultaneous scales but only one separate scale, because the simultaneous scales occlude each other in some way. They also require both zooming and panning.

Many of these examples use the inset zoom or lens zoom techniques, where a zoomed-in area appears in a window on top of a visualization. This strategy limits the zoomed-in window to be smaller than the rest of the visualization, as full occlusion would result in only a single simultaneous scale. Sometimes this window occludes the area being zoomed into, like in Melange [34], a technique that folds a visualization to make the zoomed-in area appear physically closer to the user. In other examples, such as FingerGlass [35], the window occludes a separate part of the visualization, sometimes chosen by the user.

Similar to the Single-View Pan and Zoom strategy, step size type can vary. However, our corpus does not include any examples of this strategy where the user must use physical navigation. Many of these examples use marks to show association between the scales, but three include no association. Multi-Foci COVID Vis, a visualization of COVID data in selected geographic areas [42], is the only example using this strategy to also use different encodings on different scales.

5.3 Simultaneous Separate Multilevel↩︎

9 of our corpus examples use the Simultaneous Separate Multilevel strategy, which describes visualizations where there are multiple separate scales, meaning also that there are multiple simultaneous and total scales, but which do not rely on familiarity. The difference between this strategy and Simultaneous Occluding Embed is that multiple scales appear without occluding one another, meaning that all scales can be of equal size.

Similar to both Single-View Pan and Zoom and Simultaneous Occluding Embed, examples using this strategy can use either different or the same encodings on the varying scales. We find that all options for association are in use by at least one example in this group.

We found that when encoding choices across the separate scales of examples which use this strategy are the same, the different scales are aligned and either stacked on top of each other or placed beside each other, with most of them using some form of association between them. In contrast, when the encoding choices are different, the scales are unaligned and placed in completely separate views with no association.

All but one example that used this strategy employed both panning and zooming, and all of these examples with navigation relied on digital navigation. The one example that did not use both panning and zooming was Temp Earth [47], which divides the axis into pieces, each of which has a scale a fixed multiple smaller than the one on its left.
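Temp Earth's piecewise axis can be sketched as follows. This is a hypothetical reconstruction under our own assumptions about segment width and multiple (not the example's actual code): the axis is divided into equal-width display segments, each covering a data range a fixed multiple larger than the segment to its right, so the mapping becomes finer towards the recent past.

```python
# Assumed parameters for illustration only.
SEGMENT_WIDTH = 100   # display pixels per segment
FACTOR = 10           # data-range multiple between adjacent segments
N_SEGMENTS = 4        # axis covers values in [1, FACTOR**N_SEGMENTS)

def to_display(value):
    """Map a value onto the piecewise axis: coarser (larger-range)
    segments sit to the left, and within each segment larger values
    also sit to the left, so position decreases as value grows."""
    assert 1 <= value < FACTOR ** N_SEGMENTS
    seg = 0                                   # segment index, 0 = finest
    while value >= FACTOR ** (seg + 1):
        seg += 1
    lo, hi = FACTOR ** seg, FACTOR ** (seg + 1)   # data range of the segment
    frac = (value - lo) / (hi - lo)               # position within segment
    left_edge = (N_SEGMENTS - 1 - seg) * SEGMENT_WIDTH
    return left_edge + (1 - frac) * SEGMENT_WIDTH

# Every segment gets the same display width but FACTOR times the data
# range of its right neighbour, so pixels-per-unit shrinks by FACTOR
# with each step left: a piecewise change of scale along one axis.
```

The resulting function is continuous across segment boundaries, which is what lets such a design read as a single axis despite its multiple scales.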

5.4 Familiar Zoom↩︎

9 of our corpus examples use the Familiar Zoom strategy, which relies on zooming through a series of scales that include at least one familiar scale. All Familiar Zoom examples must also incorporate familiarity. We also categorize an example that has no navigation but shows multiple separate scales with association by marks between them, XKCD Money [55], as Familiar Zoom. Some of these examples focus on helping users conceptualize large scales, others small scales, and some both.

These examples rely on the user making comparisons between the different scales with varying step size type, beginning with a familiar scale. Most of these examples only show one simultaneous scale, with the zooming modifying the scale of the single view. However, three of these examples have multiple separate scales. Two of the examples with multiple separate scales include a form of association between the scales, likely to aid the comparison between them. While most of these examples do not rely on visceral time, one of them does: Powers of Ten [15].

5.5 Lengthy Pan↩︎

5 of the examples in our corpus use the Lengthy Pan strategy, which involves only a single total scale that the user pans along and which relies on visceral time. All of these examples focused on helping users conceptualize large scales.

These examples incorporate either physical navigation, where the user physically moves their body along the scale, or digital navigation, where the user scrolls along it. All of the examples that use this strategy take a significant amount of time to fully navigate through, as intended by the designer. While many visualizations are designed to reduce human feelings of boredom or exhaustion by making information analysis and communication fast, we conjecture that Lengthy Pan examples rely on these feelings to help convey a sense of scale. Two of these examples additionally include familiar items in their datasets for comparison against the unfamiliar items, showing use of familiarity.

6 Discussion↩︎

We now discuss missed opportunities in our corpus that we identified through usage of the design space, and the strengths and limitations of our design space and strategies.

6.1 Missed Opportunities↩︎

Here, we discuss benefits and limitations of different choices within dimensions and how some examples may have benefited from alternate choices.

6.1.1 Employing Physical Navigation↩︎

Dynamic navigation is well-used in multi-scale visualizations with large scale-item ratio. Navigation is useful for analysis in these visualizations as it allows the user to choose regions to analyze in greater detail. We also see it used in examples intended for presentation, possibly to increase engagement through interactivity or to avoid overwhelming users by gradually revealing information. Most of the corpus examples were digital and had to use input devices to control navigation, but a few examples relied on physical movement. Ball et al. found that physical navigation is beneficial and preferred by users in the right conditions [66], so we encourage design teams with sufficient resources, particularly space and materials, to consider employing some use of physical movement by the user.

6.1.2 Simultaneous and Separate Scales↩︎

The majority of the corpus examples used small total, simultaneous, and separate counts. Using a single separate scale or a small number of separate scales can allow for more space to be dedicated to detail within the scale, reduce the complexity of the visual representation, and reduce the complexity of the navigation required. However, increasing the number of total scales that the user can navigate to can help to show more detail, and increasing the number that are simultaneous and separate can be beneficial for comparison across scales or drill-down navigation. Here is Today [54] may have benefited from the use of multiple simultaneously visible scales, as its navigation between scales was sometimes hard to follow due to multiple changes in the direction of motion of the timeline. Many of the examples using the Single-View Pan and Zoom strategy, such as Hierarchical Route Maps, a visualization of routes with zooming and panning capabilities to help navigators in both dense and sparse areas [23], may have benefited from additional separate scales, as these can support navigation by allowing the user to skip to a different point of focus without needing to pan around or zoom out and back in.

6.1.3 Use of Differing Scale Encodings↩︎

Most examples with multiple scales use the same visual encoding on each scale. In our corpus, most examples which use different encodings on different scales have pre-existing, real-world structure. However, we believe this design choice may also be beneficial for other scenarios. If the visualization encodes large quantities of data across multiple scales, then some scales will likely have much more data to encode than others, which may change what encoding is most effective. Multiscale Trace [14] uses different encodings on different scales for this reason.

In a similar scenario, higher-level scales may be used simply for navigation, to find smaller regions to analyse in more detail. In this case, the user is using different scales for different tasks, and the designer should consider this difference when making visual encoding choices for the different scales. Europe OSM [20] uses different encodings on different scales for this reason.

An example where using different encodings on different scales may have been beneficial is Rivet [46], which encodes larger quantities of data on its highest-level scale than its lowest-level scale and uses different scales for different tasks.

6.1.4 Explicit Association Between Scales↩︎

While association by marks or channels is well represented in our corpus, examples that did not use it may have benefited from it. Using association is helpful for comparison across multiple scales as it can explicitly show a change in mapping by relating the same item across scales. However, association is also helpful for user tasks other than across-scale comparison. For example, if the user is intended to use the multiple scales to navigate to a smaller region for analysis, using association can help the user keep track of the zoomed-in location on the larger-scale landscape. One example that uses the Simultaneous Separate Multilevel strategy and also incorporates different encodings on different scales, Chromoscope [52], lacks any association; some association may have helped users to navigate and to align the associated marks on different scales.

6.1.5 Visceral Time and Familiarity↩︎

In our example corpus, the only examples that used visceral time or familiarity were examples intended to help users conceptualize large or small scales or the relationships between them. When designing a visualization for other tasks, the incorporation of visceral time may just slow down the analysis without improving the user’s takeaways, and artificially adding familiar items to allow familiarity would likely require the addition of different scales to navigate through but not aid the user in any way.

However, when designing a visualization for these tasks of conceptualizing and comparing scales, visceral time and familiarity may be beneficial to consider. In particular, three of the examples that use the Lengthy Pan strategy do not have familiarity, but beginning with a familiar object may help to convey the size of the scale.

6.2 Strengths and Limitations↩︎

We assess the design space and the strategies in terms of descriptive, generative, and evaluative power [67].

The design space has strong descriptive power, as all meaningful differences from our analysis are distinguishable in the final version. We have further confidence in its completeness and descriptive power because we have evidence of saturation: no example in the systematic set, which we found and coded during the final Refine stage, required any changes to the design space to describe. (Although we further reflected and refined the dimensions, choices, and strategies after adding the systematic set, we were able to fully describe all examples in the systematic set with the design space prior to the final modifications.) The five strategies also demonstrate the descriptive power of the design space, because they are defined in terms of choices within the design space dimensions. The strategies themselves also have descriptive power, in that they provide a disjoint partition of the set of examples. One limitation of our work is that the set of strategies may not be complete: although they do fully describe our example corpus, future designs may use new strategies.

The design space also has generative power. Analysing it revealed missed opportunities within our example corpus which, if available during their design, could have resulted in changes to some examples. In the future, it could inform designers about design possibilities that they may not have considered without this specific prompting. Our strategies also demonstrate strong generative power, because the set of five strategies is a very concise set of options. Choosing one of these strategies can inform and speed design by immediately constraining some of the design choices. We note that the concise set of just five strategies may be useful for quickly choosing which out-of-the-box solutions to use when novelty is not required; in contrast, the more detailed design space may be useful for generating custom visualizations.

One limitation of this paper is that we have not yet validated this design space or strategies in terms of evaluative power, an effort we leave for future work.

Another limitation of the design space and strategies is that our collection of examples from real-world use for the corpus was opportunistic rather than systematic. While we did systematically search academic literature, a systematic search for practitioner examples would not be straightforward to conduct. In particular, finding examples used for education and communication by practitioners is challenging using visualization search terms; some are not even posted publicly online.

7 Conclusion↩︎

In this paper, we present a design space for visualization scenarios with large scale-item ratios: large disparities between the size of the smallest item and the largest scale. The design space has three dimensions: Scales, Navigation, and Familiarity. These dimensions are split into eight subdimensions: count, step type, encodings, and association for Scales; type, mode, and visceral time for Navigation; and concrete for Familiarity. We also present a set of five strategies, which are shared approaches with respect to design space choices and are a partition such that each example fits into exactly one strategy. We collect and analyze a corpus of 54 examples from both research and practice, and code them according to the hierarchical dimensions of this design space and the five strategies. We used these coded examples to develop, validate, and illustrate the design space through three rounds of data collection. We present an analysis of missed opportunities for several examples that considers alternative dimension choices and strategies. Finally, we evaluate the strengths and limitations of the design space and the strategies.

8 Appendix↩︎

We include a short description of each corpus example in Table 2.

Supplemental Materials↩︎

The supplemental material is available on OSF at https://osf.io/wbrdm/?view_only=04389a2101a04e71a2c208a93bf2f7f2, released under a CC-BY-4.0 license. We provide a CSV file containing the example corpus, coded by the dimensions and by the strategies. In addition to the information shown in Table 1, it contains the full titles of examples, the specific figure we coded for academic examples, and how we found the example.

Table 2: Brief text descriptions for each of the 54 examples in the corpus. Abbreviations: Cit. = Citation.
Example Cit. Description
Single-View Pan and Zoom
Zoom Line Chart [16] a technique for zooming where the user drags with the mouse to select an area to zoom in to
Cuttlefish (fig 6) [17] a hierarchical treemap where the user can click on a node and have it expanded out, effectively zooming in
EVEVis [18] a visualization for evolution data at multiple scales
Multilevel Poetry [19] a tool for visualizing poetry at the four levels of phoneme, full poem, small set of poems, and large set of poems
Multiscale Trace [14] a visualization of large quantities of trace data that the user can zoom in to
Europe OSM [20] a visualization of map data that the user can view at different levels
Zoomable Treemaps [21] a technique for navigating through complex treemaps, including zooming in to user-chosen areas
Chameleon [22] a technique for adjusting colour scheme based on zoom level, implemented into an interactive multi-scale visualization of HIV
Hierarchical Route Maps [23] a visualization of routes with zooming and panning capabilities to help navigators in both dense and sparse areas
Large Viewing Vis [24] a type of visualization that uses a very large screen to display visualizations which users can navigate by physically moving their bodies
Kyrix-S [25] a system for creating scalable scatterplots where the user can navigate, including selecting areas to zoom in to
Membrane Mapping [26] a visualization tool for zooming in to different scales of cell, from mesoscopic to molecular
Execution Trace Vis [27] a visualization of complex program traces
MuSE [28] a tool that utilizes infinite pan and zoom for creating visualizations at multiple scales
ScaleTrotter [29] an interactive, multi-scale visualization of genome data
SpaceFold [30] a technique for zooming where users fold the visualization
TagNetLens [31] an interactive tool for exploring tag data with zooming and panning
Hierarchy Vis [32] a visualization technique for hierarchical data with zooming by selecting links between nodes
Chemical Vis [33] a technique for visualizing bioactive chemical data at multiple levels of detail
Simultaneous Occluding Embed
Melange [34] a technique that folds a visualization to make the zoomed-in area appear physically closer to the user
FingerGlass [35] a technique for zooming on multitouch screens
Tabletop Gestures [36] a set of techniques for multiple users to zoom in on tabletop devices
Gimlenses [37] a technique for navigating 3D models by creating a tree of zoomed insets
GrouseFlocks [38] a system for navigating hierarchical graphs with zooming
Digital Earth [39] a system for visualizing geospatial data with a tree of zoomed insets
AdvEx [40] a visualization of adversarial attacks with inset zooming
Scalable Insets [41] a technique for interactively zooming through the use of zoomed insets
Multi-Foci COVID Vis [42] a visualization of COVID data in selected geographic areas
PhysicLenses [30] a technique for zooming where users use two fingers to create a magnification lens
TissUUmaps [43] a visualization of spatial omics data with inset zooming
TrailMap [44] a visualization of map data with a zoomed view anchored in the corner to show the bookmarked region
Simultaneous Separate Multilevel
Multiscale Unfolding [45] a visualization of multiple levels of DNA
Rivet (MTSC) [46] a visualization tool for the analysis of computer system data
Temp Earth [47] a visualization of the Earth’s temperature on a set of increasingly-zoomed scales towards the more recent past
TraXplorer (fig 2) [48] an implementation of stack zooming where there are multiple levels and branches of zoom
Mandelbrot Explorer [49] a website for exploring the Mandelbrot set where the user recursively selects a spot to zoom in
MizBee [50] a visualization tool for analysing genomic data at the genome, chromosome, and block levels
PolyZoom [51] a visualization technique for constructing a tree of zoomed views
Chromoscope [52] a visualization of genomic data at multiple scales
TimeNotes [53] a multiscale visualization technique for time-oriented data inspired by stack zooming
Familiar Zoom
DeLVE [3] a visualization of historical events across many scales, showing all the scales simultaneously
Here is Today [54] a timeline visualization with a single total scale that starts with a single day and zooms out to the age of the universe
Powers of Ten [15] a video documentary that gradually zooms between different scales
XKCD Money [55] a unit chart of different quantities of money on different scales with multiplicative increases of 1000 between each scale
Cell Size and Scale [56] a digital visualization where the user zooms through objects of different scales
Scale of the Universe 2 [57] a visualization of various objects in the universe from beach balls to the universe itself to quantum foam one Planck length wide
The Size of Space [58] a visualization where the user steps through a series of objects of exponentially increasing size, comparing them with those that came before
Universcale [59] a visualization where the user scrolls through different objects on different scales
US Debt [60] a visualization where the user scrolls through increasing quantities of money visualized by referring to the previous quantity
Lengthy Pan
Science Museum Timeline [61] a large physical timeline of Earth’s biological history at a science museum local to the authors
The Deep Sea [62] a digital visualization where the user gradually pans to traverse the range of ocean depths from the surface to the bottom of the ocean
Trail of Time [63] a large physical timeline that people hike along in the Grand Canyon where each meter represents one million years
University Timeline Walk [64] a timeline of Earth’s history embedded in the ground at a university local to the authors
Wealth Shown to Scale [65] a visualization where the user scrolls along an extremely long axis which shows monetary values from one thousand to trillions of dollars

References↩︎

[1]
C. Gillmann, N. N. Smit, E. Gröller, B. Preim, A. Vilanova, and T. Wischgoll. Ten open challenges in medical visualization. IEEE Computer Graphics and Applications, 41(5):7–15, 2021.
[2]
E. Ståhlbom, J. Molin, C. Lundström, and A. Ynnerman. Visualization challenges of variant interpretation in multiscale NGS data. In EuroVis 2022 Posters. The Eurographics Association, 2022.
[3]
Anonymized. into earth’s past: A visualization-based exhibit deployed across multiple museum contexts. In submission to VIS24 (see supplemental), 2024.
[4]
F. Chevalier, R. Vuillemot, and G. Gali. Using concrete scales: A practical framework for effective visual depiction of complex measures. IEEE Trans. Visualization and Computer Graphics, 19(12):2426–2435, 2013.
[5]
L. A. Garrison, I. Kolesar, I. Viola, H. Hauser, and S. Bruckner. Trends & opportunities in visualization for physiology: A multiscale overview. Computer Graphics Forum (Proc. EuroVis), 41(3):609–643, 2022.
[6]
E. Cakmak, D. Jäckle, T. Schreck, D. A. Keim, and J. Fuchs. Multiscale visualization: A structured literature analysis. IEEE Trans. Visualization and Computer Graphics, 28(12):4918–4929, 2021.
[7]
M. A. Elliott, C. Nothelfer, C. Xiong, and D. A. Szafir. A design space of vision science methods for visualization research. IEEE Trans. Visualization and Computer Graphics, 27(2):1117–1127, 2021.
[8]
S. K. Card and J. Mackinlay. The structure of the information visualization design space. In Proc. IEEE Symp. Information Visualization (InfoVis), pp. 92–99, 1997.
[9]
P. Ralph. Toward methodological guidelines for process theories and taxonomies in software engineering. IEEE Trans. Software Engineering, 45(7):712–735, 2019.
[10]
P. Goffin, W. Willett, J.-D. Fekete, and P. Isenberg. Exploring the placement and design of word-scale visualizations. IEEE Trans. Visualization and Computer Graphics, 20(12):2291–2300, 2014.
[11]
H.-J. Schulz, T. Nocke, M. Heitzler, and H. Schumann. A design space of visualization tasks. IEEE Trans. Visualization and Computer Graphics, 19(12):2366–2375, 2013.
[12]
N. W. Kim, S. C. Joyner, A. Riegelhuth, and Y. Kim. Accessible visualization: Design space, opportunities, and challenges. Computer Graphics Forum (Proc. EuroVis), 40(3):173–188, 2021.
[13]
M. Gusenbauer and N. R. Haddaway. Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Research Synthesis Methods, 11(2):181–217, 2019. https://doi.org/10.1002/jrsm.1378.
[14]
N. Ezzati-Jivan and M. R. Dagenais. Multiscale navigation in large trace data. In Proc. Canadian Conference on Electrical and Computer Engineering (CCECE), pp. 1–7. IEEE, 2014.
[15]
C. Eames and R. Eames. Powers of ten. https://www.youtube.com/watch?v=0fKBhvDjuy0, 1968. Accessed: 2023-09-03.
[16]
D. Tiwari. Zoom line chart. https://www.fusioncharts.com/dev/chart-guide/standard-charts/zoom-line-charts, 2020. Accessed: 2023-11-21.
[17]
N. Waldin, M. Waldner, M. Le Muzic, E. Gröller, D. S. Goodsell, L. Autin, A. J. Olson, and I. Viola. Cuttlefish: Color mapping for dynamic multi-scale visualizations. Computer Graphics Forum (Proc. EuroVis), 38(6):150–164, 2019.
[18]
R. Miller, V. Mozhayskiy, L. Tagkopoulos, and K.-L. Ma. EVEVis: A multi-scale visualization system for dense evolutionary data. In IEEE Symp. Biological Data Visualization (BioVis), pp. 143–150, 2011.
[19]
A. Mittmann, A. von Wangenheim, and A. L. dos Santos. A multi-level visualization scheme for poetry. In Intl. Conf. Information Visualisation (IV), pp. 312–317. IEEE, 2016.
[20]
D. Zacharopoulou, A. Skopeliti, and B. Nakos. Assessment and visualization of OSM consistency for European cities. ISPRS Intl. Journal of Geo-Information, 10(6):361, 2021.
[21]
R. Blanch and É. Lecolinet. Browsing zoomable treemaps: Structure-aware multi-scale navigation techniques. IEEE Trans. Visualization and Computer Graphics, 13(6):1248–1253, 2007.
[22]
N. Waldin, M. Le Muzic, M. Waldner, E. Gröller, D. Goodsell, L. Autin, and I. Viola. Chameleon: dynamic color mapping for multi-scale structural biology models. In Eurographics Workshop on Visual Computing for Biology and Medicine (VCBM), vol. 2016. NIH Public Access, 2016.
[23]
F. Wang, Y. Li, D. Sakamoto, and T. Igarashi. Hierarchical route maps for efficient navigation. In Proc. Conf. Intelligent User Interfaces (IUI), pp. 169–178, 2014.
[24]
P. Isenberg, P. Dragicevic, W. Willett, A. Bezerianos, and J.-D. Fekete. Hybrid-image visualization for large viewing environments. IEEE Trans. Visualization and Computer Graphics, 19(12):2346–2355, 2013.
[25]
W. Tao, X. Hou, A. Sah, L. Battle, R. Chang, and M. Stonebraker. Kyrix-s: Authoring scalable scatterplot visualizations of big data. IEEE Trans. Visualization and Computer Graphics, 27(2):401–411, 2020.
[26]
T. Waltemate, B. Sommer, and M. Botsch. Membrane mapping: Combining mesoscopic and molecular cell visualization. In Eurographics Workshop on Visual Computing for Biology and Medicine (VCBM), pp. 89–96, 2014.
[27]
J. Trümper, J. Döllner, and A. Telea. Multiscale visual comparison of execution traces. In Intl. Conf. Program Comprehension (ICPC), pp. 53–62. IEEE, 2013.
[28]
G. W. Furnas and X. Zhang. MuSE: a multiscale editor. In Proc. ACM Symp. User Interface Software and Technology (UIST), pp. 107–116, 1998.
[29]
S. Halladjian, H. Miao, D. Kouřil, M. E. Gröller, I. Viola, and T. Isenberg. ScaleTrotter: Illustrative visual travels across negative scales. IEEE Trans. Visualization and Computer Graphics, 26(1):654–664, 2019.
[30]
S. Butscher, K. Hornbæk, and H. Reiterer. SpaceFold and PhysicLenses: simultaneous multifocus navigation on touch surfaces. In Proc. Intl. Working Conf. Advanced Visual Interfaces (AVI), pp. 209–216, 2014.
[31]
L. Gou, S. Zhang, J. Wang, and X. Zhang. TagNetLens: multiscale visualization of knowledge structures in social tags. In Proc. Intl. Symp. Visual Information Communication, pp. 1–9, 2010.
[32]
D. Holten and J. J. van Wijk. Visual comparison of hierarchically organized data. Computer Graphics Forum (Proc. EuroVis), 27(3):759–766, 2008.
[33]
M. Yamazawa, T. Itoh, and F. Yamashita. Visualization and level-of-detail control for multi-dimensional bioactive chemical data. In Intl. Conf. Information Visualisation (IV), pp. 11–16, 2008.
[34]
N. Elmqvist, N. Henry, Y. Riche, and J.-D. Fekete. Mélange: space folding for multi-focus interaction. In Proc. SIGCHI Conference on Human Factors in Computing Systems (CHI), pp. 1333–1342, 2008.
[35]
D. P. Käser, M. Agrawala, and M. Pauly. FingerGlass: efficient multiscale interaction on multitouch screens. In Proc. SIGCHI Conference on Human Factors in Computing Systems (CHI), pp. 1601–1610, 2011.
[36]
V. Rusnák, C. Appert, O. Chapuis, and E. Pietriga. Designing coherent gesture sets for multi-scale navigation on tabletops. In Proc. SIGCHI Conference on Human Factors in Computing Systems (CHI), pp. 1–12, 2018.
[37]
C. Pindat, E. Pietriga, O. Chapuis, and C. Puech. Drilling into complex 3D models with GimLenses. In Proc. ACM Symp. Virtual Reality Software and Technology (VRST), pp. 223–230, 2013.
[38]
D. Archambault, T. Munzner, and D. Auber. GrouseFlocks: Steerable exploration of graph hierarchy space. IEEE Trans. Visualization and Computer Graphics, 14(4):900–913, 2008.
[39]
M. J. Sherlock, M. Hasan, and F. F. Samavati. Interactive data styling and multifocal visualization for a multigrid web-based digital earth. Intl. Journal of Digital Earth, 14(3):288–310, 2021.
[40]
Y. You, J. Tse, and J. Zhao. Panda or not panda? Understanding adversarial attacks with interactive visualization. arXiv preprint arXiv:2311.13656, 2023.
[41]
F. Lekschas, M. Behrisch, B. Bach, P. Kerpedjiev, N. Gehlenborg, and H. Pfister. Pattern-driven navigation in 2D multiscale visualizations with scalable insets. IEEE Trans. Visualization and Computer Graphics, 26(1):611–621, 2019.
[42]
M. MacTavish, L. Wecker, and F. Samavati. Perspective charts in a multi-foci globe-based visualization of COVID-19 data. ISPRS Intl. Journal of Geo-Information, 11(4):223, 2022.
[43]
N. Pielawski, A. Andersson, C. Avenel, A. Behanova, E. Chelebian, A. Klemm, F. Nysjö, L. Solorzano, and C. Wählby. TissUUmaps 3: Improvements in interactive visualization, exploration, and quality assessment of large-scale spatial omics data. Heliyon, 9(5), 2023.
[44]
J. Zhao, D. Wigdor, and R. Balakrishnan. TrailMap: facilitating information seeking in a multi-scale digital map via implicit bookmarking. In Proc. SIGCHI Conference on Human Factors in Computing Systems (CHI), pp. 3009–3018, 2013.
[45]
S. Halladjian, D. Kouřil, H. Miao, M. E. Gröller, I. Viola, and T. Isenberg. Multiscale unfolding: Illustratively visualizing the whole genome at a glance. IEEE Trans. Visualization and Computer Graphics, 28(10):3456–3470, 2021.
[46]
R. Bosch, C. Stolte, D. Tang, J. Gerth, M. Rosenblum, and P. Hanrahan. Rivet: A flexible environment for computer systems visualization. ACM SIGGRAPH Computer Graphics, 34(1):68–73, 2000.
[47]
A. Bredenberg. Climate change, nothing new? How has Earth's temperature changed in the past? https://www.thomasnet.com/insights/imt/2012/02/13/climate-change-nothing-new-how-has-earths-temperature-changed-in-the-past/, 2012. Accessed: 2023-11-21.
[48]
W. Javed and N. Elmqvist. Stack zooming for multi-focus interaction in time-series data visualization. In IEEE Symp. Pacific Visualization (PacificVis), pp. 33–40, 2010.
[49]
D. Bau. Mandelbrot explorer. https://mandelbrot.page/, 2009. Accessed: 2023-11-21.
[50]
M. Meyer, T. Munzner, and H. Pfister. MizBee: a multiscale synteny browser. IEEE Trans. Visualization and Computer Graphics, 15(6):897–904, 2009.
[51]
W. Javed, S. Ghani, and N. Elmqvist. PolyZoom: multiscale and multifocus exploration in 2D visual spaces. In Proc. SIGCHI Conference on Human Factors in Computing Systems (CHI), pp. 287–296, 2012.
[52]
S. L’Yi, D. Maziec, V. Stevens, T. Manz, A. Veit, M. Berselli, P. J. Park, D. Głodzik, and N. Gehlenborg. Chromoscope: interactive multiscale visualization for structural variation in human genomes. Nature Methods, 20(12):1834–1835, 2023.
[53]
J. Walker, R. Borgo, and M. W. Jones. TimeNotes: a study on effective chart visualization and interaction techniques for time-series data. IEEE Trans. Visualization and Computer Graphics, 22(1):549–558, 2015.
[54]
L. Twyman. Here is today. https://theuselessweb.site/hereistoday/, 2020. Accessed: 2023-11-21.
[55]
R. Munroe. Money chart. https://xkcd.com/980/huge/, 2011. Accessed: 2023-09-03.
[56]
University of Utah Genetic Science Learning Center. Cell size and scale. https://learn.genetics.utah.edu/content/cells/scale/, 2014. Accessed: 2023-11-21.
[57]
C. Huang. The scale of the universe 2. https://htwins.net/scale2/, 2012. Accessed: 2023-11-21.
[58]
N. Agarwal. The size of space. https://neal.fun/size-of-space/, 2019. Accessed: 2023-11-21.
[59]
Nikon. Universcale. https://www.nikon.com/company/corporate/sp/universcale/scale.html, 2018. Accessed: 2023-11-21.
[60]
O. Godfrey. Debt visualized in $100 bills. https://demonocracy.info/infographics/usa/us_debt/us_debt.html, 2022. Accessed: 2023-11-21.
[61]
Anonymized. Science museum earth timeline. URL anonymized, 2021. Accessed: 2023-11-21.
[62]
N. Agarwal. The deep sea. https://neal.fun/deep-sea/, 2019. Accessed: 2023-11-21.
[63]
K. Karlstrom, S. Semken, L. Crossey, D. Perry, E. D. Gyllenhaal, J. Dodick, M. Williams, J. Hellmich-Bryan, R. Crow, N. B. Watts, et al. Informal geoscience education on a grand scale: The Trail of Time exhibition at Grand Canyon. Journal of Geoscience Education, 56(4):354–361, 2008.
[64]
Anonymized. University timeline walk. URL anonymized, 2022. Accessed: 2023-11-21.
[65]
M. Korostoff. Wealth, shown to scale. https://mkorostoff.github.io/1-pixel-wealth/, 2021. Accessed: 2023-11-21.
[66]
R. Ball, C. North, and D. A. Bowman. Move to improve: promoting physical navigation to increase user performance with large displays. In Proc. SIGCHI Conference on Human Factors in Computing Systems (CHI), pp. 191–200, 2007.
[67]
M. Beaudouin-Lafon. Designing interaction, not interfaces. In Proc. Intl. Working Conf. Advanced Visual Interfaces (AVI), pp. 15–22, 2004.