PacificVAST (Apr 23)   PacificVis (Apr 24 - 26)

08:30 - 09:00  Registration
09:00 - 09:30  Ross Maciejewski | Huamin Qu | Fast Forward | Papers & Notes: Narratives, Surveys, and Historical Visualizations
09:30 - 10:30  Keynote: Han-Wei Shen
10:30 - 11:00  Break
11:00 - 12:30  Papers & Notes: Multiple Views and Virtual Reality | Text Analysis and Visualization Applications | Machine Learning and High-Dimensional Data
12:30 - 13:30  Lunch; Closing (10 mins)
13:30 - 15:00  Papers & Notes: Graphs and Plots | SciVis and Simulations
15:00 - 15:30  Break
15:30 - 17:00  Papers & Notes: Evaluation (Understanding Users and Their Behavior)
18:00          Reception (Apr 23) | Banquet (Apr 25)

Reception (Apr 23 - 18:00): Blue Elephant

Banquet (Apr 25 - 18:00): Jim Thompson House


Han-Wei Shen   [The Ohio State University]
An End-to-End In Situ Data Processing and Analytics Workflow
Huamin Qu   [Hong Kong University of Science and Technology]
Visual Analytics for Explainable AI and Automated Machine Learning


Day 2 - Apr 24 - 11:00 - 12:30
Multiple Views and Virtual Reality
Chair: Yingcai Wu [Zhejiang University, China]
CorFish: Coordinating Emphasis across Multiple Views using Spatial Distortion
Gaëlle Richer (University of Bordeaux)
Romain Bourqui (University of Bordeaux)
David Auber (University of Bordeaux)

In the context of multiple views, coordination is essential for navigating and grasping the relationships behind the juxtaposed views. Linked highlighting is a typical example of coordination, in which a subset of the data points is emphasized simultaneously in all views. The strength of this approach is that the selected data can be studied within its context. Other approaches to coordination include varying levels of transparency and visual links. We propose using spatial distortion to achieve a similar effect in multiple views. It is particularly suited to this context because it alleviates the lack of screen space by reallocating it according to a definition of user interest. The proposed method targets coordination between views that represent the same entities and readily adapts to various visualization forms. It is based on a user degree-of-interest function, defined on these entities, that acts as a common ground for distorting all views. Views are distorted such that empty areas and areas holding entities of lesser interest are compressed to the benefit of areas holding entities of higher interest. To demonstrate its feasibility and versatility, we describe how to apply our approach to several common visualization techniques.
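
The core idea of interest-driven space reallocation can be sketched in one dimension: the gaps between consecutive items receive screen space in proportion to a degree-of-interest value. This is a minimal illustration only; the function name and the minimum-share heuristic below are illustrative, not from the paper.

```python
def distorted_layout(interest, width=100.0, min_share=0.1):
    """Return 1D positions for items ordered along an axis, allocating the
    space between consecutive items in proportion to their interest."""
    # Each gap is weighted by the mean interest of its endpoints, with a
    # minimum share so low-interest regions never collapse entirely.
    weights = [min_share + (1.0 - min_share) * 0.5 * (a + b)
               for a, b in zip(interest, interest[1:])]
    total = sum(weights)
    positions = [0.0]
    for w in weights:
        positions.append(positions[-1] + width * w / total)
    return positions
```

Applying the same layout to every view that shows these entities would give the coordinated distortion the paper describes.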

Collaborative Visual Analysis with Multi-level Information Sharing Using a Wall-size Display and See-Through HMDs
Tianchen Sun (University of California, Davis)
Chris Ye (University of California, Davis)
Issei Fujishiro (Keio University)
Kwan-Liu Ma (University of California, Davis)

Solving complex data analysis problems can often benefit from a collaborative effort. For synchronous co-located collaboration, a well-recognized challenge is to deliver different content to people with different privileges and responsibilities. This challenge becomes more pronounced with the use of a shared display space such as a wall-size display. In particular, scenarios often arise in which a privileged participant needs to access sensitive information that other participants are not permitted to view. This is nearly impossible to achieve with only a single display. As a result, additional devices are needed to give some participants the ability to access and manage certain information in a private space. In this work, we investigate incorporating optical see-through head-mounted displays (OST-HMDs) with a wall-size display to deliver sensitive information in a synchronous co-located collaborative setting. With our prototype system, we conduct a user study to observe the collaboration styles that emerge under this unique setup. We also present the lessons learned by reflecting on the iterative design process of our prototype system.

Object-In-Hand Feature Displacement with Physically-Based Deformation
Cheng Li (The Ohio State University)
Han-Wei Shen (The Ohio State University)

Data deformation has been widely used in visualization to obtain an improved view that aids comprehension of the data. Enabling interactive deformation through operations that feel natural to users has been a consistent pursuit. In this paper, we propose a deformation system that follows the object-in-hand metaphor. We use a touchscreen to directly manipulate the shape of the data with the fingers. Users can drag data features and move them along with their fingers, press fingers down to hold other parts of the data fixed during the deformation, or cut the data with a finger. The deformation is executed on a physically-based mesh, constructed to incorporate data properties so that the deformation is both authentic and informative. By manipulating data features as if handling an object in hand, we can achieve a less occluded view of the data or an improved feature layout for better view comparison. We present case studies on various types of scientific datasets, including particle data, volumetric data, and streamlines.
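
A physically-based mesh deformation of this kind can be sketched with a toy mass-spring solver, where pinned vertices model the press-to-hold interaction. Everything here (names, constants, integration scheme) is illustrative, not the paper's system.

```python
def simulate(positions, springs, pinned, steps=200, dt=0.05, k=20.0, damping=0.9):
    """Relax 2D mesh vertices connected by springs toward their rest lengths.
    positions: list of [x, y]; springs: (i, j, rest_length) tuples;
    pinned: indices of vertices held fixed (the 'press to hold' gesture)."""
    vel = [[0.0, 0.0] for _ in positions]
    for _ in range(steps):
        forces = [[0.0, 0.0] for _ in positions]
        for i, j, rest in springs:
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
            f = k * (dist - rest)  # Hooke's law along the spring
            fx, fy = f * dx / dist, f * dy / dist
            forces[i][0] += fx; forces[i][1] += fy
            forces[j][0] -= fx; forces[j][1] -= fy
        for p in range(len(positions)):
            if p in pinned:
                continue  # held fixed by the user's finger
            vel[p][0] = (vel[p][0] + dt * forces[p][0]) * damping
            vel[p][1] = (vel[p][1] + dt * forces[p][1]) * damping
            positions[p][0] += dt * vel[p][0]
            positions[p][1] += dt * vel[p][1]
    return positions
```

Dragging a feature would amount to pinning its vertices to the finger position while the rest of the mesh relaxes around it.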

Comparison of Projective Augmented Reality Concepts to Support Medical Needle Insertion
Florian Heinrich (Otto-von-Guericke-University)
Fabian Joeres (Department of Simulation and Graphics)
Kai Lawonn (University of Koblenz - Landau)
Christian Hansen (Otto von Guericke University)

Augmented reality (AR) is a promising tool to improve instrument navigation in needle-based interventions. Limited research has been conducted regarding suitable navigation visualizations. In this work, three navigation concepts based on existing approaches were compared in a user study using a projective AR setup. Each concept was implemented with three different scales for accuracy-to-color mapping and two methods of navigation indicator scaling. Participants were asked to perform simulated needle insertion tasks with each of the resulting 18 prototypes. Insertion angle and insertion depth accuracies were measured and analyzed, as well as task completion time and participants' subjectively perceived task difficulty. Results show a clear ranking of visualization concepts across variables. Less consistent results were obtained for the color and indicator scaling factors. Results suggest that logarithmic indicator scaling achieved better accuracy, but participants perceived it to be more difficult than linear scaling. With specific results for angle and depth accuracy, our study contributes to the future composition of improved navigation support and systems for precise needle insertion or similar applications.

Challenges for Brain Data Analysis in VR Environments
Sabrina Jaeger (University of Konstanz)
Karsten Klein (University of Konstanz)
Lucas Joos (University of Konstanz)
Johannes Zagermann (University of Konstanz)
Jinman Kim (The University of Sydney)
Jean Yang (University of Sydney)
Michael de Ridder (University of Sydney)
Ulrike Pfeil (University of Konstanz)
Harald Reiterer (University of Konstanz)
Falk Schreiber (University of Konstanz)

Analysing and understanding brain function and disorder is the main focus of neuroscience. Due to the high complexity of the brain, the directionality of signals, and changing activity over time, visual exploration and data analysis are difficult. For this reason, many research challenges remain unsolved. We explored different challenges of the visual analysis of brain data and the design of corresponding immersive environments in collaboration with experts from the biomedical domain. We built a prototype of an immersive virtual reality environment to explore the design space and to investigate how brain data analysis can be supported by a variety of design choices. Our environment can be used to study the effect of different visualisations and combinations of brain data representations, such as network layouts, anatomical mapping, or time series. As a long-term goal, we aim to aid neuroscientists in better understanding brain function and disorder.

Day 2 - Apr 24 - 13:30 - 15:00
Graphs and Plots
Chair: Rita Borgo [King’s College London, United Kingdom]
Jacob's Ladder: The User Implications of Leveraging Graph Pivots
Alex Bigelow (University of Utah)
Megan Monroe (Tufts University)

This paper reports on a simple visual technique that boils subgraph extraction down to two operations, pivots and filters; the technique is agnostic to the data abstraction, and its visual complexity scales independently of the size of the graph. The system's design, as well as its qualitative evaluation with users, clarifies exactly when and how the user's intent in a series of pivots is ambiguous, and, more usefully, when it is not. Reflections on our results show how, in the event of an ambiguous case, this innately practical operation could be further extended into "smart pivots" that anticipate the user's intent beyond the current step. They also reveal ways that a series of graph pivots can expose the semantics of the data from the user's perspective, and how this information could be leveraged to create adaptive data abstractions that do not rely as heavily on a system designer to create a comprehensive abstraction that anticipates all the user's tasks.
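
On a plain node-link abstraction, a pivot reduces to a step from a node set to the union of its neighbors, and a filter to a predicate over that set. The following sketch uses an adjacency dictionary; the function names are illustrative and not taken from the Jacob's Ladder system.

```python
def pivot(adjacency, selection):
    """Pivot: replace the current node selection with the union of its neighbors."""
    result = set()
    for node in selection:
        result.update(adjacency.get(node, ()))
    return result

def filter_selection(selection, predicate):
    """Filter: keep only the selected nodes satisfying a predicate."""
    return {node for node in selection if predicate(node)}
```

Chaining pivots and filters then carves out a subgraph step by step, which is where the ambiguity of user intent discussed in the paper arises.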

Relaxing Dense Scatter Plots with Pixel-Based Mappings
Renata Georgia Raidou (TU Wien)
Eduard Gröller (Institute of Computer Graphics and Algorithms)
Martin Eisemann (TH Köln)

Scatter plots are the most commonly employed technique for the visualization of bivariate data. Despite their versatility and expressiveness in showing data aspects such as clusters, correlations, and outliers, scatter plots face a major problem: for large and dense data, the representation suffers from clutter due to overplotting. This is often partially solved with the use of density plots. Yet data overlap may occur in certain regions of a scatter or density plot, while other regions may be partially or even completely empty. Adequate pixel-based techniques can be employed to fill the plotting space effectively, giving an additional notion of the numerosity of data motifs or clusters. We propose Pixel-Relaxed Scatter Plots, a new and simple variant, to improve the display of dense scatter plots using pixel-based, space-filling mappings. Our Pixel-Relaxed Scatter Plots make better use of the plotting canvas while avoiding overplotting and optimizing space coverage and insight into the presence and size of data motifs. We have employed different methods to map scatter plot points to pixels and to visually present this mapping. We demonstrate our approach on several synthetic and realistic datasets, and we discuss the suitability of our technique for different tasks. Our user evaluation shows that Pixel-Relaxed Scatter Plots can be a useful enhancement to traditional scatter plots.
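
One simple way to realize such a point-to-pixel mapping is to give each point the nearest free pixel, searched breadth-first from its rounded position, so overlapping points spread into adjacent cells. This is a hypothetical sketch of the general idea, not the authors' specific mapping.

```python
from collections import deque

def relax_to_pixels(points, width, height):
    """Give every data point its own pixel: starting from the point's rounded
    position, search outward (breadth-first) for the nearest free cell."""
    taken = set()
    placed = {}
    for i, (x, y) in enumerate(points):
        start = (min(width - 1, max(0, round(x))),
                 min(height - 1, max(0, round(y))))
        queue, seen = deque([start]), {start}
        while queue:
            cx, cy = queue.popleft()
            if (cx, cy) not in taken:
                taken.add((cx, cy))
                placed[i] = (cx, cy)
                break
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = cx + dx, cy + dy
                if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in seen:
                    seen.add((nx, ny))
                    queue.append((nx, ny))
    return placed
```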

Stippling of 2D Scalar Fields
Jochen Görtler (University of Konstanz)
Marc Spicker (University of Konstanz)
Christoph Schulz (University of Stuttgart)
Daniel Weiskopf (University of Stuttgart)
Oliver Deussen (University of Konstanz)

We propose a technique to represent two-dimensional data using stipples. While stippling is often regarded as an illustrative method, we argue that it is worth investigating its suitability for the visualization domain. For this purpose, we generalize the Linde–Buzo–Gray stippling algorithm for information visualization purposes to encode continuous and discrete 2D data. Our proposed modifications provide more control over the resulting distribution of stipples for encoding additional information into the representation, such as contours. We show different approaches to depict contours in stipple drawings based on locally adjusting the stipple distribution. Combining stipple-based gradients and contours allows for simultaneous assessment of the overall structure of the data while preserving important local details. We discuss the applicability of our technique using datasets from different domains and conduct observation-validating studies to assess the perception of stippled representations.
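
For intuition, dot placement that follows a density function can be approximated by rejection sampling, as in this simplified stand-in; the paper's actual method generalizes the Linde–Buzo–Gray algorithm and additionally relaxes stipple positions, which this sketch omits.

```python
import random

def stipple(density, n_dots, seed=0):
    """Place dots in the unit square so that their local count follows
    density(x, y) (values in [0, 1]), via rejection sampling."""
    rng = random.Random(seed)
    dots = []
    while len(dots) < n_dots:
        x, y = rng.random(), rng.random()
        # Accept a candidate with probability equal to the local density.
        if rng.random() < density(x, y):
            dots.append((x, y))
    return dots
```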

The Role of Working Memory Capacity in Graph Reading Performance
Ciara Fletcher (Western Sydney University)
Weidong Huang (Swinburne University of Technology)
David Arness (Western Sydney University)
Quang Vinh Nguyen (Western Sydney University)

People process information in working memory, and memory capacity varies between individuals. It is therefore important to understand the possible impact of memory capacity on graph comprehension. As a step in this direction, we conducted a user study investigating the impact of working memory capacity on graph reading performance. Forty-six university students performed a graph reading task with one hundred graph drawings of different complexity levels. Their working memory capacity and task performance (accuracy and time) were measured and recorded. Regression analyses indicated that working memory capacity was a significant predictor of accuracy, but not of response time. In this paper, we present the details of the study and discuss our findings and the study's limitations. Possible future research directions are also suggested.
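
The regression analysis described here fits a line predicting performance from memory capacity; the core computation is ordinary least squares, sketched below with made-up toy data.

```python
def ols_fit(x, y):
    """Least-squares line y = intercept + slope * x."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    return mean_y - slope * mean_x, slope
```

With capacity scores as x and accuracy as y, a slope significantly different from zero is what makes capacity a "significant predictor" in the study's sense.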

Scatterplot Summarization by Constructing Fast and Robust Principal Graphs from Skeletons
José Matute (Westfälische Wilhelms-Universität)
Marcel Fischer (Westfälische Wilhelms-Universität)
Alexandru Telea (University of Groningen)
Lars Linsen (Westfälische Wilhelms-Universität Münster)

Principal curves are a long-standing and well-known method for summarizing large scatterplots. They are defined as self-consistent curves (or curve sets in the more general case) that locally pass through the middle of the scatterplot data. However, computing principal curves that capture complex scatterplot topologies well and are robust to noise is hard and/or slow for large scatterplots. We present a fast and robust approach for computing principal graphs (a generalization of principal curves for more complex topologies) inspired by their similarity to medial descriptors (curves locally centered in a shape). Our approach outperforms state-of-the-art methods for computing principal graphs in computational scalability and in robustness to noise and resolution. We also demonstrate the advantages of our method over other scatterplot summarization approaches.
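
The self-consistency property that defines principal curves can be illustrated with one update step: each curve node moves to the mean of the points that project to it (here, by nearest-node assignment). This toy step is for intuition only; the paper instead builds principal graphs from skeletons.

```python
def principal_curve_step(points, nodes):
    """One self-consistency update: move each curve node to the mean of the
    2D points assigned to it by nearest-node distance."""
    assigned = [[] for _ in nodes]
    for p in points:
        best = min(range(len(nodes)),
                   key=lambda i: (p[0] - nodes[i][0]) ** 2 + (p[1] - nodes[i][1]) ** 2)
        assigned[best].append(p)
    new_nodes = []
    for node, pts in zip(nodes, assigned):
        if pts:
            new_nodes.append((sum(p[0] for p in pts) / len(pts),
                              sum(p[1] for p in pts) / len(pts)))
        else:
            new_nodes.append(node)  # keep empty nodes in place
    return new_nodes
```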

Day 2 - Apr 24 - 15:30 - 17:00
Evaluation (Understanding Users and Their Behavior)
Chair: Issei Fujishiro [Keio University, Japan]
Defamiliarization, Representation Granularity, and User Experience: a Qualitative Study with Two Situated Visualizations
Luiz Augusto de Macêdo Morais (Federal University of Campina Grande)
Nazareno Andrade (Universidade Federal de Campina Grande)
Dandara Maria Costa de Sousa (Federal University of Campina Grande)
Lesandro Ponciano (Pontifical Catholic University of Minas Gerais)

This work explores the user experience with two situated visualizations that occupy different points of the design space. The first visualization, the Activity Clock, displays the aggregate presence of laboratory members on a wall clock. The second, Personal Activities, represents the same persons individually, in a conventional poster medium. We interviewed 17 participants and leveraged the theoretical lens of Continuous Engagement and Sense-Making to study how design decisions shape the user experience with respect to (1) which design factors attract users, (2) how design features affect users' understanding of the visualization, and (3) what kind of reflections are evoked by the design. We discuss how the defamiliarizing effect of the Activity Clock plays a dual role, attracting users while also hindering their understanding of the data. We also consider the evidence that the fine representation granularity of Personal Activities evokes deeper reflections.

Emoji and Chernoff - A Fine Balancing Act or are we Biased?
Rita Borgo (King's College London)
Ricardo Colasanti (Swansea University)
Mark W. Jones (Swansea University)

We seek to answer the question of whether different geometrical attributes within a glyph can bias the interpretation of data. We focus on a specific visual encoding, the Emoji, and evaluate its effectiveness at encoding multidimensional features. Given the anthropomorphic nature of the encoding, we seek to quantify the amount of bias the encoding itself introduces, and use this to balance the Emoji glyph to remove that bias. We perform our analysis by comparing Emoji with Chernoff faces, of which they can be seen as a direct descendant. Results shed light on how this new approach of feature-tuning in glyph design can influence the overall effectiveness of novel multidimensional encodings.

Visual Analytics of Dynamic Interplay Between Behaviors in MMORPGs
Junhua Lu (Zhejiang University)
Xiao Xie (Zhejiang University)
Ji Lan (Zhejiang University)
Tai-Quan Peng (Michigan State University)
Wei Chen (Zhejiang University)
Yingcai Wu (Zhejiang University)

The rapid development of massively multiplayer online role-playing games (MMORPGs) has led operators to record huge amounts of fine-grained data from the in-game activities of players. These data provide considerable opportunities to study the dynamic interplay among player behaviors and investigate the roles of the various social structures that underlie such interplay. However, modeling and visualizing these behavioral data remain a challenge. In this study, we propose a novel influence-susceptible model to measure the dynamic interplay among multiple behaviors. Based on this model, we introduce a new visual analytics system called BeXplorer. BeXplorer enables analysts to interactively explore the dynamic interplay between player purchase and communication behaviors and to examine the manner in which this interplay is bound by the social structures in which players are embedded.

What-Why Analysis of Expert Interviews: Analysing Geographically-Embedded Flow Data
Yalong Yang (Monash University)
Sarah Goodwin (Monash University)

In this paper, we present our analysis of five expert interviews, each from a different application domain. Such analysis is crucial to understanding the real-world scenarios of analysing geographically-embedded flow data. The results of our analysis show that similar high-level tasks were conducted in different domains. To better describe the targets of these tasks, we propose three flow-targets for analysing geographically-embedded flow data: single flow, total flow, and regional flow.

User Evaluation of Group-in-a-box Variants
Nozomi Aoyama (Kyoto University)
Yosuke Onoue (Nihon University)
Yuki Ueno (Kyoto University)
Hiroaki Natsukawa (Kyoto University)
Koji Koyamada (Kyoto University)

Group-in-a-box (GIB) is a graph-drawing method designed to facilitate the visualization of the group structure of a graph. GIB allows the user to simultaneously view group sizes and inter- and intra-group structures. Several GIB variants have been proposed in the literature; however, their advantages and disadvantages have not been studied from the perspective of human cognition. Therefore, herein, we used eye-tracking analysis and user surveys to evaluate the user experience of four GIB variants: Squarified-Treemap GIB (ST-GIB), Croissant-and-Doughnut GIB (CD-GIB), Force-Directed GIB (FD-GIB), and Tree-Reordered GIB (TR-GIB). We found trade-offs among the methods for each type of user task, and that FD-GIB and TR-GIB are superior to the other variants. Although ST-GIB's results were good, links were difficult to read in this graph layout. Eye-tracking data was gathered to determine which elements in each visualization significantly affected user experience. The results of this study will promote the effective use of GIB to analyze networks such as social networks and web graphs.

Day 3 - Apr 25 - 11:00 - 12:30
Text Analysis and Visualization Applications
Chair: Stefan Jänicke [Leipzig University, Germany]
Visual Exploration of Neural Document Embedding in Information Retrieval: Semantics and Feature Selection
Xiaonan Ji (The Ohio State University)
Han-Wei Shen (The Ohio State University)
Alan Ritter (The Ohio State University)
Raghu Machiraju (The Ohio State University)
Po-Yin Yen (Washington University School of Medicine in St. Louis)

Neural embeddings are widely used in language modeling and feature generation with superior computational power. In particular, neural document embedding - converting texts of variable length to semantic vector representations - has been shown to benefit widespread downstream applications, e.g., information retrieval (IR). However, its black-box nature makes it difficult to understand how the semantics are encoded and employed. We propose visual exploration of neural document embeddings to gain insights into the underlying embedding space and promote their utilization in prevalent IR applications. In this study, we take an IR application-driven view, further motivated by biomedical IR in healthcare decision-making, and collaborate with domain experts to design and develop a visual analytics system. This system visualizes neural document embeddings as a configurable document map and enables guidance and reasoning; it facilitates exploration of the neural embedding space and identification of salient neural dimensions (semantic features) per task and domain interest; and it supports advisable feature selection (semantic analysis) along with instant visual feedback to promote IR performance. We demonstrate the usefulness and effectiveness of this system and present inspiring findings in use cases. This work will help designers and developers of downstream applications gain insights and confidence in neural document embedding, and exploit it to achieve more favorable performance in application domains.
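
IR over document embeddings ultimately reduces to vector similarity; the sketch below shows cosine-similarity retrieval over toy document vectors. Names and dimensionality are illustrative, not from the study.

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def retrieve(query, doc_vectors, k=3):
    """Rank document ids by cosine similarity to a query embedding."""
    ranked = sorted(doc_vectors,
                    key=lambda d: cosine(query, doc_vectors[d]),
                    reverse=True)
    return ranked[:k]
```

Feature selection in this setting would amount to zeroing out (or down-weighting) embedding dimensions before the similarity computation.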

An Interactive Visual Analytics System for Incremental Classification Based on Semi-supervised Topic Modeling
Yuyu Yan (Zhejiang University)
Yubo Tao (Zhejiang University)
Sichen Jin (Zhejiang University)
Jin Xu (Zhejiang University)
Hai Lin (Zhejiang University)

Text labeling for classification is a time-consuming and unintuitive process. Given an unannotated text collection, it is difficult for users to determine which labels to create and how to label the initial training set for classification. We therefore present an interactive visual analytics system for incremental text classification based on a semi-supervised topic modeling method, modified Gibbs sampling maximum entropy discrimination latent Dirichlet allocation (Gibbs MedLDA). Given a text collection, Gibbs MedLDA generates topics as a summary of the collection. We design a scatter plot that displays documents and topics simultaneously to show the topic information, which helps users explore the text collection structurally and identify labels worth creating. After documents are labeled, Gibbs MedLDA is applied to the text collection with labels again, generating both topic and classification information. We also provide a scatter plot with the classifier boundary and a matrix view presenting the weights of classifiers. Users can iteratively label documents to refine each classifier. We evaluate our system via a user study with a benchmark corpus for text classification and case studies with two unannotated text collections.

Visual Quality Guidance for Document Exploration with Focus+Context Techniques
Qi Han (University of Stuttgart)
Dennis Thom (University of Stuttgart)
Markus John (University of Stuttgart)
Steffen Koch (University of Stuttgart)
Florian Heimerl (UW-Madison)
Thomas Ertl (University of Stuttgart)

Magic-lens-based focus+context techniques are powerful means for exploring document spatializations. Typically, they only offer additional summarized or abstracted views of focused documents. As a consequence, users might miss important information that is either not shown in aggregated form or that never happens to get focused. In this work, we present the design process and user study results for improving a magic-lens-based document exploration approach with exemplary visual quality cues that guide users in steering the exploration and support them in interpreting the summarization results. We contribute a thorough analysis of the potential sources of information loss involved in these techniques, which include the visual spatialization of text documents, user-steered exploration, and the visual summarization. With lessons learned from previous research, we highlight the various ways these information losses can hamper exploration. Furthermore, we formally define measures for the aforementioned types of information loss and bias. Finally, we present visual cues that depict these quality measures and are seamlessly integrated into the exploration approach. These visual cues guide users during exploration, reduce the risk of misinterpretation, and accelerate insight generation. We conclude with the results of a controlled user study and discuss the benefits and challenges of integrating quality guidance into exploration techniques.
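
One concrete example of an information-loss measure for a spatialization is neighborhood preservation: how many of an item's nearest neighbors in the original (e.g., text-feature) space survive as nearest neighbors in the 2D layout. This is an illustrative measure of the kind the paper formalizes, not necessarily one of its exact definitions.

```python
def neighborhood_preservation(high, low, k=2):
    """Average fraction of each item's k nearest neighbors in the original
    space that remain among its k nearest neighbors in the 2D layout."""
    def knn(vectors, i):
        order = sorted(range(len(vectors)),
                       key=lambda j: sum((a - b) ** 2
                                         for a, b in zip(vectors[i], vectors[j])))
        return set(order[1:k + 1])  # skip the item itself (distance 0)
    return sum(len(knn(high, i) & knn(low, i)) / k
               for i in range(len(high))) / len(high)
```

A score of 1.0 means the layout preserves all local neighborhoods; low scores would trigger the kind of quality cue the paper proposes.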

Visual Analytics of Taxi Trajectory Data via Topical Sub-trajectories
Sichen Jin (Zhejiang University)
Yubo Tao (Zhejiang University)
Yuyu Yan (Zhejiang University)
Jin Xu (Zhejiang University)
Hai Lin (Zhejiang University)

GPS-based taxi trajectories contain valuable knowledge about movement behaviors for transportation and urban planning. Topic modeling is an effective tool for extracting semantic information from taxi trajectories. However, previous methods generally ignore the direction of trajectories. In this paper, we employ the bigram topic model instead of traditional topic models to analyze textualized trajectories, taking the direction information of trajectories into account. We further propose a modified Apriori algorithm to extract frequent sub-trajectories and use them to represent each topic as topical sub-trajectories. Finally, we design a visual analytics system with several linked views that lets users interactively explore topics, sub-trajectories, and trips. We demonstrate the effectiveness of our system via case studies with Chengdu taxi trajectory data.
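
Frequent contiguous sub-trajectory extraction in the Apriori style can be sketched as follows, assuming trajectories are already textualized into sequences of region tokens; the paper's modified algorithm differs in its details.

```python
from collections import Counter

def frequent_subtrajectories(trajectories, min_support, max_len=4):
    """Return contiguous sub-trajectories (token tuples) that occur in at
    least `min_support` trajectories, grown length by length."""
    frequent = {}
    for length in range(2, max_len + 1):
        counts = Counter()
        for traj in trajectories:
            seen = set()
            for i in range(len(traj) - length + 1):
                sub = tuple(traj[i:i + length])
                # Apriori pruning: both (length-1)-sub-windows must be frequent.
                if length > 2 and (sub[:-1] not in frequent or sub[1:] not in frequent):
                    continue
                seen.add(sub)
            counts.update(seen)  # count each trajectory at most once per pattern
        new = {s: c for s, c in counts.items() if c >= min_support}
        if not new:
            break
        frequent.update(new)
    return frequent
```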

GitViz: An Interactive Visualization System for Analyzing Development Trends in the Open-Source Software Community
Chanhee Park (Ajou University)
Sungjun Do (Ajou University)
Eunjeong Lee (Ajou University)
Hanna Jang (Ajou University)
Seungchan Jeong (Ajou University)
Hyunwoo Han (Ajou University)
Kyungwon Lee (Ajou University)

This study proposes a visualization that can assist computer scientists and data scientists in making decisions by exploring technology trends. While it is important for them to understand technology trends in the rapidly changing computer science and data science fields, acquiring good information about these trends takes considerable time and knowledge. In particular, data and computer scientists with little experience in the field find it difficult to obtain information on such trends. Therefore, we propose a visualization system for easily and quickly exploring technology trends in computer and data science. This study aims to identify the key technologies and developers in a specific field and the technologies deeply related to them, and to explore changes in the popularity of technologies, languages, and libraries over time. This study includes two case studies that obtain information using the proposed visualization. We demonstrate our system with GitHub repository data.

EngineQV: Investigating External Cause of Engine Failures Based on Geo-temporal Association
Yanchao Wang (Nanyang Technological University)
Qian Zhang (Nanyang Technological University)
Feng Lin (Nanyang Technological University)
Hock Soon Seah (Nanyang Technological University)

The heart of every vehicle is its engine. Many factors, both internal and external, contribute to an aircraft engine's durability and lifespan. In this paper, we aim to assist the analyst with a qualitative analysis of the possible external causes of engine failures. We work closely with domain experts to study the domain knowledge, analyze challenging tasks, and abstract user requirements. We present EngineQV, a visualization system that integrates multiple geo-temporal engine-associated records. It provides intuitive exploration and understanding of the data from various aspects. The system features dynamic queries on the datasets and incorporates several customized interactive visualizations. A user may query a certain group of engines or compare multiple engine groups, identify an issue, and find its potential causes. The functionality and usability of EngineQV are evaluated in two case studies, through knowledge discovery from the records of a single engine and visual comparison of multiple engines. The validity of the system is confirmed by expert feedback.

Day 3 - Apr 25 - 13:30 - 15:00
SciVis and Simulations
Chair: Renata Georgia Raidou [TU Wien, Austria]
Analysis of coupled thermo-hydro-mechanical simulations of a generic nuclear waste repository in clay rock using fiber surfaces
Christian Blecha (Leipzig University)
Felix Raith (Leipzig University)
Gerik Scheuermann (Leipzig University)
Thomas Nagel (Technische Universität Bergakademie Freiberg)
Olaf Kolditz (Helmholtz Center for Environmental Research)
Jobst Maßmann (Federal Institute for Geosciences and Natural Resources (BGR))

The use of clean and renewable energy and the abandonment of fossil energy have become goals of many national and international energy policies. But even once this is accomplished, mankind has to take charge of the relics of the current energy supply system. For example, due to its harmful effects, nuclear waste has to be isolated from the biosphere safely and for sufficiently long times. The geological subsurface is considered a promising option for the deposition of such by- or end products. In order to investigate the long-term evolution of a repository system, a multiphysics simulation was performed. It combines the structural mechanics of the host rock, the fluid dynamics of formation fluids, and the thermodynamics of all materials, resulting in a highly multivariate data set. Visualizing such multiphysics data challenges the current methodology. In this article, we demonstrate how an analysis of a carefully selected subset of the variables in attribute space allows us to visualize and interpret the simulation data. We apply a fiber surface extraction algorithm to explore the relationships between these variables. Studying the temporal evolution in attribute space, we found a regional bulge that could be identified as an effect of the nuclear waste repository, because it can be clearly separated from the natural geophysical state prior to waste disposal. Furthermore, we used the extracted fiber surface as a starting point to examine the distribution of other variables inside this area of the physical domain. We conclude this case study with lessons learned from both the visualization and the geotechnical side.
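
A fiber surface selects the spatial region whose pairs of attribute values fall inside a control polygon drawn in attribute space. A minimal sketch of that membership test over a bivariate grid follows; it is illustrative only, not the paper's extraction algorithm, which traces the actual surface through the cells.

```python
def inside_polygon(px, py, poly):
    """Even-odd (ray-casting) point-in-polygon test in attribute space (f, g)."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):
            # x-coordinate where the edge crosses the horizontal ray through py
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

def select_vertices(f, g, poly):
    """Flag grid vertices whose (f, g) attribute pair lies inside the control
    polygon; the fiber surface passes through cells whose vertices disagree."""
    return [[inside_polygon(f[y][x], g[y][x], poly)
             for x in range(len(f[0]))] for y in range(len(f))]
```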

Visual Exploration of Circulation Rolls in Convective Heat Flows
Alex Frasson (Technical University of Munich)
Mathias Kanzler (Technical University of Munich)
Martin Ender (Technical University of Munich)
Sebastian Weiss (Technical University of Munich)
Ambrish Pandey (Technische Universität Ilmenau)
Jörg Schumacher (Technische Universität Ilmenau)
Rüdiger Westermann (Technical University of Munich)

We present techniques to improve the understanding of pattern forming processes in Rayleigh-Bénard-type convective heat transport, through visually guided exploration of convection features in time-averaged turbulent flows. To enable the exploration of roll-like heat transfer pathways and pattern-forming anomalies, we combine feature extraction with interactive visualization of particle trajectories. To robustly determine boundaries between circulation rolls, we propose ridge extraction in a z-averaged temperature field, and in the extracted ridge network we automatically classify topological point defects hinting at pattern forming instabilities. An importance measure based on the circular movement of particles is employed to automatically control the density of 3D trajectories and, thus, enable insights into the heat flow in the interior of rolls. A quantitative analysis of the heat transport within and across cell boundaries, as well as investigations of pattern instabilities in the vicinity of defects, is supported by interactive particle visualization including instant computations of particle density maps. We demonstrate the use of the proposed techniques to explore direct numerical simulations of the 3D Boussinesq equations of convection, giving novel insights into Rayleigh-Bénard-type convective heat transport.
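
The z-averaging and ridge-detection steps can be illustrated minimally: average the field over z, then flag interior cells that are local maxima along an axis. This crude height-ridge test merely stands in for the paper's ridge extraction.

```python
def z_average(field):
    """Average a scalar field indexed as field[z][y][x] over z."""
    nz = len(field)
    ny, nx = len(field[0]), len(field[0][0])
    return [[sum(field[z][y][x] for z in range(nz)) / nz for x in range(nx)]
            for y in range(ny)]

def ridge_points(grid):
    """Mark interior cells that are local maxima along x or y."""
    pts = []
    for y in range(1, len(grid) - 1):
        for x in range(1, len(grid[0]) - 1):
            v = grid[y][x]
            if (v > grid[y][x - 1] and v > grid[y][x + 1]) or \
               (v > grid[y - 1][x] and v > grid[y + 1][x]):
                pts.append((x, y))
    return pts
```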

Visual Analysis of Ligand Trajectories in Molecular Dynamics
Adam Jurčík (Masaryk University)
Katarina Furmanova (Masaryk University)
Jan Byška (University of Bergen)
Vojtěch Vonásek (Czech Technical University)
Ondřej Vávra (Masaryk University)
Pavol Ulbrich (Masaryk University)
Helwig Hauser (University of Bergen)
Barbora Kozlikova (Masaryk University )

In many cases, protein reactions with other small molecules (ligands) occur in a deeply buried active site. When studying these types of reactions, it is crucial for biochemists to examine trajectories of ligand motion. These trajectories are predicted with in-silico methods that produce large ensembles of possible trajectories. In this paper, we propose a novel approach to the interactive visual exploration and analysis of large sets of ligand trajectories, enabling the domain experts to understand protein function based on the trajectory properties. The proposed solution is composed of multiple linked 2D and 3D views, enabling the interactive exploration and filtering of trajectories in an informed way. In the workflow, we focus on the practical aspects of the interactive visual analysis specific to ligand trajectories. We adapt the small multiples principle to resolve an overly large number of trajectories into smaller chunks that are easier to analyze. We describe how drill-down techniques can be used to create and store selections of the trajectories with desired properties, enabling the comparison of multiple datasets. In appropriately designed 2D and 3D views, biochemists can either observe individual trajectories or choose to aggregate the information into a functional boxplot or density visualization. Our solution is based on a tight collaboration with the domain experts, aiming to address their needs as much as possible. The usefulness of our novel approach is demonstrated by two case studies, conducted by the collaborating protein engineers.

A Linear Time BVH Construction Algorithm for Sparse Volumes
Stefan Zellmann (University of Cologne)
Matthias Hellmann (University of Cologne)
Ulrich Lang (University of Cologne)

While fast spatial index construction for triangle meshes has gained a lot of attention from the research community in recent years, fast tree construction algorithms for volume data are still rare and usually do not focus on real-time processing. We propose a linear time bounding volume hierarchy construction algorithm based on a popular method for surface ray tracing of triangle meshes that we adapt for direct volume rendering with sparse volumes. We aim at interactive to real-time construction rates and evaluate our algorithm using a GPU implementation.
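Linear-time BVH builders of this kind typically sort primitives along a space-filling curve and then emit the hierarchy in a single pass. As a rough illustration only (the paper's exact algorithm is not reproduced here, and the function names are ours), the following sketch computes 30-bit 3D Morton codes and sorts sparse volume bricks by them:

```python
def part1by2(n: int) -> int:
    """Spread the low 10 bits of n so two zero bits separate each original bit."""
    n &= 0x3FF
    n = (n | (n << 16)) & 0x030000FF
    n = (n | (n << 8)) & 0x0300F00F
    n = (n | (n << 4)) & 0x030C30C3
    n = (n | (n << 2)) & 0x09249249
    return n

def morton3d(x: int, y: int, z: int) -> int:
    """Interleave 10-bit x, y, z coordinates into a 30-bit Morton code."""
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)

# Sorting occupied cells by Morton code groups spatially nearby cells,
# after which the hierarchy can be emitted in a single linear sweep.
bricks = [(2, 0, 0), (0, 0, 1), (1, 1, 1), (0, 0, 0)]
bricks.sort(key=lambda b: morton3d(*b))
```

In a GPU implementation the per-primitive encoding and the sort are both parallel, which is what makes interactive construction rates plausible.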

Uncertainty-aware Ramachandran Plots
Robin Georg Claus Maack (University of Kaiserslautern)
Hans Hagen (University of Kaiserslautern)
Christina Gillmann (University of Kaiserslautern)

Ramachandran plots are an important tool for researchers in biochemistry to examine the stability of a molecule. In these plots, dihedral (torsion) angles of the protein's backbone are visualized on a plane, where different areas are known to correspond to stable configurations. Unfortunately, the underlying atom positions are affected by uncertainty, which is usually captured and expressed by the B-value. In classic Ramachandran plots, this uncertainty is not propagated when computing the dihedral angles and is neglected in the visualization. To solve this problem, this paper presents an extended version of the Ramachandran plot that propagates the uncertainty of atom positions through the computation of dihedral angles and visualizes it intuitively. We show the effectiveness of the presented approach by examining Ramachandran plots for different molecules and show how the inclusion of uncertainty helps biochemistry researchers determine the stability of a protein with higher accuracy.
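The dihedral angle itself is a fixed function of four backbone atom positions, and one common way (not necessarily the authors') to propagate positional uncertainty is Monte Carlo sampling, using the standard relation ⟨u²⟩ = B/(8π²) between the B-value and positional variance. A minimal sketch; the function names are ours:

```python
import math
import random

def dihedral(p0, p1, p2, p3):
    """Dihedral (torsion) angle, in degrees, defined by four atom positions."""
    sub = lambda a, b: (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    cross = lambda a, b: (a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0])
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    b1, b2, b3 = sub(p1, p0), sub(p2, p1), sub(p3, p2)
    n1, n2 = cross(b1, b2), cross(b2, b3)
    length = math.sqrt(dot(b2, b2))
    m1 = cross(n1, tuple(c / length for c in b2))
    return math.degrees(math.atan2(dot(m1, n2), dot(n1, n2)))

def dihedral_spread(points, b_values, n=1000, seed=0):
    """Propagate positional uncertainty by Monte Carlo: jitter each atom with
    an isotropic Gaussian whose variance is <u^2> = B / (8 * pi^2)."""
    rng = random.Random(seed)
    sigmas = [math.sqrt(b / (8.0 * math.pi ** 2)) for b in b_values]
    samples = []
    for _ in range(n):
        jittered = [tuple(c + rng.gauss(0.0, s) for c in p)
                    for p, s in zip(points, sigmas)]
        samples.append(dihedral(*jittered))
    return samples
```

The resulting sample of (φ, ψ) angles can then be rendered as a point cloud or density on the Ramachandran plane instead of a single point.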

Interactive Spatiotemporal Visualization of Phase Space Particle Trajectories using Distance Plots
Tyson Neuroth (University of California, Davis)
Kwan-Liu Ma (University of California, Davis)

The distance plot (or unthresholded recurrence plot) has been shown to be a useful tool for analyzing spatiotemporal patterns in high-dimensional phase space trajectories. We incorporate this technique into an interactive visualization with multiple linked phase plots, and extend the distance plot to also visualize marker particle weights from particle-in-cell (PIC) simulations together with the phase space trajectories. By linking the distance plot with phase plots, one can more easily investigate the spatiotemporal patterns, and by extending the plot to visualize particle weights in conjunction with the phase space trajectories, the visualization better supports the needs of domain experts studying particle-in-cell simulations. We demonstrate our resulting visualization design using particles from an XGC Tokamak fusion simulation.
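The distance plot itself is simply the matrix of pairwise distances between phase-space states along a trajectory; a minimal sketch (not the authors' implementation):

```python
import math

def distance_plot(trajectory):
    """Unthresholded recurrence plot: D[i][j] = ||s_i - s_j|| between states."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    return [[dist(a, b) for b in trajectory] for a in trajectory]

# A (nearly) periodic phase-space trajectory yields the diagonal-line
# texture that makes recurrence patterns visible in the plot.
traj = [(math.cos(t / 10.0), math.sin(t / 10.0)) for t in range(100)]
D = distance_plot(traj)
```

In the paper's setting each matrix cell would additionally be tied to the PIC marker-particle weight at that time step, so the plot can be colored by weight rather than distance alone.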

Day 4 - Apr 26 - 09:00 - 10:30
Narratives, Surveys, and Historical Visualizations
Chair: Yun Jang [Sejong University, Korea]
Designing Narrative Slideshows for Learning Analytics
Qing Chen (the Hong Kong University of Science and Technology)
Zhen Li (Hong Kong University of Science and Technology)
Ting-Chuen Pong (Hong Kong University of Science and Technology)
Huamin Qu (The Hong Kong University of Science and Technology)

The practical power of data visualization is currently attracting much attention in the e-learning domain. A growing number of studies have been conducted in recent years to help instructors better analyze learner behavior and reflect on their teaching. However, current e-learning dashboards and visualization systems usually require users to invest a lot of time and effort in the exploration process. Moreover, the limited communication power of existing systems constrains users from organizing pieces of information into a compelling data story. In this paper, we propose a narrative visualization approach with an interactive slideshow that helps instructors and education experts explore potential learning patterns and convey data stories. This approach contains three key components: a guided-tour concept, a drill-down path, and a dig-in exploration dimension. The use cases further demonstrate the potential of employing this visual narrative approach in the e-learning context.

A Visual Approach for the Comparative Analysis of Character Networks in Narrative Texts
Markus John (Institute for Visualization and Interactive Systems)
Martin Baumann (University of Stuttgart)
David Schütz (University of Stuttgart)
Steffen Koch (University of Stuttgart)
Thomas Ertl (University of Stuttgart)

The analysis of a novel's plot and characters is a challenging and time-consuming task in literary criticism. Typically, humanities scholars want to describe and compare characters' personality traits, their roles, their relationships, and the evolution of these aspects over the course of a novel. Nowadays, due to the digitization of literature, humanities scholars can be supported in these endeavors with computational methods. In this paper, we present an approach that offers several means to analyze the plot and characters of a novel visually. Analysts can easily switch between an adjacency matrix and a node-link representation, which provide an overview of the characters and the relationships between them. Both views enable analysts to select different text ranges of the novel for studying the commonalities and differences of the character constellations within these ranges. We offer interactive visual representations to help investigate the relationships between the characters in more detail. Additionally, we link the visual representations with the novels' texts to support the inspection and verification of previously generated ideas and hypotheses. To demonstrate the benefits and limitations of our approach, we present two usage scenarios. The first one is based on a fictitious analysis and the second one discusses applications that were carried out during joint workshops with humanities scholars. Finally, we present and discuss the insights gained by an expert study and the design decisions of our approach.

An Interactive Chart of Biography
Richard Khulusi (Leipzig University)
Jakob Kusnick (Leipzig University)
Josef Focht (Leipzig University)
Stefan Jänicke (Leipzig University )

Joseph Priestley’s Chart of Biography is a masterpiece of hand-drawn data visualization. He arranged the lifespans of around 2,000 individuals on a timeline, and the chart proved of great value for teaching purposes. We present a generic, interactive variant of the chart that adopts Priestley’s basic design principles. Our proposed visualization allows for dynamically defining groups of persons to be visually compared on different zoom levels. We designed the visualization in cooperation with musicologists who have multifaceted research interests in a biographical database of musicians. On the one hand, we enable deriving new relationships between musicians in order to extend the underlying database; on the other hand, our visualization supports analyzing time-dependent changes of musical institutions. Various usage scenarios outline the benefit of the Interactive Chart of Biography for research in musicology.

Smart Survey Tool: A Multi Device Platform for Museum Visitor Tracking and Tracking Data Visualization
Paul Craig (Xi’an Jiaotong-Liverpool University)
Yiwen Wang (Xi’an Jiaotong-Liverpool University)
Joon Sik Kim (Xi’an Jiaotong-Liverpool University)
Gang Chen (Nanjing Museum)
Yu Liu (Xi’an Jiaotong-Liverpool University)
Jiabei Li (Xi’an Jiaotong-Liverpool University)
Zhiqiang Gao (Xi’an Jiaotong-Liverpool University)
Gao Du (Xi’an Jiaotong-Liverpool University)

This paper describes the Smart Survey Tool, a novel multi-device application for museum visitor tracking and tracking data visualization. The application allows museum staff to capture detailed information describing how visitors move around an exhibition and interact with individual exhibits. They can then visualize the results of tracking either on a single mobile device or with multiple mobile devices connected to a large display. The platform uses orthogonal views of the exhibition space for tracking and visualization, with a ‘chess-piece’ icon to represent visitors during tracking, and curved semi-transparent lines with animated semi-circles to communicate the path and direction of visitor movement. Our visualization is novel in its use of an orthogonal projection for pedestrian tracking and animation to communicate the flow of visitors around the exhibition space, as well as allowing users to dynamically switch between views representing different groups of visitors. The design of our application was informed through an extensive requirements analysis study conducted with Nanjing Museum and evaluated by conducting expert interviews with museum managers who considered that the application allowed for more effective and efficient recording and analysis of visitor tracking data.

A System for Exploring Historical Fire Data
Maksim Gomov (University of California, Davis)
Tarik Crnovrsanin (University of California)
Keshav Dasu (University of California, Davis)
Kwan-Liu Ma (University of California, Davis)

Wildfires cause immense costs to human life, property, and the environment. As climate change increases the frequency and severity of wildfires, there has been a renewed effort to understand these phenomena and their catalysts. In this paper, we introduce a system that couples multiple sources of data and visualization to enable analysts to study historical fire data. We show two use cases to demonstrate the effectiveness of our system.

Day 4 - Apr 26 - 11:00 - 12:30
Machine Learning and High-Dimensional Data
Chair: Steffen Koch [University of Stuttgart]
DNN-VolVis: Interactive Volume Visualization Supported by Deep Neural Network
Fan Hong (Peking University)
Can Liu (Peking University)
Xiaoru Yuan (Peking University)

In this work, we propose a novel approach to volume visualization without the explicit traditional rendering pipeline. In our proposed method, volumetric images can be interactively 'reversed' given the volumetric data and a static volume-rendered image with the desired rendering effect. Our pipeline enables 3D navigation for exploring the given volumetric data without an explicit transfer function. In our approach, deep neural networks, combining Generative Adversarial Networks (GANs) and Convolutional Neural Networks (CNNs), are employed to synthesize high-resolution and perceptually authentic images directly, implicitly inheriting the desired transfer function and viewing parameters given by the input images.

DeepVID: Deep Visual Interpretation and Diagnosis for Image Classifiers via Knowledge Distillation
Junpeng Wang (The Ohio State University)
Liang Gou (VISA)
Wei Zhang (Visa Research)
Hao Yang (Visa Research)
Han-Wei Shen (The Ohio State University )

Deep Neural Networks (DNNs) have been extensively used in multiple disciplines due to their superior performance. However, in most cases, DNNs are considered as black-boxes and the interpretation of their internal working mechanism is usually challenging. Given that model trust is often built on the understanding of how a model works, the interpretation of DNNs becomes more important, especially in safety-critical applications (e.g., medical diagnosis, autonomous driving). In this paper, we propose DeepVID, a Deep learning approach to Visually Interpret and Diagnose DNN models, especially image classifiers. In detail, we train a small locally-faithful model to mimic the behavior of an original cumbersome DNN around a particular data instance of interest, and the local model is sufficiently simple such that it can be visually interpreted (e.g., a linear model). Knowledge distillation is used to transfer the knowledge from the cumbersome DNN to the small model, and a deep generative model (i.e., a variational auto-encoder) is used to generate neighbors around the instance of interest. Those neighbors, which come with small feature variances and semantic meanings, can effectively probe the DNN's behaviors around the instance of interest and help the small model to learn those behaviors. Through comprehensive evaluations, as well as case studies conducted together with deep learning experts, we validate the effectiveness of DeepVID.

Statistical Super Resolution for Data Analysis and Visualization of Large Scale Cosmological Simulations
Ko-Chih Wang (The Ohio State University)
Jiayi Xu (The Ohio State University)
Jonathan Woodring (Los Alamos National Laboratory)
Han-Wei Shen (The Ohio State University)

Cosmologists build simulations of the evolution of the universe using different initial parameters. By exploring the datasets from different simulation runs, cosmologists can understand the evolution of our universe and approach its initial conditions. A cosmological simulation nowadays can generate datasets on the order of petabytes, and moving these datasets from the supercomputers to post-hoc data analysis machines is infeasible. We propose a novel approach called statistical super-resolution to tackle this big data problem for cosmological data analysis and visualization. It uses datasets from a few simulation runs to build prior knowledge that captures the relation between low- and high-resolution data. We apply in situ statistical down-sampling to the datasets generated by simulation runs to minimize the requirements of I/O bandwidth and storage. High-resolution datasets are reconstructed from the statistically down-sampled data by using the prior knowledge, allowing scientists to perform advanced data analysis and render high-quality visualizations.
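The down-sample/reconstruct idea can be illustrated with a deliberately simplified 1D sketch: the per-block (mean, stddev) summary below stands in for the paper's richer statistical summaries, and the learned prior from full-resolution runs is omitted entirely. The function names are ours:

```python
import random
import statistics

def downsample(field, block):
    """In situ statistical down-sampling: keep only (mean, stddev) per block."""
    return [(statistics.mean(field[i:i + block]),
             statistics.pstdev(field[i:i + block]))
            for i in range(0, len(field), block)]

def reconstruct(summary, block, seed=0):
    """Naive reconstruction: draw each block's values from its Gaussian
    summary. (The paper additionally conditions on prior knowledge learned
    from full-resolution runs; that step is omitted here.)"""
    rng = random.Random(seed)
    values = []
    for mean, sd in summary:
        values.extend(rng.gauss(mean, sd) for _ in range(block))
    return values
```

Only the per-block summaries leave the supercomputer, which is what reduces the I/O and storage footprint.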

ComDia+: An Interactive Visual Analytics System for Comparing, Diagnosing, and Improving Multiclass Classifiers
Chanhee Park (Ajou university)
Jina Lee (Ajou university)
Hyunwoo Han (Ajou university)
Kyungwon Lee (Ajou university)

Performance analysis is essential for improving classification models. However, existing performance analysis tools do not provide actionable insights such as the cause of misclassification. Machine learning practitioners face difficulties such as prioritizing models and examining confusion between classes. In addition, existing performance analysis tools that provide feature-level analysis are difficult to apply to image classification problems. This study was conducted to address these difficulties. In this paper, we present an interactive visual analytics system for diagnosing the performance of multiclass classification models. Our system makes it possible to compare multiple models, find weaknesses, and obtain actionable insights for improving models. Our visualization consists of three views for analyzing performance at the class, confusion, and instance levels. We demonstrate our system using the MNIST handwritten digits data.

Space-Time Slicing: Visualizing Object Detector Performance in Driving Video Sequences
Teng-Yok Lee (Mitsubishi Electric Research Laboratories)
Kent Wittenburg (Mitsubishi Electric Research Laboratories)

Development of object detectors for video in applications such as autonomous driving requires an iterative training process with data that initially requires human labeling. Later stages of development require tuning a large set of parameters that may not have labeled data available. For each training iteration and parameter selection decision, insight is needed into object detector performance. This work presents a visualization method called Space-Time Slicing to assist a human developer in the development of object detectors for driving applications without requiring labeled data. Space-Time Slicing reveals patterns in the detection data that can suggest the presence of false positives and false negatives. It may be used to set such parameters as image pixel size in data preprocessing and confidence thresholds for object classifiers by comparing performance across different conditions.

Poster [Day 3 - Apr 25 - 15:30 - 17:00]
Animated Drag and Drop Interaction for Dynamic Multidimensional Graphs
Benjamin Renoust (Osaka University)
Haolin Ren (Institut National de l'Audiovisuel)
Guy Melancon (University of Bordeaux )

In this paper, we propose a new drag and drop interaction technique for graphs. We designed this interaction to support analysis in complex multidimensional and temporal graphs. The drag and drop interaction is enhanced with an intuitive and controllable animation, in support of comparison tasks.

Pattern Extraction and Visualization of Eye Tracking Scan Paths on Hierarchical AOIs
Yuri Miyagi (Ochanomizu University)
Daniel Weiskopf (University of Stuttgart)
Takayuki Itoh (Ochanomizu University )

Visualizing transitions of eye-tracking scan paths between areas of interest (AOIs) is a popular way to uncover what people pay attention to. However, such analysis is often difficult due to two problems. First, the suitable granularity of AOIs depends on what users want to find in the eye-tracking data. Second, direct comparison of a large number of scan paths is not an efficient way to find behaviors common to many people. We propose a technique for efficient comparison of visualization results of eye-tracking scan paths on different AOIs on a particular stimulus. Our technique automatically generates fine AOIs on a static stimulus, and users can group related AOIs to define coarse AOIs. We then convert the eye-tracking data to simple codes based on the fine AOI definition and apply N-gram analysis to find behavior common to multiple participants. Finally, our technique generates a directed graph that indicates transitions between AOIs and a bar chart that depicts the extracted patterns. As a use case, we visualized eye-tracking data of eight participants who observed a web page from Wikipedia. The result shows that our visualization technique can display characteristic behaviors on different AOIs and is effective for finding common or unique movements of multiple participants efficiently.
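The coding-plus-N-gram step lends itself to a compact sketch. The helper names and the minimum-participant threshold below are our own illustration, not the authors' implementation:

```python
from collections import Counter

def ngrams(seq, n):
    """All length-n subsequences of an AOI-coded scan path."""
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

def common_patterns(scanpaths, n, min_participants):
    """N-grams occurring in at least min_participants participants' paths."""
    seen = Counter()
    for path in scanpaths:
        seen.update(set(ngrams(path, n)))   # count each pattern once per person
    return {g for g, c in seen.items() if c >= min_participants}

# Scan paths already coded by (grouped) AOI label, one list per participant:
paths = [list("ABCAB"), list("ABCB"), list("CABC")]
common = common_patterns(paths, 2, 3)       # bigrams shared by all three
```

The surviving patterns can then be drawn as edges of the AOI transition graph and as bars ranking the extracted patterns.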

Visualization of Twitter Trends using a Co-occurrence Network
Jun Iio (Chuo University)
Poppe Stevie (Chuo University)
Yuta Aoki (Chuo University)
Eriko Nakamura (Chuo University)
Suna Kim (Chuo University)
Tongjin Lee (Chuo University )

Twitter can identify trending topics by examining the massive numbers of tweets posted by its enormous user base. Thus, users can efficiently determine which issues attract attention from Twitter users in a given region. In this study, to capture the substance of these topics instantly, we developed a system that visualizes the topics' structure by collecting tweets regarding the trends via the Twitter trends application programming interface (API) and the Twitter search API. After providing an overview, this paper discusses the system structure and the algorithm for creating a co-occurrence network graph to visualize the topic structure. Some visualization patterns are also presented, and their interpretation is discussed.

NeuralDivergence: Exploring and Understanding Neural Networks by Comparing Activation Distributions
Haekyu Park (Georgia Institute of Technology)
Fred Hohman (Georgia Institute of Technology)
Duen Horng Chau (Georgia Tech )

As deep neural networks are increasingly used to solve high-stakes problems, there is a pressing need to understand their internal decision mechanisms. Visualization has helped address this problem by assisting with interpreting complex deep neural networks. However, current tools often support only single data instances, or visualize layers in isolation. We present NeuralDivergence, an interactive visualization system that uses activation distributions as a high-level summary of what a model has learned. NeuralDivergence enables users to interactively summarize and compare activation distributions across layers, classes, and instances (e.g., pairs of adversarially attacked and benign images), helping them gain a better understanding of neural network models.

A Visual Analysis Approach for Electromagnetic Situation Awareness
Xiaobo Luo (Central South University)
Xiaoru Lin (Central South University)
Hairong Wang (Central South University)
Zitong Yang (Central South University)
Ying Zhao (Central South University)
Fangfang Zhou (Central South University )

This paper proposes a visual analysis approach for electromagnetic situation awareness using both radio spectrum data and radio signal data. First, a new signal sorting method is presented to sort the signals that exist in the monitored frequency band. Then, a qualitative and quantitative situation assessment model is designed based on the signal sorting results and the radio signal data to comprehensively describe the electromagnetic situation. Lastly, a visual analysis system is proposed to enhance users’ abilities of situation perception and understanding, enabling insightful anomaly root-cause reasoning and efficient decision making.

TPMAP: a Data Analytics and Visualization Platform to Support Thailand Target Poverty Alleviation Programs
Navaporn Surasvadi (National Electronics and Computer Technology Center (NECTEC))
Puripant Ruchikachorn (Chulalongkorn University)
Chaiyaphum Siripanpornchana (National Electronics and Computer Technology Center (NECTEC))
Suttipong Thajchayapong (National Electronics and Computer Technology Center (NECTEC))
Anon Plangprasopchok (National Electronics and Computer Technology Center (NECTEC) )

This paper presents the Thai People Map and Analytics Platform (TPMAP), a data analytics and visualization platform to support targeted poverty alleviation programs in Thailand. TPMAP aims at enabling Thailand's policy-makers to identify the poor, locate them, and understand their basic needs. We use a star as a multidimensional data glyph throughout various charts navigated through a geographical hierarchy. The online platform is being used by a few local government officials to tackle the poverty problem.

A Review on Quality Assessment Metrics for Edge Bundling Techniques
Ken Sakamoto (Tokyo Institute of Technology)
Ryosuke Saga (Osaka Prefecture University)
Ken Wakita (Tokyo Institute of Technology )

Edge-bundling techniques used in graph drawing simplify the graph structure and thereby offer an image whose structure is easier for humans to comprehend. This article reports metrics that were either used to quantitatively assess edge-bundling results or employed as objective functions by the bundling algorithms. The study was conducted by reviewing 56 edge-bundling papers mainly published in VIS, EuroVis, PacificVis, and TVCG. Metrics for clutter reduction measure the amount of ink usage, the moving distances and lengths of the control points, and curvature. Faithfulness is another type of measure, which captures the loss of information in the bundled and therefore simplified image. The report compares and discusses the advantages and disadvantages of the proposals.

A Study and Design of Transit Map based on Visual Perception
Chanisorn Kiratiwiriyaporn (Chulalongkorn University)
Puripant Ruchikachorn (Chulalongkorn University )

Transit maps can be designed in infinitely many different styles, but not all of the designs are equally effective. To design an efficient transit map, it is important to understand the route selection of passengers based on their visual perception. We developed six different transit maps and conducted a route selection experiment to explore the impact of using a transit map as a planning tool to influence passengers’ travel decisions when finding the fastest route in the Bangkok Mass Transit System (BTS) and Metropolitan Rapid Transit (MRT). The experiment with 90 participants was conducted through a paper-based survey of six travel decisions per map design. The results show that different map designs significantly affect the participants’ accuracy and the time needed to find the fastest route.

Visual Causal Exploration with Transfer Entropy Applied to a Severe Rainfall Event
Naohisa Sakamoto (Kobe University)
Jorji Nonaka (RIKEN Center for Computational Science)
Yasumitsu Maejima (RIKEN Center for Computational Science)
Koji Koyamada (Kyoto University )

Sudden severe weather events, such as powerful violent winds and torrential rainfalls, have been causing severe material damage as well as unfortunate human losses. Some disasters, such as landslides and flash floods, can be caused indirectly by these weather events and can occur after severe rainfalls. For disaster mitigation and adaptation planning, and for decision makers, estimating the time lag between the causing weather phenomenon and the related disaster event is highly desirable. In this work, we utilize volume data (Causal Volume Data) carrying time-lag information, calculated with transfer entropy, to assist interactive visual exploration of the causality. As a case study, we examined a flash flood event that occurred in Kobe city after a severe rainfall and resulted in human losses. From a practical evaluation with a domain expert, we could verify that the proposed Causal Volume Data approach can facilitate the visual analysis of the cause-and-effect relationship with regard to the lag time between related events.
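Transfer entropy is well defined for discrete series, and a plug-in estimator with history length 1 can be sketched as follows (the authors' lag scanning and volume construction are not shown; the synthetic series below is our own illustration):

```python
import math
import random
from collections import Counter

def transfer_entropy(src, dst):
    """Plug-in transfer entropy TE(src -> dst), in bits, with history length 1:
    sum over (d', d, s) of p(d', d, s) * log2( p(d' | d, s) / p(d' | d) )."""
    triples = Counter(zip(dst[1:], dst[:-1], src[:-1]))   # (d', d, s)
    pairs_ds = Counter(zip(dst[:-1], src[:-1]))           # (d, s)
    pairs_dd = Counter(zip(dst[1:], dst[:-1]))            # (d', d)
    singles = Counter(dst[:-1])                           # d
    n = len(dst) - 1
    te = 0.0
    for (d2, d, s), c in triples.items():
        p_cond_full = c / pairs_ds[(d, s)]
        p_cond_hist = pairs_dd[(d2, d)] / singles[d]
        te += (c / n) * math.log2(p_cond_full / p_cond_hist)
    return te

rng = random.Random(1)
x = [rng.randint(0, 1) for _ in range(5000)]
y = [0] + x[:-1]                     # y copies x with a one-step lag
te_xy = transfer_entropy(x, y)       # close to 1 bit: x drives y
te_yx = transfer_entropy(y, x)       # close to 0: nothing flows back
```

Evaluating the estimator over a range of lags, and at every voxel, is the kind of computation that yields the time-lag information stored in the Causal Volume Data.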

Multi-scale Comparison Visualization System of Mouse’s Brain
Puwen Lei (Kyoto University)
Hiroaki Natsukawa (Academic Center for Computing and Media Studies)
Masanori Shimono (Graduate School of Medicine)
Koji Koyamada (Academic Center for Computing and Media Studies )

The brain relies on information processing at various spatial scales, from a microscale of chemical transmitters to a macroscale of brain regions. However, how the systems at different scales work together efficiently is not yet well understood. In particular, there are few studies that describe detailed behaviors, including microcircuits, while observing the whole brain simultaneously. Moreover, neuronal microcircuits have various non-random network architectures, which can represent various functionalities. In this paper, we aim to explore how a visual analytics approach can help domain experts figure out the connections and functionality of brain networks. In order to investigate the structural and functional architecture of the mouse brain, we visualized functional brain networks calculated from neuronal recordings from slices of mouse brains. Brain scans before and after slicing give us the additional information of where the slices were taken from. Therefore, we also visualized the whole brain surface and the slice locations. We are going to develop an interactive visual analytics system that provides multi-level comparison methods to interpret the effective functional networks, taking the anatomical information and the topology of the network into account. To this end, we developed a prototype visualization system that enables researchers to explore the neuronal network interactively using anatomical information and network topology.

Web-based Visual Analytics System for Dynamical Network Exploration using Empirical Dynamic Modelling
Ting Wang (Kyoto University)
Hiroaki Natsukawa (Kyoto University)
Koji Koyamada (Kyoto University )

There is a strong need to investigate time-varying relations from time series data measured in various fields. A time-varying relation can be calculated by an emerging method based on nonlinear state space reconstruction called Empirical Dynamic Modelling (EDM). In this study, we developed a web-based visual analytics system to support the exploration of a dynamical network constructed by EDM. This system enables us to identify and interpret the system state of a dynamical network and to compare the identified states between networks. We conducted a use case study applying the proposed system to a natural science data set.

BiClustering and Transfer Entropy for the Visual Analysis of Critical Hardware Failures on the K computer
Kazuki Koiso (Kobe University)
Naohisa Sakamoto (Kobe University)
Jorji Nonaka (RIKEN Center for Computational Science)
Fumiyoshi Shoji (RIKEN Center for Computational Science)
Keiji Yamamoto (RIKEN Center for Computational Science )

Along with the massive amounts of data sets produced by large-scale simulation runs on HPC systems, HPC sites also produce large amounts of system and sensor data, such as those used for monitoring the health of the HPC system during regular daily operation. For instance, R-CCS (RIKEN Center for Computational Science) has generated such HPC facility-related data sets since the beginning of its operation in 2012 and stored them on the GFS (Global File System) of the K computer. Visual analysis of such data sets is important for the operational staff for improving operational efficiency and facilitating operational decision making. Focusing on the critical failures that require hardware substitution, we have been developing a transfer-entropy-based visual analytics system for assisting the visual causal analysis of such failures. In this poster, we propose the use of a spectral biclustering technique for filtering the data, thus delimiting a region of interest for applying the transfer entropy technique for visual causal analysis. In this initial evaluation, we could verify some cases where the causality relationship becomes more reasonable.

Automatic Visualization Answer Generation for Tabular Data
Yun Han (Peking University)
Wentao Zhang (Peking University)
Sihang Li (Peking University)
Can Liu (Peking University)
Xiaoru Yuan (Peking University )

In this paper, we propose a system that automatically generates the corresponding visualization and answers when specific questions are raised about tabular data. In our approach, the questions are classified into different task types, and the related data items in the table are extracted by semantic parsing with a deep neural network; the results can then be visualized in different ways automatically. Tasks including questions on comparison, trend, and aggregation are supported. We demonstrate a system that is capable of answering questions with reasonable accuracy.

Automatic Caption Generation for SVG Charts
Can Liu (Peking University)
Liwenhan Xie (Peking University)
Yun Han (Peking University)
Xiaoru Yuan (Peking University)

Captions play an important role in guiding people to interpret a chart and in conveying the designer's message, but writing a proper caption requires manual effort. In this paper, we propose a novel automatic approach, powered by deep learning, to generate captions from visualization charts. The model learns to recognize significant features of the chart, which are mainly represented by subsets of its visual elements. Through a carefully designed summary template, each subset is converted into a descriptive sentence, i.e., a data fact, and these are composed into a complete caption for the chart.

Interactive Movie Recommendation System Based on Local Model Fusion
Ying Tang (Zhejiang University of Technology)
Xiao Li (Zhejiang University of Technology)
Michael Johnson (University of Limerick)

A traditional recommendation system using a single algorithm is not efficient enough for accurate and diverse recommendation tasks. Such a system may assume that two items have the same similarity in every user subgroup, causing the algorithm to miss local difference information. Some multi-model fusion systems based on user-item matrices cannot accurately capture the preferences of users who seldom or never give ratings. In addition, such recommendation systems cannot show the recommendation process to the user. This paper addresses these problems by proposing a novel system model for movie recommendation. To address the first problem, the model extracts the preference features of each user from the user's historical viewing tag sets, allowing all users to be clustered into subgroups based on these features. To solve the second problem, the proposed system uses a novel interactive recommendation system, RecVis. RecVis allows the visualization of the recommendation process and user portraits, providing recommendations and interactive feedback to the system user(s).

Comparative Visualization of Mode Water Regions among Observation, Assimilation and Simulation
Midori Yano (Ochanomizu University)
Takayuki Itoh (Ochanomizu University)
Yuusuke Tanaka (Japan Agency for Marine-Earth Science and Technology)
Daisuke Matsuoka (Japan Agency for Marine-Earth Science and Technology)
Fumiaki Araki (Japan Agency for Marine-Earth Science and Technology)

Mode water is a seawater mass defined by particular water properties and is one of the criteria for evaluating North Pacific Ocean data. Conventional comparative studies visualized mode water on cutting planes at specific positions in the ocean; however, it is not easy to recognize differences in the 3D shapes of mode water among ocean datasets. We developed a comparative 3D visualization tool for mode water regions in previous work. This poster introduces our study on shape comparison of mode water regions across three ocean datasets: observation, assimilation, and simulation. We found that, in terms of shape similarity, the assimilation dataset was more similar to the observation dataset than the simulation was. We applied a view-based method to compare pairs of isosurfaces from the observation and assimilation datasets and observe their similar portions.

A Case Study of Open Data Visualization System for Government Transparency in Thailand
Puripant Ruchikachorn (Chulalongkorn University)
Rapee Suveeranont (Boonmee Lab)
Thitiphong Luangaroonlerd (Boonmee Lab)

Open data is essential to government transparency. However, simply opening access to government data files does not guarantee data accessibility for the general population. An efficient way is to digest the data and visually present its insights. We designed and implemented a visualization system to communicate the data stories of Thai government procurement to both the public and journalists. Our iterative design process consulted two political reporters. The final design incorporated scroll-based storytelling and several visualization elements that had not previously been available, such as a hexagonal grid map of Thailand. We demonstrated the usefulness of this system with a scenario regarding government budget transparency.

Synchrotron Radiation-based Three Dimensional Volume Rendering Images Analysis of Nerve Injured Soleus Muscles for Objective Diagnosis
Jiwon Lee (University of Soonchunhyang)
Cho-i Moon (University of Soonchunhyang)
Subok Kim (University of Soonchunhyang)
HyunJong Yoo (University of Soonchunhyang)
Onseok Lee (University of Soonchunhyang)

Muscle is a body organ that maintains physical balance in daily life and is essential for the prevention of physical disability. Conventional electron microscopy, the existing method for imaging internal muscle structure, has difficulty supporting 3D reconstruction and quantitative analysis. Therefore, we visualize nerve injury models of mouse soleus muscles with synchrotron radiation imaging as a complementary method. As a result, we performed an objective diagnostic imaging analysis of the injured muscle model.

Screen Space Viewpoint Guide Model for Direct Volume Rendering
Seokyeon Kim (Data Visualization Lab, Sejong University)
Min Ook Kim (Sejong University)
Sangbong Yoo (Sejong University)
Dong gun Kim (Sejong University)
Yun Jang (Sejong University)

Volume rendering is a technique for visualizing a discretely sampled 3D data set, typically a 3D scalar field. In 3D data visualization, rendering techniques should enable us to understand the structure of the target data while minimizing the occlusion of data features. In this paper, we propose a screen space viewpoint guide model that utilizes information entropy based on the rendered image in real time. The proposed model can adjust the viewpoint continuously while the transfer function changes.
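The abstract mentions an information entropy computed on the rendered image; the precise formulation is not given, but a common choice is the Shannon entropy of the image's intensity histogram, sketched below under that assumption (function and variable names are illustrative):

```python
from collections import Counter
from math import log2

def image_entropy(pixels, bins=16):
    """Shannon entropy of an intensity histogram for a rendered image whose
    pixel values lie in [0, 1]; higher values suggest a more varied view."""
    counts = Counter(min(int(p * bins), bins - 1) for p in pixels)
    n = len(pixels)
    return sum((c / n) * log2(n / c) for c in counts.values())

flat = [0.5] * 64                      # uniform image: one occupied bin
varied = [i / 64 for i in range(64)]   # gradient: all 16 bins occupied
print(image_entropy(flat), image_entropy(varied))
```

A viewpoint guide built on this idea would render candidate views and steer toward those whose images score higher entropy, recomputing as the transfer function changes.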

Movement Pattern Classification and Visualization Using Bike Sharing Data
Seongmin Jeong (Data Visualization Lab, Sejong University)
Mingyu Pi (Data Visualization Lab, Sejong University)
Hanbyul Yeon (Sejong University)
Hye Sook Son (Sejong University)
Seokbong Jeon (Sejong University)
Yun Jang (Sejong University)

This paper presents an early study using bike sharing data to deduce the purpose of movements. We use OSM map information to classify the area features of bike sharing stations. We then infer the purpose of movements according to the area features and apply them to actual cases to visualize how purposes change as people use the bicycles over time.

Storytelling [Day 3 - Apr 25 - 15:30 - 17:00]
Diabetes Diet Plan Visualized
Guojun Han
Youngeun Olivia Kang
Ting Pan
Zhenyu Cheryl Qian
Yingjie Victor Chen

This data visualization project aims to reveal the connection between food and the human body. As we know, different types of food contain different nutrients. We manually collected the nutrition data and then used Adobe Illustrator to present the connection between food type and nutrition. We then sorted the collected foods according to the effects they can have on the human body, finding that food can benefit our brain, eyes, skin, etc. However, if we eat too much food, or food that gives our bodies the wrong instructions, we can become overweight, undernourished, and at risk for developing diseases and conditions such as arthritis, diabetes, and heart disease. To better illustrate these side effects, we chose diabetes for further investigation. In the end, we use visualized data to help people with diabetes better organize their daily meals.

What data can tell about dimensionality reduction techniques
Lorenzo Amabili
Jiri Kosinka

Three datasets (the Semeion Handwritten Digit Dataset, the Breast Cancer Wisconsin (Diagnostic) Dataset, and the Swiss Roll Dataset) were first reduced to two dimensions by applying Sammon Mapping, SNE, and t-SNE in MATLAB, using the code made publicly available by Laurens van der Maaten. Some initial data cleaning was performed using R. Afterwards, the visualisations were created in D3.js. The iterative design process of the visual story consisted of main concept development (i.e., main idea and story plot), data selection, story structure definition, story body production (in which the visualisations were the core), a first draft of the entire video, video editing (using iMovie), choice of the soundtrack, and feedback-based refinements.
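SNE and t-SNE, mentioned above, both start from Gaussian conditional similarities computed in the high-dimensional space. A minimal sketch of those similarities, with a fixed bandwidth instead of the per-point perplexity calibration the real algorithms perform, might look like:

```python
from math import exp

def conditional_p(points, i, sigma=1.0):
    """Gaussian conditional similarities p_{j|i} that SNE and t-SNE build on:
    the probability that point i would pick point j as its neighbour."""
    d2 = [sum((a - b) ** 2 for a, b in zip(points[i], q)) for q in points]
    w = [exp(-d / (2 * sigma ** 2)) if j != i else 0.0
         for j, d in enumerate(d2)]
    total = sum(w)
    return [v / total for v in w]

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]
p = conditional_p(pts, 0)
print(p[1] > p[2])   # the nearby point dominates the neighbour distribution
```

The embedding step then searches for low-dimensional coordinates whose analogous distribution matches these similarities; in practice one would use van der Maaten's released code or a library implementation rather than this sketch.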

Patterns in Nobel Nomination
Xingyu Cai
Xiaoru Yuan

A body of visualizations has focused on the laureates. In this visualization, we instead examine Nobel nominees and nominators: not only the prizewinners, but also the Nobel also-rans. The Royal Swedish Academy of Sciences has made nomination information from 1901 to 1966 publicly available. Although it is a pity that we cannot have more recent data, we can still find interesting stories about the mysterious Nobel nominations from decades ago, and the visualization reveals the underlying nomination patterns. The representation of nomination is complicated: a triangle is chosen as the main visual mark as a metaphor for the act of nomination, and stroke-style, filled-style, and half triangles are used to represent different nomination statuses.

Train, plane or car?
María Jesús Lobo
Christophe Hurter

We propose a visual data story about air traffic in France, using the dataset given for the competition. We conducted an analysis to understand trends in traffic and CO2 emissions by studying the relationship between the flights' routes and sizes and their CO2 emissions between the years 2000 and 2017. After identifying the principal trends, we designed the video story based mainly on animations. First, we use an animation that bundles the different origin/destination paths and animates them through the years; this enables the viewer to see general trends in the traffic network by reducing edge clutter. Then, to present visual summaries of the data, we use abstract visualizations and pictograms. We chose to reduce the complexity of the visualizations to emphasize trends, and to use consistent pictograms across the video to keep the viewer engaged and make the message understandable.

Periodic Temperature Effects on Biodiversity
Keshav Dasu
Paulina Lei
Erin Satterthwaite
Elliott Hazen
Jarrod Santora
Kwan-Liu Ma

The National Oceanic and Atmospheric Administration (NOAA) collects near real-time data on the oceans and atmosphere. We analyzed this data for the region spanning from Monterey Bay up to the San Francisco Bay Area in the United States from 2009 to 2015 and created a visualization to help illustrate the relationship between climate and biodiversity to the public. We designed a glyph, inspired by Slingsby's work, to represent chlorophyll volume, wind direction, and species diversity. The glyph is applied to a tile-map of the region, where tile color encodes sea surface temperature. The glyph and tile-map were implemented using Three and exported as images; using Photoshop, we manually illustrated the species abundance data. The combination of our glyph and tile-map allows us to better see how these variables influence one another and sheds light on how climate affects biodiversity. (View our presentation at

The Impatient List: A Visual Storytelling of Kidney Donation and Transplantation in United States
Kuhu Gupta
Junjie Xu
Yanfeng Jin
Het Piyush Sheth

"The Impatient List" is a storytelling piece that calls the general public's attention to kidney transplant patients in the United States. Organ transplantation is a highly collaborative task, involving the patient, the donor, hospitals, and organizations. The organization has been collecting this data for three decades, and it was the primary source of data for our visualization. With empathy for those patients, we want our designs to tell the story behind the data to the general public. Therefore, to personalize the analysis for a group of varied users, our design allows users to filter the data based on their location, blood group, and BMI. This way, users can guide the story toward their own interests. To present the big picture, we adopted a choropleth design for waiting-list information over geo-locations, and animation for waiting-list change over the last two decades. You can view the data story at -

Of Catastrophes and Rescues: Making the Invisible Visible
Peter Mindek
Tobias Klein
Ludovic Autin
Haichao Miao
Theresia Gschwandtner

We demonstrate how data-driven visualization can be used to explain processes that are not directly observable. While the classic approach is to illustrate these processes manually, we created a procedural model which can be directly parametrized by the underlying data. We used this approach to explain the process of dynamic instability of microtubules. Our model of a microtubule is parametrized by the growth speed data provided by Wittmann et al. [6], while the models of the molecules are taken from the Protein Data Bank [2]. We used our real-time molecular visualization software marion [5] together with our procedural generator of molecular scenes [3] to create an intracellular environment. The dynamics of the molecules were informed by fundamental biological research [1, 7]. To put our visualization in perspective with reality, we included microscopy data of microtubule dynamics by Matov et al. [4].

A Visual Storytelling for Progress in Characterizing Brain Activity of Autism Spectrum Disorder Children
Xuetong Zhao
Yuchen Zhang

Diagnosis of Autism Spectrum Disorder (ASD) has always relied on behavioral observations. However, recent brain imaging studies show significant alterations in brain structure and function associated with ASD, which opens a new channel for imaging-based diagnosis. Accurate diagnosis depends on accurate characterization of the alterations. In this regard, multi-contrast imaging can provide more complete and complementary information for characterizing the autistic brain. Recent experimental data show that hemodynamic responses in the prefrontal cortex differ between children with autistic traits and normal controls in a joint attention task. Future comprehensive study with optical brain imaging of autistic brain activity and its inherent characteristics may provide reliable evidence for early autism diagnosis. This video gives the background to the development of this approach and the data obtained from the current research.

Day 1 - Apr 23 - 09:50 - 10:30
Visual Analytics Systems
Clone-World: A Visual Analytic System for Large Scale Software Clones
Debajyoti Mondal (University of Saskatchewan)
Manishankar Mondal (University of Saskatchewan)
Chanchal Roy (University of Saskatchewan)
Kevin Schneider (University of Saskatchewan)
Yukun Li (University of Saskatchewan)
Shisong Wang (University of Saskatchewan)

As the era of big data approaches, the number of software systems and their dependencies, as well as the complexity of individual systems, is growing larger and more intricate. Understanding these evolving software systems is thus a primary challenge for cost-effective software management and maintenance. In this paper, we perform a case study with evolving code clones. Programmers often need to manually analyze the co-evolution of clone fragments to make decisions about refactoring, tracking, and bug removal. However, manual analysis is time consuming and nearly infeasible for a large number of clones, e.g., millions of similarity pairs, where clones evolve over hundreds of software revisions. We propose an interactive visual analytics system, Clone-World, that leverages a big data visualization approach to manage code clones in large software systems. Clone-World gives an intuitive yet powerful solution to clone analytics problems. It combines multiple information-linked zoomable views in which users can explore and analyze clones through interactive exploration in real time. User studies and experts' reviews suggest that Clone-World may assist developers in many real-life software development and maintenance scenarios. We believe that Clone-World will ease the management and maintenance of clones and inspire future innovation in adapting visual analytics to manage big software systems.

aflak: Visual Programming Environment Enabling End-to-End Provenance Management for the Analysis of Astronomical Datasets
Malik Olivier Boussejra (Keio University)
Rikuo Uchiki (Keio University)
Yuriko Takeshima (Tokyo University of Technology)
Kazuya Matsubayashi (Kyoto University)
Shunya Takekawa (Nobeyama Radio Observatory, National Astronomical Observatory of Japan (NAOJ), National Institutes of Natural Sciences (NINS))
Makoto Uemura (Hiroshima University)
Issei Fujishiro (Keio University)

This paper describes an extendable graphical framework, aflak, which provides a visualization and provenance management environment for the analysis of multi-spectral astronomical datasets. Via its node editor interface, aflak allows the astronomer to compose transforms on input datasets queryable from public astronomical data repositories, and then to export the results of the analysis as Flexible Image Transport System (FITS) files, in such a manner that the full provenance of the output data is preserved and reviewable, and the exported file is usable by other common astronomical analysis software. FITS is the standard for data interchange in astronomy. By embedding aflak's provenance data into FITS files, we achieve both interoperability with existing software and full reproducibility of the process by which astronomers make discoveries.

Day 1 - Apr 23 - 11:40 - 12:20
AI and VIS
Interactive Labelling of a Multivariate Dataset for Supervised Machine Learning using Linked Visualisations, Clustering, and Active Learning
Mohammad Chegini (Graz University of Technology)
Jürgen Bernard (TU Darmstadt)
Philip Berger (University of Rostock)
Alexei Sourin (Nanyang Technological University)
Keith Andrews (Graz University of Technology)
Tobias Schreck (Graz University of Technology)

Supervised machine learning techniques require labelled multivariate training datasets. Many approaches address the issue of unlabelled datasets by tightly coupling machine learning algorithms with interactive visualisations. Using appropriate techniques, analysts can play an active role in a highly interactive and iterative machine learning process to label the dataset and create meaningful partitions. While this principle has been implemented either for unsupervised, semi-supervised, or supervised machine learning tasks, the combination of all three methodologies remains challenging. In this paper, a visual analytics approach is presented, combining a variety of machine learning capabilities with four linked visualisation views, all integrated within the mVis multivariate Visualiser system. The available palette of techniques allows an analyst to perform exploratory data analysis on a multivariate dataset and divide it into meaningful labelled partitions, from which a classifier can be built. In the workflow, the analyst can label interesting patterns or outliers in a semi-supervised process supported by active learning. Once a dataset has been interactively labelled, the analyst can continue the workflow with supervised machine learning to assess to what degree the subsequent classifier has effectively learned the concepts expressed in the labelled training dataset. Using a novel technique called automatic dimension selection, interactions the analyst had with dimensions of the multivariate dataset are used to steer the machine learning algorithms. A real-world football dataset is used to show the utility of mVis for a series of analysis and labelling tasks, from initial labelling through iterations of data exploration, clustering, classification, and active learning to refine the named partitions, to finally producing a high-quality labelled training dataset suitable for training a classifier. 
The tool empowers the analyst with interactive visualisations, including scatterplots, parallel coordinates, similarity maps for records, and a new similarity map for partitions.

An Association Rule based Approach to Reducing Visual Clutter in Parallel Sets
Chong Zhang (UNCC)
Yang Chen (I4 data)
Jing Yang (UNCC)
Zhengcong Yin (Texas A&M University)

Although Parallel Sets, a popular categorical data visualization technique, intuitively reveals frequency-based relationships in detail, a high-dimensional categorical dataset yields a cluttered visual display that seriously obscures relationship exploration. Association rule mining is a popular approach to discovering relationships among categorical variables, and it can complement Parallel Sets by grouping ribbons in a meaningful way. However, it is difficult to understand the large number of rules discovered from a high-dimensional categorical dataset. In this paper, we integrate the two approaches into a visual analytics system for exploring high-dimensional categorical data with a dichotomous outcome. The system not only helps users interpret association rules intuitively, but also provides an effective dimension and category reduction approach towards a less cluttered and more organized visualization. The effectiveness and efficiency of our approach are illustrated by a set of user studies and experiments with benchmark datasets.
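As a rough illustration of the association rule mining the paper builds on, the sketch below exhaustively checks single-antecedent rules against support and confidence thresholds; production miners such as Apriori or FP-growth prune this search, and the basket data and names here are invented for illustration:

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.5, min_confidence=0.8):
    """Exhaustively mine single-antecedent association rules (A -> B) that
    meet the given support and confidence thresholds."""
    n = len(transactions)

    def support(itemset):
        # Fraction of transactions containing every item in the itemset.
        return sum(1 for t in transactions if itemset <= t) / n

    items = sorted({i for t in transactions for i in t})
    rules = []
    for a, b in combinations(items, 2):
        for ante, cons in ((a, b), (b, a)):
            sup = support({ante, cons})
            if sup >= min_support and sup / support({ante}) >= min_confidence:
                rules.append((ante, cons, sup))
    return rules

baskets = [{"bread", "milk"}, {"bread", "milk", "eggs"},
           {"bread", "milk"}, {"eggs"}]
print(mine_rules(baskets))   # [('bread', 'milk', 0.75), ('milk', 'bread', 0.75)]
```

In a Parallel Sets context, each mined rule identifies a group of ribbons (category combinations) worth emphasizing or merging, which is how rules can drive the dimension and category reduction described above.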

Day 1 - Apr 23 - 14:10 - 14:50
Visual Encodings for Analytics
Exploration behavior of group-in-a-box layouts
Yuki Ueno (Kyoto University)
Hiroaki Natsukawa (Kyoto University)
Nozomi Aoyama (Kyoto University)
Koji Koyamada (Kyoto University)

To improve visualization, it is necessary to optimize the design by analyzing the behavior of users, as well as by improving the evaluation indices of computational experiments and the task performance (e.g., correct answer rate and completion time) in user experiments. Although various studies have investigated the influence of user behavior on the evaluation of visualization, the majority have focused on simple visualization tasks. A simple task does not mean a simple visualization comprising few visual elements, but rather a task in which the information obtained from the visualization is the only clue for completing it. However, few studies have targeted complicated tasks, in which multiple pieces of information obtained from the visualization serve as clues for completing the task, regardless of the number of elements the visualization contains. Therefore, in this study, we investigated the behavior of participants performing complicated tasks. We selected two types of group-in-a-box (GIB) layouts, which can be considered a complicated visualization method, as the target of the user experiment. Participants were asked to perform an exploration task specific to GIB layouts: which group has the maximum number of intra-edges? We also collected eye-tracking data in addition to task performance. The results showed that the correct answer rate is considerably affected by one visualization factor: whether the correct answer, the box with the maximum number of intra-edges, is also the box with the largest area. Furthermore, an analysis of the collected eye-tracking data revealed that this visualization factor affected the exploration behavior of the participants; however, it did not affect the locations on which the participants focused.
The obtained results indicate that visualization elements not considered by the visualization designer can influence the task of extracting information from the data. Therefore, designers have to configure the visualization by considering the visual cognitive behavior of users.

Interactive Map Reports Summarizing Bivariate Geographic Data
Shahid Latif (University of Duisburg-Essen)
Fabian Beck (University of Duisburg-Essen)

Bivariate map visualizations use different encodings to visualize two variables but comparison across multiple encodings is challenging. Compared to a univariate visualization, it is significantly harder to read regional differences and spot geographical outliers. Especially targeting inexperienced users of visualizations, we advocate the use of natural language text for augmenting map visualizations and understanding the relationship between two geo-statistical variables. We propose an approach that selects interesting findings from data analysis, generates a respective text and visualization, and integrates both into a single document. The generated reports interactively link the visualization with the textual narrative. Users can get additional explanations and have the ability to compare different regions. The text generation process is flexible and adapts to various geographical and contextual settings based on small sets of parameters. We showcase this flexibility through a number of application examples.

Day 1 - Apr 23 - 16:00 - 16:30
Poster Presentations
Copyright © IEEE Pacific Visualization Symposium 2019