<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Visualization Design Lab</title>
    <description>Data visualization research lab at Graz University of Technology and University of Utah</description>
    <link>https://vdl.sci.utah.edu/</link>
    <atom:link href="https://vdl.sci.utah.edu/feed.xml" rel="self" type="application/rss+xml" />
    <pubDate>Fri, 20 Feb 2026 18:29:20 +0000</pubDate>
    <lastBuildDate>Fri, 20 Feb 2026 18:29:20 +0000</lastBuildDate>
    <generator>Jekyll v4.4.1</generator>
    
      <item>
        <title>ReVISit 2 Paper Wins Best Paper Award at IEEE VIS</title>
        <description>&lt;p&gt;Running online visualization studies is now standard practice in VIS and HCI research. Yet the process remains fragmented: researchers stitch together survey tools, custom web code, logging scripts, analysis pipelines, and ad hoc debugging workflows. Users of &lt;a href=&quot;https://revisit.dev&quot;&gt;reVISit&lt;/a&gt; already know this story: reVISit consolidates this ecosystem into a single open framework that supports the &lt;strong&gt;entire experiment life cycle&lt;/strong&gt; – from design to dissemination.&lt;/p&gt;

&lt;p&gt;To inform the academic community about the new developments in reVISit 2 – which we already described in these &lt;a href=&quot;https://revisit.dev/blog/2025/01/20/release-2.0/&quot;&gt;blog&lt;/a&gt; &lt;a href=&quot;https://revisit.dev/blog/2025/10/27/release-2.3/&quot;&gt;posts&lt;/a&gt; – we wrote an &lt;a href=&quot;https://www.visdesignlab.net/publications/2025_vis_revisit/&quot;&gt;academic paper&lt;/a&gt; about it.&lt;/p&gt;

&lt;h2 id=&quot;positioning-revisit-in-the-ecosystem&quot;&gt;Positioning reVISit in the Ecosystem&lt;/h2&gt;

&lt;p&gt;In the paper, we first situate reVISit among existing study platforms. We compare it to commercial survey systems, domain-specific research tools, and library-based frameworks. While survey platforms excel at rapid deployment, they rarely support sophisticated interaction logging or fine-grained experimental control. Academic tools often address specific domains or slices of the workflow, but lack long-term maintainability or broad adoption.&lt;/p&gt;

&lt;p&gt;ReVISit 2 is designed differently: it treats experiment design as programmable infrastructure. A JSON-based domain-specific language (DSL) models sequences, blocks, counterbalancing strategies, interruptions, skip logic, and dynamic control flow. On top of that, reVISitPy provides Python bindings that allow researchers to generate complex study configurations directly from notebooks. &lt;strong&gt;The result is a framework that emphasizes expressiveness, reproducibility, and ownership over one’s experimental stack.&lt;/strong&gt;&lt;/p&gt;
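
&lt;p&gt;To give a flavor of the DSL, here is a minimal, hypothetical configuration sketch. The field names are illustrative and are not guaranteed to match the actual reVISit schema; consult the reVISit documentation for the real grammar.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  &quot;components&quot;: {
    &quot;consent&quot;: { &quot;type&quot;: &quot;markdown&quot;, &quot;path&quot;: &quot;consent.md&quot; },
    &quot;scatter-task&quot;: { &quot;type&quot;: &quot;react-component&quot;, &quot;path&quot;: &quot;ScatterTask.tsx&quot; },
    &quot;bar-task&quot;: { &quot;type&quot;: &quot;react-component&quot;, &quot;path&quot;: &quot;BarTask.tsx&quot; }
  },
  &quot;sequence&quot;: {
    &quot;order&quot;: &quot;fixed&quot;,
    &quot;components&quot;: [
      &quot;consent&quot;,
      { &quot;order&quot;: &quot;latinSquare&quot;, &quot;components&quot;: [&quot;scatter-task&quot;, &quot;bar-task&quot;] }
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Nesting blocks this way – here, a counterbalanced inner block inside a fixed outer sequence – is what lets mixed and adaptive designs be expressed directly in the configuration rather than in custom code.&lt;/p&gt;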

&lt;p&gt;We also describe technical advances in reVISit 2, including first-class Vega support, automated provenance tracking, participant replay, and improved debugging tools such as the study browser. These features aim to tighten feedback loops during piloting while preserving transparency during dissemination.&lt;/p&gt;

&lt;h2 id=&quot;putting-it-to-the-test-replication-studies&quot;&gt;Putting it to the Test: Replication Studies&lt;/h2&gt;

&lt;p&gt;To demonstrate that these capabilities are not merely architectural, we conducted a series of replication studies.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2026_bubble_chart.png&quot; alt=&quot;Screenshot of the revisit analysis interface showing a replay of an interactive study with think aloud and provenance tracking. A large bubble chart is the main stimulus.&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Each study highlights a different capability of the system. In one, we implement adaptive and staircase-style designs to &lt;a href=&quot;https://revisit.dev/replication-studies/?tab=JND&quot;&gt;evaluate visualizations of correlations&lt;/a&gt; using &lt;a href=&quot;https://revisit.dev/docs/typedoc/interfaces/DynamicBlock/&quot;&gt;dynamic sequencing logic&lt;/a&gt;, showing how complex control flow can be expressed directly in the study configuration. In another, we integrate &lt;a href=&quot;https://revisit.dev/docs/designing-studies/think-aloud/&quot;&gt;think-aloud protocols&lt;/a&gt; by embedding audio recording and transcription into browser-based experiments, allowing &lt;a href=&quot;https://revisit.dev/replication-studies/?tab=Search&quot;&gt;researchers to capture reasoning during interaction&lt;/a&gt; rather than only after the fact. Finally, we demonstrate &lt;a href=&quot;https://revisit.dev/replication-studies/?tab=Pattern&quot;&gt;provenance tracking and replay&lt;/a&gt; by instrumenting interactive visualizations to capture detailed interaction histories, enabling fine-grained participant replay and qualitative analysis. reVISit 2 also provides deep linking to specific trials or moments in user studies, to aid in dissemination and transparency. For example, &lt;a href=&quot;https://revisit.dev/replication-studies/bubblechart-study/LzE2MTl4ZVRMTk5nSFlNYmd1ZDhjZz09?participantId=936e6c58-fc6e-4e1f-9af9-9c9ce2a65952&quot;&gt;this link&lt;/a&gt; takes you to the exact state you see in the above image.&lt;/p&gt;

&lt;p&gt;Across these replications, we recruited hundreds of participants and reproduced key findings from prior visualization studies. Just as importantly, the studies surfaced practical lessons about counterbalancing, recruitment logistics, and the realities of deploying sophisticated designs online. Together, they function both as validation and as a stress test of the framework. &lt;strong&gt;They also remain publicly accessible – complete with study configurations and data – serving as concrete, real-world examples that the community can inspect, reuse, and learn from.&lt;/strong&gt;&lt;/p&gt;

&lt;h2 id=&quot;what-users-told-us&quot;&gt;What Users Told Us&lt;/h2&gt;

&lt;p&gt;We also interviewed experienced reVISit users to better understand how the system performs in practice. Several themes emerged:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Tighter development loops.&lt;/strong&gt; Users appreciated the integrated development environment and the study browser, which makes it possible to jump directly to specific trials without stepping through an entire study.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Expressiveness over convenience.&lt;/strong&gt; While the DSL requires programming knowledge, users valued the flexibility it affords – especially for mixed designs and adaptive sequencing.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Learning curve trade-offs.&lt;/strong&gt; ReVISit inherits complexity from modern web tooling (e.g., React, TypeScript). This can be a barrier for less technical researchers, but it also enables deeper customization and extensibility.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Open infrastructure.&lt;/strong&gt; The ability to fork studies, inspect core code, and maintain version stability was frequently cited as a strength, particularly for reproducibility.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall, feedback confirmed that reVISit is most effective for technically oriented research teams who need more than a survey builder.&lt;/p&gt;

&lt;h2 id=&quot;recognition-at-ieee-vis&quot;&gt;Recognition at IEEE VIS&lt;/h2&gt;

&lt;p&gt;We were very honored to receive an &lt;a href=&quot;https://ieeevis.org/year/2025/info/awards/best-paper-awards#:~:text=ReVISit%202%3A%20A%20Full%20Experiment%20Life%20Cycle%20User%20Study%20Framework&quot;&gt;&lt;strong&gt;IEEE VIS Best Paper Award&lt;/strong&gt;&lt;/a&gt; for this work. Zach Cutler presented the paper on the main stage in Vienna.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2026_zach_presenting.jpg&quot; alt=&quot;Zach Cutler presenting the paper at IEEE VIS in Vienna.&quot; /&gt;&lt;/p&gt;

&lt;p&gt;This recognition reflects years of iterative development, community feedback, tutorials, documentation work, and, most importantly, the researchers who have trusted reVISit in their own studies.&lt;/p&gt;

&lt;p&gt;ReVISit 2 is not the endpoint. It is infrastructure for a research community that increasingly relies on sophisticated, reproducible, browser-based experiments. We look forward to continuing to build it together.&lt;/p&gt;

</description>
        <pubDate>Mon, 10 Nov 2025 00:00:00 +0000</pubDate>
        <link>https://vdl.sci.utah.edu/blog/2025/11/10/ieee-vis-award/</link>
        <guid isPermaLink="true">https://vdl.sci.utah.edu/blog/2025/11/10/ieee-vis-award/</guid>
        
        
        <category>blog</category>
        
      </item>
    
      <item>
        <title>Learning with Hands: Making Complex Chart Types Accessible</title>
        <description>&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;h2 id=&quot;tactile-charts-as-templates-for-chart-learning&quot;&gt;Tactile Charts as Templates for Chart Learning&lt;/h2&gt;

&lt;p&gt;Tactile charts—charts you can explore by touch—are not new. They have long been used to represent data, helping blind and low-vision (BLV) readers grasp spatial patterns and relationships. But they have limitations: they take time and resources to produce, can’t easily be updated, and are often harder to access compared to digital alternatives such as alt text.&lt;/p&gt;

&lt;p&gt;So why do we use them? Because when it comes to &lt;strong&gt;learning a chart type&lt;/strong&gt;—building that mental model that helps future interpretations—touch can be a powerful sense. And unlike data-specific tactile charts, template charts don’t require frequent updates as they are learning tools, not meant to communicate a specific dataset.&lt;/p&gt;

&lt;p&gt;Working closely with our blind collaborators and following an iterative design process, we created educational &lt;strong&gt;tactile template charts&lt;/strong&gt; for four complex chart types frequently found in scientific publications: &lt;strong&gt;UpSet plots, clustered heatmaps, violin plots, and faceted line charts&lt;/strong&gt; (inspired by genome browsers). Each design is accompanied by &lt;strong&gt;digital exploration instructions&lt;/strong&gt; to support independent learning. You can see our final tactile model designs and accompanying exploration instructions on our accessible &lt;a href=&quot;https://vdl.sci.utah.edu/tactile-charts/&quot;&gt;website&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;do-tactile-charts-help&quot;&gt;Do Tactile Charts Help?&lt;/h2&gt;

&lt;p&gt;We tested our violin plot and clustered heatmap designs in an interview study with 12 BLV participants. Each participant learned two advanced chart types—one using our Tactile+Text method (a tactile chart with a textual description) and one with Text-Only. Afterwards, we gave them new datasets described only with alt text to see whether the learning method affected how well they could apply what they had learned.&lt;/p&gt;

&lt;p&gt;The results were striking. Participants not only preferred the hands-on learning experience of tactile charts, but also reported that tactile charts helped them build a much clearer mental model of the chart types they learned. Tactile charts allowed them to “picture” layouts and chart element shapes more vividly. Several described the experience as akin to being able to see the chart. Participants also used the tactile charts as a mental reference to interpret new alt text descriptions more confidently.&lt;/p&gt;

&lt;h2 id=&quot;why-does-this-matter&quot;&gt;Why Does This Matter?&lt;/h2&gt;
&lt;p&gt;In our interviews, participants spoke openly about the barriers BLV individuals face in visualization education and communication, and their desire for equal access to information. They saw tactile template charts as a way to bridge that gap, offering a more effective and inclusive approach to visualization education.&lt;/p&gt;

&lt;p&gt;In a world increasingly driven by data, data literacy matters: it shapes job opportunities and enables equal access to information. By using tactile template charts as educational tools, we can equip BLV learners with transferable knowledge—knowledge that carries over to new contexts. Our hope is to empower BLV individuals to interpret unfamiliar visualizations more confidently and to participate fully in academic, professional, and civic conversations with sighted collaborators where data plays a central role.&lt;/p&gt;

&lt;p&gt;We make our models available on the &lt;a href=&quot;https://vdl.sci.utah.edu/tactile-charts/&quot;&gt;project website&lt;/a&gt;, so that anyone with access to a 3D printer can make their own template charts.&lt;/p&gt;

&lt;p&gt;Sometimes, the best way to learn a chart is to feel it.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This work was generously supported by the Chan Zuckerberg Initiative.&lt;/em&gt;&lt;/p&gt;
</description>
        <pubDate>Fri, 08 Aug 2025 06:00:00 +0000</pubDate>
        <link>https://vdl.sci.utah.edu/blog/2025/08/08/tactile-charts/</link>
        <guid isPermaLink="true">https://vdl.sci.utah.edu/blog/2025/08/08/tactile-charts/</guid>
        
        
        <category>blog</category>
        
      </item>
    
      <item>
        <title>VDL is Moving to Austria</title>
        <description>&lt;p&gt;&lt;br /&gt;
After an incredible decade at the University of Utah, the Visualization Design Lab (VDL) is entering a new chapter: the lab is &lt;strong&gt;partially relocating to Graz University of Technology (TU Graz)&lt;/strong&gt; in Austria. This will be a gradual transition: only Alex will be moving, while the rest of the team remains in Utah. For the foreseeable future, the lab will operate &lt;strong&gt;across two continents&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;VDL’s new home in Austria will be the &lt;a href=&quot;https://hcc.tugraz.at/&quot;&gt;Institute of Human-Centred Computing&lt;/a&gt; at &lt;a href=&quot;https://tugraz.at/&quot;&gt;TU Graz&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Alex will be hiring a PhD student and a Postdoctoral Fellow in Graz. Check out the &lt;a href=&quot;/positions/&quot;&gt;positions page&lt;/a&gt; for details.&lt;/strong&gt;&lt;/p&gt;

&lt;h2 id=&quot;a-brief-history-of-the-visualization-design-lab&quot;&gt;A Brief History of the Visualization Design Lab&lt;/h2&gt;

&lt;p&gt;The Visualization Design Lab was founded when &lt;a href=&quot;/team/lex&quot;&gt;Alex&lt;/a&gt; joined the University of Utah and teamed up with &lt;a href=&quot;https://miriah.github.io/&quot;&gt;Miriah&lt;/a&gt; in 2015. From then until 2021, Miriah and Alex co-directed the lab, leading numerous projects and mentoring many students.&lt;/p&gt;

&lt;p&gt;In 2021, Miriah moved to Linköping University in Sweden, where she founded the &lt;a href=&quot;https://visidlab.github.io/&quot;&gt;Visualization and Interaction Design Group&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In 2025, Alex returned to his alma mater in Graz to continue VDL’s research mission at TU Graz.&lt;/p&gt;

&lt;p&gt;This website will reflect the &lt;strong&gt;new phase of the lab in Austria&lt;/strong&gt;, while continuing to &lt;strong&gt;preserve the history, people, and publications&lt;/strong&gt; from the Utah chapter. For archival purposes, the final version of the Utah VDL site will remain available at&lt;br /&gt;
&lt;a href=&quot;https://vdl.sci.utah.edu&quot;&gt;https://vdl.sci.utah.edu&lt;/a&gt;.&lt;/p&gt;
</description>
        <pubDate>Wed, 30 Jul 2025 06:00:00 +0000</pubDate>
        <link>https://vdl.sci.utah.edu/blog/2025/07/30/vdl-moves/</link>
        <guid isPermaLink="true">https://vdl.sci.utah.edu/blog/2025/07/30/vdl-moves/</guid>
        
        
        <category>blog</category>
        
      </item>
    
      <item>
        <title>Making Data Visualizations Talk: Accessible Text Descriptions for UpSet Plots</title>
        <description>&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;Scientists love data. In fact, we love data so much that when you open up any journal article, chances are that many of its pages are scattered with data visualizations. That is not just because we like pretty pictures (although we do!). Data visualizations are a powerful tool to show trends or patterns in the numbers that may be difficult to explain in words alone.&lt;/p&gt;

&lt;h2 id=&quot;but-what-if-you-cant-see-the-data-visualization&quot;&gt;But what if you can’t see the data visualization?&lt;/h2&gt;

&lt;p&gt;Most charts are entirely inaccessible to people who are blind or have low vision. These users often rely on screen readers to read the contents of a computer screen aloud. However, those tools usually come up short regarding data visualizations. Instead of describing the chart, they might just say something like “Image of Figure 1.” That leaves the figure caption as the only source of information; yet captions almost never explain what the data is actually showing. Imagine trying to understand a chart with only a title and maybe a vague summary. That is the reality for many blind readers.&lt;/p&gt;

&lt;p&gt;One chart type where this really matters is the &lt;a href=&quot;https://upset.app&quot;&gt;UpSet plot&lt;/a&gt;, shown in the figure above. UpSet plots are a popular way to visualize how different sets overlap. Think of them as a more organized way to show set overlaps than a traditional Venn diagram. These plots are used across disciplines, especially in computer science, bioinformatics, computational biology… practically any data-heavy field. But UpSet plots are visually complex and nearly impossible to interpret without sight.&lt;/p&gt;

&lt;p&gt;That is where our work comes in. We developed a system that automatically generates screen-reader-accessible text descriptions for UpSet plots. But not just generic summaries like “there are bars and dots.” These descriptions highlight key structural features of the plot, including the largest intersections, the distribution of elements, and other patterns typically interpreted visually. The goal is to &lt;strong&gt;give screen-reader users meaningful access to the same insights a sighted reader would glean from the visualization&lt;/strong&gt;. It basically provides the plot with a narrative voice.&lt;/p&gt;

&lt;p&gt;What makes our system unique? This is not just a one-off tool for a specific dataset. We analyzed 80 UpSet plots from published papers and identified recurring data patterns, such as skewed intersection sizes or dominant sets, that shaped the narrative focus of our text descriptions. These insights informed a modular, rule-based pipeline that produces both brief summaries and extended descriptions. As shown in the annotated example below, the generated text incorporates multiple layers of information, from basic chart elements to statistical summaries.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2025_eurovis_text-description.png&quot; alt=&quot;A long piece of text, one of the upset plot descriptions, with colorful annotations. At the top is a legend describing what the colors mean. It reads Level 1 (element and encoded), Level 2 (Statistical and relational), Level 3 (perceptual and cognitive), and Level 4 (Contextual and domain-specific). Each piece of the text is colored by one of these tags. On the left-hand side, it is indicated which piece of the text is the long description and which part is the short description. On the right-hand side are indications which of our developed patterns are relevant to each piece of text.&quot; width=&quot;400px&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;but-do-these-text-descriptions-actually-work&quot;&gt;But do these text descriptions actually work?&lt;/h2&gt;

&lt;p&gt;To find out, we conducted in-depth interviews with 11 blind or low-vision screen reader users. And these were not quick usability tests! They were extended conversations where participants walked through the descriptions, shared what made sense and what did not, and suggested changes that made the descriptions stronger.&lt;/p&gt;

&lt;p&gt;These interviews revealed valuable insights into how experienced screen-reader users navigate complex text. For example, our participants emphasized the importance of structure, recommending using bullet points to help navigate longer descriptions. They also highlighted the need for high-level takeaways before discussing fine-grained details.&lt;/p&gt;

&lt;p&gt;After this process, &lt;a href=&quot;https://scholar.google.com/citations?user=Yq2he8sAAAAJ&amp;amp;hl=en&quot;&gt;Maggie McCracken&lt;/a&gt;, one of the lead authors of our study, realized how powerful participation itself could be. As a psychology researcher, she has worked with many participant groups, but this one stood out for how genuinely enthusiastic they were. They were not just excited about the tool but about the fact that this type of research was happening at all. Several told us they were entirely used to being left out of data visualizations and that being part of the process was meaningful. And that says a lot, considering the data they were reading was about COVID symptoms, which is not exactly edge-of-your-seat material.&lt;/p&gt;

&lt;p&gt;So yes, making charts more accessible helps blind readers. But more than that, it improves how we all think about communicating data.&lt;/p&gt;

&lt;p&gt;If you are curious, you can try it out for yourself at &lt;a href=&quot;https://upset.multinet.app&quot;&gt;https://upset.multinet.app&lt;/a&gt;. You can also upload your own data and generate descriptions instantly.&lt;/p&gt;

&lt;p&gt;This project serves as a reminder that accessibility is not just a technical challenge. It is about asking who gets to be included in scientific conversations. Accessibility is not just a bonus feature but part of that conversation. Charts included.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This work was generously supported by the Chan Zuckerberg Initiative.&lt;/em&gt;&lt;/p&gt;
</description>
        <pubDate>Sat, 07 Jun 2025 06:00:00 +0000</pubDate>
        <link>https://vdl.sci.utah.edu/blog/2025/06/07/upset-text-descriptions/</link>
        <guid isPermaLink="true">https://vdl.sci.utah.edu/blog/2025/06/07/upset-text-descriptions/</guid>
        
        
        <category>blog</category>
        
      </item>
    
      <item>
        <title>Max Lisnic Successfully Defends Dissertation</title>
        <description>&lt;p&gt;&lt;a href=&quot;https://mlisnic.github.io/&quot;&gt;Max Lisnic&lt;/a&gt; successfully defended his dissertation on “Designing Resilient Visualizations Toward More Accurate Data Discourse”. Max was co-advised by Marina Kogan and Alex Lex. The committee was completed by Kate Isaacs, Vineet Pandey, and Crystal Lee.&lt;/p&gt;

&lt;p&gt;Max will join &lt;a href=&quot;https://www.wpi.edu/academics/departments/computer-science&quot;&gt;WPI&lt;/a&gt; as an assistant professor this fall!&lt;/p&gt;

&lt;p&gt;Congrats, and good luck with your next steps!&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2025-05-15_Max-Speaking.jpg&quot; alt=&quot;Max Speaking&quot; /&gt;
&lt;img src=&quot;/assets/images/posts/2025-05-15_Group.jpg&quot; alt=&quot;Max and the Group&quot; /&gt;
&lt;img src=&quot;/assets/images/posts/2025-05-15_Cake.jpg&quot; alt=&quot;Max&apos; Cake&quot; /&gt;&lt;/p&gt;
</description>
        <pubDate>Thu, 15 May 2025 11:00:00 +0000</pubDate>
        <link>https://vdl.sci.utah.edu/event/2025/05/15/lisnic_defense/</link>
        <guid isPermaLink="true">https://vdl.sci.utah.edu/event/2025/05/15/lisnic_defense/</guid>
        
        
        <category>event</category>
        
      </item>
    
      <item>
        <title>Reading Between the Lines: The US Computer Science Graduate Admission Process</title>
        <description>&lt;p&gt;The time between submitting your application and hearing back can be stressful, in particular if you don’t hear back for a long time. This post is meant to help you read between the line, so that you can judge what a non-response from a grad program means. 
I’ll talk mostly about PhD admissions, but sprinkle in some tidbits for MS admissions. 
I’ve previously written about &lt;a href=&quot;https://vdl.sci.utah.edu/blog/2020/11/21/grad-school/&quot;&gt;what you can do to get into grad school&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;its-march-or-april-i-havent-heard-back-at-all-is-there-still-a-chance-that-i-will-get-accepted&quot;&gt;It’s March or April. I haven’t heard back at all. Is there still a chance that I will get accepted?&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;To be blunt: it’s unlikely.&lt;/strong&gt; US PhD programs often have three piles: definite admits, students on the waitlist, and rejects.&lt;/p&gt;

&lt;p&gt;Unfortunately, only the definite admits will hear about a decision early; everybody else is kept in purgatory for a while. Our CS program informs definitely admitted candidates as early as &lt;strong&gt;late January or the first half of February&lt;/strong&gt;. Domestic students are invited to visit campus in mid-February; international students are invited to participate in a virtual event at the same time.&lt;/p&gt;

&lt;p&gt;So, if you haven’t heard back from a PhD program by the end of February, it means that you’re not in the “definitely admit” pile. There might still be hope though: you might be on the waitlist.&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;MS students&lt;/strong&gt;, the timeline is usually a bit longer: you should expect to hear in February or March.&lt;/p&gt;

&lt;p&gt;Candidates on the &lt;strong&gt;waitlist&lt;/strong&gt; are meant to fill slots that free up as students in the “definite admit” pile reject an offer of admission. Waitlisted candidates are considered promising students, but ranked slightly below others. Not every school has a waitlist; Utah only introduced one a few years ago. You might be told that you’re on a waitlist, or you might not hear back at all. Sometimes, waitlisted candidates may even hear back last: definite admits go out first, rejects go out next, and waitlisted candidates may be left in limbo for a while. Whether a slot opens up when you’re on the waitlist is difficult to predict – it typically depends on whether other candidates who are vying for a slot in a particular area / with a particular professor accept or reject an offer.&lt;/p&gt;

&lt;p&gt;Most schools don’t have a waitlist for MS students.&lt;/p&gt;
&lt;h3 id=&quot;can-i-e-mail-admissions-to-learn-more-about-whether-ive-been-accepted&quot;&gt;Can I e-mail admissions to learn more about whether I’ve been accepted?&lt;/h3&gt;

&lt;p&gt;Typically, an e-mail to a generic admissions contact or the director of graduate studies will not help you learn about your status. We usually don’t respond to these inquiries because we wait for official channels to give you the news instead.&lt;/p&gt;
&lt;h3 id=&quot;can-i-e-mail-the-professor-i-want-to-work-with-to-learn-whether-ive-been-accepted&quot;&gt;Can I e-mail the professor I want to work with to learn whether I’ve been accepted?&lt;/h3&gt;

&lt;p&gt;You have a better chance (though still a slim one) of getting a response about your status from a professor you’ve interacted with, or whom you listed in your application as a potential advisor. Generally, individual faculty have a lot of influence over the admissions process. If they want to work with you (and have the money and time), you’re almost certainly going to be admitted.&lt;/p&gt;

&lt;p&gt;This doesn’t hold for MS students: since most MS students focus on coursework and don’t interact closely with individual faculty members, it’s unlikely that a particular professor will know anything about your status, and professors (outside of the admissions committee) also don’t intervene in MS admissions decisions.&lt;/p&gt;
&lt;h3 id=&quot;ive-been-rejected-by-a-program-can-i-ask-for-feedback-on-how-to-improve-my-application&quot;&gt;I’ve been rejected by a program. Can I ask for feedback on how to improve my application?&lt;/h3&gt;
&lt;p&gt;You typically won’t get individualized responses to an inquiry about what you can do to improve your application. That’s partially because it would take a lot of effort to respond (Utah’s CS program gets more than 1000 grad applications), and partially because people worry that you might use any advice in litigation against them. If you’d like feedback on your application, you should reach out to your letter writers who know you personally.&lt;/p&gt;
&lt;h3 id=&quot;ive-been-accepted-by-a-program-but-havent-heard-back-from-my-dream-school-yet-when-do-i-have-to-make-a-decision&quot;&gt;I’ve been accepted by a program, but haven’t heard back from my ‘dream school’ yet. When do I have to make a decision?&lt;/h3&gt;

&lt;p&gt;All major grad schools have &lt;a href=&quot;https://cgsnet.org/resources/for-current-prospective-graduate-students/april-15-resolution&quot;&gt;agreed to use the same deadline&lt;/a&gt; by which students must accept an offer of financial support (that usually comes with a PhD offer): &lt;strong&gt;April 15&lt;/strong&gt;. It’s strongly advisable to accept any offer you may have by that deadline. You might still be admitted if you respond later, but there are no guarantees anymore. Typically an offer letter lists a specific deadline.&lt;/p&gt;

&lt;p&gt;We (at Utah) try to make offers to waitlisted students sooner, as we learn that other candidates have declined theirs.&lt;/p&gt;
&lt;h3 id=&quot;ive-accepted-an-offer-but-now-i-received-a-better-one-can-i-walk-my-acceptance-back&quot;&gt;I’ve accepted an offer but now I received a better one. Can I walk my acceptance back?&lt;/h3&gt;
&lt;p&gt;It’s not great but it happens. No one can force you to attend a PhD program, so yes, go ahead and rescind your acceptance, but do it as soon as possible.&lt;/p&gt;

&lt;h3 id=&quot;ive-been-accepted-to-multiple-programs-should-i-decline-offers-i-wont-take-immediately&quot;&gt;I’ve been accepted to multiple programs. Should I decline offers I won’t take immediately?&lt;/h3&gt;

&lt;p&gt;Yes! The sooner the better. You might give a waitlisted student a chance to get admitted.&lt;/p&gt;

&lt;h3 id=&quot;professors-keep-ignoring-my-e-mail-what-can-i-do&quot;&gt;Professors keep ignoring my e-mail. What can I do?&lt;/h3&gt;

&lt;p&gt;I’m sorry, it sucks to be ghosted. But most professors delete a lot of e-mail from prospective students. Here’s how to get a response:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Write targeted e-mail only to people you really want to work with. Don’t mass e-mail – we can spot mass e-mails (even if you paste in one of our paper titles).&lt;/li&gt;
  &lt;li&gt;Keep it short and to the point. Five sentences at most.&lt;/li&gt;
  &lt;li&gt;Don’t ChatGPT us. We don’t care for a wall of well-written but empty text.&lt;/li&gt;
  &lt;li&gt;Don’t get our research area wrong. This will almost certainly get your e-mail deleted.&lt;/li&gt;
&lt;/ul&gt;
</description>
        <pubDate>Wed, 09 Apr 2025 06:00:00 +0000</pubDate>
        <link>https://vdl.sci.utah.edu/blog/2025/04/09/grad-school-admission/</link>
        <guid isPermaLink="true">https://vdl.sci.utah.edu/blog/2025/04/09/grad-school-admission/</guid>
        
        
        <category>blog</category>
        
      </item>
    
      <item>
        <title>Reflections on UpSet</title>
        <description>&lt;p&gt;&lt;a href=&quot;https://upset.app/&quot;&gt;UpSet&lt;/a&gt; has won the 10-year &lt;a href=&quot;https://ieeevis.org/year/2024/program/awards/awards.html&quot;&gt;Test of Time Award at IEEE VIS&lt;/a&gt;. I’m deeply honored by this award, and I want to thank the committee for choosing our paper. But most importantly, I want to thank my co-authors &lt;a href=&quot;http://gehlenborglab.org/&quot;&gt;Nils Gehlenborg&lt;/a&gt;, &lt;a href=&quot;http://hendrik.strobelt.com/&quot;&gt;Hendrik Strobelt&lt;/a&gt;, &lt;a href=&quot;https://romain.vuillemot.net/&quot;&gt;Romain Vuillemot&lt;/a&gt;, and my PostDoc advisor &lt;a href=&quot;https://vcg.seas.harvard.edu/people/hanspeter-pfister&quot;&gt;Hanspeter Pfister&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;As of October 2024, the &lt;a href=&quot;https://vdl.sci.utah.edu/publications/2014_infovis_upset/&quot;&gt;UpSet paper&lt;/a&gt; has been cited 1900 times. Together with Jake Conway and Nils Gehlenborg, I also wrote a short &lt;a href=&quot;https://vdl.sci.utah.edu/publications/2017_bioinformatics_upsetr/&quot;&gt;follow-up paper introducing an R version of UpSet&lt;/a&gt;, which has been cited 2500 times. So clearly, those papers have hit a nerve – UpSet is now the de facto standard for visualizing set data with more than 3 sets. Also, UpSet has been &lt;a href=&quot;https://upset.app/implementations/&quot;&gt;re-implemented many times&lt;/a&gt;, making it available on various platforms and in many programming languages, which certainly contributed significantly to its success.&lt;/p&gt;

&lt;p&gt;In this post, I want to first give a brief history on how UpSet came to be and acknowledge the giants on whose shoulders we stood, and then share my thoughts on &lt;a href=&quot;#what-made-upset-successful&quot;&gt;what made UpSet successful&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;how-upset-came-to-be&quot;&gt;&lt;strong&gt;How UpSet Came to Be&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;UpSet plots were intended to solve the problem of set visualization for more than three sets. Very specifically, they were inspired by the &lt;a href=&quot;https://www.nature.com/nature/journal/v488/n7410/full/nature11241.html&quot;&gt;now infamous six-set banana Venn diagram&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2024-10-upset/banana.jpg&quot; alt=&quot;A six-set venn diagram where one of the set shapes is a banana. The chart is difficult to read.&quot; /&gt;&lt;/p&gt;

&lt;p&gt;This chart was widely ridiculed on Twitter back then. When I saw this, I figured that we should be able to create a better visualization for set data than that. So I did some research and found various set vis methods, including &lt;a href=&quot;https://ieeexplore.ieee.org/abstract/document/6634104&quot;&gt;radial sets&lt;/a&gt;. But the method that I liked the most was a response to a &lt;a href=&quot;https://nuit-blanche.blogspot.com/2007/09/on-difficulty-of-autism-diagnosis-can.html&quot;&gt;“Vennerable Challenge” for Autism data&lt;/a&gt; by &lt;a href=&quot;https://nuit-blanche.blogspot.com/2007/10/judging-autism-charts-challenge.html&quot;&gt;Robert&lt;/a&gt; &lt;a href=&quot;https://eagereyes.org/blog/2007/autism-diagnosis-accuracy&quot;&gt;Kosara&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2024-10-upset/tree.png&quot; alt=&quot;A bar chart on Autism prevalence and a tree below the bars that identifies whether a group has certain properties.&quot; /&gt;&lt;/p&gt;

&lt;p&gt;I liked that this chart uses a simple bar chart instead of the irregular shapes that Venn diagrams use, but I struggled a bit with parsing the tree – I always had to trace paths to the root, and I couldn’t spot any trends with regard to the set intersection patterns.&lt;/p&gt;

&lt;p&gt;Also, this chart doesn’t actually plot intersection sizes as bars but rather a different attribute (% of correct diagnostic tests), so it’s visualizing something very different.&lt;/p&gt;

&lt;p&gt;Still, this chart got me thinking, and shortly before Christmas 2013, while I was a PostDoc in Hanspeter Pfister’s group at Harvard, I sketched this in my notebook:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2024-10-upset/sketch.jpg&quot; alt=&quot;A hand-drawn sketch of UpSet, with all of the principal ideas present.&quot; /&gt;&lt;/p&gt;

&lt;p&gt;As you can see, all of the principal ideas of UpSet are already in that sketch: the matrix layout for sets, the bar charts for set and intersection sizes, even advanced metrics such as “deviations from expected intersection sizes” and grouping and aggregation.&lt;/p&gt;

&lt;p&gt;I pitched the idea to my office-mate at Harvard, &lt;a href=&quot;https://perso.liris.cnrs.fr/nicolas.bonneel/&quot;&gt;Nicolas Bonneel&lt;/a&gt; (who is a computer graphics researcher), and he immediately got the idea, but was not very excited because it “was so simple”.&lt;/p&gt;

&lt;p&gt;I went home for Christmas and kept thinking about it and decided to give it a go for VIS.  I also wanted to do this on the web; I had only done Java-based visualizations up to this point, but there was this hot new thing called D3 that I wanted to take out for a spin. So I started out and pretty quickly got to a first prototype that you can see here:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2024-10-upset/upset_first_results.png&quot; alt=&quot;A very simple first implementation of the UpSet idea with squares and bar charts.&quot; /&gt;&lt;/p&gt;

&lt;p&gt;At this point I realized that I’d need help to pull this off, especially with only three months to go until the conference submission deadline. My advisor, Hanspeter Pfister, was supportive, so we assembled the “dream team” of PostDoc friends who were also doing visualization research at Harvard: &lt;a href=&quot;http://gehlenborglab.org/&quot;&gt;Nils Gehlenborg&lt;/a&gt; (at the Harvard Medical School), &lt;a href=&quot;http://hendrik.strobelt.com/&quot;&gt;Hendrik Strobelt&lt;/a&gt; (who had just joined the lab that month), and &lt;a href=&quot;https://romain.vuillemot.net/&quot;&gt;Romain Vuillemot&lt;/a&gt; (at the Harvard Kennedy School).&lt;/p&gt;

&lt;p&gt;Everyone on this team was an experienced visualization researcher, and magically, everyone found the time to really pour their heart into the project. We had many lively discussions on our wall-spanning whiteboard, and lots of heated arguments about UpSet features in its early days. &lt;br /&gt;
We were able to pull the UpSet paper off in just three months – from idea, to design and refinement, to evaluation, to writeup – and you can still admire the result here: &lt;a href=&quot;https://vcg.github.io/upset/&quot;&gt;https://vcg.github.io/upset/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2024-10-upset/upset_original.png&quot; alt=&quot;The UpSet system as published in the paper&quot; /&gt;&lt;/p&gt;

&lt;p&gt;As you can see, this is actually a fairly complicated interactive visualization system. While the basic design remained the same, everyone contributed immensely to make it all come together. It took a million design decisions to make this actually work. We also used interaction extensively to enable users to answer a variety of questions. The system includes supplementary visualizations, queries, aggregations of intersections, attribute visualizations, and so on.&lt;/p&gt;

&lt;p&gt;We made the deadline and properly celebrated the submission!&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2024-10-upset/celebrate.png&quot; alt=&quot;Celebrations with the authors in a collage&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The paper received mostly strong reviews on its first submission to InfoVis. &lt;br /&gt;
Of course, the dreaded “Reviewer 2” found that “&lt;em&gt;The proposed software and its associated methodology does not go beyond what this area has been providing as standard for the last 15 years.&lt;/em&gt;” Nevertheless, the other reviews were sufficiently positive, and we were happy to present the paper that year at the &lt;a href=&quot;https://ieeevis.org/year/2014/info/vis-welcome/welcome&quot;&gt;2014 IEEE VIS conference in Paris&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;what-made-upset-successful&quot;&gt;&lt;strong&gt;What Made UpSet Successful?&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;So, what made UpSet more successful than some of my other visualization papers? I think it was a multitude of factors. Some of them might be unique to this project, but others may hold lessons for other visualization projects as well.&lt;/p&gt;

&lt;h3 id=&quot;upset-solved-a-real-and-pressing-need&quot;&gt;&lt;strong&gt;UpSet Solved a Real and Pressing Need&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;There really wasn’t a good way to visualize set intersections of four, five, or even more sets. Existing Venn or Euler diagram solutions just don’t work: it gets too complicated to understand which sets are involved in an intersection, and area-proportionality is incredibly hard for Venn diagrams with more than three sets. Especially in the biomedical domain, we saw a lot of bizarre many-set Venn diagrams:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2024-10-upset/venns.png&quot; alt=&quot;A collection of five or more set venn diagrams.&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Other solutions required interaction, or were difficult to interpret. In contrast, UpSet plots are at least somewhat self-explanatory and solve the problem.&lt;/p&gt;

&lt;h3 id=&quot;upset-met-users-where-they-are&quot;&gt;&lt;strong&gt;UpSet Met Users Where They Are&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;If we had just stopped with our UpSet implementation where we were with our InfoVis paper, I doubt that it would have been a big success. I’m sure people in our community would have appreciated it, and might even have built extensions based on our open-source code. But we realized that for broad adoption, we had to meet users where they are, and we had to communicate with them.&lt;/p&gt;

&lt;p&gt;So we set out to build the R version of UpSet – &lt;a href=&quot;https://github.com/hms-dbmi/UpSetR&quot;&gt;UpSetR&lt;/a&gt;. The idea was that the biomedical community is already working with R for their data visualization needs. Uploading data to our web tool and then taking a screenshot would have yanked them from their workflow, and would have resulted in a “dead-end” plot that they’d have to manually adapt if their data changed. An R version &lt;strong&gt;would make it simple to just integrate UpSet plots in their current workflow&lt;/strong&gt;, significantly lowering the barrier to its use.&lt;/p&gt;

&lt;p&gt;We also promoted UpSet, and particularly the R version, in two ways: we wrote a short two-page &lt;a href=&quot;https://academic.oup.com/bioinformatics/article/33/18/2938/3884387&quot;&gt;“Applications Note” in a widely read bioinformatics journal&lt;/a&gt;. There was no new scientific content (as judged by the VIS community) in that article; it was just a simplified re-implementation of the tool in R.&lt;/p&gt;

&lt;p&gt;And then we also wrote a &lt;a href=&quot;https://www.nature.com/articles/nmeth.3033&quot;&gt;how-to guide article for a then-popular series, Nature Methods’ “Points of View”&lt;/a&gt;. In that article we talk about set visualization in general: for example, we recommend using Venn diagrams for two or three sets, and matrices for very large numbers of sets. But we also point to UpSet plots for the “middle ground” of 4–10 sets.&lt;/p&gt;

&lt;p&gt;At a later point, we also set up &lt;a href=&quot;https://upset.app/&quot;&gt;UpSet.app&lt;/a&gt; to document the general use of UpSet and published a &lt;a href=&quot;https://en.wikipedia.org/wiki/UpSet_plot&quot;&gt;Wikipedia article&lt;/a&gt; on UpSet. It was a small victory when the Wikipedia article was approved for inclusion by the Wikipedia admins instead of being deleted for “lack of notability”.&lt;/p&gt;

&lt;h3 id=&quot;a-simple-idea-is-more-impactful-than-a-complicated-system&quot;&gt;&lt;strong&gt;A Simple Idea is More Impactful than a Complicated System&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;Our IEEE InfoVis submission was a complicated system with sophisticated interaction. Our team, and our reviewers, are visualization researchers, and we like to think through all of the things that we &lt;em&gt;can&lt;/em&gt; do, and how we can leverage interactions and have cool animations.&lt;/p&gt;

&lt;p&gt;However, the version of UpSet that really took off is not our web-based version, but the more basic one that is implemented in UpSetR:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2024-10-upset/upsetr.png&quot; alt=&quot;The UpSetR version of UpSet showing the movies dataset. Only the basic elements are included in this plot&quot; /&gt;&lt;/p&gt;

&lt;p&gt;While some features, such as sorting, made it into UpSetR, others didn’t. The basic idea is simple enough, and people started using and citing it in their papers. Soon we saw other versions of UpSet pop up, for example in Python, or even a different (better) R version. At this point we know of &lt;a href=&quot;https://upset.app/implementations/&quot;&gt;13 re-implementations of UpSet&lt;/a&gt; – so clearly, the &lt;strong&gt;idea transcended the implementation&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;While I do think software tools and libraries add tremendous value to our community, and my team invests a lot of energy in building and maintaining software, it is certainly nice to see an idea take off, not least because you don’t have to maintain the software yourself!&lt;/p&gt;

&lt;h3 id=&quot;upset-is-useful-for-communication-not-only-discovery&quot;&gt;&lt;strong&gt;UpSet is Useful for Communication, Not Only Discovery&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;Finally, UpSet hits the sweet spot between being useful for discovery and exploration and being useful for communication. Like scatter plots or histograms, UpSet plots can be used in the discovery process, but are still simple enough to be used as figures in scientific articles.&lt;/p&gt;

&lt;p&gt;Nils often complained to me that he believes a lot of &lt;strong&gt;complicated visualizations don’t get “credit” for the parts they play in the discovery process&lt;/strong&gt;: they spark ideas that are later tested using statistical methods or simpler plots, while the visualization tool that led to the insight doesn’t get credit (for example, in the form of a citation).&lt;/p&gt;

&lt;p&gt;This isn’t the case for UpSet plots, which are easy for scientists to generate, given the many implementations. And unlike our web-based tools, the static Python and R tools &lt;strong&gt;generate “paper-ready” figures&lt;/strong&gt;, without UI elements that need to be edited out or other issues that make it difficult to include a figure. It probably also helped that we clearly instructed users on how to properly cite UpSet.&lt;/p&gt;

&lt;h2 id=&quot;the-future-of-upset&quot;&gt;&lt;strong&gt;The Future of UpSet&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;While I’m certain that the reason for UpSet’s success is its simplicity, I still haven’t abandoned the interactive UpSet visualization system. In fact, we’ve just released &lt;a href=&quot;https://upset.multinet.app/&quot;&gt;UpSet 2.0&lt;/a&gt; – a version that has the same analytical features as the original UpSet, but supports data upload, public sharing of interactive UpSet plots, and can be integrated in other tools as a React component. It also has a full provenance history (including undo/redo). And it’s enabling new research: for example, we are now generating (and evaluating) text descriptions for UpSet plots, so that they can become accessible to blind and low-vision users.&lt;/p&gt;

&lt;p&gt;In closing: thanks again to my co-authors and friends for helping bring this project to life, and to the community for this recognition.&lt;/p&gt;

</description>
        <pubDate>Wed, 16 Oct 2024 10:00:00 +0000</pubDate>
        <link>https://vdl.sci.utah.edu/blog/2024/10/16/upset_reflections/</link>
        <guid isPermaLink="true">https://vdl.sci.utah.edu/blog/2024/10/16/upset_reflections/</guid>
        
        
        <category>blog</category>
        
      </item>
    
      <item>
        <title>Lessons Learned from Visualizing Multimodal Data... with Aardvarks...</title>
        <description>&lt;h1 id=&quot;understanding-cancer-is-the-key-to-fighting-it&quot;&gt;Understanding cancer is the key to fighting it.&lt;/h1&gt;

&lt;p&gt;Cancer is a terrible disease caused by your cells growing out of control. If we can understand exactly how cancer cells grow, move, and divide, we can develop strategies to prevent, diagnose, and treat cancer.&lt;/p&gt;

&lt;p&gt;A fundamental way to further our understanding is to collect data to represent how cancer cells are growing. It turns out that this is pretty hard because…&lt;/p&gt;

&lt;h1 id=&quot;cancer-is-complex--we-need-complex--multimodal--data-to-represent-it&quot;&gt;Cancer is complex — we need complex ✨ multimodal ✨ data to represent it.&lt;/h1&gt;

&lt;p&gt;What is ✨ multimodal ✨ data? For us, it is data in different formats that represent different aspects of the same phenomenon. Specifically, we work with 🌌 &lt;strong&gt;images&lt;/strong&gt;, 🌳 &lt;strong&gt;trees&lt;/strong&gt;, and 📈 &lt;strong&gt;time-series&lt;/strong&gt; data.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2024_aardvark_science.jpg&quot; alt=&quot;An artificial image of an aardvark wearing a lab coat, doing science. It’s a bit in the weird AI uncanny valley style, the hands are almost correct.&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;how-is-this-data-collected&quot;&gt;How is this data collected?&lt;/h2&gt;

&lt;p&gt;The datasets we worked with are from live-cell microscopy imaging; in other words, 🌌 &lt;strong&gt;images&lt;/strong&gt; of cancer cells are recorded over time as those cells grow and divide.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2024_aardvark_imaging.gif&quot; alt=&quot;Animation showing a sequence of cells growing, moving, and dividing over time.&quot; id=&quot;aardvark_imaging&quot; /&gt;
&lt;button onclick=&quot;document.getElementById(&apos;aardvark_imaging&apos;).src=&apos;/assets/images/posts/2024_aardvark_imaging.gif&apos;&quot;&gt;Replay Animation&lt;/button&gt;&lt;/p&gt;

&lt;p&gt;Algorithms can track individual cells over time based on their position and other characteristics, such as size.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2024_aardvark_tracking.gif&quot; alt=&quot;Animation showing how a single cell is tracked across the sequence of images and grows larger.&quot; id=&quot;aardvark_tracking&quot; /&gt;
&lt;button onclick=&quot;document.getElementById(&apos;aardvark_tracking&apos;).src=&apos;/assets/images/posts/2024_aardvark_tracking.gif&apos;&quot;&gt;Replay Animation&lt;/button&gt;&lt;/p&gt;

&lt;p&gt;Then, derived attributes based on these images can be computed, such as the cell size or the amount of a specific protein in a cell. These attributes are calculated over time, resulting in 📈 &lt;strong&gt;time-series&lt;/strong&gt; data.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2024_aardvark_time-series.gif&quot; alt=&quot;Animation showing a single attribute increasing over time for a single tracked cell.&quot; id=&quot;aardvark_time-series&quot; /&gt;
&lt;button onclick=&quot;document.getElementById(&apos;aardvark_time-series&apos;).src=&apos;/assets/images/posts/2024_aardvark_time-series.gif&apos;&quot;&gt;Replay Animation&lt;/button&gt;&lt;/p&gt;

&lt;p&gt;During these experiments, cells might divide into two daughter cells. If we record these divisions, we can construct a 🌳 &lt;strong&gt;tree&lt;/strong&gt; of cell relationships, or cell lineage.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2024_aardvark_tree.gif&quot; alt=&quot;animation that shows one cell dividing into two, and indicating that those two divide into four in a tree of divisions.&quot; id=&quot;aardvark_tree&quot; /&gt;
&lt;button onclick=&quot;document.getElementById(&apos;aardvark_tree&apos;).src=&apos;/assets/images/posts/2024_aardvark_tree.gif&apos;&quot;&gt;Replay Animation&lt;/button&gt;&lt;/p&gt;

&lt;p&gt;Now that we have the data, we need to try to understand it. It turns out that…&lt;/p&gt;

&lt;h1 id=&quot;understanding-multimodal-data-requires-us-to-think-about-all-the-modalities-together-and-this-is-hard&quot;&gt;Understanding multimodal data requires us to think about all the modalities together… and this is hard.&lt;/h1&gt;

&lt;p&gt;Each of the data modalities (🌳🌌📈) is necessary because they capture a different aspect of how cancer cells develop:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The 🌌 &lt;strong&gt;images&lt;/strong&gt; show the spatial relationships between cells.&lt;/li&gt;
  &lt;li&gt;The 📈 &lt;strong&gt;time-series&lt;/strong&gt; data shows how cells grow and change over time.&lt;/li&gt;
  &lt;li&gt;The 🌳 &lt;strong&gt;tree&lt;/strong&gt; captures how cell attributes propagate across generations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But one thing that quickly became clear was that to fully understand the phenomenon of interest (the spread of cancer cells), we needed to synthesize all three of these modalities together.&lt;/p&gt;

&lt;p&gt;Right now, researchers are synthesizing or combining data modalities manually. In other words, they look at an image, then at time-series data, then back at an image, then at a tree, and mentally link data elements together. In the best case, this is tedious and taxing. In the worst case, it is impossible to relate elements from one modality to another.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2024_aardvark_confused.jpg&quot; alt=&quot;A cartoon headshot of a confused aardvark, looking straight ahead with a cloud of question mark uncertainty around it.&quot; /&gt;&lt;/p&gt;

&lt;p&gt;So now what? Well, now is the part where the &lt;em&gt;visualization nerds&lt;/em&gt; get to cheer and applaud as the heroes of this story, 🦸‍♀️ &lt;strong&gt;visualizations&lt;/strong&gt; 🦸‍♂️, swoop in and save the day! This is because…&lt;/p&gt;

&lt;h1 id=&quot;-composite-visualizations--can-show-different-data-modalities-together&quot;&gt;🍢 Composite visualizations 🍢 can show different data modalities together!&lt;/h1&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2024_aardvark_dashboard.jpg&quot; alt=&quot;An artificial image of an aardvark looking at a fancy-looking visualization dashboard. The dashboard is very much a sci-fi visualization dashboard. There are some reasonable-looking bar charts, but there are also more abstract lines and shapes that probably aren’t useful in reality but give the vibe of fancy tech. Anyway, the aardvark is facing the dashboard but is actually looking back at the camera. It has an expression like, are you seeing this cool science visualization stuff? Yeah, it’s cool, and I’m cool because I’m standing next to it!&quot; /&gt;&lt;/p&gt;

&lt;p&gt;What are 🍢 composite visualizations🍢? In short, composite visualizations combine multiple visualizations together into the same view. &lt;em&gt;(If you want a longer/better answer, Javed and Elmqvist do a great job “&lt;a href=&quot;https://www.doi.org/10.1109/PacificVis.2012.6183556&quot;&gt;Exploring the design space of composite visualization&lt;/a&gt;.”)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is great for our purposes because we have three different data modalities that are each represented best by different visualizations, and we are trying to link the elements across these modalities.&lt;/p&gt;

&lt;p&gt;What’s the catch? Even though they are powerful…&lt;/p&gt;

&lt;h1 id=&quot;designing-composite-visualizations-is-you-guessed-it-hard-heres-our-approach-in-three-simple-steps&quot;&gt;Designing composite visualizations is… (you guessed it) …hard. Here’s our approach in &lt;strong&gt;three simple steps&lt;/strong&gt;.&lt;/h1&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;strong&gt;Select a primary data modality&lt;/strong&gt;. Even though all data modalities are needed, specific tasks prioritize certain modalities.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Choose the best visual encoding&lt;/strong&gt; for it. This visualization serves as the host representation.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Embed secondary data modalities&lt;/strong&gt; as client visualizations.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Of course, it’s a bit more complicated than this, but we’ll get to that. First, let’s give an example.&lt;/p&gt;

&lt;h1 id=&quot;heres-one-of-the-composite-visualizations-we-designed-this-one-is-my--favorite-&quot;&gt;Here’s one of the composite visualizations we designed (this one is my 🤗 favorite 🤗).&lt;/h1&gt;

&lt;ol&gt;
  &lt;li&gt;First, we select the &lt;strong&gt;tree&lt;/strong&gt; as the &lt;strong&gt;primary data type&lt;/strong&gt; and, by extension, name this composite visualization the tree-first visualization.&lt;/li&gt;
  &lt;li&gt;We &lt;strong&gt;encode&lt;/strong&gt; this &lt;strong&gt;tree&lt;/strong&gt; as a &lt;strong&gt;node-link diagram&lt;/strong&gt;.&lt;/li&gt;
  &lt;li&gt;Then, we &lt;strong&gt;embed&lt;/strong&gt; &lt;strong&gt;the time-series&lt;/strong&gt; data by nesting it within the nodes of the tree and &lt;strong&gt;superimpose the images&lt;/strong&gt; of cells above the nodes either automatically or on demand.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2024_aardvark_tree-first.gif&quot; alt=&quot;An animation that shows a schematic of the tree first diagram, first with the primary data type shown and then with the secondary data types added.&quot; id=&quot;aardvark_tree-first&quot; /&gt;
&lt;button onclick=&quot;document.getElementById(&apos;aardvark_tree-first&apos;).src=&apos;/assets/images/posts/2024_aardvark_tree-first.gif&apos;&quot;&gt;Replay Animation&lt;/button&gt;&lt;/p&gt;

&lt;h1 id=&quot;here-are-the-other-two-composite-visualizations&quot;&gt;Here are the other two composite visualizations!&lt;/h1&gt;

&lt;p&gt;We call these the time-series-first visualization and the image-first visualization.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2024_aardvark_other-designs.png&quot; alt=&quot;Two composite visualization schematics.&quot; /&gt;&lt;/p&gt;

&lt;h1 id=&quot;what-we-learned&quot;&gt;What we learned.&lt;/h1&gt;

&lt;p&gt;Ok, I have a confession. These &lt;strong&gt;three simple steps&lt;/strong&gt; for constructing composite visualizations are hiding a lot of complexity. Specifically, step three is carrying a lot of weight here. In reality, this single step is a much more &lt;strong&gt;iterative process.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The design space of composite visualizations can be intimidatingly large. You have to choose a visual encoding for each data type AND how those encodings get combined together. The choice of one affects the other, and there will be some back and forth while exploring this space.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2024_aardvark_student.jpg&quot; alt=&quot;An artificial image of an aardvark dressed as a student in a classroom. The aardvark has an eager expression and has its notebook out; it is ready to learn! I don’t know how many fingers an aardvark has, but this AI image has given this aardvark three fingers on the left hand and four fingers on the right hand; well, not including thumbs, those are out of sight.&quot; /&gt;&lt;/p&gt;

&lt;p&gt;That said, we found that the first two steps help anchor this exploration and reduce the design space. Selecting a primary data type and prioritizing a good encoding for that data type will help ensure that tasks for that primary data type can be done effectively. With that settled, figuring out how the other pieces fit into the puzzle becomes much more manageable.&lt;/p&gt;

&lt;h1 id=&quot;we-also-made-a-tool&quot;&gt;We also made a tool!&lt;/h1&gt;

&lt;p&gt;We didn’t just design these visualizations; we implemented them in a tool! It’s an excellent tool! Our collaborators like it! I like it! You can &lt;a href=&quot;https://aardvark.sci.utah.edu/&quot;&gt;try it out yourself&lt;/a&gt;! You can watch a video that demonstrates it!&lt;/p&gt;

&lt;div style=&quot;padding:56.25% 0 0 0;position:relative;&quot;&gt;
  &lt;iframe width=&quot;560&quot; height=&quot;315&quot; src=&quot;https://www.youtube-nocookie.com/embed/mA6H4-i04g4?si=irqDeOlGiHO5AQVr&quot; title=&quot;YouTube video player&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;
&lt;/div&gt;
&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;If you’re an &lt;em&gt;extreme visualization nerd,&lt;/em&gt; you can even &lt;a href=&quot;/publications/2024_vis_aardvark/&quot;&gt;read our research paper&lt;/a&gt;!! The paper also goes into more detail about the design and theory pieces I talked about in this blog.&lt;/p&gt;

&lt;p&gt;I could talk about it more, but this blog is already long enough, and I’m tired of writing it, and you are probably tired of reading it. But I promise I am actually quite proud of the tool.&lt;/p&gt;

&lt;h1 id=&quot;are-you-still-here-that-must-mean-one-of-two-things&quot;&gt;Are you still here? That must mean one of two things…&lt;/h1&gt;

&lt;h2 id=&quot;you-are-wondering-what-the-deal-is-with-the-aardvarks&quot;&gt;You are wondering what the deal is with the aardvarks.&lt;/h2&gt;

&lt;p&gt;Well, the &lt;a href=&quot;/publications/2024_vis_aardvark/&quot;&gt;research paper&lt;/a&gt; I mentioned is titled “&lt;strong&gt;Aardvark&lt;/strong&gt;: Composite Visualizations of Trees, Time-Series, and Images”. Aardvark is the name for the tool we made.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;But why aardvarks?? Is it some really awesome acronym?? I bet the “v” stands for visualization!!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Sorry to burst your bubble, but it is not an acronym; I just like naming my research tools after animals (see &lt;a href=&quot;/publications/2023_eurovis_ferret/&quot;&gt;Ferret&lt;/a&gt; and &lt;a href=&quot;/publications/2021_vis_loon/&quot;&gt;Loon&lt;/a&gt;). As to why I picked aardvarks, I think they are a cool, weird animal. 🤷&lt;/p&gt;

&lt;h2 id=&quot;or-you-want-to-listen-to-us-brag-about-our----award--&quot;&gt;…or you want to listen to us brag about our &lt;br /&gt; ✨🏆✨ ~ &lt;strong&gt;award&lt;/strong&gt; ~ ✨🏆✨!&lt;/h2&gt;

&lt;p&gt;We are &lt;strong&gt;incredibly honored&lt;/strong&gt; and excited to share that the &lt;a href=&quot;/publications/2024_vis_aardvark/&quot;&gt;research paper&lt;/a&gt; has received a &lt;a href=&quot;https://ieeevis.org/year/2024/program/awards/awards.html&quot;&gt;&lt;strong&gt;Best Paper Award&lt;/strong&gt; at IEEE VIS 2024&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I am amped up and &lt;em&gt;just a touch&lt;/em&gt; terrified to present to the whole conference this year in Florida! I hope to see you there! 🏖️&lt;/p&gt;

&lt;h2 id=&quot;ok-actually-three-things-you-want-to-see-the-blooper-images&quot;&gt;Ok, actually three things; you want to see the “blooper” images.&lt;/h2&gt;

&lt;p&gt;And yes, I made all of the aardvark images with AI, specifically Adobe Firefly. These &lt;em&gt;monstrosities&lt;/em&gt; are the result of asking for an image of an aardvark, loon, and ferret…&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2024_aardvark_monsters.jpeg&quot; alt=&quot;Four artificial images that attempt to fuse a loon, aardvark, and ferret into one animal. It is not very successful. It mostly looks like a beaver or otter swimming in water with a long neck and weird face that sort of looks like an aardvark face.&quot; /&gt;&lt;/p&gt;
</description>
        <pubDate>Mon, 30 Sep 2024 14:00:00 +0000</pubDate>
        <link>https://vdl.sci.utah.edu/blog/2024/09/30/aardvark/</link>
        <guid isPermaLink="true">https://vdl.sci.utah.edu/blog/2024/09/30/aardvark/</guid>
        
        
        <category>blog</category>
        
      </item>
    
      <item>
        <title>Devin Lange Successfully Defends Dissertation</title>
        <description>&lt;p&gt;&lt;a href=&quot;https://www.devinlange.com/&quot;&gt;Devin Lange&lt;/a&gt; successfully defended his dissertation on “&lt;a href=&quot;/publications/2024_thesis_lange&quot;&gt;Is that Right? Data Visualizations for Scientific Quality Control&lt;/a&gt;”. Devin was advised by Alex Lex, with Kate Isaacs, Paul Rosen, Hanspeter Pfister, and Nils Gehlenborg serving on the committee.&lt;/p&gt;

&lt;p&gt;Devin will join &lt;a href=&quot;https://hidivelab.org/&quot;&gt;Nils Gehlenborg’s Lab at Harvard Medical School&lt;/a&gt; as a PostDoc, while also spending some more time with us. Congrats, and good luck with your next steps!&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/posts/2024_devin_alex.jpg&quot; alt=&quot;Devin and Alex&quot; /&gt;
&lt;img src=&quot;/assets/images/posts/2024_devin_group.jpg&quot; alt=&quot;Devin and the Group&quot; /&gt;
&lt;img src=&quot;/assets/images/posts/2024_devin_cake.jpg&quot; alt=&quot;Devin Cutting two Cakes&quot; /&gt;&lt;/p&gt;
</description>
        <pubDate>Mon, 29 Jul 2024 11:00:00 +0000</pubDate>
        <link>https://vdl.sci.utah.edu/event/2024/07/29/lange_defense/</link>
        <guid isPermaLink="true">https://vdl.sci.utah.edu/event/2024/07/29/lange_defense/</guid>
        
        
        <category>event</category>
        
      </item>
    
      <item>
        <title>reVISit: Taking Control of Your Online Studies!</title>
        <description>&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;You might have heard of &lt;strong&gt;&lt;a href=&quot;https://revisit.dev/&quot;&gt;reVISit&lt;/a&gt;&lt;/strong&gt; before from &lt;a href=&quot;https://vdl.sci.utah.edu/publications/2023_shortpaper_revisit/&quot;&gt;our paper&lt;/a&gt;, or you might have &lt;a href=&quot;https://revisit.dev/community/#community-activities&quot;&gt;seen a talk or participated in a meetup&lt;/a&gt;. But as of today, with our 1.0 release, we’re excited to give you the chance to run your own studies with reVISit – and CHI is just around the corner!&lt;/p&gt;

&lt;h2 id=&quot;what-is-revisit&quot;&gt;What is reVISit?&lt;/h2&gt;

&lt;p&gt;ReVISit is a software framework that enables you to &lt;a href=&quot;https://revisit.dev/docs/getting-started/how-does-it-work/&quot;&gt;assemble experimental stimuli and survey questions into an online user study&lt;/a&gt;.
One of its biggest time-saving features is a JSON grammar, the &lt;strong&gt;reVISit Spec&lt;/strong&gt;, used to describe the setup of your study.
Stimuli are contained in components, which can be markdown, images, web pages, React components, or survey questions.
The figure at the top shows the relationship between the reVISit Spec and the components, and how they are compiled into a study.&lt;/p&gt;
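
To make this concrete, here is a sketch of what a minimal spec might look like. The field names below are illustrative only – consult the reVISit documentation linked above for the authoritative schema:

```json
{
  "studyMetadata": { "title": "My First Study", "version": "1.0.0" },
  "components": {
    "introduction": { "type": "markdown", "path": "assets/intro.md" },
    "bar-chart-task": {
      "type": "react-component",
      "path": "src/BarChart.tsx",
      "response": [
        { "id": "answer", "type": "numerical", "prompt": "What is the ratio of A to B?" }
      ]
    }
  },
  "sequence": { "order": "fixed", "components": ["introduction", "bar-chart-task"] }
}
```

The key idea is that the study structure lives entirely in this declarative file, while the stimuli themselves live in ordinary markdown, image, or React files that the spec points to.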

&lt;p&gt;Due to the different types of components, you can use reVISit for a diverse set of studies, spanning simple surveys, image-based perceptual experiments, and experiments evaluating complex interactive visualizations.&lt;/p&gt;

&lt;p&gt;ReVISit is designed to accommodate sophisticated stimuli and study designs. Suppose you want to &lt;a href=&quot;https://revisit.dev/study/demo-cleveland/&quot;&gt;replicate the seminal Cleveland and McGill study&lt;/a&gt;. With reVISit you could implement a React-based set of visualizations (a bar chart, a stacked bar chart, a pie chart), and then pass parameters, such as the data and the markers that highlight specific marks, via the study configuration.&lt;/p&gt;
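
As a rough sketch of this idea (component, file, and parameter names here are invented for illustration; see the linked demo and docs for the real setup), a parameterized React stimulus might be declared like this:

```json
{
  "components": {
    "cleveland-bar-chart": {
      "type": "react-component",
      "path": "src/stimuli/BarChart.tsx",
      "parameters": {
        "data": [10, 25, 40, 5, 20],
        "selectedIndices": [1, 3]
      }
    }
  }
}
```

The same React component can then be reused across many trials, with only the parameters changing between them.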

&lt;p&gt;Similarly, the reVISit Spec enables designers to create &lt;a href=&quot;https://revisit.dev/docs/designing-studies/study-sequences/&quot;&gt;controlled sequences&lt;/a&gt; that define the order in which participants see stimuli. reVISit supports fixed, random, and Latin square designs that can be nested at various levels. For example, the overall study sequence (intro, training, experiment, survey) could be fixed. Within the experiment arm, two conditions could use a Latin square design. Within each condition, the experiment could randomly draw a small number of stimuli from a large stimulus pool, while interspersing attention checks at random points and adding breaks.&lt;/p&gt;
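
A nested design like the one described above could be sketched roughly as follows (component names and exact keywords are illustrative – check the sequences documentation linked above for the precise grammar):

```json
{
  "sequence": {
    "order": "fixed",
    "components": [
      "intro",
      "training",
      {
        "order": "latinSquare",
        "components": [
          { "order": "random", "components": ["a-1", "a-2", "a-3", "attention-check"], "numSamples": 2 },
          { "order": "random", "components": ["b-1", "b-2", "b-3", "attention-check"], "numSamples": 2 }
        ]
      },
      "survey"
    ]
  }
}
```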

&lt;h3 id=&quot;assembling-and-deploying-your-study&quot;&gt;Assembling and Deploying your Study&lt;/h3&gt;

&lt;p&gt;The components and your study configuration are then used to &lt;a href=&quot;https://revisit.dev/docs/getting-started/your-first-study/&quot;&gt;assemble a web-based study&lt;/a&gt;. You can first preview your study in your local browser and, when you want to share it, deploy it to the web server of your choice. We &lt;a href=&quot;https://revisit.dev/docs/data-and-deployment/deploying-to-static-website/&quot;&gt;recommend and document deploying to GitHub pages&lt;/a&gt;, but any web server you have access to will do.&lt;/p&gt;

&lt;p&gt;You can then use the online version to direct participants to your study. You can use crowdsourcing platforms such as Prolific, Mechanical Turk, or LabintheWild, or you can simply send a link to participants that you have recruited in other ways.&lt;/p&gt;

&lt;h3 id=&quot;data-collection&quot;&gt;Data Collection&lt;/h3&gt;
&lt;p&gt;A typical study will have response fields, such as a text field or a slider, for participants to provide their responses. Such form-based responses are tracked by reVISit by default and can be downloaded in either JSON or a tidy tabular format. Similarly, you can provide &lt;a href=&quot;https://revisit.dev/docs/designing-studies/html-stimulus/&quot;&gt;response data out of interactive stimuli&lt;/a&gt;. For example, if a task is to click on a specific bar in a bar chart, you can log which bars were clicked. ReVISit also tracks a diverse set of browser window events, such as mouse moves, clicks, scrolls, and resizes, all of which are time-stamped and can hence be used for basic log file analysis.&lt;/p&gt;
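
Because the tidy export is just a table with one row per response, it drops straight into standard analysis tooling. As a quick sketch – the column names below are invented for illustration and will differ from reVISit’s actual export – aggregating trial durations per condition in Python could look like this:

```python
import csv
import io
from collections import defaultdict
from statistics import mean

# Hypothetical tidy export: one row per (participant, trial) response.
# Column names are illustrative, not reVISit's actual export schema.
tidy_csv = """participantId,trialId,condition,answer,durationMs
p1,t1,barChart,64,2100
p1,t2,pieChart,58,3400
p2,t1,barChart,61,1900
p2,t2,pieChart,70,2800
"""

# Group trial durations by experimental condition.
durations = defaultdict(list)
for row in csv.DictReader(io.StringIO(tidy_csv)):
    durations[row["condition"]].append(int(row["durationMs"]))

# Mean trial duration (in milliseconds) per condition.
mean_duration = {cond: mean(vals) for cond, vals in durations.items()}
print(mean_duration)
```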

&lt;p&gt;ReVISit also supports advanced provenance tracking based on &lt;a href=&quot;https://apps.vdl.sci.utah.edu/trrack&quot;&gt;trrack&lt;/a&gt;, a provenance tracking library developed in our lab. If you instrument your study stimuli with trrack, you can recreate every state of the interface for every single participant! This can be incredibly useful for &lt;a href=&quot;https://vdl.sci.utah.edu/publications/2021_chi_revisit/&quot;&gt;understanding nuances of user behavior&lt;/a&gt;, as well as for debugging your stimuli by exploring what went wrong in a particular session. In a future release, reVISit will also allow you to dynamically browse these events and fully “re-hydrate” all participants’ experiments.&lt;/p&gt;

&lt;h3 id=&quot;data-storage&quot;&gt;Data Storage&lt;/h3&gt;

&lt;p&gt;ReVISit is implemented as a (mostly) serverless application, meaning that you don’t have to run, secure, and maintain a server to use it. The only exception is data storage, since the data of online participants obviously has to be stored somewhere.&lt;/p&gt;

&lt;p&gt;If you’re running a local study, you can get away without this – you can just download the data from your browser after a study is complete. For online studies, we use Google Firebase to store data.&lt;/p&gt;

&lt;p&gt;Currently, &lt;a href=&quot;https://revisit.dev/docs/data-and-deployment/firebase-setup/&quot;&gt;setting up Firebase for a reVISit study&lt;/a&gt; might be the most challenging part of working with reVISit. On the plus side, Firebase is a tried-and-true system where you have full control over your data. You even have options to choose the location of your server so that you comply with your country’s regulations on data storage.&lt;/p&gt;

&lt;h3 id=&quot;data-analysis&quot;&gt;Data Analysis&lt;/h3&gt;

&lt;p&gt;ReVISit is not meant to replace your usual data analysis approaches. Instead, it aims to make it easy to export data in the formats you might use in R, Python, or your analysis platform of choice.&lt;/p&gt;

&lt;p&gt;ReVISit does, however, provide a basic &lt;a href=&quot;https://revisit.dev/docs/analysis/&quot;&gt;analytics interface&lt;/a&gt; that is most useful for monitoring the progress of your study. You can also use reVISit to identify and reject participants who didn’t complete the study appropriately, which is most useful if you want to ensure that you have the appropriate number of participants in each cell of your Latin square design.&lt;/p&gt;

&lt;h2 id=&quot;what-are-the-benefits-of-using-revisit&quot;&gt;What are the Benefits of Using reVISit?&lt;/h2&gt;

&lt;p&gt;So, why would you use reVISit over other approaches to running your study, such as Qualtrics, Survey Monkey, or even a custom experiment interface?&lt;/p&gt;

&lt;p&gt;First, &lt;strong&gt;reVISit is open source&lt;/strong&gt;, with all the benefits of using open source software: it’s free; you can extend it; and you can contribute to improving it.&lt;/p&gt;

&lt;p&gt;Second, the open source nature and our approach of forking reVISit for your own study and storing your data in your own Firebase means that &lt;strong&gt;you have full control over your study and the data&lt;/strong&gt;. Once you have forked the study, it will remain accessible and unchanged for as long as you like.&lt;/p&gt;

&lt;p&gt;Third, reVISit has dedicated modes for &lt;strong&gt;quickly navigating your study&lt;/strong&gt;, and you can also turn off data collection. This is great both for developing your study and for sharing it with reviewers and readers of your research. That means that readers can see &lt;strong&gt;exactly&lt;/strong&gt; what your participants saw, and hence may trust your study more. They could also fork your study and run a &lt;strong&gt;replication of your study&lt;/strong&gt; with minimal effort! You can check out an &lt;a href=&quot;https://vdl.sci.utah.edu/viz-guardrails-study/&quot;&gt;example study and the associated results&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;im-intrigued-but-can-i-trust-it-for-my-experiment&quot;&gt;I’m Intrigued, but Can I Trust it for my Experiment?&lt;/h2&gt;

&lt;p&gt;reVISit is new, and we know that it’s risky to bet on a new project when you don’t know whether it actually works or whether it will be maintained down the line. But we hope we can convince you to trust us!&lt;/p&gt;

&lt;p&gt;First, we currently have multiple years of funding to continue development of reVISit. 
We have also run several successful studies ourselves, such as &lt;a href=&quot;https://vdl.sci.utah.edu/viz-guardrails-study/&quot;&gt;a study on guardrails against misinformation&lt;/a&gt;. Finally, we are committed to helping you out if you run into issues! Join our &lt;a href=&quot;https://join.slack.com/t/revisit-nsf/shared_invite/zt-25mrh5ppi-6sDAL6HqcWJh_uvt2~~DMQ&quot;&gt;Slack team&lt;/a&gt; to get low-friction help, or write to us at &lt;a href=&quot;mailto:contact@revisit.dev&quot;&gt;contact@revisit.dev&lt;/a&gt;. We’re also happy to set up a meeting to answer any questions you may have – for example, to talk through whether reVISit will work for your study design.&lt;/p&gt;

&lt;h2 id=&quot;how-can-i-learn-more-or-get-involved&quot;&gt;How Can I Learn More or Get Involved?&lt;/h2&gt;

&lt;p&gt;We’re grateful to all the community members who have shared their study needs and helped to make reVISit 1.0 a reality, and we’re looking forward to bringing the community exciting new features in the coming year. Future releases will include better debugging tools through study rehydration, a way to capture and code think-aloud data, and improved analysis capabilities. Depending on community feedback, we’re also interested in branching out to unconventional display devices (phones, AR/VR, etc.).&lt;/p&gt;

&lt;p&gt;To take your first steps with reVISit, check out our &lt;a href=&quot;https://revisit.dev/docs/getting-started/&quot;&gt;getting started guide&lt;/a&gt; for instructions on how to install our software and build a study.&lt;/p&gt;

&lt;p&gt;Finally, if you are missing a feature or find a bug, let us know! Since reVISit is completely open source you could even submit a pull request!&lt;/p&gt;

&lt;h2 id=&quot;acknowledgements&quot;&gt;Acknowledgements&lt;/h2&gt;

&lt;p&gt;We are very grateful to everyone who helped make reVISit a reality, including our wonderful &lt;a href=&quot;https://revisit.dev/community/#community-advisory-board&quot;&gt;community advisory board&lt;/a&gt; and the &lt;a href=&quot;https://vdl.sci.utah.edu/projects/2022-nsf-revisit/&quot;&gt;National Science Foundation for generous funding&lt;/a&gt;.&lt;/p&gt;
</description>
        <pubDate>Thu, 20 Jun 2024 01:00:00 +0000</pubDate>
        <link>https://vdl.sci.utah.edu/blog/2024/06/20/revisit/</link>
        <guid isPermaLink="true">https://vdl.sci.utah.edu/blog/2024/06/20/revisit/</guid>
        
        
        <category>blog</category>
        
      </item>
    
  </channel>
</rss>
