<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Interactive Media Lab Dresden</title>
	<atom:link href="https://www.mt.inf.tu-dresden.de/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.mt.inf.tu-dresden.de</link>
	<description></description>
	<lastBuildDate>Fri, 10 Apr 2026 09:30:14 +0000</lastBuildDate>
	<language>de</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>AlpCHI in Ascona</title>
		<link>https://www.mt.inf.tu-dresden.de/news/2026/03/alpchi-in-ascona/</link>
		<pubDate>Wed, 18 Mar 2026 10:55:17 +0000</pubDate>
		<guid isPermaLink="false">https://imld.de/?p=32670</guid>
	<description><![CDATA[<img src="https://www.mt.inf.tu-dresden.de/cnt/uploads/K89A3627-scaled.jpg" alt=""/>Raimund Dachselt and Katja Krug co-organized the workshop "Beyond Fatigue: Building an Ergonomic Future for XR" as part of the first AlpCHI conference, which took place in March 2026 in Ascona, Switzerland. Katja Krug also attended the conference on site. Under the theme "Interaction in Nature, in the Wild, at the Summit", the newly [&#8230;]]]></description>
							<content:encoded><![CDATA[<img src="https://www.mt.inf.tu-dresden.de/cnt/uploads/K89A3627-scaled.jpg" alt=""/><p data-start="79" data-end="277"><a href="https://imld.de/our-group/team/raimund-dachselt/">Raimund Dachselt</a> and <a href="https://imld.de/our-group/team/katja-krug/">Katja Krug</a> co-organized the workshop <a href="https://alpchi.org/accepted-workshops/#w4">"Beyond Fatigue: Building an Ergonomic Future for XR"</a> as part of the first <em data-start="788" data-end="796">AlpCHI</em> conference, which took place in March 2026 in Ascona, Switzerland. <a href="https://imld.de/our-group/team/katja-krug/">Katja Krug</a> also attended the conference on site.</p>
<p data-start="79" data-end="277"><span id="more-32670"></span></p>
<p data-start="279" data-end="574">Under the theme <em data-start="299" data-end="352">"Interaction in Nature, in the Wild, at the Summit"</em>, the newly launched conference brought together human-computer interaction researchers from the Alpine region and beyond. The venue was Monte Verità in Ascona, Switzerland, overlooking Lago Maggiore.</p>
<p data-start="576" data-end="1178">Our workshop addressed current ergonomic challenges in Extended Reality (XR), focusing in particular on the physical and cognitive strain caused by multimodal interaction and on other sources of fatigue in XR. In interactive breakout sessions, the participants discussed key research questions and developed approaches for more sustainable and comfortable XR interactions. The workshop met with great interest and offered a lively platform for exchange and collaboration. We are grateful for the many valuable ideas and look forward to seeing how this field of research develops in the future.</p>
]]></content:encoded>
					</item>
		<item>
		<title>Eva Hornecker besucht das IMLD</title>
		<link>https://www.mt.inf.tu-dresden.de/news/2026/02/eva-hornecker-besucht-das-imld/</link>
		<pubDate>Tue, 03 Feb 2026 10:05:00 +0000</pubDate>
		<guid isPermaLink="false">https://imld.de/?p=32499</guid>
	<description><![CDATA[<img src="https://www.mt.inf.tu-dresden.de/cnt/uploads/News_Image.png" alt=""/>This week we had the pleasure of welcoming Prof. Eva Hornecker from the Human-Computer Interaction group at Bauhaus-Universität Weimar to the Interactive Media Lab. As co-founder of the international Tangible, Embedded and Embodied Interaction conference (TEI) and a Distinguished Member of the Association for Computing Machinery (ACM), she gave us fascinating insights into her research, in particular in the area of Data Physicalisation and "beyond [&#8230;]]]></description>
							<content:encoded><![CDATA[<img src="https://www.mt.inf.tu-dresden.de/cnt/uploads/News_Image.png" alt=""/><p data-start="122" data-end="323">This week we had the pleasure of welcoming <strong>Prof. Eva Hornecker</strong> from the <em>Human-Computer Interaction</em> group at Bauhaus-Universität Weimar to the Interactive Media Lab.</p>
<p data-start="122" data-end="323"><span id="more-32499"></span></p>
<p>As co-founder of the international <em>Tangible, Embedded and Embodied Interaction conference (TEI)</em> and a <em>Distinguished Member</em> of the <em>Association for Computing Machinery (ACM)</em>, she gave us fascinating insights into her research, in particular in the areas of Data Physicalisation and "beyond the desktop" interaction with data.</p>
<p>A highlight of the day was her inspiring talk <strong><em>"Data Physicalisation and Sensification – How Explorative Research Led to a Design Vocabulary for Physicalisation"</em></strong>, which she had previously given as a keynote at EuroVis 2025 in Luxembourg.</p>
<p>We warmly thank Prof. Hornecker for her visit and the stimulating exchange, and we look forward to the opportunity to collaborate in the future.</p>
]]></content:encoded>
					</item>
		<item>
		<title>ACM SIGGRAPH MIG 2025</title>
		<link>https://www.mt.inf.tu-dresden.de/news/2025/12/acm-siggraph-mig-2025/</link>
		<pubDate>Fri, 12 Dec 2025 15:49:21 +0000</pubDate>
		<guid isPermaLink="false">https://imld.de/?p=32413</guid>
	<description><![CDATA[<img src="https://www.mt.inf.tu-dresden.de/cnt/uploads/ACM-MIG-2025.jpg" alt=""/>The 18th ACM SIGGRAPH conference on Motion, Interaction and Games (MIG 2025) took place from December 3 to 5 at ETH Zürich in Switzerland. Julián Méndez attended the conference to present our invited article "Immersive Data-Driven Storytelling: Scoping an Emerging Field Through the Lenses of Research, Journalism, and Games". The conference program of [&#8230;]]]></description>
							<content:encoded><![CDATA[<img src="https://www.mt.inf.tu-dresden.de/cnt/uploads/ACM-MIG-2025.jpg" alt=""/><p>The 18th ACM SIGGRAPH conference on Motion, Interaction and Games (<a href="https://mig.siggraph.org/2025/">MIG 2025</a>) took place from December 3 to 5 at ETH Zürich in Switzerland. <a href="https://imld.de/our-group/team/julian-mendez/">Julián Méndez</a> attended the conference to present our invited article "Immersive Data-Driven Storytelling: Scoping an Emerging Field Through the Lenses of Research, Journalism, and Games".</p>
<p><span id="more-32413"></span></p>
<p>The MIG 2025 conference program comprised 15 long and 4 short papers (out of 46 submissions in total), 11 posters, and 3 invited articles: 2 from "Computers &amp; Graphics" and 1 from the IEEE "Transactions on Visualization and Computer Graphics" (TVCG). The program also featured several keynotes by influential researchers and industry partners. The single-track conference highlighted advances in mixed reality, avatar representations, rendering, animation, and games. Our invited TVCG article was very well received by the community and was honored with the "Best Student Presentation" award.</p>
<div class="block-list-of-publications"><h2>Article presented at MIG 2025</h2><div class="publications"><ul><li class="pub-entry pub-entry-30558" rel="tooltip" data-toggle="tooltip" data-placement="left" title="Award: Best Student Presentation"><div class="pub-content"><a class="no-icon pub-preview" href="/cnt/uploads/Immersive-Data-Driven-Storytelling.pdf"><img decoding="async" src="/cnt/uploads/immdatastories-150x150.jpg" alt="Immersive Data-Driven Storytelling: Scoping an Emerging Field Through the Lenses of Research, Journalism, and Games" loading="lazy"><div class="pub-trophy"><i class="icon-trophy"></i></div></a><div class="pub-data"><h3 class="pub-title"><a class="no-icon" href="/cnt/uploads/Immersive-Data-Driven-Storytelling.pdf" title="Download: Immersive Data-Driven Storytelling: Scoping an Emerging Field Through the Lenses of Research, Journalism, and Games">Immersive Data-Driven Storytelling: Scoping an Emerging Field Through the Lenses of Research, Journalism, and Games</a></h3><p class="pub-authors"><span><a href="/our-group/team/julian-mendez/">Méndez, J.</a>;</span> <span><a href="/our-group/team/weizhou-luo/">Luo, W.</a>;</span> <span><a href="/our-group/team/rufat-rzayev/">Rzayev, R.</a>;</span> <span><a href="/our-group/team/wolfgang-bueschel/">Büschel, W.</a>;</span> <span><a href="/our-group/team/raimund-dachselt/">Dachselt, R.</a></span></p><p class="pub-meta"><span class="pub-meta-journal">In <em>IEEE Transactions on Visualization and Computer Graphics</em> (Volume 31, Issue 10).</span><span class="pub-meta-event">VIS '25, Vienna, Austria.</span><span class="pub-meta-publisher">IEEE,</span><span class="pub-meta-pages">Pages&nbsp;6839-6851,</span><span class="pub-meta-date">2025.</span><span class="pub-meta-doi"><a class="label label-info" href="https://doi.org/10.1109/TVCG.2025.3531138" title="Go to: https://doi.org/10.1109/TVCG.2025.3531138">10.1109/TVCG.2025.3531138</a></span><span class="pub-meta-comments">Received "Best Student Presentation Award" at ACM SIGGRAPH MIG 2025 where it was presented as an invited TVCG article. </span></p><!-- / .pub-meta --></div><!-- / .pub-data --><div class="pub-features"><ul><li><a href="/cnt/uploads/Immersive-Data-Driven-Storytelling.pdf" title="Download document: Immersive Data-Driven Storytelling: Scoping an Emerging Field Through the Lenses of Research, Journalism, and Games">Document <span class="material-icons md-18">article</span></a></li><li><a href="/research/research-projects/immersive-data-driven-storytelling/" title="Link to the main project page.">Project page <span class="material-icons md-18">link</span></a></li><li><a class="morematerial-toggle" href="#">Materials <span class="material-icons md-18">loupe</span></a></li><li><a class="bibtex-toggle" href="#">BibTeX <span class="material-icons md-18">format_quote</span></a></li></ul></div><!-- / .pub-features --></div><!-- / .pub-content --><div class="pub-bibtex"><p>@article{MLRBD-2025-ImmDataStoriesReview,<br />
&nbsp;&nbsp;&nbsp;author = {Juli\'{a}n M\'{e}ndez and Weizhou Luo and Rufat Rzayev and Wolfgang B\"{u}schel and Raimund Dachselt},<br />
&nbsp;&nbsp;&nbsp;title = {Immersive Data-Driven Storytelling: Scoping an Emerging Field Through the Lenses of Research, Journalism, and Games},<br />
&nbsp;&nbsp;&nbsp;journal = {IEEE Transactions on Visualization and Computer Graphics},<br />
&nbsp;&nbsp;&nbsp;volume = {31},<br />
&nbsp;&nbsp;&nbsp;issue = {10},<br />
&nbsp;&nbsp;&nbsp;year = {2025},<br />
&nbsp;&nbsp;&nbsp;month = {1},<br />
&nbsp;&nbsp;&nbsp;location = {Vienna, Austria},<br />
&nbsp;&nbsp;&nbsp;pages = {6839--6851},<br />
&nbsp;&nbsp;&nbsp;doi = {10.1109/TVCG.2025.3531138},<br />
&nbsp;&nbsp;&nbsp;publisher = {IEEE},<br />
&nbsp;&nbsp;&nbsp;address = {New Jersey}<br />
}</p></div><!-- / .pub-bibtex --><div class="pub-morematerial"><p class="heading">Further Materials</p><p> 	<a href="/cnt/uploads/Design-Space.pdf" title="Download Design Space (.pdf): Immersive Data-Driven Storytelling: Scoping an Emerging Field Through the Lenses of Research, Journalism, and Games">Design Space (.pdf)</a>, 	<a href="/cnt/uploads/Scoping-Review-Corpus-Codebook.xlsx" title="Download Scoping Review Corpus & Codebook (.xlsx): Immersive Data-Driven Storytelling: Scoping an Emerging Field Through the Lenses of Research, Journalism, and Games">Scoping Review Corpus & Codebook (.xlsx)</a></p></div><!-- / .pub-more-content --></li><!-- / .pub-entry --></ul></div></div>
]]></content:encoded>
					</item>
		<item>
		<title>IEEE VIS 2025 in Vienna</title>
		<link>https://www.mt.inf.tu-dresden.de/news/2025/11/ieee-vis-2025-in-vienna/</link>
		<pubDate>Fri, 14 Nov 2025 12:04:28 +0000</pubDate>
		<guid isPermaLink="false">https://imld.de/?p=32335</guid>
	<description><![CDATA[<img src="https://www.mt.inf.tu-dresden.de/cnt/uploads/vis-2025-news-e1763121811261.jpg" alt=""/>This year's IEEE VIS conference took place from November 2 to 7 in Vienna, Austria. Our Interactive Media Lab Dresden contributed two TVCG articles and one poster. Raimund Dachselt and Julián Méndez traveled to the conference to present the TVCG contributions on augmented dynamic data physicalization and immersive data-driven storytelling. The leading conference for advances [&#8230;]]]></description>
							<content:encoded><![CDATA[<img src="https://www.mt.inf.tu-dresden.de/cnt/uploads/vis-2025-news-e1763121811261.jpg" alt=""/><p>This year's <a href="https://ieeevis.org/year/2025/welcome">IEEE VIS</a> conference took place from November 2 to 7 in Vienna, Austria. Our Interactive Media Lab Dresden contributed two TVCG articles and one poster. <a href="https://imld.de/our-group/team/raimund-dachselt/">Raimund Dachselt</a> and <a href="https://imld.de/our-group/team/julian-mendez/">Julián Méndez</a> traveled to the conference to present the TVCG contributions on augmented dynamic data physicalization and immersive data-driven storytelling.</p>
<p><span id="more-32335"></span><br />
The leading conference for advances in visualization and visual analytics counted 1114 attendees this year, with 80 accepted papers out of 234 submissions (acceptance rate: 34 %). The conference, consisting of sessions for full and short papers, workshops, tutorials, and award ceremonies for technical and test-of-time achievements, was a fascinating and inspiring experience for our representatives. You can find more information on this year's contributions below.</p>
<div class="block-list-of-publications"><h2>Contributions presented at IEEE VIS 2025</h2><div class="publications"><ul><li class="pub-entry pub-entry-32267"><div class="pub-content"><a class="no-icon pub-preview" href="/cnt/uploads/Islam_2025_SmartWristband.pdf"><img decoding="async" src="/cnt/uploads/Islam_2025_SmartWristband_thumbnail-300x300.jpg" alt="Visualization on Smart Wristbands: Results from an In-situ Design Workshop with Four Scenarios" loading="lazy"></a><div class="pub-data"><h3 class="pub-title"><a class="no-icon" href="/cnt/uploads/Islam_2025_SmartWristband.pdf" title="Download: Visualization on Smart Wristbands: Results from an In-situ Design Workshop with Four Scenarios">Visualization on Smart Wristbands: Results from an In-situ Design Workshop with Four Scenarios</a></h3><p class="pub-authors"><span>Islam, A.;</span> <span>Grioui, F.;</span> <span><a href="/our-group/team/raimund-dachselt/">Dachselt, R.</a>;</span> <span>Isenberg, P.</span></p><p class="pub-meta"><span class="pub-meta-book">In&nbsp;<em>VIS 2025 - Posters of the IEEE Conference on Visualization and Visual Analytics.</em></span><span class="pub-meta-event">VIS '25</span><span class="pub-meta-date">2025.</span></p><!-- / .pub-meta --></div><!-- / .pub-data --><div class="pub-features"><ul><li><a href="/cnt/uploads/Islam_2025_SmartWristband.pdf" title="Download document: Visualization on Smart Wristbands: Results from an In-situ Design Workshop with Four Scenarios">Document <span class="material-icons md-18">article</span></a></li><li><a class="morematerial-toggle" href="#">Materials <span class="material-icons md-18">loupe</span></a></li><li><a class="bibtex-toggle" href="#">BibTeX <span class="material-icons md-18">format_quote</span></a></li></ul></div><!-- / .pub-features --></div><!-- / .pub-content --><div class="pub-bibtex"><p>@misc{islam-2025-smart-wristbands,<br />
&nbsp;&nbsp;&nbsp;author = {Alaul Islam and Fairouz Grioui and Raimund Dachselt and Petra Isenberg},<br />
&nbsp;&nbsp;&nbsp;title = {Visualization on Smart Wristbands: Results from an In-situ Design Workshop with Four Scenarios},<br />
&nbsp;&nbsp;&nbsp;booktitle = {VIS 2025 - Posters of the IEEE Conference on Visualization and Visual Analytics},<br />
&nbsp;&nbsp;&nbsp;year = {2025},<br />
&nbsp;&nbsp;&nbsp;month = {11}<br />
}</p></div><!-- / .pub-bibtex --><div class="pub-morematerial"><p class="heading">Further Materials</p><p> 	<a href="/cnt/uploads/Islam_2025_SmartWristband_Poster.pdf" title="Download Poster: Visualization on Smart Wristbands: Results from an In-situ Design Workshop with Four Scenarios">Poster</a></p></div><!-- / .pub-more-content --></li><!-- / .pub-entry --><li class="pub-entry pub-entry-30799"><div class="pub-content"><a class="no-icon pub-preview" href="/cnt/uploads/Engert-2025_Augmented-Dynamic-Data-Phys.pdf"><img decoding="async" src="/cnt/uploads/Engert_2025_ADP_thumbnail-150x150.jpg" alt="Augmented Dynamic Data Physicalization: Blending Shape-changing Data Sculptures with Virtual Content for Interactive Visualization" loading="lazy"></a><div class="pub-data"><h3 class="pub-title"><a class="no-icon" href="/cnt/uploads/Engert-2025_Augmented-Dynamic-Data-Phys.pdf" title="Download: Augmented Dynamic Data Physicalization: Blending Shape-changing Data Sculptures with Virtual Content for Interactive Visualization">Augmented Dynamic Data Physicalization: Blending Shape-changing Data Sculptures with Virtual Content for Interactive Visualization</a></h3><p class="pub-authors"><span><a href="/our-group/team/severin-engert/">Engert, S.</a>;</span> <span><a href="/our-group/team/andreas-peetz/">Peetz, A.</a>;</span> <span><a href="/our-group/team/konstantin-klamka/">Klamka, K.</a>;</span> <span>Surer, P.;</span> <span>Isenberg, T.;</span> <span><a href="/our-group/team/raimund-dachselt/">Dachselt, R.</a></span></p><p class="pub-meta"><span class="pub-meta-journal">In <em>IEEE Transactions on Visualization and Computer Graphics</em> (Volume 31, Issue 10).</span><span class="pub-meta-event">VIS '25, Vienna, Austria.</span><span class="pub-meta-publisher">IEEE,</span><span class="pub-meta-pages">Pages&nbsp;7580-7597,</span><span class="pub-meta-date">2025.</span><span class="pub-meta-doi"><a class="label label-info" href="https://doi.org/10.1109/TVCG.2025.3547432" title="Go to: https://doi.org/10.1109/TVCG.2025.3547432">10.1109/TVCG.2025.3547432</a></span></p><!-- / .pub-meta --></div><!-- / .pub-data --><div class="pub-features"><ul><li><a href="/cnt/uploads/Engert-2025_Augmented-Dynamic-Data-Phys.pdf" title="Download document: Augmented Dynamic Data Physicalization: Blending Shape-changing Data Sculptures with Virtual Content for Interactive Visualization">Document <span class="material-icons md-18">article</span></a></li><li><a href="/research/research-projects/augmented-dynamic-data-physicalization/" title="Link to the main project page.">Project page <span class="material-icons md-18">link</span></a></li><li><a class="morematerial-toggle" href="#">Materials <span class="material-icons md-18">loupe</span></a></li><li><a class="bibtex-toggle" href="#">BibTeX <span class="material-icons md-18">format_quote</span></a></li></ul></div><!-- / .pub-features --></div><!-- / .pub-content --><div class="pub-bibtex"><p>@article{Engert:2025:ADP,<br />
&nbsp;&nbsp;&nbsp;author = {Severin Engert and Andreas Peetz and Konstantin  Klamka and Pierre Surer and Tobias Isenberg and Raimund Dachselt},<br />
&nbsp;&nbsp;&nbsp;title = {Augmented Dynamic Data Physicalization: Blending Shape-changing Data Sculptures with Virtual Content for Interactive Visualization},<br />
&nbsp;&nbsp;&nbsp;journal = {IEEE Transactions on Visualization and Computer Graphics},<br />
&nbsp;&nbsp;&nbsp;volume = {31},<br />
&nbsp;&nbsp;&nbsp;issue = {10},<br />
&nbsp;&nbsp;&nbsp;year = {2025},<br />
&nbsp;&nbsp;&nbsp;month = {3},<br />
&nbsp;&nbsp;&nbsp;location = {Vienna, Austria},<br />
&nbsp;&nbsp;&nbsp;pages = {7580--7597},<br />
&nbsp;&nbsp;&nbsp;doi = {10.1109/TVCG.2025.3547432},<br />
&nbsp;&nbsp;&nbsp;publisher = {IEEE}<br />
}</p></div><!-- / .pub-bibtex --><div class="pub-morematerial"><p class="heading">Further Materials</p><p><span class="pub-meta-attach-web-links"><a href="https://youtu.be/IWF2Phv4SuQ" target="_blank" title="Video">Video</a></span></p></div><!-- / .pub-more-content --></li><!-- / .pub-entry --><li class="pub-entry pub-entry-30558" rel="tooltip" data-toggle="tooltip" data-placement="left" title="Award: Best Student Presentation"><div class="pub-content"><a class="no-icon pub-preview" href="/cnt/uploads/Immersive-Data-Driven-Storytelling.pdf"><img decoding="async" src="/cnt/uploads/immdatastories-150x150.jpg" alt="Immersive Data-Driven Storytelling: Scoping an Emerging Field Through the Lenses of Research, Journalism, and Games" loading="lazy"><div class="pub-trophy"><i class="icon-trophy"></i></div></a><div class="pub-data"><h3 class="pub-title"><a class="no-icon" href="/cnt/uploads/Immersive-Data-Driven-Storytelling.pdf" title="Download: Immersive Data-Driven Storytelling: Scoping an Emerging Field Through the Lenses of Research, Journalism, and Games">Immersive Data-Driven Storytelling: Scoping an Emerging Field Through the Lenses of Research, Journalism, and Games</a></h3><p class="pub-authors"><span><a href="/our-group/team/julian-mendez/">Méndez, J.</a>;</span> <span><a href="/our-group/team/weizhou-luo/">Luo, W.</a>;</span> <span><a href="/our-group/team/rufat-rzayev/">Rzayev, R.</a>;</span> <span><a href="/our-group/team/wolfgang-bueschel/">Büschel, W.</a>;</span> <span><a href="/our-group/team/raimund-dachselt/">Dachselt, R.</a></span></p><p class="pub-meta"><span class="pub-meta-journal">In <em>IEEE Transactions on Visualization and Computer Graphics</em> (Volume 31, Issue 10).</span><span class="pub-meta-event">VIS '25, Vienna, Austria.</span><span class="pub-meta-publisher">IEEE,</span><span class="pub-meta-pages">Pages&nbsp;6839-6851,</span><span class="pub-meta-date">2025.</span><span class="pub-meta-doi"><a class="label label-info" href="https://doi.org/10.1109/TVCG.2025.3531138" title="Go to: https://doi.org/10.1109/TVCG.2025.3531138">10.1109/TVCG.2025.3531138</a></span><span class="pub-meta-comments">Received "Best Student Presentation Award" at ACM SIGGRAPH MIG 2025 where it was presented as an invited TVCG article. </span></p><!-- / .pub-meta --></div><!-- / .pub-data --><div class="pub-features"><ul><li><a href="/cnt/uploads/Immersive-Data-Driven-Storytelling.pdf" title="Download document: Immersive Data-Driven Storytelling: Scoping an Emerging Field Through the Lenses of Research, Journalism, and Games">Document <span class="material-icons md-18">article</span></a></li><li><a href="/research/research-projects/immersive-data-driven-storytelling/" title="Link to the main project page.">Project page <span class="material-icons md-18">link</span></a></li><li><a class="morematerial-toggle" href="#">Materials <span class="material-icons md-18">loupe</span></a></li><li><a class="bibtex-toggle" href="#">BibTeX <span class="material-icons md-18">format_quote</span></a></li></ul></div><!-- / .pub-features --></div><!-- / .pub-content --><div class="pub-bibtex"><p>@article{MLRBD-2025-ImmDataStoriesReview,<br />
&nbsp;&nbsp;&nbsp;author = {Juli\'{a}n M\'{e}ndez and Weizhou Luo and Rufat Rzayev and Wolfgang B\"{u}schel and Raimund Dachselt},<br />
&nbsp;&nbsp;&nbsp;title = {Immersive Data-Driven Storytelling: Scoping an Emerging Field Through the Lenses of Research, Journalism, and Games},<br />
&nbsp;&nbsp;&nbsp;journal = {IEEE Transactions on Visualization and Computer Graphics},<br />
&nbsp;&nbsp;&nbsp;volume = {31},<br />
&nbsp;&nbsp;&nbsp;issue = {10},<br />
&nbsp;&nbsp;&nbsp;year = {2025},<br />
&nbsp;&nbsp;&nbsp;month = {1},<br />
&nbsp;&nbsp;&nbsp;location = {Vienna, Austria},<br />
&nbsp;&nbsp;&nbsp;pages = {6839--6851},<br />
&nbsp;&nbsp;&nbsp;doi = {10.1109/TVCG.2025.3531138},<br />
&nbsp;&nbsp;&nbsp;publisher = {IEEE},<br />
&nbsp;&nbsp;&nbsp;address = {New Jersey}<br />
}</p></div><!-- / .pub-bibtex --><div class="pub-morematerial"><p class="heading">Further Materials</p><p> 	<a href="/cnt/uploads/Design-Space.pdf" title="Download Design Space (.pdf): Immersive Data-Driven Storytelling: Scoping an Emerging Field Through the Lenses of Research, Journalism, and Games">Design Space (.pdf)</a>, 	<a href="/cnt/uploads/Scoping-Review-Corpus-Codebook.xlsx" title="Download Scoping Review Corpus & Codebook (.xlsx): Immersive Data-Driven Storytelling: Scoping an Emerging Field Through the Lenses of Research, Journalism, and Games">Scoping Review Corpus & Codebook (.xlsx)</a></p></div><!-- / .pub-more-content --></li><!-- / .pub-entry --></ul></div></div>
]]></content:encoded>
					</item>
		<item>
		<title>FroCoS &#038; DL 2025</title>
		<link>https://www.mt.inf.tu-dresden.de/news/2025/10/frocos-dl-2025/</link>
		<pubDate>Tue, 14 Oct 2025 14:51:58 +0000</pubDate>
		<guid isPermaLink="false">https://imld.de/?p=32103</guid>
	<description><![CDATA[<img src="https://www.mt.inf.tu-dresden.de/cnt/uploads/dl-2025.png" alt=""/>Our research on "The Concrete Evonne" was presented at the 15th International Symposium on Frontiers of Combining Systems (FroCoS 2025) and at the 38th International Workshop on Description Logics (DL 2025) in Iceland and Poland, respectively. This work is the result of our collaboration with the Transregional Collaborative Research Centre 248 (CPEC). Evonne is a web-based visualization tool [&#8230;]]]></description>
							<content:encoded><![CDATA[<img src="https://www.mt.inf.tu-dresden.de/cnt/uploads/dl-2025.png" alt=""/><p>Our research on <em>"The Concrete Evonne"</em> was presented at the <em>15th International Symposium on Frontiers of Combining Systems (<a href="https://icetcs.github.io/frocos-itp-tableaux25/frocos/">FroCoS 2025</a>)</em> and at the <em>38th International Workshop on Description Logics (<a href="https://dl25.cs.uni.opole.pl/">DL 2025</a>)</em> in Iceland and Poland, respectively. This work is the result of our collaboration with the <em>Transregional Collaborative Research Centre 248 (<a href="http://cpec.science">CPEC</a>)</em>.</p>
<p><span id="more-32103"></span><br />
<a href="https://imld.de/evonne">Evonne</a> is a web-based visualization tool for explaining reasoning results and debugging errors in ontologies. In its latest version, "The Concrete Evonne" adds features for explaining combined logics and the concrete domains of linear equations and difference constraints through tailored visualizations and proofs. The main authors of this work are <a href="https://fis.tu-dresden.de/portal/en/researchers/christian-alrabbaa(01f4a4d5-7f4e-47df-8f83-2318923ff6ca).html">Christian Alrabbaa</a> from the Chair of Automata Theory of <a href="https://tu-dresden.de/ing/informatik/thi/lat/die-professur/franz-baader">Prof. Dr.-Ing. Franz Baader</a> and <a href="https://fis.tu-dresden.de/portal/en/researchers/julin-mendez(f03d6a5c-e03f-409b-9196-7ccbcf3400c4).html">Julián Méndez</a> from our Interactive Media Lab Dresden. They therefore traveled to FroCoS and DL to present and discuss their work.</p>
<p>From September 3 to 6, the DL workshop took place at the University of Opole in Poland, where many exciting works were presented while the career, achievements, and formative influence of Prof. Baader within the DL community were also celebrated. At this event, "The Concrete Evonne" was presented as an extended abstract in the poster sessions.</p>
<p>Subsequently, FroCoS took place from September 29 to October 1 at Reykjavík University in Iceland, co-located with other conferences and workshops on automated reasoning and theorem proving (<a href="https://icetcs.github.io/frocos-itp-tableaux25/index.html">TABLEAUX, ITP</a>). With its focus on combining systems, FroCoS was the perfect venue to present "The Concrete Evonne" as a full paper on explanations for combined description logics and concrete domains.</p>
<div class="block-list-of-publications"><h2>Contributions</h2><div class="publications"><ul><li class="pub-entry pub-entry-31491"><div class="pub-content"><a class="no-icon pub-preview" href="/cnt/uploads/concrete_evonne.pdf"><img decoding="async" src="/cnt/uploads/vis-c-domains-300x300.jpg" alt="The Concrete Evonne: Visualization Meets Concrete Domain Reasoning" loading="lazy"></a><div class="pub-data"><h3 class="pub-title"><a class="no-icon" href="/cnt/uploads/concrete_evonne.pdf" title="Download: The Concrete Evonne: Visualization Meets Concrete Domain Reasoning">The Concrete Evonne: Visualization Meets Concrete Domain Reasoning</a></h3><p class="pub-authors"><span>Alrabbaa, C.;</span> <span>Baader, F.;</span> <span><a href="/our-group/team/raimund-dachselt/">Dachselt, R.</a>;</span> <span>Kovtunova, A.;</span> <span><a href="/our-group/team/julian-mendez/">Méndez, J.</a></span></p><p class="pub-meta"><span class="pub-meta-book">In&nbsp;<em>Frontiers of Combining Systems.</em></span><span class="pub-meta-event">FroCoS '25, Reykjavík, Iceland.</span><span class="pub-meta-publisher">Springer,</span><span class="pub-meta-date">2025.</span><span class="pub-meta-doi"><a class="label label-info" href="https://doi.org/10.1007/978-3-032-04167-8_1" title="Go to: https://doi.org/10.1007/978-3-032-04167-8_1">10.1007/978-3-032-04167-8_1</a></span></p><!-- / .pub-meta --></div><!-- / .pub-data --><div class="pub-features"><ul><li><a href="/cnt/uploads/concrete_evonne.pdf" title="Download document: The Concrete Evonne: Visualization Meets Concrete Domain Reasoning">Document <span class="material-icons md-18">article</span></a></li><li><a href="/research/research-projects/evonne/" title="Link to the main project page.">Project page <span class="material-icons md-18">link</span></a></li><li><a class="morematerial-toggle" href="#">Materials <span class="material-icons md-18">loupe</span></a></li><li><a class="bibtex-toggle" href="#">BibTeX <span 
class="material-icons md-18">format_quote</span></a></li></ul></div><!-- / .pub-features --></div><!-- / .pub-content --><div class="pub-bibtex"><p>@inproceedings{ABDKM25,<br />
&nbsp;&nbsp;&nbsp;author = {Christian Alrabbaa and Franz Baader and Raimund Dachselt and Alisa Kovtunova and Juli\'{a}n M\'{e}ndez},<br />
&nbsp;&nbsp;&nbsp;title = {The Concrete Evonne: Visualization Meets Concrete Domain Reasoning},<br />
&nbsp;&nbsp;&nbsp;booktitle = {Frontiers of Combining Systems},<br />
&nbsp;&nbsp;&nbsp;series = {Lecture Notes in Artificial Intelligence},<br />
&nbsp;&nbsp;&nbsp;volume = {15979},<br />
&nbsp;&nbsp;&nbsp;year = {2025},<br />
&nbsp;&nbsp;&nbsp;month = {9},<br />
&nbsp;&nbsp;&nbsp;location = {Reykjavík, Iceland},<br />
&nbsp;&nbsp;&nbsp;doi = {10.1007/978-3-032-04167-8_1},<br />
&nbsp;&nbsp;&nbsp;publisher = {Springer}<br />
}</p></div><!-- / .pub-bibtex --><div class="pub-morematerial"><p class="heading">Further Materials</p><p> <a href="http://141.76.67.139:7007/" target="_blank" title="Online Demo"><i class="icon-external-link"></i> Online Demo</a></p></div><!-- / .pub-more-content --></li><!-- / .pub-entry --><li class="pub-entry pub-entry-31513"><div class="pub-content"><a class="no-icon pub-preview" href="/cnt/uploads/concrete-evonne-extended-abstract.pdf"><img decoding="async" src="/cnt/uploads/concrete-evonne-dl.jpg" alt="The Concrete Evonne: Visualization Meets Concrete Domain Reasoning (Extended Abstract)" loading="lazy"></a><div class="pub-data"><h3 class="pub-title"><a class="no-icon" href="/cnt/uploads/concrete-evonne-extended-abstract.pdf" title="Download: The Concrete Evonne: Visualization Meets Concrete Domain Reasoning (Extended Abstract)">The Concrete Evonne: Visualization Meets Concrete Domain Reasoning (Extended Abstract)</a></h3><p class="pub-authors"><span>Alrabbaa, C.;</span> <span>Baader, F.;</span> <span><a href="/our-group/team/raimund-dachselt/">Dachselt, R.</a>;</span> <span>Kovtunova, A.;</span> <span><a href="/our-group/team/julian-mendez/">Méndez, J.</a></span></p><p class="pub-meta"><span class="pub-meta-book">In&nbsp;<em>Proceedings of the 38th International Workshop on Description Logics.</em></span><span class="pub-meta-event">DL '25, Opole, Poland.</span><span class="pub-meta-date">2025.</span></p><!-- / .pub-meta --></div><!-- / .pub-data --><div class="pub-features"><ul><li><a href="/cnt/uploads/concrete-evonne-extended-abstract.pdf" title="Download document: The Concrete Evonne: Visualization Meets Concrete Domain Reasoning (Extended Abstract)">Document <span class="material-icons md-18">article</span></a></li><li><a href="/research/research-projects/evonne/" title="Link to the main project page.">Project page <span class="material-icons md-18">link</span></a></li><li><a class="bibtex-toggle" href="#">BibTeX <span 
class="material-icons md-18">format_quote</span></a></li></ul></div><!-- / .pub-features --></div><!-- / .pub-content --><div class="pub-bibtex"><p>@inproceedings{eaABDKM25,<br />
&nbsp;&nbsp;&nbsp;author = {Christian Alrabbaa and Franz Baader and Raimund Dachselt and Alisa Kovtunova and Juli\'{a}n M\'{e}ndez},<br />
&nbsp;&nbsp;&nbsp;title = {The Concrete Evonne: Visualization Meets Concrete Domain Reasoning (Extended Abstract)},<br />
&nbsp;&nbsp;&nbsp;booktitle = {Proceedings of the 38th International Workshop on Description Logics},<br />
&nbsp;&nbsp;&nbsp;year = {2025},<br />
&nbsp;&nbsp;&nbsp;month = {9},<br />
&nbsp;&nbsp;&nbsp;location = {Opole, Poland},<br />
&nbsp;&nbsp;&nbsp;url = {https://ceur-ws.org/Vol-4091/paper18.pdf}<br />
}</p></div><!-- / .pub-bibtex --><div class="pub-morematerial"><p class="heading">Weitere Materialien</p><p></p></div><!-- / .pub-more-content --></li><!-- / .pub-entry --></ul></div></div>
]]></content:encoded>
					</item>
		<item>
		<title>Start des DFG-Projekts tVISt</title>
		<link>https://www.mt.inf.tu-dresden.de/news/2025/10/start-des-dfg-projekts-tvist/</link>
		<pubDate>Wed, 01 Oct 2025 07:45:47 +0000</pubDate>
		<guid isPermaLink="false">https://imld.de/?p=31998</guid>
		<description><![CDATA[<img src="https://www.mt.inf.tu-dresden.de/cnt/uploads/tvist.png" alt=""/>In October, our DFG project &#8220;tVISt: Data Visualization for Non-Planar Displays&#8221; started in cooperation with our French project partners Anastasia Bezerianos from Université Paris-Saclay as well as Tobias Isenberg and Petra Isenberg from the Inria team Aviz. The project explores how data visualization beyond classic flat displays, for example on curved, spherical, or flexible surfaces, can look and function. In the process, new [&#8230;]]]></description>
							<content:encoded><![CDATA[<img src="https://www.mt.inf.tu-dresden.de/cnt/uploads/tvist.png" alt=""/><p>In October, our DFG project <a href="https://imld.de/research/research-projects/tvist/">&#8220;tVISt: Data Visualization for Non-Planar Displays&#8221;</a> started in cooperation with our French project partners <a href="https://www.lri.fr/~anab/">Anastasia Bezerianos</a> from Université Paris-Saclay as well as <a href="https://tobias.isenberg.cc">Tobias Isenberg</a> and <a href="https://petra.isenberg.cc">Petra Isenberg</a> from the Inria team Aviz.</p>
<p><span id="more-31998"></span></p>
<p>The project explores how data visualization beyond classic flat displays, for example on curved, spherical, or flexible surfaces, can look and function. New approaches are being developed to adapt visualizations and interactions to the specific properties of such displays and to fully exploit their potential.</p>
<p>Under the leadership of <a href="https://imld.de/our-group/team/raimund-dachselt/">Raimund Dachselt</a>, the German coordinator of the project, our lab is starting the collaboration. Our staff member <a href="https://imld.de/~baader">Julian Baader</a> is part of the project on the German side, while two further doctoral students are involved on the French side.</p>
]]></content:encoded>
					</item>
		<item>
		<title>MuC in Chemnitz</title>
		<link>https://www.mt.inf.tu-dresden.de/news/2025/09/muc-in-chemnitz/</link>
		<pubDate>Mon, 22 Sep 2025 12:21:37 +0000</pubDate>
		<guid isPermaLink="false">https://imld.de/?p=31946</guid>
		<description><![CDATA[<img src="https://www.mt.inf.tu-dresden.de/cnt/uploads/2025-09-03-17_21_36.2560200.jpg" alt=""/>Julian Baader and Katja Krug recently attended the Mensch und Computer conference in Chemnitz. Each of them contributed a short paper and presented the accompanying poster. Julian Baader presented the poster for the work “The Invisible Hand of the Context: Authoring of Context-Aware Mixed Reality Labels”, which was created in collaboration with Mats Ellenberg and Marc [&#8230;]]]></description>
							<content:encoded><![CDATA[<img src="https://www.mt.inf.tu-dresden.de/cnt/uploads/2025-09-03-17_21_36.2560200.jpg" alt=""/><p><a href="https://imld.de/our-group/team/julian-baader/">Julian Baader</a> and <a href="https://imld.de/our-group/team/katja-krug/">Katja Krug</a> recently attended the Mensch und Computer conference in Chemnitz. Each of them contributed a short paper and presented the accompanying poster.</p>
<p><span id="more-31946"></span></p>
<p><a href="https://imld.de/our-group/team/julian-baader/">Julian Baader</a> presented the poster for the work “<em>The Invisible Hand of the Context: Authoring of Context-Aware Mixed Reality Labels</em>”, which was created in collaboration with <a href="https://imld.de/our-group/team/mats-ole-ellenberg/">Mats Ellenberg</a> and <a href="https://imld.de/our-group/team/marc-satkowski/">Marc Satkowski</a>.</p>
<p><a href="https://imld.de/our-group/team/katja-krug/">Katja Krug</a> presented the poster for the work “<em>Face Off: External Tracking vs. Manual Control for Facial Expressions in Multi-User Extended Reality</em>”, a collaboration with Xiaoli Song and <a href="https://imld.de/our-group/team/wolfgang-bueschel/">Wolfgang Büschel</a>.</p>
<p>This year's Mensch und Computer celebrated its 25th anniversary and drew around 600 participants to TU Chemnitz. Julian and Katja had a great time, received valuable feedback, and hope to be back soon.</p>
<div class="block-list-of-publications"><h2>Beiträge zur MuC 2025</h2><div class="publications"><ul><li class="pub-entry pub-entry-31670"><div class="pub-content"><a class="no-icon pub-preview" href="/cnt/uploads/3743049.3748552.pdf"><img decoding="async" src="/cnt/uploads/small-header-300x300.jpg" alt="The Invisible Hand of the Context: Authoring of Context-Aware Mixed Reality Labels" loading="lazy"></a><div class="pub-data"><h3 class="pub-title"><a class="no-icon" href="/cnt/uploads/3743049.3748552.pdf" title="Short Paper: The Invisible Hand of the Context: Authoring of Context-Aware Mixed Reality Labels">The Invisible Hand of the Context: Authoring of Context-Aware Mixed Reality Labels</a></h3><p class="pub-authors"><span><a href="/our-group/team/julian-baader/">Baader, J.</a>;</span> <span><a href="/our-group/team/mats-ole-ellenberg/">Ellenberg, M.</a>;</span> <span><a href="/our-group/team/marc-satkowski/">Satkowski, M.</a></span></p><p class="pub-meta"><span class="pub-meta-book">In&nbsp;<em>Mensch und Computer 2025 - Tagungsband.</em></span><span class="pub-meta-event">MuC'25, Chemnitz, Germany .</span><span class="pub-meta-date">2025.</span><span class="pub-meta-doi"><a class="label label-info" href="https://doi.org/10.1145/3743049.3748552" title="Gehe zu: https://doi.org/10.1145/3743049.3748552">10.1145/3743049.3748552</a></span></p><!-- / .pub-meta --></div><!-- / .pub-data --><div class="pub-features"><ul><li><a href="/cnt/uploads/3743049.3748552.pdf" title="Dokument herunterladen Dokument: The Invisible Hand of the Context: Authoring of Context-Aware Mixed Reality Labels">Dokument <span class="material-icons md-18">article</span></a></li><li><a href="/research/research-projects/ceti/" title="Link zur Hauptprojektseite.">Projektseite <span class="material-icons md-18">link</span></a></li><li><a class="morematerial-toggle" href="#">Materialien <span class="material-icons md-18">loupe</span></a></li><li><a class="bibtex-toggle" href="#">BibTeX <span 
class="material-icons md-18">format_quote</span></a></li></ul></div><!-- / .pub-features --></div><!-- / .pub-content --><div class="pub-bibtex"><p>@inproceedings{baader2025authoring,<br />
&nbsp;&nbsp;&nbsp;author = {Julian Baader and Mats Ole Ellenberg and Marc Satkowski},<br />
&nbsp;&nbsp;&nbsp;title = {The Invisible Hand of the Context: Authoring of Context-Aware Mixed Reality Labels},<br />
&nbsp;&nbsp;&nbsp;booktitle = {Mensch und Computer 2025 - Tagungsband},<br />
&nbsp;&nbsp;&nbsp;year = {2025},<br />
&nbsp;&nbsp;&nbsp;month = {08},<br />
&nbsp;&nbsp;&nbsp;location = {Chemnitz, Germany},<br />
&nbsp;&nbsp;&nbsp;numpages = {6},<br />
&nbsp;&nbsp;&nbsp;doi = {10.1145/3743049.3748552},<br />
&nbsp;&nbsp;&nbsp;keywords = {Mixed Reality, Labeling, Label Authoring, Context Aware Labels, Mixed Reality Labels}<br />
}</p></div><!-- / .pub-bibtex --><div class="pub-morematerial"><p class="heading">Weitere Materialien</p><p> 	<a href="/cnt/uploads/MRLA_Poster_v4.pdf" title="Download Poster: The Invisible Hand of the Context: Authoring of Context-Aware Mixed Reality Labels">Poster</a></p></div><!-- / .pub-more-content --></li><!-- / .pub-entry --><li class="pub-entry pub-entry-31637"><div class="pub-content"><a class="no-icon pub-preview" href="/cnt/uploads/MUC_Shortpaper___Facial_Expressions-4.pdf"><img decoding="async" src="/cnt/uploads/small_header-500x500.png" alt="Face Off: External Tracking vs. Manual Control for Facial Expressions in Multi-User Extended Reality" loading="lazy"></a><div class="pub-data"><h3 class="pub-title"><a class="no-icon" href="/cnt/uploads/MUC_Shortpaper___Facial_Expressions-4.pdf" title="Face Off Paper PDF: Face Off: External Tracking vs. Manual Control for Facial Expressions in Multi-User Extended Reality">Face Off: External Tracking vs. Manual Control for Facial Expressions in Multi-User Extended Reality</a></h3><p class="pub-authors"><span><a href="/our-group/team/katja-krug/">Krug, K.</a>;</span> <span>Song, X.;</span> <span><a href="/our-group/team/wolfgang-bueschel/">Büschel, W.</a></span></p><p class="pub-meta"><span class="pub-meta-book">In&nbsp;<em>Mensch und Computer 2025 - Tagungsband.</em></span><span class="pub-meta-event">MuC'25, Chemnitz, Germany.</span><span class="pub-meta-date">2025.</span><span class="pub-meta-doi"><a class="label label-info" href="https://doi.org/10.1145/3743049.3748590" title="Gehe zu: https://doi.org/10.1145/3743049.3748590">10.1145/3743049.3748590</a></span></p><!-- / .pub-meta --></div><!-- / .pub-data --><div class="pub-features"><ul><li><a href="/cnt/uploads/MUC_Shortpaper___Facial_Expressions-4.pdf" title="Dokument herunterladen Dokument: Face Off: External Tracking vs. 
Manual Control for Facial Expressions in Multi-User Extended Reality">Dokument <span class="material-icons md-18">article</span></a></li><li><a href="/research/research-projects/ceti/" title="Link zur Hauptprojektseite.">Projektseite <span class="material-icons md-18">link</span></a></li><li><a class="morematerial-toggle" href="#">Materialien <span class="material-icons md-18">loupe</span></a></li><li><a class="bibtex-toggle" href="#">BibTeX <span class="material-icons md-18">format_quote</span></a></li></ul></div><!-- / .pub-features --></div><!-- / .pub-content --><div class="pub-bibtex"><p>@inproceedings{krug2025face,<br />
&nbsp;&nbsp;&nbsp;author = {Katja Krug and Xiaoli Song and Wolfgang B\"{u}schel},<br />
&nbsp;&nbsp;&nbsp;title = {Face Off: External Tracking vs. Manual Control for Facial Expressions in Multi-User Extended Reality},<br />
&nbsp;&nbsp;&nbsp;booktitle = {Mensch und Computer 2025 - Tagungsband},<br />
&nbsp;&nbsp;&nbsp;year = {2025},<br />
&nbsp;&nbsp;&nbsp;month = {08},<br />
&nbsp;&nbsp;&nbsp;location = {Chemnitz, Germany},<br />
&nbsp;&nbsp;&nbsp;numpages = {5},<br />
&nbsp;&nbsp;&nbsp;doi = {10.1145/3743049.3748590},<br />
&nbsp;&nbsp;&nbsp;keywords = {Mixed Reality, Collaboration, Facial Expression, Avatars}<br />
}</p></div><!-- / .pub-bibtex --><div class="pub-morematerial"><p class="heading">Weitere Materialien</p><p> 	<a href="/cnt/uploads/pfb_gruen.pdf" title="Download Poster: Face Off: External Tracking vs. Manual Control for Facial Expressions in Multi-User Extended Reality">Poster</a></p></div><!-- / .pub-more-content --></li><!-- / .pub-entry --></ul></div></div>
]]></content:encoded>
					</item>
		<item>
		<title>Can Liu besucht das IMLD</title>
		<link>https://www.mt.inf.tu-dresden.de/news/2025/08/can-liu-besucht-das-imld/</link>
		<pubDate>Mon, 18 Aug 2025 14:58:32 +0000</pubDate>
		<guid isPermaLink="false">https://imld.de/?p=31659</guid>
		<description><![CDATA[<img src="https://www.mt.inf.tu-dresden.de/cnt/uploads/news_header.png" alt=""/>Over the past two weeks, we had the pleasure of welcoming another special guest at the Interactive Media Lab Dresden: Prof. Can Liu, Professor of Human-Computer Interaction at the City University of Hong Kong. During her two-week stay, we had the opportunity to present her a selection of our current demos and to exchange in depth about our research and [&#8230;]]]></description>
							<content:encoded><![CDATA[<img src="https://www.mt.inf.tu-dresden.de/cnt/uploads/news_header.png" alt=""/><p>Over the past two weeks, we had the pleasure of welcoming another special guest at the Interactive Media Lab Dresden: <strong>Prof. Can Liu</strong>, Professor of Human-Computer Interaction at the City University of Hong Kong.</p>
<p><span id="more-31659"></span></p>
<p>During her two-week stay, we had the opportunity to present her a selection of our current demos and to exchange in depth about our research and the common ground between our two labs. In addition, we worked together on a project that we plan to submit to the next <strong>CHI conference</strong>.</p>
<p>A highlight of her visit was her talk titled <strong>“AI-powered Semantic and Multimodal Interfaces”</strong>, which offered exciting insights into her current work and research focus.</p>
<p>We warmly thank Prof. Liu for the enriching visit and look forward to further collaboration.</p>
]]></content:encoded>
					</item>
		<item>
		<title>Tim Dwyer besucht das IMLD</title>
		<link>https://www.mt.inf.tu-dresden.de/news/2025/08/tim-dwyer-besucht-das-imld/</link>
		<pubDate>Mon, 11 Aug 2025 09:10:24 +0000</pubDate>
		<guid isPermaLink="false">https://imld.de/?p=31550</guid>
		<description><![CDATA[<img src="https://www.mt.inf.tu-dresden.de/cnt/uploads/news_header_image-scaled.png" alt=""/>Recently we had the pleasure of welcoming a special guest at the IMLD: Prof. Tim Dwyer from the Department of Human Centred Computing at Monash University, Australia. As one of the leading minds in the field of Immersive Analytics, he brought extensive expertise and exciting insights into current research trends that are also highly relevant to our own research. During his one-day visit [&#8230;]]]></description>
							<content:encoded><![CDATA[<img src="https://www.mt.inf.tu-dresden.de/cnt/uploads/news_header_image-scaled.png" alt=""/><p>Recently we had the pleasure of welcoming a special guest at the IMLD: <strong>Prof. Tim Dwyer</strong> from the Department of Human Centred Computing at Monash University, Australia. As one of the leading minds in the field of <em>Immersive Analytics</em>, he brought extensive expertise and exciting insights into current research trends that are also highly relevant to our own research.<span id="more-31550"></span></p>
<p>During his one-day visit, we presented a selection of our current demos and, in lively conversations, took the opportunity to exchange on research topics and brainstorm ideas for future collaboration.</p>
<p>A highlight of the day was his very well-attended talk “<em>Emerging Topics in Visual and Immersive Analytics</em>”, in which he presented current questions and developments in this research field as well as recent examples from his own research.</p>
<p>We warmly thank Prof. Dwyer for his visit and the inspiring exchange, and we look forward to future joint projects.</p>
]]></content:encoded>
					</item>
		<item>
		<title>LNDW 2025</title>
		<link>https://www.mt.inf.tu-dresden.de/news/2025/06/lndw-2025/</link>
		<pubDate>Fri, 27 Jun 2025 12:12:27 +0000</pubDate>
		<guid isPermaLink="false">https://imld.de/?p=31405</guid>
		<description><![CDATA[<img src="https://www.mt.inf.tu-dresden.de/cnt/uploads/News_pic-scaled.png" alt=""/>This year's Lange Nacht der Wissenschaft in Dresden was warm and insightful, and our Interactive Media Lab was once again represented with a variety of interactive exhibits in the APB building. The evening began with a special tour during which Mayor Jan Pratzka and institute heads from various Dresden universities could explore three selected demonstrations with a medical and biological focus [&#8230;]]]></description>
							<content:encoded><![CDATA[<img src="https://www.mt.inf.tu-dresden.de/cnt/uploads/News_pic-scaled.png" alt=""/><p>This year's Lange Nacht der Wissenschaft in Dresden was warm and insightful, and our Interactive Media Lab was once again represented with a variety of interactive exhibits in the APB building.</p>
<p><span id="more-31405"></span></p>
<p>The evening began with a special tour during which Mayor Jan Pratzka and institute heads from various Dresden universities could explore three selected demonstrations with a medical and biological focus: <em>BacteriaZoom</em>, an augmented reality application within the <em>Bakteriopolis</em> project (an ongoing cooperation with the Chair of General Microbiology in the DFG-funded priority program SPP 2389), allowed exploring a bacterial model in 3D. The tour was complemented by a project on registering medical point clouds via mid-air gestures on a stereoscopic display as well as by <a href="https://imld.de/research/research-projects/endomersion-an-immersive-remote-guidance-and-feedback-system-for-robot-assisted-minimally-invasive-surgery/"><em>Endomersion</em></a>, our system for AR-supported remote guidance in minimally invasive surgery (MIS). After the exclusive tour, these exhibits were moved to our lab rooms and opened to the general public.</p>
<p>In our “Mixed Reality and Data Visualization” room, visitors could discover further highlights: <a href="https://imld.de/research/research-projects/pearl/"><em>PEARL</em></a>, an AR application for analyzing human movements, <a href="https://imld.de/research/publications/evonne-cgf/"><em>Evonne</em></a> and <a href="https://imld.de/research/research-projects/pmc-vis/"><em>PMC-VIS</em></a>, two projects for visualizing computational models, as well as our research on <a href="https://imld.de/research/research-projects/mcv-displaywall/"><em>Multiple Coordinated Views</em></a> for effective information visualization on large-format displays.</p>
<p>Further information on these projects can be found via the links below.</p>
<div class="block-list-of-publications"><h2>Verwandte Projekte</h2><div class="publications"><ul><li class="pub-entry pub-entry-30632"><div class="pub-content"><a class="no-icon pub-preview" href="/cnt/uploads/Ellenberg2025_IEEE_VR25_Endomersion_Preprint.pdf"><img decoding="async" src="/cnt/uploads/IEEEVR-Endomersion-Thumbnail-e1742301247633-150x150.png" alt="Endomersion: An Immersive Remote Guidance and Feedback System for Robot-Assisted Minimally Invasive Surgery" loading="lazy"></a><div class="pub-data"><h3 class="pub-title"><a class="no-icon" href="/cnt/uploads/Ellenberg2025_IEEE_VR25_Endomersion_Preprint.pdf" title="Endomersion - Author Version: Endomersion: An Immersive Remote Guidance and Feedback System for Robot-Assisted Minimally Invasive Surgery">Endomersion: An Immersive Remote Guidance and Feedback System for Robot-Assisted Minimally Invasive Surgery</a></h3><p class="pub-authors"><span><a href="/our-group/team/mats-ole-ellenberg/">Ellenberg, M.</a>;</span> <span><a href="/our-group/team/katja-krug/">Krug, K.</a>;</span> <span>Fan, Y.;</span> <span>Krzywinski, J.;</span> <span><a href="/our-group/team/raimund-dachselt/">Dachselt, R.</a>;</span> <span>Younis, R.;</span> <span>Wagner, M.;</span> <span>Weitz, J.;</span> <span>Rodriguez, A.;</span> <span>Just, G.;</span> <span>Bodenstedt, S.;</span> <span>Speidel, S.</span></p><p class="pub-meta"><span class="pub-meta-book">In&nbsp;<em>2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW).</em></span><span class="pub-meta-event">IEEE VR '25, Saint-Malo, France.</span><span class="pub-meta-publisher">IEEE,</span><span class="pub-meta-date">2025.</span><span class="pub-meta-doi"><a class="label label-info" href="https://doi.org/10.1109/VRW66409.2025.00450" title="Gehe zu: https://doi.org/10.1109/VRW66409.2025.00450">10.1109/VRW66409.2025.00450</a></span></p><!-- / .pub-meta --></div><!-- / .pub-data --><div class="pub-features"><ul><li><a 
href="/cnt/uploads/Ellenberg2025_IEEE_VR25_Endomersion_Preprint.pdf" title="Dokument herunterladen Dokument: Endomersion: An Immersive Remote Guidance and Feedback System for Robot-Assisted Minimally Invasive Surgery">Dokument <span class="material-icons md-18">article</span></a></li><li><a href="/research/research-projects/endomersion-an-immersive-remote-guidance-and-feedback-system-for-robot-assisted-minimally-invasive-surgery/" title="Link zur Hauptprojektseite.">Projektseite <span class="material-icons md-18">link</span></a></li><li><a class="morematerial-toggle" href="#">Materialien <span class="material-icons md-18">loupe</span></a></li><li><a class="bibtex-toggle" href="#">BibTeX <span class="material-icons md-18">format_quote</span></a></li></ul></div><!-- / .pub-features --></div><!-- / .pub-content --><div class="pub-bibtex"><p>@inproceedings{ellenberg2025endomersion,<br />
&nbsp;&nbsp;&nbsp;author = {Mats Ole Ellenberg and Katja Krug and Yichen Fan and Jens Krzywinski and Raimund Dachselt and Rayan Younis and Martin Wagner and J\"{u}rgen Weitz and Ariel Rodriguez and Gregor Just and Sebastian Bodenstedt and Stefanie Speidel},<br />
&nbsp;&nbsp;&nbsp;title = {Endomersion: An Immersive Remote Guidance and Feedback System for Robot-Assisted Minimally Invasive Surgery},<br />
&nbsp;&nbsp;&nbsp;booktitle = {2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},<br />
&nbsp;&nbsp;&nbsp;series = {IEEE VR '25},<br />
&nbsp;&nbsp;&nbsp;year = {2025},<br />
&nbsp;&nbsp;&nbsp;month = {03},<br />
&nbsp;&nbsp;&nbsp;location = {Saint-Malo, France},<br />
&nbsp;&nbsp;&nbsp;numpages = {2},<br />
&nbsp;&nbsp;&nbsp;doi = {10.1109/VRW66409.2025.00450},<br />
&nbsp;&nbsp;&nbsp;url = {https://doi.org/10.1109/VRW66409.2025.00450},<br />
&nbsp;&nbsp;&nbsp;publisher = {IEEE},<br />
&nbsp;&nbsp;&nbsp;keywords = {Mixed Reality, Remote Guidance, Telestration, Minimally Invasive Surgery}<br />
}</p></div><!-- / .pub-bibtex --><div class="pub-morematerial"><p class="heading">Weitere Materialien</p><p> <a href="https://www.imld.de/endomersion-IEEE-VR-video/" target="_blank" title="Video"><i class="icon-external-link"></i> Video</a></p></div><!-- / .pub-more-content --></li><!-- / .pub-entry --><li class="pub-entry pub-entry-28781"><div class="pub-content"><a class="no-icon pub-preview" href="/cnt/uploads/PMC-VIS.pdf"><img decoding="async" src="/cnt/uploads/tool-overview-e1694383521101-150x150.png" alt="PMC-VIS: An Interactive Visualization Tool for Probabilistic Model Checking" loading="lazy"></a><div class="pub-data"><h3 class="pub-title"><a class="no-icon" href="/cnt/uploads/PMC-VIS.pdf" title="Download: PMC-VIS: An Interactive Visualization Tool for Probabilistic Model Checking">PMC-VIS: An Interactive Visualization Tool for Probabilistic Model Checking</a></h3><p class="pub-authors"><span>Korn, M.;</span> <span><a href="/our-group/team/julian-mendez/">Méndez, J.</a>;</span> <span>Klüppelholz, S.;</span> <span><a href="/our-group/team/ricardo-langner/">Langner, R.</a>;</span> <span>Baier, C.;</span> <span><a href="/our-group/team/raimund-dachselt/">Dachselt, R.</a></span></p><p class="pub-meta"><span class="pub-meta-book">In&nbsp;<em>Software Engineering and Formal Methods.</em></span><span class="pub-meta-event">SEFM '23, Eindhoven, Netherlands.</span><span class="pub-meta-publisher">Springer,</span><span class="pub-meta-date">2023.</span><span class="pub-meta-doi"><a class="label label-info" href="https://doi.org/10.1007/978-3-031-47115-5_20" title="Gehe zu: https://doi.org/10.1007/978-3-031-47115-5_20">10.1007/978-3-031-47115-5_20</a></span></p><!-- / .pub-meta --></div><!-- / .pub-data --><div class="pub-features"><ul><li><a href="/cnt/uploads/PMC-VIS.pdf" title="Dokument herunterladen Dokument: PMC-VIS: An Interactive Visualization Tool for Probabilistic Model Checking">Dokument <span class="material-icons md-18">article</span></a></li><li><a 
href="/research/research-projects/pmc-vis/" title="Link zur Hauptprojektseite.">Projektseite <span class="material-icons md-18">link</span></a></li><li><a class="morematerial-toggle" href="#">Materialien <span class="material-icons md-18">loupe</span></a></li><li><a class="bibtex-toggle" href="#">BibTeX <span class="material-icons md-18">format_quote</span></a></li></ul></div><!-- / .pub-features --></div><!-- / .pub-content --><div class="pub-bibtex"><p>@inproceedings{pmc-vis-2023,<br />
&nbsp;&nbsp;&nbsp;author = {Max Korn and Juli\'{a}n M\'{e}ndez and Sascha Kl\"{u}ppelholz and Ricardo Langner and Christel Baier and Raimund Dachselt},<br />
&nbsp;&nbsp;&nbsp;title = {PMC-VIS: An Interactive Visualization Tool for Probabilistic Model Checking},<br />
&nbsp;&nbsp;&nbsp;booktitle = {Software Engineering and Formal Methods},<br />
&nbsp;&nbsp;&nbsp;series = {Lecture Notes in Computer Science},<br />
&nbsp;&nbsp;&nbsp;volume = {14323},<br />
&nbsp;&nbsp;&nbsp;year = {2023},<br />
&nbsp;&nbsp;&nbsp;month = {11},<br />
&nbsp;&nbsp;&nbsp;location = {Eindhoven, Netherlands},<br />
&nbsp;&nbsp;&nbsp;doi = {10.1007/978-3-031-47115-5_20},<br />
&nbsp;&nbsp;&nbsp;publisher = {Springer}<br />
}</p></div><!-- / .pub-bibtex --><div class="pub-morematerial"><p class="heading">Weitere Materialien</p><p> <a href=" https://zenodo.org/record/8172531" target="_blank" title="Accompanying Artifact"><i class="icon-external-link"></i> Accompanying Artifact</a></p></div><!-- / .pub-more-content --></li><!-- / .pub-entry --><li class="pub-entry pub-entry-27097"><div class="pub-content"><a class="no-icon pub-preview" href="/cnt/uploads/Mendez2023-Evonne.pdf"><img decoding="async" src="/cnt/uploads/thumbnail-evonne-150x150.png" alt="Evonne: A Visual Tool for Explaining Reasoning with OWL Ontologies and Supporting Interactive Debugging" loading="lazy"></a><div class="pub-data"><h3 class="pub-title"><a class="no-icon" href="/cnt/uploads/Mendez2023-Evonne.pdf" title="Download: Evonne: A Visual Tool for Explaining Reasoning with OWL Ontologies and Supporting Interactive Debugging">Evonne: A Visual Tool for Explaining Reasoning with OWL Ontologies and Supporting Interactive Debugging</a></h3><p class="pub-authors"><span><a href="/our-group/team/julian-mendez/">Méndez, J.</a>;</span> <span>Alrabbaa, C.;</span> <span>Koopmann, P.;</span> <span><a href="/our-group/team/ricardo-langner/">Langner, R.</a>;</span> <span>Baader, F.;</span> <span><a href="/our-group/team/raimund-dachselt/">Dachselt, R.</a></span></p><p class="pub-meta"><span class="pub-meta-journal">In <em>Computer Graphics Forum</em> (Band 42, Ausgabe 6).</span><span class="pub-meta-event">EuroVis '23, Leipzig, Germany.</span><span class="pub-meta-publisher">John Wiley & Sons, Ltd,</span><span class="pub-meta-pages">Seite&nbsp;e14730,</span><span class="pub-meta-date">2023.</span><span class="pub-meta-doi"><a class="label label-info" href="https://doi.org/10.1111/cgf.14730" title="Gehe zu: https://doi.org/10.1111/cgf.14730">10.1111/cgf.14730</a></span></p><!-- / .pub-meta --></div><!-- / .pub-data --><div class="pub-features"><ul><li><a href="/cnt/uploads/Mendez2023-Evonne.pdf" title="Dokument herunterladen 
Dokument: Evonne: A Visual Tool for Explaining Reasoning with OWL Ontologies and Supporting Interactive Debugging">Dokument <span class="material-icons md-18">article</span></a></li><li><a href="/research/research-projects/evonne/" title="Link zur Hauptprojektseite.">Projektseite <span class="material-icons md-18">link</span></a></li><li><a class="morematerial-toggle" href="#">Materialien <span class="material-icons md-18">loupe</span></a></li><li><a class="bibtex-toggle" href="#">BibTeX <span class="material-icons md-18">format_quote</span></a></li></ul></div><!-- / .pub-features --></div><!-- / .pub-content --><div class="pub-bibtex"><p>@article{MAKLBD-CGF-2023,<br />
&nbsp;&nbsp;&nbsp;author = {Juli\'{a}n M\'{e}ndez and Christian Alrabbaa and Patrick Koopmann and Ricardo Langner and Franz Baader and Raimund Dachselt},<br />
&nbsp;&nbsp;&nbsp;title = {Evonne: A Visual Tool for Explaining Reasoning with OWL Ontologies and Supporting Interactive Debugging},<br />
&nbsp;&nbsp;&nbsp;journal = {Computer Graphics Forum},<br />
&nbsp;&nbsp;&nbsp;volume = {42},<br />
&nbsp;&nbsp;&nbsp;issue = {6},<br />
&nbsp;&nbsp;&nbsp;year = {2023},<br />
&nbsp;&nbsp;&nbsp;month = {3},<br />
&nbsp;&nbsp;&nbsp;location = {Leipzig, Germany},<br />
&nbsp;&nbsp;&nbsp;pages = {e14730},<br />
&nbsp;&nbsp;&nbsp;doi = {10.1111/cgf.14730},<br />
&nbsp;&nbsp;&nbsp;publisher = {John Wiley \& Sons, Ltd}<br />
}</p></div><!-- / .pub-bibtex --><div class="pub-morematerial"><p class="heading">Weitere Materialien</p><p> <a href=" http://141.76.67.139:7007/" target="_blank" title="Online Demo"><i class="icon-external-link"></i> Online Demo</a></p></div><!-- / .pub-more-content --></li><!-- / .pub-entry --><li class="pub-entry pub-entry-27752"><div class="pub-content"><a class="no-icon pub-preview" href="/cnt/uploads/Luo2023_CHI23_AuthorVersion_PEARL.pdf"><img decoding="async" src="/cnt/uploads/CHI23_PEARL_Paper-Preview-Image-150x150.jpg" alt="Pearl: Physical Environment based Augmented Reality Lenses for In-Situ Human Movement Analysis" loading="lazy"></a><div class="pub-data"><h3 class="pub-title"><a class="no-icon" href="/cnt/uploads/Luo2023_CHI23_AuthorVersion_PEARL.pdf" title="Full Paper: Pearl: Physical Environment based Augmented Reality Lenses for In-Situ Human Movement Analysis">Pearl: Physical Environment based Augmented Reality Lenses for In-Situ Human Movement Analysis</a></h3><p class="pub-authors"><span><a href="/our-group/team/weizhou-luo/">Luo, W.</a>;</span> <span>Yu, Z.;</span> <span><a href="/our-group/team/rufat-rzayev/">Rzayev, R.</a>;</span> <span><a href="/our-group/team/marc-satkowski/">Satkowski, M.</a>;</span> <span>Gumhold, S.;</span> <span>McGinity, M.;</span> <span><a href="/our-group/team/raimund-dachselt/">Dachselt, R.</a></span></p><p class="pub-meta"><span class="pub-meta-book">In&nbsp;<em>Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems.</em></span><span class="pub-meta-event">CHI '23, Hamburg, Germany.</span><span class="pub-meta-publisher">Association for Computing Machinery,</span><span class="pub-meta-date">2023.</span><span class="pub-meta-doi"><a class="label label-info" href="https://doi.org/10.1145/3544548.3580715" title="Gehe zu: https://doi.org/10.1145/3544548.3580715">10.1145/3544548.3580715</a></span></p><!-- / .pub-meta --></div><!-- / .pub-data --><div class="pub-features"><ul><li><a 
href="/cnt/uploads/Luo2023_CHI23_AuthorVersion_PEARL.pdf" title="Dokument herunterladen Dokument: Pearl: Physical Environment based Augmented Reality Lenses for In-Situ Human Movement Analysis">Dokument <span class="material-icons md-18">article</span></a></li><li><a href="/research/research-projects/pearl/" title="Link zur Hauptprojektseite.">Projektseite <span class="material-icons md-18">link</span></a></li><li><a class="morematerial-toggle" href="#">Materialien <span class="material-icons md-18">loupe</span></a></li><li><a class="bibtex-toggle" href="#">BibTeX <span class="material-icons md-18">format_quote</span></a></li></ul></div><!-- / .pub-features --></div><!-- / .pub-content --><div class="pub-bibtex"><p>@inproceedings{luo2023pearl,<br />
&nbsp;&nbsp;&nbsp;author = {Weizhou Luo and Zhongyuan Yu and Rufat Rzayev and Marc Satkowski and Stefan Gumhold and Matthew McGinity and Raimund Dachselt},<br />
&nbsp;&nbsp;&nbsp;title = {Pearl: Physical Environment based Augmented Reality Lenses for In-Situ Human Movement Analysis},<br />
&nbsp;&nbsp;&nbsp;booktitle = {Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems},<br />
&nbsp;&nbsp;&nbsp;series = {CHI '23},<br />
&nbsp;&nbsp;&nbsp;number = {381},<br />
&nbsp;&nbsp;&nbsp;year = {2023},<br />
&nbsp;&nbsp;&nbsp;month = {04},<br />
&nbsp;&nbsp;&nbsp;isbn = {9781450394215},<br />
&nbsp;&nbsp;&nbsp;location = {Hamburg, Germany},<br />
&nbsp;&nbsp;&nbsp;numpages = {15},<br />
&nbsp;&nbsp;&nbsp;doi = {10.1145/3544548.3580715},<br />
&nbsp;&nbsp;&nbsp;url = {https://doi.org/10.1145/3544548.3580715},<br />
&nbsp;&nbsp;&nbsp;publisher = {Association for Computing Machinery},<br />
&nbsp;&nbsp;&nbsp;address = {New York, NY, USA},<br />
&nbsp;&nbsp;&nbsp;keywords = {Immersive Analytics, physical referents, augmented/mixed reality, affordance, In-situ visualization, movement data analysis}<br />
}</p></div><!-- / .pub-bibtex --><div class="pub-morematerial"><p class="heading">Weitere Materialien</p><p> <a href="https://youtu.be/Gr532F6Sh1o" target="_blank" title="30-Second Video Preview"><i class="icon-external-link"></i> 30-Second Video Preview</a>, <a href="https://youtu.be/Dv_1q5rpwcw" target="_blank" title="5-min Video Figure"><i class="icon-external-link"></i> 5-min Video Figure</a>, <a href="https://youtu.be/rwcnPBWavAA" target="_blank" title="10-min Recorded Talk"><i class="icon-external-link"></i> 10-min Recorded Talk</a>, <a href="https://imld.de/cnt/uploads/Luo2023_CHI23_Appendix_PEARL.pdf" target="_blank" title="Appendix"><i class="icon-external-link"></i> Appendix</a>, <a href="https://github.com/PearlDeveloper/PEARL-Physical-Environment-based-Augmented-Reality-Lenses" target="_blank" title="GitHub Repository"><i class="icon-external-link"></i> GitHub Repository</a></p></div><!-- / .pub-more-content --></li><!-- / .pub-entry --><li class="pub-entry pub-entry-27894"><div class="pub-content"><a class="no-icon pub-preview" href="/cnt/uploads/Krug2023_CHI23AE_AuthorVersion_PointCloudAlignment.pdf"><img decoding="async" src="/cnt/uploads/CHI23_PointCloud_Paper-Preview-Image-150x150.jpg" alt="Point Cloud Alignment through Mid-Air Gestures on a Stereoscopic Display" loading="lazy"></a><div class="pub-data"><h3 class="pub-title"><a class="no-icon" href="/cnt/uploads/Krug2023_CHI23AE_AuthorVersion_PointCloudAlignment.pdf" title="Full Paper: Point Cloud Alignment through Mid-Air Gestures on a Stereoscopic Display">Point Cloud Alignment through Mid-Air Gestures on a Stereoscopic Display</a></h3><p class="pub-authors"><span><a href="/our-group/team/katja-krug/">Krug, K.</a>;</span> <span><a href="/our-group/team/marc-satkowski/">Satkowski, M.</a>;</span> <span>Docea, R.;</span> <span>Ku, T.;</span> <span><a href="/our-group/team/raimund-dachselt/">Dachselt, R.</a></span></p><p class="pub-meta"><span class="pub-meta-book">In&nbsp;<em>Extended Abstracts of 
the 2023 CHI Conference on Human Factors in Computing Systems.</em></span><span class="pub-meta-event">CHI '23, Hamburg, Germany.</span><span class="pub-meta-publisher">ACM,</span><span class="pub-meta-date">2023.</span><span class="pub-meta-doi"><a class="label label-info" href="https://doi.org/10.1145/3544549.3585862" title="Gehe zu: https://doi.org/10.1145/3544549.3585862">10.1145/3544549.3585862</a></span></p><!-- / .pub-meta --></div><!-- / .pub-data --><div class="pub-features"><ul><li><a href="/cnt/uploads/Krug2023_CHI23AE_AuthorVersion_PointCloudAlignment.pdf" title="Dokument herunterladen Dokument: Point Cloud Alignment through Mid-Air Gestures on a Stereoscopic Display">Dokument <span class="material-icons md-18">article</span></a></li><li><a class="morematerial-toggle" href="#">Materialien <span class="material-icons md-18">loupe</span></a></li><li><a class="bibtex-toggle" href="#">BibTeX <span class="material-icons md-18">format_quote</span></a></li></ul></div><!-- / .pub-features --></div><!-- / .pub-content --><div class="pub-bibtex"><p>@inproceedings{krug2023point,<br />
&nbsp;&nbsp;&nbsp;author = {Katja Krug and Marc Satkowski and Reuben Docea and Tzu-Yu Ku and Raimund Dachselt},<br />
&nbsp;&nbsp;&nbsp;title = {Point Cloud Alignment through Mid-Air Gestures on a Stereoscopic Display},<br />
&nbsp;&nbsp;&nbsp;booktitle = {Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems},<br />
&nbsp;&nbsp;&nbsp;series = {CHI EA '23},<br />
&nbsp;&nbsp;&nbsp;year = {2023},<br />
&nbsp;&nbsp;&nbsp;month = {04},<br />
&nbsp;&nbsp;&nbsp;location = {Hamburg, Germany},<br />
&nbsp;&nbsp;&nbsp;doi = {10.1145/3544549.3585862},<br />
&nbsp;&nbsp;&nbsp;publisher = {Association for Computing Machinery},<br />
&nbsp;&nbsp;&nbsp;address = {New York, NY, USA}<br />
}</p></div><!-- / .pub-bibtex --><div class="pub-morematerial"><p class="heading">Weitere Materialien</p><p> <a href="/docs/projects/point-cloud-registration/Krug2023_CHI23AE_Poster_PointCloudAlignment.pdf" target="_blank" title="Poster"><i class="icon-external-link"></i> Poster</a>, <a href="https://youtu.be/9hxx3NPbL8M" target="_blank" title="Pre-Recorded Talk"><i class="icon-external-link"></i> Pre-Recorded Talk</a>, <a href="https://youtu.be/RdS1TzleUXQ" target="_blank" title="Teaser"><i class="icon-external-link"></i> Teaser</a>, <a href="/docs/projects/point-cloud-registration/supplemental-material.zip" target="_blank" title="Supplemental Material"><i class="icon-external-link"></i> Supplemental Material</a></p></div><!-- / .pub-more-content --></li><!-- / .pub-entry --><li class="pub-entry pub-entry-14295"><div class="pub-content"><a class="no-icon pub-preview" href="/cnt/uploads/Langner-2018_MCV-LargeDisplays_InfoVis2018.pdf"><img decoding="async" src="/cnt/uploads/Langner-2018_mcv-wall-display_VIS18_thumb-150x150.jpg" alt="Multiple Coordinated Views at Large Displays for Multiple Users: Empirical Findings on User Behavior, Movements, and Distances" loading="lazy"></a><div class="pub-data"><h3 class="pub-title"><a class="no-icon" href="/cnt/uploads/Langner-2018_MCV-LargeDisplays_InfoVis2018.pdf" title="Artikel: Multiple Coordinated Views at Large Displays for Multiple Users: Empirical Findings on User Behavior, Movements, and Distances">Multiple Coordinated Views at Large Displays for Multiple Users: Empirical Findings on User Behavior, Movements, and Distances</a></h3><p class="pub-authors"><span><a href="/our-group/team/ricardo-langner/">Langner, R.</a>;</span> <span><a href="/our-group/team/ulrike-kister/">Kister, U.</a>;</span> <span><a href="/our-group/team/raimund-dachselt/">Dachselt, R.</a></span></p><p class="pub-meta"><span class="pub-meta-journal">In <em>IEEE Transactions on Visualization and Computer Graphics</em> (Band 25, Ausgabe 
1).</span><span class="pub-meta-event">InfoVis '18, Berlin, Germany.</span><span class="pub-meta-publisher">IEEE,</span><span class="pub-meta-pages">Seite&nbsp;608-618,</span><span class="pub-meta-date">2019.</span><span class="pub-meta-doi"><a class="label label-info" href="https://doi.org/10.1109/TVCG.2018.2865235" title="Gehe zu: https://doi.org/10.1109/TVCG.2018.2865235">10.1109/TVCG.2018.2865235</a></span><span class="pub-meta-comments"><i class="fa icon-info-sign"></i> Online publication date: 20 August 2018</span></p><!-- / .pub-meta --></div><!-- / .pub-data --><div class="pub-features"><ul><li><a href="/cnt/uploads/Langner-2018_MCV-LargeDisplays_InfoVis2018.pdf" title="Dokument herunterladen Dokument: Multiple Coordinated Views at Large Displays for Multiple Users: Empirical Findings on User Behavior, Movements, and Distances">Dokument <span class="material-icons md-18">article</span></a></li><li><a href="/research/research-projects/mcv-displaywall/" title="Link zur Hauptprojektseite.">Projektseite <span class="material-icons md-18">link</span></a></li><li><a class="morematerial-toggle" href="#">Materialien <span class="material-icons md-18">loupe</span></a></li><li><a class="bibtex-toggle" href="#">BibTeX <span class="material-icons md-18">format_quote</span></a></li></ul></div><!-- / .pub-features --></div><!-- / .pub-content --><div class="pub-bibtex"><p>@article{langner2019multiple,<br />
&nbsp;&nbsp;&nbsp;author = {Ricardo Langner and Ulrike Kister and Raimund Dachselt},<br />
&nbsp;&nbsp;&nbsp;title = {Multiple Coordinated Views at Large Displays for Multiple Users: Empirical Findings on User Behavior, Movements, and Distances},<br />
&nbsp;&nbsp;&nbsp;journal = {IEEE Transactions on Visualization and Computer Graphics},<br />
&nbsp;&nbsp;&nbsp;volume = {25},<br />
&nbsp;&nbsp;&nbsp;number = {1},<br />
&nbsp;&nbsp;&nbsp;year = {2019},<br />
&nbsp;&nbsp;&nbsp;month = {1},<br />
&nbsp;&nbsp;&nbsp;location = {Berlin, Germany},<br />
&nbsp;&nbsp;&nbsp;pages = {608--618},<br />
&nbsp;&nbsp;&nbsp;numpages = {11},<br />
&nbsp;&nbsp;&nbsp;doi = {10.1109/TVCG.2018.2865235},<br />
&nbsp;&nbsp;&nbsp;url = {https://doi.org/10.1109/TVCG.2018.2865235},<br />
&nbsp;&nbsp;&nbsp;publisher = {IEEE},<br />
&nbsp;&nbsp;&nbsp;keywords = {multiple coordinated views, wall-sized displays, mobile devices, distant interaction, physical navigation, user behavior, user movements, multi-user, collaborative data analysis}<br />
}</p></div><!-- / .pub-bibtex --><div class="pub-morematerial"><p class="heading">Weitere Materialien</p><p> <a href="https://youtu.be/kiXMn2VPZek" target="_blank" title="Video"><i class="icon-external-link"></i> Video</a>, 	<a href="/cnt/uploads/Langner-2018_MCV-DisplayWall_infovis-18_v7_exp.pdf" title="Download Talk: Multiple Coordinated Views at Large Displays for Multiple Users: Empirical Findings on User Behavior, Movements, and Distances">Talk</a></p></div><!-- / .pub-more-content --></li><!-- / .pub-entry --></ul></div></div>
]]></content:encoded>
					</item>
	</channel>
</rss>
