publications
Publications by category in reverse chronological order, generated by jekyll-scholar.
2024
- [IVAPP ’25] Immersive In Situ Visualizations for Monitoring Architectural-Scale Multiuser MR Experiences. Zhongyuan Yu, Daniel Zeidler, Krishnan Chandran, and 3 more authors. 2024.
Mixed reality (MR) environments provide great value in displaying 3D virtual content. Systems facilitating co-located multiuser MR (Co-MUMR) experiences allow multiple users to be co-present in a shared immersive virtual environment with natural locomotion. They can be used to support a broad spectrum of applications, such as immersive presentations, public exhibitions, and psychological experiments. However, based on our experiences and reflections in delivering Co-MUMR experiences in large architectural spaces, we found that the crucial challenge for hosts in ensuring the quality of experience is their lack of insight into real-time information on visitor engagement, device performance, and system events. This work facilitates the display of such information by introducing immersive in situ visualizations.
@misc{yu2024immersivesituvisualizationsmonitoring,
  title = {Immersive In Situ Visualizations for Monitoring Architectural-Scale Multiuser MR Experiences},
  author = {Yu, Zhongyuan and Zeidler, Daniel and Chandran, Krishnan and Engeln, Lars and Mende, Kelsang and McGinity, Matthew},
  year = {2024},
  eprint = {2412.15918},
  archiveprefix = {arXiv},
  primaryclass = {cs.HC},
  url = {https://arxiv.org/abs/2412.15918},
}
- [RAGI ’24] An immersive labeling method for large point clouds. Tianfang Lin, Zhongyuan Yu, Matthew McGinity, and 1 more author. Computers & Graphics, 2024.
3D point clouds, such as those produced by 3D scanners, often require labeling – the accurate classification of each point into structural or semantic categories – before they can be used in their intended application. However, in the absence of fully automated methods, such labeling must be performed manually, which can prove extremely time- and labor-intensive. To address this, we present a virtual reality tool for accelerating and improving the manual labeling of very large 3D point clouds. The labeling tool provides a variety of 3D interactions for efficient viewing, selection and labeling of points using the controllers of consumer VR kits. The main contribution of our work is a mixed CPU/GPU-based data structure that supports rendering, selection and labeling with immediate visual feedback at the high frame rates necessary for a convenient VR experience. Our mixed CPU/GPU data structure supports fluid interaction with very large point clouds in VR, which is not possible with existing continuous level-of-detail rendering algorithms. We evaluate our method with 25 users on tasks involving point clouds of up to 50 million points and find convincing results that support the case for VR-based point cloud labeling.
@article{LIN2024104101,
  title = {An immersive labeling method for large point clouds},
  journal = {Computers & Graphics},
  volume = {124},
  pages = {104101},
  year = {2024},
  issn = {0097-8493},
  doi = {10.1016/j.cag.2024.104101},
  url = {https://www.sciencedirect.com/science/article/pii/S009784932400236X},
  author = {Lin, Tianfang and Yu, Zhongyuan and McGinity, Matthew and Gumhold, Stefan},
  keywords = {Virtual reality, Point cloud, Immersive labeling},
}
- [NICOInt ’24] An Immersive Method for Extracting Structural Information from Unorganized Point Clouds. Zhongyuan Yu, Tianfang Lin, Stefan Gumhold, and 1 more author. In 2024 Nicograph International (NicoInt), 2024.
Point clouds find extensive applications across diverse domains such as construction, architecture, and archaeology due to their capacity to encapsulate rich datasets. Given the density and detail of these datasets, extracting accurate structural information (geometrical and connectivity information) demands sophisticated algorithms and tools. Recent advances in virtual reality (VR) technology and rendering techniques have significantly improved the ability to visualize these point cloud datasets, affording users an immersive experience within the data. This immersive capability holds the potential to streamline the process of extracting valuable information from these datasets. In this work, we explore the feasibility of extracting and visualizing structural information from point clouds with immersive interfaces. We propose strategies to enhance human perception of the underlying structure and interactive methods to extract face, edge, vertex, and connectivity data from raw, unorganized point clouds. We evaluate our method on public datasets. We believe our work represents a promising attempt to integrate human insights into the data processing pipeline.
@inproceedings{yu_pointstructuring,
  author = {Yu, Zhongyuan and Lin, Tianfang and Gumhold, Stefan and McGinity, Matthew},
  booktitle = {2024 Nicograph International (NicoInt)},
  title = {An Immersive Method for Extracting Structural Information from Unorganized Point Clouds},
  year = {2024},
  pages = {24-31},
  keywords = {Point cloud compression;Geometry;Pipelines;Data visualization;Prototypes;Immersive experience;Rendering (computer graphics);Point Cloud;Virtual Reality/Augmented Reality;Computer Graphics},
  doi = {10.1109/NICOInt62634.2024.00014},
}
- [TVCG] ViewR: Architectural-Scale Multi-User Mixed Reality With Mobile Head-Mounted Displays. Florian Schier, Daniel Zeidler, Krishnan Chandran, and 2 more authors. IEEE Transactions on Visualization and Computer Graphics, 2024.
@article{10210595,
  author = {Schier, Florian and Zeidler, Daniel and Chandran, Krishnan and Yu, Zhongyuan and McGinity, Matthew},
  journal = {IEEE Transactions on Visualization and Computer Graphics},
  title = {ViewR: Architectural-Scale Multi-User Mixed Reality With Mobile Head-Mounted Displays},
  year = {2024},
  volume = {30},
  number = {8},
  pages = {5609-5622},
  keywords = {Virtual reality;Mixed reality;Visualization;Collaboration;Head-mounted displays;Cameras;Resists;Visualization systems;mixed / augmented reality;virtual reality;collaborative systems;co-located systems},
  doi = {10.1109/TVCG.2023.3299781},
}
2023
- [VRST ’23] Dynascape: Immersive Authoring of Real-World Dynamic Scenes with Spatially Tracked RGB-D Videos. Zhongyuan Yu, Daniel Zeidler, Victor Victor, and 1 more author. In Proceedings of the 29th ACM Symposium on Virtual Reality Software and Technology, Christchurch, New Zealand, 2023.
In this paper, we present Dynascape, an immersive approach to the composition and playback of dynamic real-world scenes in mixed and virtual reality. We use spatially tracked RGB-D cameras to capture point cloud representations of arbitrary dynamic real-world scenes. Dynascape provides a suite of tools for spatial and temporal editing and composition of such scenes, as well as fine control over their visual appearance. We also explore strategies for spatiotemporal navigation and different tools for the in situ authoring and viewing of mixed and virtual reality scenes. Dynascape is intended as a research platform for exploring the creative potential of dynamic point clouds captured with mobile, tracked RGB-D cameras. We believe our work represents a first attempt to author and play back spatially tracked RGB-D video in an immersive environment, and it opens up new possibilities for involving dynamic 3D scenes in virtual space.
@inproceedings{10.1145/3611659.3615718,
  author = {Yu, Zhongyuan and Zeidler, Daniel and Victor, Victor and McGinity, Matthew},
  title = {Dynascape: Immersive Authoring of Real-World Dynamic Scenes with Spatially Tracked RGB-D Videos},
  year = {2023},
  isbn = {9798400703287},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3611659.3615718},
  doi = {10.1145/3611659.3615718},
  booktitle = {Proceedings of the 29th ACM Symposium on Virtual Reality Software and Technology},
  articleno = {10},
  numpages = {12},
  keywords = {Data Visualization, Human Computer Interaction, Immersive Authoring},
  location = {Christchurch, New Zealand},
  series = {VRST '23},
}
- [CHI ’23] PEARL: Physical Environment based Augmented Reality Lenses for In-Situ Human Movement Analysis. Weizhou Luo, Zhongyuan Yu, Rufat Rzayev, and 4 more authors. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 2023.
This paper presents Pearl, a mixed-reality approach for the analysis of human movement data in situ. As the physical environment shapes human motion and behavior, the analysis of such motion can benefit from the direct inclusion of the environment in the analytical process. We present methods for exploring movement data in relation to surrounding regions of interest, such as objects, furniture, and architectural elements. We introduce concepts for selecting and filtering data through direct interaction with the environment, and a suite of visualizations for revealing aggregated and emergent spatial and temporal relations. More sophisticated analysis is supported through complex queries comprising multiple regions of interest. To illustrate the potential of Pearl, we developed an Augmented Reality-based prototype and conducted expert review sessions and scenario walkthroughs in a simulated exhibition. Our contribution lays the foundation for leveraging the physical environment in the in-situ analysis of movement data.
@inproceedings{luo2023pearl,
  author = {Luo, Weizhou and Yu, Zhongyuan and Rzayev, Rufat and Satkowski, Marc and Gumhold, Stefan and McGinity, Matthew and Dachselt, Raimund},
  title = {PEARL: Physical Environment based Augmented Reality Lenses for In-Situ Human Movement Analysis},
  year = {2023},
  isbn = {9781450394215},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3544548.3580715},
  doi = {10.1145/3544548.3580715},
  booktitle = {Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems},
  articleno = {381},
  numpages = {15},
  keywords = {physical referents, movement data analysis, augmented/mixed reality, affordance, In-situ visualization, Immersive Analytics},
  location = {Hamburg, Germany},
  series = {CHI '23},
}