Eurographics Digital Library
This is the DSpace 7 platform of the Eurographics Digital Library.
- The contents of the Eurographics Digital Library Archive are freely accessible. Only access to the full-text documents of the journal Computer Graphics Forum (joint property of Wiley and Eurographics) is restricted to Eurographics members, members of institutions holding an Institutional Membership at Eurographics, and users of the TIB Hannover. On the item pages you will find purchase links to the TIB Hannover.
- As a Eurographics member, you can log in with your email address and password from https://services.eg.org. If you belong to an institutional member and are on a computer within a Eurographics-registered IP range, you can proceed immediately.
- From 2022 onward, all new publications by Eurographics are licensed under Creative Commons. Publishing with Eurographics is Plan-S compliant. Please visit the Eurographics Licensing and Open Access Policy page for more details.
Recent Submissions
Discrete Laplacians for General Polygonal and Polyhedral Meshes
(TU Dortmund University, 2024) Astrid Pontzen (née Bunge)
This thesis presents several approaches that generalize the Laplace-Beltrami operator and its closely related gradient and divergence operators to arbitrary polygonal and polyhedral meshes.
We start by introducing the linear virtual refinement method, which provides a simple yet effective discretization of the Laplacian with the help of the Galerkin method from a Finite Element perspective.
Its flexibility allows us to explore alternative numerical schemes in this setting and to derive a second operator, called the Diamond Laplacian, with a similar approach, this time combined with the Discrete Duality Finite Volume method.
It offers enhanced accuracy but comes at the cost of denser matrices and slightly longer solving times.
In the second part of the thesis, we extend the linear virtual refinement method to higher-order discretizations, yielding the quadratic virtual refinement method.
It introduces variational quadratic shape functions for arbitrary polygons and polyhedra. We also present a custom multigrid approach that addresses the computational challenges of higher-order discretizations,
making the faster convergence rates and higher accuracy of these polygonal shape functions more affordable for the user.
The final part of this thesis focuses on the open degrees of freedom of the linear virtual refinement method.
By uncovering connections between our operator and the underlying tessellations, we can enhance the accuracy and stability of our initial method and improve its overall performance.
These connections also allow us to define what a "good" polygon is in the context of our Laplacian.
We present a smoothing approach that alters the shape of the polygons (while retaining the original surface as much as possible) to allow for even better performance.
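As background, the classical cotangent Laplacian on triangle meshes, the special case that operators like these generalize to polygonal and polyhedral meshes, can be assembled in a few lines. This is a minimal textbook sketch, not the thesis's virtual refinement construction, and the dense-matrix assembly is for clarity only:

```python
import numpy as np

def cotan_laplacian(V, F):
    """Assemble the cotangent Laplacian of a triangle mesh.

    V: (n, 3) array of vertex positions; F: list of triangles (i, j, k).
    Returns a symmetric matrix whose rows sum to zero (constants lie
    in its kernel), properties a polygonal generalization should keep.
    """
    n = len(V)
    L = np.zeros((n, n))
    for f in F:
        for k in range(3):
            # Edge (i, j) with opposite vertex o.
            i, j, o = f[k], f[(k + 1) % 3], f[(k + 2) % 3]
            a, b = V[i] - V[o], V[j] - V[o]
            # Cotangent of the angle at o = dot / |cross|.
            cot = np.dot(a, b) / np.linalg.norm(np.cross(a, b))
            L[i, j] += 0.5 * cot
            L[j, i] += 0.5 * cot
            L[i, i] -= 0.5 * cot
            L[j, j] -= 0.5 * cot
    return L
```

The symmetry and zero row sums checked here are exactly the structural properties discussed above; the virtual refinement methods extend them beyond triangles.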
Perception-Based Techniques to Enhance User Experience in Virtual Reality
(2024-07-26) Colin Groth
Virtual reality (VR) ushered in a new era of immersive content viewing with vast potential for entertainment, design, medicine, and other fields. However, users' willingness to adopt the technology in practice is tied to the quality of the virtual experience. In this dissertation, we describe the development and investigation of novel techniques that reduce negative influences on the user experience in VR applications. Our methods not only include substantial technical improvements but also consider important characteristics of human perception, which are exploited to make the applications more effective and subtle. We focus mostly on visual perception, since we deal with visual stimuli, but we also consider the vestibular sense, a key component in the occurrence of negative symptoms in VR referred to as cybersickness. Our techniques are designed for three groups of VR applications, characterized by the degree of freedom they offer for adjustments.
The first set of techniques addresses the extension of VR systems with stimulation hardware. By adapting common techniques from the medical field, we artificially induce human body signals to create immersive experiences that reduce common mismatches between perceptual cues.
The second group focuses on applications that use common hardware and allow adjustments of the full render pipeline. Immersive video content is especially notable here: frame rates and presentation quality often fall short of the high requirements VR systems impose for a decent user experience. To address these display problems, we present a novel video codec based on wavelet compression and perceptual features of the visual system.
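The codec itself is not reproduced here, but the decomposition that wavelet compression builds on is easy to illustrate. Below is a generic one-level orthonormal Haar transform (a textbook sketch, not the dissertation's codec); compression gains come from quantizing or dropping the small detail coefficients:

```python
import numpy as np

def haar_1d(x):
    """One level of the orthonormal 1D Haar transform.

    Splits a signal of even length into coarse averages and details;
    the transform preserves energy (it is orthonormal).
    """
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    det = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return avg, det

def ihaar_1d(avg, det):
    """Exact inverse of one Haar level."""
    x = np.empty(2 * len(avg))
    x[0::2] = (avg + det) / np.sqrt(2.0)
    x[1::2] = (avg - det) / np.sqrt(2.0)
    return x
```

In a real codec this step is applied recursively (and in 2D plus time), and perceptual models then decide which detail bands can be coarsely quantized.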
Finally, the third group of applications is the most restrictive and does not allow modifications of the rendering pipeline. Here, our techniques consist of post-processing manipulations in screen space, applied after the image is rendered and without knowledge of the 3D scene. To keep the techniques in this group subtle, we exploit fundamental properties of human peripheral vision and apply spatial masking as well as gaze-contingent motion scaling in our methods.
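To illustrate the gaze-contingent idea, a screen-space post-process might scale motion according to angular distance from the gaze point, exploiting reduced sensitivity in peripheral vision. The sketch below is hypothetical; every constant (eccentricity range, minimum scale, degrees per pixel) is an illustrative placeholder, not a calibrated value from the dissertation:

```python
import numpy as np

def motion_scale(pixel, gaze, max_ecc_deg=60.0, deg_per_px=0.05, min_scale=0.4):
    """Return a motion attenuation factor for a pixel, given the gaze point.

    1.0 (unchanged motion) at the gaze point, falling off linearly with
    eccentricity to min_scale at max_ecc_deg and beyond. All parameter
    values are illustrative placeholders.
    """
    # Angular eccentricity of the pixel from the gaze point, in degrees.
    ecc = np.linalg.norm(np.asarray(pixel, dtype=float)
                         - np.asarray(gaze, dtype=float)) * deg_per_px
    t = min(ecc / max_ecc_deg, 1.0)
    return 1.0 - (1.0 - min_scale) * t
```

A practical implementation would use a perceptually derived falloff and eye-tracker input instead of this linear ramp.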
Efficient image-based rendering
(2023-07-12) Bemana, Mojtaba
Recent advancements in real-time ray tracing and deep learning have significantly enhanced the realism of computer-generated images. However, conventional 3D computer graphics (CG) can still be time-consuming and resource-intensive, particularly when creating photo-realistic simulations of complex or animated scenes. Image-based rendering (IBR) has emerged as an alternative approach that utilizes pre-captured images from the real world to generate realistic images in real time, eliminating the need for extensive modeling. Although IBR has its advantages, it faces challenges in providing the same level of control over scene attributes as traditional CG pipelines and in accurately reproducing complex scenes and objects with different materials, such as transparent objects. This thesis endeavors to address these issues by harnessing the power of deep learning and incorporating fundamental principles of graphics and physically based rendering. It offers an efficient solution that enables interactive manipulation of real-world dynamic scenes captured from sparse views, lighting positions, and times, as well as a physically based approach that facilitates accurate reproduction of the view-dependent effects resulting from the interaction between transparent objects and their surrounding environment. Additionally, this thesis develops a visibility metric that can identify artifacts in reconstructed IBR images without observing the reference image, thereby contributing to the design of an effective IBR acquisition pipeline. Lastly, a perception-driven rendering technique is developed to provide high-fidelity visual content in virtual reality displays while retaining computational efficiency.
Point Based Representations for Novel View Synthesis
(2023-11-03) Georgios Kopanas
The primary goal of inverse rendering is to recover 3D information from a set of 2D observations, usually a set of images or videos. Observing a 3D scene from different viewpoints can provide rich information about the underlying geometry, materials, and physical properties of the objects, and access to this information enables many downstream applications. In this dissertation, we focus on free-viewpoint navigation and novel view synthesis, which is the task of re-rendering captured 3D scenes from unobserved viewpoints. The field gained incredible momentum after the introduction of Neural Radiance Fields (NeRFs). While NeRFs achieve exceptional image quality on novel view synthesis, that is not the only reason they attracted such engagement from the community. Another important property is the simplicity of their optimization: they frame 3D reconstruction as a continuous optimization problem over the parameters of a scene representation with a simple photometric objective function. In this thesis, we keep these two advantages of NeRFs and propose points as a new way to represent radiance fields, one that not only achieves state-of-the-art image quality but also renders in real time at over 100 frames per second. Our solution also offers fast training with a tractable memory footprint and is easily integrated into graphics engines. In a traditional image-based rendering context, we propose a point-based representation with a differentiable rasterization pipeline that optimizes geometry and appearance to achieve high visual quality for novel view synthesis. Next, we use points to tackle highly reflective curved objects -- arguably one of the hardest cases of novel view synthesis -- by learning the trajectory of reflections.
In our latest work we show for the first time that points, augmented to become anisotropic 3D Gaussians, can maintain the differentiable properties of NeRFs while recovering higher-frequency signals and representing empty space more efficiently. At the same time, their Lagrangian nature, and the fact that the most recent method omits neural networks altogether, gives us an explicit and interpretable geometry and appearance representation. Finally, in this dissertation we also briefly touch on two other topics. The first is how to place cameras to efficiently capture a scene for 3D reconstruction: in complicated, non-object-centric environments, we provide theoretical and practical intuition on camera placements that allow good reconstruction. The second is motivated by the recent success of generative models: we study how point-based representations can be used with diffusion models. Current methods impose extreme limitations on the number of points; in exploratory work, we suggest an architecture that leverages multi-view information to decouple the number of points from the speed and performance of the model. We conclude this dissertation by reflecting on the work performed throughout the thesis and sketching some exciting directions for future work.
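To make the anisotropic Gaussian representation concrete: each 3D Gaussian is typically parameterized by per-axis scales and a rotation, from which a valid covariance is built as Sigma = R S S^T R^T, so the covariance stays symmetric positive definite throughout optimization. This is a generic sketch of that standard parameterization, not the full splatting pipeline:

```python
import numpy as np

def gaussian_covariance(scale, quat):
    """Covariance of an anisotropic 3D Gaussian.

    scale: per-axis standard deviations (3,); quat: rotation quaternion
    (w, x, y, z), normalized internally. Returns Sigma = R S S^T R^T,
    which is symmetric positive definite for nonzero scales.
    """
    w, x, y, z = np.asarray(quat, dtype=float) / np.linalg.norm(quat)
    # Standard quaternion-to-rotation-matrix conversion.
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    S = np.diag(scale)
    return R @ S @ S.T @ R.T
```

Because scales and quaternion components are unconstrained, gradients can flow through them freely, which is what makes this parameterization convenient for photometric optimization.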
Evaluating Graph Layout Algorithms: A Systematic Review of Methods and Best Practices
(© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Di Bartolomeo, Sara; Crnovrsanin, Tarik; Saffo, David; Puerta, Eduardo; Wilson, Connor; Dunne, Cody; Alliez, Pierre; Wimmer, Michael
Evaluations—encompassing computational evaluations, benchmarks and user studies—are essential tools for validating the performance and applicability of graph and network layout algorithms (also known as graph drawing). These evaluations not only offer significant insights into an algorithm's performance and capabilities, but also help the reader determine whether the algorithm is suitable for a specific purpose, such as handling graphs with a high volume of nodes or dense graphs. Unfortunately, there is no standard approach for evaluating layout algorithms. Prior work presents a ‘Wild West’ of diverse benchmark datasets and data characteristics, as well as varied evaluation metrics and ways to report results. It is often difficult to compare layout algorithms without first implementing them and then running one's own evaluation. In this systematic review, we delve into the myriad methodologies employed to conduct evaluations—the utilized techniques, reported outcomes and the pros and cons of choosing one approach over another. Our examination extends beyond computational evaluations, encompassing user‐centric evaluations, thus presenting a comprehensive understanding of algorithm validation. This systematic review—and its accompanying website—guides readers through evaluation types, the types of results reported, and the available benchmark datasets and their data characteristics. Our objective is to provide a valuable resource for readers to understand and effectively apply various evaluation methods for graph layout algorithms. A free copy of this paper and all supplemental material is available at , and the categorized papers are accessible on our website at .
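Many of the computational evaluations surveyed rely on simple layout-quality metrics. As one concrete example, counting pairwise edge crossings in a straight-line layout can be sketched as follows (an illustrative metric implementation for general-position layouts, not code from the review):

```python
from itertools import combinations

def segments_cross(p1, p2, p3, p4):
    """True if segments p1p2 and p3p4 properly cross (general position)."""
    def orient(a, b, c):
        # Sign of the signed area of triangle (a, b, c).
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    return (orient(p1, p2, p3) != orient(p1, p2, p4) and
            orient(p3, p4, p1) != orient(p3, p4, p2) and
            orient(p1, p2, p3) != 0 and orient(p3, p4, p1) != 0)

def crossing_count(pos, edges):
    """Count pairwise edge crossings in a straight-line drawing.

    pos: node -> (x, y); edges: list of (u, v). Edge pairs sharing an
    endpoint are skipped, as is conventional for this metric.
    """
    total = 0
    for (a, b), (c, d) in combinations(edges, 2):
        if len({a, b, c, d}) == 4 and segments_cross(pos[a], pos[b], pos[c], pos[d]):
            total += 1
    return total
```

Reporting even a metric this simple requires the kinds of choices the review catalogs: which datasets, whether to skip adjacent edges, and how to aggregate over graphs.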