3D4LIFE Journals Results
Visualization of Large Molecular Trajectories
Duran, David; Hermosilla, Pedro; Ropinski, Timo; Kozlíková, Barbora; Vinacua, Àlvar; Vázquez, Pere-Pau
IEEE Transactions on Visualization and Computer Graphics, Vol. 25, Num. 1, pp 987--996, 2019.
DOI: http://dx.doi.org/10.1109/TVCG.2018.2864851
The analysis of protein-ligand interactions is a time-intensive task. Researchers have to analyze multiple physico-chemical properties of the protein at once and combine them to derive conclusions about the protein-ligand interplay. Typically, several charts are inspected, and 3D animations are played side-by-side to obtain a deeper understanding of the data. With the advances in simulation techniques, larger and larger datasets are available, with up to hundreds of thousands of steps. Unfortunately, such large trajectories are very difficult to investigate with traditional approaches, so special tools that facilitate the inspection of these large trajectories become essential. In this paper, we present a novel system for the visual exploration of very large trajectories in an interactive and user-friendly way. Several visualization motifs are automatically derived from the data to inform the user about the interactions between protein and ligand. Our system offers specialized widgets to ease and accelerate data inspection and navigation to interesting parts of the simulation. The system is also suitable for simulations involving multiple ligands. We have tested the usefulness of our tool on a set of datasets obtained from protein engineers, and we report the expert feedback.
Comino, Marc; Andújar, Carlos; Chica, Antoni; Brunet, Pere
Computer Graphics Forum, Vol. 37, Num. 5, pp 233--243, 2018.
DOI: http://dx.doi.org/10.1111/cgf.13505
Normal vectors are essential for many point cloud operations, including segmentation, reconstruction and rendering. The robust estimation of normal vectors from 3D range scans is a challenging task due to undersampling and noise, especially when combining points sampled from multiple sensor locations. In this paper we study the impact of measurement errors on the covariance matrices of point neighborhoods. Our error model assumes a Gaussian distribution of the range error with spatially-varying variances that depend on sensor distance and reflected intensity, mimicking the features of Lidar equipment. We show that covariance matrices of the true surface points can be estimated from those of the acquired points plus sensor-dependent directional terms. We derive a lower bound on the neighborhood size to guarantee that estimated matrix coefficients will be within a predefined error with a prescribed probability. This bound is key for achieving an optimal trade-off between smoothness and fine detail preservation. We also propose and compare different strategies for handling neighborhoods with samples coming from multiple materials and sensors. We show analytically that our method provides better normal estimates than competing approaches in noise conditions similar to those found in Lidar equipment.
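For orientation, the following is a minimal numpy sketch of the general idea behind sensor-aware normal estimation, not the authors' exact formulation: the neighborhood covariance is corrected by subtracting a noise term aligned with the sensor rays before the smallest-eigenvalue eigenvector is taken as the normal. The function name and the simple per-ray Gaussian noise model are illustrative assumptions.

```python
import numpy as np

def sensor_aware_normal(points, ray_dirs, sigmas):
    """Estimate a surface normal for a neighborhood of range-scan samples.

    points   : (N, 3) neighborhood positions
    ray_dirs : (N, 3) unit directions from the sensor to each sample
    sigmas   : (N,)   per-sample range-noise standard deviations
    """
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)              # measured covariance

    # Range noise perturbs each sample along its ray with variance sigma_i^2,
    # so its expected contribution to the covariance is subtracted.
    noise = np.einsum('i,ij,ik->jk', sigmas**2, ray_dirs, ray_dirs) / len(points)

    # Normal = eigenvector of the smallest eigenvalue of the corrected matrix.
    eigvals, eigvecs = np.linalg.eigh(cov - noise)
    return eigvecs[:, 0]
```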
Díaz-García, Jesús; Brunet, Pere; Navazo, Isabel; Vázquez, Pere-Pau
Computers & Graphics, Vol. 73, pp 1--16, 2018.
DOI: http://dx.doi.org/10.1016/j.cag.2018.02.007
Mobile devices have experienced an incredible market penetration in the last decade. Currently, medium to premium smartphones are relatively affordable devices. With the increase in screen size and resolution, together with the improvements in performance of mobile CPUs and GPUs, more demanding tasks have become feasible. In this paper we explore the rendering of medium to large volumetric models on mobile and low-performance devices in general. To do so, we present a progressive ray casting method that is able to obtain interactive frame rates and high-quality results for models that not long ago were only supported by desktop computers.
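As a rough illustration of progressive refinement (the paper's GPU ray caster is not reproduced here), the sketch below shades a coarse subset of pixels first and fills in denser subsets on subsequent frames while the camera is static. The function names and the block-filling scheme are assumptions for illustration; cast_ray stands for any per-pixel volume ray caster.

```python
import numpy as np

def progressive_frames(width, height, cast_ray, levels=4):
    """Yield progressively refined images of a volume rendering.

    cast_ray(x, y) -> RGBA color is any per-pixel volume ray caster.
    The first frame shades one pixel per (2^levels)-sized block and copies
    it over the block; each following frame shades a denser pixel subset.
    """
    image = np.zeros((height, width, 4), dtype=np.float32)
    for level in range(levels + 1):
        step = 2 ** (levels - level)
        for y in range(0, height, step):
            for x in range(0, width, step):
                # A real implementation would skip pixels already shaded
                # at coarser levels; kept simple here.
                image[y:y + step, x:x + step] = cast_ray(x, y)
        yield image.copy()  # one displayable frame per refinement level
```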
Hermosilla, Pedro; Ritschel, Tobias; Vázquez, Pere-Pau; Vinacua, Àlvar; Ropinski, Timo
ACM Transactions on Graphics (Proc. SIGGRAPH Asia), Vol. 37, Num. 6, pp 235:1--235:12, 2018.
DOI: http://dx.doi.org/10.1145/3272127.3275110
Deep learning systems extensively use convolution operations to process input data. Though convolution is clearly defined for structured data such as 2D images or 3D volumes, this is not true for other data types such as sparse point clouds. Previous techniques have developed approximations to convolutions for restricted conditions. Unfortunately, their applicability is limited and they cannot be used for general point clouds. We propose an efficient and effective method to learn convolutions for non-uniformly sampled point clouds, as they are obtained with modern acquisition techniques. Learning is enabled by four key novelties: first, representing the convolution kernel itself as a multilayer perceptron; second, phrasing convolution as a Monte Carlo integration problem; third, using this notion to combine information from multiple samplings at different levels; and fourth, using Poisson disk sampling as a scalable means of hierarchical point cloud learning. The key idea across all these contributions is to guarantee adequate consideration of the underlying non-uniform sample distribution function from a Monte Carlo perspective. To make the proposed concepts applicable to real-world tasks, we furthermore propose an efficient implementation which significantly reduces the GPU memory required during the training process. By employing our method in hierarchical network architectures we can outperform most of the state-of-the-art networks on established point cloud segmentation, classification and normal estimation benchmarks. Furthermore, in contrast to most existing approaches, we also demonstrate the robustness of our method with respect to sampling variations, even when training with uniformly sampled data only. To support the direct application of these concepts, we provide a ready-to-use TensorFlow implementation of these layers at https://github.com/viscom-ulm/MCCNN.
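A schematic numpy sketch of the core Monte Carlo estimator follows, under strong simplifications (one kernel weight per feature channel, sample density assumed precomputed); the authoritative implementation is the TensorFlow code linked above, and all names below are illustrative.

```python
import numpy as np

def mc_point_conv(center, neighbors, feats, density, w1, b1, w2, b2, radius):
    """Monte Carlo estimate of a learned convolution at one point.

    neighbors : (N, 3) points within `radius` of `center`
    feats     : (N, C) input features at those points
    density   : (N,)   estimated sampling density (PDF value) per neighbor
    w1, b1, w2, b2 : weights of a tiny MLP mapping a 3D offset to C kernel
                     values (one per feature channel, a simplification)
    """
    offsets = (neighbors - center) / radius          # offsets inside the unit ball
    hidden = np.maximum(offsets @ w1 + b1, 0.0)      # ReLU hidden layer of the kernel MLP
    kernel = hidden @ w2 + b2                        # (N, C) learned kernel values
    # Dividing by the density compensates for non-uniform sampling, turning
    # the sum into a Monte Carlo estimate of the convolution integral.
    return (kernel * feats / density[:, None]).mean(axis=0)
```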
A General Illumination Model for Molecular Visualization
Hermosilla, Pedro; Vázquez, Pere-Pau; Vinacua, Àlvar; Ropinski, Timo
Computer Graphics Forum, Vol. 37, Num. 3, pp 367--378, 2018.
DOI: http://dx.doi.org/10.1111/cgf.13426
Several visual representations have been developed over the years to visualize molecular structures, and to enable a better understanding of their underlying chemical processes. Today, the most frequently used atom-based representations are the Space-filling, the Solvent Excluded Surface, the Balls-and-Sticks, and the Licorice models. While each of these representations has its individual benefits, when applied to large-scale models, spatial arrangements can be difficult to interpret with current visualization techniques. In the past it has been shown that global illumination techniques improve the perception of molecular visualizations; unfortunately, existing approaches are tailored towards a single visual representation. We propose a general illumination model for molecular visualization that is valid for different representations. With our illumination model, it becomes possible, for the first time, to achieve consistent illumination among all atom-based molecular representations. The proposed model can furthermore be evaluated in real time, as it employs an analytical solution to simulate diffuse light interactions between objects. To be able to derive such a solution for the rather complicated and diverse visual representations, we propose the use of regression analysis together with adapted parameter sampling strategies, as well as shape-parametrization-guided sampling, which are applied to the geometric building blocks of the targeted visual representations. We discuss the proposed sampling strategies and the derived illumination model, and demonstrate its capabilities when visualizing several dynamic molecules.
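The sketch below illustrates the general regression idea, not the paper's actual parametrization or sampling strategies: occlusion values sampled offline for a geometric building block are fitted with a small polynomial model by least squares, which can then be evaluated analytically at render time. The two-parameter (distance, radius) basis and the function names are assumptions.

```python
import numpy as np

def fit_occlusion_model(params, occlusion, degree=3):
    """Least-squares fit of a polynomial occlusion model.

    params    : (N, 2) sampled shape parameters, e.g. (distance, radius) of an
                occluding building block relative to the shaded point
    occlusion : (N,)   reference occlusion values, e.g. ray-traced offline
    """
    d, r = params[:, 0], params[:, 1]
    basis = np.stack([d**i * r**j
                      for i in range(degree + 1)
                      for j in range(degree + 1)], axis=1)
    coeffs, *_ = np.linalg.lstsq(basis, occlusion, rcond=None)
    return coeffs

def eval_occlusion(coeffs, d, r, degree=3):
    """Evaluate the fitted analytical model for one (distance, radius) pair."""
    basis = np.array([d**i * r**j
                      for i in range(degree + 1)
                      for j in range(degree + 1)])
    return float(basis @ coeffs)
```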
Top-down model fitting for hand pose recovery in sequences of depth images
Madadi, Meysam; Escalera, Sergio; Carruesco, Alex; Andújar, Carlos; Baró, Xavier; González, Jordi
Image and Vision Computing, Vol. 79, pp 63--75, 2018.
DOI: http://dx.doi.org/10.1016/j.imavis.2018.09.006
State-of-the-art approaches to hand pose estimation from depth images have reported promising results under quite controlled conditions. In this paper we propose a two-step pipeline for recovering the hand pose from a sequence of depth images. The pipeline has been designed to deal with images taken from any viewpoint and exhibiting a high degree of finger occlusion. In a first step we initialize the hand pose using a part-based model, fitting a set of hand components in the depth images. In a second step we consider temporal data and estimate the parameters of a trained bilinear model consisting of shape and trajectory bases. We evaluate our approach on a newly created synthetic hand dataset along with the NYU and MSRA real datasets. Results demonstrate that the proposed method outperforms the most recent pose recovery approaches, including those based on CNNs.
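As a minimal illustration of the second step, the sketch below projects a pose sequence onto trained shape and trajectory bases by least squares; the actual bases, pose parametrization and occlusion handling used in the paper are not reproduced, and the names are illustrative.

```python
import numpy as np

def fit_bilinear_coeffs(poses, traj_basis, shape_basis):
    """Least-squares fit of bilinear-model coefficients to a pose sequence.

    poses       : (F, D) estimated hand pose parameters over F frames
    traj_basis  : (F, Kt) trained trajectory basis
    shape_basis : (Ks, D) trained shape basis
    Solves  poses ~= traj_basis @ C @ shape_basis  for C in the Frobenius norm.
    """
    return np.linalg.pinv(traj_basis) @ poses @ np.linalg.pinv(shape_basis)

def smoothed_sequence(coeffs, traj_basis, shape_basis):
    """Pose sequence implied by the fitted coefficients."""
    return traj_basis @ coeffs @ shape_basis
```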
Vázquez, Pere-Pau; Hermosilla, Pedro; Guallar, Víctor; Estrada, Jorge; Vinacua, Àlvar
Computer Graphics Forum, Vol. 37, Num. 3, pp 391--402, 2018.
DOI: http://dx.doi.org/10.1111/cgf.13428
The analysis of protein-ligand interactions is complex because of the many factors at play. Most current methods for visual analysis provide this information in the form of simple 2D plots, which, besides being quite space-hungry, often encode only a small number of different properties. In this paper we present a system for the compact 2D visualization of molecular simulations. It purposely omits most spatial information and presents physical information associated with single molecular components and their pairwise interactions through a set of 2D InfoVis tools with coordinated views, suitable interaction, and focus+context techniques to analyze large amounts of data. The system provides a wide range of motifs for elements such as protein secondary structures or hydrogen bond networks, and a set of tools for their interactive inspection, both for a single simulation and for comparing two different simulations. As a result, the analysis of protein-ligand interactions of Molecular Simulation trajectories is greatly facilitated.
3D4LIFE Conferences Results
Agus, Marco; Gobbetti, Enrico; Marton, Fabio; Pintore, Giovanni; Vázquez, Pere-Pau
International Conference on 3D Vision (3DV), Verona, Italy, Sept. 5-8, 2018.
The hardware for mobile devices, from smartphones and tablets to mobile cameras, continues to be one of the fastest-growing areas of the technology market. Not only are mobile CPUs and GPUs rapidly increasing in power, but a variety of high-quality visual and motion sensors are also being embedded in mobile solutions. This, together with the increased availability of high-speed networks at lower prices, has opened the door to a variety of novel VR, AR, vision, and graphics applications. This half-day tutorial provides a technical introduction to the mobile graphics world spanning the hardware-software spectrum, and explores the state of the art and key advances in specific application domains. The five key areas presented are: 1) the evolution of mobile graphics capabilities; 2) the current trends in GPU hardware for mobile devices; 3) the main software development systems; 4) the scalable visualization of large scenes on mobile platforms; and, finally, 5) the use of mobile capture and data fusion for 3D acquisition and reconstruction.
Andújar, Carlos; Argudo, Oscar; Besora, Isaac; Brunet, Pere; Chica, Antoni; Comino, Marc
XXVIII Spanish Computer Graphics Conference (CEIG 2018), Madrid, Spain, June 27-29, 2018, pp 25--32, 2018.
DOI: http://dx.doi.org/10.2312/ceig.20181162
Structure-from-motion and multi-view stereo techniques jointly allow for the inexpensive scanning of 3D objects (e.g. buildings) using just a collection of images taken with commodity cameras. Despite major advances in these fields, a key limitation of dense reconstruction algorithms is that correct depth/normal values are not recovered on specular surfaces (e.g. windows) and parts lacking image features (e.g. flat, textureless parts of the facade). Since these reflective properties are inherent to the surface being acquired, images from different viewpoints hardly contribute to solving this problem. In this paper we present a simple method for detecting, classifying and filling non-valid data regions in depth maps produced by dense stereo algorithms. Triangle meshes reconstructed from our repaired depth maps exhibit much higher quality than those produced by state-of-the-art reconstruction algorithms such as Screened Poisson-based techniques.
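A deliberately simplified numpy sketch of the filling stage is shown below: invalid pixels are diffused over from valid neighbors with a Jacobi-style iteration. The paper's detection and classification of non-valid regions is not reproduced; the function name, the 4-neighbor scheme and the fixed iteration count are assumptions.

```python
import numpy as np

def fill_invalid_depth(depth, invalid_mask, iterations=200):
    """Fill non-valid depth-map regions by iterative neighbor averaging.

    depth        : (H, W) depth map
    invalid_mask : (H, W) boolean mask of non-valid pixels
    Invalid pixels are repeatedly replaced by the mean of their valid
    4-neighbors (border wrap-around ignored for brevity).
    """
    d = depth.astype(np.float64).copy()
    valid = ~invalid_mask
    d[invalid_mask] = 0.0
    shifts = ((0, 1), (0, -1), (1, 1), (1, -1))      # (axis, offset) pairs
    for _ in range(iterations):
        num = sum(np.roll(d, s, axis=a) * np.roll(valid, s, axis=a) for a, s in shifts)
        cnt = sum(np.roll(valid, s, axis=a).astype(np.float64) for a, s in shifts)
        fill = invalid_mask & (cnt > 0)
        d[fill] = num[fill] / cnt[fill]
        valid = valid | fill                         # filled pixels feed later passes
    return d
```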
Andújar, Carlos; Brunet, Pere; Buxareu, Jerónimo; Fons, Joan; Laguarda, Narcís; Pascual, Jordi; Pelechano, Nuria
EUROGRAPHICS Workshop on Graphics and Cultural Heritage (EG GCH), November 12-15, Vienna (Austria), pp 47--56, 2018.
DOI: http://dx.doi.org/10.2312/gch.20181340
Virtual Reality (VR) simulations have long been proposed to allow users to explore both yet-to-be-built buildings in architectural design, and ancient, remote or disappeared buildings in cultural heritage. In this paper we describe an on-going VR project on a UNESCO World Heritage Site that simultaneously addresses both scenarios: supporting architects in the task of designing the remaining parts of a large unfinished building, and simulating the existing parts that define the environment the new designs must conform to. The main challenge for the team of architects is to advance towards the project's completion while remaining faithful to Gaudí's original project, since many of his plans, drawings and plaster models were lost. We analyze the main requirements for collaborative architectural design in such a unique scenario, describe the main technical challenges, and discuss the lessons learned after one year of use of the system.
GL-Socket: A CG Plugin-based Framework for Teaching and Assessment
Andújar, Carlos; Chica, Antoni; Fairén, Marta; Vinacua, Àlvar
EG 2018 - Education Papers, pp 25--32, 2018.
DOI: http://dx.doi.org/10.2312/eged.20181003
In this paper we describe a plugin-based C++ framework for teaching OpenGL and GLSL in introductory Computer Graphics courses. The main strength of the framework architecture is that student assignments are mostly independent and thus can be completed, tested and evaluated in any order. When students complete a task, the plugin interface forces a clear separation of initialization, interaction and drawing code, which in turn facilitates code reusability. Plugin code can access scene, camera, and OpenGL window methods through a simple API. The plugin interface is flexible enough to allow students to complete tasks requiring shader development, object drawing, and multiple rendering passes. Students are provided with sample plugins with basic scene drawing and camera control features. One of the plugins that the students receive contains a shader development framework with self-assessment features. We describe the lessons learned after using the tool for four years in a Computer Graphics course involving more than one hundred Computer Science students per year.
Díaz, Jose; Meruvia-Pastor, Oscar; Vázquez, Pere-Pau
22nd International Conference Information Visualisation, IV 2018, Fisciano, Italy, July 10-13, 2018, pp 159--168, 2018.
DOI: http://dx.doi.org/10.1109/iV.2018.00037
Fons, Joan; Monclús, Eva; Vázquez, Pere-Pau; Navazo, Isabel
XXVIII Spanish Computer Graphics Conference (CEIG 2018), Madrid, Spain, June 27-29, 2018, pp 47--50, 2018.
DOI: http://dx.doi.org/10.2312/ceig.20181153
The recent advances in VR headsets such as the Oculus Rift or HTC Vive, which offer high-resolution displays at affordable prices, have empowered the development of immersive VR applications. In this paper we propose an immersive VR system that uses well-known acceleration algorithms to achieve real-time rendering of volumetric datasets. Moreover, we have incorporated different basic interaction techniques to facilitate the inspection of the volume dataset. The interaction has been designed to be as natural as possible in order to achieve the most comfortable, user-friendly virtual experience. We have conducted an informal user study to evaluate user preferences. Our evaluation shows that our application is perceived as usable, easy to learn and very effective in terms of the high level of immersion achieved.
Hermosilla, Pedro; Maisch, Sebastian; Vázquez, Pere-Pau; Ropinski, Timo
VCBM 18: Eurographics Workshop on Visual Computing for Biology and Medicine, Granada, Spain, September 20-21, 2018, pp 185--195, 2018.
DOI: http://dx.doi.org/10.2312/vcbm.20181244
Molecular surfaces are a commonly used representation in the analysis of molecular structures, as they provide a compact description of the space occupied by a molecule and its accessibility. However, due to the high abstraction of the atomic data, fine-grain features are hard to identify. Moreover, these representations involve a high degree of occlusion, which prevents the identification of internal features and potentially impacts shape perception. In this paper, we present a set of techniques, inspired by the properties of translucent materials, that we have developed to improve the perception of molecular surfaces: First, we introduce an interactive algorithm to simulate subsurface scattering for molecular surfaces, in order to improve the thickness perception of the molecule. Second, we present a technique to visualize structures just beneath the surface, while still conveying relevant depth information. Lastly, we introduce reflections and refractions into our visualization to improve the shape perception of molecular surfaces. We evaluate the benefits of these methods through crowd-sourced user studies as well as feedback from several domain experts.
Orellana, Bernat; Monclús, Eva; Brunet, Pere; Navazo, Isabel; Bendezú, Álvaro; Azpiroz, Fernando
Medical Image Computing and Computer Assisted Intervention - MICCAI 2018 - 21st International Conference, Granada, Spain, September 16-20, 2018, Proceedings, Part II, pp 638--647, 2018.
DOI: http://dx.doi.org/10.1007/978-3-030-00934-2_71
About 50% of the patients consulting a gastroenterology clinic report symptoms without detectable cause. Clinical researchers are interested in analyzing the volumetric evolution of colon segments under the effect of different diets and diseases. These studies require non-invasive abdominal MRI scans without using any contrast agent. In this work, we propose a colon segmentation framework designed to support T2-weighted abdominal MRI scans obtained from an unprepared colon. The segmentation process is based on an efficient and accurate quasi-automatic approach that drastically reduces the specialist's interaction and effort with respect to other state-of-the-art solutions, while decreasing the overall segmentation cost. The algorithm relies on a novel probabilistic tubularity filter, the detection of the colon medial line, probabilistic information extracted from a training set, and a final unsupervised clustering. The experimental results presented show the benefits of our approach for clinical use.
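For orientation, the sketch below computes the classic Hessian-eigenvalue notion of tubularity that such filters build on; the paper's probabilistic tubularity filter is different, and the simplified score, names and parameters here are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tubularity(volume, sigma=2.0, eps=1e-9):
    """Simplified Hessian-based tubularity score for a 3D scalar volume.

    Bright tubular structures have two large negative Hessian eigenvalues
    and one near zero; the ratio below is large exactly in that case.
    """
    smoothed = gaussian_filter(volume.astype(np.float64), sigma)
    grads = np.gradient(smoothed)
    # Hessian per voxel from finite differences of the gradient components.
    hessian = np.stack([np.stack(np.gradient(g), axis=-1) for g in grads], axis=-1)

    eig = np.linalg.eigvalsh(hessian)                       # ascending eigenvalues
    order = np.argsort(np.abs(eig), axis=-1)                # re-sort by magnitude
    l1, l2, l3 = np.moveaxis(np.take_along_axis(eig, order, axis=-1), -1, 0)

    score = np.abs(l2 * l3) / (np.abs(l1) + eps)            # tube-likeness ratio
    score[(l2 > 0) | (l3 > 0)] = 0.0                        # bright-on-dark tubes only
    return score
```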
Users’ locomotor behavior in Collaborative Virtual Reality
Rios, Àlex; Palomar, Marc; Pelechano, Nuria
Proceedings of the 11th Annual International Conference on Motion, Interaction, and Games, MIG 2018, Limassol, Cyprus, November 08-10, 2018, pp 1--9, 2018.
DOI: http://dx.doi.org/10.1145/3274247.3274513
This paper presents a virtual reality experiment in which two participants share both the virtual and the physical space while performing a collaborative task. We are interested in studying the differences in human locomotor behavior between the real world and the VR scenario. For that purpose, participants performed the experiment in both the real and the virtual scenarios. In the VR case, participants can see both their own animated avatar and the avatar of the other participant in the environment. As they move, we store their trajectories to obtain information regarding speeds, clearance distances and task completion times. For the VR scenario, we also wanted to evaluate whether the users were aware of subtle differences in the avatars' animations and footstep sounds. We ran the same experiment under three different conditions: (1) synchronizing the avatar's feet animation and the sound of footsteps with the movement of the participant; (2) synchronizing the animation but not the sound; and finally (3) synchronizing neither. The results show significant differences in the users' presence questionnaires and also different trends in their locomotor behavior between the real world and the VR scenarios. However, the subtle differences in animations and sound tested in our experiment had no impact on the results of the presence questionnaires, although they had a small impact on locomotor behavior in terms of the time to complete the tasks and the clearance distances kept while crossing paths.
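As a small illustration of how such metrics can be derived from the stored trajectories (the paper does not specify its exact computation), the sketch below assumes both participants are sampled at common timestamps; all names are illustrative.

```python
import numpy as np

def locomotion_metrics(traj_a, traj_b, timestamps):
    """Basic locomotion metrics from two recorded trajectories.

    traj_a, traj_b : (T, 2) positions of the two participants on the floor
                     plane, sampled at the shared `timestamps` (T,)
    Returns the minimum clearance (closest inter-participant distance)
    and the mean walking speed of participant A.
    """
    clearance = np.linalg.norm(traj_a - traj_b, axis=1).min()
    step_len = np.linalg.norm(np.diff(traj_a, axis=0), axis=1)
    speed_a = (step_len / np.diff(timestamps)).mean()
    return clearance, speed_a
```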
3D4LIFE PhD Thesis Results