3D4LIFE Journal Results
Males, Jan; Monclús, Eva; Díaz, Jose; Navazo, Isabel; Vázquez, Pere-Pau
Computers & Graphics, Vol. 91, pp 39--51, 2020.
Computerized Tomography (CT) and, more recently, Magnetic Resonance Imaging (MRI) have become the state-of-the-art techniques for morpho-volumetric analysis of abdominal cavities. Due to its constant motility, the colon is a difficult organ to analyze. Unfortunately, CT's radiative nature means it is indicated only for patients with serious disorders. Lately, acquisition techniques that rely on MRI have matured enough to enable the analysis of colon data. This allows gathering data from patients without preparation (i.e., administration of drugs or contrast agents), and incorporating data from patients with non-life-threatening diseases and from healthy subjects into databases. In this paper we present an end-to-end framework that comprises all the steps to extract colon content and morphology data, coupled with a web-based visualization tool that facilitates the visual exploration of such data. We also introduce the set of tools for the extraction of morphological data, and give a detailed description of a specifically designed interactive tool that facilitates a visual comparison of numerical variables within a set of patients, as well as a detailed inspection of an individual. Our prototype was evaluated by domain experts, who found that our visual approach may reduce the costly process of colon data analysis. As a result, physicians have been able to gain new insights into the effects of diets, and to obtain a better understanding of the motility of the colon.
Orellana, Bernat; Monclús, Eva; Brunet, Pere; Navazo, Isabel; Bendezú, Álvaro; Azpiroz, Fernando
Medical Image Analysis, 2020.
The study of the colonic volume is a procedure of strong relevance to gastroenterologists. Depending on the clinical protocol, the volume analysis has to be performed on MRI of the unprepared colon without contrast administration. In such circumstances, existing measurement procedures are cumbersome and time-consuming for the specialists. The algorithm presented in this paper permits a quasi-automatic segmentation of the unprepared colon on T2-weighted MRI scans. The segmentation algorithm is organized as a three-stage pipeline. In the first stage, a custom tubularity filter is run to detect colon candidate areas. The specialists provide a list of points along the colon trajectory, which are combined with tubularity information to estimate the colon's medial path. In the second stage, we delimit the region of interest by applying custom segmentation algorithms to detect colon-neighboring regions and the fat capsule containing the abdominal organs. Finally, within the reduced search space, segmentation is performed via 3D graph cuts in a three-stage multigrid approach. Our algorithm was tested on abdominal MRI scans of different acquisition resolutions, and its results were compared to ground-truth colon segmentations provided by the specialists. The experiments demonstrated the accuracy, efficiency, and usability of the algorithm, while the variability of scan resolutions helped demonstrate the computational scalability of the multigrid architecture. The system is fully applicable to the clinical routine of colon measurement, and is a substantial step towards fully automated segmentation.
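The abstract does not detail the custom tubularity filter. As an illustration only, a classical Frangi-style vesselness score, a common choice for detecting bright tubular structures in 3D medical volumes, can be sketched as follows; this is a simplified stand-in, not the authors' filter, and it assumes a pre-smoothed volume:

```python
import numpy as np

def tubularity(volume, eps=1e-10):
    """Frangi-style tubularity score for bright tubes in a pre-smoothed
    3D volume. Hessian eigenvalues sorted by magnitude |l1|<=|l2|<=|l3|;
    a bright tube has l1 ~ 0 and l2, l3 strongly negative."""
    v = np.asarray(volume, dtype=float)
    grads = np.gradient(v)
    H = np.empty(v.shape + (3, 3))          # per-voxel Hessian matrix
    for i in range(3):
        gi = np.gradient(grads[i])
        for j in range(3):
            H[..., i, j] = gi[j]
    lam = np.linalg.eigvalsh(H)             # batched symmetric eigenvalues
    lam = np.take_along_axis(lam, np.abs(lam).argsort(axis=-1), axis=-1)
    l1, l2, l3 = lam[..., 0], lam[..., 1], lam[..., 2]
    ra = np.abs(l2) / (np.abs(l3) + eps)                # line vs plate
    rb = np.abs(l1) / (np.sqrt(np.abs(l2 * l3)) + eps)  # line vs blob
    s2 = l1**2 + l2**2 + l3**2                          # structure strength
    c = 0.5 * np.sqrt(s2.max()) + eps
    score = ((1 - np.exp(-ra**2 / 0.5))
             * np.exp(-rb**2 / 0.5)
             * (1 - np.exp(-s2 / (2 * c * c))))
    score[(l2 > 0) | (l3 > 0)] = 0.0        # keep bright structures only
    return score
```

On a synthetic volume containing a Gaussian-profile tube, the score peaks on the tube axis and vanishes in flat background, which is the property the paper's pipeline exploits to propose colon candidate areas.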
Rahmani, Vahid; Pelechano, Nuria
Computers & Graphics, Vol. 86, pp 1--14, 2020.
One of the main challenges in video games is to compute paths as efficiently as possible for groups of agents. As both the size of the environments and the number of autonomous agents increase, it becomes harder to obtain results in real time under the constraints of memory and computing resources. Hierarchical approaches, such as HNA* (Hierarchical A* for Navigation Meshes), can compute paths more efficiently, although only for certain configurations of the hierarchy. For other configurations, the method suffers from a bottleneck in the step that connects the Start and Goal positions with the hierarchy, which can drop performance drastically. In this paper we present two approaches to solve the HNA* bottleneck and thus obtain a performance boost for all hierarchical configurations. The first method relies on further memory storage, and the second one uses parallelism on the GPU. Our comparative evaluation shows that both approaches offer speed-ups of up to 9x over A*, with no limitations based on hierarchical configuration. Finally, we show how our CUDA-based parallel implementation of HNA* for multi-agent path finding can now compute paths for over 500K agents simultaneously in real time, with speed-ups above 15x over a parallel multi-agent implementation using A*.
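As background for the speed-ups discussed above, the baseline A* search that HNA* accelerates can be sketched on a simple 4-connected grid; this is a minimal textbook illustration, not the paper's navigation-mesh implementation:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid (0 = free, 1 = blocked).
    Returns the list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    def h(c):  # Manhattan distance: admissible on 4-connected grids
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_set = [(h(start), 0, start)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, cost, cur = heapq.heappop(open_set)
        if cur == goal:                       # reconstruct path backwards
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cost > g[cur]:
            continue                          # stale queue entry
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[cur] + 1
                if ng < g.get((nr, nc), float('inf')):
                    g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cur
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None
```

Hierarchical methods such as HNA* avoid running this search over the full environment by planning over a coarse graph first; the paper's contribution targets the cost of inserting the Start and Goal into that coarse graph.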
Rios, Àlex; Pelechano, Nuria
Virtual Reality, Vol. 24, pp 683--694, 2020.
Understanding human decision making is a key requirement for improving crowd simulation models so that they can better mimic real human behavior. It is often difficult to study human decision making during dangerous situations because of the complexity of the scenarios and situations to be simulated. Immersive virtual reality offers the possibility to carry out such experiments without exposing participants to real danger. In the real world, it has often been observed that people tend to follow others in certain situations (e.g., unfamiliar environments or stressful situations). In this paper, we study human following behavior when it comes to exit choice during the evacuation of a train station. We have carried out immersive VR experiments under different levels of stress (alarm only, or alarm plus fire), and we have observed how humans consistently tend to follow the crowd regardless of the level of stress. Our results show that decision making is strongly influenced by the behavior of the virtual crowd: the more virtual people running, the more likely participants are to simply follow others. The results of this work could improve behavior simulation models during crowd evacuation, and thus help build more plausible scenarios for training firefighters.
Van Toll, Wouter; Triesscheijn, Roy; Kallmann, Marcelo; Oliva, Ramon; Pelechano, Nuria; Pettre, Julien; Geraerts, Roland
Computers & Graphics, Vol. 91, pp 52--82, 2020.
A navigation mesh is a representation of a 2D or 3D virtual environment that enables path planning and crowd simulation for walking characters. Various state-of-the-art navigation meshes exist, but there is no standardized way of evaluating or comparing them. Each implementation is in a different state of maturity, has been tested on different hardware, uses different example environments, and may have been designed with a different application in mind. In this paper, we develop and use a framework for comparing navigation meshes. First, we give general definitions of 2D and 3D environments and navigation meshes. Second, we propose theoretical properties by which navigation meshes can be classified. Third, we introduce metrics by which the quality of a navigation mesh implementation can be measured objectively. Fourth, we use these properties and metrics to compare various state-of-the-art navigation meshes in a range of 2D and 3D environments. Finally, we analyze our results to identify important topics for future research on navigation meshes. We expect that this work will set a new standard for the evaluation of navigation meshes, that it will help developers choose an appropriate navigation mesh for their application, and that it will steer future research in interesting directions.
Argudo, Oscar; Andújar, Carlos; Chica, Antoni
Computer Graphics Forum, Vol. 39, Num. 1, pp 174--184, 2019.
The automatic generation of realistic vegetation closely reproducing the appearance of specific plant species is still a challenging topic in computer graphics. In this paper, we present a new approach to generate new tree models from a small collection of frontal RGBA images of trees. The new models are represented either as single billboards (suitable for still-image generation in areas such as architectural rendering) or as billboard clouds (providing parallax effects in interactive applications). Key ingredients of our method include the synthesis of new contours through convex combinations of exemplar contours, the automatic segmentation into crown/trunk classes, and the transfer of RGBA colour from the exemplar images to the synthetic target. We also describe a fully automatic approach to convert a single tree image into a billboard cloud by extracting superpixels and distributing them inside a silhouette-defined 3D volume. Our algorithm allows for the automatic generation of an arbitrary number of tree variations from minimal input, and thus provides a fast solution to add vegetation variety in outdoor scenes.
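The synthesis of new contours through convex combinations of exemplars can be illustrated with a minimal sketch: each exemplar contour is resampled to the same number of arc-length-uniform points so that indices correspond, and a new contour is a non-negative weighted average. The helper names are our own, and the paper's full pipeline additionally segments crown/trunk and transfers RGBA color:

```python
import numpy as np

def resample_closed(contour, n):
    """Resample a closed 2D polyline to n points uniform in arc length,
    so corresponding indices across exemplars can be blended."""
    c = np.asarray(contour, dtype=float)
    c = np.vstack([c, c[:1]])                         # close the loop
    seg = np.linalg.norm(np.diff(c, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])       # cumulative arc length
    u = np.linspace(0.0, t[-1], n, endpoint=False)
    x = np.interp(u, t, c[:, 0])
    y = np.interp(u, t, c[:, 1])
    return np.stack([x, y], axis=1)

def blend_contours(contours, weights):
    """Convex combination of exemplar contours.
    contours: (k, n, 2) array of k contours with n corresponding points;
    weights: k non-negative values summing to 1."""
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and abs(w.sum() - 1.0) < 1e-9
    return np.einsum('k,knd->nd', w, np.asarray(contours, dtype=float))
```

Sampling different weight vectors on the simplex then yields an arbitrary number of in-between tree silhouettes from a small exemplar set.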
Barba, Elizabeth; Sánchez, Borja; Burri, Emanuel; Accarino, Anna; Monclús, Eva; Navazo, Isabel; Guarner, Francisco; Margolles, Abelardo; Azpiroz, Fernando
Neurogastroenterology & Motility, Vol. 31, Num. 12, pp 1--7, 2019.
Some patients complain that eating lettuce gives them gas and abdominal distension. Our aim was to determine to what extent the patients' assertion is sustained by evidence. An in vitro study measured the amount of gas produced during fermentation by a preparation of human colonic microbiota (n = 3) of predigested lettuce, as compared to beans, a high gas-releasing substrate, to meat, a low gas-releasing substrate, and to a nutrient-free negative control. A clinical study in patients complaining of abdominal distension after eating lettuce (n = 12) measured the amount of intestinal gas and the morphometric configuration of the abdominal cavity in abdominal CT scans during an episode of lettuce-induced distension as compared to basal conditions. Gas production by microbiota fermentation of lettuce in vitro was similar to that of meat (P = .44), lower than that of beans (by 78 ± 15%; P < .001) and higher than with the nutrient-free control (by 25 ± 19%; P = .05). Patients complaining of abdominal distension after eating lettuce exhibited an increase in girth (35 ± 3 mm larger than basal; P < .001) without a significant increase in colonic gas content (39 ± 4 mL increase; P = .071); abdominal distension was related to a descent of the diaphragm (by 7 ± 3 mm; P = .027) with redistribution of normal abdominal contents. Lettuce is a low gas-releasing substrate for microbiota fermentation, and lettuce-induced abdominal distension is produced by an uncoordinated activity of the abdominal walls. Correction of the somatic response might be more effective than the current dietary restriction strategy.
Visualization of Large Molecular Trajectories
Duran, David; Hermosilla, Pedro; Ropinski, Timo; Kozlíková, Barbora; Vinacua, Àlvar; Vázquez, Pere-Pau
IEEE Transactions on Visualization and Computer Graphics, Vol. 25, Num. 1, pp 987--996, 2019.
The analysis of protein-ligand interactions is a time-intensive task. Researchers have to analyze multiple physico-chemical properties of the protein at once and combine them to derive conclusions about the protein-ligand interplay. Typically, several charts are inspected, and 3D animations can be played side-by-side to obtain a deeper understanding of the data. With the advances in simulation techniques, larger and larger datasets are available, with up to hundreds of thousands of steps. Unfortunately, such large trajectories are very difficult to investigate with traditional approaches. Therefore, the need for special tools that facilitate inspection of these large trajectories becomes substantial. In this paper, we present a novel system for visual exploration of very large trajectories in an interactive and user-friendly way. Several visualization motifs are automatically derived from the data to give the user the information about interactions between protein and ligand. Our system offers specialized widgets to ease and accelerate data inspection and navigation to interesting parts of the simulation. The system is suitable also for simulations where multiple ligands are involved. We have tested the usefulness of our tool on a set of datasets obtained from protein engineers, and we describe the expert feedback.
Julio C. S. Jacques; Yağmur Güçlütürk; Marc Perez; Umut Güçlü; Andújar, Carlos; Xavier Baró; Hugo Jair; Isabelle Guyon; Marcel A. J. Van Gerven; Rob Van Lier; Sergio Escalera
IEEE Transactions on Affective Computing, pp 1--21, 2019.
Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. Over the past few years, it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most commonly considered cues for analyzing personality. However, there has recently been an increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches are able to accurately analyze human faces, body postures and behaviors, and use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and of the potential impact that such methods could have on society, we present in this paper an up-to-date review of existing vision-based approaches for apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, we review aspects of subjectivity in data labeling/evaluation, as well as current datasets and the challenges organized to push research in the field.
Vázquez, Pere-Pau
Entropy, Vol. 21, Num. 6, pp 612, 2019.
The analysis of research paper collections is an interesting topic that can give insights into whether a research area is stalled on the same problems, or there is a great amount of novelty every year. Previous research has addressed similar tasks by analyzing keywords or reference lists, with different degrees of human intervention. In this paper, we demonstrate how, with the use of Normalized Relative Compression, together with a set of automated data-processing tasks, we can successfully visually compare research articles and document collections. We also achieve very similar results with Normalized Conditional Compression, which can be applied with a regular compressor. With our approach, we can group papers from different disciplines, analyze how a conference evolves across its editions, or how the profile of a researcher changes over time. We provide a set of tests that validate our technique, and show that it behaves better for these tasks than other previously proposed techniques.
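The paper's exact NRC/NCC measures are not reproduced here, but the closely related Normalized Compression Distance of Cilibrasi and Vitányi conveys the general idea of comparing documents with a regular compressor; a sketch with zlib (the NRC and NCC variants normalize differently):

```python
import zlib

def C(data: bytes) -> int:
    """Approximate Kolmogorov complexity: compressed size in bytes."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: near 0 for almost identical
    inputs, approaching 1 for unrelated ones."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Because the compressor finds shared structure in the concatenation `x + y`, similar papers yield small distances; a distance matrix over a collection can then be clustered or projected for visual comparison, as done in the paper.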
Comino, Marc; Andújar, Carlos; Chica, Antoni; Brunet, Pere
Computer Graphics Forum, Vol. 37, Num. 5, pp 233--243, 2018.
Normal vectors are essential for many point cloud operations, including segmentation, reconstruction and rendering. The robust estimation of normal vectors from 3D range scans is a challenging task due to undersampling and noise, especially when combining points sampled from multiple sensor locations. Our error model assumes a Gaussian distribution of the range error with spatially varying variances that depend on sensor distance and reflected intensity, mimicking the features of Lidar equipment. In this paper we study the impact of measurement errors on the covariance matrices of point neighborhoods. We show that covariance matrices of the true surface points can be estimated from those of the acquired points plus sensor-dependent directional terms. We derive a lower bound on the neighborhood size to guarantee that estimated matrix coefficients will be within a predefined error with a prescribed probability. This bound is key for achieving an optimal trade-off between smoothness and fine detail preservation. We also propose and compare different strategies for handling neighborhoods with samples coming from multiple materials and sensors. We show analytically that our method provides better normal estimates than competing approaches in noise conditions similar to those found in Lidar equipment.
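The covariance matrices discussed above are the ones used in classical PCA normal estimation, which the paper's error analysis builds on. A minimal baseline estimator, without the paper's sensor-dependent corrections, looks like this:

```python
import numpy as np

def estimate_normal(neighbors):
    """PCA normal of a point neighborhood: the eigenvector of the
    covariance matrix with the smallest eigenvalue, i.e. the direction
    of least variance across the local surface samples."""
    p = np.asarray(neighbors, dtype=float)
    q = p - p.mean(axis=0)            # center the neighborhood
    cov = q.T @ q / len(p)            # 3x3 covariance matrix
    w, v = np.linalg.eigh(cov)        # eigenvalues in ascending order
    return v[:, 0]                    # unit normal (sign is ambiguous)
```

The paper's contribution is, in essence, to correct `cov` for the sensor noise terms before this eigendecomposition, and to bound how large the neighborhood must be for the corrected coefficients to be trustworthy.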
Díaz-García, Jesús; Brunet, Pere; Navazo, Isabel; Vázquez, Pere-Pau
Computers & Graphics, Vol. 73, pp 1--16, 2018.
Mobile devices have experienced an incredible market penetration in the last decade. Currently, medium to premium smartphones are relatively affordable devices. With the increase in screen size and resolution, together with the improvements in performance of mobile CPUs and GPUs, more tasks have become possible. In this paper we explore the rendering of medium to large volumetric models on mobile and low-performance devices in general. To do so, we present a progressive ray casting method that is able to obtain interactive frame rates and high-quality results for models that not long ago were only supported by desktop computers.
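The inner loop of any volume ray caster, progressive or not, is front-to-back compositing with early ray termination; a minimal sketch of that accumulation step for a single ray (an illustration of the core operation, not the paper's full progressive scheme):

```python
import numpy as np

def composite(colors, alphas, stop=0.99):
    """Front-to-back compositing of pre-classified samples along one ray.
    colors: per-sample RGB; alphas: per-sample opacity in [0, 1]."""
    C = np.zeros(3)   # accumulated color
    A = 0.0           # accumulated opacity
    for c, a in zip(colors, alphas):
        C += (1.0 - A) * a * np.asarray(c, dtype=float)
        A += (1.0 - A) * a
        if A >= stop:         # early ray termination saves samples
            break
    return C, A
```

A progressive renderer repeats this loop at increasing sample counts or resolutions, refining the image between frames, which is what keeps the method interactive on low-performance GPUs.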
Hermosilla, Pedro; Ritschel, Tobias; Vázquez, Pere-Pau; Vinacua, Àlvar; Ropinski, Timo
ACM Transactions on Graphics (Proc. SIGGRAPH Asia), Vol. 37, Num. 6, pp 235:1--235:12, 2018.
Deep learning systems extensively use convolution operations to process input data. Though convolution is clearly defined for structured data such as 2D images or 3D volumes, this is not true for other data types such as sparse point clouds. Previous techniques have developed approximations to convolutions for restricted conditions. Unfortunately, their applicability is limited and they cannot be used for general point clouds. We propose an efficient and effective method to learn convolutions for non-uniformly sampled point clouds, as they are obtained with modern acquisition techniques. Learning is enabled by four key novelties: first, representing the convolution kernel itself as a multilayer perceptron; second, phrasing convolution as a Monte Carlo integration problem; third, using this notion to combine information from multiple samplings at different levels; and fourth, using Poisson disk sampling as a scalable means of hierarchical point cloud learning. The key idea across all these contributions is to guarantee adequate consideration of the underlying non-uniform sample distribution function from a Monte Carlo perspective. To make the proposed concepts applicable to real-world tasks, we furthermore propose an efficient implementation which significantly reduces the GPU memory required during the training process. By employing our method in hierarchical network architectures we can outperform most of the state-of-the-art networks on established point cloud segmentation, classification and normal estimation benchmarks. Furthermore, in contrast to most existing approaches, we also demonstrate the robustness of our method with respect to sampling variations, even when training with uniformly sampled data only. To support the direct application of these concepts, we provide a ready-to-use TensorFlow implementation of these layers.
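The Monte Carlo view of convolution can be illustrated with a small sketch: each neighbor's contribution is divided by an estimate of the local sample density, so that densely sampled regions do not dominate the estimate. Here the kernel is a fixed truncated Gaussian rather than the learned MLP of the paper, and the density is a simple kernel sum:

```python
import numpy as np

def mc_conv(points, feats, radius):
    """Monte Carlo convolution over non-uniformly sampled points:
    contributions are weighted by a radial kernel and divided by an
    estimated sample density, then normalized."""
    p = np.asarray(points, dtype=float).reshape(len(points), -1)
    f = np.asarray(feats, dtype=float)
    d = np.linalg.norm(p[:, None] - p[None, :], axis=-1)
    w = np.exp(-(d / radius) ** 2) * (d <= radius)   # truncated Gaussian
    rho = w.sum(axis=1)                              # kernel density estimate
    num = (w * (f / rho)[None, :]).sum(axis=1)       # density-corrected sum
    den = (w / rho[None, :]).sum(axis=1)
    return num / den                                 # normalized estimate
```

With five coincident samples at one location and a single sample at another, the density correction pulls the estimate away from the oversampled cluster and toward what a balanced sampling would give, which is exactly the bias the paper's formulation removes.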
A General Illumination Model for Molecular Visualization
Hermosilla, Pedro; Vázquez, Pere-Pau; Vinacua, Àlvar; Ropinski, Timo
Computer Graphics Forum, Vol. 37, Num. 3, pp 367--378, 2018.
Several visual representations have been developed over the years to visualize molecular structures and to enable a better understanding of their underlying chemical processes. Today, the most frequently used atom-based representations are the Space-filling, Solvent Excluded Surface, Balls-and-Sticks, and Licorice models. While each of these representations has its individual benefits, when applied to large-scale models spatial arrangements can be difficult to interpret with current visualization techniques. It has been shown in the past that global illumination techniques improve the perception of molecular visualizations; unfortunately, existing approaches are tailored towards a single visual representation. We propose a general illumination model for molecular visualization that is valid across different representations. With our illumination model, it becomes possible, for the first time, to achieve consistent illumination among all atom-based molecular representations. The proposed model can further be evaluated in real time, as it employs an analytical solution to simulate diffuse light interactions between objects. To be able to derive such a solution for the rather complicated and diverse visual representations, we propose the use of regression analysis together with adapted parameter sampling strategies, as well as shape-parametrization-guided sampling, applied to the geometric building blocks of the targeted visual representations. We discuss the proposed sampling strategies and the derived illumination model, and demonstrate its capabilities when visualizing several dynamic molecules.
Top-down model fitting for hand pose recovery in sequences of depth images
Madadi, Meysam; Escalera, Sergio; Carruesco, Alex; Andújar, Carlos; Baró, Xavier; González, Jordi
Image and Vision Computing, Vol. 79, pp 63--75, 2018.
State-of-the-art approaches to hand pose estimation from depth images have reported promising results under quite controlled conditions. In this paper we propose a two-step pipeline for recovering the hand pose from a sequence of depth images. The pipeline has been designed to deal with images taken from any viewpoint and exhibiting a high degree of finger occlusion. In a first step we initialize the hand pose using a part-based model, fitting a set of hand components in the depth images. In a second step we consider temporal data and estimate the parameters of a trained bilinear model consisting of shape and trajectory bases. We evaluate our approach on a newly created synthetic hand dataset along with the NYU and MSRA real datasets. Results demonstrate that the proposed method outperforms the most recent pose recovery approaches, including those based on CNNs.
Vázquez, Pere-Pau; Hermosilla, Pedro; Guallar, Víctor; Estrada, Jorge; Vinacua, Àlvar
Computer Graphics Forum, Vol. 37, Num. 3, pp 391--402, 2018.
The analysis of protein-ligand interactions is complex because of the many factors at play. Most current methods for visual analysis provide this information in the form of simple 2D plots, which, besides being quite space-hungry, often encode a low number of different properties. In this paper we present a system for compact 2D visualization of molecular simulations. It purposely omits most spatial information and presents physical information associated with single molecular components and their pairwise interactions through a set of 2D InfoVis tools with coordinated views, suitable interaction, and focus+context techniques to analyze large amounts of data. The system provides a wide range of motifs for elements such as protein secondary structures or hydrogen bond networks, and a set of tools for their interactive inspection, both for a single simulation and for comparing two different simulations. As a result, the analysis of protein-ligand interactions in molecular simulation trajectories is greatly facilitated.
3D4LIFE Conference Results
Andújar, Carlos; Chica, Antoni; Comino, Marc
EuroVis 2020, Eurographics/IEEE VGTC Conference on Visualization, pp 151--155, 2020.
Finding robust correspondences between images is a crucial step in photogrammetry applications. The traditional approach to visualize sparse matches between two images is to place them side by side and draw link segments connecting pixels with matching features. In this paper we present new visualization techniques for sparse correspondences between image pairs. Key ingredients of our techniques include (a) the clustering of consistent matches, (b) the optimization of the image layout to minimize occlusions due to the superimposed links, (c) a color mapping to minimize color interference among links, (d) a criterion for giving visibility priority to isolated links, (e) the bending of link segments to keep nearby links apart, and (f) the use of glyphs to facilitate the identification of matching keypoints. We show that our technique substantially reduces the clutter in the final composite image and thus makes it easier to detect and inspect both inlier and outlier matches. Potential applications include the validation of image pairs in difficult setups and the visual comparison of feature detection/matching algorithms.
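As one example for ingredient (c), a common heuristic for assigning well-separated colors to many links is the golden-ratio hue sequence, in which consecutive hues land far apart on the color wheel; this is an illustration of the idea, not necessarily the paper's exact mapping:

```python
import colorsys

def link_colors(n, s=0.65, v=0.9):
    """n RGB colors with hues spaced by the golden ratio, so that
    consecutive (and nearby) links get visually distinct colors."""
    phi = 0.6180339887498949   # fractional part of the golden ratio
    cols, h = [], 0.0
    for _ in range(n):
        h = (h + phi) % 1.0    # irrational step never revisits a hue
        cols.append(colorsys.hsv_to_rgb(h, s, v))
    return cols
```

Fixing saturation and value keeps the links at comparable salience, so visibility priority can be controlled separately (ingredient (d)) by opacity or draw order.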
Easy Authoring of Image-Supported Short Stories for 3D Scanned Cultural Heritage
Comino, Marc; Chica, Antoni; Andújar, Carlos
Eurographics Workshop on Graphics and Cultural Heritage, 2020.
Visual storytelling is a powerful tool for Cultural Heritage communication. However, traditional authoring tools either produce videos that cannot be fully integrated with 3D scanned models, or require 3D content creation skills that imply a high entry barrier for Cultural Heritage experts. In this paper we present an image-supported, video-based authoring tool allowing non-3D-experts to create rich narrative content that can be fully integrated into immersive virtual reality experiences. Given an existing 3D scanned model, each story is based on a user-provided photo or a system-proposed image. First, the system automatically registers the image against the 3D model and creates an undistorted version that will serve as a fixed background image for the story. Authors can then use their favorite presentation software to annotate or edit the image while recording their voice. The resulting video is processed automatically to detect per-frame regions of interest. At visualization time, videos are projected onto the 3D scanned model, allowing the audience to watch the narrative piece in its surrounding spatial context. We discuss multiple color blending techniques, inspired by detail textures, to provide high-resolution detail. The system uses the image-to-model registration data to find suitable locations for triggers and avatars that draw the user's attention towards the 3D model parts being referred to by the presenter. We conducted an informal user study to evaluate the quality of the immersive experience. Our findings suggest that our approach is a valuable tool for fast and easy creation of fully immersive visual storytelling experiences.
Fons, Joan; Chica, Antoni; Andújar, Carlos
GRAPP, pp 71--82, 2020.
The popularization of inexpensive 3D scanning, 3D printing, 3D publishing and AR/VR display technologies has renewed the interest in open-source tools providing the geometry processing algorithms required to clean, repair, enrich, optimize and modify point-based and polygon-based models. Nowadays, there is a large variety of such open-source tools whose user community includes 3D experts as well as 3D enthusiasts and professionals from other disciplines. In this paper we present a Python-based tool that addresses two major caveats of current solutions: the lack of easy-to-use methods for the creation of custom geometry processing pipelines (automation), and the lack of a suitable visual interface for quickly testing, comparing and sharing different pipelines, supporting rapid iterations and providing dynamic feedback to the user (demonstration). From the user's point of view, the tool is a 3D viewer with an integrated Python console from which internal or external Python code can be executed. We provide an easy-to-use but powerful API for element selection and geometry processing. Key algorithms are provided by a high-level C library exposed to the viewer via Python-C bindings. Unlike competing open-source alternatives, our tool has a minimal learning curve, and typical pipelines can be written in a few lines of Python code.
Avatars rendering and its effect on perceived realism in Virtual Reality
Rios, Àlex; Pelechano, Nuria
MARCH: Modeling and Animating Realistic Crowds and Humans; Workshop at the 3rd IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), 2020.
Immersive virtual environments have proven to be a plausible platform for multiple disciplines to simulate different types of scenarios and situations at a low cost. When participants immersed in a virtual environment experience presence, they are more likely to behave as if they were in the real world. Improving the level of realism should provide a more compelling scenario, so that users will experience higher levels of presence and thus be more likely to behave as if they were in the real world. This paper presents preliminary results of an experiment in which participants navigate through two versions of the same scenario with different levels of realism of both the environment and the avatars. Our current results, from a between-subjects experiment, show that the reported levels of quality in the visualization are not significantly different, which means that other aspects of the virtual environment and/or avatars must be taken into account in order to improve the perceived level of realism.
Colonic content assessment from MRI imaging using a semi-automatic approach
Ceballos, Victor; Monclús, Eva; Vázquez, Pere-Pau; Bendezú, Álvaro; Mego, Marianela; Merino, Xavier; Azpiroz, Fernando; Navazo, Isabel
Eurographics Workshop on Visual Computing for Biology and Medicine (EG VCBM 2019), pp 17--26, 2019.
The analysis of the morphology and content of the gut is necessary in order to achieve a better understanding of its metabolic and functional activity. Magnetic resonance imaging (MRI) has become an important imaging technique since it is able to visualize soft tissues in an undisturbed bowel using no ionizing radiation. In the last few years, MRI of gastrointestinal function has advanced substantially. However, few studies have focused on the colon, because the analysis of colonic content is time-consuming and cumbersome. This paper presents a semi-automatic segmentation tool for the quantitative assessment of the unprepared colon from MRI images. The techniques developed here have been crucial for a number of clinical experiments.
Comino, Marc; Chica, Antoni; Andújar, Carlos
CEIG - Spanish Computer Graphics Conference, pp 51--57, 2019.
Nowadays, there are multiple available range scanning technologies which can capture extremely detailed models of real-world surfaces. The result of such a process is usually a set of point clouds which can contain billions of points. While these point clouds can be used and processed offline for a variety of purposes (such as surface reconstruction and offline rendering), it is unfeasible to interactively visualize the raw point data. The most common approach is to use a hierarchical representation to render varying-size oriented splats, but this method also has its limitations, as usually a single color is encoded for each point sample. Some authors have proposed the use of color-textured splats, but these either have been designed for offline rendering or do not address the efficient encoding of image datasets into textures. In this work, we propose extending point clouds by encoding their color information into textures and using a pruning and scaling rendering algorithm to achieve interactive rendering. Our approach can be combined with hierarchical point-based representations to allow for real-time rendering of massive point clouds on commodity hardware.
Delicado, Luis; Pelechano, Nuria
ACM Conference on Motion Interaction and Games (MIG'19), pp 1--6, 2019.
Achieving realistic virtual humans is crucial in virtual reality applications and video games. Nowadays there are software and game development tools that are of great help to generate and simulate characters. They offer easy-to-use GUIs to create characters by dragging and dropping features and making small modifications. Similarly, there are tools to create animation graphs and set blending parameters, among others. Unfortunately, even though these tools are relatively user friendly, achieving natural animation transitions is not straightforward, and thus non-expert users tend to spend a large amount of time generating animations that are not completely free of artefacts. In this paper we present a method to automatically generate animation blend spaces in Unreal Engine, which offers two advantages: the first is that it provides a tool to evaluate the quality of an animation set, and the second is that the resulting graph does not depend on user skills and is thus not prone to user errors.
Escolano, Carlos; Costa-jussà, Marta R.; Lacroux, Elora; Vázquez, Pere-Pau
Conference on Empirical Methods in Natural Language Processing (EMNLP) and 9th International Joint Conference on Natural Language Processing (IJCNLP), pp 151--156, 2019.
The main alternatives nowadays to deal with sequences are Recurrent Neural Networks (RNN), Convolutional Neural Networks (CNN) architectures and the Transformer. In this context, RNNs, CNNs and Transformers have most commonly been used as encoder-decoder architectures with multiple layers in each module. Far beyond this, these architectures are the basis for the contextual word embeddings which are revolutionizing most natural language downstream applications. However, intermediate layer representations in sequence-based architectures can be difficult to interpret. To make each layer representation within these architectures more accessible and meaningful, we introduce a web-based tool that visualizes them at both the sentence and token level. We present three use cases. The first analyses gender issues in contextual word embeddings. The second and third show multilingual intermediate representations for sentences and tokens, and the evolution of these intermediate representations along the multiple layers of the decoder, in the context of multilingual machine translation.
Males, Jan; Monclús, Eva; Díaz, Jose; Vázquez, Pere-Pau
Eurographics Workshop on Visual Computing for Biology and Medicine (EG VCBM 2019)-Short papers, pp 27-31, 2019.
The colon is an organ whose constant motility poses difficulties to its analysis. Although morphological data can be successfully extracted from Computed Tomography, its radiative nature makes it only indicated for patients with disorders. Only recently have acquisition techniques that rely on the use of Magnetic Resonance Imaging matured enough to enable the generation of morphological colon data of healthy patients without preparation (i.e. administration of drugs or contrast agents). As a result, a database of colon morphological data for patients under different diets has been created. Currently, the gastroenterologists we collaborate with analyze the measured data of the gut by inspecting a set of spreadsheets. In this paper, we propose a system for the exploratory visual analysis of the whole database of morphological data at once. It provides features for the visual comparison of data correlations and the inspection of the morphological measures, as well as 3D rendering of the segmented colon models. The system relies solely on web technologies, which makes it portable even to mobile devices.
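The correlation comparison such a tool exposes boils down to pairwise Pearson correlations between morphological variables measured across patients. A minimal pure-Python sketch of that computation (the function names are hypothetical; the actual system is a web application):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def correlation_matrix(table):
    """Pairwise correlations for a table mapping each morphological
    variable name to its list of per-patient values."""
    names = sorted(table)
    return {(a, b): pearson(table[a], table[b]) for a in names for b in names}
```

A visualization front end would then render this matrix, e.g. as a color-coded heatmap, so that strongly correlated variable pairs stand out at a glance.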
The future of avatar-human interaction in VR, AR and mixed reality applications.
Pelechano, Nuria; Pettré, Julien; Chrysanthou, Yiorgos
Eurographics 2019 Think Tank, 2019.
As HMDs and AR technology become increasingly popular and cheaper, the number of applications is also rapidly increasing. An important remaining challenge in such environments is the faithful representation of virtual humanoids: not necessarily their visual appearance as much as the naturalness of their motion, behavior and responses. Correctly simulating and animating virtual humanoids for immersive VR and AR sits at the crossing of several research fields: Computer Graphics, Computer Animation, Computer Vision, Machine Learning, Virtual Reality and Mixed Reality. This Think Tank aims at discussing the integration of the latest advancements in the fields mentioned above, with the purpose of enhancing VR, AR and mixed reality for populated environments. The session should open the discussion on how these different fields could work together to achieve real breakthroughs that go beyond the current state of the art in interaction between avatars and humans.
Salvetti, Isadora; Rios, Àlex; Pelechano, Nuria
CEIG-Spanish Computer Graphics Conference (2019), pp 97--101, 2019.
Virtual navigation should be as similar as possible to how we move in the real world; however, the limitations of hardware and physical space make this a challenging problem. Tracking natural walking is only feasible when the dimensions of the virtual environment match those of the real world. The problem with most navigation techniques is that they produce motion sickness, because the observed optical flow does not match the vestibular and proprioceptive information that arises during real physical movement. Walk-in-place is a technique that can successfully reduce motion sickness without losing presence in the virtual environment. It is suitable for navigating very large virtual environments, but it is not usually needed in small virtual spaces. Most current work focuses on one specific navigation metaphor; however, we have observed that if users are given the possibility of using walk-in-place for large distances, they tend to switch to normal walking when they are in a confined virtual area (such as a small room). Therefore, in this paper we present our ongoing work on seamlessly switching between two navigation metaphors, based on leg and head tracking, to achieve a more intuitive and natural virtual navigation.
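The switching logic described above can be sketched as a per-frame decision driven by head translation and stepping amplitude. This is a hypothetical illustration with made-up threshold values, not the authors' implementation:

```python
def choose_metaphor(head_speed, step_amplitude, room_is_confined,
                    walk_threshold=0.2, step_threshold=0.05):
    """Pick a navigation metaphor for the current frame.

    head_speed: head translation speed from the HMD tracker (m/s).
    step_amplitude: vertical leg-tracker oscillation amplitude (m).
    Thresholds are illustrative placeholders, not calibrated values.

    - Real head translation inside a confined virtual room -> natural walk.
    - In-place stepping with little head translation -> walk-in-place.
    """
    if room_is_confined and head_speed > walk_threshold:
        return "natural-walk"
    if step_amplitude > step_threshold and head_speed <= walk_threshold:
        return "walk-in-place"
    return "idle"
```

A real system would additionally hysterese the decision over several frames so that tracker noise does not cause the metaphor to flicker between states.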
Agus, Marco; Gobbetti, Enrico; Marton, Fabio; Pintore, Giovanni; Vázquez, Pere-Pau
International Conference on 3D Vision, Verona, Italy, Sept. 5-8, 2018.
The hardware for mobile devices, from smartphones and tablets to mobile cameras, continues to be one of the fastest-growing areas of the technology market. Not only are mobile CPUs and GPUs rapidly increasing in power, but a variety of high-quality visual and motion sensors are also being embedded in mobile solutions. This, together with the increased availability of high-speed networks at lower prices, has opened the door to a variety of novel VR, AR, vision, and graphics applications. This half-day tutorial provides a technical introduction to the mobile graphics world spanning the hardware-software spectrum, and explores the state of the art and key advances in specific application domains. The five key areas presented are: 1) the evolution of mobile graphics capabilities; 2) current trends in GPU hardware for mobile devices; 3) the main software development systems; 4) the scalable visualization of large scenes on mobile platforms; and, finally, 5) the use of mobile capture and data fusion for 3D acquisition and reconstruction.
Andújar, Carlos; Argudo, Oscar; Besora, Isaac; Brunet, Pere; Chica, Antoni; Comino, Marc
XXVIII Spanish Computer Graphics Conference (CEIG 2018), Madrid, Spain, June 27-29, 2018, pp 25--32, 2018.
Structure-from-motion and multi-view stereo techniques jointly allow for the inexpensive scanning of 3D objects (e.g. buildings) using just a collection of images taken with commodity cameras. Despite major advances in these fields, a key limitation of dense reconstruction algorithms is that correct depth/normal values are not recovered on specular surfaces (e.g. windows) or on parts lacking image features (e.g. flat, textureless parts of a facade). Since these reflective properties are inherent to the surface being acquired, images from different viewpoints hardly contribute to solving this problem. In this paper we present a simple method for detecting, classifying and filling non-valid data regions in depth maps produced by dense stereo algorithms. Triangle meshes reconstructed from our repaired depth maps exhibit much higher quality than those produced by state-of-the-art reconstruction algorithms such as Screened Poisson-based techniques.
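A very simple instance of filling non-valid depth regions is a diffusion-style pass that replaces invalid pixels with the average of their valid 4-neighbours until the hole closes. The sketch below is only illustrative of that baseline idea; the paper additionally detects and classifies the regions before filling them:

```python
def fill_invalid_depths(depth, invalid=0.0, max_iters=100):
    """Fill invalid pixels of a 2D depth map (list of lists of floats)
    with the average of their valid 4-neighbours, iterating until
    every hole is filled or max_iters is reached."""
    rows, cols = len(depth), len(depth[0])
    depth = [row[:] for row in depth]       # work on a copy
    for _ in range(max_iters):
        holes = [(r, c) for r in range(rows) for c in range(cols)
                 if depth[r][c] == invalid]
        if not holes:
            break
        for r, c in holes:
            vals = [depth[nr][nc]
                    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= nr < rows and 0 <= nc < cols
                    and depth[nr][nc] != invalid]
            if vals:                         # fill from the hole boundary inward
                depth[r][c] = sum(vals) / len(vals)
    return depth
```

Such naive averaging smears depth discontinuities, which is one reason a practical method needs to classify regions (e.g. window vs. textureless wall) before deciding how to fill them.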
Andújar, Carlos; Brunet, Pere; Buxareu, Jerónimo; Fons, Joan; Laguarda, Narcís; Pascual, Jordi; Pelechano, Nuria
EUROGRAPHICS Workshop on Graphics and Cultural Heritage (EG GCH) . November 12-15. Viena (Austria), pp 47--56, 2018.
Virtual Reality (VR) simulations have long been proposed to allow users to explore both yet-to-be-built buildings in architectural design and ancient, remote or disappeared buildings in cultural heritage. In this paper we describe an ongoing VR project on a UNESCO World Heritage Site that addresses both scenarios simultaneously: supporting architects in the task of designing the remaining parts of a large unfinished building, and simulating the existing parts that define the environment that new designs must conform to. The main challenge for the team of architects is to advance towards the project's completion while being faithful to Gaudí's original project, since many plans, drawings and plaster models were lost. We analyze the main requirements for collaborative architectural design in such a unique scenario, describe the main technical challenges, and discuss the lessons learned after one year of use of the system.
GL-Socket: A CG Plugin-based Framework for Teaching and Assessment
Andújar, Carlos; Chica, Antoni; Fairén, Marta; Vinacua, Àlvar
EG 2018 - Education Papers, pp 25--32, 2018.
In this paper we describe a plugin-based C++ framework for teaching OpenGL and GLSL in introductory Computer Graphics courses. The main strength of the framework architecture is that student assignments are mostly independent and thus can be completed, tested and evaluated in any order. When students complete a task, the plugin interface forces a clear separation of initialization, interaction and drawing code, which in turn facilitates code reusability. Plugin code can access scene, camera, and OpenGL window methods through a simple API. The plugin interface is flexible enough to allow students to complete tasks requiring shader development, object drawing, and multiple rendering passes. Students are provided with sample plugins with basic scene drawing and camera control features. One of the plugins that the students receive contains a shader development framework with self-assessment features. We describe the lessons learned after using the tool for four years in a Computer Graphics course involving more than one hundred Computer Science students per year.
Díaz, Jose; Meruvia-Pastor, Oscar; Vázquez, Pere-Pau
22nd International Conference on Information Visualisation, IV 2018, Fisciano, Italy, July 10-13, 2018, pp 159--168, 2018.
Bar charts are among the most commonly used visualization graphs. Their main goal is to communicate quantities that can be visually compared. Since they are easy to produce and interpret, they are found in any situation where quantitative data needs to be conveyed (websites, newspapers, etc.). However, depending on the layout, the perceived values can vary substantially. For instance, previous research has shown that the positioning of bars (e.g. stacked vs. separate) may influence the accuracy of bar length ratio estimation. Other works have studied the effects of embellishments on the perception of encoded quantities. However, to the best of the authors' knowledge, the effect of perceptual elements used to reinforce the quantity depicted within the bars, such as contrast and inner lines, has not been studied in depth. In this research we present a study that analyzes the effect of several internal contrast and framing enhancements with respect to the use of basic solid bars. Our results show that the addition of minimal visual elements that are easy to implement with current technology can help users better recognize the amounts depicted by bar charts.
Fons, Joan; Monclús, Eva; Vázquez, Pere-Pau; Navazo, Isabel
XXVIII Spanish Computer Graphics Conference (CEIG 2018), Madrid, Spain, June 27-29, 2018, pp 47--50, 2018.
The recent availability of VR headsets such as the Oculus Rift or the HTC Vive, offering high-resolution displays at affordable prices, has empowered the development of immersive VR applications. In this paper we propose an immersive VR system that uses some well-known acceleration algorithms to achieve real-time rendering of volumetric datasets. Moreover, we have incorporated different basic interaction techniques to facilitate the inspection of the volume dataset. The interaction has been designed to be as natural as possible in order to achieve the most comfortable, user-friendly virtual experience. We have conducted an informal user study to evaluate user preferences. Our evaluation shows that our application is perceived as usable, easy to learn and very effective in terms of the high level of immersion achieved.
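One of the best-known acceleration algorithms for direct volume rendering is early ray termination during front-to-back compositing: marching along a ray stops once the accumulated opacity makes further samples invisible. A compact sketch of that idea (illustrative only; the abstract does not specify which acceleration algorithms the system uses):

```python
def composite_ray(samples, opacity_threshold=0.95):
    """Front-to-back alpha compositing with early ray termination.

    samples: (color, alpha) pairs along one ray, front sample first.
    Returns the composited color, the accumulated opacity, and how
    many samples were actually visited before termination.
    """
    color, alpha = 0.0, 0.0
    steps = 0
    for c, a in samples:
        color += (1.0 - alpha) * a * c   # attenuate by remaining transparency
        alpha += (1.0 - alpha) * a
        steps += 1
        if alpha >= opacity_threshold:   # ray is effectively opaque: stop
            break
    return color, alpha, steps
```

With a threshold of 0.95, a ray through uniformly semi-transparent material terminates after only a handful of samples instead of traversing the whole volume, which is what makes the technique attractive for the framerates immersive VR demands.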
Hermosilla, Pedro; Maisch, Sebastian; Vázquez, Pere-Pau; Ropinski, Timo
VCBM 18: Eurographics Workshop on Visual Computing for Biology and Medicine, Granada, Spain, September 20-21, 2018, pp 185--195, 2018.
Molecular surfaces are a commonly used representation in the analysis of molecular structures, as they provide a compact description of the space occupied by a molecule and of its accessibility. However, due to the high abstraction of the atomic data, fine-grained features are hard to identify. Moreover, these representations involve a high degree of occlusion, which prevents the identification of internal features and potentially impacts shape perception. In this paper, we present a set of techniques, inspired by the properties of translucent materials, developed to improve the perception of molecular surfaces. First, we introduce an interactive algorithm to simulate subsurface scattering for molecular surfaces, in order to improve the thickness perception of the molecule. Second, we present a technique to visualize structures just beneath the surface while still conveying relevant depth information. Lastly, we introduce reflections and refractions into our visualization to improve the shape perception of molecular surfaces. We evaluate the benefits of these methods through crowd-sourced user studies as well as feedback from several domain experts.
Orellana, Bernat; Monclús, Eva; Brunet, Pere; Navazo, Isabel; Bendezú, Álvaro; Azpiroz, Fernando
Medical Image Computing and Computer Assisted Intervention - MICCAI 2018 - 21st International Conference, Granada, Spain, September 16-20, 2018, Proceedings, Part {II}, pp 638--647, 2018.
About 50% of the patients consulting a gastroenterology clinic report symptoms without detectable cause. Clinical researchers are interested in analyzing the volumetric evolution of colon segments under the effect of different diets and diseases. These studies require noninvasive abdominal MRI scans without using any contrast agent. In this work, we propose a colon segmentation framework designed to support T2-weighted abdominal MRI scans obtained from an unprepared colon. The segmentation process is based on an efficient and accurate quasi-automatic approach that drastically reduces the specialist's interaction and effort with respect to other state-of-the-art solutions, while decreasing the overall segmentation cost. The algorithm relies on a novel probabilistic tubularity filter, the detection of the colon medial line, probabilistic information extracted from a training set, and a final unsupervised clustering. The experimental results presented show the benefits of our approach for clinical use.
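The tubularity analysis at the core of such pipelines is in the spirit of Hessian eigenvalue (Frangi-style) vesselness filters. The following per-pixel 2-D sketch only illustrates that underlying idea; the paper's filter is a custom 3-D probabilistic variant, and the parameter values here are the classic illustrative defaults, not the paper's:

```python
import math

def vesselness_2d(hxx, hxy, hyy, beta=0.5, c=15.0):
    """Frangi-style tubularity response from the Hessian at one pixel.

    hxx, hxy, hyy: second derivatives of image intensity (symmetric
    2x2 Hessian). Responds to bright tube-like structures on a dark
    background, i.e. one strongly negative eigenvalue.
    """
    # Eigenvalues of the symmetric 2x2 Hessian, sorted by magnitude.
    tr, det = hxx + hyy, hxx * hyy - hxy * hxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = sorted((tr / 2 - disc, tr / 2 + disc), key=abs)
    if l2 >= 0:                      # no bright tubular structure here
        return 0.0
    rb = abs(l1) / abs(l2)           # blobness ratio (0 for a perfect tube)
    s = math.hypot(l1, l2)           # second-order structure strength
    return (math.exp(-rb * rb / (2 * beta * beta))
            * (1 - math.exp(-s * s / (2 * c * c))))
```

A full filter evaluates this response at multiple Gaussian smoothing scales and keeps the maximum, so that tubes of different diameters (such as colon segments of varying width) all respond.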
Users locomotor behavior in Collaborative Virtual Reality
Rios, Àlex; Palomar, Marc; Pelechano, Nuria
Proceedings of the 11th Annual International Conference on Motion, Interaction, and Games, MIG 2018, Limassol, Cyprus, November 08-10, 2018, pp 1--9, 2018.
This paper presents a virtual reality experiment in which two participants share both the virtual and the physical space while performing a collaborative task. We are interested in studying the differences in human locomotor behavior between the real world and the VR scenario. For that purpose, participants performed the experiment in both the real and the virtual scenarios. In the VR case, participants can see both their own animated avatar and the avatar of the other participant in the environment. As they move, we store their trajectories to obtain information regarding speeds, clearance distances and task completion times. For the VR scenario, we also wanted to evaluate whether users were aware of subtle differences in the avatars' animations and footstep sounds. We ran the same experiment under three different conditions: (1) synchronizing the avatars' feet animation and the sound of footsteps with the movement of the participant; (2) synchronizing the animation but not the sound; and (3) synchronizing neither. The results show significant differences in users' presence questionnaires and also different trends in their locomotor behavior between the real-world and VR scenarios. However, the subtle differences in animations and sound tested in our experiment had no impact on the results of the presence questionnaires, although they had a small impact on locomotor behavior in terms of task completion times and the clearance distances kept while crossing paths.
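The trajectory-derived measures mentioned (speeds, clearance distances) are straightforward to compute from synchronized position samples. A minimal sketch with hypothetical helper names, assuming both trajectories are sampled at the same timestamps:

```python
import math

def min_clearance(traj_a, traj_b):
    """Minimum inter-participant distance over two synchronized
    trajectories, each a list of (x, y) positions sampled at the
    same instants."""
    return min(math.dist(p, q) for p, q in zip(traj_a, traj_b))

def average_speed(traj, dt):
    """Average walking speed of one trajectory, with consecutive
    samples dt seconds apart."""
    total = sum(math.dist(p, q) for p, q in zip(traj, traj[1:]))
    return total / (dt * (len(traj) - 1))
```

Comparing these scalars per condition (real vs. VR, and across the three synchronization conditions) is what reveals the behavioral trends the study reports.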
3D4life PhD Thesis Results