|CrimAnalyzer: understanding crime patterns in São Paulo City|
G. Garcia, J. Silveira, J. Poco, A. Paiva, M. Nery, C.T. Silva,
S. Adorno and L.G. Nonato
IEEE Trans. on Visualization and Computer Graphics, to appear, 2019.
Abstract: São Paulo is the largest city in South America, with criminality rates as large as the city itself. The number and type of crimes vary considerably across the city, assuming different patterns depending on the urban and social characteristics of each particular location. Previous works have mostly focused on the analysis of crimes with the intent of uncovering patterns associated with social factors, time seasonality, and urban routine activities. Those studies and tools are therefore more global, in the sense that they are not designed to investigate specific regions of the city such as particular neighborhoods, avenues, or public areas. Tools able to explore particular locations of the city are essential for domain experts to carry out their analyses in a bottom-up fashion, revealing more clearly how local urban features related to mobility, passerby behavior, and the presence of public infrastructure such as public transportation terminals and schools can influence the quantity and type of crimes. In this paper, we present CrimAnalyzer, a visualization-assisted analytic tool that allows users to analyze the behavior of crimes in specific regions of a city. The system allows users to identify local hotspots, their patterns of crimes, and how the hotspots and corresponding crime patterns change over time. CrimAnalyzer was developed in response to the demands of a team of experts in criminology and deals with three major challenges in this context: i) flexibility to explore local regions and understand their crime patterns, ii) identification of spatial crime hotspots that might not be the most prevalent ones in terms of the number of crimes but are still important enough to be investigated, and iii) understanding of the dynamics of crime patterns and types over time. The effectiveness and usefulness of the proposed system are demonstrated by qualitative and quantitative comparisons as well as case studies involving real data and run by domain experts.
The experiments show the capability of CrimAnalyzer in identifying crime-related phenomena.
|Boundary particle resampling for surface reconstruction in liquid animation|
M. Sandim, N. Oe, D. Cedrim, P. Pagliosa and A. Paiva
Computers & Graphics, 84, pp. 55-65, 2019.
Special Track of SIBGRAPI 2019
Abstract: In this paper, we present a novel adaptive particle resampling method tailored for surface reconstruction of level-sets defined by the boundary particles from a particle-based liquid simulation. The proposed approach is simple and easy to implement, and only requires the positions of the particles to identify and refine regions with small and thin fluid features accurately. The method comprises four main stages: boundary detection, feature classification, particle refinement, and surface reconstruction. For each simulation frame, firstly the free-surface particles are captured through a boundary detection method. Then, the boundary particles are classified and labeled according to the deformation and the stretching of the free-surface computed from the Principal Component Analysis (PCA) of the particle positions. The particles placed at feature regions are then refined according to their feature classification. Finally, we extract the free-surface of the zero level-set defined by the resampled boundary particles, along with its normals. In order to render the free-surface, we demonstrate how traditional methods of surface fitting from the Computer Graphics and Computational Physics literature can benefit from the proposed resampling method. Furthermore, the results shown in the paper attest to the effectiveness and robustness of our method when compared to state-of-the-art adaptive particle resampling techniques.
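The PCA-based feature classification described in the abstract can be sketched with a generic eigenvalue-ratio criterion over a particle's neighborhood (this is an illustrative stand-in, not the paper's exact classifier; the thresholds are assumptions):

```python
import numpy as np

def pca_anisotropy(neighbors):
    """Ratio of smallest to largest covariance eigenvalue of a particle's
    neighborhood: near 0 for flat/thin regions, near 1 for isotropic ones.
    A generic PCA-based feature indicator, not the paper's exact criterion."""
    c = neighbors - neighbors.mean(axis=0)
    eigvals = np.linalg.eigvalsh(c.T @ c / len(neighbors))  # ascending order
    return eigvals[0] / eigvals[-1]

rng = np.random.default_rng(0)
flat = rng.normal(size=(200, 3)) * [1.0, 1.0, 0.01]  # thin sheet-like region
blob = rng.normal(size=(200, 3))                     # isotropic cluster
print(pca_anisotropy(flat), pca_anisotropy(blob))
```

Particles whose neighborhoods score a low ratio would be candidates for refinement, since they sit on thin or strongly stretched parts of the free-surface.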
|Vessel optimal transport for automated alignment of retinal fundus images|
D. Motta, W. Casaca, and A. Paiva
IEEE Trans. on Image Processing, 28(12), pp. 6154-6168, 2019.
Abstract: Optimal transport has emerged as a promising and useful tool for supporting modern image processing applications such as medical imaging and scientific visualization. Indeed, the optimal transport theory enables great flexibility in modeling problems related to image registration, as different optimization resources can be successfully used as well as the choice of suitable matching models to align the images. In this paper, we introduce an automated framework for fundus image registration which unifies optimal transport theory, image processing tools and graph matching schemes into a functional and concise methodology. Given two ocular fundus images, we construct representative graphs which embed in their structures spatial and topological information from the eye’s blood vessels. The graphs produced are then used as input by our optimal transport model in order to establish a correspondence between their sets of nodes. Finally, geometric transformations are performed between the images so as to accomplish the registration task properly. Our formulation relies on the solid mathematical foundation of optimal transport as a constrained optimization problem, being also robust when dealing with outliers created during the matching stage. We demonstrate the accuracy and effectiveness of the present framework throughout a comprehensive set of qualitative and quantitative comparisons against several influential state-of-the-art methods on various fundus image databases.
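For equal-size node sets with uniform marginals, discrete optimal transport reduces to a linear assignment problem; the sketch below uses that reduction as a simplified stand-in for the paper's OT formulation (which additionally handles outliers and vessel-graph structure):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_nodes(src, dst):
    """Match two node sets by minimizing total squared distance.
    With uniform marginals and equal sizes, discrete optimal transport
    reduces to this linear assignment problem."""
    cost = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    return cols  # cols[i] is the dst node matched to src node i

# Toy example: dst is a shuffled, slightly perturbed copy of src.
rng = np.random.default_rng(1)
src = rng.uniform(size=(20, 2))
perm = rng.permutation(20)
dst = src[perm] + rng.normal(scale=1e-6, size=(20, 2))
match = match_nodes(src, dst)
print(np.array_equal(perm[match], np.arange(20)))  # recovered the shuffle
```

In the paper's setting the nodes would carry vessel-graph features rather than raw positions, and the transport plan need not be a one-to-one assignment.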
|Fundus image transformation revisited: towards determining more accurate registrations|
D. Motta, W. Casaca, and A. Paiva
Proceedings of IEEE CBMS 2018, pp. 227-232, 2018.
Abstract: Image registration is an important pre-processing step in several computer vision applications, being crucial in medical imaging systems where patients are examined and diagnosed almost exclusively by images. For fundus images, in which microscopic differences are significant to better support medical decisions, an accurate registration is imperative. Historically, geometric transformations derived from quadratic models have been widely used as a benchmark to perform registration on fundus images, but in this paper, we demonstrate that quadratic and other high-order mappings are not necessarily the best choices for this purpose, even for well-established state-of-the-art registration methods. From a novel overlapping metric designed to determine the best image transformation that maximizes the registration accuracy, we improve the assertiveness of several methods of the literature while still preserving the same computational burden initially reached by those methods.
|A user-friendly interactive framework for unsteady fluid flow segmentation and visualization|
D. Motta, W. Casaca, P. Pagliosa and A. Paiva
Journal of Visualization, 21(4), pp. 625-636, 2018.
Abstract: While vector fields are essential to simulate a great amount of natural phenomena, the difficulty to identify patterns and predict behaviors makes the visual segmentation in simulations an attractive and powerful tool. In this paper, we present a novel user-steered segmentation framework to cope with steady as well as unsteady vector fields on fluid flow simulations. Given a discrete vector field, our approach extracts multi-valued features from the field by exploiting its streamline structures so that these features are mapped to a visual space through a multidimensional projection technique. From an easy-to-handle interface, the user can interact with the projected data so as to partition and explore the most relevant vector features in a guidance frame of the simulation. Besides navigating and visually mining structures of interest, the interactivity with the projected data also allows us to progressively enhance the segmentation result according to the user insights. Finally, to successfully deal with unsteady simulations, the segments previously annotated by the user are used as a training set for a Support Vector Machine (SVM) approach that classifies the remaining frames in the flow. We attest the effectiveness and versatility of our methodology throughout a set of classical physical-inspired applications on fluid flow simulations as depicted in the experiment results section.
|Least-squares morphing of dynamic meshes|
A. Medalha, L. Pagliosa, A. Paiva and P. Pagliosa
Proceedings of SIBGRAPI 2017, pp. 23-30, 2017.
Abstract: Morphing is a technique that smoothly transforms a shape onto another. In this paper, we present a method for morphing of two dynamic meshes: mesh sequences representing the keyframes of animated shapes over time. The pipeline of the proposed method comprises two main stages: template-based cross-parameterization and dynamic mesh interpolation. In the cross-parameterization stage, we use a variation of least-squares (LS) meshes to provide a coarse approximation of the geometry of the source mesh onto the target mesh. In our method, the possible candidates for initial control points of the LS-mesh are detected using an approach based on the Heat Kernel Signature (HKS). Then, an iterative process of fine fitting adds new constraints in the LS-mesh processing. The cross-parameterization is performed just once for any two frames in order to establish a full correspondence between vertices of the source and target meshes. Next, we use such a correspondence in the dynamic mesh interpolation stage to produce the morphing results. The method is entirely mesh-based and does not demand the generation of skeletons, mesh segmentation or the use of any additional control structures. Moreover, it does not require the two input meshes to share the same number of vertices or triangles, or to have the same connectivity. The provided results show the robustness and effectiveness of our method.
|Depth functions as a quality measure and for steering multidimensional projections|
D. Cedrim, V. Vad, A. Paiva, E. Gröller, L.G. Nonato and A. Castelo
Computers & Graphics, 60, pp. 93-106, 2016.
Special Track of SIBGRAPI 2016
Abstract: The analysis of multidimensional data has been for many years a topic of continuous research. This type of data can be found in several different areas of science. A common task when analyzing such data is to investigate patterns by interacting with spatializations of the data in a visual domain. Understanding the relation between the underlying dataset characteristics and the technique used to provide a visual representation of such a dataset is of fundamental importance, since it can provide a better intuition on what to expect from the spatialization. Motivated by this, in this paper we propose the usage of concepts from non-parametric statistics, namely depth functions, as quality measures for spatializations. We aim to evaluate the action of multidimensional projection (MP) techniques on such an estimate. In order to investigate this, we apply both qualitative and quantitative analyses on four different multidimensional techniques selected according to the properties they aim to preserve. We evaluate them with datasets of different characteristics: synthetic, real world, high dimensional and contaminated with outliers. As a straightforward application, we propose to use depth information to guide MP techniques which rely on user intervention through control point selection and positioning, showing that even for a technique which does not intend to preserve any centrality measure, interesting results can be achieved by separating regions possibly contaminated with outliers.
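One concrete member of the depth-function family mentioned above is the spatial (L1) depth, which is cheap to compute and scores points near the center of a cloud close to 1 and outliers close to 0 (a minimal sketch; the paper evaluates depth functions more broadly):

```python
import numpy as np

def spatial_depth(point, data):
    """Spatial (L1) depth: 1 minus the norm of the average unit vector
    from `point` to the samples; high at the center of the cloud,
    low far outside it."""
    diff = data - point
    norms = np.linalg.norm(diff, axis=1, keepdims=True)
    units = diff / np.where(norms == 0, 1.0, norms)  # guard zero distances
    return 1.0 - np.linalg.norm(units.mean(axis=0))

rng = np.random.default_rng(3)
data = rng.normal(size=(500, 2))
center_depth = spatial_depth(np.zeros(2), data)
outlier_depth = spatial_depth(np.array([5.0, 0.0]), data)
print(center_depth > outlier_depth)
```

A depth estimate of this kind could flag low-depth control points, which is the spirit of the outlier-separation application described in the abstract.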
|Partial similarity of 3D shapes using cross recurrence plot|
R.U. Nakanishi, J.P. Ono, P. Pagliosa, L.G. Nonato and A. Paiva
Proceedings of SIBGRAPI 2016, pp. 448-454, 2016.
Abstract: This paper presents a novel 3D partial shape retrieval algorithm based on time-series analysis. Given a piece of a 3D shape, the proposed method encodes the shape descriptor given by the Heat Kernel Signature (HKS) as a time-series, where the time axis is the ordered sequence of vertices provided by the Fiedler vector. Finally, a similarity metric is created using a well-known tool in time-series analysis called the Cross Recurrence Plot (CRP). The good performance of our method is also attested on a large collection of shape models.
|Boundary detection in particle-based fluids|
M. Sandim, D. Cedrim, L.G. Nonato, P. Pagliosa and A. Paiva
Computer Graphics Forum, 35(2), pp. 215-224, 2016.
Proceedings of Eurographics 2016
Abstract: This paper presents a novel method to detect free-surfaces on particle-based volume representations. In contrast to most particle-based free-surface detection methods, which perform the surface identification based on physical and geometrical properties derived from the underlying fluid flow simulation, the proposed approach only demands the spatial location of the particles to properly recognize surface particles, avoiding even the use of kernels. Boundary particles are identified through a Hidden Point Removal (HPR) operator used as a visibility test. Our method is very simple, fast, easy to implement and robust to changes in the distribution of particles, even when facing large deformations of the free-surface. A set of comparisons against state-of-the-art boundary detection methods shows the effectiveness of our approach. The good performance of our method is also attested in the context of fluid flow simulation involving free-surfaces, mainly when using level-sets for rendering purposes.
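The HPR operator (Katz et al.) that the abstract relies on can be sketched in a few lines: spherically flip the points about the viewpoint and take the convex hull; hull vertices are the visible points (the radius factor and the toy geometry below are assumptions for illustration):

```python
import numpy as np
from scipy.spatial import ConvexHull

def hpr_visible(points, viewpoint, radius_factor=100.0):
    """Hidden Point Removal: spherical flipping + convex hull.
    Returns the indices of points visible from `viewpoint`."""
    p = points - viewpoint
    norms = np.linalg.norm(p, axis=1, keepdims=True)
    R = radius_factor * norms.max()
    flipped = p + 2.0 * (R - norms) * p / norms      # spherical flip
    hull = ConvexHull(np.vstack([flipped, np.zeros(points.shape[1])]))
    return set(hull.vertices) - {len(points)}        # drop the viewpoint itself

# 2D demo: particles on a circle plus one interior particle (index 50).
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
pts = np.vstack([np.c_[np.cos(theta), np.sin(theta)], [[0.0, 0.0]]])
visible = hpr_visible(pts, viewpoint=np.array([5.0, 0.0]))
print(0 in visible, 50 in visible)
```

Boundary detection would repeat this test from several viewpoints around the particle cloud and mark as boundary any particle visible from at least one of them.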
|Visualizing and interacting with kernelized data|
A. Barbosa, F. Paulovich, A. Paiva, S. Goldenstein, F. Petronetto and L.G. Nonato
IEEE Trans. on Visualization and Computer Graphics, 22(3), pp. 1314-1325, 2016.
Abstract: Kernel-based methods have experienced substantial progress in the last years, turning out to be an essential mechanism for data classification, clustering and pattern recognition. The effectiveness of kernel-based techniques, though, depends largely on the capability of the underlying kernel to properly embed data in the feature space associated with the kernel. However, visualizing how a kernel embeds the data in a feature space is not so straightforward, as the embedding map and the feature space are implicitly defined by the kernel. In this work, we present a novel technique to visualize the action of a kernel, that is, how the kernel embeds data into a high-dimensional feature space. The proposed methodology relies on a solid mathematical formulation to map kernelized data onto a visual space. Our approach is faster and more accurate than most existing methods while still allowing interactive manipulation of the projection layout, a game-changing trait that other kernel-based projection techniques do not have.
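Projecting kernelized data to a visual space can be sketched in the kernel-PCA style: double-center the Gram matrix and keep the leading eigenvectors (a baseline sketch only; the paper derives its own faster, interactive formulation):

```python
import numpy as np

def kernel_projection(K, dims=2):
    """Kernel-PCA-style projection: double-center the Gram matrix and
    keep the leading eigenvectors, scaled by sqrt of their eigenvalues."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                   # double centering
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:dims]           # top eigenpairs
    return eigvecs[:, idx] * np.sqrt(np.maximum(eigvals[idx], 0.0))

# Two well-separated clusters under an RBF kernel stay separated in 2D.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.1, (30, 5)), rng.normal(3, 0.1, (30, 5))])
K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))  # RBF Gram matrix
Y = kernel_projection(K)
print(Y.shape)
```

Note that the layout depends only on the Gram matrix K, never on explicit feature-space coordinates, which is exactly why such projections can reveal how a kernel embeds the data.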
|Particle-based fluids for viscous jet buckling|
L.F.S. Andrade, M. Sandim, F. Petronetto, P. Pagliosa and A. Paiva
Computers & Graphics, 52, pp. 106-115, 2015.
Abstract: In this paper, we introduce a novel meshfree framework for animating free surface viscous liquids with jet buckling effects, such as coiling and folding. Our method is based on Smoothed Particle Hydrodynamics (SPH) fluids and allows more realistic and complex viscous behaviors than the previous SPH frameworks in the computer animation literature. The viscous liquid is modeled by a non-Newtonian fluid flow and the variable viscosity under shear stress is achieved using a viscosity model known as the Cross model. We demonstrate the efficiency and stability of our framework in a wide variety of animations, including scenarios with arbitrary geometries and high resolution of SPH particles. The interaction of the viscous liquid with complex solid obstacles is performed using boundary particles. Our framework is able to deal with different inlet velocity profiles and geometries of the injector, as well as moving the inlet jet along trajectories given by cubic Hermite splines. Moreover, the simulation speed is significantly accelerated by using the Compute Unified Device Architecture (CUDA) computing platform. Extended version of SIBGRAPI 2014 paper.
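The Cross model mentioned in the abstract relates apparent viscosity to shear rate, interpolating between a high zero-shear viscosity and a low infinite-shear viscosity; a minimal sketch, with parameter values that are purely illustrative (not taken from the paper):

```python
import numpy as np

def cross_viscosity(shear_rate, mu0=10.0, mu_inf=0.01, K=1.0, n=1.0):
    """Cross model: mu = mu_inf + (mu0 - mu_inf) / (1 + (K * gamma)^n).
    Viscosity decreases from mu0 at rest toward mu_inf under strong shear."""
    return mu_inf + (mu0 - mu_inf) / (1.0 + (K * np.asarray(shear_rate)) ** n)

print(cross_viscosity(0.0))   # at rest: the full zero-shear viscosity
print(cross_viscosity(1e6))   # strong shear: the fluid thins out
```

In an SPH solver this scalar law would be evaluated per particle from the local shear rate, giving the variable viscosity that produces coiling and folding.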
|Concentric Radviz: visual exploration of multi-task classification|
J.P. Ono, F. Sikansi, D.C. Correa, F. Paulovich, A. Paiva and L.G. Nonato
Proceedings of SIBGRAPI 2015, pp. 165-172, 2015.
Abstract: The discovery of patterns in large data collections is a difficult task. Visualization and machine learning techniques have emerged as a way to facilitate data analysis, providing tools to uncover relevant patterns from the data. This paper presents Concentric Radviz, a general-purpose class visualization system that takes into account multi-class, multi-label and multi-task classifiers. Concentric Radviz uses a force attenuation scheme, which minimizes clutter and ambiguity in the visual layout. In addition, the user can add concentric circles to the layout in order to represent classification tasks. Our validation results and the application of Concentric Radviz to two real collections suggest that this tool can reveal important data patterns and relations. In our application, the user can interact with the visualization by selecting regions of interest according to specific criteria and changing projection parameters.
|Exploratory segmentation of vector fields using multidimensional projection|
D. Motta, M. Oliveira, P. Pagliosa, L.G. Nonato and A. Paiva
Proceedings of SIBGRAPI 2015, pp. 250-256, 2015.
Abstract: The difficulty of understanding the complex behavior of vector fields makes their visual segmentation an area of constant interest in scientific visualization. In this paper, we present a novel interactive segmentation framework for discrete vector fields. In our method, the vector field domain is partitioned into multiple regions with the same flow patterns. In order to accomplish this task, feature vectors are extracted from streamlines and mapped to a visual space using multidimensional projection. Interaction with the projected data in the visual space improves the segmentation results according to the user's knowledge. The provided results and comparisons show the flexibility and effectiveness of our framework.
|SHREC’15 Track: Non-rigid 3D Shape Retrieval|
R.U. Nakanishi, A. Paiva, L.G. Nonato, et al.
Proceedings of EG Workshop on 3D Object Retrieval, 2015.
Abstract: Non-rigid 3D shape retrieval has become a research hotspot in the communities of computer graphics, computer vision, pattern recognition, etc. In this paper, we present the results of the SHREC’15 Track: Non-rigid 3D Shape Retrieval. The aim of this track is to provide a fair and effective platform to evaluate and compare the performance of current non-rigid 3D shape retrieval methods developed by different research groups around the world. The database utilized in this track consists of 1200 3D watertight triangle meshes which are equally classified into 50 categories. All models in the same category are generated from an original 3D mesh by applying various pose transformations. The retrieval performance of a method is evaluated using 6 commonly-used measures (i.e., PR-plot, NN, FT, ST, E-measure and DCG). In total, 37 submissions from 11 groups took part in this track. The evaluation results and comparison analyses described in this paper not only show the bright future of research on non-rigid 3D shape retrieval but also point out several promising research directions in this topic.
|All-in-focus imaging technique used to improve 3d retinal fundus
D. Motta, L. Matos, A.C. Souza, R. Marcato, A. Paiva and L.A.V. Carvalho
Proceedings of ACM SAC 2015, pp. 26-31, 2015.
Abstract: In this paper we have applied the stacking technique on images from a medical device. There is an urgent need in ophthalmology for more cost-effective instrumentation for the early diagnosis of glaucoma, one of the leading causes of blindness worldwide. Current techniques involve expensive optical equipment, generally called a fundus camera, which most of the time captures a single high-resolution frame for one eye at a time. In this research we have used stereoscopic videos from a state-of-the-art 3D retinal camera, which has simpler optics and electronics when compared to current monoscopic models. Nevertheless, the cross correlation algorithms for depth computation are very sensitive to image noise and out-of-focus regions. We demonstrate the efficiency of our technique on experiments involving a sequence of images extracted from videos in simulations of optic nerves using artificial objects.
|Particle-based fluids for viscous jet buckling|
L.F.S. Andrade, M. Sandim, F. Petronetto, P. Pagliosa and A. Paiva
Proceedings of SIBGRAPI 2014, pp. 65-72, 2014.
Abstract: We present a novel meshfree technique for animating free surface viscous liquids with jet buckling effects, such as coiling and folding. Our technique is based on Smoothed Particle Hydrodynamics (SPH) fluids and allows more realistic and complex viscous behaviors than the preceding SPH frameworks in the computer animation literature. The viscous liquid is modeled by a non-Newtonian fluid flow and the variable viscosity under shear stress is achieved using a viscosity model known as the Cross model. The proposed technique is efficient and stable, and our framework can animate scenarios with high resolution of SPH particles in which the simulation speed is significantly accelerated by using the Compute Unified Device Architecture (CUDA) computing platform. This work also includes several examples that demonstrate the ability of our technique.
|Approximating implicit curves on plane and surface triangulations with affine arithmetic|
F.C. Nascimento, A. Paiva, L.H. Figueiredo and J. Stolfi
Computers & Graphics, 40, pp. 36-48, 2014.
Abstract: We present a spatially and geometrically adaptive method for computing a robust polygonal approximation of an implicit curve defined on a planar region or on a triangulated surface. Our method uses affine arithmetic to identify regions where the curve lies inside a thin strip. Unlike other interval methods, even those based on affine arithmetic, our method works on both rectangular and triangular decompositions and can use any refinement scheme that the decomposition offers. Extended version of SIBGRAPI 2012 paper.
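The adaptive subdivision idea can be sketched with plain interval arithmetic on a rectangular decomposition: cells whose interval estimate of the implicit function cannot contain zero are discarded, the rest are refined (affine arithmetic, as used in the paper, yields tighter bounds; the unit-circle example and depth are assumptions for illustration):

```python
def isq(lo, hi):
    """Interval square: tightest interval for x**2 when x is in [lo, hi]."""
    a, b = lo * lo, hi * hi
    return (0.0 if lo <= 0.0 <= hi else min(a, b)), max(a, b)

def may_contain_curve(cell):
    """Interval test for the unit circle x**2 + y**2 - 1 = 0."""
    (x0, x1), (y0, y1) = cell
    lo_x, hi_x = isq(x0, x1)
    lo_y, hi_y = isq(y0, y1)
    return lo_x + lo_y - 1.0 <= 0.0 <= hi_x + hi_y - 1.0

def subdivide(cell, depth):
    """Recursively keep cells whose interval estimate straddles zero."""
    if not may_contain_curve(cell):
        return []
    if depth == 0:
        return [cell]
    (x0, x1), (y0, y1) = cell
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    out = []
    for xs in ((x0, xm), (xm, x1)):
        for ys in ((y0, ym), (ym, y1)):
            out += subdivide((xs, ys), depth - 1)
    return out

cells = subdivide(((-2.0, 2.0), (-2.0, 2.0)), depth=5)
print(len(cells))  # only cells near the circle survive out of 4**5 leaves
```

The final polygonal approximation would then be built inside the surviving cells; the paper's contribution is making this work on triangular as well as rectangular decompositions, with affine arithmetic keeping the strip of candidate cells thin.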
|MIST: multiscale information and summaries of texts|
P. Pagliosa, R.M. Martins, D. Cedrim, A. Paiva, R. Minghim and L.G. Nonato
Proceedings of SIBGRAPI 2013, pp. 91-98, 2013.
Abstract: Combining distinct visual metaphors has been the mechanism adopted by several systems to enable the simultaneous visualization of multiple levels of information in a single layout. However, providing a meaningful layout while avoiding visual clutter is still a challenge. In this work we combine word clouds and a rigid-body simulation engine into an intuitive visualization tool that allows a user to visualize and interact with the content of document collections using a single overlap-free layout. The proposed force scheme ensures that neighboring documents are kept close to each other during and after layout changes. Each group of neighboring documents formed on the layout generates a word cloud. A multi-seeded procedure guarantees a harmonious arrangement of distinct word clouds in visual space. The visual metaphor employs lenses to represent document instances, where the size of each lens defines the importance of the document in the collection. To keep the visualization clean and intuitive, only the most relevant documents are depicted as lenses while the remaining ones are either displayed as smaller glyphs to help convey density information or simply removed from the layout. Hidden instances are moved together with their neighbors during the force-based simulation, in case they become visible later, but are not processed individually. This shadow movement avoids excess calculations by the force-based scheme, thus ensuring scalability and interactivity.
|Normal correction towards smoothing point-based surfaces|
P. Valdivia, D. Cedrim, F. Petronetto, A. Paiva and L.G. Nonato
Proceedings of SIBGRAPI 2013, pp. 187-194, 2013.
Abstract: Some of the main approaches for mesh denoising use a two-step scheme, filtering normals and later performing a point updating step to match the corrected normals. In this paper, we propose an adaptation of a two-step approach for point set surfaces, exploring three different weighting schemes. Moreover, we explore three techniques for normal estimation and how they improve the quality of the filtering process. Towards a quantitative analysis, in addition to the usual visual comparison, we evaluate the effectiveness of each combination by three measures and compare them against some state-of-the-art point-based denoising techniques.
|Mesh-free discrete Laplace-Beltrami operator|
F. Petronetto, A. Paiva, E.S. Helou, D.E. Stewart and L.G. Nonato
Computer Graphics Forum, 32(6), pp. 214-226, 2013.
Abstract: In this work we propose a new discretization method for the Laplace-Beltrami operator defined on point-based surfaces. In contrast to the existing point-based discretization techniques, our approach does not rely on any triangle mesh structure, turning out to be truly meshfree. Based on a combination of Smoothed Particle Hydrodynamics and an optimization procedure to estimate area elements, our discretization method results in accurate solutions while still being robust when facing abrupt changes in the density of points. Moreover, the proposed scheme results in numerically stable discrete operators. The effectiveness of the proposed technique is demonstrated in many practical applications. In particular, we use the eigenstructure of the discrete operator for filtering and shape segmentation. Point-based surface deformation is another application that can be easily carried out from the proposed discretization method.
|Spectral image segmentation using image decomposition and inner product-based metric|
W. Casaca, A. Paiva, E.G. Nieto, P. Joia and L.G. Nonato
Journal of Mathematical Imaging and Vision, 45(3), pp. 227-238, 2013.
Abstract: Image segmentation is an indispensable tool in computer vision applications, such as recognition, detection and tracking. In this work, we introduce a novel user-assisted image segmentation technique which combines image decomposition, an inner product-based similarity metric, and spectral graph theory into a concise and unified framework. First, we perform an image decomposition to split the image into texture and cartoon components. Then, an affinity graph is generated and the weights are assigned to its edges according to a gradient-based inner-product function. From the eigenstructure of the affinity graph, the image is partitioned through the spectral cut of the underlying graph. The computational effort of our framework is alleviated by an image coarsening process, which reduces the graph size considerably. Finally, a coarse-to-fine interpolation is applied in order to assemble the partition back onto the original image. The efficiency of the proposed methodology is attested by comparisons with state-of-the-art spectral segmentation methods through a qualitative and quantitative analysis of the results. Extended version of SIBGRAPI 2011 paper.
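The spectral cut step can be sketched on a tiny affinity graph: build the graph Laplacian and split by the sign of the Fiedler vector (a minimal illustration on a synthetic graph; the paper's affinity weights come from the cartoon component and an inner-product metric):

```python
import numpy as np

def spectral_cut(W):
    """Bipartition a weighted affinity graph by the sign of the Fiedler
    vector (eigenvector of the second-smallest Laplacian eigenvalue)."""
    L = np.diag(W.sum(axis=1)) - W          # combinatorial graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)    # ascending eigenvalues
    return eigvecs[:, 1] >= 0               # sign pattern of Fiedler vector

# Two 4-node cliques joined by a single weak edge.
W = np.zeros((8, 8))
W[:4, :4] = 1.0
W[4:, 4:] = 1.0
np.fill_diagonal(W, 0.0)
W[3, 4] = W[4, 3] = 0.1
labels = spectral_cut(W)
print(labels[:4], labels[4:])  # the two cliques fall on opposite sides
```

In the full pipeline the graph nodes would be coarsened pixels, so the eigenproblem stays small; the coarse labels are then interpolated back to the original image.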
|Approximating implicit curves on triangulations with affine arithmetic|
A. Paiva, F.C. Nascimento, L.H. Figueiredo and J. Stolfi
Proceedings of SIBGRAPI 2012, pp. 94-101, 2012.
Abstract: We present an adaptive method for computing a robust polygonal approximation of an implicit curve in the plane that uses affine arithmetic to identify regions where the curve lies inside a thin strip. Unlike other interval methods, even those based on affine arithmetic, our method works on triangulations, not only on rectangular quadtrees.
|Colorization by multidimensional projection|
W. Casaca, E.G. Nieto, C. Ferreira, G. Tavares, P. Pagliosa, F. Paulovich,
L.G. Nonato and A. Paiva
Proceedings of SIBGRAPI 2012, pp. 32-38, 2012.
Abstract: Most image colorization techniques assign colors to grayscale images by embedding image pixels into a high-dimensional feature space and applying a color pattern to each cluster of high-dimensional data. A main drawback of such an approach is that, depending on texture patterns and image complexity, clusters of similar pixels can hardly be defined automatically, rendering existing methods prone to fail. In this work we present a novel approach to colorize grayscale images that allows for user intervention. Our methodology makes use of multidimensional projection to map high-dimensional data to a visual space. Users can manipulate the projected data in the visual space so as to further improve the clusters and thus the colorization result. Different from other methods, our interactive tool is easy to use while still being flexible enough to enable local color modification. We show the effectiveness of our approach through a set of examples and comparisons against existing colorization methods.
|Class-specific metrics for multidimensional data projection applied to CBIR|
P. Joia, E.G. Nieto, J.B. Neto, W. Casaca, G. Botelho, A. Paiva and L.G. Nonato
The Visual Computer, 28(10), pp. 1027-1037, 2012.
Abstract: Content-based image retrieval is still a challenging issue due to the inherent complexity of images and the choice of the most discriminant descriptors. Recent developments in the field have introduced multidimensional projections to boost accuracy in the retrieval process, but many issues such as the introduction of pattern recognition tasks and deeper user intervention to assist the process of choosing the most discriminant features still remain unaddressed. In this paper we present a novel framework for CBIR that combines pattern recognition tasks, class-specific metrics and multidimensional projection to devise an effective and interactive image retrieval system. User interaction plays an essential role in the computation of the final multidimensional projection from which image retrieval will be attained. Results have shown that the proposed approach outperforms existing methods, turning out to be a very attractive alternative for managing image data sets. Extended version of SIBGRAPI 2011 paper.
|Sketch-based adaptive mesh augmentation using stellar operators|
A. Paiva, R. Amorim, L. Velho and M.C. Sousa
Proceedings of SIBGRAPI 2011, pp. 78-85, 2011.
Abstract: In this paper we present a new method for modeling and editing surface detail using free-form curves and a natural interface. It combines in an original way an adaptive multiresolution mesh structure with a simple, intuitive sketch-based interface. One of the novel contributions of this work is the curve-sensitive mesh resolution control, which allows the definition of a rich set of operators that locally modify the surface geometry. Furthermore, the present framework provides the basic functionality to build a complete feature-based modeling system.
|Spectral segmentation using cartoon-texture decomposition and inner product-based metric|
W. Casaca, A. Paiva and L.G. Nonato
Proceedings of SIBGRAPI 2011, pp. 266-273, 2011.
Abstract: This paper presents a user-assisted image partition technique that combines Cartoon-Texture Decomposition (CTD), an inner product-based similarity metric, and spectral cut into a unified framework. The CTD is used to first split the image into textured and texture-free components, the latter being used to define a gradient-based inner-product function. An affinity graph is then derived and weights are assigned to its edges according to the inner product-based metric. The spectral cut is computed on the affinity graph so as to partition the image. The computational burden of the spectral cut is mitigated by a fine-to-coarse image representation process, which enables moderate-size graphs that can be handled more efficiently. The partitioning can be steered interactively by changing the weights of the graph through user strokes. Weights are updated by combining the texture component computed in the first stage of our pipeline and a recent harmonic analysis technique that captures waving patterns. Finally, a coarse-to-fine interpolation is applied in order to project the partition back onto the original image. The suitable performance of the proposed methodology is attested by comparisons against state-of-the-art spectral segmentation methods.
|Projection-based image retrieval using
P. Joia, E.G. Nieto, G. Botelho, J.B. Neto, A. Paiva and L.G. Nonato
Proceedings of SIBGRAPI 2011, pp. 125-132, 2011.
Abstract: Content-based image classification/retrieval based on image descriptors has become an essential component in most database systems. However, most existing systems do not provide mechanisms that enable interactive multi-objective queries, hampering the user experience. In this paper we present a novel methodology capable of accomplishing multi-objective searches while still being interactive. Our approach relies on a combination of class-specific metrics and multidimensional projection to devise an effective and interactive image retrieval system. Besides allowing visual exploration of image data sets, the provided results and comparisons show that the proposed approach outperforms existing methods, turning out to be a very attractive alternative for managing image data sets.
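The class-specific-metric idea can be sketched simply. The example below is one plausible instance, not the paper's actual scheme: each class gets per-feature weights that emphasize features with low within-class variance, yielding a per-class weighted distance.

```python
import numpy as np

def class_specific_weights(features, labels):
    """Per-class feature weights for a class-specific distance.

    Illustrative only (the paper's weighting may differ): features
    with low variance inside a class are treated as discriminative
    for that class and receive larger weights.
    """
    weights = {}
    for c in np.unique(labels):
        var = features[labels == c].var(axis=0) + 1e-9  # avoid /0
        w = 1.0 / var
        weights[c] = w / w.sum()  # normalize weights per class
    return weights

def weighted_distance(a, b, w):
    """Weighted Euclidean distance under a class's metric."""
    return np.sqrt((w * (a - b) ** 2).sum())
```

Querying against class `c` then uses `weighted_distance(query, item, weights[c])`, so the same database is ranked differently depending on which class the user is targeting.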
F. Petronetto, A. Paiva, M. Lage, G. Tavares, H. Lopes and T. Lewiner
IEEE Trans. on Visualization and Computer Graphics, 16(2), pp. 338-349, 2010.
Abstract: Vector field analysis traditionally distinguishes conservative (curl-free) from mass-preserving (divergence-free) components. The Helmholtz-Hodge decomposition allows separating any vector field into the sum of three uniquely defined components: curl-free, divergence-free, and harmonic. This decomposition is usually achieved by mesh-based methods such as finite differences or finite elements. This work presents a new meshless approach to the Helmholtz-Hodge decomposition for the analysis of 2D discrete vector fields, embedded in the SPH particle-based framework. The proposed method is efficient and can be applied to extract features from a 2D discrete vector field and to multiphase fluid flow simulation to ensure incompressibility.
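For intuition, the decomposition itself can be sketched on a periodic grid with FFTs. This is the textbook spectral projection, not the paper's meshless SPH formulation; the function name and setup are illustrative.

```python
import numpy as np

def helmholtz_hodge_fft(vx, vy):
    """Split a periodic 2D vector field into curl-free + divergence-free parts.

    Grid-based FFT sketch of the Helmholtz-Hodge idea (the paper's
    method is meshless/SPH). On the torus the harmonic part reduces to
    the constant mean flow, which is folded into the divergence-free
    remainder here.
    """
    n, m = vx.shape
    kx = np.fft.fftfreq(m) * 2 * np.pi
    ky = np.fft.fftfreq(n) * 2 * np.pi
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0  # avoid division by zero; mean mode handled below
    Vx, Vy = np.fft.fft2(vx), np.fft.fft2(vy)
    # Project each Fourier mode onto its gradient direction: (k k^T / |k|^2) v.
    div = KX * Vx + KY * Vy
    Cx, Cy = KX * div / k2, KY * div / k2
    Cx[0, 0] = Cy[0, 0] = 0.0  # keep the mean flow in the remainder
    curl_free = np.real(np.fft.ifft2(Cx)), np.real(np.fft.ifft2(Cy))
    div_free = vx - curl_free[0], vy - curl_free[1]
    return curl_free, div_free
```

A pure gradient field such as (cos x, 0) lands entirely in the curl-free component, while a rotational field is returned untouched in the divergence-free part.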
for tone mapping in line illustrations
A. Paiva, E.V. Brazil, F. Petronetto and M.C. Sousa
The Visual Computer, 25(5-7), pp. 519-527, 2009.
Proceedings of CGI 2009
Abstract: This paper presents a novel physically-based, meshless framework for line art rendering of surfaces with complex geometry and arbitrary topology. We apply an inviscid fluid flow simulation using Smoothed Particle Hydrodynamics to compute the global velocity and cross fields over the surface model. These fields guide the automatic placement of strokes while maintaining geometric and topological coherence with the model. Target tones are matched by tonal value maps, allowing different hatching and cross-hatching effects. We demonstrate the simplicity and effectiveness of our method with sample renderings obtained for a variety of models.
viscoplastic fluid/solid simulation
A. Paiva, F. Petronetto, T. Lewiner and G. Tavares
Computer-Aided Design, 41(4), pp. 306-314, 2009.
Abstract: Simulations of viscoplastic materials are traditionally governed by continuum mechanics. The viscous behavior is typically modeled as an internal force defined by diverse physical quantities. This work introduces a fluid model to simulate the viscoplastic effect of solid materials such as plastic, wax, clay and polymer. Our method consists of modeling a solid object through a non-Newtonian fluid with high viscosity. This fluid simulation uses the Smoothed Particle Hydrodynamics method, and the viscosity is formulated using the Generalized Newtonian fluid model, which concentrates the viscoplasticity in a single parameter. Our results show clear effects of creep, melting, hardening and flowing. Extended version of SIBGRAPI 2006 paper.
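The shear-rate-dependent viscosity at the heart of such models is simple to write down. The Cross model below is one common member of the generalized Newtonian family, given as an illustration; the paper's exact formulation and parameterization may differ.

```python
def cross_viscosity(shear_rate, mu0, mu_inf, K, n):
    """Apparent viscosity of a generalized Newtonian fluid (Cross model).

    Illustrative instance of the family used for viscoplastic effects:
      mu0    -- viscosity at rest (high: material behaves like a solid)
      mu_inf -- viscosity at very high shear (low: material flows)
      K, n   -- consistency and rate index shaping the transition
    The viscosity decreases monotonically from mu0 to mu_inf as the
    local shear rate grows.
    """
    return mu_inf + (mu0 - mu_inf) / (1.0 + (K * shear_rate) ** n)
```

In an SPH step, each particle would evaluate this law with its locally estimated shear rate and feed the result into the viscous force term.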
non-Newtonian fluid animation for melting objects
A. Paiva, F. Petronetto, T. Lewiner and G. Tavares
Proceedings of SIBGRAPI 2006, pp. 78-85, 2006.
Abstract: This paper presents a new visually realistic animation technique for objects that melt and flow. It simulates viscoplastic properties of materials such as metal, plastic, wax, polymer and lava. The technique consists of modeling the object as a non-Newtonian fluid with high viscosity that transitions into a low-viscosity liquid. During the melting, the viscosity is formulated using the Generalized Newtonian fluid model, whose properties depend on the local temperature. The phase transition is then driven by the heat equation. The fluid simulation framework uses a variation of the Lagrangian method called Smoothed Particle Hydrodynamics. This paper also includes several schemes that improve the efficiency and the numerical stability of the equations.
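The heat-driven melting loop can be sketched in two pieces: a heat-equation step and a temperature-to-viscosity map. The sketch below is grid-based for simplicity (the paper evolves temperature on SPH particles), and the linear viscosity ramp is an illustrative stand-in for the paper's temperature-dependent model.

```python
import numpy as np

def heat_step(T, alpha, dt, dx):
    """One explicit finite-difference step of the 2D heat equation
    on a periodic grid. Stable only for dt <= dx**2 / (4 * alpha)."""
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
    return T + alpha * dt * lap

def viscosity_from_temperature(T, mu_solid, mu_liquid, T_melt, width):
    """Blend the high 'solid' viscosity into the low 'liquid' one as
    the local temperature crosses the melting point (linear ramp of
    width `width`; illustrative, not the paper's exact law)."""
    s = np.clip((T - T_melt) / width, 0.0, 1.0)
    return (1.0 - s) * mu_solid + s * mu_liquid
```

Each animation step would diffuse heat, update per-particle viscosity from the new temperatures, and then advance the fluid solver with those viscosities.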
|Robust adaptive meshes for implicit surfaces
A. Paiva, H. Lopes, T. Lewiner and L.H. Figueiredo
Proceedings of SIBGRAPI 2006, pp. 205-212, 2006.
Abstract: This work introduces a robust algorithm for computing good polygonal approximations of implicit surfaces, where robustness entails recovering the exact topology of the implicit surface. Furthermore, the approximate triangle mesh adapts to both the geometry and the topology of the real implicit surface. The method generates an octree subdivided according to interval evaluation of the implicit function, which guarantees robustness, and to interval automatic differentiation, which adapts the octree to the geometry of the implicit surface. The triangle mesh is then generated from that octree through an enhanced dual marching.
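The role of interval evaluation in the subdivision criterion can be illustrated with a minimal interval class: if the interval enclosure of the implicit function over a cell excludes zero, the cell provably misses the surface. This sketch omits the interval automatic differentiation the paper also uses, and all names are illustrative.

```python
class Interval:
    """Minimal interval arithmetic: enough to range-bound a polynomial
    implicit function over an axis-aligned box."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def _wrap(self, o):
        return o if isinstance(o, Interval) else Interval(o, o)
    def __add__(self, o):
        o = self._wrap(o)
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        o = self._wrap(o)
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        o = self._wrap(o)
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))

def cell_may_contain_surface(f, box):
    """Evaluate implicit f over box = ((x0,x1), (y0,y1), (z0,z1)) with
    intervals. If 0 lies outside the enclosure, the cell certainly does
    not intersect the surface and need not be subdivided further."""
    r = f(*(Interval(lo, hi) for lo, hi in box))
    return r.lo <= 0.0 <= r.hi
```

An octree builder would recurse only into cells for which `cell_may_contain_surface` returns `True`, which is what makes the resulting mesh both adaptive and topologically safe to the limits of the interval bounds.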
field reconstruction from sparse samples with applications
M. Lage, F. Petronetto, A. Paiva, H. Lopes, T. Lewiner and G. Tavares
Proceedings of SIBGRAPI 2006, pp. 297-304, 2006.
Abstract: This work presents a novel algorithm for 2D vector field reconstruction from a sparse set of point-vector pairs. Our approach subdivides the domain adaptively in order to build local piecewise polynomial approximations of the field, and uses a partition of unity to blend those local approximations together, generating a global approximation of the field. The flexibility of this scheme allows handling data from very different sources. In particular, this work presents important applications of the proposed method to velocity and acceleration field analysis, notably for fluid dynamics visualization. Front cover of the Proceedings of SIBGRAPI 2006.
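The local-fit-and-blend idea can be sketched with a moving-least-squares evaluation: fit a linear polynomial to nearby samples with smooth weights and read off the value at the query point. The paper instead blends piecewise polynomial fits over an adaptive subdivision with a partition of unity, so treat this as a related sketch, not the paper's algorithm.

```python
import numpy as np

def mls_vector_field(points, vectors, query, h=0.5):
    """Evaluate a reconstructed 2D vector field at `query` from
    scattered (point, vector) samples via moving least squares.

    Fits a linear polynomial per component with Gaussian weights
    centered at the query; `h` controls the influence radius.
    """
    d2 = ((points - query) ** 2).sum(axis=1)
    sw = np.exp(-d2 / (2.0 * h * h)) ** 0.5  # sqrt of Gaussian weights
    # Linear basis [1, dx, dy] centered at the query point.
    A = np.hstack([np.ones((len(points), 1)), points - query])
    out = []
    for comp in vectors.T:  # fit vx and vy independently
        coef, *_ = np.linalg.lstsq(A * sw[:, None], comp * sw, rcond=None)
        out.append(coef[0])  # value of the local fit at the query
    return np.array(out)
```

Because the basis is linear, any globally linear field is reproduced exactly at every query point, a useful sanity check for this class of reconstruction.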
visualization of strange attractors using affine arithmetic
A. Paiva, L.H. Figueiredo and J. Stolfi
Computers & Graphics, 30(6), pp. 1020-1026, 2006.
Abstract: We propose the use of affine arithmetic in cell-mapping methods for the robust visualization of strange attractors and show that the resulting cellular approximations converge faster than those produced by cell-mapping methods based on classical interval arithmetic.
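The advantage of affine over interval arithmetic is that affine forms track correlations between quantities. The minimal sketch below implements only addition and subtraction of affine forms (a full cell-mapping implementation also needs multiplication and nonlinear operations); the class and method names are illustrative.

```python
class Affine:
    """Minimal affine arithmetic form: x0 + sum_i x_i * eps_i,
    with each noise symbol eps_i ranging over [-1, 1]."""
    _fresh = [0]  # counter for new noise symbols

    def __init__(self, x0, terms=None):
        self.x0, self.terms = x0, dict(terms or {})

    @classmethod
    def from_interval(cls, lo, hi):
        cls._fresh[0] += 1  # allocate a new noise symbol
        return cls((lo + hi) / 2.0, {cls._fresh[0]: (hi - lo) / 2.0})

    def __add__(self, o):
        if not isinstance(o, Affine):
            return Affine(self.x0 + o, self.terms)
        t = dict(self.terms)
        for k, v in o.terms.items():  # shared symbols combine linearly
            t[k] = t.get(k, 0.0) + v
        return Affine(self.x0 + o.x0, t)

    def __neg__(self):
        return Affine(-self.x0, {k: -v for k, v in self.terms.items()})

    def __sub__(self, o):
        return self + (-o)

    def range(self):
        r = sum(abs(v) for v in self.terms.values())
        return (self.x0 - r, self.x0 + r)
```

For x in [0, 1], classical interval arithmetic bounds x - x by [-1, 1], whereas the shared noise symbol cancels exactly in affine arithmetic; it is this tighter enclosure that lets AA-based cellular approximations of attractors converge faster.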