[8] Subjective and objective quality assessment for volumetric video. Evangelos Alexiou, Yana Nehmé, Emin Zerman, Irene Viola, Guillaume Lavoué, Ali Ak, Aljosa Smolic, Patrick Le Callet, and Pablo Cesar. Chapter in Immersive Video Technologies, Giuseppe Valenzise, Martin Alain, Emin Zerman, Cagri Ozcinar (Editors), Elsevier, 2023, pp. 501-552.
[7] Quality Assessment in Computer Graphics. Guillaume Lavoué, Rafał Mantiuk. Chapter in Visual Signal Quality Assessment – Quality of Experience (QoE), C. Deng, L. Ma, W. Lin, K.N. Ngan (Editors), Springer, 2014. [Preprint](web)
[6] 3D Mesh Compression. Florent Dupont, Guillaume Lavoué, Marc Antonini. Chapter in 3D Video from Capture to Diffusion, L. Lucas, C. Loscos and Y. Remion (Editors), Wiley-ISTE, 2013.
[5] Compression de maillages 3D. Florent Dupont, Guillaume Lavoué, Marc Antonini. Chapter in Vidéo 3D : Capture, traitement et diffusion, L. Lucas, C. Loscos and Y. Remion (Editors), Hermès, 2013.
[4] Task-specific salience for object recognition. Jérôme Revaud, Guillaume Lavoué, Yasuo Ariki and Atilla Baskurt. Chapter in Innovations in Intelligent Image Analysis, Series: Studies in Computational Intelligence, Vol. 339, Halina Kwasnicka, Lakhmi C. Jain (Editors), Springer-Verlag, ISBN: 978-3-642-17933-4, February 2011. (web)
[3] Blind watermarking of three-dimensional meshes: Review, recent advances and future opportunities. Kai Wang, Guillaume Lavoué, Florence Denis and Atilla Baskurt. Chapter in Advanced Techniques in Multimedia Watermarking: Image, Video and Audio Applications, A. Al-Haj (Editor), IGI Global, ISBN: 978-1615209033, April 2010. (web)
[2] Basic background in 3D Object Processing. Guillaume Lavoué. Chapter in 3D Object Processing: Compression, Indexing and Watermarking, Jean-Luc Dugelay, Atilla Baskurt, Mohamed Daoudi (Editors), John Wiley & Sons, ISBN: 978-0-470-06542-6, pp. 5-44, April 2008. (web)
[1] 3D Compression. Guillaume Lavoué, Florent Dupont and Atilla Baskurt. Chapter in 3D Object Processing: Compression, Indexing and Watermarking, Jean-Luc Dugelay, Atilla Baskurt, Mohamed Daoudi (Editors), John Wiley & Sons, ISBN: 978-0-470-06542-6, pp. 45-86, April 2008. (web)
International Journal Papers
[43] Influence of Scenarios and Player Traits on Flow in Virtual Reality. Elise Lavoué, Sophie Villenave, Audrey Serna, Clémentine Didier, Patrick Baert, Guillaume Lavoué. IEEE Transactions on Visualization and Computer Graphics, accepted, 2023. [Paper]
[42] Modeling and hexahedral meshing of cerebral arterial networks from centerlines. Méghane Decroocq, Carole Frindel, Pierre Rougé, Makoto Ohta, and Guillaume Lavoué. Medical Image Analysis, Volume 89, 102912, 2023. [Paper][Code]
[41] Does this virtual food make me hungry? Effects of visual quality and food type in virtual reality. Florian Ramousse, Pierre Raimbaud, Patrick Baert, Clémentine Helfenstein-Didier, Aurélia Gay, Catherine Massoubre, Bogdan Galusca, and Guillaume Lavoué. Frontiers in Virtual Reality, section Virtual Reality and Human Behaviour, Volume 4, 2023. [Paper]
[39] Crafting the MPEG metrics for objective and perceptual quality assessment of Volumetric Videos. Jean-Eudes Marvie, Yana Nehmé, Danillo Graziosi, and Guillaume Lavoué. Quality and User Experience (Springer), Volume 8, Article No. 4, 2023. [Paper]
[38] Textured Mesh Quality Assessment: Large-Scale Dataset and Deep Learning-based Quality Metric. Yana Nehmé, Johanna Delanoy, Florent Dupont, Jean-Philippe Farrugia, Patrick Le Callet, Guillaume Lavoué. ACM Transactions on Graphics, Volume 42, Issue 3, Article No. 31, pp. 1-20, 2023. Presented at SIGGRAPH 2023. [Paper][Supp. Material][Code][Dataset]
[37] Progressive Compression of Triangle Meshes. Vincent Vidal, Lucas Dubouchet, Guillaume Lavoué, and Pierre Alliez. Image Processing On Line, Volume 13, pp. 1-21, 2023. [Paper][Code]
[36] Representation learning of 3D meshes using an Autoencoder in the spectral domain. Clément Lemeunier, Florence Denis, Guillaume Lavoué, Florent Dupont. Computers & Graphics (Proceedings of 3DOR 2022), Volume 107, pp. 131-143, 2022. Best paper award. Replicability Stamp. [Paper][Code]
Lossy texture compression is increasingly used to reduce GPU memory and
bandwidth consumption. However, as raised by recent studies, evaluating
the quality of compressed textures is a difficult problem. Indeed, using
Peak Signal-to-Noise Ratio (PSNR) on texture images, as is done in most
applications, may not be appropriate. In particular, there
is evidence that masking effects apply when the texture image is mapped
on a surface and combined with other textures (e.g., affecting geometry
or normal). These masking effects have to be taken into account when
compressing a set of texture maps, in order to have a real
understanding of the visual impact of the compression artifacts on the
rendered images. In this work, we present the first psychophysical
experiment investigating the perceptual impact of texture compression
on rendered images. We explore the influence of compression bit rate,
light direction, and diffuse and normal map content on the visual
impact of artifacts. The collected data reveal huge masking effects
from normal map to diffuse map artifacts and vice versa, and reveal the
weakness of PSNR applied on individual textures for evaluating
compression quality. The results also allow us to analyze the
performance and failures of image quality metrics in predicting the
visibility of these artifacts. We finally provide some recommendations
for evaluating the quality of texture compression and show a practical
application to approximating the distortion measured on a rendered 3D
shape.
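For reference, the PSNR criticized above is simply a log-scaled mean squared error computed on a texture in isolation; a minimal sketch (function name and peak default are illustrative, not from the paper):

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB; higher means closer to the reference."""
    err = reference.astype(np.float64) - distorted.astype(np.float64)
    mse = np.mean(err ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Because this score is computed per texture map in isolation, it cannot account for the cross-map masking effects (between diffuse and normal maps) reported in the study.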
Understanding the attentional behavior of the human visual system when
visualizing a rendered 3D shape is of great importance for many
computer graphics applications. Eye tracking remains the only solution
to explore this complex cognitive mechanism. Unfortunately, despite the
large number of studies dedicated to images and videos, only a few eye
tracking experiments have been conducted using 3D shapes. Thus,
potential factors that may influence the human gaze in the specific
setting of 3D rendering, are still to be understood.
In this work, we conduct two eye-tracking experiments involving 3D
shapes, with both static and time-varying camera positions. We propose
a method for mapping eye fixations (i.e., where humans gaze)
onto the 3D shapes, with the aim of producing a benchmark of 3D meshes
with fixation density maps, which is publicly available. First, the
collected data is used to study the influence of shape, camera
position, material and illumination on visual attention. We find that
material and lighting have a significant influence on attention, as
well as the camera path in the case of dynamic scenes. Then, we compare
the performance of four representative state-of-the-art mesh saliency
models in predicting ground-truth fixations using two different
metrics. We show that, even combined with a center-bias model, the
performance of 3D saliency algorithms remains poor at predicting human
fixations. To explain their weaknesses, we provide a qualitative
analysis of the main factors that attract human attention. We finally
provide a quantitative comparison of human-eye fixations and Schelling
points and show that their correlation is weak.
Geometric modifications of three-dimensional (3D) digital models are
commonplace for the purpose of efficient rendering or compact storage.
Modifications imply visual distortions that are hard to measure
numerically. They depend not only on
the model itself but also on how the model is visualized. We
hypothesize that the model’s light environment and the way
it reflects incoming light strongly influence perceived quality.
Hence, we conduct a perceptual study demonstrating that
the same modifications can be masked, or conversely highlighted, by
different light-matter interactions. Additionally, we
propose a new metric that predicts the perceived distortion of 3D
modifications for a known interaction. It operates in the
space of 3D meshes with the object’s appearance, that is, the
light emitted by its surface in any direction given a known
incoming light. Despite its simplicity, this metric outperforms 3D mesh
metrics and competes with sophisticated perceptual
image-based metrics in terms of correlation to subjective measurements.
Unlike image-based methods, it has the advantage
of being computable prior to the costly rendering steps of image
projection and rasterization of the scene for given camera
parameters.
Objective visual quality assessment of 3D models is a fundamental issue
in computer graphics. Quality assessment metrics may allow a wide range
of processes to be guided and evaluated, such as level of detail
creation, compression, filtering and so on. Most computer graphics
assets are composed of geometric surfaces on which several texture
images can be mapped to make the rendering more realistic. While some
quality assessment metrics exist for geometric surfaces, almost no
research has been conducted on the evaluation of texture-mapped 3D
models. In this context, we present a new subjective study to evaluate
the perceptual quality of textured meshes, based on a paired comparison
protocol. We introduce both texture and geometry distortions on a set
of 5 reference models to produce a database of 136 distorted models,
evaluated using two rendering protocols. Based on analysis of the
results, we propose two new metrics for visual quality assessment of
textured mesh, as optimized linear combinations of accurate geometry
and texture quality measurements. These proposed perceptual metrics
outperform their counterparts in terms of correlation with human
opinion. The database, along with the associated subjective scores,
will be made publicly available online.
In this paper, we present a progressive compression algorithm for
textured surface meshes, which is able to handle polygonal non-manifold
meshes as well as discontinuities in the texture mapping. Our method
applies iterative batched simplifications, which create high quality
levels of detail by preserving both the geometry and the texture
mapping. The main features of our algorithm are (1) generic edge
collapse and vertex split operators suited for polygonal non-manifold
meshes with arbitrary texture seam configurations, and (2) novel
geometry-driven prediction schemes and entropy reduction techniques for
efficient encoding of connectivity and texture mapping. To our
knowledge, our method is the first progressive algorithm to handle
polygonal non-manifold models. For geometry and connectivity encoding
of triangular manifolds and non-manifolds, our method is competitive
with state-of-the-art and even better at low/medium bitrates. Moreover,
our method allows progressive encoding of texture coordinates with
texture seams; it outperforms state-of-the-art approaches for texture
coordinate encoding. We also present a bit-allocation framework which
multiplexes mesh and texture refinement data using a perceptually-based
image metric, in order to optimize the quality of levels of detail.
[26] On the Efficiency of Image Metrics for Evaluating the Visual Quality of 3D Models. Guillaume Lavoué, Mohamed Chaker Larabi, Libor Vasa. IEEE Transactions on Visualization and Computer Graphics, Vol. 22, No. 8, pp. 1987-1999, 2016. [Paper] (Copyright IEEE)
3D meshes are deployed in a wide range of application processes (e.g.
transmission, compression, simplification, watermarking and so on)
which inevitably introduce geometric distortions that may alter the
visual quality of the rendered data. Hence, efficient model-based
perceptual metrics, operating on the geometry of the meshes being
compared, have been recently introduced to control and predict these
visual artifacts. However, since the 3D models are ultimately
visualized on 2D screens, it seems legitimate to use images of the
models (i.e. snapshots from different viewpoints) to evaluate their
visual fidelity. In this work we investigate the use of image metrics
to assess the visual quality of 3D models. For this goal, we conduct a
wide-ranging study involving several 2D metrics, rendering algorithms,
lighting conditions and pooling algorithms, as well as several mean
opinion score databases. The collected data allow us (1) to determine
the best set of parameters to use for this image-based quality
assessment approach and (2) to compare this approach to the
best-performing model-based metrics and determine for which use cases
each is best suited. We conclude by exploring several applications that
illustrate the benefits of image-based quality assessment.
We propose a novel high-level signature for continuous semantic
description of 3D shapes. Given an approximately segmented and labeled
3D mesh, our descriptor consists of a set of geodesic distances to the
different semantic labels. This local multidimensional signature
effectively captures both the semantic information (and relationships
between labels) and the underlying geometry and topology of the shape.
We illustrate its benefits on two applications: automatic
semantic labeling, seen as an inverse problem along with
supervised-learning, and semantic-aware shape editing for
which the isocurves of our harmonic description are particularly
relevant.
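The signature described above (one geodesic distance per semantic label) can be approximated on a mesh's vertex graph with multi-source Dijkstra, one run per label; a minimal sketch in which the graph representation and function names are illustrative, not the paper's implementation:

```python
import heapq

def geodesic_to_label(adj, seeds):
    """Approximate geodesic distance from every vertex to a seed set,
    via multi-source Dijkstra on the mesh edge graph.
    adj: {vertex: [(neighbor, edge_length), ...]}; seeds: vertices of one label."""
    dist = {v: float("inf") for v in adj}
    heap = []
    for s in seeds:
        dist[s] = 0.0
        heapq.heappush(heap, (0.0, s))
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue  # stale queue entry
        for u, w in adj[v]:
            nd = d + w
            if nd < dist[u]:
                dist[u] = nd
                heapq.heappush(heap, (nd, u))
    return dist

def semantic_signature(adj, label_seeds, vertex):
    """One geodesic distance per semantic label, in a fixed label order."""
    return [geodesic_to_label(adj, seeds)[vertex]
            for _, seeds in sorted(label_seeds.items())]
```

Edge-graph shortest paths only approximate true surface geodesics, but they capture the same label-relative structure the descriptor relies on.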
3D meshes are commonly used to represent virtual surfaces and volumes.
However, their raw data representations take a large amount of space.
Hence, 3D mesh compression has been an active research topic since the
mid 1990s. In 2005, two very good review articles describing the
pioneering works were published. Yet, new technologies have emerged
since then. In this article, we summarize the early works and put the
focus on these novel approaches. We classify and describe the
algorithms, evaluate their performance, and provide synthetic
comparisons. We also outline the emerging trends for future research.
567-meshes are a new type of closed and 2-manifold triangular meshes
introduced by Aghdaii et al. in 2012 [1]. Vertices with valence < 5 or
> 7 are problematic for many mesh processing tasks, such as edge
collapse or surface subdivision. However, having valence-6 vertices
everywhere is most often impossible, due to either surface topology or
surface feature preservation; that is why 567-meshes are of particular
interest.
This paper proposes a 567-remeshing algorithm that locally
retriangulates the mesh considering vertex valence, vertex budget
and mesh fidelity as a whole. This algorithm also offers the
possibility
to preserve a set of feature edges during remeshing. This
results in a framework capable of low budget 567-remeshing where
remeshed models have a much higher fidelity to the original
surface compared to the state of the art.
As applications of this work, we demonstrate that our remeshing
improves the performance of mesh regularization and progressive mesh
compression.
Almost all mesh processing procedures cause some more or less visible
changes in the appearance of objects represented by polygonal meshes.
In many cases, such as mesh watermarking, simplification or lossy
compression, the objective is to make the change in appearance
negligible, or as small as possible, given some other constraints.
Measuring the amount of distortion requires taking into account the
final purpose of the data. In many applications, the final consumer of
the data is a human observer, and therefore the perceptibility of the
introduced appearance change by a human observer should be the
criterion that is taken into account when designing and configuring the
processing algorithms. In this review, we discuss the existing
comparison metrics for static and dynamic (animated) triangle meshes.
We describe the concepts used in perception-oriented metrics used for
2D image comparison, and we show how these concepts are employed in
existing 3D mesh metrics. We describe the character of subjective data
used for evaluation of mesh metrics and provide comparison results
identifying the advantages and drawbacks of each method. Finally, we
also discuss employing the perception-correlated metrics in
perception-oriented mesh processing algorithms.
[21] A comparison of methods for non-rigid 3D shape retrieval. Zhouhui Lian, Afzal Godil, Benjamin Bustos, Mohamed Daoudi, Jeroen Hermans, Shun Kawamura, Yukinori Kurita, Guillaume Lavoué, Hien Van Nguyen, Ryutarou Ohbuchi, Yuki Ohkita, Yuya Ohishi, Fatih Porikli, Martin Reuter, Ivan Sipiran, Dirk Smeets, Paul Suetens, Hedi Tabia, Dirk Vandermeulen. Pattern Recognition, vol. 46, No. 1, pp. 449-461, 2013. [Available online] (Copyright Elsevier)
[20] Combination of bag-of-words descriptors for robust partial shape retrieval. Guillaume Lavoué. The Visual Computer, vol. 28, No. 9, pp. 931-942, 2012. [Paper] (Copyright Springer)
This paper presents a 3D shape retrieval algorithm based on the Bag of
Words (BoW) paradigm.
For a given 3D shape, the proposed approach considers
a set of feature points uniformly sampled on the surface,
each associated with a local Fourier descriptor; this
descriptor is computed in the neighborhood of each feature
point by projecting the geometry onto the eigenvectors
of the Laplace-Beltrami operator. It is very informative,
robust to connectivity and geometry changes,
and fast to compute. In a preliminary step, a visual
dictionary is built by clustering a large set of feature
descriptors; then each 3D shape is described by a
histogram of occurrences of these visual words, hence
discarding any spatial information. A spatially-sensitive
algorithm is also presented, in which the 3D shape is described
by a histogram of pairs of visual words. We
show that these two approaches are complementary and
can be combined to improve the performance and the
robustness of the retrieval. Performance has been compared against
recent state-of-the-art methods on several different datasets. For
global shape retrieval, our combined approach is comparable to these
recent works; however, it clearly outperforms them in the case of
partial shape retrieval.
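The pipeline above (build a dictionary by clustering descriptors, then describe each shape as a histogram of word occurrences) can be sketched as follows; plain Lloyd k-means stands in for the clustering step, and all names are illustrative:

```python
import numpy as np

def build_dictionary(descriptors, k, iters=20, seed=0):
    """Cluster a large set of local descriptors into k visual words (Lloyd k-means)."""
    pts = np.asarray(descriptors, dtype=np.float64)
    rng = np.random.default_rng(seed)
    words = pts[rng.choice(len(pts), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign each descriptor to its nearest word, then recompute centroids
        labels = np.argmin(((pts[:, None, :] - words[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = pts[labels == j]
            if len(members):
                words[j] = members.mean(axis=0)
    return words

def bow_histogram(shape_descriptors, words):
    """Describe one shape as a normalized histogram of visual-word occurrences."""
    pts = np.asarray(shape_descriptors, dtype=np.float64)
    labels = np.argmin(((pts[:, None, :] - words[None, :, :]) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(words)).astype(np.float64)
    return hist / hist.sum()
```

The spatially-sensitive variant replaces the histogram of single words with a histogram over pairs of words, restoring some spatial information.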
[19] Rate-distortion optimization for progressive compression of 3D mesh with color attributes. Ho Lee, Guillaume Lavoué and Florent Dupont. The Visual Computer, vol. 28, No. 2, pp. 137-153, 2012. [Paper][Code] (Copyright Springer)
We propose a new lossless progressive compression algorithm based on
rate-distortion optimization for meshes with color attributes; the
quantization
precision of both the geometry and the color information is adapted to
each intermediate mesh during the
encoding/decoding process. This quantization precision
can either be optimally determined with the use of a
mesh distortion measure or be quasi-optimally decided
based on an analysis of the mesh complexity in order to
reduce the calculation time. Furthermore, we propose
a new metric which estimates the geometry and color
importance of each vertex during the simplification in
order to preserve faithfully the feature elements. Experimental results
show that our method outperforms the
state-of-the-art algorithm for colored meshes and competes with the
most efficient algorithms for non-colored meshes.
This
survey paper
presents recent advances in
evaluating and measuring the perceived visual quality of 3D
polygonal models. The paper analyzes the general process of
objective quality assessment metrics and subjective user evaluation
methods and presents a taxonomy of existing solutions.
Simple geometric error computed directly on the 3D models does
not necessarily reflect the perceived visual quality; therefore,
integrating perceptual issues for 3D quality assessment is of
great significance. This paper discusses existing metrics, including
perceptually based ones, computed either on 3D data or on 2D
projections, and evaluates their performance for their correlation
with existing subjective studies.
This paper presents a 3D-mesh segmentation algorithm based on a
learning approach. A large database of manually segmented 3D-meshes is
used to learn a boundary edge function. The function is learned using a
classifier
which automatically selects from a pool of geometric features the most
relevant ones to detect candidate boundary
edges. We propose a processing pipeline that produces smooth closed
boundaries using this edge function. This
pipeline successively selects a set of candidate boundary contours,
closes them and optimizes them using a
snake movement. Our algorithm was evaluated quantitatively using two
different segmentation benchmarks and
was shown to outperform the most recent algorithms from the state of
the art.
[16] A Multiscale Metric for 3D Mesh Visual Quality Assessment. Guillaume Lavoué. Computer Graphics Forum (Proceedings of Eurographics Symposium on Geometry Processing 2011), vol. 30, No. 5, pp. 1427-1437, 2011. [Paper][Code] (Copyright Wiley) [Erratum]
Many processing operations are nowadays applied on 3D meshes like
compression, watermarking, remeshing and so forth; these processes are
mostly driven and/or evaluated using simple distortion measures like
the Hausdorff
distance and the root mean square error; however, these measures do not
correlate with human visual perception, even though the visual quality
of the processed meshes is a crucial issue. In
that context we introduce a full-reference
3D mesh quality metric; this metric can compare two meshes with
arbitrary connectivity or sampling density and
produces a score that predicts the distortion visibility between them;
a visual distortion map is also created. Our
metric outperforms its counterparts from the state of the art in terms
of correlation with mean opinion scores coming from subjective
experiments on three existing databases. Additionally, we present an
application of this new metric to the improvement of rate-distortion
evaluation of recent progressive compression algorithms.
[15] Joint Reversible Watermarking and Progressive Compression of 3D Meshes. Ho Lee, Cagatay Dikici, Guillaume Lavoué and Florent Dupont. The Visual Computer (35 best papers from Computer Graphics International 2011), vol. 27, No. 6-8, pp. 781-792, 2011. [Paper][Code] (Copyright Springer)
A new reversible 3D mesh watermarking scheme is proposed in
conjunction with progressive compression. Progressive 3D mesh
compression permits a
progressive refinement of the model from a coarse to
a fine representation by using different levels of detail
(LoDs). A reversible watermark is embedded into all
refinement levels such that (1) the refinement levels are
copyright protected, and (2) an authorized user is able
to reconstruct the original 3D model after watermark
extraction; hence the scheme is reversible. The progressive
compression considers a connectivity-driven algorithm to choose
the vertices that are to be refined for each LoD. The
proposed watermarking algorithm modifies the geometry
information of these vertices based on a histogram bin
shifting technique. An authorized user can extract
the watermark in each LoD and recover the original
3D mesh, while an unauthorized user who has access
to the decompression algorithm can only reconstruct a
distorted version of the 3D model. Experimental results
show that the proposed method is robust to several attack
scenarios while maintaining a good compression ratio.
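The histogram-bin-shifting primitive behind this reversible embedding can be illustrated on a plain list of integers (the paper applies it to vertex geometry within each LoD); this is a generic textbook sketch, not the authors' code, and it assumes a known peak bin and an empty `zero` bin above it:

```python
def hs_embed(values, bits, peak, zero):
    """Reversible histogram-shifting embed: values in (peak, zero) shift up by 1
    to open a gap, and each occurrence of `peak` carries one bit (0: stay, 1: +1)."""
    out, k = [], 0
    for v in values:
        if v == peak and k < len(bits):
            out.append(v + bits[k]); k += 1
        elif peak < v < zero:
            out.append(v + 1)
        else:
            out.append(v)
    return out

def hs_extract(values, peak, zero, n_bits):
    """Recover the payload and restore the original values exactly."""
    bits, restored, k = [], [], 0
    for v in values:
        if k < n_bits and v in (peak, peak + 1):
            bits.append(v - peak); restored.append(peak); k += 1
        elif peak < v <= zero:
            restored.append(v - 1)
        else:
            restored.append(v)
    return bits, restored
```

An unauthorized user sees only the shifted (distorted) values; extraction both reads the watermark and undoes the shift, which is the reversibility property described above.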
This paper presents a robust and blind watermarking algorithm for
three-dimensional
(3D) meshes. The watermarking primitive is an intrinsic 3D shape
descriptor: the analytic and continuous geometric volume moment. During
watermark
embedding, the input mesh is first normalized to a canonical and robust
spatial pose by using its global volume moments. Then, the normalized
mesh is
decomposed into patches and the watermark is embedded through a
modified
scalar Costa quantization of the zero-order volume moments of some
selected
candidate patches. Experimental results and comparisons with the state
of the
art demonstrate the effectiveness of the proposed approach.
Recent
advances in 3D
graphics technologies have led to an increasing use of processing
techniques on 3D meshes,
such as filtering, compression, watermarking, simplification,
deformation and so forth. Since these processes may modify the
visual appearance of the 3D objects, several metrics have been
introduced so as to properly drive or evaluate them, from classic
geometric ones such as Hausdorff distance, to more complex
perceptually-based measures. This paper presents a survey on existing
perceptually-based metrics for visual impairment of 3D objects and
provides an extensive comparison between them. In particular,
different scenarios which correspond to different perceptual and
cognitive mechanisms are analyzed. The objective is twofold: (1)
capturing the behavior of existing measures so as to help perception
researchers design new 3D metrics, and (2) providing a comparison
between them so as to inform and help computer graphics researchers
choose the appropriate tool for the design and the evaluation of their
mesh processing algorithms.
In
this paper, we
present an extensive
experimental comparison of existing similarity metrics
addressing the quality assessment problem of
mesh segmentation. We introduce a new metric named the 3D Normalized
Probabilistic Rand Index (3D-NPRI) which outperforms the
others in terms of properties and discriminative power.
This comparative study includes a subjective experiment with human
observers and is based on a corpus of manually segmented models. This
corpus is an improved version of our previous one (Benhabiles et al., 2009).
It is composed of a set
of 3D-mesh models grouped in different classes associated with several
manual ground-truth segmentations. Finally, the 3D-NPRI is applied to
evaluate six recent segmentation algorithms using our corpus and the
corpus of Chen et al. (2009).
[11] Semi-Sharp Subdivision Surface Fitting Based on Feature Lines Approximation. Guillaume Lavoué and Florent Dupont. Computers & Graphics, Vol. 33, No. 2, pp. 151-161, 2009. [video][Paper] (Copyright Elsevier)
This
paper presents
an algorithm
for approximating arbitrary polygonal meshes with subdivision surfaces,
with the objective of preserving the relevant features of the object
while searching the coarsest possible control mesh. The main idea is to
firstly extract the feature lines of the object, and secondly construct
the subdivision surface over this network. Control points are created
by approximating these lines while the connectivity is built with
respect to the anisotropy of the object. Our algorithm reinforces the
similarity between the subdivision surface and the original shape by
assigning an integer sharpness degree to each control edge, in order to
accurately reproduce the different curvature radii of the corresponding
fillets and blends.
[10] A Local Roughness Measure for 3D Meshes and its Application to Visual Masking. Guillaume Lavoué. ACM Transactions on Applied Perception, Vol. 5, No. 4, Article 21, 2009. [exe][Paper] (Copyright ACM)
3D
models are subject
to a wide variety of processing operations such as compression,
simplification
or watermarking, which may introduce some geometric artifacts on the
shape. The main issue
is to maximize the compression/simplification ratio or the watermark
strength while minimizing
these visual degradations. However, few algorithms exploit the human
visual system to hide these degradations, although perceptual
attributes could be quite relevant for this task. In particular, the
masking effect refers to the fact that one visual pattern can hide the
visibility of another. In this
context we introduce an algorithm for estimating the roughness of a 3D
mesh, as a local measure
of geometric noise on the surface. Indeed, a textured (or rough) region
is able to hide geometric
distortions much better than a smooth one. Our measure is based on
curvature analysis on
local windows of the mesh and is independent of the
resolution/connectivity of the object. The
accuracy and the robustness of our measure, together with its relevance
regarding visual masking
have been demonstrated through extensive comparisons with the state of
the art and a subjective experiment. Two applications are also
presented, in which the roughness is used to drive (and improve)
compression and watermarking algorithms, respectively.
[9] Improving Zernike Moments Comparison for Optimal Similarity and Rotation Angle Retrieval. Jérôme Revaud, Guillaume Lavoué and Atilla Baskurt. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 31, No. 4, pp. 627-636, 2009. [Paper] (Copyright IEEE)
Zernike
moments
constitute a powerful shape descriptor in terms of robustness and
description
capability. However, the classical way of comparing two Zernike
descriptors takes into account only the magnitude of the moments and
loses the phase information. The novelty
of our approach is to take
advantage of the phase information in the comparison process while
still preserving the invariance to
rotation. This new Zernike comparator provides a more accurate
similarity measure together with the
optimal rotation angle between the patterns, while keeping the same
complexity as the classical approach.
This angle information is of particular interest for many
applications, including 3D scene understanding
through images. Experiments demonstrate that our comparator outperforms
the classical one in terms of
similarity measure. In particular the robustness of the retrieval
against noise and geometric deformation
is greatly improved. Moreover, the rotation angle estimation is more
accurate than that of state-of-the-art algorithms.
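The idea of keeping the phase can be sketched numerically: rotating a pattern by an angle θ multiplies its Zernike moment of repetition m by a phase factor e^{-imθ}, so a phase-aware comparison searches over θ for the best alignment. The paper achieves this at the same complexity as the magnitude-only comparator; the sketch below simply samples candidate angles, and all names are illustrative:

```python
import numpy as np

def phase_aware_distance(a, b, m, n_angles=720):
    """a, b: complex Zernike moment vectors of two patterns; m: repetition
    index of each moment. Returns (best distance, estimated rotation angle)
    by brute-force sampling of candidate rotation angles."""
    a, b, m = np.asarray(a), np.asarray(b), np.asarray(m)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    # rotation by theta multiplies the moment of repetition m by exp(-1j*m*theta)
    rotated = b[None, :] * np.exp(-1j * np.outer(thetas, m))
    dists = np.abs(a[None, :] - rotated).sum(axis=1)  # L1 over moments
    best = int(np.argmin(dists))
    return dists[best], thetas[best]
```

Unlike the magnitude-only distance, the minimizing θ directly yields the rotation angle between the two patterns.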
[8] Hierarchical Watermarking of Semi-regular Meshes Based on Wavelet Transform. Kai Wang, Guillaume Lavoué, Florence Denis and Atilla Baskurt. IEEE Transactions on Information Forensics and Security, Vol. 3, No. 4, pp. 620-634, 2008. [Paper] (Copyright IEEE)
This
paper presents a
hierarchical watermarking
framework for semi-regular meshes. Three blind watermarks
are inserted in a semi-regular mesh with different purposes: a
geometrically robust watermark for copyright protection, a
high-capacity watermark for carrying a large amount of auxiliary
information, and a fragile watermark for content authentication.
The proposed framework is based on wavelet transform of the
semi-regular mesh. More precisely, the three watermarks are
inserted in different appropriate resolution levels obtained by
wavelet decomposition of the mesh: the robust watermark is
inserted by modifying the norms of the wavelet coefficient vectors
associated with the lowest resolution level; the fragile watermark
is embedded in the high resolution level obtained just after one
wavelet decomposition by modifying the orientations and norms
of the wavelet coefficient vectors; the high-capacity watermark
is inserted in one or several intermediate levels by considering
groups of wavelet coefficient vector norms as watermarking
primitives. Experimental results demonstrate the effectiveness
of the proposed framework: the robust watermark is able to
resist all the common geometric attacks, even with relatively
strong amplitude; the fragile watermark is robust to
content-preserving operations while sensitive to other attacks, for
which it can also provide the precise location; and the payload of the
high-capacity watermark increases rapidly with the number of
watermarking primitives.
Three-dimensional
meshes have been used more and more in industrial, medical and
entertainment
applications during the last decade. Many researchers, from both the
academic and the industrial sectors,
have become aware of the intellectual property protection and
authentication problems arising with their
increasing use. This paper gives a comprehensive survey on 3D mesh
watermarking, which is considered
an effective solution to the above two emerging problems. Our survey
covers an introduction to the
relevant state of the art, an attack-centric investigation, and a list
of existing problems and potential
solutions. First, the particular difficulties encountered while
applying watermarking on 3D meshes are
discussed, followed by a presentation and an analysis of the existing
algorithms, distinguishing
between fragile techniques and robust techniques. Since the attacks
play an important role in the design
of 3D mesh watermarking algorithms, we also provide an attack-centric
viewpoint of this state of the
art. Finally, some future working directions are pointed out especially
on the ways of devising robust
and blind algorithms and on some potentially promising watermarking
feature spaces.
(Hide...)
This
paper
presents a robust non-blind watermarking scheme for subdivision
surfaces. The algorithm works in the frequency domain, by modulating
spectral coefficients of the subdivision control mesh. The compactness
of the watermarking support (a coarse control mesh) has led us to
optimize the trade-off between watermarking redundancy (which ensures
robustness) and imperceptibility by introducing two contributions: (1)
Spectral coefficients are perturbed according to a new modulation
scheme analysing the spectrum shape and (2) the redundancy is optimized
by using error correcting codes coming from telecommunication theory.
Since the watermarked surface can be attacked in a subdivided version,
we have introduced an algorithm to retrieve the control polyhedron,
starting from a subdivided, attacked version. Experiments have shown
the high robustness of our scheme against geometry attacks such as
noise addition, quantization or non-uniform scaling and also
connectivity alterations such as remeshing or simplification. (Hide...)
[5] A
framework for
quad/triangle
subdivision surface
fitting: Application to mechanical objects Guillaume
Lavoué, Florent
Dupont
and Atilla
Baskurt, Computer
Graphics Forum, Vol. 26, No.1,
pp. 1-14, 2007. (Abstract...) [Paper] (Copyright
Wiley)
In
this
paper we present a new framework for subdivision surface approximation
of 3D models represented by polygonal meshes. Our approach,
particularly suited for mechanical or CAD parts, produces a mixed
quadrangle-triangle control mesh, optimized in terms of face and vertex
numbers while remaining independent of the connectivity of the input
mesh. Our algorithm begins with a decomposition of the object into
surface patches. Then the main idea is to approximate first the region
boundaries and then the interior data. Thus, for each patch, a first
step approximates the boundaries with subdivision curves (associated
with control polygons) and creates an initial subdivision surface by
linking the boundary control points with respect to the lines of
curvature of the target surface. Then, a second step optimizes the
initial subdivision surface by iteratively moving control points and
enriching regions according to the error distribution. The final
control mesh defining the whole model is then created by assembling the
local subdivision control meshes. This control polyhedron is much more
compact than the original mesh and visually represents the same shape
after several subdivision steps, hence it is particularly suitable for
compression and visualization tasks. Experiments conducted on several
mechanical models have proven the coherency and the efficiency of our
algorithm, compared with existing methods. (Hide...)
[4] High
rate compression of CAD meshes based on
subdivision
inversion Guillaume
Lavoué, Florent
Dupont
and Atilla
Baskurt, Annals
of
Telecommunications, Vol.
60, No.11-12, pp. 1286-1310, 2005. (Abstract...)[Paper]
In
this paper we
present a new framework, based on subdivision surface approximation,
for efficient compression and coding of 3D models represented by
polygonal meshes. Our algorithm fits the input 3D model with a
piecewise smooth subdivision surface represented by a coarse control
polyhedron, near optimal in terms of control points number and
connectivity. Our algorithm, which remains independent of the
connectivity of the input mesh, is particularly suited for meshes
issued from mechanical or CAD parts. The found subdivision control
polyhedron is much more compact than the original mesh and visually
represents the same
shape after several subdivision steps, without the artifacts or cracks
introduced by traditional lossy compression schemes. This control polyhedron is
then encoded specifically to give the final compressed stream.
Experiments conducted on several CAD models have proven the coherency
and the efficiency of our algorithm, compared with existing methods. (Hide...)
[3] A
new
subdivision based approach
for piecewise smooth approximation of 3D polygonal curves Guillaume
Lavoué, Florent
Dupont
and Atilla
Baskurt, Pattern
Recognition, Vol.
38, No.8, pp. 1139-1151,
2005. (Abstract...)[Paper]
(Copyright Elsevier)
This paper presents an algorithm dealing with the data reduction and
the approximation of 3D polygonal curves. Our method is able to
approximate efficiently a set of
straight 3D segments or points with a piecewise smooth subdivision
curve, in a near optimal way in terms of control point number.
Our algorithm is a generalization for subdivision rules, including
sharp vertex processing, of the Active B-Spline Curve developed
by Pottmann et al. We have also developed a theoretically grounded
approach, analysing curvature properties of B-Splines, which computes a
near optimal evaluation of the initial number and positions of control
points. Moreover, our original Active Footpoint Parameterization method
prevents wrong matching problems occurring particularly for
self-intersecting curves. Thus, the stability of the algorithm is
highly increased. Our method was tested on different sets of curves and
gives satisfying results with regard to approximation error, convergence
speed and compression rate. This method is in line with a larger 3D CAD
object compression scheme by piecewise subdivision surface
approximation. The objective is to fit a subdivision surface on a target
patch by first fitting its boundary with a subdivision curve whose
control polygon will represent the boundary of the surface control
polyhedron. (Hide...)
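For readers unfamiliar with subdivision curves, one refinement step of a control polygon can be sketched with Chaikin's corner-cutting rule. This is a standard scheme shown purely as an illustration; the paper's fitting uses subdivision rules that also handle sharp vertices.

```python
def chaikin_step(points):
    """One corner-cutting subdivision step (Chaikin's scheme) on a 3D
    control polygon: each edge is replaced by two points at its 1/4 and
    3/4 positions. Interior rule only; endpoints of an open curve are
    not preserved in this minimal version."""
    out = []
    for (x0, y0, z0), (x1, y1, z1) in zip(points, points[1:]):
        out.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1, 0.75 * z0 + 0.25 * z1))
        out.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1, 0.25 * z0 + 0.75 * z1))
    return out
```

Iterating this step converges to a smooth limit curve, which is why a coarse control polygon suffices to represent the approximated boundary.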
This paper presents a new and efficient algorithm for the decomposition
of 3D arbitrary triangle meshes and
particularly optimized triangulated CAD meshes. The algorithm
is based on the curvature tensor field analysis and presents
two distinct complementary steps: a region based segmentation, which is
an improvement of that presented by Lavoue et al. [Lavoue G, Dupont F,
Baskurt A. Constant curvature region decomposition
of 3D-meshes by a mixed approach vertex-triangle, J WSCG
2004;12(2):245–52]
and which decomposes the object into near constant curvature patches,
and a boundary rectification based on curvature tensor directions,
which corrects boundaries by suppressing their artefacts or
discontinuities. Experiments conducted on various models including both
CAD and natural objects, show satisfactory results. Resulting segmented
patches,
by virtue of their properties (homogeneous curvature, clean boundaries)
are particularly adapted to computer graphics tasks like parametric
or subdivision surface fitting in an adaptive compression objective. (Hide...)
[1] Object
of Interest based
visual navigation,
retrieval and semantic content identification system Khalid
Idrissi, Guillaume
Lavoué,
Julien Ricard and Atilla
Baskurt, Computer
Vision and Image Understanding,
Vol. 94, No. 1-3, pp.
271-294, 2004. (Abstract...)[Paper]
(Copyright Elsevier)
This study presents a content-based image retrieval system, IMALBUM,
based on local regions of interest called objects of interest (OOI). Each
segmented or user-selected OOI is indexed with new local adapted
descriptors associated to color, texture, and shape features. This
local approach is an efficient way to associate the local semantic
content with low-level descriptors (color, texture, shape, etc.)
computed on regions selected by the user. So the user actively takes
part in the indexing process (offline) and can use a selected OOI as a
query for the retrieval system (online). The IMALBUM system offers
original functionalities. A visual
navigation tool lets the user browse the image database when they have
no precise idea of what they are searching for.
Furthermore, when an OOI is selected as a query for retrieval, a
semantic content identification tool indicates to the user the probable
class of this unknown object. The performance of these different
tools is evaluated on several databases. (Hide...)
International
conferences
[61] Immersive
Multisensory Digital Twins: concept, methods and case study
Charles Javerliat, Pierre Raimbaud, Sophie Villenave, Pierre-Philippe
Elst, Eliott Zimmermann, Martin Guesney, Mylène Pardoen,
Patrick Baert, and Guillaume Lavoué, Workshop on Multisensory
Experiences (SENSORYX '23), At ACM IMX 2023. [paper]
[60] ReVBED:
A semi-guided virtual
environment for inducing food craving in a binge-eating therapy process
Florian Ramousse, Guillaume Lavoué, Patrick Baert, Vikesh
Bhoowabul, Séverine Fleury, Baptiste Ravey,
Aurélia Gay, Catherine Massoubre, and Clémentine
Helfenstein-Didier, CARE IMX Workshop
, At ACM IMX 2023. [paper]
[59] Adaptive
streaming of 3D content for web-based virtual reality: an
open-source prototype including several metrics and strategies
Jean-Philippe Farrugia, Luc Billaud, Guillaume Lavoué, ACM Multimedia Systems Conference
(MMSys) 2023 - Open Dataset & Software track. [paper] [code]
[58] Nebula:
An Affordable Open-Source and Autonomous Olfactory
Display for VR Headsets
Charles Javerliat, Pierre-Philippe Elst, Anne-Lise Saive, Patrick
Baert, Guillaume
Lavoué, ACM
Symposium on Virtual Reality Software and Technology (VRST),
November 2022. [paper][code
& materials] (Copyright ACM)
[57] A
Software to Visualize, Edit, Model and Mesh Vascular Networks
Méghane Decroocq, Guillaume Lavoué, Makoto
Ohta, Carole Frindel, International
Conference of the IEEE Engineering in Medicine & Biology
Society (EMBC), July 2022. [paper][code]. (Copyright
IEEE)
[56] XREcho:
A Unity plug-in to record and visualize user behavior during XR sessions
Sophie Villenave, Jonathan Cabezas, Patrick Baert, Florent Dupont,
Guillaume
Lavoué, ACM
Multimedia Systems Conference (MMSys), June
2022. [paper][code]
(Copyright ACM).
[53] PCQM: A
full-reference quality metric for colored 3D point clouds
Gabriel Meynet, Yana
Nehmé, Julie
Digne, Guillaume
Lavoué, International
Conference on Quality of Multimedia Experience (QoMEX), 2020. [Paper][Code] (Copyright
IEEE) Best Student Paper Award.
Numerous methodologies for subjective quality assessment exist in the
field of image processing. In particular, the Absolute Category Rating
with Hidden Reference (ACR-HR) and the Double Stimulus Impairment Scale
(DSIS) are considered two of the most prominent methods for assessing
the visual quality of 2D images and videos. Are these methods
valid/accurate to evaluate the perceived quality of 3D graphics data?
Is
the presence of an explicit reference necessary, due to the lack of
human
prior knowledge on 3D graphics data compared to natural images/videos?
To answer these questions, we compare these two subjective methods
(ACR-HR and DSIS) on a dataset of high-quality colored 3D models,
impaired with various distortions. These subjective experiments were
conducted in a virtual reality (VR) environment. Our results show
differences in the performance of the methods depending on the 3D
contents and the types of distortions. We show that DSIS
outperforms ACR-HR in terms of accuracy and exhibits more stable
performance. Results also yield interesting conclusions on the
importance of a reference for judging the quality of 3D graphics. We
finally provide recommendations regarding the influence of the number
of
observers on the accuracy. (Hide...)
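For context, the hidden-reference part of ACR-HR turns raw ratings into differential scores before averaging. A minimal sketch following the ITU-T P.910 convention for a 5-point scale (illustrative; the paper's exact score processing may differ):

```python
import numpy as np

def acr_hr_dmos(stimulus_ratings, hidden_ref_ratings):
    """Differential viewer scores for ACR-HR on a 5-point scale:
    DV = V(stimulus) - V(hidden reference) + 5, then averaged.
    Sketch following ITU-T P.910; clipping to [1, 5] is a common choice."""
    dv = (np.asarray(stimulus_ratings, dtype=float)
          - np.asarray(hidden_ref_ratings, dtype=float) + 5)
    return float(np.mean(np.clip(dv, 1, 5)))
```

In DSIS, by contrast, observers see the reference explicitly and rate the impairment directly, so no such differential step is needed.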
[49]
PC-MSDM: A quality metric for 3D point clouds
Gabriel Meynet, Julie Digne, Guillaume
Lavoué, International
Conference on Quality of Multimedia Experience (QoMEX), short paper,
Berlin, Germany, 2019. (Abstract...) [Paper][Code] (Copyright IEEE)
In this paper, we present PC-MSDM, an objective metric for visual
quality assessment of 3D point clouds. This full-reference metric is
based on local curvature statistics and can be viewed as an extension
for point clouds of the MSDM metric suited for 3D meshes. We evaluate
its performance on an open subjective dataset of point clouds
compressed by octree pruning; results show that the proposed metric
outperforms its counterparts in terms of correlation with mean opinion
scores. (Hide...)
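The curvature-statistics idea can be sketched as follows: compare local means and standard deviations of curvature between the two clouds using SSIM-like terms. This simplified version assumes the two clouds share point positions (the real metric projects points from one cloud onto the other) and uses brute-force neighbour search.

```python
import numpy as np

def local_curvature_stats(points, curv, k=8):
    # brute-force k-nearest-neighbour means/stds of curvature
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]
    neigh = curv[idx]
    return neigh.mean(1), neigh.std(1)

def curvature_distance(points, curv_ref, curv_dist, k=8, eps=1e-9):
    """SSIM-inspired local comparison of curvature statistics.
    Simplified sketch: both clouds are assumed to share positions,
    whereas the actual metric matches points across clouds."""
    m1, s1 = local_curvature_stats(points, curv_ref, k)
    m2, s2 = local_curvature_stats(points, curv_dist, k)
    L = np.abs(m1 - m2) / (np.maximum(m1, m2) + eps)   # "luminance" term
    C = np.abs(s1 - s2) / (np.maximum(s1, s2) + eps)   # "contrast" term
    return float(np.mean((L + C) / 2))
```

Identical clouds yield a distance of zero; growing curvature discrepancies increase the score, which is the behaviour a full-reference metric needs before being correlated with mean opinion scores.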
[48]
Least Squares Affine Transitions for Global
Parameterization
Ana Vintescu, Florent Dupont, Guillaume Lavoue, Pooran Memari, Julien
Tierny, International
Conference on Computer Graphics, Visualization and Computer Vision
(WSCG),
Plzen, Czech Republic, June 2017. (Abstract...) [Paper]
This paper presents an efficient algorithm for a global
parameterization of triangular surface meshes. In contrast
to previous techniques which achieve global parameterization through
the optimization of non-linear systems of
equations, our algorithm is solely based on solving at most two linear
equation systems, in the least-squares sense.
Therefore, in terms of running time the unfolding procedure is highly
efficient. Our approach is direct – it solves
for the planar UV coordinates of each vertex directly – hence
avoiding any numerically challenging planar reconstruction
in a post-process. This results in a robust unfolding
algorithm. Curvature prescription for user-provided
cone singularities can either be specified manually, or suggested
automatically by our approach. Experiments on a
variety of surface meshes demonstrate the runtime efficiency of our
algorithm and the quality of its unfolding. To
demonstrate the utility and versatility of our approach, we apply it to
seamless texturing. The proposed algorithm
is computationally efficient, robust and results in a parameterization
with acceptable metric distortion. (Hide...)
[47]
Conformal Factor Persistence for Fast Hierarchical Cone Extraction
Ana Vintescu, Florent Dupont, Guillaume Lavoue, Pooran Memari, Julien
Tierny, Eurographics 2017, short paper program,
Lyon,
France, 2017. (Abstract...)[Paper] (Copyright
Eurographics)
This paper presents a new algorithm for the fast extraction of
hierarchies of cone singularities for conformal surface
parameterization. Cone singularities have been shown to greatly reduce
the distortion of such parameterizations, since they locally
absorb the area distortion. Therefore, existing automatic approaches
aim at inserting cones where large area distortion can
be predicted. However, such approaches are iterative, which results in
slow computations, even often slower than the actual
subsequent parameterization procedure. This becomes even more
problematic as often the user does not know in advance the
right number of needed cones and thus needs to explore cone hierarchies
to obtain a satisfying result. Our algorithm relies on
the key observation that the local extrema of the conformal factor
already provide a good approximation of the cone singularities
extracted with previous techniques, while needing only one linear
solve where previous approaches needed one solve
per hierarchy level. We apply concepts from persistent homology to
organize very efficiently such local extrema into a global
hierarchy. Experiments demonstrate the approximation quality of our
approach quantitatively and report time-performance
improvements of one order of magnitude, which makes our technique well
suited for interactive contexts. (Hide...)
[46]
Semantic correspondence across 3D models for example-based modeling
Vincent Léon, Vincent Itier, Nicolas Bonneel, Guillaume
Lavoué, Jean-Philippe Vandeborre, Eurographics Workshop on 3D
Object Retrieval (3DOR),
Lyon,
France, 2017. (Abstract...)[Paper] (Copyright
Eurographics)
Modeling 3D shapes is a specialized skill not accessible to most novice
artists due to its complexity and tediousness. At the same time,
databases of complex models ready for use are becoming widespread, and
can help the modeling task in a process called
example-based modeling. We introduce such an example-based mesh
modeling approach which, contrary to prior work, allows
for the replacement of any localized region of a mesh by a region of
similar semantics (but different geometry) within a mesh
database. For that, we introduce a selection tool in a space of
semantic descriptors that co-selects areas of similar semantics
within the database. Moreover, this tool can be used for part-based
retrieval across the database. Then, we show how semantic
information improves the assembly process. This allows for modeling
complex meshes from a coarse geometry and a database
of more detailed meshes, and makes modeling accessible to the novice
user. (Hide...)
[45] Progressive
streaming of textured 3D models in a web browser Guillaume
Lavoué, Laurent
Chevalier, Florian
Caillaud and Florent
Dupont, ACM
SIGGRAPH Symposium on Interactive 3D Graphics and Games, Poster, Redmond,
US, February 2016. [Paper]
Formulations of the Image Decomposition Problem [8] as a Multicut
Problem (MP) w.r.t. a superpixel graph have received considerable
attention. In contrast, instances of the MP w.r.t. a pixel grid graph
have received little attention, firstly, because the MP is NP-hard and
instances w.r.t. a pixel grid graph are hard to solve in practice, and,
secondly, due to the lack of long-range terms in the objective function
of the MP. We propose a generalization of the MP with long-range terms
(LMP). We design and implement two efficient algorithms (primal
feasible heuristics) for the MP and LMP which allow us to study
instances of both problems w.r.t. the pixel grid graphs of the images
in the BSDS-500 benchmark [8]. The decompositions we obtain do not
differ significantly from the state of the art, suggesting that the LMP
is a competitive formulation of the Image Decomposition Problem.
To demonstrate the generality of the LMP, we apply it also to the Mesh
Decomposition Problem posed by the Princeton benchmark [16], obtaining
state-of-the-art decompositions. (Hide...)
Several perceptually-based quality metrics have been introduced to
predict the global impact of geometric artifacts on the visual
appearance of a 3D model. They usually produce a single score that
reflects the global level of annoyance caused by the distortions.
However, beside this global information, it is also important in many
applications to obtain information about the local visibility
of the artifacts (i.e. estimating a localized distortion measure). In
this work we present a psychophysical experiment where observers are
asked to mark areas of 3D meshes that contain noticeable distortions.
The collected per-vertex distortion maps are first used to illustrate
several perceptual mechanisms of the human visual system. They then
serve as ground-truth to evaluate the performance of well-known
geometric attributes and metrics for predicting the visibility of
artifacts. Results show that curvature-based attributes demonstrate
excellent performance. As expected, the Hausdorff distance is a poor
predictor of the perceived local distortion while the recent
perceptually-based metrics provide the best results. (Hide...)
[42] Progressive
compression of generic surface
meshes
Florian
Caillaud, Vincent
Vidal, Florent
Dupont
and Guillaume
Lavoué, Computer
Graphics
International, Short
paper, Strasbourg, France,
June 2015. (Abstract...)[Paper]
This paper presents a progressive compression method for generic
surface meshes (non-manifold and/or polygonal). Two major contributions
are proposed: (1) generic edge collapse and vertex split operators
allowing surface simplification and refinement of a mesh, whatever its
connectivity; (2) a distortion-aware
collapse clustering strategy that adapts the decimation granularity in
order to optimize the rate-distortion tradeoff. (Hide...)
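The generic edge-collapse operator at the heart of such progressive schemes can be sketched on an indexed face list. This is a simplification: the paper's operators also handle non-manifold and polygonal connectivity, and each collapse must record enough information for the inverse vertex split.

```python
def edge_collapse(faces, u, v):
    """Collapse edge (u, v): merge vertex v into u and drop faces that
    become degenerate. Illustrative sketch on triangle soup; the
    paper's generic operators cover arbitrary connectivity."""
    new_faces = []
    for f in faces:
        g = tuple(u if w == v else w for w in f)
        if len(set(g)) == len(g):   # keep only non-degenerate faces
            new_faces.append(g)
    return new_faces
```

Decimation repeatedly applies this operator, and decompression replays the inverse vertex splits to refine the mesh level by level.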
[41] Progressive
Streaming of Compressed 3D Graphics in a Web Browser Guillaume
Lavoué,
Laurent
Chevalier, Florent
Dupont, ACM
SIGGRAPH 2014 - Talk
Program,
Vancouver, Canada, August 2014. [Paper] [Video]
(Copyright ACM).
[40] Perceptual
Quality Metrics for 3D Meshes: Towards an Optimal Multi-Attribute
Computational Model Guillaume
Lavoué
, Irene
Cheng,
Anup
Basu, IEEE
International
Conference on Systems, Man, and Cybernetics (SMC),
Manchester, UK, October 2013. (Abstract...)[Paper] (Copyright
IEEE)
3D graphical data, commonly represented using triangular meshes, are
deployed in a wide range of application processes including
compression, filtering, watermarking, and simplification. These
processes often introduce geometric distortions which affect the visual
quality of the ultimate data visualization. In order to accurately
evaluate perceptual impacts caused by the distortions, assessment
metrics on 3D Mesh Visual Quality (MVQ) have been extensively discussed
in the literature. Researchers recommended various metrics to predict
the adverse effects that visual artifacts can have in applications.
Most of these metrics are based on geometric attributes, e.g.,
conventional geometric distance, Laplacian coordinates, different types
of curvature computation, and dihedral angles. We hypothesize that an
optimal combination of multiple attributes associated with a 3D mesh
surface can contribute to better perceptual prediction than single
attributes used separately. In this paper, we use two user studies to
validate our hypothesis. Our contributions are: (1) providing a
detailed analysis of the most relevant geometric attributes for mesh
quality assessment, and (2) introducing a new perceptual evaluation
metric based on multiple attributes, with the optimal combination
determined through machine learning techniques. Statistical
quantitative analysis shows that our metric delivers better results
than other state-of-the-art approaches. The proposed method is simple
to implement and fast in execution. Moreover, our framework can easily
be expanded to accommodate additional surface attributes. (Hide...)
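The attribute-combination hypothesis can be sketched with the simplest possible learner, a least-squares linear regression from per-model attribute distances to mean opinion scores. The paper determines the optimal combination with more elaborate machine-learning techniques; the function names here are illustrative.

```python
import numpy as np

def fit_attribute_weights(attrs, mos):
    """Fit a linear combination of geometric attribute distances
    (one row per model, one column per attribute) to mean opinion
    scores, in the least-squares sense. A bias column is appended."""
    A = np.column_stack([attrs, np.ones(len(attrs))])
    w, *_ = np.linalg.lstsq(A, mos, rcond=None)
    return w

def predict_quality(attrs, w):
    """Predict perceptual quality scores from attribute distances."""
    return np.column_stack([attrs, np.ones(len(attrs))]) @ w
```

Cross-validating such a fit against held-out subjective scores is the standard way to check that the multi-attribute model generalizes beyond the training stimuli.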
[39] Evaluation
of 3D Model Segmentation Techniques based on Animal Anatomy
Nasim Hajari,
Irene
Cheng, Anup
Basu, Guillaume
Lavoué, IEEE
International
Conference on Systems, Man, and Cybernetics (SMC),
Manchester, UK, October 2013. (Abstract...)[Paper] (Copyright
IEEE)
3D model decomposition is a challenging and important problem in
computer graphics. Several semantically based approaches have been
proposed in the literature; however, due to the lack of proper
evaluation criteria, comparison of these techniques is almost
impossible. In this paper we suggest to use animal anatomy as the
ground truth and compare the result of different segmentation
techniques based on that. Differing from previous approaches which
perform the evaluation based on ground truth databases created
subjectively by human observers, we consider expert knowledge on
anatomy of various animals. Based on this knowledge we specify the
ground truth for different animals and compare alternative algorithms. (Hide...)
[38] Streaming
Compressed 3D Data on the Web using JavaScript and WebGL
Guillaume
Lavoué, Laurent
Chevalier, Florent
Dupont, International
Conference on 3D Web Technology (Web3D),
San Sebastian,
Spain, June 2013. (Abstract...)[Paper] [Video]
(Copyright ACM)
With the development of Web3D technologies, the delivery and
visualization of 3D models on the web is now possible and is bound to
increase both in industry and for the general public. However, the
interactive remote visualization of 3D graphic data in a web browser
remains a challenging issue. Indeed, most existing systems suffer
from latency (due to the data downloading time) and lack of adaptation
to heterogeneous networks and client devices (i.e. the lack of levels
of details); these drawbacks seriously affect the quality of user
experience. This paper presents a technical solution for streaming and
visualization of compressed 3D data on the web. Our approach leans upon
three strong features: (1) a dedicated progressive compression
algorithm for 3D graphic data with colors producing a binary compressed
format which allows a progressive decompression with several levels of
details; (2) the introduction of a JavaScript halfedge data structure
allowing complex geometrical and topological operations on a 3D mesh;
(3) the multi-thread JavaScript / WebGL implementation of the
decompression scheme allowing 3D data streaming in a web browser.
Experiments and comparison with existing solutions show promising
results in terms of latency, adaptability and quality of user
experience.
(Hide...)
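A halfedge data structure like the JavaScript one mentioned in point (2) can be sketched as follows (shown in Python for brevity; names are illustrative, and this minimal version assumes manifold connectivity):

```python
class Halfedge:
    __slots__ = ("origin", "next", "twin", "face")

    def __init__(self, origin, face):
        self.origin, self.face = origin, face
        self.next = self.twin = None

def build_halfedges(faces):
    """Build halfedges from an indexed face list: one directed edge per
    face corner, `next` links around each face, `twin` pairs opposite
    directions of shared edges (None on boundaries)."""
    edge_map, halfedges = {}, []
    for fi, f in enumerate(faces):
        hes = [Halfedge(v, fi) for v in f]
        for i, he in enumerate(hes):
            he.next = hes[(i + 1) % len(f)]
            edge_map[(he.origin, he.next.origin)] = he
            halfedges.append(he)
    for (a, b), he in edge_map.items():
        he.twin = edge_map.get((b, a))
    return halfedges
```

With `next` and `twin` pointers in place, the topological queries needed during progressive decompression (face loops, vertex rings, edge collapses) become constant-time pointer walks.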
[37] Investigating
the Rate-Distortion Performance of a Wavelet-Based Mesh Compression
Algorithm by Perceptual and Geometric Distortion Metrics
Maja Krivokuća, Burkhard Wuensche, Waleed Abdulla, Guillaume
Lavoué, International
Conference on
Computer Graphics, Visualization and Computer Vision
(WSCG), Plzen,
Czech Republic, June
2012. [Paper]
3D mesh segmentation is a fundamental process in many applications such
as shape retrieval, compression, deformation, etc. The objective of this
track is to evaluate the performance of recent segmentation methods
using
a ground-truth corpus and an accurate similarity metric. The
ground-truth corpus is composed of 28 watertight
models, grouped in five classes (animal, furniture, hand, human and
bust) and each associated with 4 ground-truth
segmentations done by human subjects. Three research groups
participated in this track; the accuracy of their
segmentation algorithms has been evaluated and compared with 4 other
state-of-the-art methods. (Hide...)
This paper presents a precise kinematic skeleton extraction method for
3D dynamic meshes. Contrary to previous methods, our method is based on
the computation of motion boundaries instead of detecting object parts
characterized
by rigid transformations. Thanks to a learned boundary edge function,
we are able to compute efficiently
a set of motion boundaries which in fact correspond to all possible
articulations of the 3D object. Moreover, the
boundaries are detected even if the parts linked to an
object’s articulation are immobile over time. The different
boundaries are then used to extract the kinematic skeleton.
Experiments show that our algorithm produces more precise skeletons
compared to previous methods. (Hide...)
Almost all mesh processing procedures cause some more or less visible
changes in the appearance of objects represented by polygonal meshes. In
many cases, such as mesh watermarking, simplification or lossy
compression,
the objective is to make the change in appearance negligible, or as
small as possible, given some other constraints.
Measuring the amount of distortion requires taking into account the
final purpose of the data. In many
applications, the final consumer of the data is a human observer, and
therefore the perceptibility of the introduced
appearance change by a human observer should be the criterion that is
taken into account when designing and
configuring the processing algorithms.
In this review, we discuss the existing comparison metrics for static
and dynamic (animated) triangle meshes. We
describe the concepts used in perception-oriented metrics used for 2D
image comparison, and we show how these
concepts are employed in existing 3D mesh metrics. We describe the
character of subjective data used for evaluation
of mesh metrics and provide comparison results identifying the
advantages and drawbacks of each method.
Finally, we also discuss employing the perception-correlated metrics in
perception-oriented mesh processing algorithms. (Hide...)
[33]
MEPP -
3D Mesh Processing Platform
Guillaume
Lavoué, Martial
Tola,
Florent
Dupont,
International
Conference on Computer Graphics Theory and Applications (GRAPP), Rome,
Italy, February 2012. (Abstract...)[Paper]
This paper presents MEPP, an open source platform for 3D mesh
processing. This platform already contains a large set of processing
tools from classical ones (simplification, subdivision, segmentation)
to more technical
algorithms (compression, watermarking, Boolean operation, perceptual
metrics, etc.). Its main objective is to
allow a quick start for both users and developers by providing highly
detailed tutorials and simple integration
mechanisms, through a modular architecture where components are
implemented as dynamic plugins.
(Hide...)
[32]
Bag of
Words and Local Spectral Descriptor for 3D Partial Shape Retrieval
Guillaume
Lavoué, Eurographics
Workshop on 3D
Object Retrieval (3DOR), Llandudno,
UK, April 2011. (Abstract...)[Paper] (Copyright
Eurographics)
This paper presents a 3D shape retrieval algorithm based on the Bag of
Words (BoW) paradigm. For a given
3D shape, the proposed approach considers a set of feature points
uniformly sampled on the surface and associated
with local Fourier descriptors; this descriptor is computed in the
neighborhood of each feature point by
projecting the geometry onto the eigenvectors of the Laplace-Beltrami
operator, it is highly discriminative, robust
to connectivity and geometry changes and also fast to compute. In a
preliminary step, a visual dictionary is built
by clustering a large set of feature descriptors, then each 3D shape is
described by a histogram of occurrences
of these visual words. The performance of our approach has been
compared against very recent state-of-the-art
methods on several different datasets. For global shape retrieval our
approach is comparable to these recent
works, however it clearly outperforms them in the case of partial shape
retrieval. (Hide...)
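The signature-building step of the BoW pipeline can be sketched as follows, assuming a codebook already obtained by clustering local descriptors (the function name is illustrative):

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word
    (Euclidean distance to the codebook entries) and return the
    normalized occurrence histogram: the shape's BoW signature."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

Two shapes are then compared by a simple distance between their histograms, which is what makes the signature usable for both global and partial retrieval.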
[31] SHREC'11
Track: Shape retrieval on Non-Rigid 3D Watertight Meshes
Z. Lian, A. Godil, B. Bustos, M. Daoudi, J. Hermans, S. Kawamura, Y.
Kurita, G. Lavoué, H.V. Nguyen, R. Ohbuchi, Y. Ohkita, Y.
Ohishi, F. Porikli, M. Reuter, I. Sipiran, D. Smeets, P. Suetens, H.
Tabia, and D. Vandermeulen, Eurographics
Workshop on 3D
Object Retrieval (3DOR), Llandudno,
UK, April 2011.
[Paper] (Copyright
Eurographics)
[30] A
subjective
experiment for 3D-mesh segmentation evaluation
Halim Benhabiles, Guillaume
Lavoué, Jean-Philippe
Vandeborre
and Mohamed
Daoudi,
IEEE
International
Workshop
on Multimedia Signal Processing (MMSP) ,
St-Malo, France, October 2010. (Abstract...)[Paper] (Copyright
IEEE)
In this paper we present a subjective quality
assessment experiment for 3D-mesh segmentation. To this end,
we carefully designed a protocol with respect to several factors,
namely the rendering conditions, the possible interactions, the
rating range, and the number of human subjects. To carry out the
subjective experiment, more than 40 human observers have rated
a set of 250 segmentation results issued from various algorithms.
The obtained Mean Opinion Scores, which represent the
human subjects’ point of view toward the quality of each
segmentation, have then been used to evaluate both the quality of
automatic segmentation algorithms and the quality of similarity
metrics used in recent mesh segmentation benchmarking systems. (Hide...)
[29] Recognizing
and localizing individual activities through graph matching Anh-Phuong
Ta, Christian
Wolf, Guillaume
Lavoué
and Atilla
Baskurt, International
Conference on Advanced Video and Signal-Based Surveillance (AVSS),
Boston, USA,
August 2010.(Abstract...)[Paper] (Copyright
IEEE)Best
Paper Award for the Recognition Track
In this paper we tackle the problem of detecting individual human
actions in video sequences. While the most successful methods are based
on local features, which proved that they can deal with changes in
background, scale and illumination, most existing methods have two main
shortcomings: first, they are mainly based on the individual power of
spatio-temporal interest points (STIP), and therefore ignore the
spatio-temporal relationships between them. Second, these methods
mainly focus on direct classification techniques to classify the human
activities, as opposed to detection and localization. In order to
overcome these limitations, we propose a new approach, which is based
on a graph matching algorithm for activity recognition. In contrast to
most previous methods which classify entire video sequences, we design
a video matching method from two sets of ST-points for human activity
recognition. First, points are extracted, and hypergraphs are constructed from them, i.e., graphs with edges involving more than two nodes (three in our case). The activity recognition problem is then
transformed into a problem of finding instances of model graphs in the
scene graph. By matching local features instead of classifying entire
sequences, our method is able to detect multiple different activities
which occur simultaneously in a video sequence. Experiments on two
standard datasets demonstrate that our method is comparable to the
existing techniques on classification, and that it can, additionally,
detect and localize activities.
[28] Remote scientific visualization of progressive 3D meshes with X3D
Adrien Maglo, Guillaume Lavoué, Céline Hudelot, Ho Lee, Christophe Mouton, Florent Dupont, International Conference on 3D Web Technology (Web3D), Los Angeles, USA, July 2010. [Paper] (Copyright ACM)
This paper presents a framework, integrated into the X3D format, for the streaming of 3D content in the context of remote scientific visualization. A progressive mesh compression method is proposed that can handle 3D objects associated with attributes like colors, while producing high quality intermediate Levels Of Detail (LOD). Efficient adaptation mechanisms are also proposed so as to optimize the LOD management of the 3D scene according to different constraints like the network bandwidth, the device graphic capability, the display resolution and the user preferences. Experiments demonstrate the efficiency of our approach in scientific visualization scenarios.
[27] Learning an efficient and robust graph matching procedure for specific object recognition
Jérôme Revaud, Guillaume Lavoué, Yasuo Ariki, Atilla Baskurt and Jean-Michel Jolion, International Conference on Pattern Recognition (ICPR), Istanbul, Turkey, August 2010. [Paper] (Copyright IEEE)
We present a fast and robust graph matching approach for 2D specific object recognition in images.
From a small number of training images, a model graph
of the object to learn is automatically built. It contains
its local keypoints as well as their spatial proximity relationships.
Training is based on a selection of the most
efficient subgraphs using the mutual information. The
detection uses dynamic programming with a lattice and
thus is very fast. Experiments demonstrate that the proposed
method outperforms state-of-the-art specific object detectors in realistic noise conditions.
Existing action recognition approaches mainly rely on the discriminative power of individual local descriptors extracted from spatio-temporal interest points (STIP), while the geometric relationships among the local features are ignored. This paper presents new features,
called pairwise features (PWF), which encode
both the appearance and the spatio-temporal relations
of the local features for action recognition. First STIPs
are extracted, then PWFs are constructed by grouping
pairs of STIPs which are both close in space and close
in time. We propose a combination of two codebooks
for video representation. Experiments on two standard human action datasets, the KTH dataset and the Weizmann dataset, show that the proposed approach outperforms most existing methods.
[25] Scale-Invariant Proximity Graph for Fast Probabilistic Object Recognition
Jérôme Revaud, Guillaume Lavoué, Yasuo Ariki and Atilla Baskurt, ACM International Conference on Image and Video Retrieval (CIVR), Xi'an, China, July 2010. [Paper] (Copyright ACM)
A pseudo-hierarchical graph matching procedure dedicated to object recognition is presented in this paper. From a
single model image, a graph is built by extracting invariant
local features and linking them according to a so-called
proximity rule. The resulting graph presents several interesting
properties including invariance to scale, robustness
to various distortions and empirical linearity of the number
of edges with respect to the number of nodes. The matching
process is made hierarchical in order to increase both
speed and detection performances. It relies on progressively
incorporating the smaller model features as the hierarchy
level increases. As a result, even a matching between graphs
containing thousands of nodes is very fast (a few milliseconds).
Experiments demonstrate that the method outperforms state-of-the-art specific object detectors in terms of precision-recall measures and detection time.
This paper presents a benchmarking system for the evaluation of robust mesh watermarking methods. The proposed benchmark has three
different components: a “standard” mesh model
collection,
a software tool and two application-oriented
evaluation protocols. The software tool integrates
both geometric and perceptual measurements of the
distortion induced by watermark embedding, and
also the implementation of a variety of attacks on
watermarked meshes. The two evaluation protocols
define the main steps to follow when conducting
the evaluation experiments. The efficiency of the
benchmark is demonstrated through the evaluation
and comparison of two recent robust algorithms.
[23] A Framework for Data-Driven Progressive Mesh Compression
Gabriel Cirio, Guillaume Lavoué and Florent Dupont, International Conference on Computer Graphics Theory and Applications (GRAPP), Angers, France, May 2010. [Paper]
Progressive mesh compression techniques have reached very high compression ratios. However, these techniques usually do not take into account associated properties of meshes such as colors or normals, no matter their size, nor do they try to improve the quality of the intermediate
decompression meshes. In this work, we
propose a framework that uses the associated properties of the mesh to
drive the compression process, resulting
in an improved quality of the intermediate decompression meshes. Based
on a kd-tree geometry compression
algorithm, the framework is generic enough to allow any property or set
of properties to drive the compression
process provided the user defines a distance function for each
property. The algorithm builds the kd-tree structure
using a voxelisation process, which recursively separates the set of
vertices according to the associated
properties distances. We evaluate our method by comparing its
compression ratios to recent algorithms. In
order to evaluate the visual quality of the intermediate meshes, we carried out a perceptual evaluation with human subjects. Results show that at equal rates, our method delivers an
overall better visual quality. The algorithm
is particularly well suited for the compression of meshes where
geometry and topology play a secondary role
compared to associated properties, such as with many scientific
visualization models.
[22] New methods for progressive compression of colored 3D Mesh
Ho Lee, Guillaume Lavoué and Florent Dupont, International Conference on Computer Graphics, Visualization and Computer Vision (WSCG), Plzen, Czech Republic, February 2010. [Paper]
In this paper, we present two methods to compress colored 3D triangular
meshes in a progressive way. Although many progressive algorithms exist
for efficient encoding of connectivity and geometry, none of these
techniques
consider the color data in spite of its considerable size. Based on the
powerful progressive algorithm from Alliez
and Desbrun [All01a], we propose two extensions for progressive
encoding and reconstruction of vertex colors: a
prediction-based method and a mapping table method. In the first one,
after transforming the initial RGB space
into the Lab space, each vertex color is predicted by a specific scheme
using information of its neighboring
vertices. The second method considers a mapping table with reduced
number of possible colors in order to
improve the rate-distortion tradeoff. Results show that the prediction
method produces quite good results even in
low resolutions, while the mapping table method delivers similar visual
results but with fewer bits transmitted, depending on the color complexity of the model.
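The prediction-based idea can be illustrated roughly as follows (a simplified sketch, not the paper's actual scheme; the uniform-average predictor and the residual coding shown here are assumptions):

```python
def predict_color(neighbor_colors):
    """Predict a vertex colour (e.g. in Lab space) as the mean of the
    colours of its already-decoded neighbours."""
    n = len(neighbor_colors)
    return tuple(sum(c[i] for c in neighbor_colors) / n for i in range(3))

def color_residual(actual, predicted):
    """The encoder transmits only this residual, which stays small
    (hence cheap to entropy-code) when colours vary smoothly."""
    return tuple(a - p for a, p in zip(actual, predicted))

neighbors = [(50.0, 10.0, 20.0), (52.0, 12.0, 18.0)]  # decoded Lab colours
pred = predict_color(neighbors)
print(pred)                                    # (51.0, 11.0, 19.0)
print(color_residual((51.5, 11.0, 19.5), pred))  # (0.5, 0.0, 0.5)
```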
[21] Adaptive coarse-to-fine quantization for optimizing rate-distortion of progressive mesh compression
Ho Lee, Guillaume Lavoué and Florent Dupont, Vision, Modeling, and Visualization Workshop (VMV), Braunschweig, Germany, November 2009. [Paper]
We propose a new connectivity-based progressive compression approach
for
triangle meshes. The key
idea is to adapt the quantization precision to the resolution
of each intermediate mesh so as to optimize
the rate-distortion trade-off. This adaptation is automatically
determined during the encoding process
and the overhead is efficiently encoded using geometrical
prediction techniques. We also introduce
an optimization of the geometry coding by using
a bijective discrete rotation. Results show that our
approach delivers a better rate-distortion behavior
than both connectivity-based and geometry-based state-of-the-art compression methods.
[19] Local patch blind spectral watermarking method for 3D graphics
Ming Luo, Kai Wang, Adrian G. Bors and Guillaume Lavoué, International Workshop on Digital Watermarking (IWDW), Lecture Notes in Computer Science, Guildford, UK, August 2009. [Paper] (Copyright Springer)
In this paper, we propose a blind watermarking algorithm for 3D meshes.
The proposed algorithm embeds spectral domain constraints
in segmented patches. After aligning the 3D object using volumetric moments, the patches are extracted using a robust segmentation method
which ensures that they have equal areas. One bit is then embedded in
each patch by enforcing specific constraints in the distribution of its
spectral coefficients by using Principal Component Analysis (PCA). A series of experiments and comparisons with the state of the art in 3D graphics watermarking have been performed; they show that the proposed scheme provides very good robustness against both geometry and connectivity attacks, while introducing a low level of distortion.
[18] 3D Object detection and viewpoint selection in sketch images using local patch-based Zernike moments
Anh-Phuong Ta, Christian Wolf, Guillaume Lavoué and Atilla Baskurt, IEEE Workshop on Content Based Multimedia Indexing (CBMI), Crete, Greece, June 2009. [Paper] (Copyright IEEE)
In this paper we present a new approach to detect and recognize 3D
models in 2D storyboards which have been drawn during the production
process of animated cartoons. Our method is robust to occlusion, scale
and rotation. The lack of texture and color makes it difficult to
extract local features of the target object from the sketched
storyboard. Therefore the existing approaches using local descriptors
like interest points can fail in such images. We propose a new
framework which combines patch-based Zernike descriptors with a method
enforcing spatial constraints for exactly detecting 3D models
represented as a set of 2D views in the storyboards. Experimental
results show that the proposed method can deal with partial object
occlusion and is suitable for poorly textured objects.
[17] A framework for the objective evaluation of segmentation algorithms using a ground-truth of human segmented 3D-models
Halim Benhabiles, Jean-Philippe Vandeborre, Guillaume Lavoué and Mohamed Daoudi, IEEE Shape Modeling International (SMI), Beijing, China, June 2009. [Paper] (Copyright IEEE)
In this paper, we present an evaluation method of
3D-mesh segmentation algorithms based
on a ground-truth corpus. This corpus is composed of
a set of 3D-models grouped in different classes (animals, furniture, etc.) associated with several manual
segmentations produced by human observers. We
define a measure that quantifies the consistency
between two segmentations of a 3D-model, whatever
their granularity. Finally, we propose an objective
quality score for the automatic evaluation of 3D-mesh
segmentation algorithms based on these measures
and on the ground-truth corpus. Thus the quality
of segmentations obtained by automatic algorithms is
evaluated in a quantitative way thanks to the quality
score, and on an objective basis thanks to the ground-truth
corpus. Our approach is illustrated through
the evaluation of two recent 3D-mesh segmentation
methods.
[16] Markov Random Fields for Improving 3D Mesh Analysis and Segmentation
Guillaume Lavoué and Christian Wolf, Eurographics 2008 Workshop on 3D Object Retrieval, pp. 25-32, Crete, Greece, April 2008. [exe][Paper] (Copyright Eurographics)
Mesh analysis and clustering have become important issues in order to improve the efficiency of common processing operations like compression, watermarking or simplification. In this context we present a new method for
clustering / labeling a 3D mesh given any field of scalar values
associated with its vertices (curvature, density,
roughness etc.). Our algorithm is based on Markov Random Fields,
graphical probabilistic models. This Bayesian
framework allows (1) to integrate both the attributes and the geometry
in the clustering, and (2) to obtain an optimal
global solution using only local interactions, due to the Markov
property of the random field. We have defined
new observation and prior models for 3D meshes, adapted from image
processing which achieve very good results
in terms of spatial coherency of the labeling. All model parameters are
estimated, resulting in a fully automatic
process (the only required parameter is the number of clusters) which
works in reasonable time (several seconds).
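The combination of a data term with a Markovian smoothness prior can be sketched on a toy vertex graph using ICM (Iterated Conditional Modes); the energy terms, the fixed cluster centres and the weight `beta` below are simplified assumptions, not the paper's exact observation and prior models:

```python
def icm_label(values, adjacency, centers, beta=0.5, iters=10):
    """Label each vertex by locally minimizing a data term (distance of
    its scalar attribute to a cluster centre) plus a smoothness prior
    (penalty for disagreeing with graph neighbours)."""
    # Initial labels from the data term alone (nearest centre).
    labels = [min(range(len(centers)), key=lambda k: abs(v - centers[k]))
              for v in values]
    for _ in range(iters):
        for i, v in enumerate(values):
            def energy(k):
                data = abs(v - centers[k])                        # attribute fit
                prior = sum(labels[j] != k for j in adjacency[i])  # spatial coherence
                return data + beta * prior
            labels[i] = min(range(len(centers)), key=energy)
    return labels

# A 4-vertex chain whose attribute values form two clusters.
values = [0.10, 0.12, 0.90, 0.88]
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(icm_label(values, adjacency, centers=[0.1, 0.9]))  # [0, 0, 1, 1]
```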
This paper presents a fragile watermarking scheme for authentication of 3D semi-regular meshes. After one wavelet decomposition, the watermark
is inserted by slightly modifying the norms and orientations of the
obtained
wavelet coefficient vectors. The inserted watermark is robust to the
so-called content-preserving attacks
including vertex reordering and similarity transformations. However, it is vulnerable to other attacks such as
local and global geometric modifications and remeshing since the
objective is to check the integrity of the mesh.
Additionally, according to the watermark extraction result, these
attacks can be precisely located on the surface
of the attacked mesh in a blind way. Sufficient security level is also
achieved by introducing secret keys and by
using scalar Costa quantization scheme with appropriate parameter
values. Experimental results demonstrate the
efficacy of the proposed watermarking scheme.
[13] A Roughness Measure for 3D Mesh Visual Masking
Guillaume Lavoué, ACM SIGGRAPH Symposium on Applied Perception in Graphics and Visualization (APGV), pp. 57-60, Tübingen, Germany, July 2007. [exe][Paper] (Copyright ACM)
3D models are subject to a wide variety of processing operations such
as compression, simplification or watermarking, which introduce slight
geometric modifications on the shape. The main issue is to maximize the
compression/simplification ratio or the watermark strength while
minimizing these visual degradations. However, few algorithms exploit the human visual system to hide these degradations, while perceptual attributes could be quite relevant for this task.
Particularly, the Masking Effect defines the fact that a signal can be
masked by the presence of another signal with similar frequency or
orientation. In this context we introduce the notion of roughness for a 3D mesh, as a local measure of geometric noise on the surface. Indeed, a textured (or rough) region is able
to hide geometric distortions much better than a smooth one. Our
measure is based on curvature analysis on local windows of the mesh and
is independent of the resolution/connectivity of the object. An
application to Visual Masking is presented and discussed.
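As a rough illustration of the idea (not the paper's curvature-window estimator), per-vertex roughness can be approximated as the spread of curvature inside each vertex's local window:

```python
import statistics

def local_roughness(curvatures, windows):
    """Approximate roughness per window as the standard deviation of
    curvature inside it: a smooth patch scores near zero, while a
    noisy/textured patch (better able to mask distortions) scores high."""
    return [statistics.pstdev(curvatures[j] for j in w) for w in windows]

# Window 0 covers a smooth patch, window 1 a noisy one.
curv = [1.0, 1.0, 1.0, 0.0, 2.0, 0.0]
windows = [[0, 1, 2], [3, 4, 5]]
smooth, noisy = local_roughness(curv, windows)
print(smooth, noisy)  # 0.0 and ~0.94
```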
[12] Fast and cheap object recognition by linear combination of views
Jérôme Revaud, Guillaume Lavoué, Yasuo Ariki and Atilla Baskurt, ACM International Conference on Image and Video Retrieval (CIVR), pp. 194-201, Amsterdam, The Netherlands, July 2007. [Paper] (Copyright ACM)
In this paper, we present a real-time algorithm for 3D object detection
in images. Our method relies on the Ullman and Basri theory which
claims that the same object under different transformations can often
be expressed as the linear combinations of a small number of its views.
Thus, in our framework the 3D object is modeled by two 2D images associated with spatial relationships described by local invariant feature points. The recognition is based on feature point detection
and alignment with the model. Important theoretical optimizations have
been introduced in order to speed up the original full alignment scheme
and to reduce the model size in memory. The recognition process is
based on a very fast recognition loop which quickly eliminates
outliers. The proposed approach does not require a segmentation stage,
and it is applicable to cluttered scenes. The small size of the model and the rapidity of the detection make this algorithm particularly suitable for real-time applications on mobile devices.
An original hierarchical watermarking scheme is proposed in this paper.
A geometrically robust watermark and a high-capacity watermark are
inserted in different resolution levels of the wavelet decomposition of
a semi-regular mesh by modifying the norms of wavelet coefficients.
Both watermarks are blind and invariant to similarity transformations.
The robustness of the first watermark is achieved by synchronizing and
quantizing watermark primitives according to edges lengths of the
coarsest level, which are quite insensible to geometrical attacks. The
high capacity of the second watermark is obtained by considering the
permutation of the norms of a group of wavelet coefficients.
Experiments have proven the high robustness of the first watermark
under common geometrical attacks. To our knowledge, the capacity of the second method, which can attain the factorial of the number of candidate coefficients, is the highest for 3D meshes in the literature.
[10] Three-Dimensional Meshes Watermarking: Review and Attack-Centric Investigation
Kai Wang, Guillaume Lavoué, Florence Denis and Atilla Baskurt, Information Hiding (IH), Lecture Notes in Computer Science, Vol. 4567, pp. 50-64, Saint Malo, France, June 2007. [Preprint] (Copyright Springer)
The recent decade has seen the emergence of 3D meshes in industrial,
medical and entertainment applications. Therefore, their
intellectual property protection problem has attracted more and more
attention in both the research and industrial realms. This paper gives
a synthetic review of 3D mesh watermarking techniques, which are deemed
to be a potential effective solution to the above problem. We begin
with a discussion on the particular difficulties encountered in
applying
watermarking on 3D meshes. Then some typical algorithms are presented
and
analyzed, classifying them in two categories: spatial and spectral.
Considering the important impact of the different attacks on the design
of 3D
mesh watermarking algorithms, we provide an attack-centric viewpoint of
this state of the art. Finally, some special issues and possible future
working directions are discussed.
[9] A Watermarking Framework for Subdivision Surfaces
Guillaume Lavoué, Florence Denis, Florent Dupont and Atilla Baskurt, International Workshop on Multimedia Content Representation, Classification and Security (IWMRCS), Lecture Notes in Computer Science, Vol. 4105, pp. 223-231, Istanbul, Turkey, September 2006. [Paper] (Copyright Springer)
This paper presents a robust watermarking scheme for 3D subdivision
surfaces. Our proposal is based on a frequency domain decomposition of
the subdivision control mesh and on spectral coefficients modulation.
The compactness of the cover object (the coarse control mesh) has led
us to optimize the trade-off between watermarking redundancy (which ensures robustness) and imperceptibility by introducing two
contributions: (1) Spectral coefficients are perturbed according to a
new modulation scheme analyzing the spectrum shape and (2) the
redundancy is optimized by using error correcting codes. Since the
watermarked surface can be attacked in a subdivided version, we have
introduced a so-called synchronization algorithm to retrieve the
control polyhedron, starting from a subdivided, attacked version.
Through the experiments, we have demonstrated the high robustness of
our scheme against both geometry and connectivity alterations.
This paper presents an objective structural distortion measure which reflects the visual similarity between 3D meshes and thus can be used for quality assessment. The proposed tool is not linked to any specific application and thus can be used to evaluate any kind of 3D mesh processing algorithm (simplification, compression, watermarking, etc.). This measure follows the concept of structural similarity recently introduced for 2D image quality assessment by Wang et al. [WANG:2004] and is based on curvature analysis (mean, standard deviation, covariance) on local windows of the meshes. Evaluation and comparison with geometric metrics are done through a subjective experiment based on human evaluation of a set of distorted objects. A quantitative perceptual metric is also derived from the proposed structural distortion measure, for the specific case of watermarking quality assessment, and is compared with recent state-of-the-art algorithms. Both visual and quantitative results demonstrate the robustness of our approach and its strong correlation with subjective ratings.
This paper presents a robust watermarking algorithm applied to 3D
compressed polygonal meshes. Copyright protection of 3D models becomes
very important for many applications using public networks. As some
recent compression techniques allow very high compression rates, it is
of interest to verify that watermarking techniques support this kind of
attack. In this paper we present the complete scheme developed: the
compression algorithm, the watermarking algorithm and the mark
extraction process.
[6] High rate compression of 3D meshes using a subdivision scheme
Guillaume Lavoué, Florent Dupont and Atilla Baskurt, European Signal Processing Conference (EUSIPCO’2005), Antalya, Turkey, September 2005. [Paper]
[5] Subdivision surface fitting for efficient compression and coding of 3D models
Guillaume Lavoué, Florent Dupont and Atilla Baskurt, SPIE Visual Communications and Image Processing (VCIP'2005), Vol. 5960, pp. 1159-1170, Beijing, China, July 2005. [Paper] (Copyright SPIE)
In this paper we present a new framework, based on subdivision surface
fitting, for high rate compression and coding of 3D models. Our
algorithm fits the input 3D model, represented by a polygonal mesh,
with a piecewise smooth subdivision surface represented by a coarse
control polyhedron. Our fitting scheme, particularly suited for meshes
issued from mechanical or CAD parts, aims at getting close to the
optimality in terms of control points number, while remaining
independent of the connectivity of the input mesh. The found
subdivision control polyhedron is much more compact than the original
mesh and visually represents the same shape after several subdivision
steps, without artifacts or cracks, like traditional lossy compression
schemes. This
control polyhedron is then encoded specifically to give the final
compressed stream. Experiments conducted on several 3D models have
proven the
coherency and the efficiency of our framework, compared with existing
compression methods.
In this paper we present a new framework for subdivision surface
fitting of arbitrary surfaces (not closed objects) represented by
polygonal meshes. Our approach is particularly suited for
output surfaces from a mechanical or CAD object segmentation for
a piecewise subdivision surface approximation. Our algorithm
produces a mixed quadrangle-triangle control mesh, near optimal in
terms of face and vertex numbers while remaining independent of
the connectivity of the input mesh. The first step approximates the
boundaries with subdivision curves and creates an initial subdivision
surface by optimally linking the boundary control points with respect
to the lines of curvature of the target surface. Then, a second step
optimizes the initial control polyhedron by iteratively moving control
points and enriching regions according to the error distribution.
Experiments
conducted on several surfaces and on a whole segmented mechanical
object,
have proven the coherency and the efficiency of our algorithm, compared
with existing methods.
This paper presents a new and efficient algorithm
for the decomposition of 3D arbitrary triangle meshes into surface
patches. The algorithm is based on the curvature tensor field analysis
and presents two distinct complementary steps: a region based
segmentation, which is an improvement of that presented
in Lavoue et al. [Lavoue G, Dupont F, Baskurt A. Constant
curvature region decomposition
of 3D-meshes by a mixed approach vertex-triangle, J WSCG
2004;12(2):245–52] and which decomposes the object into known
and near constant curvature patches, and a boundary rectification based
on curvature tensor directions, which corrects boundaries by
suppressing their artifacts or discontinuities. Experiments were
conducted on various models including both CAD and natural objects,
results are satisfactory. Resulting segmented patches, by virtue of
their properties (known curvature, clean boundaries) are particularly
adapted to computer graphics tasks like parametric or subdivision
surface fitting in an adaptive compression objective.
[2] Constant Curvature Region Decomposition of 3D-Meshes by a Mixed Approach Vertex-Triangle
Guillaume Lavoué, Florent Dupont and Atilla Baskurt, Journal of WSCG (WSCG’2004), Vol. 12, No. 2, pp. 245-252, ISSN 1213-6972, February 2-6, 2004, Plzen, Czech Republic. [Paper]
We present a new and efficient algorithm for decomposition of arbitrary
triangle meshes into connected subsets of meshes called regions. Our
method, based on discrete curvature analysis, decomposes the object into almost constant curvature surfaces rather than only “cutting” the object along its hard edges like traditional methods. This algorithm is a hybrid vertex-triangle approach based on three major steps: vertices are first
classified using their discrete curvature values, then connected
triangle regions are extracted via a region growing process and finally
similar regions are merged using a region adjacency graph in order to
obtain final patches. Experiments were conducted on both
CAD and natural models, results are satisfactory. Segmented patches can
then be used instead of the complete complex model to facilitate
computer graphic tasks such as smoothing, surface fitting or
compression.
[1] Système de Recherche d'Images et de Navigation Visuelle Basé sur une Approche Locale (Image Retrieval and Visual Navigation System Based on a Local Approach)
Khalid Idrissi, Guillaume Lavoué, Julien Ricard, Int. Conf. on Image and Signal Processing (ICISP'2003), pp. 119-128, Agadir, Morocco, June 2003.
Habilitation
Lavoué, G., Compression, tatouage et reconnaissance d'objets 3D, Apports de la perception (Compression, watermarking and recognition of 3D objects: contributions of perception), Habilitation defended on 4 April 2013 at INSA de Lyon. Manuscript (PDF)
PhD Thesis
Lavoué, G., Compression de surfaces, basée sur la subdivision inverse, pour la transmission bas débit et la visualisation progressive (Surface compression based on inverse subdivision for low-bitrate transmission and progressive visualization), PhD thesis defended on 1 December 2005 at Université Claude Bernard Lyon 1. Manuscript (PDF)