Search results for "methodologie"
Showing 10 of 2,141 documents
Surface Reconstruction Based on a Descriptive Approach
2000
Designing complex surfaces is generally hard. A natural method consists of subdividing the global surface into basic surface elements. The different elements are designed independently and then assembled to represent the final surface. This method requires a classification and a formal description of the basic elements. This chapter presents a general framework for surface description based on a constructive tree approach, in which the leaves are surface primitives and the nodes are constructive operators.
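The constructive-tree idea described above can be sketched as a small data structure: leaves hold surface primitives, internal nodes hold constructive operators. This is an illustrative sketch only; the class names and the `blend`/`union` operators are hypothetical, not taken from the chapter.

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Primitive:
    """Leaf of the constructive tree: a basic surface element."""
    name: str
    def describe(self) -> str:
        return self.name

@dataclass
class Operator:
    """Internal node: a constructive operator combining child surfaces."""
    name: str
    children: List[Union["Primitive", "Operator"]]
    def describe(self) -> str:
        inner = ", ".join(c.describe() for c in self.children)
        return f"{self.name}({inner})"

# Assemble a final surface from independently designed elements.
surface = Operator("blend", [
    Primitive("cylinder_patch"),
    Operator("union", [Primitive("plane"), Primitive("sphere_cap")]),
])
print(surface.describe())  # blend(cylinder_patch, union(plane, sphere_cap))
```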
A Fuzzy Logic C-Means Clustering Algorithm to Enhance Microcalcifications Clusters in Digital Mammograms
2011
The detection of microcalcifications is a hard task, since they are quite small and often poorly contrasted against the image background. Computer-Aided Detection (CAD) systems could be very useful for breast cancer control. In this paper, we report a method to enhance microcalcification clusters in digital mammograms. A fuzzy logic clustering algorithm with a set of features is used for clustering microcalcifications. The method was tested on simulated clusters of microcalcifications, so that the location of the cluster within the breast and the exact number of microcalcifications are known.
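For readers unfamiliar with fuzzy c-means, a minimal textbook version of the algorithm (alternating membership and center updates with fuzzifier m) looks like the sketch below. This is the generic FCM iteration, not the paper's specific variant or feature set.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Textbook fuzzy c-means: returns memberships U (c x n) and centers (c x f)."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                      # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m                         # fuzzified memberships
        centers = Um @ X / Um.sum(axis=1, keepdims=True)
        # distances from each center to each point, shape (c, n)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))       # standard FCM membership update
        U = inv / inv.sum(axis=0)
    return U, centers

# Two well-separated point groups: FCM assigns high membership per group.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [5., 5.], [5., 6.], [6., 5.]])
U, centers = fuzzy_c_means(X, c=2)
```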
Transverse-momentum-dependent Multiplicities of Charged Hadrons in Muon-Deuteron Deep Inelastic Scattering
2017
A semi-inclusive measurement of charged hadron multiplicities in deep inelastic muon scattering off an isoscalar target was performed using data collected by the COMPASS Collaboration at CERN. The following kinematic domain is covered by the data: photon virtuality $Q^{2}>1$ (GeV/$c$)$^2$, invariant mass of the hadronic system $W > 5$ GeV/$c^2$, Bjorken scaling variable in the range $0.003 < x < 0.4$, fraction of the virtual photon energy carried by the hadron in the range $0.2 < z < 0.8$, square of the hadron transverse momentum with respect to the virtual photon direction in the range 0.02 (GeV/$c)^2 < P_{\rm{hT}}^{2} < 3$ (GeV/$c$)$^2$. The multiplicities are pres…
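The kinematic domain quoted above amounts to a set of per-hadron selection cuts. A sketch of applying them as a boolean mask (the arrays here are synthetic stand-ins, not COMPASS data):

```python
import numpy as np

# Synthetic per-hadron DIS variables, stand-ins for real event data.
rng = np.random.default_rng(1)
n = 10_000
Q2   = rng.uniform(0.0, 10.0, n)   # photon virtuality, (GeV/c)^2
W    = rng.uniform(2.0, 20.0, n)   # hadronic invariant mass, GeV/c^2
x    = rng.uniform(0.0, 1.0, n)    # Bjorken scaling variable
z    = rng.uniform(0.0, 1.0, n)    # fraction of virtual-photon energy
phT2 = rng.uniform(0.0, 4.0, n)    # hadron transverse momentum squared, (GeV/c)^2

# The kinematic domain stated in the abstract, applied as cuts.
mask = ((Q2 > 1.0) & (W > 5.0)
        & (0.003 < x) & (x < 0.4)
        & (0.2 < z) & (z < 0.8)
        & (0.02 < phT2) & (phT2 < 3.0))
print(f"selected {mask.sum()} of {n} hadrons")
```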
Measurement of the lifetime of the tau lepton
1996
The tau lepton lifetime is measured with the L3 detector at LEP using the complete data taken at centre-of-mass energies around the Z pole, resulting in tau_tau = 293.2 +/- 2.0 (stat) +/- 1.5 (syst) fs. The comparison of this result with the muon lifetime supports lepton universality of the weak charged current at the level of six per mille. Assuming lepton universality, the value of the strong coupling constant alpha_s is found to be alpha_s(m_tau^2) = 0.319 +/- 0.015 (exp.) +/- 0.014 (theory).
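The universality test mentioned above rests on a simple relation: assuming a universal weak charged-current coupling, the tau lifetime follows from the muon lifetime scaled by the fifth power of the mass ratio and the leptonic branching fraction. A back-of-the-envelope check with approximate PDG-style input values (not the paper's own inputs):

```python
# Lepton universality prediction:
#   tau_tau ~= tau_mu * B(tau -> e nu nubar) * (m_mu / m_tau)^5
# Input values are approximate PDG-style numbers, for illustration only.
tau_mu = 2.19698e-6   # muon lifetime, s
B_e    = 0.178        # tau -> e nu nubar branching fraction
m_mu   = 105.658      # muon mass, MeV
m_tau  = 1776.86      # tau mass, MeV

tau_tau = tau_mu * B_e * (m_mu / m_tau) ** 5
print(f"predicted tau lifetime ~ {tau_tau * 1e15:.0f} fs")  # close to the measured 293 fs
```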
Stronger proprioceptive BOLD-responses in the somatosensory cortices reflect worse sensorimotor function in adolescents with and without cerebral pal…
2020
Graphical abstract
A Fast GPU-Based Motion Estimation Algorithm for H.264/AVC
2012
H.264/AVC is the most recent predictive video compression standard, outperforming earlier video coding standards at the cost of higher computational complexity. In recent years, heterogeneous computing has emerged as a cost-efficient solution for high-performance computing. Several algorithms have been proposed in the literature to accelerate video compression, but so far few solutions deal with video codecs on heterogeneous systems. This paper proposes an algorithm to perform H.264/AVC inter prediction. The proposed algorithm performs motion estimation, with both full-pixel and sub-pixel accuracy, using CUDA to assist the CPU, obtaining remarkable time …
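At the core of inter prediction is block-matching motion estimation: for each block of the current frame, search the reference frame for the displacement minimizing a cost such as the sum of absolute differences (SAD). The sketch below is a plain CPU version of a full-pixel exhaustive search (the kind of independent, data-parallel work a GPU kernel would evaluate); it is illustrative, not the paper's CUDA implementation.

```python
import numpy as np

def full_search_me(ref, cur, by, bx, bsize=8, srange=4):
    """Full-pixel motion estimation for one block via exhaustive SAD search.
    Returns the best motion vector (dy, dx) and its SAD cost."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(np.int64)
    best, best_sad = (0, 0), np.inf
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = by + dy, bx + dx
            # Skip candidates falling outside the reference frame.
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue
            cand = ref[y:y + bsize, x:x + bsize].astype(np.int64)
            sad = np.abs(cand - block).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad

# A frame shifted by (1, 2) pixels should yield that motion vector with SAD 0.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32))
cur = np.roll(ref, (-1, -2), axis=(0, 1))   # cur[y, x] == ref[y+1, x+2]
mv, sad = full_search_me(ref, cur, 8, 8)
```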
Fuzzy subgroup mining for gene associations
2004
When studying the therapeutic efficacy of potential new drugs, it would be much more efficient to use predictors to assess their toxicity before going into clinical trials. One promising line of research has focused on the discovery of sets of candidate gene profiles to be used as toxicity indicators in future drug development. In particular, genomic microarrays may be used to analyze the causal relationship between the administration of a drug and so-called gene expression, a parameter typically used by biologists to measure the drug's influence at the gene level. This kind of experiment involves a high-throughput analysis of noisy and particularly unreliable data, which makes the …
Unmanned aerial system imagery and photogrammetric canopy height data in area-based estimation of forest variables
2015
In this paper we examine the feasibility of data from unmanned aerial vehicle (UAV)-borne aerial imagery in stand-level forest inventory. As airborne sensor platforms, UAVs offer advantages in cost and flexibility over traditional manned aircraft for forest remote sensing in small areas, but they lack range and endurance over larger areas. On the other hand, advances in the processing of digital stereo photography make it possible to produce three-dimensional (3D) forest canopy data on the basis of images acquired using simple lightweight digital camera sensors. In this study, an aerial image orthomosaic and 3D photogrammetric canopy height data were derived from the images acquired …
GridNet with Automatic Shape Prior Registration for Automatic MRI Cardiac Segmentation
2018
In this paper, we propose a fully automatic MRI cardiac segmentation method based on a novel deep convolutional neural network (CNN) designed for the 2017 ACDC MICCAI challenge. The novelty of our network lies in its embedded shape prior and a loss function tailored to the cardiac anatomy. Our model includes a cardiac center-of-mass regression module which allows for an automatic shape prior registration. Also, since our method processes raw MR images without any manual preprocessing and/or image cropping, our CNN learns both high-level features (useful to distinguish the heart from other organs with a similar shape) and low-level features (useful to get accurate segmentation results).…
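The center-of-mass quantity underlying the registration module is straightforward to compute from a (soft) segmentation map. A minimal sketch of that computation, purely for illustration and unrelated to the paper's network architecture:

```python
import numpy as np

def center_of_mass(prob_map):
    """Intensity-weighted center of mass (row, col) of a 2D probability map,
    the kind of target a center-of-mass regression module could predict."""
    prob_map = np.asarray(prob_map, dtype=float)
    ys, xs = np.indices(prob_map.shape)
    total = prob_map.sum()
    return (ys * prob_map).sum() / total, (xs * prob_map).sum() / total

# A single unit mass at (4, 7) has its center of mass exactly there.
img = np.zeros((10, 10))
img[4, 7] = 1.0
cy, cx = center_of_mass(img)
```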
Rethinking the sGLOH Descriptor
2018
sGLOH (shifting GLOH) is a histogram-based keypoint descriptor that can be associated with multiple quantized rotations of the keypoint patch without any recomputation. This property can be exploited to define the best distance between two descriptor vectors, thus avoiding computing the dominant orientation. In addition, sGLOH can reject incongruous correspondences by adding a global constraint on the rotations, either as a priori knowledge or based on the data. This paper thoroughly reconsiders sGLOH and improves it in terms of robustness, speed and descriptor dimension. The revised sGLOH embeds more quantized rotations, thus yielding more correct matches. A novel fast matching scheme is a…
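The core trick is that, for a suitably laid-out descriptor, rotating the patch by one quantization step corresponds to a cyclic shift of the descriptor's rotation blocks, so the rotation-invariant distance is the minimum over shifts. The sketch below illustrates that idea under a simplified assumption (the descriptor is n_rot equal blocks and a rotation is exactly a one-block shift); it is not sGLOH's actual block layout.

```python
import numpy as np

def rotation_invariant_dist(d1, d2, n_rot=8):
    """Minimum Euclidean distance over cyclic shifts of d2's rotation blocks.
    Assumes d1, d2 are n_rot equal-length blocks where a patch rotation of
    2*pi/n_rot equals a one-block cyclic shift (simplified sGLOH-style layout).
    Returns (best distance, shift index, i.e. estimated relative rotation)."""
    blocks = d2.reshape(n_rot, -1)
    dists = [np.linalg.norm(d1 - np.roll(blocks, k, axis=0).ravel())
             for k in range(n_rot)]
    k_best = int(np.argmin(dists))
    return dists[k_best], k_best

# A descriptor cyclically shifted by 3 blocks matches with distance ~0,
# and the matcher recovers the relative rotation without recomputation.
rng = np.random.default_rng(0)
d1 = rng.random(8 * 16)
d2 = np.roll(d1.reshape(8, -1), 3, axis=0).ravel()
dist, k = rotation_invariant_dist(d1, d2)
```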