AUTHOR
Enrique Vidal
ARC: A computerized system for urban garbage collection
In this paper we present ARC, a computerized system developed for urban garbage collection. The package is intended to help planners design efficient collection routes and to facilitate the study and evaluation of alternatives concerning issues such as the type and number of vehicles, the frequency of collection, and the type and location of refuse containers. The final product is a “user friendly” system designed to be used by planners without outside assistance.
Two independent epigenetic biomarkers predict survival in neuroblastoma.
Background: Neuroblastoma (NB) is the most common extracranial pediatric solid tumor, with a highly variable clinical course ranging from spontaneous regression to life-threatening disease. Survival rates for high-risk NB patients remain disappointingly low despite multimodal treatment. Thus, there is an urgent clinical need for additional biomarkers to improve risk stratification, treatment management, and survival rates in children with aggressive NB. Results: Using microarray-based gene promoter methylation analysis in 48 neuroblastoma tumors, we found a strong association between survival and gene promoter hypermethylation (P = 0.036). Hypermethylation of 70 genes significantly …
Colour image segmentation and labeling through multiedit-condensing
A new method is proposed for detecting and locating objects of interest within a colour scene under very strong variability in lighting conditions, object shape, and pigmentation. The method is based on Nearest Neighbour classification and Multiedit-Condensing techniques and is applied to implement the vision subsystem of a robotic citrus-harvesting device. Experiments and results are reported showing the effectiveness of the method and illustrating its appropriateness to the proposed task.
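The abstract does not detail the editing and condensing steps; the following is a minimal sketch, assuming a Devijver-Kittler-style multiedit (discarding samples misclassified across rotating block splits) followed by Hart-style condensing over pixel colour features. All function names and parameters are illustrative, not the paper's exact procedure.

```python
# Minimal sketch of 1-NN classification with multiedit and condensing.
import numpy as np

def nn_classify(x, prototypes, labels):
    """Label x with the class of its nearest prototype (Euclidean)."""
    d = np.linalg.norm(prototypes - x, axis=1)
    return labels[np.argmin(d)]

def multiedit(X, y, blocks=3, rounds=5, rng=np.random.default_rng(0)):
    """Repeatedly discard samples misclassified under a rotating block split."""
    for _ in range(rounds):
        idx = rng.permutation(len(X))
        parts = np.array_split(idx, blocks)
        keep = []
        for b in range(blocks):
            ref = parts[(b + 1) % blocks]          # classify block b with block b+1
            for i in parts[b]:
                if nn_classify(X[i], X[ref], y[ref]) == y[i]:
                    keep.append(i)
        keep = np.array(sorted(keep), dtype=int)
        X, y = X[keep], y[keep]
    return X, y

def condense(X, y):
    """Hart-style condensing: keep a subset that still classifies X correctly."""
    S = [0]
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            if nn_classify(X[i], X[S], y[S]) != y[i]:
                S.append(i)
                changed = True
    return X[S], y[S]
```

Multiedit cleans class-overlap regions so that condensing can then shrink the prototype set without sacrificing the consistency of the resulting 1-NN rule.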
Understanding disease mechanisms with models of signaling pathway activities
Background: Understanding the aspects of cell functionality that account for disease or drug-action mechanisms is one of the main challenges in the analysis of genomic data, and lies at the basis of the future implementation of precision medicine. Results: Here we propose a simple probabilistic model that separates signaling pathways into elementary sub-pathways or signal transmission circuits (which ultimately trigger cell functions) and then transforms gene expression measurements into probabilities of activation of such signal transmission circuits. Using this model, differential activation of such circuits between biological conditions can be estimated. Thus, circuit activation s…
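As a rough illustration of the kind of computation the model describes, here is a toy sketch assuming a circuit is active only when every node on the receptor-to-effector chain is active and that node activations are independent; the paper's actual decomposition and estimation procedure may differ.

```python
# Toy sketch: probability that a signal traverses a linear sub-pathway,
# assuming independent node activations and a circuit that fires only when
# every node on the receptor-to-effector chain is active (an assumption made
# for illustration, not the paper's exact model).
from functools import reduce

def circuit_activation(node_probs):
    """node_probs: activation probability of each gene product on the chain."""
    return reduce(lambda a, b: a * b, node_probs, 1.0)

# Hypothetical expression-derived probabilities for a 4-node circuit.
print(circuit_activation([0.9, 0.8, 0.95, 0.7]))  # 0.4788
```

Comparing such circuit-level probabilities between two biological conditions is what yields the differential activation estimates mentioned in the abstract.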
Application of the Error Correcting Grammatical Inference Method (ECGI) to Multi-Speaker Isolated Word Recognition
It is well known that speech signals constitute highly structured objects which are composed of different kinds of subobjects such as words, phonemes, etc. This fact has motivated several researchers to propose different models which more or less explicitly assume the structural nature of speech. Notable examples of these models are Markov models /Bak 75/, /Jel 76/; the famous Harpy /Low 76/; Scriber and Lafs /Kla 80/; and many other works in which the convenience of some structural model of the speech objects considered is explicitly claimed /Gup 82/, /Lev 83/, /Cra 84/, /Sca 85/, /Kam 85/, /Sau 85/, /Rab 85/, /Kop 85/, /Sch 85/, /Der 86/, /Tan 86/.
A General Fuzzy-Parsing Scheme for Speech Recognition
In this paper a speech recognition methodology is proposed which is based on the general assumption of ‘fuzziness’ of both speech data and knowledge sources. Besides this general principle, other fundamental assumptions also form the basis of the proposed methodology: ‘Modularity’ in the knowledge organization, ‘Homogeneity’ in the representation of data and knowledge, ‘Passiveness’ of the ‘understanding flow’ (no backtracking or feedback), and ‘Parallelism’ in the recognition activity.
On the use of a metric-space search algorithm (AESA) for fast DTW-based recognition of isolated words
The approximating and eliminating search algorithm (AESA) was recently introduced for finding nearest neighbors in metric spaces. Although the AESA was originally developed to reduce the time complexity of dynamic time-warping isolated word recognition (DTW-IWR), only rather limited experiments had previously been carried out to check its performance in this task. A set of experiments aimed at filling this gap is reported. The main results show that the important features reflected in previous simulation experiments also hold for real speech samples. With single-speaker dictionaries of up to 200 words, and for most of the different speech parameterizations, local metrics, a…
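For readers unfamiliar with the algorithm, the following is a minimal sketch of the AESA idea: real distance computations are spent one at a time, and the triangle inequality over a precomputed prototype-to-prototype distance matrix is used to lower-bound, and thereby eliminate, the remaining candidates. The Euclidean metric and the simple candidate-selection rule used here are simplifying assumptions.

```python
# Minimal AESA-style nearest-neighbour search sketch. Each iteration computes
# one real distance, then tightens triangle-inequality lower bounds
# |d(x, s) - d(s, p)| <= d(x, p) to discard hopeless prototypes.
import numpy as np

def aesa_nn(x, P, D):
    """x: query; P: (n, d) prototypes; D: (n, n) precomputed prototype distances,
    e.g. D[i, j] = np.linalg.norm(P[i] - P[j]), built once at preprocessing time."""
    n = len(P)
    alive = set(range(n))
    lb = np.zeros(n)                          # current lower bounds on d(x, p)
    best, best_d = None, np.inf
    while alive:
        s = min(alive, key=lambda i: lb[i])   # most promising candidate
        alive.remove(s)
        d = np.linalg.norm(x - P[s])          # one real distance computation
        if d < best_d:
            best, best_d = s, d
        for i in list(alive):                 # eliminate via triangle inequality
            lb[i] = max(lb[i], abs(d - D[s, i]))
            if lb[i] >= best_d:
                alive.discard(i)
    return best, best_d
```

The payoff is that the number of real distance computations grows very slowly with the dictionary size, which is what makes the approach attractive for DTW-IWR, where each distance is an expensive DTW alignment.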
Analysis of heuristics for the rural postman problem
In this article we study the worst-case behavior of two heuristic algorithms proposed for the Rural Postman Problem defined on an undirected graph (RPP) and on a directed graph (DRPP). For both problems we determine the worst-case ratio of the heuristics studied, which is 3/2 for the RPP, while for the DRPP it is unbounded. To obtain more meaningful bounds, this ratio has also been determined as a function of certain parameters that can be computed from the data of each particular instance, which has made it possible to obtain a finite bound on the worst-case behavior of the heuristic algorithm for the DRPP.
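For reference, the worst-case ratio discussed in the abstract can be stated as follows (a standard definition; the bounds are those reported above):

```latex
% Worst-case (performance) ratio of a heuristic H over problem instances I,
% as commonly defined; the abstract reports these bounds.
\rho(H) \;=\; \sup_{I} \frac{c_H(I)}{c_{\mathrm{OPT}}(I)},
\qquad
\rho(H_{\mathrm{RPP}}) \le \tfrac{3}{2},
\qquad
\rho(H_{\mathrm{DRPP}}) \text{ unbounded.}
```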
On the metric properties of dynamic time warping
Recently, some new and promising methods have been proposed to reduce the number of Dynamic Time Warping (DTW) computations in isolated word recognition. For these methods to be properly applicable, it seems to be an important prerequisite that the DTW-based dissimilarity measure employed satisfies the Triangle Inequality (TI).
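As a small illustration, the sketch below computes a common symmetric, path-length-normalized DTW dissimilarity and checks the TI on one triple of toy sequences; the local constraints and normalization are one choice among the variants such a paper would consider.

```python
# Minimal symmetric DTW dissimilarity between two feature sequences, plus an
# empirical triangle-inequality check on one triple. The local constraints
# and path-length normalization are one common choice, not the only one.
import numpy as np

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])   # local distance (1-D features)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)                  # path-length normalization

def satisfies_ti(a, b, c):
    """Check d(a, c) <= d(a, b) + d(b, c) for one triple of sequences."""
    return dtw(a, c) <= dtw(a, b) + dtw(b, c) + 1e-12

# Hypothetical 1-D "utterances"; DTW dissimilarities may violate the TI in
# general, which is precisely why its verification matters here.
x, y, z = [0, 1, 2, 1], [0, 2, 2, 0], [1, 1, 0, 2]
print(satisfies_ti(x, y, z))
```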
Learning the structure of HMM's through grammatical inference techniques
A technique is described in which all the components of a hidden Markov model are learnt from training speech data. The structure or topology of the model (i.e., the number of states and the actual transitions) is obtained by means of an error-correcting grammatical inference algorithm (ECGI). This structure is then reduced by using an appropriate state-pruning criterion. The statistical parameters associated with the obtained topology are estimated from the same training data by means of the standard Baum-Welch algorithm. Experimental results showing the applicability of this technique to speech recognition are presented.
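The abstract does not specify the pruning criterion; the sketch below illustrates one plausible reading, in which states with low usage counts over the training data are removed and bypassed. The occupancy-threshold rule and all names are assumptions, and Baum-Welch re-estimation would follow on the reduced topology.

```python
# Sketch of the topology-reduction step: starting from an ECGI-learnt set of
# states and transitions, drop states whose usage over the training samples
# falls below a threshold, reconnecting their predecessors to their
# successors. The count-based criterion is an illustrative assumption only.

def prune_states(transitions, counts, min_count):
    """transitions: dict state -> set of successor states;
    counts: dict state -> number of training paths visiting the state."""
    dead = {s for s, c in counts.items() if c < min_count}
    pruned = {}
    for s, succs in transitions.items():
        if s in dead:
            continue
        kept = set()
        for t in succs:
            if t in dead:
                # bypass the dead state: inherit its surviving successors
                kept |= {u for u in transitions.get(t, ()) if u not in dead}
            else:
                kept.add(t)
        pruned[s] = kept
    return pruned

# Hypothetical toy topology: state 'q2' is rarely visited and gets bypassed.
T = {"q0": {"q1", "q2"}, "q1": {"q3"}, "q2": {"q3"}, "q3": set()}
C = {"q0": 50, "q1": 48, "q2": 2, "q3": 50}
print(prune_states(T, C, min_count=5))  # {'q0': {'q1', 'q3'}, 'q1': {'q3'}, 'q3': set()}
```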
Intratumoral immunosuppression profiles in 11q-deleted neuroblastomas provide new potential therapeutic targets
In this issue, Coronado et al. attempt to improve our understanding of the factors affecting the response to immunotherapy in a large subset of high‐risk neuroblastoma with hemizygous deletion of chromosome 11q. By using several computational approaches, the authors study potential transcriptional and post‐transcriptional pathways that may affect the response to immunotherapy and further be leveraged therapeutically in a biomarker‐directed fashion.
An efficient prototype merging strategy for the condensed 1-NN rule through class-conditional hierarchical clustering
A generalized prototype-based classification scheme founded on hierarchical clustering is proposed. The basic idea is to obtain a condensed 1-NN classification rule by merging the two nearest same-class clusters, provided that the set of cluster representatives correctly classifies all the original points. Apart from the quality of the obtained sets and the flexibility that comes from the fact that different intercluster measures and criteria can be used, the proposed scheme includes a very efficient four-stage procedure which conveniently exploits geometric cluster properties to decide about each possible merge. Empirical results demonstrate the merits of the proposed algorithm t…
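Omitting the four-stage geometric tests, the core merging loop can be sketched in brute-force form: repeatedly merge the two nearest same-class clusters, accepting a merge only while the representative set remains consistent with the original data. Centroid representatives and Euclidean distance are assumptions made here for illustration.

```python
# Brute-force sketch of class-conditional merging for a condensed 1-NN rule.
# The paper's efficient four-stage geometric tests are omitted.
import numpy as np

def consistent(reps, rep_labels, X, y):
    """True if the 1-NN rule over the representatives classifies X correctly."""
    for x, lab in zip(X, y):
        d = np.linalg.norm(reps - x, axis=1)
        if rep_labels[np.argmin(d)] != lab:
            return False
    return True

def merge_clusters(X, y):
    clusters = [[i] for i in range(len(X))]   # start: one cluster per point
    labels = list(y)
    merged = True
    while merged:
        merged = False
        reps = np.array([X[c].mean(axis=0) for c in clusters])
        # candidate pairs: same class, tried in order of centroid distance
        pairs = sorted((np.linalg.norm(reps[i] - reps[j]), i, j)
                       for i in range(len(clusters))
                       for j in range(i + 1, len(clusters))
                       if labels[i] == labels[j])
        for _, i, j in pairs:
            trial = [c for k, c in enumerate(clusters) if k not in (i, j)]
            trial.append(clusters[i] + clusters[j])
            t_labels = [l for k, l in enumerate(labels) if k not in (i, j)] + [labels[i]]
            t_reps = np.array([X[c].mean(axis=0) for c in trial])
            if consistent(t_reps, t_labels, X, y):   # accept the merge
                clusters, labels, merged = trial, t_labels, True
                break
    reps = np.array([X[c].mean(axis=0) for c in clusters])
    return reps, np.array(labels)
```

The consistency check after every tentative merge is exactly what the efficient four-stage procedure is designed to avoid recomputing from scratch.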
Case-studies on average-case analysis for an elementary course on algorithms
Average-case algorithm analysis is usually viewed as a tough subject by students in their first computer science courses. Traditionally, these topics are fully developed in advanced courses with a clear mathematical orientation. The work presented here is not an alternative to this; rather, it adapts the analysis of algorithms (and average-case analysis in particular) to the mathematical background of students in an elementary course on algorithms or programming, using two selected case-studies.
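A typical case-study at this level, consistent with the course setting described (though not necessarily one of the two used in the paper), is the expected cost of sequential search for a key equally likely to be at any of the n positions:

```latex
% Expected number of comparisons of sequential search, uniform key position.
\mathbb{E}[C] \;=\; \sum_{i=1}^{n} i \cdot \frac{1}{n}
\;=\; \frac{1}{n}\cdot\frac{n(n+1)}{2} \;=\; \frac{n+1}{2}.
```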