Search results for "K-Nearest Neighbor"
Showing 10 of 59 documents
An improved distance-based relevance feedback strategy for image retrieval
2013
Most CBIR (content-based image retrieval) systems use relevance feedback as a mechanism to improve retrieval results. NN (nearest neighbor) approaches provide an efficient method to compute relevance scores by using estimated densities of relevant and non-relevant samples in a particular feature space. In this paper, particularities of the CBIR problem are exploited to propose an improved relevance feedback algorithm based on the NN approach. The resulting method has been tested in a number of different situations and compared to the standard NN approach and other existing relevance feedback mechanisms. Experimental results show significant improvements in most cases.
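The distance-based NN relevance score that this line of work builds on can be sketched as follows. This is a minimal illustration of the commonly used ratio form (score = d_N / (d_R + d_N)), not the paper's improved variant; all names are illustrative:

```python
import numpy as np

def nn_relevance_score(x, relevant, non_relevant):
    """Distance-based NN relevance score: an image closer to its nearest
    relevant neighbor than to its nearest non-relevant one scores higher.
    The ratio d_N / (d_R + d_N) lies in [0, 1]."""
    d_r = min(np.linalg.norm(x - r) for r in relevant)      # nearest relevant
    d_n = min(np.linalg.norm(x - n) for n in non_relevant)  # nearest non-relevant
    return d_n / (d_r + d_n)

# Toy feature space: a point near the relevant sample scores close to 1.
relevant = [np.array([0.0, 0.0])]
non_relevant = [np.array([10.0, 10.0])]
print(nn_relevance_score(np.array([1.0, 1.0]), relevant, non_relevant))
```

Images in the repository are then ranked by this score after each feedback round.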
Interactive Image Retrieval Using Smoothed Nearest Neighbor Estimates
2010
Relevance feedback has been adopted by most recent Content Based Image Retrieval systems to reduce the semantic gap that exists between the subjective similarity among images and the similarity measures computed in a given feature space. Distance-based relevance feedback using nearest neighbors has been recently presented as a good tradeoff between simplicity and performance. In this paper, we analyse some shortcomings of this technique and propose alternatives that help improve the method in terms of the retrieval precision achieved. The resulting method has been evaluated on several repositories which use different feature sets. The results have been compared to those obt…
Assessment of Deep Learning Methodology for Self-Organizing 5G Networks
2019
In this paper, we present an auto-encoder-based machine learning framework for self-organizing networks (SON). Traditional machine learning approaches, for example, K-Nearest Neighbor, lack the ability to be precisely predictive. Therefore, they cannot be extended to sequential data in the true sense because they require a batch of data to be trained on. In this work, we explore artificial neural network-based approaches like autoencoders (AE) and propose a framework. The proposed framework provides an advantage over traditional machine learning approaches in terms of accuracy and the capability to be extended with other methods. The paper provides an assessment of the application of …
Entropy measures, entropy estimators, and their performance in quantifying complex dynamics: Effects of artifacts, nonstationarity, and long-range co…
2017
Entropy measures are widely applied to quantify the complexity of dynamical systems in diverse fields. However, the practical application of entropy methods is challenging, due to the variety of entropy measures and estimators and the complexity of real-world time series, including nonstationarities and long-range correlations (LRC). We conduct a systematic study on the performance, bias, and limitations of three basic measures (entropy, conditional entropy, information storage) and three traditionally used estimators (linear, kernel, nearest neighbor). We investigate the dependence of entropy measures on estimator- and process-specific parameters, and we show the effects of three types of …
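As a rough illustration of the nearest-neighbor estimator class studied here, the following sketch implements a Kozachenko-Leonenko 1-NN differential entropy estimate for a one-dimensional series. It is a generic textbook form, not the specific estimators benchmarked in the paper; names and parameter choices are illustrative:

```python
import math
import random

EULER_GAMMA = 0.5772156649015329

def digamma_int(n):
    """psi(n) for integer n >= 1: psi(n) = -gamma + sum_{j=1}^{n-1} 1/j."""
    return -EULER_GAMMA + sum(1.0 / j for j in range(1, n))

def kl_entropy_1d(samples):
    """Kozachenko-Leonenko 1-NN differential entropy estimate (nats, d = 1):
    H_hat = psi(N) - psi(1) + (1/N) * sum_i log(2 * r_i),
    where r_i is the distance from sample i to its nearest neighbor."""
    xs = sorted(samples)
    n = len(xs)
    log_sum = 0.0
    for i, x in enumerate(xs):
        left = x - xs[i - 1] if i > 0 else math.inf
        right = xs[i + 1] - x if i < n - 1 else math.inf
        log_sum += math.log(2.0 * min(left, right))
    return digamma_int(n) - digamma_int(1) + log_sum / n

random.seed(0)
data = [random.random() for _ in range(2000)]  # Uniform(0, 1): true H = 0 nats
print(kl_entropy_1d(data))
```

Unlike binned (plug-in) estimators, this one needs no bin-width choice, which is one reason nearest-neighbor estimators are attractive for short or nonstationary series.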
Entropy-Based Detection of Complexity and Nonlinearity in Short-Term Heart Period Variability under different Physiopathological States
2020
We compare different estimators of a popular entropy-based nonlinear dynamic measure, i.e. the conditional entropy (CE), as regards their ability to assess the complexity and nonlinearity of short-term heart rate variability (HRV). The CE is computed using binning, kernel and nearest neighbor entropy estimators in HRV time series measured from young, old and post-myocardial infarction patients studied at rest and during orthostatic stress. We find that the three estimators yield similar patterns of CE, but different patterns of nonlinear dynamics, across groups and conditions. These results suggest that the strategy for CE estimation is not crucial for the quantification of complexity, but…
A finite size scaling study of the five-dimensional Ising model
1994
For systems above the marginal dimension d*, where mean field theory starts to become valid, such as Ising models in d = 5 for which d* = 4, hyperscaling is invalid and hence it was suggested that finite size scaling is not ruled by the correlation length ξ (∝ |t|^{-1/2} in Landau theory, t being the distance from the critical point) but by a "thermodynamic length" l (∝ |t|^{-2/d}). Early simulation work by Binder et al. using nearest neighbor hypercubic L^5 lattices with L ⩽ 7 yielded some evidence for this prediction, but the renormalized coupling constant g_L = -3 + ⟨M^4⟩/⟨M^2⟩^2 at T_c was g_L ≈ -1.0 instead of the prediction of Brezin and Zinn-Justin, g_L(T_c) = -3 + Γ^4(1/4)/(8π^2) ≈ -0.812. In the…
Improving Nearest Neighbor Based Multi-target Prediction Through Metric Learning
2017
The purpose of this work is to learn specific distance functions to be applied to multi-target regression problems using nearest neighbors. The idea of preserving the order relation between input and output vectors with respect to their corresponding distances is combined with a maximal margin criterion to formulate a specific metric learning problem. Extensive experiments and the corresponding discussion highlight the advantages of the proposed algorithm, which can be considered a generalization of previously proposed approaches. Preliminary results suggest that this line of work can lead to very competitive algorithms with convenient properties.
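Once a metric has been learned, using it for multi-target nearest-neighbor prediction is straightforward. The sketch below assumes a learned linear transform L defining a Mahalanobis-style distance d_L(a, b) = ||L(a - b)||; the transform, data, and parameter names are hypothetical, and this is not the paper's maximal-margin formulation itself:

```python
import numpy as np

def mahalanobis_knn_predict(x, X_train, Y_train, L, k=3):
    """Predict a multi-target output as the mean of the k nearest training
    targets under the learned metric d_L(a, b) = ||L @ (a - b)||_2.
    L is assumed to come from some metric-learning procedure; here it is
    simply a given matrix (illustrative only)."""
    dists = np.linalg.norm((X_train - x) @ L.T, axis=1)  # rows are L(a - b)
    nearest = np.argsort(dists)[:k]
    return Y_train[nearest].mean(axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))                     # 4 input features
Y = X[:, :2] + 0.01 * rng.normal(size=(50, 2))   # 2 correlated targets
L = np.eye(4)                                    # identity = plain Euclidean
print(mahalanobis_knn_predict(X[0], X, Y, L))
```

With L = I this reduces to Euclidean kNN; a learned L re-weights and mixes input features so that inputs close under d_L tend to have similar target vectors.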
Feature extraction for classification in knowledge discovery systems
2003
Dimensionality reduction is a very important step in the data mining process. In this paper, we consider feature extraction for classification tasks as a technique to overcome problems occurring because of "the curse of dimensionality". We consider three different eigenvector-based feature extraction approaches for classification. A summary of the results obtained concerning the accuracy of classification schemes is presented, and the issue of searching for the most appropriate feature extraction method for a given data set is considered. A decision support system to aid in the integration of the feature extraction and classification processes is proposed. The goals and requirements set for the d…
Ensemble of Hankel Matrices for Face Emotion Recognition
2015
In this paper, a face emotion is considered as the result of the composition of multiple concurrent signals, each corresponding to the movements of a specific facial muscle. These concurrent signals are represented by means of a set of multi-scale appearance features that might be correlated with one or more concurrent signals. The extraction of these appearance features from a sequence of face images yields a set of time series. This paper proposes to use the dynamics regulating each appearance feature time series to discriminate among different face emotions. To this purpose, an ensemble of Hankel matrices corresponding to the extracted time series is used for emotion classification withi…
Combining feature extraction and expansion to improve classification based similarity learning
2017
Metric learning has been shown to outperform standard classification based similarity learning in a number of different contexts. In this paper, we show that the performance of classification similarity learning strongly depends on the data format used to learn the model. We then present an Enriched Classification Similarity Learning method that follows a hybrid approach that combines both feature extraction and feature expansion. In particular, we propose a data transformation and the use of a set of standard distances to supplement the information provided by the feature vectors of the training samples. The method is compared to state-of-the-art feature extraction and metric lear…