Search results for "Training set"

Showing 10 of 68 documents

Putting the user into the active learning loop: Towards realistic but efficient photointerpretation

2012

In recent years, several studies have been published on the smart definition of the training set using active learning algorithms. However, none of these works consider the contradiction between active learning methods, which rank pixels according to their uncertainty, and the user's confidence in labeling, which is related both to the homogeneity of the pixel's context and to the user's knowledge of the scene. In this paper, we propose a two-step procedure based on a filtering scheme to learn the user's confidence in labeling. This way, candidate training pixels are ranked according both to their uncertainty and to the chances of being labeled correctly by the user. In…
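As an illustration of the idea sketched in this abstract, a ranking scheme that trades off model uncertainty against an estimated labeling confidence might look as follows. This is a minimal NumPy sketch; `uncertainty`, `user_confidence`, and `alpha` are placeholder names, not the paper's actual formulation.

```python
import numpy as np

def rank_candidates(uncertainty, user_confidence, alpha=0.5):
    """Rank candidate training pixels by combining model uncertainty with an
    estimate of how likely the user is to label them correctly.

    uncertainty     : higher means the classifier is less sure about the pixel
    user_confidence : higher means the pixel is easier to label (e.g. derived
                      from the homogeneity of the pixel's neighbourhood)
    alpha           : trade-off weight (hypothetical parameter)
    """
    u = (uncertainty - uncertainty.min()) / (np.ptp(uncertainty) + 1e-12)
    c = (user_confidence - user_confidence.min()) / (np.ptp(user_confidence) + 1e-12)
    score = alpha * u + (1 - alpha) * c   # favour pixels that are uncertain AND labelable
    return np.argsort(score)[::-1]        # best candidates first
```

A filtering variant would instead discard candidates whose estimated confidence falls below a threshold before ranking the remainder by uncertainty alone.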

Training set; Contextual image classification; Computer science; Active learning (machine learning); Machine learning; Active learning; Life Science; Artificial intelligence; Data mining

2004

This paper presents the use of Support Vector Machines (SVMs) for prediction and analysis of antisense oligonucleotide (AO) efficacy. The collected database comprises 315 AO molecules with 68 features each, yielding a problem well suited to SVMs. The task of feature selection is crucial given the presence of noisy or redundant features and the well-known curse of dimensionality. We propose a two-stage strategy to develop an optimal model: (1) feature selection using correlation analysis, mutual information, and SVM-based recursive feature elimination (SVM-RFE), and (2) AO prediction using standard and profiled SVM formulations. A profiled SVM gives different weights to …
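For the feature-selection stage, a minimal sketch of SVM-based recursive feature elimination with scikit-learn follows; the synthetic data, the number of retained features, and the regression setup are placeholders, not the paper's protocol.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.feature_selection import RFE
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(315, 68))                                  # stand-in for 315 AOs x 68 features
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=315)   # stand-in efficacy values

Xs = StandardScaler().fit_transform(X)
rfe = RFE(estimator=SVR(kernel="linear"), n_features_to_select=10, step=1)
rfe.fit(Xs, y)                                  # recursively drops the weakest features

selected = np.flatnonzero(rfe.support_)         # indices of the retained features
model = SVR(kernel="rbf").fit(Xs[:, selected], y)
```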

Training set; Correlation coefficient; Mean squared error; Computer science; Applied Mathematics; Feature selection; Mutual information; Machine learning; Biochemistry; Computer Science Applications; Support vector machine; Structural Biology; Feature (machine learning); Artificial intelligence; Molecular Biology; Energy (signal processing); Curse of dimensionality; BMC Bioinformatics

Multilayer neural networks: an experimental evaluation of on-line training methods

2004

Artificial neural networks (ANN) are inspired by the structure of biological neural networks and their ability to integrate knowledge and learning. In ANN training, the objective is to minimize the error over the training set. The most popular method for training these networks is backpropagation, a gradient descent technique. Other non-linear optimization methods, such as conjugate direction set or conjugate gradient methods, have also been used for this purpose. Recently, metaheuristics such as simulated annealing, genetic algorithms or tabu search have also been adapted to this context. There are situations in which the necessary training data are being generated in real time and an extensive tr…
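As a toy illustration of a metaheuristic alternative to gradient descent, here is a simulated-annealing weight search for a one-hidden-layer network (NumPy sketch; the architecture, cooling schedule, and step sizes are arbitrary choices, not those evaluated in the paper).

```python
import numpy as np

def mlp_mse(w, X, y, hidden):
    """MSE of a one-hidden-layer tanh network whose weights are packed in w."""
    d = X.shape[1]
    W1 = w[:d * hidden].reshape(d, hidden)
    W2 = w[d * hidden:].reshape(hidden, 1)
    pred = np.tanh(X @ W1) @ W2
    return float(np.mean((pred.ravel() - y) ** 2))

def anneal_train(X, y, hidden=5, iters=2000, t0=1.0, seed=0):
    """Simulated-annealing search over the packed weight vector."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1] * hidden + hidden)
    err = mlp_mse(w, X, y, hidden)
    best_w, best_err = w.copy(), err
    for t in range(iters):
        temp = t0 * (1.0 - t / iters) + 1e-3                 # simple linear cooling
        cand = w + rng.normal(scale=0.05, size=w.size)       # random perturbation
        cand_err = mlp_mse(cand, X, y, hidden)
        if cand_err < err or rng.random() < np.exp((err - cand_err) / temp):
            w, err = cand, cand_err                          # accept the move
        if err < best_err:
            best_w, best_err = w.copy(), err
    return best_w, best_err
```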

Training set; General Computer Science; Artificial neural network; Computer science; Computer Science::Neural and Evolutionary Computation; MathematicsofComputing_NUMERICALANALYSIS; Context (language use); Management Science and Operations Research; Machine learning; Backpropagation; Tabu search; Modeling and Simulation; Conjugate gradient method; Genetic algorithm; Simulated annealing; Artificial intelligence; Gradient descent; Metaheuristic; Computers & Operations Research

Towards to deep neural network application with limited training data: synthesis of melanoma's diffuse reflectance spectral images

2019

The goal of our study is to train artificial neural networks (ANN) using multispectral images of melanoma. Since the number of multispectral images of melanomas is limited, we propose to synthesize them from multispectral images of benign skin lesions. We used the previously created melanoma diagnostic criterion p'. This criterion is calculated from multispectral images of skin lesions captured under 526 nm, 663 nm, and 964 nm LED illumination. We synthesize these three images from multispectral images of a nevus so that the p' map matches the melanoma criterion (the values in the lesion area are >1). Demonstrated results show that by transforming multispectral images of benign nevus i…
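The abstract does not reproduce the definition of the p' criterion (it comes from the authors' earlier work), so the sketch below uses a purely hypothetical stand-in formula; only the overall loop, which rescales the nevus bands inside the lesion until the p' map exceeds 1, reflects the described idea.

```python
import numpy as np

def p_prime(i526, i663, i964):
    """HYPOTHETICAL placeholder for the melanoma criterion p'; the real
    formula is defined in the authors' earlier work and is not shown here."""
    return i663 / (i526 + 1e-9) * np.sqrt(i964 / (i526 + 1e-9))

def synthesize_melanoma_like(i526, i663, i964, lesion_mask, target=1.05, iters=50):
    """Gently rescale the nevus bands inside the lesion until p' > 1 there."""
    i526, i663, i964 = i526.copy(), i663.copy(), i964.copy()
    for _ in range(iters):
        below = lesion_mask & (p_prime(i526, i663, i964) < target)
        if not below.any():
            break
        i663[below] *= 1.02      # nudge the bands in the direction that raises this p'
        i526[below] *= 0.98
    return i526, i663, i964
```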

Training set; LED illumination; Artificial neural network; Computer science; Melanoma; Multispectral image; Pattern recognition; Nevus; Benign nevus; Artificial intelligence; Skin cancer; Diffuse Optical Spectroscopy and Imaging VII

Intelligent Sampling for Vegetation Nitrogen Mapping Based on Hybrid Machine Learning Algorithms

2021

Upcoming satellite imaging spectroscopy missions will deliver spatiotemporally explicit data streams to be exploited for mapping vegetation properties, such as nitrogen (N) content. Within retrieval workflows for real-time mapping over agricultural regions, such crop-specific information products need to be derived precisely and rapidly. To allow fast processing, intelligent sampling schemes for training databases should be incorporated to establish efficient machine learning (ML) models. In this study, we implemented active learning (AL) heuristics using kernel ridge regression (KRR) to minimize and optimize a training database for variational heteroscedastic Gaussian processes regression (V…
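A minimal pool-based sketch of the idea: diversity sampling by Euclidean distance feeding a kernel ridge regression model (scikit-learn). The heuristic shown is only one of several AL criteria such studies compare, and all names and sizes are placeholders.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def build_training_db(X_pool, y_pool, n_start=10, n_final=100, seed=0):
    """Greedy pool-based selection: repeatedly add the pool sample that is
    farthest (Euclidean distance) from the already selected training samples."""
    rng = np.random.default_rng(seed)
    idx = list(rng.choice(len(X_pool), n_start, replace=False))
    while len(idx) < n_final:
        rest = np.setdiff1d(np.arange(len(X_pool)), idx)
        # distance of every remaining sample to its nearest selected sample
        d = np.linalg.norm(X_pool[rest, None, :] - X_pool[idx][None, :, :], axis=2).min(axis=1)
        idx.append(int(rest[np.argmax(d)]))
    model = KernelRidge(kernel="rbf", alpha=1.0).fit(X_pool[idx], y_pool[idx])
    return model, idx
```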

Training set; Mean squared error; Active learning (machine learning); Data stream mining; Computer science; Frame (networking); Sampling (statistics); Vegetation; 15. Life on land; Geotechnical Engineering and Engineering Geology; Article; Euclidean distance; Data mining; Electrical and Electronic Engineering; Gaussian process

Null Space Based Image Recognition Using Incremental Eigendecomposition

2011

An incremental approach to the discriminative common vector (DCV) method for image recognition is considered. Discriminative projections are tackled in the particular context in which new training data becomes available and learned subspaces may need continuous updating. Starting from an incremental eigendecomposition of scatter matrices, an efficient updating rule based on projections and orthogonalization is given. The corresponding algorithm has been empirically assessed and compared to its batch counterpart. The good properties and performance of the original method are preserved, with a dramatic decrease in the computation needed.
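The following is a NumPy sketch of the underlying rank-one update for the eigendecomposition of a total scatter matrix when one new sample arrives (incremental-PCA style). The DCV-specific projection steps of the paper are not reproduced, and all names are placeholders.

```python
import numpy as np

def update_eigendecomposition(U, lam, mean, n, x, keep=None):
    """Update the eigendecomposition of a total scatter matrix S = U diag(lam) U^T
    after adding sample x; uses S' = S + (n/(n+1)) (x - mean)(x - mean)^T."""
    v = x - mean
    w = n / (n + 1.0)
    p = U.T @ v                            # component of v inside the current basis
    r = v - U @ p                          # residual orthogonal to the basis
    r_norm = np.linalg.norm(r)
    if r_norm > 1e-10:
        B = np.column_stack([U, r / r_norm])
        M = np.zeros((len(lam) + 1, len(lam) + 1))
        M[:-1, :-1] = np.diag(lam) + w * np.outer(p, p)
        M[:-1, -1] = w * r_norm * p
        M[-1, :-1] = w * r_norm * p
        M[-1, -1] = w * r_norm ** 2
    else:                                  # new sample lies in the old subspace
        B = U
        M = np.diag(lam) + w * np.outer(p, p)
    d, V = np.linalg.eigh(M)               # small (k+1)x(k+1) problem, not d x d
    order = np.argsort(d)[::-1]
    d, V = d[order], V[:, order]
    if keep is not None:                   # optionally truncate the subspace
        d, V = d[:keep], V[:, :keep]
    return B @ V, d, mean + v / (n + 1.0), n + 1
```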

Training set; Computation; Context (language use); Pattern recognition; Rule-based system; Linear subspace; Discriminative model; Computer vision; Artificial intelligence; Orthogonalization; Eigendecomposition of a matrix; Mathematics

Learning the structure of HMM's through grammatical inference techniques

2002

A technique is described in which all the components of a hidden Markov model are learnt from training speech data. The structure or topology of the model (i.e., the number of states and the actual transitions) is obtained by means of an error-correcting grammatical inference algorithm (ECGI). This structure is then reduced by using an appropriate state-pruning criterion. The statistical parameters associated with the obtained topology are estimated from the same training data by means of the standard Baum-Welch algorithm. Experimental results showing the applicability of this technique to speech recognition are presented.
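A small NumPy sketch of the pruning step described here: drop low-probability transitions from an inferred topology, remove states left without outgoing arcs, and renormalize before re-estimation. The threshold criterion is a placeholder, not necessarily the one used in the paper.

```python
import numpy as np

def prune_topology(trans, min_prob=0.01):
    """Remove transitions below min_prob from an inferred HMM transition matrix,
    drop states left without outgoing arcs, and keep the rows stochastic."""
    pruned = np.where(trans >= min_prob, trans, 0.0)
    keep = pruned.sum(axis=1) > 0
    pruned = pruned[keep][:, keep]
    row_sums = pruned.sum(axis=1, keepdims=True)
    pruned = np.divide(pruned, row_sums, out=np.zeros_like(pruned), where=row_sums > 0)
    return pruned, np.flatnonzero(keep)

# The surviving topology is then handed to a standard Baum-Welch re-estimation;
# under EM, transition probabilities initialised to zero remain zero, so the
# learned structure is preserved.
```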

Training set; Computer science; Estimation theory; Speech recognition; Markov process; Computer Science::Computation and Language (Computational Linguistics and Natural Language and Speech Processing); Pattern recognition; Grammar induction; Rule-based machine translation; Artificial intelligence; Pruning (decision trees); Baum–Welch algorithm; Hidden Markov model; Error detection and correction; International Conference on Acoustics, Speech, and Signal Processing

Ensemble Feature Selection Based on Contextual Merit and Correlation Heuristics

2001

Recent research has proven the benefits of using ensembles of classifiers for classification problems. Ensembles of diverse and accurate base classifiers are constructed by machine learning methods that manipulate the training sets. One way to manipulate the training set is to use feature selection heuristics when generating the base classifiers. In this paper we examine two of them: correlation-based and contextual-merit-based heuristics. Both rely on quite similar assumptions concerning heterogeneous classification problems. Experiments are conducted on several data sets from the UCI Repository. We construct a fixed number of base classifiers over selected feature subsets and refine the ensemble iter…
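A compact sketch of the correlation-based variant: each base classifier is trained on a feature subset chosen from the top correlation-ranked features (scikit-learn decision trees as stand-in base learners; the iterative refinement step mentioned in the abstract is omitted and all sizes are arbitrary).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def correlation_ranking(X, y):
    """Rank features by absolute Pearson correlation with the class label."""
    yc = y - y.mean()
    corr = (X - X.mean(0)).T @ yc / (X.std(0) * yc.std() * len(y) + 1e-12)
    return np.argsort(np.abs(corr))[::-1]

def build_ensemble(X, y, n_members=5, subset_size=10, seed=0):
    """Train base classifiers on different correlation-selected feature subsets."""
    rng = np.random.default_rng(seed)
    ranked = correlation_ranking(X, y)
    members = []
    for _ in range(n_members):
        # each member draws from the top-ranked features to stay accurate yet diverse
        subset = rng.choice(ranked[: 2 * subset_size], subset_size, replace=False)
        members.append((subset, DecisionTreeClassifier().fit(X[:, subset], y)))
    return members

def predict(members, X):
    """Majority vote over the base classifiers (assumes integer class labels >= 0)."""
    votes = np.stack([clf.predict(X[:, s]) for s, clf in members]).astype(int)
    return np.array([np.bincount(col).argmax() for col in votes.T])
```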

Training set; Computer science; Feature selection; Pattern recognition; Base (topology); Machine learning; Expert system; Random subspace method; ComputingMethodologies_PATTERNRECOGNITION; Ensembles of classifiers; Feature (machine learning); Artificial intelligence; Heuristics; Cascading classifiers

One-Class Classifiers: A Review and Analysis of Suitability in the Context of Mobile-Masquerader Detection

2007

One-class classifiers, which use only data from a single class for training, are justified when data from other classes is difficult to obtain. In particular, their use is justified in mobile-masquerader detection, where user characteristics are classified as belonging to the legitimate-user class or to the impostor class, and where collecting data originating from impostors is problematic. This paper systematically reviews various one-class classification methods and analyses their suitability in the context of mobile-masquerader detection. For each classification method, its sensitivity to errors in the training set, computational requirements, and other characteristics are consi…
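As an illustration of the one-class setting described here, a minimal scikit-learn sketch that trains only on legitimate-user data and flags deviating feature vectors; the synthetic features and the nu/gamma values are placeholders, not characteristics or settings from the paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_user = rng.normal(size=(500, 8))          # stand-in behaviour features of the legitimate user

detector = make_pipeline(StandardScaler(), OneClassSVM(kernel="rbf", nu=0.05, gamma="scale"))
detector.fit(X_user)                        # training uses data from one class only

X_new = rng.normal(loc=2.0, size=(5, 8))    # deviating behaviour, e.g. a possible impostor
print(detector.predict(X_new))              # -1 flags a suspected masquerader, +1 is normal
```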

Training set; Computer science; Masquerader Detection; Context (language use); General Medicine; [MATH] Mathematics [math]; Mobile Terminal Security; [INFO] Computer Science [cs]; Machine learning; Class (biology); Computer Science; Classification methods; Sensitivity (control systems); Artificial intelligence; One-class Classifiers; Mathematics

Learning Similarity Scores by Using a Family of Distance Functions in Multiple Feature Spaces

2017

There exist a large number of distance functions that allow one to measure the similarity between feature vectors and can thus be used for ranking purposes. When multiple representations of the same object are available, the distances in each representation space may be combined to produce a single similarity score. In this paper, we present a method to build such a similarity ranking out of a family of distance functions. Unlike other approaches that aim to select the best distance function for a particular context, we use several distances and combine them in a convenient way. To this end, we adopt a classical similarity learning approach and treat the problem as a standard supervised machine lea…
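A minimal supervised sketch of the idea: evaluate a small family of distance functions on object pairs, stack the values as a feature vector, and learn a classifier whose positive-class probability serves as the similarity score (scikit-learn; the particular distances and the logistic-regression choice are placeholders, not the paper's family of functions).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_distances(a, b):
    """A small family of distance functions on one feature space; with several
    representations, the blocks would be concatenated per pair."""
    return np.array([
        np.linalg.norm(a - b),                                                    # Euclidean
        np.abs(a - b).sum(),                                                      # Manhattan
        1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12),       # cosine distance
    ])

def fit_similarity(X, y, n_pairs=2000, seed=0):
    """Learn a similarity score from labelled pairs: distances are the inputs,
    'same class' is the target (assumes both pair types occur in the sample)."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(X), n_pairs)
    j = rng.integers(0, len(X), n_pairs)
    D = np.stack([pair_distances(X[a], X[b]) for a, b in zip(i, j)])
    same = (y[i] == y[j]).astype(int)
    return LogisticRegression().fit(D, same)

# similarity(a, b) = model.predict_proba(pair_distances(a, b)[None])[0, 1]
```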

Training set; Feature vector; Similarity heuristic; Pattern recognition; Machine learning; Semantic similarity; Artificial Intelligence; Normalized compression distance; Computer Vision and Pattern Recognition; Artificial intelligence; Jaro–Winkler distance; Classifier (UML); Software; Similarity learning; Mathematics; International Journal of Pattern Recognition and Artificial Intelligence