Search results for "machine learning"

Showing 10 of 1,455 documents.

An Optimized Design of Choice Experiments: A New Approach for Studying Decision Behavior in Choice Task Experiments

2014

In this paper, we present a new approach for the optimal experimental design problem of generating diagnostic choice tasks, where the respondent's decision strategy can be unambiguously deduced from the observed choice. In this new approach, we applied a genetic algorithm that creates a one-to-one correspondence between a set of predefined decision strategies and the alternatives of the choice task; it also manipulates the characteristics of the choice tasks. In addition, this new approach takes into account the measurement errors that can occur when the preferences of the decision makers are being measured. The proposed genetic algorithm is capable of generating diagnostic choice tasks eve…
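The abstract does not include the paper's task encoding or fitness function, but the genetic-algorithm loop it builds on can be sketched generically. In this minimal sketch a toy bit-count objective stands in for the paper's diagnosticity score; all names and parameters are illustrative, not the authors' actual design.

```python
import random

random.seed(0)

def fitness(candidate):
    # Stand-in objective (count of 1-bits); in the paper this would score
    # how unambiguously a choice task separates the decision strategies.
    return sum(candidate)

def evolve(pop_size=20, genome_len=12, generations=40, mut_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ 1 if random.random() < mut_rate else g
                     for g in child]
            children.append(child)
        pop = parents + children                   # elitism: parents survive
    return max(pop, key=fitness)

best = evolve()
```

Because the best half of each generation survives unchanged, the best fitness never decreases from one generation to the next.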

Keywords: Choice set; Operationalization; Sociology and Political Science; Computer science; Strategy and Management; General Decision Sciences; Contrast (statistics); Space (commercial competition); Machine learning; Task (project management); Arts and Humanities (miscellaneous); Similarity (psychology); Genetic algorithm; Artificial intelligence; Set (psychology); Applied Psychology
Published in: Journal of Behavioral Decision Making

Incremental linear model trees on massive datasets

2013

The existence of massive datasets raises the need for algorithms that make efficient use of resources like memory and computation time. Besides well-known approaches such as sampling, online algorithms are being recognized as good alternatives, as they often process datasets faster using much less memory. The important class of algorithms learning linear model trees online (incremental linear model trees or ILMTs in the following) offers interesting options for regression tasks in this sense. However, surprisingly little is known about their performance, as there exists no large-scale evaluation on massive stationary datasets under equal conditions. Therefore, this paper shows their applica…
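To make the memory argument concrete, here is a minimal single-pass linear learner updated by stochastic gradient descent: it touches each example once and keeps only the coefficients in memory. This is a generic sketch of online regression, not the ILMT algorithms evaluated in the paper (those grow a tree with a linear model per leaf).

```python
def sgd_linear(stream, n_features, lr=0.01):
    # Single-pass least squares: constant memory per example, which is
    # the property that makes incremental learners attractive at scale.
    w = [0.0] * n_features
    b = 0.0
    for x, y in stream:
        pred = sum(wi * xi for wi, xi in zip(w, x)) + b
        err = pred - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err
    return w, b

# Toy stream drawn from y = 2*x + 1 (noiseless)
data = [([float(i % 5)], 2.0 * (i % 5) + 1.0) for i in range(2000)]
w, b = sgd_linear(data, n_features=1)
```

On this noiseless stream the coefficients converge close to the generating values (2, 1) without ever holding the dataset in memory.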

Keywords: Class (computer programming); Computer science; Process (engineering); Computation; Linear model; Sampling (statistics); Machine learning; KISS principle; Data mining; Artificial intelligence; Online algorithm
Published in: Proceedings of the 28th Annual ACM Symposium on Applied Computing

Stability-Based Model Selection for High Throughput Genomic Data: An Algorithmic Paradigm

2012

Clustering is one of the most well known activities in scientific investigation and the object of research in many disciplines, ranging from Statistics to Computer Science. In this beautiful area, one of the most difficult challenges is the model selection problem, i.e., the identification of the correct number of clusters in a dataset. In the last decade, a few novel techniques for model selection, representing a sharp departure from previous ones in statistics, have been proposed and gained prominence for microarray data analysis. Among those, the stability-based methods are the most robust and best performing in terms of prediction, but the slowest in terms of time. Unfortunately…
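The abstract does not spell out the agreement measure used; a common choice in stability-based model selection is Rand-style co-membership agreement between two clusterings of the same points, which can be sketched in a few lines (a minimal sketch, not the authors' exact procedure):

```python
from itertools import combinations

def comembership_agreement(labels_a, labels_b):
    # Two clusterings agree on a pair of points when both put the pair in
    # the same cluster, or both put it in different clusters (Rand index).
    n = len(labels_a)
    agree = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in combinations(range(n), 2)
    )
    return agree / (n * (n - 1) // 2)
```

Stability methods repeatedly subsample the data, cluster each subsample with candidate values of k, and prefer the k whose clusterings score highest under a measure like this; the pairwise comparison over all point pairs is one source of the running-time cost the abstract mentions.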

Keywords: Class (computer programming); Settore INF/01 - Informatica; Computer science; Heuristic (computer science); Model selection; Stability (learning theory); Machine learning; Identification (information); Algorithm design; Artificial intelligence; Cluster analysis; Algorithms and Data Structures; Throughput (business)

One-Sided Prototype Selection on Class Imbalanced Dissimilarity Matrices

2012

In the dissimilarity representation paradigm, several prototype selection methods have been used to cope with the topic of how to select a small representation set for generating a low-dimensional dissimilarity space. In addition, these methods have also been used to reduce the size of the dissimilarity matrix. However, these approaches assume a relatively balanced class distribution, which is grossly violated in many real-life problems. Often, the ratios of prior probabilities between classes are extremely skewed. In this paper, we study the use of renowned prototype selection methods adapted to the case of learning from an imbalanced dissimilarity matrix. More specifically, we propose the…
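The dissimilarity representation the abstract refers to can be illustrated directly: each object is re-described by its distances to a small prototype (representation) set, which is exactly why the choice of prototypes, and their class balance, matters. Euclidean distance below is only a placeholder for whatever dissimilarity the application uses.

```python
import math

def dissimilarity_space(X, prototypes):
    # Each object is re-represented by its distances to the prototype set,
    # so an n x |prototypes| matrix replaces the full n x n matrix.
    return [[math.dist(x, p) for p in prototypes] for x in X]

X = [[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]]
prototypes = [[0.0, 0.0], [6.0, 8.0]]   # a 2-element representation set
D = dissimilarity_space(X, prototypes)
```

With two prototypes, three objects are embedded into a 2-dimensional dissimilarity space regardless of their original dimensionality.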

Keywords: Class (computer programming); Pattern recognition; Pattern Recognition; Machine learning; Set (abstract data type); Matrix (mathematics); Distribution (mathematics); Dissimilarity; One sided; Pattern recognition (psychology); Artificial intelligence; Representation (mathematics); Selection (genetic algorithm); Mathematics

On Duality in Learning and the Selection of Learning Teams

1996

Previous work in inductive inference dealt mostly with finding one or several inductive inference machines (IIMs) that successfully learn collections of functions. Herein we start with a class of functions and consider the learner set of all IIMs that are successful at learning the given class. Applying this perspective to the case of team inference leads to the notion of diversification for a class of functions. This enables us to distinguish between several flavours of IIMs, all of which must be represented in a team learning the given class.

Keywords: Class (computer programming); Perspective (graphical); Duality (mathematics); Inference; Inductive reasoning; Machine learning; Theoretical Computer Science; Computer Science Applications; Team learning; Computational Theory and Mathematics; Selection (linguistics); Artificial intelligence; Mathematics; Information Systems
Published in: Information and Computation

A local complexity based combination method for decision forests trained with high-dimensional data

2012

Accurate machine learning with high-dimensional data is affected by phenomena known as the “curse” of dimensionality. One of the main strategies explored in the last decade to deal with this problem is the use of multi-classifier systems. Several of such approaches are inspired by the Random Subspace Method for the construction of decision forests. Furthermore, other studies rely on estimations of the individual classifiers' competence, to enhance the combination in the multi-classifier and improve the accuracy. We propose a competence estimate which is based on local complexity measurements, to perform a weighted average combination of the decision forest. Experimental results show how thi…
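The fusion step the abstract describes, a competence-weighted average of the forest's outputs, can be sketched on its own. In the paper the weights come from local complexity measurements around the query point; here they are simply given, and all names are illustrative.

```python
def weighted_vote(probs, competences):
    # probs: one row of class probabilities per base classifier;
    # competences: per-classifier weights (in the paper these come from
    # local complexity measurements around the query point).
    total = sum(competences)
    n_classes = len(probs[0])
    fused = [sum(w * p[c] for w, p in zip(competences, probs)) / total
             for c in range(n_classes)]
    return fused.index(max(fused)), fused

probs = [[0.9, 0.1], [0.2, 0.8], [0.3, 0.7]]
label_weighted, _ = weighted_vote(probs, [3.0, 1.0, 1.0])
label_uniform, _ = weighted_vote(probs, [1.0, 1.0, 1.0])
```

Note that on this toy input the competence weights flip the decision relative to a plain unweighted average, which is the mechanism by which competence estimates can improve accuracy.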

Keywords: Clustering high-dimensional data; Computational complexity theory; Computer science; Decision tree; Machine learning; Random forest; Random subspace method; Artificial intelligence; Data mining; Competence (human resources); Classifier (UML); Curse of dimensionality
Published in: 2012 12th International Conference on Intelligent Systems Design and Applications (ISDA)

Data Analysis and Bioinformatics

2007

Data analysis methods and techniques are revisited in the case of biological data sets. Particular emphasis is given to clustering and mining issues. Clustering is still a subject of active research in several fields such as statistics, pattern recognition, and machine learning. Data mining adds to clustering the complications of very large data-sets with many attributes of different types. And this is a typical situation in biology. Some case studies are also described.

Keywords: Clustering high-dimensional data; Fuzzy clustering; Computer science; Correlation clustering; Conceptual clustering; Machine learning; ComputingMethodologies_PATTERNRECOGNITION; CURE data clustering algorithm; Consensus clustering; Canopy clustering algorithm; Data mining; Artificial intelligence; Cluster analysis

Distance Functions, Clustering Algorithms and Microarray Data Analysis

2010

Distance functions are a fundamental ingredient of classification and clustering procedures, and this holds true also in the particular case of microarray data. In the general data mining and classification literature, functions such as Euclidean distance or Pearson correlation have gained their status of de facto standards thanks to a considerable amount of experimental validation. For microarray data, the issue of which distance function works best has been investigated, but no final conclusion has been reached. The aim of this extended abstract is to shed further light on that issue. Indeed, we present an experimental study, involving several distances, assessing (a) their intrinsic sepa…
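The two de facto standards the abstract names behave quite differently on expression data, and the difference is easy to demonstrate: Euclidean distance reacts to absolute expression level, while correlation distance compares only the shape of the profiles. A minimal sketch (gene names hypothetical):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pearson_distance(a, b):
    # 1 - Pearson correlation: popular for microarrays because it compares
    # the *shape* of expression profiles, ignoring absolute level.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return 1.0 - cov / (sa * sb)

gene_a = [1.0, 2.0, 3.0, 4.0]
gene_b = [11.0, 12.0, 13.0, 14.0]   # same profile, shifted upward by 10
```

Here the two profiles are Euclidean-distant but perfectly correlated, so the two distances would group them very differently; that sensitivity to scale and shift is one reason the "which distance works best" question stays open.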

Keywords: Clustering high-dimensional data; Fuzzy clustering; Settore INF/01 - Informatica; Correlation clustering; Machine learning; Pearson product-moment correlation coefficient; Ranking (information retrieval); Euclidean distance; Clustering distance measures; Artificial intelligence; Data mining; Cluster analysis; Mathematics; De facto standard

Regularized Regression Incorporating Network Information: Simultaneous Estimation of Covariate Coefficients and Connection Signs

2014

We develop an algorithm that incorporates network information into regression settings. It simultaneously estimates the covariate coefficients and the signs of the network connections (i.e. whether the connections are of an activating or of a repressing type). For the coefficient estimation steps an additional penalty is set on top of the lasso penalty, similarly to Li and Li (2008). We develop a fast implementation for the new method based on coordinate descent. Furthermore, we show how the new methods can be applied to time-to-event data. The new method yields good results in simulation studies concerning sensitivity and specificity of non-zero covariate coefficients, estimation of networ…
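The coordinate-descent engine the abstract mentions can be sketched for the plain lasso; the paper's method stacks an additional network penalty (and sign estimation) on top of this, which the sketch omits. Each coordinate update is a soft-thresholding step on the partial residual.

```python
def soft_threshold(rho, lam):
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=100):
    # Plain lasso by cyclic coordinate descent with soft-thresholding.
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding feature j's own contribution
            r = [y[i] - sum(X[i][k] * w[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            norm_sq = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft_threshold(rho, lam) / norm_sq
    return w

X = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]  # orthogonal columns
y = [3.0, 0.0, -3.0, 0.0]                               # y = 3 * feature 0
w = lasso_cd(X, y, lam=1.0)
```

With orthogonal columns the solution is exact after one sweep: the penalty shrinks the true coefficient 3.0 toward zero and leaves the irrelevant coefficient at exactly zero, which is the sparsity behaviour penalized regression exploits for covariate selection.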

Keywords: Clustering high-dimensional data; jel:C41; jel:C13; Machine learning; Regression; high-dimensional data; gene expression data; pathway information; penalized regression; Connection (mathematics); Set (abstract data type); Lasso (statistics); Covariate; Artificial intelligence; Sensitivity (control systems); Coordinate descent; Algorithm; Mathematics

Bayesian versus data driven model selection for microarray data

2014

Clustering is one of the most well known activities in scientific investigation and the object of research in many disciplines, ranging from Statistics to Computer Science. In this beautiful area, one of the most difficult challenges is a particular instance of the model selection problem, i.e., the identification of the correct number of clusters in a dataset. In what follows, for ease of reference, we still refer to that instance as model selection. It is an important part of any statistical analysis. The techniques used for solving it are mainly either Bayesian or data-driven, and both are based on internal knowledge; that is, they use information obtained by processing the input data. A…
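The Bayesian side of such comparisons typically rests on information criteria like BIC (with AIC as a common alternative), both of which trade off fit against model complexity. A minimal sketch on hypothetical fit results, not data from the paper:

```python
import math

def aic(log_lik, k):
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    # BIC's log(n) factor penalizes parameters more heavily than AIC
    # whenever n > e^2, so it tends to choose simpler models.
    return k * math.log(n) - 2 * log_lik

# Hypothetical fits: (clusters, free parameters, maximized log-likelihood)
fits = [(2, 5, -1210.0), (3, 8, -1195.0), (4, 11, -1191.0)]
n = 500
best_bic = min(fits, key=lambda f: bic(f[2], f[1], n))[0]
best_aic = min(fits, key=lambda f: aic(f[2], f[1]))[0]
# On these numbers BIC selects 3 clusters while AIC selects 4.
```

That the two criteria can disagree on the same fits is precisely why comparing Bayesian against data-driven (e.g. stability-based) selection rules is informative.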

Keywords: Clustering; Model selection; Bayesian information criterion; Akaike information criterion; Minimum message length; Bioinformatics; Settore INF/01 - Informatica; Computer science; Bayesian probability; Machine learning; Computer Science Applications; Data-driven; Determining the number of clusters in a data set; Identification (information); Data mining; Artificial intelligence; Cluster analysis