AUTHOR
Alexey Tsymbal
Diversity in search strategies for ensemble feature selection
Ensembles of learnt models constitute one of the main current directions in machine learning and data mining. Ensembles allow us to achieve higher accuracy, which is often not achievable with single models. It has been shown theoretically and experimentally that, in order for an ensemble to be effective, it should consist of base classifiers that have diversity in their predictions. One technique, which has proved effective for constructing an ensemble of diverse base classifiers, is the use of different feature subsets, or so-called ensemble feature selection. Many ensemble feature selection strategies incorporate diversity as an objective in the search for the best collection of feature subse…
Dynamic Integration of Decision Committees
Decision committee learning has demonstrated outstanding success in reducing classification error with an ensemble of classifiers. In a way, a decision committee is a classifier formed from an ensemble of subsidiary classifiers. Voting, which is commonly used to produce the final decision of a committee, has, however, a shortcoming: it is unable to take local expertise into account. When a new instance is difficult to classify, it can easily happen that only a minority of the classifiers succeed, and majority voting will then quite probably result in a wrong classification. We suggest that dynamic integration of classifiers be used instead of majority voting in decision committees. Our…
Does Relevance Matter to Data Mining Research?
Data mining (DM) and knowledge discovery are intelligent tools that help to accumulate and process data and make use of it. We review several existing frameworks for DM research that originate from different paradigms. These DM frameworks mainly address various DM algorithms for the different steps of the DM process. Recent research has shown that many real-world problems require the integration of several DM algorithms from different paradigms in order to produce a better solution, which elevates the importance of practice-oriented aspects in DM research as well. In this paper we strongly emphasize that DM research should take into account not only its rigor but also its relevance. Under…
Decision Committee Learning with Dynamic Integration of Classifiers
Decision committee learning has demonstrated spectacular success in reducing classification error from learned classifiers. These techniques develop a classifier in the form of a committee of subsidiary classifiers. The combination of outputs is usually performed by majority vote. Voting, however, has a shortcoming: it is unable to take local expertise into account. When a new instance is difficult to classify, the average classifier will give a wrong prediction, and the majority vote will then more probably result in a wrong prediction. Instead of voting, dynamic integration of classifiers can be used, which is based on the assumption that each committee member is best inside certain subar…
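The "local expertise" idea lends itself to a compact illustration. Below is a minimal sketch, assuming a list of already fitted scikit-learn-style classifiers and numeric feature arrays, of selecting the committee member with the best accuracy in the neighbourhood of a new instance; it illustrates the general principle rather than the paper's exact algorithm.

    # Minimal sketch of dynamic selection by local accuracy (illustrative;
    # not the paper's exact algorithm).
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def dynamic_select(ensemble, X_train, y_train, x_new, k=7):
        """Predict x_new with the member that is most accurate on the
        k training instances nearest to x_new."""
        nn = NearestNeighbors(n_neighbors=k).fit(X_train)
        idx = nn.kneighbors([x_new], return_distance=False)[0]
        local_acc = [np.mean(clf.predict(X_train[idx]) == y_train[idx])
                     for clf in ensemble]
        best = ensemble[int(np.argmax(local_acc))]
        return best.predict([x_new])[0]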
The decision support system for telemedicine based on multiple expertise
This paper discusses the application of artificial intelligence in telemedicine and some of our research results in this area. The main goal of our research is to develop methods and systems to collect, analyse, distribute and use medical diagnostic knowledge from multiple knowledge sources and areas of expertise. The use of modern communication tools enables a physician to collect and analyse information obtained from experts worldwide with the help of a medical decision support system. In this paper we discuss a multilevel representation and processing of medical data using a system which evaluates and exploits knowledge about the behaviour of statistical diagnostic methods. The presented te…
Correlation-Based and Contextual Merit-Based Ensemble Feature Selection
Recent research has proved the benefits of using an ensemble of diverse and accurate base classifiers for classification problems. In this paper the focus is on producing diverse ensembles with the aid of three feature selection heuristics based on two approaches: correlation-based and contextual-merit-based ones. We have developed an algorithm and experimented with it to evaluate and compare the three feature selection heuristics on ten data sets from the UCI Repository. On average, the simple correlation-based ensemble is superior in accuracy. The contextual-merit-based heuristics seem to include too many features in the initial ensembles, and the refinement iterations were most successful with them.
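For intuition, a correlation-based merit of the CFS type can be written down in a few lines. This is an illustrative sketch assuming numerically encoded features and class labels, not necessarily the exact heuristic evaluated in the paper.

    # Illustrative correlation-based merit in the spirit of CFS.
    import numpy as np

    def cfs_merit(X, y, subset):
        """subset: list of feature indices. Merit grows with feature-class
        correlation and shrinks with feature-feature redundancy."""
        k = len(subset)
        rcf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
        if k == 1:
            return rcf
        rff = np.mean([abs(np.corrcoef(X[:, i], X[:, j])[0, 1])
                       for a, i in enumerate(subset) for j in subset[a + 1:]])
        return k * rcf / np.sqrt(k + k * (k - 1) * rff)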
Feature Extraction for Classification in Knowledge Discovery Systems
Dimensionality reduction is a very important step in the data mining process. In this paper, we consider feature extraction for classification tasks as a technique to overcome problems occurring because of "the curse of dimensionality". We consider three different eigenvector-based feature extraction approaches for classification. A summary of the obtained results concerning the accuracy of classification schemes is presented, and the search for the most appropriate feature extraction method for a given data set is considered. A decision support system to aid in the integration of the feature extraction and classification processes is proposed. The goals and requirements set for the d…
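As a minimal sketch of eigenvector-based feature extraction before classification, plain PCA can be chained with a classifier; the 95% variance threshold and the Naive Bayes learner below are illustrative choices, not the paper's settings.

    # Eigenvector-based feature extraction (plain PCA) before classification.
    from sklearn.decomposition import PCA
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline

    # keep enough principal components to explain 95% of the variance
    model = make_pipeline(PCA(n_components=0.95), GaussianNB())
    # model.fit(X_train, y_train); model.predict(X_test)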
Ensemble Feature Selection Based on Contextual Merit and Correlation Heuristics
Recent research has proven the benefits of using ensembles of classifiers for classification problems. Ensembles of diverse and accurate base classifiers are constructed by machine learning methods that manipulate the training sets. One way to manipulate the training set is to use feature selection heuristics to generate the base classifiers. In this paper we examine two of them: correlation-based and contextual-merit-based heuristics. Both rely on quite similar assumptions concerning heterogeneous classification problems. Experiments are conducted on several data sets from the UCI Repository. We construct a fixed number of base classifiers over selected feature subsets and refine the ensemble iter…
Dynamic Integration of Classifiers in the Space of Principal Components
Recent research has shown the integration of multiple classifiers to be one of the most important directions in machine learning and data mining. It was shown that, for an ensemble to be successful, it should consist of accurate and diverse base classifiers. However, it is also important that the integration procedure in the ensemble properly utilize the ensemble diversity. In this paper, we present an algorithm for the dynamic integration of classifiers in the space of extracted features (FEDIC). It is based on the technique of dynamic integration, in which local accuracy estimates are calculated for each base classifier of an ensemble in the neighborhood of a new instance to be pr…
Towards more relevance-oriented data mining research
Data mining (DM) research has successfully developed advanced DM techniques and algorithms over the last few decades, and many organisations have great expectations of deriving more benefit from their data warehouses in decision making. Currently, the strong focus of most DM researchers is still only on technology-oriented topics. DM research commonly has several stakeholders, the major ones of which can be divided into internal and external groups, each having their own, at least partly conflicting, points of view. The most important internal groups of stakeholders are the DM research community and academics in other disciplines. The most important external stakeholder groups are manager…
Sequential Genetic Search for Ensemble Feature Selection
Ensemble learning constitutes one of the main directions in machine learning and data mining. Ensembles allow us to achieve higher accuracy, which is often not achievable with single models. One technique, which has proved effective for constructing an ensemble of diverse classifiers, is the use of feature subsets. Among the different approaches to ensemble feature selection, genetic search was shown to perform best in many domains. In this paper, a new strategy, GAS-SEFS (Genetic Algorithm-based Sequential Search for Ensemble Feature Selection), is introduced. Instead of one genetic process, it employs a series of processes, the goal of each of which is to build one base classifier. Experiment…
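A toy version of the genetic search over feature subsets (the building block that GAS-SEFS runs once per base classifier) might look as follows. It is a sketch under simplifying assumptions: plain cross-validated accuracy as fitness, a Naive Bayes base learner, at least two features, and no diversity term, whereas GAS-SEFS also rewards diversity from previously built ensemble members.

    # Toy genetic search for one feature subset (illustrative sketch only).
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)

    def fitness(mask, X, y):
        if not mask.any():
            return 0.0
        return cross_val_score(GaussianNB(), X[:, mask], y, cv=3).mean()

    def genetic_search(X, y, pop_size=20, generations=30, p_mut=0.05):
        n = X.shape[1]                       # assumes n >= 2
        pop = rng.random((pop_size, n)) < 0.5
        for _ in range(generations):
            scores = np.array([fitness(m, X, y) for m in pop])
            # binary tournament selection
            parents = pop[[max(rng.choice(pop_size, 2), key=lambda i: scores[i])
                           for _ in range(pop_size)]]
            # one-point crossover followed by bit-flip mutation
            children = parents.copy()
            for i in range(0, pop_size - 1, 2):
                cut = rng.integers(1, n)
                children[i, cut:], children[i + 1, cut:] = (
                    parents[i + 1, cut:].copy(), parents[i, cut:].copy())
            children ^= rng.random(children.shape) < p_mut
            pop = children
        scores = np.array([fitness(m, X, y) for m in pop])
        return pop[int(np.argmax(scores))]   # best feature mask found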
Class Noise and Supervised Learning in Medical Domains: The Effect of Feature Extraction
Inductive learning systems have been successfully applied in a number of medical domains. It is generally accepted that the highest accuracy an inductive learning system can achieve depends on the quality of the data and on the appropriate selection of a learning algorithm for the data. In this paper we analyze the effect of class noise on supervised learning in medical domains. We review the related work on learning from noisy data and propose to use feature extraction as a pre-processing step to diminish the effect of class noise on the learning process. Our experiments with 8 medical datasets show that feature extraction indeed helps to deal with class noise. It clearly results i…
Local Feature Selection with Dynamic Integration of Classifiers
Multidimensional data is often feature-space heterogeneous, so that individual features have unequal importance in different subareas of the feature space. This motivates the search for a technique that provides a strategic splitting of the instance space and is able to identify the best subset of features for each instance to be classified. Our technique applies the wrapper approach, in which a classification algorithm is used as an evaluation function to differentiate between feature subsets. In order to make the feature selection local, we apply a recent technique for dynamic integration of classifiers. This allows us to determine which classifier and which feature subset should be us…
Keynote Paper: Data Mining Researcher, Who is Your Customer? Some Issues Inspired by the Information Systems Field
Data mining as an applied research field is still raising great expectations among organizations which want to increase the utility they get from their huge databases and data warehouses. There exist too few success stories about organizations having managed to satisfy even some of those expectations. This situation is very similar to the one inside the information systems (IS) field, especially earlier but even currently. The recent lively debate about the identity of the IS discipline also included an analysis of the customers of IS research. Inspired by IS researchers' insights related to the topic, we ask the question "who is our customer?" as data mining researchers. With…
Feature Extraction for Dynamic Integration of Classifiers
Recent research has shown the integration of multiple classifiers to be one of the most important directions in machine learning and data mining. In this paper, we present an algorithm for the dynamic integration of classifiers in the space of extracted features (FEDIC). It is based on the technique of dynamic integration, in which local accuracy estimates are calculated for each base classifier of an ensemble in the neighborhood of a new instance to be processed. Generally, the whole space of original features is used to find the neighborhood of a new instance for local accuracy estimates in dynamic integration. However, when dynamic integration takes place in high dimensions the search f…
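FEDIC's key twist can be sketched in a few lines, under the assumption that PCA stands in for whichever extraction technique is used: the neighbourhood for the local accuracy estimates is found in the extracted-feature space rather than in the original one.

    # Illustrative sketch: look up the neighbourhood of a new instance in
    # the extracted-feature space (PCA here) instead of the original space.
    from sklearn.decomposition import PCA
    from sklearn.neighbors import NearestNeighbors

    def fedic_neighbourhood(X_train, x_new, n_components=5, k=7):
        pca = PCA(n_components=n_components).fit(X_train)
        nn = NearestNeighbors(n_neighbors=k).fit(pca.transform(X_train))
        # indices of the k nearest training instances after extraction; the
        # local accuracy estimates of dynamic integration are computed on these
        return nn.kneighbors(pca.transform([x_new]), return_distance=False)[0]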
Feature Selection for Ensembles of Simple Bayesian Classifiers
A popular method for creating an accurate classifier from a set of training data is to train several classifiers and then to combine their predictions. Ensembles of simple Bayesian classifiers have traditionally not been a focus of research. However, the simple Bayesian classifier has much broader applicability than previously thought. Besides its high classification accuracy, it also has advantages in terms of simplicity, learning speed, classification speed, storage space, and incrementality. One way to generate an ensemble of simple Bayesian classifiers is to use different feature subsets, as in the random subspace method. In this paper we present a technique for building ensembles o…
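A minimal random-subspace ensemble of simple Bayesian classifiers can be sketched as follows; the member count, subspace size, and integer class encoding are illustrative assumptions, not the paper's settings.

    # Random-subspace ensemble of simple (naive) Bayesian classifiers.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    def random_subspace_ensemble(X, y, n_members=25, subspace=0.5, seed=0):
        rng = np.random.default_rng(seed)
        k = max(1, int(subspace * X.shape[1]))
        members = []
        for _ in range(n_members):
            feats = rng.choice(X.shape[1], size=k, replace=False)
            members.append((feats, GaussianNB().fit(X[:, feats], y)))
        return members

    def majority_vote(members, X):
        votes = np.stack([clf.predict(X[:, feats]) for feats, clf in members])
        # simple majority vote; assumes classes encoded as 0..C-1
        return np.array([np.bincount(col).argmax() for col in votes.T])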
Local dimensionality reduction and supervised learning within natural clusters for biomedical data analysis
Inductive learning systems have been successfully applied in a number of medical domains. Nevertheless, the effective use of these systems often requires data preprocessing before applying a learning algorithm. This is especially important for multidimensional heterogeneous data represented by a large number of features of different types. Dimensionality reduction (DR) is one commonly applied approach. The goal of this paper is to study the impact of natural clustering (clustering according to expert domain knowledge) on DR for supervised learning (SL) in the area of antibiotic resistance. We compare several data-mining strategies that apply DR by means of feature extraction or feature selection w…
Arbiter Meta-Learning with Dynamic Selection of Classifiers and its Experimental Investigation
In data mining, the selection of an appropriate classifier to estimate the value of an unknown attribute for a new instance has an essential impact on the quality of the classification result. Recently, promising approaches using parallel and distributed computing have been presented. In this paper, we consider an approach that uses classifiers trained on a number of data subsets in parallel, as in the arbiter meta-learning technique. We suggest that information be collected during the learning phase about the performance of the included base classifiers and arbiters, and that this information be used during the application phase to select the best classifier dynamically. We evaluate our techn…
Dynamic integration of multiple data mining techniques in a knowledge discovery management system
One of the most important directions in the improvement of data mining and knowledge discovery is the integration of multiple classification techniques into an ensemble of classifiers. An integration technique should be able to estimate and select the most appropriate component classifiers from the ensemble. We present two variations of an advanced dynamic integration technique with two distance metrics. The technique is a variation of the stacked generalization method, with the assumption that each of the component classifiers is the best one inside a certain subarea of the whole domain area. Our technique includes two phases: the learning phase and the application phase. During the learnin…
The impact of sample reduction on PCA-based feature extraction for supervised learning
"The curse of dimensionality" is pertinent to many learning algorithms, and it denotes the drastic raise of computational complexity and classification error in high dimensions. In this paper, different feature extraction (FE) techniques are analyzed as means of dimensionality reduction, and constructive induction with respect to the performance of Naive Bayes classifier. When a data set contains a large number of instances, some sampling approach is applied to address the computational complexity of FE and classification processes. The main goal of this paper is to show the impact of sample reduction on the process of FE for supervised learning. In our study we analyzed the conventional PC…
Bagging and Boosting with Dynamic Integration of Classifiers
One approach in classification tasks is to use machine learning techniques to derive classifiers from learning instances. The co-operation of several base classifiers as a decision committee has succeeded in reducing classification error. The main current decision committee learning approaches, boosting and bagging, use resampling of the training set and can be used with different machine learning techniques that derive base classifiers. Boosting uses a kind of weighted voting and bagging uses equal-weight voting as the combining method. Neither takes into account the local aspects that the base classifiers may have inside the problem space. We have proposed a dynamic integration tech…
Dynamic Integration with Random Forests
Random Forests are a successful ensemble prediction technique that combines two sources of randomness to generate base decision trees: bootstrapping the instances for each tree and considering a random subset of features at each node. Breiman, in his introductory paper on Random Forests, claims that they are more robust than boosting with respect to overfitting noise, and are able to compete with boosting in terms of predictive performance. Multiple recently published empirical studies conducted in various application domains confirm these claims. Random Forests use simple majority voting to combine the predictions of the trees. However, it is clear that each decision tree in a random forest may …
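The two sources of randomness mentioned above map directly onto familiar parameters; for example, in scikit-learn's RandomForestClassifier (parameter values are illustrative):

    # The two sources of randomness in a Random Forest, made explicit.
    from sklearn.ensemble import RandomForestClassifier

    rf = RandomForestClassifier(
        n_estimators=100,
        bootstrap=True,       # each tree sees a bootstrap sample of the instances
        max_features="sqrt",  # each split considers a random subset of features
        random_state=0,
    )
    # rf.fit(X_train, y_train); rf.predict(X_test) combines the trees
    # by simple majority voting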
Local dimensionality reduction within natural clusters for medical data analysis
Inductive learning systems have been successfully applied in a number of medical domains. Nevertheless, the effective use of these systems requires data preprocessing before applying a learning algorithm. This is especially important for multidimensional heterogeneous data represented by a large number of features of different types. Dimensionality reduction is one commonly applied approach. The goal of this paper is to study the impact of natural clustering on dimensionality reduction for classification. We compare several data mining strategies that apply dimensionality reduction by means of feature extraction or feature selection for subsequent classification. We show experimentally on micr…
Effectiveness of local feature selection in ensemble learning for prediction of antimicrobial resistance
In the real world, concepts are often not stable but change over time. A typical example of this in the biomedical context is antibiotic resistance, where pathogen sensitivity may change over time as pathogen strains develop resistance to antibiotics that were previously effective. This problem, known as concept drift (CD), complicates the task of learning a robust model. Different ensemble learning (EL) approaches (which, instead of learning a single classifier, try to learn and maintain a set of classifiers over time) have been shown to perform reasonably well in the presence of concept drift. In this paper we study how much local feature selection (FS) can improve ensemble performance for da…
Knowledge management challenges in knowledge discovery systems
Current knowledge discovery systems are armed with many data mining techniques that can potentially be applied to a new problem. However, a system faces the challenge of selecting the most appropriate technique(s) for the problem at hand, since in a real domain area it is infeasible to perform a comparison of all applicable techniques. The main goal of this paper is to consider the limitations of data-driven approaches and to propose a knowledge-driven approach to enhance the use of multiple data-mining strategies in a knowledge discovery system. We introduce the concept of (meta-)knowledge management, which aims to organize a systematic process of (meta-)knowledge capture and refinement o…
Ensemble feature selection with the simple Bayesian classification
A popular method for creating an accurate classifier from a set of training data is to build several classifiers and then to combine their predictions. Ensembles of simple Bayesian classifiers have traditionally not been a focus of research. One way to generate an ensemble of accurate and diverse simple Bayesian classifiers is to use different feature subsets generated with the random subspace method. In this case, the ensemble consists of multiple classifiers constructed by randomly selecting feature subsets, that is, classifiers constructed in randomly chosen subspaces. In this paper, we present an algorithm for building ensembles of simple Bayesian classifiers in random sub…
Distance functions in dynamic integration of data mining techniques
One of the most important directions in the improvement of data mining and knowledge discovery is the integration of multiple data mining techniques. An integration method needs to be able either to evaluate and select the most appropriate data mining technique or to combine two or more techniques efficiently. A recent method for the dynamic integration of multiple data mining techniques is based on the assumption that each of the techniques is the best one inside a certain subarea of the whole domain area. This method uses an instance-based learning approach to collect information about the competence areas of the mining techniques and applies a distance function to…
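One concrete distance function of the kind such a method can apply is the Heterogeneous Euclidean-Overlap Metric (HEOM), which mixes range-normalized differences on numeric features with 0/1 overlap on categorical ones. Whether this corresponds to either of the metrics studied in the paper is an assumption; the sketch is for illustration only.

    # Heterogeneous Euclidean-Overlap Metric (HEOM), sketched.
    import numpy as np

    def heom(a, b, is_numeric, ranges):
        """a, b: 1-D feature vectors (object dtype for mixed types);
        is_numeric: boolean mask; ranges: max-min per feature (only the
        numeric entries are used)."""
        d = np.empty(len(a))
        num = is_numeric
        # range-normalized absolute difference on numeric features
        d[num] = np.abs(a[num] - b[num]) / np.where(ranges[num] == 0, 1, ranges[num])
        # overlap metric (0 if equal, 1 otherwise) on categorical features
        d[~num] = (a[~num] != b[~num]).astype(float)
        return float(np.sqrt(np.sum(d ** 2)))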
Handling local concept drift with dynamic integration of classifiers: domain of antibiotic resistance in nosocomial infections
In the real world, concepts and data distributions are often not stable but change with time. This problem, known as concept drift, complicates the task of learning a model from data and requires special approaches, different from commonly used techniques that treat arriving instances as equally important contributors to the target concept. Among the most popular and effective approaches to handling concept drift is ensemble learning, where a set of models built over different time periods is maintained and the best model is selected or the predictions of the models are combined. In this paper we consider the use of an ensemble integration technique that helps to better handle concept drift at t…
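A simple baseline of the "set of models built over different time periods" kind can be sketched as follows: one model per batch, the oldest members forgotten, predictions combined by a weighted vote. The global accuracy weighting, the Naive Bayes members, and the integer class labels are simplifying assumptions; the paper's dynamic integration replaces such global weights with local ones.

    # Windowed ensemble over time-ordered batches (sketch).
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    def windowed_ensemble(batches, max_members=10):
        """batches: list of (X, y) chunks in time order."""
        members = []
        for X, y in batches:
            members.append(GaussianNB().fit(X, y))
            members = members[-max_members:]      # forget the oldest models
        X_recent, y_recent = batches[-1]          # weight on the newest batch
        weights = np.array([np.mean(m.predict(X_recent) == y_recent)
                            for m in members])
        return members, weights / weights.sum()

    def weighted_vote(members, weights, X):
        # weighted vote over class predictions (labels assumed 0..C-1)
        votes = np.stack([m.predict(X) for m in members])
        scores = np.zeros((X.shape[0], int(votes.max()) + 1))
        for w, v in zip(weights, votes):
            scores[np.arange(X.shape[0]), v] += w
        return scores.argmax(axis=1)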
Dynamic Integration of Classifiers for Tracking Concept Drift in Antibiotic Resistance Data
In the real world, concepts are often not stable but change with time. A typical example of this in the medical context is antibiotic resistance, where pathogen sensitivity may change over time as new pathogen strains develop resistance to antibiotics which were previously effective. This problem, known as concept drift, complicates the task of learning a model from medical data and requires special approaches, different from commonly used techniques that treat arriving instances as equally important contributors to the final concept. The underlying data distribution may change as well, making previously built models useless; this is known as virtual concept drift. These changes make regu…
A dynamic integration algorithm for an ensemble of classifiers
Numerous data mining methods have recently been developed, and there is often a need to select the most appropriate method or methods. The method selection can be done statically or dynamically. Dynamic selection takes into account the characteristics of a new instance and usually results in higher classification accuracy. We discuss a dynamic integration algorithm for an ensemble of classifiers. Our algorithm is a new variation of the stacked generalization method and is based on the assumption that each base classifier is best inside certain subareas of the application domain. The algorithm includes two main phases: a learning phase, which collects information about the qua…
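The two phases can be sketched as follows, under simplifying assumptions (cross-validated correctness records, unweighted neighbourhoods, selection of a single member); details such as distance weighting are omitted, so this illustrates the structure rather than the paper's exact algorithm.

    # Two-phase dynamic integration, sketched.
    import numpy as np
    from sklearn.model_selection import cross_val_predict
    from sklearn.neighbors import NearestNeighbors

    def learning_phase(classifiers, X, y, cv=5, k=7):
        # correctness[i, j] = 1.0 iff classifier j got instance i right
        correctness = np.column_stack([
            cross_val_predict(clf, X, y, cv=cv) == y for clf in classifiers
        ]).astype(float)
        fitted = [clf.fit(X, y) for clf in classifiers]
        return fitted, NearestNeighbors(n_neighbors=k).fit(X), correctness

    def application_phase(fitted, nn, correctness, x_new):
        idx = nn.kneighbors([x_new], return_distance=False)[0]
        local = correctness[idx].mean(axis=0)   # local accuracy per member
        return fitted[int(np.argmax(local))].predict([x_new])[0]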
Dynamic integration of data mining methods in knowledge discovery systems
Search strategies for ensemble feature selection in medical diagnostics
The goal of this paper is to propose, evaluate, and compare four search strategies for ensemble feature selection, and to consider their application to medical diagnostics, with a focus on the problem of the classification of acute abdominal pain. Ensembles of learnt models constitute one of the main current directions in machine learning and data mining. Ensembles allow us to achieve higher accuracy, sensitivity, and specificity, which are often not achievable with single models. One technique, which has proved effective for ensemble construction, is feature selection. Lately, several strategies for ensemble feature selection have been proposed, including random subspacing, hill-climbing-based se…
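Of the strategies listed, hill-climbing-based search is the easiest to sketch: flip one feature at a time and keep the flip whenever an evaluation function improves. The sketch below uses plain cross-validated accuracy of a Naive Bayes learner as that function, which is a simplifying assumption; the paper's strategies evaluate subsets in the context of the whole ensemble.

    # Hill-climbing refinement of a single feature subset (sketch).
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    def hill_climb(X, y, seed=0):
        rng = np.random.default_rng(seed)

        def score(mask):
            if not mask.any():
                return 0.0
            return cross_val_score(GaussianNB(), X[:, mask], y, cv=3).mean()

        mask = rng.random(X.shape[1]) < 0.5
        best, improved = score(mask), True
        while improved:
            improved = False
            for j in range(X.shape[1]):
                mask[j] = ~mask[j]             # try flipping feature j
                s = score(mask)
                if s > best:
                    best, improved = s, True   # keep the flip
                else:
                    mask[j] = ~mask[j]         # revert
        return mask, best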
Ensemble Feature Selection Based on the Contextual Merit
Recent research has proved the benefits of using ensembles of classifiers for classification problems. Ensembles constructed by machine learning methods that manipulate the training set are used to create diverse sets of accurate classifiers. Different feature selection techniques, based on applying different heuristics for generating base classifiers, can be adjusted to specific domain characteristics. In this paper we consider and experiment with the contextual feature merit measure as a feature selection heuristic. We use the diversity of an ensemble as the evaluation function in our new algorithm with a refinement cycle. We have evaluated the algorithm on seven data sets from the UCI Repository. The experiment…
Diversity in Ensemble Feature Selection
Ensembles of learnt models constitute one of the main current directions in machine learning and data mining. Ensembles allow us to achieve higher accuracy, which is often not achievable with single models. It has been shown theoretically and experimentally that, in order for an ensemble to be effective, it should consist of high-accuracy base classifiers that have high diversity in their predictions. One technique, which has proved effective for constructing an ensemble of accurate and diverse base classifiers, is to use different feature subsets, or so-called ensemble feature selection. Many ensemble feature selection strategies incorporate diversity as a component of the fitness funct…
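A diversity component of the kind mentioned here is often quantified as the average pairwise disagreement between the base classifiers' predictions; the following sketch computes that measure (an illustrative, standard choice, not necessarily the one used in the paper).

    # Average pairwise disagreement as an ensemble diversity measure.
    import numpy as np
    from itertools import combinations

    def avg_pairwise_disagreement(predictions):
        """predictions: (n_classifiers, n_instances) array of class labels,
        with at least two classifiers."""
        pairs = list(combinations(range(len(predictions)), 2))
        return float(np.mean([
            np.mean(predictions[i] != predictions[j]) for i, j in pairs
        ]))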