Search results for "Data mining"
Showing 10 of 907 documents
The effect of automated taxa identification errors on biological indices
2017
In benthic macroinvertebrate biomonitoring systems, the goal is to determine the status of ecosystems based on several biological indices. To increase cost-efficiency, computer-based taxa identification for image data has recently been developed. Taxa identification errors can, however, have strong effects on the indices and thus on the determination of the ecological status. In order to shift the biomonitoring process towards automated expert systems, we need a clear understanding of the bias caused by automation. In this paper, we examine eleven classification methods in the case of macroinvertebrate image data and show how their classification errors propagate into different biological…
SCCF Parameter and Similarity Measure Optimization and Evaluation
2019
Neighborhood-based Collaborative Filtering (CF) is one of the most successful and widely used recommendation approaches; however, it suffers from major flaws, especially in sparse environments. Traditional similarity measures used by neighborhood-based CF to find similar users or items are not suitable for sparse datasets. Sparse Subspace Clustering and common liking rate in CF (SCCF), a recently published study, proposed a tunable similarity measure oriented towards sparse datasets; however, its performance has room for improvement and requires further analysis and investigation. In this paper, we propose and evaluate the performance of a new tuning mechanism, using the Mean Absolute Error (MA…
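The Mean Absolute Error mentioned in this abstract is the standard accuracy metric for rating predictors. A minimal sketch, with illustrative ratings not taken from the paper:

```python
# Hedged sketch: Mean Absolute Error (MAE) as used to evaluate a
# collaborative-filtering rating predictor. The rating values below
# are illustrative, not data from the SCCF study.

def mean_absolute_error(actual, predicted):
    """MAE = (1/n) * sum(|r_i - r_hat_i|) over the rated items."""
    if len(actual) != len(predicted):
        raise ValueError("rating lists must have equal length")
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

actual = [4.0, 3.0, 5.0, 2.0]       # true user ratings
predicted = [3.5, 3.0, 4.0, 2.5]    # CF predictions
print(mean_absolute_error(actual, predicted))  # → 0.5
```

Lower MAE means the similarity measure found more informative neighbors, which is why it serves as the tuning objective here.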
Reestimating a minimum acceptable geocoding hit rate for conducting a spatial analysis
2019
Geocoding consists of converting a textual description of a location into coordinates. Hence, geocoding a dataset of events must be carried out before performing a spatial analysis of the data. ...
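The "hit rate" in this title is the fraction of records successfully resolved to coordinates. A minimal sketch, with a hypothetical record structure:

```python
# Hedged sketch: computing a geocoding hit rate, i.e. the share of
# records that received coordinates. The dict layout is an assumption
# for illustration, not the paper's data model.

def geocoding_hit_rate(records):
    """Fraction of records with non-null lat/lon after geocoding."""
    if not records:
        return 0.0
    hits = sum(1 for r in records
               if r.get("lat") is not None and r.get("lon") is not None)
    return hits / len(records)

events = [
    {"address": "12 Main St", "lat": 48.85, "lon": 2.35},
    {"address": "unknown place", "lat": None, "lon": None},
    {"address": "5 Oak Ave", "lat": 45.76, "lon": 4.84},
    {"address": "??", "lat": None, "lon": None},
]
print(geocoding_hit_rate(events))  # → 0.5
```

A spatial analysis run on only the geocoded subset is biased when the misses are not random, which is why a minimum acceptable hit rate matters.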
Large-scale random features for kernel regression
2015
Kernel methods constitute a family of powerful machine learning algorithms, which have found wide use in remote sensing and geosciences. However, kernel methods are still not widely adopted because of their high computational cost on large-scale problems, such as the inversion of radiative transfer models. This paper introduces the method of random kitchen sinks (RKS) for fast statistical retrieval of bio-geo-physical parameters. The RKS method makes it possible to approximate a kernel matrix with a set of random bases sampled from the Fourier domain. We extend their use to other bases, such as wavelets, stumps, and Walsh expansions. We show that kernel regression is now possible for data…
Revisitation of Nonorthogonal Spin Adaptation in Coupled Cluster Theory.
2015
The benefits of what is alternatively called a nonorthogonally spin-adapted, spin-free, or orbital representation of the coupled cluster equations are discussed relative to orthogonally spin-adapted, spin-orbital, and spin-integrated theories. In particular, specific linear combinations of the orbital cluster amplitudes, denoted spin-summed amplitudes, are shown to reduce the number of contractions that must be explicitly performed and to simplify the expressions and their derivation. The computational efficiency of the spin-summed approach is discussed and compared to the orthogonally spin-adapted and spin-integrated approaches. The spin-summed approach is shown to have significant computationa…
Hierarchical modeling for rare event detection and cell subset alignment across flow cytometry samples.
2013
Flow cytometry is the prototypical assay for multi-parameter single cell analysis, and is essential in vaccine and biomarker research for the enumeration of antigen-specific lymphocytes that are often found in extremely low frequencies (0.1% or less). Standard analysis of flow cytometry data relies on visual identification of cell subsets by experts, a process that is subjective and often difficult to reproduce. An alternative and more objective approach is the use of statistical models to identify cell subsets of interest in an automated fashion. Two specific challenges for automated analysis are to detect extremely low frequency event subsets without biasing the estimate by pre-processing…
Compression-based classification of biological sequences and structures via the Universal Similarity Metric: experimental assessment.
2007
Abstract Background Similarity of sequences is a key mathematical notion for classification and phylogenetic studies in biology. It is currently handled primarily using alignments. However, alignment methods seem inadequate for post-genomic studies since they do not scale well with data set size and seem to be confined to genomic and proteomic sequences. Therefore, alignment-free similarity measures are actively pursued. Among those, USM (Universal Similarity Metric) has gained prominence. It is based on the deep theory of Kolmogorov Complexity, and universality is its most striking novel feature. Since it can only be approximated via data compression, USM is a methodology rath…
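Because Kolmogorov complexity is uncomputable, the USM is approximated in practice with a real compressor; a widely used form is the Normalized Compression Distance. A minimal sketch using zlib, with synthetic "DNA" strings as illustrative input:

```python
# Hedged sketch: Normalized Compression Distance (NCD), a practical
# approximation of the USM, using zlib as the compressor. The random
# DNA-like sequences are illustrative, not biological data.
import random
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    near 0 for highly similar strings, near 1 for unrelated ones."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

rnd = random.Random(42)
s1 = bytes(rnd.choice(b"ACGT") for _ in range(2000))   # random "DNA"
s2 = bytearray(s1)
for i in rnd.sample(range(2000), 20):                  # ~1% point mutations
    s2[i] = rnd.choice(b"ACGT")
s2 = bytes(s2)
s3 = bytes(rnd.choice(b"ACGT") for _ in range(2000))   # unrelated sequence

print(ncd(s1, s2) < ncd(s1, s3))  # related sequences score closer → True
```

No alignment is computed anywhere, which is what lets this style of measure scale past genomic and proteomic sequences.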
Machine Learning Techniques for Intrusion Detection: A Comparative Analysis
2016
With the growth of the Internet, the world has transformed into a global market, with all monetary and business activities being carried out online. Being the most important resource of this developing landscape, it is a vulnerable target and hence needs to be secured against users with malicious intent. Since the Internet has no central surveillance mechanism, attackers, using varied and evolving hacking techniques, occasionally find a path to bypass a system's security; one such class of attacks is intrusion. An intrusion is the act of breaking into a system by compromising the security policies put in place. The techniq…
Combining conjunctive rule extraction with diffusion maps for network intrusion detection
2013
Network security and intrusion detection are important in the modern world where communication happens via information networks. Traditional signature-based intrusion detection methods cannot find previously unknown attacks. On the other hand, algorithms used for anomaly detection often have black box qualities that are difficult to understand for people who are not algorithm experts. Rule extraction methods create interpretable rule sets that act as classifiers. They have mostly been combined with already labeled data sets. This paper aims to combine unsupervised anomaly detection with rule extraction techniques to create an online anomaly detection framework. Unsupervised anomaly detectio…
Vibrational spectroscopy provides a green tool for multi-component analysis
2010
Abstract Based on the literature published in the past decade, we focus on the possibilities offered by vibrational-spectroscopy-based techniques to perform multi-component analysis of samples independently of their physical state. We discuss the main chemometric tools proposed for developing calibration models and solving problems derived from spectroscopic non-idealities (e.g., highly overlapped spectral bands or the presence of spectral non-linearity), and the benefits provided by vibrational-spectroscopy-based multi-component analysis in industry. Our main objective is to show that vibrational spectroscopy provides fast analytical methods that enable non-destructive analysis and permit, i…