Search results for "Nonparametric statistics"
Showing 10 of 80 documents
The impact of sample reduction on PCA-based feature extraction for supervised learning
2006
"The curse of dimensionality" is pertinent to many learning algorithms, and it denotes the drastic rise of computational complexity and classification error in high dimensions. In this paper, different feature extraction (FE) techniques are analyzed as means of dimensionality reduction and constructive induction, with respect to the performance of the Naive Bayes classifier. When a data set contains a large number of instances, a sampling approach is applied to address the computational complexity of the FE and classification processes. The main goal of this paper is to show the impact of sample reduction on the process of FE for supervised learning. In our study we analyzed the conventional PC…
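The pipeline the abstract describes — reduce a large training set by sampling, fit PCA on the subsample, then classify with Naive Bayes in the reduced space — can be sketched as follows. This is a minimal illustration, not the paper's actual experimental setup; the dataset, subsample fraction, and number of components are arbitrary choices:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Stand-in dataset (the paper's benchmarks are not specified here)
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Sample reduction: fit PCA on a random quarter of the training instances
rng = np.random.default_rng(0)
idx = rng.choice(len(X_train), size=len(X_train) // 4, replace=False)
pca = PCA(n_components=20).fit(X_train[idx])

# Naive Bayes classification in the PCA-reduced feature space
clf = GaussianNB().fit(pca.transform(X_train), y_train)
accuracy = clf.score(pca.transform(X_test), y_test)
```

Comparing `accuracy` across subsample sizes is the kind of experiment the paper's question suggests: how much does fitting PCA on fewer instances hurt downstream classification?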
Response Determination Criteria for ELISPOT: Toward a Standard that Can Be Applied Across Laboratories
2011
ELISPOT assay readout is often dichotomized into positive or negative responses according to prespecified criteria. However, these criteria can vary widely across institutions. The adoption of a common response criterion is a key step toward cross-laboratory comparability. This chapter describes the two main approaches to response determination, identifying the strengths and limitations of each. Nonparametric statistical tests and consideration of data quality are recommended, and instructions are provided for their ready implementation by nonstatisticians and statisticians alike.
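One common form of the nonparametric approach to response determination is a rank test comparing antigen-stimulated wells against control wells. The sketch below uses a one-sided Mann–Whitney U test with hypothetical spot counts; the specific test, well counts, and significance threshold are illustrative assumptions, not the chapter's prescribed criterion:

```python
from scipy.stats import mannwhitneyu

# Hypothetical spot counts from replicate wells (illustrative numbers)
antigen_wells = [58, 64, 71, 66]  # antigen-stimulated
control_wells = [10, 12, 9, 11]   # unstimulated / medium-only

# One-sided rank test: are antigen counts stochastically larger?
stat, p_value = mannwhitneyu(antigen_wells, control_wells,
                             alternative="greater")
positive = p_value < 0.05  # dichotomized readout
```

With four replicates per condition and no overlap between groups, the exact one-sided p-value is 1/70, so the response is called positive under this (assumed) threshold.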
Feature selection using support vector machines and bootstrap methods for ventricular fibrillation detection
2012
Early detection of ventricular fibrillation (VF) is crucial for the success of defibrillation therapy in automatic devices. A large number of detectors have been proposed based on temporal, spectral, and time-frequency parameters extracted from the surface electrocardiogram (ECG), but always with limited performance. The combination of ECG parameters from different domains (time, frequency, and time-frequency) using machine learning algorithms has been used to improve detection efficiency. However, the potential use of a large number of parameters in machine learning schemes has raised the need for efficient feature selection (FS) procedures. In this study, we propose a novel FS…
Continuity correction of Pearson’s chi-square test in 2×2 contingency tables: A mini-review on recent development
2022
Pearson’s chi-square test is one of the most widely used nonparametric tests in biomedicine and the social sciences, but it introduces an error for 2×2 contingency tables, because a discrete probability distribution is approximated with a continuous one. The continuity correction of Pearson’s chi-square test was first introduced by Yates (1934). Unfortunately, Yates’s correction may overcorrect the p-value, which can lead to an overly conservative result. Therefore, many authors have introduced variants of Pearson’s chi-square statistic as alternative continuity corrections to Yates’s correction. The goal of this paper is to describe the most recent continuity correctio…
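The conservative effect of Yates's correction is easy to demonstrate numerically. The sketch below runs Pearson's chi-square test on a small 2×2 table (the counts are made up for illustration) with and without the correction; the corrected statistic is smaller and its p-value larger:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Small 2x2 table (illustrative counts) where the correction matters
table = np.array([[8, 2],
                  [3, 9]])

# Pearson's chi-square without and with Yates's continuity correction
chi2_plain, p_plain, _, _ = chi2_contingency(table, correction=False)
chi2_yates, p_yates, _, _ = chi2_contingency(table, correction=True)
```

Here the uncorrected statistic is 6.6, while subtracting 0.5 from each |observed − expected| deviation (Yates's rule) shrinks it, pushing the p-value upward — exactly the overcorrection the review discusses.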
Measuring Spatiotemporal Dependencies in Bivariate Temporal Random Sets with Applications to Cell Biology
2008
Analyzing spatiotemporal dependencies between different types of events is highly relevant to many biological phenomena (e.g., signaling and trafficking), especially as advances in probes and microscopy have facilitated the imaging of dynamic processes in living cells. For many types of events, the segmented areas can overlap spatially and temporally, forming random clumps. In this paper, we model the binary image sequences of two different event types as a realization of a bivariate temporal random set and propose a nonparametric approach to quantify spatial and spatiotemporal interrelations using the pair correlation, cross-covariance, and the Ripley K functions. Based on these summary st…
Parameter Rating by Diffusion Gradient
2014
Anomaly detection is a central task in high-dimensional data analysis. It can be performed by using dimensionality reduction methods to obtain a low-dimensional representation of the data, which reveals the geometry and the patterns that exist and govern it. Usually, anomaly detection methods classify high-dimensional vectors that represent data points as either normal or abnormal. Revealing the parameters (i.e., features) that cause detected abnormal behaviors is critical in many applications. However, this problem is not addressed by recent anomaly-detection methods and, specifically, by nonparametric methods, which are based on feature-free analysis of the data. In this chapter, we provi…
An Analysis of Earthquakes Clustering Based on a Second-Order Diagnostic Approach
2009
A diagnostic method for space–time point processes is introduced here and applied to seismic data from a fixed area of Japan. Nonparametric methods are used to estimate the intensity function of a particular space–time point process, and, on the basis of the proposed diagnostic method, second-order features of the data are analyzed. This approach seems useful for interpreting space–time variations of the observed seismic activity and for focusing on its clustering features.
Positive solutions for singular (p, 2)-equations
2019
We consider a nonlinear nonparametric Dirichlet problem driven by the sum of a p-Laplacian and of a Laplacian (a (p, 2)-equation) and a reaction which involves a singular term and a $(p-1)$-superlinear perturbation. Using variational tools and suitable truncation and comparison techniques, we show that the problem has two positive smooth solutions.
Decomposing changes in the conditional variance of GDP over time
2017
A well-established fact in the growth empirics literature is the increasing (unconditional) variation in output per capita across countries. We propose a nonparametric decomposition of the conditional variation of output per capita across countries to capture the different channels over which the variation might be increasing. We find that OECD countries have experienced diminishing conditional variation while other regions have experienced increasing conditional variation. Our decomposition suggests that most of these changes in the conditional variance of output are due to unobserved factors not accounted for by the traditional growth determinants. In addition, we show that these facto…
Residual-based block bootstrap for cointegration testing
2010
We propose a new testing procedure to determine the rank of cointegration. This new method is based on a nonparametric resampling procedure, the so-called Residual-Based Block Bootstrap (RBB), developed by Paparoditis and Politis (2003) in the context of unit root testing. Through Monte Carlo experiments we show that, in small samples, the RBB cointegration test has good power properties relative to two other well-known tests for cointegration: the Augmented Dickey–Fuller (ADF) test, applied to the residuals of a cointegrating regression, and Johansen's maximum eigenvalue test. Likewise, this article looks at the influence played by the correlation of the ‘X’ variables …
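The core mechanism behind block-bootstrap methods of this kind — resampling contiguous blocks of residuals so that short-range serial dependence is preserved in each pseudo-sample — can be sketched as follows. This is a generic moving-block bootstrap of a residual series, not Paparoditis and Politis's RBB procedure itself; the AR(1) residuals and block length are illustrative assumptions:

```python
import numpy as np

def block_bootstrap(residuals, block_length, rng):
    """Resample a series by concatenating randomly chosen contiguous
    blocks (a moving-block bootstrap, sketching the idea behind RBB)."""
    n = len(residuals)
    n_blocks = int(np.ceil(n / block_length))
    starts = rng.integers(0, n - block_length + 1, size=n_blocks)
    blocks = [residuals[s:s + block_length] for s in starts]
    return np.concatenate(blocks)[:n]

rng = np.random.default_rng(42)

# Hypothetical residuals from a cointegrating regression (AR(1) noise)
e = np.zeros(200)
for t in range(1, 200):
    e[t] = 0.5 * e[t - 1] + rng.standard_normal()

resampled = block_bootstrap(e, block_length=10, rng=rng)
```

Repeating the resampling many times and recomputing the cointegration test statistic on each pseudo-sample yields the bootstrap distribution against which the observed statistic is compared.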