Search results for "Cursive"
Showing 10 of 126 documents
Adaptive Feed-Forward Neural Network for Wind Power Delivery
2022
This paper describes a grid-connected wind energy conversion system. The interconnecting filter is a simple inductor with a series resistor that minimizes three-phase current Total Harmonic Distortion (THD). An online grid impedance estimation technique, formulated in the stationary reference frame, is proposed using the Recursive Least Squares (RLS) Estimator. An Adaptive Feedforward Neural (AFN) Controller, based on the inverse of the system, has also been developed to improve the performance of the current Proportional-Integral controller under dynamic conditions and to provide better DC-link voltage stability. The neural network weights are computed in real time using …
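The online estimation step this abstract mentions is standard recursive least squares with a forgetting factor. A minimal sketch, fitting a two-parameter linear model such as v = R·i + L·(di/dt) from streaming samples; the function name, signal model, and true values here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive-least-squares step with forgetting factor lam."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)   # gain vector
    err = y - float(phi.T @ theta)          # a-priori prediction error
    theta = theta + K * err                 # parameter update
    P = (P - K @ phi.T @ P) / lam           # covariance update
    return theta, P

# Hypothetical example: recover R = 0.5, L = 0.02 from noisy samples.
rng = np.random.default_rng(0)
theta = np.zeros((2, 1))
P = 1e3 * np.eye(2)
for _ in range(2000):
    phi = rng.normal(size=2)
    y = 0.5 * phi[0] + 0.02 * phi[1] + 1e-4 * rng.normal()
    theta, P = rls_update(theta, P, phi, y)
# theta.ravel() is now close to [0.5, 0.02]
```

The forgetting factor `lam < 1` discounts old samples, which is what lets such an estimator track a slowly varying grid impedance online.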
Time delay induced effects on control of linear systems under random excitation
2001
Recursive formulas for the statistics of the response of linear systems with time delay under Gaussian white-noise input are developed. Two alternative methods are presented to capture the time-delay effects. The first gives an approximate solution obtained by expanding the control force in a Taylor series. The second, available for the stationary solution (if it exists), yields the variance of the controlled system with time delay in analytical form. The loss of efficacy, in terms of the response statistics, is discussed in detail.
A fast and recursive algorithm for clustering large datasets with k-medians
2012
Clustering large samples of high-dimensional data with fast algorithms is an important challenge in computational statistics. Borrowing ideas from MacQueen (1967), who introduced a sequential version of the $k$-means algorithm, a new class of recursive stochastic gradient algorithms designed for the $k$-medians loss criterion is proposed. By their recursive nature, these algorithms are very fast and well suited to large samples of data that are allowed to arrive sequentially. It is proved that the stochastic gradient algorithm converges almost surely to the set of stationary points of the underlying loss criterion. Particular attention is paid to the averaged versions, which…
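A recursive stochastic-gradient scheme of the kind the abstract describes moves the nearest centre a decreasing step towards each new observation; because the $k$-medians loss uses unnormalized distances, the gradient step has unit norm. The following is a sketch under that reading, with an assumed step-size schedule and initialization, not the authors' implementation:

```python
import numpy as np

def online_kmedians(stream, k, gamma0=1.0):
    """Recursive stochastic-gradient k-medians: each new point pulls its
    nearest centre one step of size gamma_n along the unit direction."""
    it = iter(stream)
    # Assumed initialization: the first k points of the stream.
    centers = np.array([next(it) for _ in range(k)], dtype=float)
    for n, x in enumerate(it, start=1):
        j = np.argmin(np.linalg.norm(centers - x, axis=1))  # nearest centre
        d = x - centers[j]
        norm = np.linalg.norm(d)
        if norm > 0:
            gamma = gamma0 / n ** 0.66        # steps ~ c/n^a, a in (1/2, 1)
            centers[j] += gamma * d / norm    # unit-norm gradient step
    return centers
```

Each point is touched once and only the $k$ centres are stored, which is why such recursions scale to streams that cannot be held in memory.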
Fast Estimation of the Median Covariation Matrix with Application to Online Robust Principal Components Analysis
2017
The geometric median covariation matrix is a robust multivariate indicator of dispersion that extends without difficulty to functional data. We define estimators, based on recursive algorithms, that can be simply updated at each new observation and can rapidly handle large samples of high-dimensional data without storing all the data in memory. Asymptotic convergence properties of the recursive algorithms are studied under weak conditions. The principal components can also be computed online, and this approach can be useful for online outlier detection. A simulation study clearly shows that this robust indicat…
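The recursive estimation of the geometric median that underlies this line of work can be sketched as an averaged stochastic gradient recursion: the raw iterate takes a shrinking unit-norm step towards each new observation, and its running (Polyak-Ruppert) average is returned. The function name and step-size schedule below are illustrative assumptions:

```python
import numpy as np

def recursive_median(stream, gamma0=1.0):
    """Averaged stochastic-gradient estimate of the geometric median."""
    it = iter(stream)
    m = np.array(next(it), dtype=float)   # raw iterate
    avg = m.copy()                        # running average of the iterates
    for n, x in enumerate(it, start=1):
        d = x - m
        norm = np.linalg.norm(d)
        if norm > 0:
            m = m + (gamma0 / n ** 0.66) * d / norm  # unit-norm step
        avg = avg + (m - avg) / (n + 1)              # online averaging
    return avg
```

Because each observation contributes only a bounded, unit-norm step, gross outliers cannot drag the estimate far, which is the robustness property the abstract exploits.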
Is there a cost at encoding words with joined letters during visual word recognition?
2018
Abstract For simplicity, models of visual word recognition have focused on printed words composed of separated letters, thus overlooking the processing of cursive words. Manso de Zuniga, Humphreys, and Evett (1991) claimed that there is an early “cursive normalization” encoding stage when processing written words with joined letters. To test this claim, we conducted a lexical decision experiment in which words were presented with either separated or joined letters. To examine whether the cost of letter segmentation occurs early in processing, we also manipulated a factor (word frequency) that is posited to affect subsequent lexical processing. Results showed faster response times for the w…
Stochastic algorithms for robust statistics in high dimension
2016
This thesis focuses on stochastic algorithms in high dimension and their application in robust statistics. In what follows, the expression "high dimension" is used when the size of the studied sample is large or when the variables under consideration take values in high-dimensional (not necessarily finite-dimensional) spaces. To analyze such data, it is interesting to consider algorithms that are fast, do not require storing all the data, and allow the estimates to be updated easily. In large samples of high-dimensional data, outlier detection is often complicated. Nevertheless, these outliers, even if few in number, can strongly disturb simple indicators like the me…
Teaching How to Write Essays in Secondary Education: Didactic Sequences Analysis
2021
The development of the writing expression competence, and in particular the argumentative competence, has awoken interest across the different educational levels. The essay has been studied from different perspectives in Secondary Education; however, there are no studies on how textbooks teach the writing process at this level, where the mini-essay format is actually worked on more. Therefore, the main objective of this paper is to learn more about the examples of didactic sequences for teaching how to write essays/mini-essays in textbooks of the third and fourth year of Secondary Education. The methodology applied is qualitative content analysis and the gathering of information fro…
Probabilistic versus deterministic memory limited learning
1995
Random Tanglegram Partitions (Random TaPas): An Alexandrian Approach to the Cophylogenetic Gordian Knot
2018
Abstract Symbiosis is a key driver of evolutionary novelty and ecological diversity, but our understanding of how macroevolutionary processes give rise to extant symbiotic associations is still very incomplete. Cophylogenetic tools are used to assess the congruence between the phylogenies of two groups of organisms related by extant associations. If phylogenetic congruence is higher than expected by chance, we conclude that there is cophylogenetic signal in the system under study. However, how to quantify cophylogenetic signal is still an open issue. We present a novel approach, Random Tanglegram Partitions (Random TaPas), which applies a given global-fit method to random partial tanglegrams of …
Learning small programs with additional information
1997
This paper was inspired by [FBW 94]. An arbitrary upper bound on the size of some program for the target function suffices for learning some program for this function. In [FBW 94] it was discovered that if “learning” is understood as “identification in the limit,” then in some programming languages it is possible to learn a program of size not exceeding the bound, while in other programming languages this is not possible.