AUTHOR
Betty Edelman
Experimental design and analysis for psychology
International audience; A complete course in experimental design and analysis for those students looking to build a working understanding of data collection and analysis in a research context. The authors' lively, entertaining writing style helps to engage and motivate students while they study these often challenging concepts and skills. A focus on examples and exercises throughout the text encourages the development of a proper understanding through hands-on learning. The development and use of definitional formulas throughout provides for increased understanding of statistical procedures and enables the serious student to continue to expand statistical knowledge. Inclusion of Monte Carlo sim…
A Widrow–Hoff Learning Rule for a Generalization of the Linear Auto-associator
Abstract A generalization of the linear auto-associator that allows for differential importance and nonindependence of both the stimuli and the units has been described previously by Abdi (1988). This model was shown to implement the general linear model of multivariate statistics. In this note, a proof is given that the Widrow–Hoff learning rule can be similarly generalized and that the weight matrix will converge to a generalized pseudo-inverse when the learning parameter is properly chosen. The value of the learning parameter is shown to be dependent only upon the (generalized) eigenvalues of the weight matrix and not upon the eigenvectors themselves. This proof provides a unified framew…
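In the unweighted special case, the rule described here reduces to the standard Widrow–Hoff (delta) rule for a linear auto-associator. A minimal numpy sketch, using hypothetical random stimuli rather than data from the paper, illustrates the convergence claim: with the learning parameter chosen below 2 / λmax of the stimulus cross-product matrix, the weight matrix converges to X X⁺, the projector onto the span of the stimuli (the pseudo-inverse-based solution; the generalized version of the paper replaces this with a generalized pseudo-inverse).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))     # 3 hypothetical stimuli as 5-dim columns

# Widrow-Hoff (delta) rule, batch form over all stimuli:
#   W <- W + eta * (X - W X) X^T
# Convergence requires 0 < eta < 2 / lambda_max(X X^T); note that the
# bound depends only on the eigenvalues, not the eigenvectors.
lam_max = np.linalg.eigvalsh(X @ X.T).max()
eta = 1.0 / lam_max

W = np.zeros((5, 5))
for _ in range(10000):
    W += eta * (X - W @ X) @ X.T

# In the unweighted case W converges to X X^+, the orthogonal
# projector onto the span of the stimuli.
P = X @ np.linalg.pinv(X)
```

Because the error shrinks by a factor of (1 − η λ) per step along each eigendirection, any η in (0, 2/λmax) yields geometric convergence.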
DISTATIS: The Analysis of Multiple Distance Matrices
In this paper we present a generalization of classical multidimensional scaling called DISTATIS, a new method for comparing algorithms whose outputs consist of distance matrices computed on the same set of objects. The method first evaluates the similarity between algorithms using a coefficient called the RV coefficient. From this analysis, a compromise matrix is computed that represents the best aggregate of the original matrices. In order to evaluate the differences between algorithms, the original distance matrices are then projected onto the compromise. We illustrate this method with a "toy example" in which four different "algorithms" (two computer programs …
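The core steps of the method can be sketched in a few lines of numpy. This is a simplified illustration on hypothetical random distance matrices, omitting the normalization details of the full method: each distance matrix is converted to a double-centered cross-product matrix, the RV coefficients between those matrices are collected, and the compromise is their weighted sum with weights taken from the first eigenvector of the RV matrix.

```python
import numpy as np

def cross_product_from_distance(D):
    """Double-centered cross-product matrix S = -1/2 J D J (D holds squared distances)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return -0.5 * J @ D @ J

def rv(S1, S2):
    """RV coefficient: a matrix analogue of a squared correlation."""
    return np.trace(S1 @ S2) / np.sqrt(np.trace(S1 @ S1) * np.trace(S2 @ S2))

# hypothetical "algorithms": three squared-distance matrices on the same 4 objects
rng = np.random.default_rng(1)
mats = []
for _ in range(3):
    pts = rng.random((4, 2))
    D = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    mats.append(cross_product_from_distance(D))

# similarity between algorithms, as an RV matrix
C = np.array([[rv(a, b) for b in mats] for a in mats])

# weights from the first eigenvector of C, rescaled to sum to one
vals, vecs = np.linalg.eigh(C)
w = np.abs(vecs[:, -1])
w /= w.sum()

# compromise: the weighted aggregate of the cross-product matrices
compromise = sum(wi * Si for wi, Si in zip(w, mats))
```

The first eigenvector of the RV matrix gives each algorithm a weight proportional to its agreement with the others, so the compromise emphasizes the consensus structure.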
What represents a face? A computational approach for the integration of physiological and psychological data.
Empirical studies of face recognition suggest that faces might be stored in memory by means of a few canonical representations. The nature of these canonical representations is, however, unclear. Although psychological data show a three-quarter-view advantage, physiological studies suggest profile and frontal views are stored in memory. A computational approach to reconciling these findings is proposed. The patterns of results obtained when different views, or combinations of views, are used as the internal representation of a two-stage identification network consisting of an autoassociative memory followed by a radial-basis-function network are compared. Results show that (i) a frontal and a…
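The first stage of such a network can be sketched directly: a linear auto-associative memory built from a set of stored view vectors acts as a projector onto their span, and the quality of a probe's reconstruction measures how well the stored views account for it. The numbers below are hypothetical random stand-ins, not the face data of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical stand-ins for stored view vectors (e.g., frontal and
# profile views of six faces), 50 "pixels" each, stacked as columns
stored = rng.standard_normal((50, 12))

# linear auto-associative memory: the weight matrix is the orthogonal
# projector onto the span of the stored views
W = stored @ np.linalg.pinv(stored)

def familiarity(x):
    """Cosine between a probe and its reconstruction by the memory."""
    recon = W @ x
    return float(recon @ x / (np.linalg.norm(recon) * np.linalg.norm(x)))

old_view = stored[:, 0]            # a stored view: reconstructed exactly
new_view = rng.standard_normal(50) # a novel view: only partially reconstructed
```

A stored view is reconstructed perfectly (familiarity 1), while a novel view is only partly captured; in the full model this reconstruction feeds the radial-basis-function identification stage.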
Introduction
Principal Component and Neural Network Analyses of Face Images: What Can Be Generalized in Gender Classification?
We present an overview of the major findings of the principal component analysis (PCA) approach to facial analysis. In a neural network or connectionist framework, this approach is known as the linear autoassociator approach. Faces are represented as a weighted sum of macrofeatures (eigenvectors or eigenfaces) extracted from a cross-product matrix of face images. Using gender categorization as an illustration, we analyze the robustness of this type of facial representation. We show that eigenvectors representing general categorical information can be estimated using a very small set of faces and that the information they convey is generalizable to new faces of the same population and to a l…
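The eigenface representation described here can be sketched with the SVD of a centered image matrix, which yields the same eigenvectors as the cross-product matrix. The "faces" below are hypothetical random stand-ins for flattened images, used only to show the mechanics of extracting macrofeatures and representing each face as a weighted sum of them.

```python
import numpy as np

rng = np.random.default_rng(2)
# hypothetical stand-ins for face images: 40 faces flattened to 100 "pixels"
faces = rng.standard_normal((40, 100))

# eigenfaces: eigenvectors of the cross-product matrix of the centered
# images, obtained here via the SVD of the centered data matrix
mean_face = faces.mean(axis=0)
Xc = faces - mean_face
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
eigenfaces = Vt[:10]               # keep the 10 leading macrofeatures

# each face is represented as a weighted sum of the macrofeatures;
# the weights are the projections onto the eigenfaces
weights = Xc @ eigenfaces.T
reconstruction = weights @ eigenfaces + mean_face
```

Category-level information such as gender tends to load on a few leading eigenvectors, which is why, as the paper argues, a small training set can suffice to estimate them.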
Sex Classification of Face Areas
Human subjects and an artificial neural network, composed of an autoassociative memory and a perceptron, gender-classified the same 160 frontal face images (80 male and 80 female). All 160 face images were presented under three conditions: (1) the full face image with the hair cropped, (2) the top portion only of the Condition 1 image, and (3) the bottom portion only of the Condition 1 image. Predictions from simulations using Condition 1 stimuli for training and testing novel stimuli in Conditions 1, 2, and 3 were compared to human subject performance. Although the network showed a fair ability to generalize learning to new stimuli under the three conditions, performing from 66 to 78% correctly on novel fa…
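The two-stage architecture (autoassociative memory followed by a perceptron) can be sketched as below. The data are hypothetical random stand-ins with toy labels, and the sketch trains and tests on a single condition, so it does not reproduce the study's cross-condition generalization design.

```python
import numpy as np

rng = np.random.default_rng(4)
# hypothetical stand-ins for cropped face images: 20 faces of 60 "pixels"
faces = rng.standard_normal((20, 60))
sex = np.where(np.arange(20) < 10, 1, -1)  # +1 = male, -1 = female (toy labels)

# stage 1: auto-associative memory built from the training faces;
# each face is passed through the memory before classification
W = faces.T @ np.linalg.pinv(faces.T)
coded = (W @ faces.T).T

# stage 2: a perceptron learns the sex classification from the coded faces
w = np.zeros(60)
b = 0.0
for _ in range(1000):
    errors = 0
    for x, t in zip(coded, sex):
        if np.sign(x @ w + b) != t:   # misclassified: nudge toward target
            w += t * x
            b += t
            errors += 1
    if errors == 0:                   # converged on the training set
        break
```

Because the training faces span a low-dimensional subspace relative to the pixel dimension, the data are linearly separable and the perceptron converges on the training set; generalization to cropped or novel faces is the harder question the study addresses.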