Search results for "generalization"
Showing 10 of 250 documents
On Inductive Generalization in Monadic First-Order Logic With Identity
1966
Publisher Summary The chapter examines the results obtained by means of a system when the relation of identity is used in addition to monadic predicates. The chapter compares the new system of inductive logic sketched by Jaakko Hintikka with Carnap's system. The main advantage of Hintikka's system is that it gives natural degrees of confirmation to inductive generalizations, whereas Carnap's confirmation function c* enables one to deal satisfactorily with singular inductive inference only. According to Carnap's system, general sentences that are not logically true receive non-negligible degrees of confirmation only if the evidence contains a large part of the individuals in the whole univer…
NeutroAlgebra is a Generalization of Partial Algebra
2020
In this paper we recall, improve, and extend several definitions, properties and applications of our previous 2019 research on NeutroAlgebras and AntiAlgebras (also called NeutroAlgebraic Structures and AntiAlgebraic Structures, respectively). Let <A> be an item (concept, attribute, idea, proposition, theory, etc.). Through the process of neutrosophication, we split the nonempty space we work on into three regions {two opposite ones corresponding to <A> and <antiA>, and one corresponding to the neutral (indeterminate) <neutA> (also denoted <neutroA>) between the opposites}, which may or may not be disjoint, depending on the application, but they are …
On ordered categories as a framework for fuzzification of algebraic and topological structures
2009
Using the framework of ordered categories, the paper considers a generalization of the fuzzification machinery of algebraic structures introduced by Rosenfeld, and provides a new approach to fuzzification of topological structures, which amounts to fuzzifying the underlying "set" of a structure in a suitably compatible way, leaving the structure itself crisp. The latter machinery allows the so-called "double fuzzification", i.e., a fuzzification of something that is already fuzzified.
A New Unsupervised Neural Network for Pattern Recognition with Spiking Neurons
2006
In this paper we propose a three-layered neural network for binary pattern recognition and memorization. Unlike the classic approach to pattern recognition, our network organizes itself in an unsupervised way to distinguish between different patterns or to recognize similar ones. If we present a binary input to the first layer, after some time steps the output of the net can be read in the third layer as one and only one neuron activating with a high firing rate; the middle layer acts as a generalization layer, i.e., similar patterns will have similar (or the same) representation in the middle layer. We used learning algorithms inspired by other works or by biological data to ac…
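The "generalization layer" idea above, where similar inputs converge onto the same internal representation, can be illustrated with plain competitive (winner-take-all) learning. This is only a rough rate-based sketch, not the spiking model of the paper; the function names and parameter values are illustrative assumptions.

```python
import numpy as np

def competitive_learning(patterns, n_units=3, epochs=50, lr=0.1, seed=0):
    """Unsupervised winner-take-all learning on binary patterns.

    Each weight vector drifts toward the patterns it wins, so
    similar patterns tend to share a winning unit, mimicking a
    generalization layer (illustrative analogue, not the paper's model).
    """
    rng = np.random.default_rng(seed)
    W = rng.random((n_units, patterns.shape[1]))
    for _ in range(epochs):
        for x in patterns:
            # the unit closest to the input wins and moves toward it
            winner = np.argmin(((W - x) ** 2).sum(axis=1))
            W[winner] += lr * (x - W[winner])
    return W

def represent(x, W):
    """Index of the winning unit: the pattern's internal representation."""
    return int(np.argmin(((W - x) ** 2).sum(axis=1)))
```

After training, `represent` plays the role of the middle layer: it maps each binary pattern to a single active unit.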
Assigning discounts in a marketing campaign by using reinforcement learning and neural networks
2009
In this work, reinforcement learning (RL) is used to find an optimal policy for a marketing campaign. The data exhibit complex state and action spaces. Two approaches are proposed to circumvent this problem. The first is based on the self-organizing map (SOM), which is used to aggregate states. The second uses a multilayer perceptron (MLP) to carry out a regression of the action-value function. The results indicate that both approaches can improve a targeted marketing campaign. Moreover, the SOM approach allows an intuitive interpretation of the results, and the MLP approach yields robust results with generalization capabilities.
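The SOM-based state aggregation described above can be sketched as follows: a small map is trained on raw continuous states, and each state is then replaced by the index of its best-matching unit, yielding a discrete state space for tabular RL. This is a minimal sketch under assumed hyperparameters (grid size, decay schedules), not the paper's implementation.

```python
import numpy as np

def train_som(states, grid=(4, 4), epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    """Train a small self-organizing map on raw state vectors."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h * w, states.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    n_steps = epochs * len(states)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(states):
            frac = t / n_steps
            lr = lr0 * (1.0 - frac)            # linearly decaying learning rate
            sigma = sigma0 * (1.0 - frac) + 1e-3  # shrinking neighborhood radius
            # best-matching unit for this state
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            # Gaussian neighborhood on the map grid pulls nearby units along
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            nbh = np.exp(-d2 / (2.0 * sigma ** 2))
            weights += lr * nbh[:, None] * (x - weights)
            t += 1
    return weights

def aggregate(states, weights):
    """Replace each raw state by the index of its best-matching unit."""
    d = ((states[:, None, :] - weights[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)
```

The discrete indices returned by `aggregate` can then serve as states for a standard tabular Q-learning update.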
Optimal Pruned K-Nearest Neighbors: OP-KNN Application to Financial Modeling
2008
The paper proposes a methodology called OP-KNN, which builds a single-hidden-layer feedforward neural network using nearest-neighbor neurons with extremely small computational time. The main strategy is to select the most relevant variables beforehand, then to build the model using KNN kernels. Multi-response sparse regression (MRSR) is used as a second step to rank each k-th nearest neighbor, and as a third step leave-one-out estimation is used to select the number of neighbors and to estimate the generalization performance. This new methodology is tested on a toy example and applied to financial modeling.
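The third step above, selecting the number of neighbors by leave-one-out estimation, can be sketched in isolation: for KNN, the leave-one-out prediction is cheap because it only requires excluding each point from its own neighbor list. This is a brute-force illustration of the idea, not the OP-KNN pipeline itself (MRSR ranking and variable selection are omitted); function names are illustrative.

```python
import numpy as np

def loo_knn_error(X, y, k):
    """Leave-one-out MSE of a k-nearest-neighbor regressor (brute force)."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    np.fill_diagonal(d, np.inf)          # exclude each point from its own neighbors
    idx = np.argsort(d, axis=1)[:, :k]   # k nearest neighbors of every point
    pred = y[idx].mean(axis=1)           # average the neighbors' targets
    return float(((pred - y) ** 2).mean())

def select_k(X, y, k_max=10):
    """Pick the neighbor count minimizing the leave-one-out error."""
    ks = range(1, min(k_max, len(X) - 1) + 1)
    errs = {k: loo_knn_error(X, y, k) for k in ks}
    return min(errs, key=errs.get)
```

The chosen `k` then fixes the model complexity, and the minimal leave-one-out error serves as an estimate of generalization performance.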
Enhancing Visuomotor Adaptation by Reducing Error Signals: Single-step (Aware) versus Multiple-step (Unaware) Exposure to Wedge Prisms
2007
Abstract Neglect patients exhibit both a lack of awareness for the spatial distortions imposed during visuomanual prism adaptation procedures, and exaggerated postadaptation negative after-effects. To better understand this unexpected adaptive capacity in brain-lesioned patients, we investigated the contribution of awareness for the optical shift to the development of prism adaptation. The lack of awareness found in neglect was simulated in a multiple-step group where healthy subjects remained unaware of the optical deviation because of its progressive stepwise increase from 2° to 10°. We contrasted this method with the classical single-step group in which subjects were aware of the visual …
Symmetry patterns in the (N, Delta) spectrum
2007
We review the role played by symmetry in the study of the low-lying baryon spectrum and comment on the difficulties that arise when trying to generalize the symmetry pattern to higher-energy states. We show that for the $(N,\Delta)$ part such a generalization is plausible, allowing the identification of spectral regularities and the prediction of as-yet unidentified resonances.
Learning to Navigate in the Gaussian Mixture Surface
2021
In recent years, deep learning models have achieved remarkable generalization capability on computer vision tasks, obtaining excellent results in fine-grained classification problems. Sophisticated approaches based on discriminative feature learning via patches have been proposed in the literature, boosting model performance and achieving state-of-the-art results on well-known datasets. The Cross-Entropy (CE) loss function is commonly used to enhance the discriminative power of the deep learned features, encouraging separability between the classes. However, observing the activation maps generated by these models in the hidden layers, we realize that many image regions with low discrimin…
An output-only stochastic parametric approach for the identification of linear and nonlinear structures under random base excitations: Advances and c…
2014
In this paper, a time-domain output-only Dynamic Identification approach for Civil Structures (DICS), first formulated some years ago, is reviewed and presented in a more generalized form. The approach in question, suitable for multi- and single-degree-of-freedom systems, is based on the statistical moments and on the correlation functions of the response to random base excitations. The solving equations are obtained by applying the Itô stochastic differential calculus to some functions of the response. In the previous version ([21] Cavaleri, 2006; [22] Benfratello et al., 2009), the DICS method was based on the use of two classes of models (Restricted Potential Models and Linear Mass Proport…