Search results for "computer.software_genre"
showing 10 of 3858 documents
Adaptive Continuous Feature Binarization for Tsetlin Machines Applied to Forecasting Dengue Incidences in the Philippines
2020
The Tsetlin Machine (TM) is a recent interpretable machine learning algorithm that requires relatively modest computational power, yet attains competitive accuracy in several benchmarks. TMs are inherently binary; however, many machine learning problems are continuous. While binarization of continuous data through brute-force thresholding has yielded promising accuracy, such an approach is computationally expensive and hinders extrapolation. In this paper, we address these limitations by standardizing features to support scale shifts in the transition from training data to real-world operation, as is typical in, e.g., forecasting. For scalability, we employ sampling to reduce the number of binariz…
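The standardize-then-threshold idea in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's adaptive algorithm: the threshold count `k`, `sample_size`, and evenly spaced quantiles are all assumptions.

```python
import random
import statistics

def quantile_thresholds(values, k=3, sample_size=100, seed=0):
    """Pick k binarization thresholds from a standardized sample of a
    continuous feature. Standardizing (zero mean, unit variance) before
    thresholding lets the same thresholds survive a scale shift between
    training data and operation; sampling keeps threshold selection
    cheap on large datasets. Illustrative sketch only."""
    rng = random.Random(seed)
    sample = rng.sample(values, min(sample_size, len(values)))
    mu = statistics.fmean(sample)
    sigma = statistics.pstdev(sample) or 1.0
    z = sorted((v - mu) / sigma for v in sample)
    # k evenly spaced quantiles of the standardized sample
    thresholds = [z[int((i + 1) * len(z) / (k + 1))] for i in range(k)]
    return thresholds, mu, sigma

def binarize(value, thresholds, mu, sigma):
    """Encode one continuous value as k bits for a TM: bit i is 1 if
    the standardized value exceeds threshold i."""
    zv = (value - mu) / sigma
    return [1 if zv > t else 0 for t in thresholds]
```

Because the thresholds live in standardized units, a value far outside the training range still maps to a sensible bit pattern (all ones or all zeros) instead of falling off a raw-value threshold grid.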
A Review of Kernel Methods in Remote Sensing Data Analysis
2011
Kernel methods have proven effective in the analysis of images of the Earth acquired by airborne and satellite sensors. Kernel methods provide a consistent and well-founded theoretical framework for developing nonlinear techniques and have useful properties when dealing with a low number of (potentially high-dimensional) training samples, the presence of heterogeneous multimodalities, and different noise sources in the data. These properties are particularly appropriate for remote sensing data analysis. In fact, kernel methods have improved the results of parametric linear methods and neural networks in applications such as natural resource control, detection and monitoring of anthropic infrastruc…
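The "nonlinear techniques from a well-founded framework" point rests on the kernel trick: replace inner products with a kernel function and a linear method becomes nonlinear. A minimal sketch with the common Gaussian (RBF) kernel, stdlib-only; the `gamma` value is an arbitrary assumption:

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian (RBF) kernel: k(x, y) = exp(-gamma * ||x - y||^2).
    Implicitly maps inputs into a high-dimensional feature space,
    which is what lets linear methods capture nonlinear structure."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

def gram_matrix(xs, gamma=0.5):
    """Pairwise kernel (Gram) matrix; kernel methods such as SVMs
    operate on this matrix rather than on the raw pixel/band values."""
    return [[rbf_kernel(a, b, gamma) for b in xs] for a in xs]
```

For remote sensing, `xs` would be per-pixel spectral vectors; the Gram matrix is symmetric with ones on the diagonal, since every sample is at distance zero from itself.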
Daily Peak Temperature Forecasting with Elman Neural Networks
2005
This work presents a forecaster based on an Elman artificial neural network trained with the resilient backpropagation algorithm for predicting the daily peak temperatures one day ahead. The available time series was recorded at Petrosino (TP), on the west coast of Sicily, Italy, and is composed of temperature (min and max values), humidity (min and max values), and rainfall values between January 1st, 1995 and May 14th, 2003. The performance and reliability of the proposed model were evaluated by a number of measures, comparing different neural models. Experimental results show very good prediction performance.
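What makes an Elman network suitable for a time series like this is its context layer: the hidden state at step t is fed back as extra input at step t+1. A minimal forward pass, with illustrative weight shapes (the paper's actual architecture and Rprop-trained weights are not given in the abstract):

```python
import math

def elman_forward(seq, W_in, W_ctx, W_out, b_h, b_o):
    """Run a minimal Elman network over a scalar time series and
    return the one-step-ahead prediction. The hidden state h acts as
    the 'context' units: it is carried across time steps, which is
    what distinguishes an Elman net from a plain MLP. Weights are
    placeholders; the paper trains them with resilient backprop."""
    h = [0.0] * len(b_h)  # context units start at zero
    for x in seq:
        h = [math.tanh(W_in[j] * x
                       + sum(W_ctx[j][k] * h[k] for k in range(len(h)))
                       + b_h[j])
             for j in range(len(b_h))]
    # linear readout from the final hidden state
    return sum(W_out[j] * h[j] for j in range(len(h))) + b_o
```

In the paper's setting, `seq` would hold recent daily min/max temperature, humidity, and rainfall readings (flattened to scalars here for brevity), and the output would be the next day's peak temperature.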
Integrating genomic binding site predictions using real-valued meta classifiers
2008
Currently the best algorithms for predicting transcription factor binding sites in DNA sequences are severely limited in accuracy. There is good reason to believe that predictions from different classes of algorithms could be used in conjunction to improve the quality of predictions. In this paper, we apply single-layer networks, rule sets, support vector machines and the Adaboost algorithm to predictions from 12 key real-valued algorithms. Furthermore, we use a ‘window’ of consecutive results as the input vector in order to contextualise the neighbouring results. We improve the classification result with the aid of under- and over-sampling techniques. We find that support vector machines …
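The 'window' construction described above can be sketched directly: for each sequence position, concatenate every base predictor's real-valued score over a window of neighbouring positions. Shapes and zero-padding at the edges are assumptions for illustration:

```python
def windowed_features(scores, window=3):
    """Build meta-classifier input vectors. `scores` is a list of
    per-predictor score lists (one float per sequence position).
    For each position i, concatenate each predictor's scores over a
    window centred on i, so the meta-classifier sees the neighbouring
    context, not just the point prediction. Edges are zero-padded
    (an assumption; the paper does not specify)."""
    n = len(scores[0])
    half = window // 2
    feats = []
    for i in range(n):
        row = []
        for pred in scores:
            for j in range(i - half, i + half + 1):
                row.append(pred[j] if 0 <= j < n else 0.0)
        feats.append(row)
    return feats
```

With 12 base predictors and a window of 3, each input vector has 36 entries; these vectors are what the SVM or Adaboost meta-classifier is trained on.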
An Application of Hybrid Models in Credit Scoring
2000
The predictive capability of parametric and non-parametric models in solving problems related to financial classification has been widely demonstrated in empirical research carried out in the financial field, particularly in problems like bond rating, bankruptcy prediction and credit scoring. However, it has recently been shown that a combination of different models generally reduces the prediction error, so that the best alternative to consider may not be a specific model but a combination of them. In this paper, we study hybrid systems based on the aggregation of individual (parametric and non-parametric) models. Our hybrids are built by using both parametric and non-parametric models as the sys…
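The aggregation idea is simple to sketch: treat each base model (parametric or non-parametric alike) as a scorer and combine their outputs. Weighted averaging is one common choice; the paper's actual aggregation scheme is not specified in the abstract, so this is an assumption:

```python
def aggregate(models, x, weights=None):
    """Combine the default-probability estimates of several base
    scorers by weighted averaging (uniform by default). Combining
    models generally reduces prediction error relative to relying on
    any single one. Each model is any callable x -> probability."""
    if weights is None:
        weights = [1.0 / len(models)] * len(models)
    return sum(w * m(x) for w, m in zip(weights, models))

def classify(models, x, threshold=0.5):
    """Credit-scoring decision: accept when the aggregated score
    clears a cutoff. The 0.5 threshold is illustrative."""
    return "good" if aggregate(models, x) >= threshold else "bad"
```

In a credit-scoring hybrid, one callable might wrap a logistic regression (parametric) and another a nearest-neighbour or neural model (non-parametric); the aggregator is indifferent to what sits behind each one.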
Neural Network-Based Process Analysis in Sport
2011
Processes in sport, such as motions or games, are influenced by communication, interaction, adaptation, and spontaneous decisions. Therefore, on the one hand, those processes are often fuzzy and unpredictable and so have not yet been dealt with extensively. On the other hand, most of those processes are structurally determined, at least roughly, by intention, rules, and context conditions and so can be classified by means of information patterns deduced from data models of the processes. Self-organizing neural networks of the Kohonen Feature Map (KFM) type help classify information patterns – either by mapping whole processes to corresponding neurons (see Perl & Lames, 2000; McGarry & Perl, 200…
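A Kohonen Feature Map classifies a pattern by finding its best-matching neuron and, during training, pulls that neuron and its map neighbours towards the input. A minimal single-step sketch of the KFM idea on a 1-D map, not the process-analysis system of the paper; learning rate and neighbourhood radius are arbitrary:

```python
import math

def som_update(weights, x, lr=0.5, radius=1.0):
    """One Kohonen/KFM training step on a 1-D map. Find the best-
    matching unit (BMU) for input vector x, then move every neuron's
    weight vector towards x, weighted by a Gaussian in map (index)
    distance from the BMU. Updates `weights` in place, returns the
    BMU index, which serves as the pattern's class."""
    def dist2(w):  # squared distance in input space
        return sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    bmu = min(range(len(weights)), key=lambda i: dist2(weights[i]))
    for i, w in enumerate(weights):
        h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
        weights[i] = [wi + lr * h * (xi - wi) for wi, xi in zip(w, x)]
    return bmu
```

Mapping a whole game or motion sequence to neurons then amounts to feeding its feature vectors through the trained map and reading off the sequence of BMU indices.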
State classification for autonomous gas sample taking using deep convolutional neural networks
2017
Despite recent rapid advances and successful large-scale application of deep Convolutional Neural Networks (CNNs) using image, video, sound, text and time-series data, their adoption within the oil and gas industry in particular has been sparse. In this paper, we initially present an overview of opportunities for deep CNN methods within the oil and gas industry, followed by details on a novel development where a deep CNN has been used for state classification of an autonomous gas sample taking procedure utilizing an industrial robot. The experimental results — using a deep CNN containing six layers — show accuracy levels exceeding 99%. In addition, the advantages of using parallel computing with GP…
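The core operation each of those six layers performs is a 2-D convolution followed by a nonlinearity. A stdlib-only sketch of one such layer (the paper's actual kernels, layer widths, and framework are not given in the abstract):

```python
def conv2d_relu(image, kernel):
    """'Valid' 2-D convolution plus ReLU, the building block of a CNN
    layer. `image` and `kernel` are lists of rows of floats. Stacking
    several such layers, then a classifier head, yields a state
    classifier like the one described. Illustrative sketch only."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            s = sum(kernel[a][b] * image[i + a][j + b]
                    for a in range(kh) for b in range(kw))
            row.append(max(0.0, s))  # ReLU activation
        out.append(row)
    return out
```

The convolution is also where the GPU parallelism mentioned at the end of the abstract pays off: every output position is independent and can be computed concurrently.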
Increasing sample efficiency in deep reinforcement learning using generative environment modelling
2020
Educational Software Based on Matlab GUIs for Neural Networks Courses
2016
Neural Networks (NN) are one of the most used machine learning techniques in different areas of knowledge. This has led to the emergence of a large number of courses on Neural Networks around the world, including in areas where the users of this technique do not have a lot of programming skills. Current software that implements these elements, such as Matlab®, has a number of important limitations in the teaching field. In some cases, the implementation of an MLP requires a thorough knowledge of the software and of the instructions that train and validate these systems. In other cases, the architecture of the model is fixed and they do not allow an automatic sweep of the parameters that determine the a…
Classical Training Methods
2006
This chapter reviews classical training methods for multilayer neural networks. These methods are widely used for classification and function modelling tasks. Nevertheless, they show a number of flaws or drawbacks that should be addressed in the development of such systems. They work by searching for the minimum of an error function, which defines the optimal behaviour of the neural network. Different standard problems are used to show the capabilities of these models; in particular, we have benchmarked the algorithms in a nonlinear classification problem and in three function modelling problems.
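The "search for the minimum of an error function" that classical training reduces to can be shown in a few lines: repeatedly step the weights against the gradient of the error. Backpropagation is exactly this scheme, with the gradient computed layer by layer; the 1-D quadratic error here is purely illustrative:

```python
def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Minimise an error function by following the negative gradient.
    `grad` maps weights to the error gradient at those weights; for a
    multilayer network, backpropagation supplies this function. The
    learning rate `lr` and step count are illustrative choices."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# Example: minimise E(w) = (w - 3)^2, whose gradient is 2*(w - 3).
w_star = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
```

The drawbacks the chapter alludes to show up directly in this loop: a poorly chosen `lr` diverges or crawls, and a non-convex error surface can trap the iterate in a local minimum.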