Search results for "NEURAL NETWORK"

Showing 10 of 1,385 documents

On the Computational Complexity of Binary and Analog Symmetric Hopfield Nets

2000

We investigate the computational properties of finite binary- and analog-state discrete-time symmetric Hopfield nets. For binary networks, we obtain a simulation of convergent asymmetric networks by symmetric networks with only a linear increase in network size and computation time. Then we analyze the convergence time of Hopfield nets in terms of the length of their bit representations. Here we construct an analog symmetric network whose convergence time exceeds the convergence time of any binary Hopfield net with the same representation length. Further, we prove that the MIN ENERGY problem for analog Hopfield nets is NP-hard and provide a polynomial time approximation algorithm for this p…
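The MIN ENERGY problem mentioned above refers to minimizing the standard Hopfield energy function. A minimal sketch of that energy and one asynchronous update step, for a binary symmetric net with ±1 states (illustrative only, not the paper's construction):

```python
# Energy of a binary symmetric Hopfield net and one asynchronous update.
# W is symmetric with zero diagonal; states are +/-1. Illustrative sketch.

def energy(W, theta, s):
    """E(s) = -1/2 * sum_ij W[i][j] s_i s_j + sum_i theta[i] s_i."""
    n = len(s)
    quad = sum(W[i][j] * s[i] * s[j] for i in range(n) for j in range(n))
    return -0.5 * quad + sum(theta[i] * s[i] for i in range(n))

def update(W, theta, s, i):
    """Asynchronous update of unit i; never increases the energy
    when W is symmetric with zero diagonal."""
    h = sum(W[i][j] * s[j] for j in range(len(s))) - theta[i]
    s[i] = 1 if h >= 0 else -1
    return s

# Tiny example: two mutually excitatory units settle into agreement.
W = [[0, 1], [1, 0]]
theta = [0, 0]
s = [1, -1]
e0 = energy(W, theta, s)          # disagreeing state has energy +1
s = update(W, theta, s, 1)        # unit 1 flips to match unit 0
assert energy(W, theta, s) <= e0  # energy is non-increasing
```

The symmetry requirement is what makes the energy monotonically non-increasing under asynchronous updates, which is why simulating asymmetric networks by symmetric ones (as in the paper) is a non-trivial result.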

Computational complexity theory; computation; binary number; Hopfield network; Turing machine; recurrent neural network; convergence; time complexity; algorithm; mathematics. Published in Neural Computation.

RNN- and LSTM-Based Soft Sensors Transferability for an Industrial Process

2021

The design and application of Soft Sensors (SSs) in the process industry is a growing research field, which must balance model accuracy against data availability and computational complexity. Black-box machine learning (ML) methods are often used as an efficient tool to implement SSs. Considerable effort is, however, required to properly select input variables, model class, model order, and the needed hyperparameters. The aim of this work was to investigate the possibility of transferring the knowledge acquired in the design of an SS for a given process to a similar one. This has been approached as a transfer learning problem from a source to a target domain. The implementation of a transf…

Dynamical models; LSTM; RNN; soft sensors; sulfur recovery unit; system identification; transfer learning; machine learning; computational complexity theory. Published in Sensors.

Convolutional Regression Tsetlin Machine: An Interpretable Approach to Convolutional Regression

2021

The Convolutional Tsetlin Machine (CTM), a variant of Tsetlin Machine (TM), represents patterns as straightforward AND-rules, to address the high computational complexity and the lack of interpretability of Convolutional Neural Networks (CNNs). CTM has shown competitive performance on MNIST, Fashion-MNIST, and Kuzushiji-MNIST pattern classification benchmarks, both in terms of accuracy and memory footprint. In this paper, we propose the Convolutional Regression Tsetlin Machine (C-RTM) that extends the CTM to support continuous output problems in image analysis. C-RTM identifies patterns in images using the convolution operation as in the CTM and then maps the identified patterns into a real…
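The AND-rules that make TM-family models interpretable can be sketched very compactly: each clause is a conjunction of literals (input bits or their negations), and positive and negative clauses vote on the score. The clause sets below are made up for illustration; this is plain TM-style inference, not the C-RTM's convolutional or regression machinery:

```python
# Tsetlin-Machine-style inference: clauses are AND-rules over literals;
# positive and negative clauses vote on the class score. Illustrative only.

def clause_fires(clause, x):
    """clause: list of (index, negated) literals; fires iff all hold."""
    return all((not x[i]) if neg else x[i] for i, neg in clause)

def tm_score(pos_clauses, neg_clauses, x):
    """Score = (# positive clauses firing) - (# negative clauses firing)."""
    return (sum(clause_fires(c, x) for c in pos_clauses)
            - sum(clause_fires(c, x) for c in neg_clauses))

# Example: recognize the pattern "x0 AND NOT x1".
pos = [[(0, False), (1, True)]]   # fires on inputs with x0=1, x1=0
neg = [[(1, False)]]              # penalizes inputs with x1 set
assert tm_score(pos, neg, [1, 0]) == 1
assert tm_score(pos, neg, [1, 1]) == -1
```

Because every clause is a readable conjunction, the learned model can be inspected directly, which is the interpretability advantage the abstract contrasts with CNNs.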

Computational complexity theory; memory footprint; pattern recognition; convolutional neural network; regression; convolution; interpretability; MNIST database. Published in 2021 6th International Conference on Machine Learning Technologies.

On the effect of analog noise in discrete-time analog computations

1998

We introduce a model for analog computation with discrete time in the presence of analog noise that is flexible enough to cover the most important concrete cases, such as noisy analog neural nets and networks of spiking neurons. This model subsumes the classical model for digital computation in the presence of noise. We show that the presence of arbitrarily small amounts of analog noise reduces the power of analog computational models to that of finite automata, and we also prove a new type of upper bound for the VC-dimension of computational models with analog noise.

Computational model; finite-state machine; artificial neural network; computation; analog noise; analog signal processing; upper and lower bounds; discrete time; analog computations.

Superior Performances of the Neural Network on the Masses Lesions Classification through Morphological Lesion Differences

2007

The purpose of this work is to develop an automatic classification system that could be useful to radiologists in breast cancer investigation. The software has been designed in the framework of the MAGIC-5 collaboration. In an automatic classification system, suspicious regions with a high probability of containing a lesion are extracted from the image as regions of interest (ROIs). Each ROI is characterized by features generally based on morphological lesion differences. A study of the feature-space representation is made, and some classifiers are tested to distinguish pathological regions from healthy ones. The results provided in terms of sensitivity and specificity will be p…

Computer-aided detection; support vector machine; neural networks; k-nearest neighbours.

BELM: Bayesian Extreme Learning Machine

2011

The theory of the extreme learning machine (ELM) has become very popular in the last few years. ELM is a new approach for learning the parameters of the hidden layers of a multilayer neural network (such as the multilayer perceptron or the radial basis function neural network). Its main advantage is its lower computational cost, which is especially relevant when dealing with many patterns defined in a high-dimensional space. This brief proposes a Bayesian approach to ELM, which presents some advantages over other approaches: it allows the introduction of a priori knowledge and obtains confidence intervals (CIs) without the need of applying computationally intensive methods, e.g., bootstrap…
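The core ELM idea the abstract builds on is that the hidden layer stays random and frozen while the output weights come from a closed-form least-squares fit. A dependency-free sketch with a two-neuron hidden layer (the hidden weights below are fixed for reproducibility, and the 2x2 normal equations are solved by hand; the paper's contribution replaces this step with Bayesian estimation of the output weights):

```python
import math

# ELM sketch: frozen hidden layer + closed-form output weights beta.

W_HID = [0.7, 2.0]   # hidden-layer weights (never trained in ELM)
B_HID = [0.0, -1.0]  # hidden-layer biases

def hidden(x):
    return [math.tanh(w * x + b) for w, b in zip(W_HID, B_HID)]

def fit(xs, ts):
    """beta = (H^T H)^{-1} H^T t for a two-neuron hidden layer."""
    H = [hidden(x) for x in xs]
    a = sum(h[0] * h[0] for h in H)
    c = sum(h[0] * h[1] for h in H)
    d = sum(h[1] * h[1] for h in H)
    r0 = sum(h[0] * t for h, t in zip(H, ts))
    r1 = sum(h[1] * t for h, t in zip(H, ts))
    det = a * d - c * c
    return [(d * r0 - c * r1) / det, (a * r1 - c * r0) / det]

def predict(beta, x):
    h = hidden(x)
    return beta[0] * h[0] + beta[1] * h[1]

# Target built from the hidden features, so the fit is exact: beta = (2, -1).
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ts = [2.0 * hidden(x)[0] - hidden(x)[1] for x in xs]
beta = fit(xs, ts)
assert abs(beta[0] - 2.0) < 1e-9 and abs(beta[1] + 1.0) < 1e-9
```

Because only the linear output layer is solved for, training reduces to one least-squares problem, which is where the low computational cost comes from.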

Extreme learning machine; Bayesian probability; overfitting; radial basis function; multilayer perceptron; parameter estimation; artificial neural network; machine learning. Published in IEEE Transactions on Neural Networks.

A Fly-Inspired Mushroom Bodies Model for Sensory-Motor Control Through Sequence and Subsequence Learning

2016

Classification and sequence learning are relevant capabilities used by living beings to extract complex information from the environment for behavioral control. The insect world is full of examples where the presentation time of specific stimuli shapes the behavioral response. On the basis of previously developed neural models inspired by Drosophila melanogaster, a new architecture for classification and sequence learning is presented here from the perspective of the Neural Reuse theory. Classification of relevant input stimuli is performed through resonant neurons, activated by the complex dynamics generated in a lattice of recurrent spiking neurons modeling the insect Mushroom Bodies n…

Bio-inspired control; insect mushroom bodies; sequence learning; neural model; resonant neurons; spiking neurons; Drosophila melanogaster; decision making; robotics.

Sequence Learning in a Single Trial: A Spiking Neurons Model Based on Hippocampal Circuitry.

2020

In contrast with our everyday experience using brain circuits, it can take a prohibitively long time to train a computational system to produce the correct sequence of outputs in the presence of a series of inputs. This suggests that something important is missing in the way in which models try to reproduce basic cognitive functions. In this work, we introduce a new neuronal network architecture that is able to learn, in a single trial, an arbitrarily long sequence of any known objects. The key point of the model is the explicit use of mechanisms and circuitry observed in the hippocampus, which allow the model to reach a level of efficiency and accuracy that, to the best of our…
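The "single trial" property can be illustrated with the simplest one-shot mechanism: an asymmetric Hebbian association between consecutive items, applied once per transition, after which the sequence can be replayed by chaining. This is a toy abstraction for illustration, not the paper's hippocampal circuit model:

```python
# One-shot sequence storage: asymmetric Hebbian weights learned in a
# single pass, then replay by repeatedly retrieving the successor.
# Toy abstraction with one-hot items; illustrative only.

def store_sequence(patterns, dim):
    """W[i][j] += next_i * current_j, applied once per transition."""
    W = [[0.0] * dim for _ in range(dim)]
    for cur, nxt in zip(patterns, patterns[1:]):
        for i in range(dim):
            for j in range(dim):
                W[i][j] += nxt[i] * cur[j]
    return W

def recall_next(W, s):
    """Threshold W @ s to retrieve the successor pattern."""
    dim = len(s)
    h = [sum(W[i][j] * s[j] for j in range(dim)) for i in range(dim)]
    return [1 if v > 0 else 0 for v in h]

# Items A, B, C stored in a single pass, then replayed starting from A.
A, B, C = [1, 0, 0], [0, 1, 0], [0, 0, 1]
W = store_sequence([A, B, C], 3)
assert recall_next(W, A) == B
assert recall_next(W, recall_next(W, A)) == C
```

The contrast the abstract draws is with gradient-trained systems, which typically need many presentations of the same sequence before the transitions are reliable.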

Brain modeling; computer architecture; hippocampus; learning systems; navigation; neurons; persistent firing (PF); robot navigation; spike-timing-dependent-plasticity synapse; spiking neurons. Published in IEEE Transactions on Neural Networks and Learning Systems.

Moving Learning Machine Towards Fast Real-Time Applications: A High-Speed FPGA-based Implementation of the OS-ELM Training Algorithm

2018

Currently, some emerging online learning applications handle data streams in real time. The On-line Sequential Extreme Learning Machine (OS-ELM) has been successfully used in real-time condition prediction applications because of its good generalization performance at an extreme learning speed, but the time per training update (the inverse of the training frequency) achieved in these continuous learning applications has to be further reduced. This paper proposes a performance-optimized implementation of the OS-ELM training algorithm for real-time applications. In this case, the natural way of feeding the training of the neural network is one-by-one, i.e., training the neur…
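The one-by-one OS-ELM training step is a recursive least-squares update over the hidden-layer outputs: each new sample updates the output weights beta and an inverse-correlation matrix P without refitting on past data. A pure-Python sketch with a two-neuron hidden layer and fixed hidden weights for reproducibility (an illustration of the update rule, not the paper's FPGA design):

```python
import math

# One-by-one OS-ELM update (recursive least squares on hidden outputs).

W_HID = [0.7, 2.0]   # frozen hidden-layer weights
B_HID = [0.0, -1.0]  # frozen hidden-layer biases

def hidden(x):
    return [math.tanh(w * x + b) for w, b in zip(W_HID, B_HID)]

def os_elm_step(P, beta, h, t):
    """P <- P - P h h^T P / (1 + h^T P h);  beta <- beta + P h (t - h^T beta)."""
    Ph = [P[0][0] * h[0] + P[0][1] * h[1],
          P[1][0] * h[0] + P[1][1] * h[1]]
    denom = 1.0 + h[0] * Ph[0] + h[1] * Ph[1]
    P = [[P[i][j] - Ph[i] * Ph[j] / denom for j in range(2)] for i in range(2)]
    Ph = [P[0][0] * h[0] + P[0][1] * h[1],     # recompute with updated P
          P[1][0] * h[0] + P[1][1] * h[1]]
    err = t - (h[0] * beta[0] + h[1] * beta[1])
    return P, [beta[0] + Ph[0] * err, beta[1] + Ph[1] * err]

# Stream samples one at a time; the target is built from the hidden
# features, so beta should converge to the "true" weights (3, -2).
P = [[1e6, 0.0], [0.0, 1e6]]   # large initial P: uninformative start
beta = [0.0, 0.0]
for x in [0.0, 0.4, 0.8, 1.2, 1.6, 2.0]:
    h = hidden(x)
    t = 3.0 * h[0] - 2.0 * h[1]
    P, beta = os_elm_step(P, beta, h, t)
assert abs(beta[0] - 3.0) < 1e-3 and abs(beta[1] + 2.0) < 1e-3
```

Each step has fixed cost in the hidden dimension, which is what makes a pipelined hardware implementation of the one-by-one mode attractive.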

FPGA; extreme learning machine; online sequential ELM (OS-ELM); real-time learning; on-chip training; on-line learning; hardware implementation; data stream mining; field-programmable gate array.

Perceptual adaptive insensitivity for support vector machine image coding.

2005

Support vector machine (SVM) learning has recently been proposed by Robinson and Kecman for image compression in the frequency domain using a constant epsilon-insensitivity zone. However, according to the statistical properties of natural images and the properties of human perception, a constant insensitivity makes sense in the spatial domain but is certainly not a good option in the frequency domain. In fact, their approach makes a fixed low-pass assumption, as the number of discrete cosine transform (DCT) coefficients used in the training was limited. This paper extends the work of Robinson and Kecman by proposing the use of adaptive-insensitivity SVMs [2] for image coding u…
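The constant-versus-adaptive distinction can be made concrete with the epsilon-insensitive loss itself: errors below the threshold cost nothing, so widening the zone at frequencies where vision is less sensitive lets more coefficients be dropped for free. The per-frequency thresholds below are made-up illustrative values, not the perceptual profile used in the paper:

```python
# Epsilon-insensitive loss with a constant vs. a frequency-adaptive zone.

def eps_insensitive_loss(pred, target, eps):
    """SVR-style loss: zero inside the insensitivity zone of width eps."""
    return max(0.0, abs(pred - target) - eps)

# Per-frequency thresholds: tolerate more error at high DCT frequencies,
# where human vision is less sensitive (illustrative values only).
adaptive_eps = [0.5, 1.0, 2.0, 4.0]   # low -> high DCT frequency
errors       = [0.4, 1.5, 1.5, 3.0]   # |pred - target| per frequency band

constant = [eps_insensitive_loss(e, 0.0, 1.0) for e in errors]
adaptive = [eps_insensitive_loss(e, 0.0, eps)
            for e, eps in zip(errors, adaptive_eps)]
assert constant == [0.0, 0.5, 0.5, 2.0]  # flat zone penalizes high bands
assert adaptive == [0.0, 0.5, 0.0, 0.0]  # wider zones absorb those errors
```

In SVM terms, coefficients falling inside the zone produce no support vectors, so a perceptually shaped zone trades invisible distortion for a sparser, cheaper code.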

Support vector machine; image compression; image coding; image processing; discrete cosine transform; frequency domain; distortion; visual perception; data compression. Published in IEEE Transactions on Neural Networks.