Search results for "Neural Networks"

Showing 10 of 599 documents

The Ultimate Fate of Supercooled Liquids

2010

In recent years it has become widely accepted that a dynamical length scale ξ_α plays an important role in supercooled liquids near the glass transition. We examine the implications of the interplay between the growing ξ_α and the size of the crystal nucleus, ξ_M, which shrinks on cooling. We argue that at low temperatures, where ξ_α > ξ_M, a new crystallization mechanism emerges, enabling rapid development of a large-scale web of sparsely connected crystallinity. Though we predict this web percolates the system at too low a temperature to be easily seen in the laboratory, there are noticeable residual effects near the glass transition that can account …

Keywords: Length scale; Crystal growth; Crystal; Crystallinity; Crystallization; Supercooling; Glass transition; Thermodynamics; Quantum Theory; Condensed matter physics; Physical and Theoretical Chemistry; Statistical Mechanics (cond-mat.stat-mech); Soft Condensed Matter (cond-mat.soft); Disordered Systems and Neural Networks (cond-mat.dis-nn)
researchProduct

Growing length scales in a supercooled liquid close to an interface

2002

We present the results of molecular dynamics computer simulations of a simple glass former close to an interface between the liquid and the frozen amorphous phase of the same material. By investigating F_s(q,z,t), the incoherent intermediate scattering function for particles at a distance z from the wall, we show that the relaxation dynamics of particles close to the wall is much slower than that of particles far from the wall. For small z the typical relaxation time for F_s(q,z,t) increases like exp(Delta/(z-z_p)), where Delta and z_p are constants. We use the location of the crossover from this law to the bulk behavior to define a first length scale tilde{z}. A differe…
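The quoted relaxation law can be made concrete with a short numerical sketch. The constants below are illustrative assumptions, not values from the paper; the sketch evaluates tau(z) = tau_bulk * exp(Delta/(z - z_p)) and locates the crossover distance tilde{z} at which wall effects have essentially decayed to the bulk value.

```python
import numpy as np

# Hypothetical parameters for illustration only; Delta, z_p and the bulk
# relaxation time are fit constants in the paper, not the values used here.
DELTA = 2.0     # assumed wall-interaction scale
Z_P = 0.5       # assumed offset
TAU_BULK = 1.0  # assumed bulk relaxation time

def tau(z):
    """Relaxation time of F_s(q, z, t) at distance z from the wall:
    tau(z) = tau_bulk * exp(Delta / (z - z_p)), valid for z > z_p."""
    return TAU_BULK * np.exp(DELTA / (z - Z_P))

def crossover_length(zs, factor=1.05):
    """Smallest z at which tau(z) is within `factor` of the bulk value,
    i.e. a proxy for the length scale tilde{z}."""
    for z in zs:
        if tau(z) <= factor * TAU_BULK:
            return z
    return None

zs = np.linspace(1.0, 60.0, 600)
z_tilde = crossover_length(zs)
```

Because the exponent diverges as z approaches z_p, the slowdown near the wall is dramatic, while tau(z) approaches the bulk value only gradually at large z.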

Keywords: Length scale; Scattering function; Relaxation (NMR); Amorphous phase; Molecular dynamics; Supercooling; Ansatz; Condensed matter physics; General Chemical Engineering; General Physics and Astronomy; Statistical Mechanics (cond-mat.stat-mech); Disordered Systems and Neural Networks (cond-mat.dis-nn)
Published in: Philosophical Magazine B
researchProduct

Classical and ab-initio molecular dynamic simulation of an amorphous silica surface

2001

We present the results of a classical molecular dynamics simulation as well as of an ab initio molecular dynamics simulation of an amorphous silica surface. In the case of the classical simulation we use the potential proposed by van Beest et al. (BKS), whereas the ab initio simulation is done with the Car-Parrinello method (CPMD). We find that the surfaces generated with BKS have a higher concentration of defects (e.g. of two-membered rings) than those generated with CPMD. In addition, the distribution functions of the angles and of the distances differ for the short rings. Hence we conclude that whereas the BKS potential is able to reproduce correctly the surface on the …

Keywords: Length scale; Surface (mathematics); Car–Parrinello molecular dynamics; Ab initio; Molecular dynamics; Distribution function; Amorphous silica; Materials science; Chemical physics; General Physics and Astronomy; Statistical Mechanics (cond-mat.stat-mech); Disordered Systems and Neural Networks (cond-mat.dis-nn)
Published in: Computer Physics Communications
researchProduct

An Artificial Neural Network Assisted Dynamic Light Scattering Procedure for Assessing Living Cells Size in Suspension

2020

Dynamic light scattering (DLS) is an essential technique for assessing the size of particles in suspension, covering the range from nanometers to microns. Although it has been well established for quite some time, improvements can still be made by simplifying the experimental setup and by employing an easier-to-use data processing procedure for the acquired time series. A DLS time-series processing procedure based on an artificial neural network is presented, with details regarding the design, training procedure, and error analysis, working over an extended particle size range. The procedure proved to be much faster at time-series processing and easier to use than fitti…
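For readers unfamiliar with the underlying physics, DLS sizing rests on the Stokes-Einstein relation: the decay rate of the scattered-light correlation function gives a diffusion coefficient, which maps to a hydrodynamic diameter. The sketch below is a generic illustration of that relation, not the paper's neural-network procedure; the laser wavelength, angle, and medium properties are assumed values.

```python
import numpy as np

# Stokes-Einstein sizing from a DLS decay rate. All instrument and medium
# parameters below are illustrative assumptions, not taken from the paper.
KB = 1.380649e-23     # Boltzmann constant, J/K
T = 293.15            # temperature, K
ETA = 1.0e-3          # viscosity of water, Pa*s
WAVELENGTH = 633e-9   # He-Ne laser wavelength, m (assumed)
N_MEDIUM = 1.33       # refractive index of water
THETA = np.pi / 2     # 90-degree scattering angle (assumed)

# scattering vector magnitude q = (4*pi*n/lambda) * sin(theta/2)
q = 4 * np.pi * N_MEDIUM / WAVELENGTH * np.sin(THETA / 2)

def diameter_from_gamma(gamma):
    """Hydrodynamic diameter from the decay rate Gamma of the field
    correlation function g1(tau) = exp(-Gamma * tau), with Gamma = D * q^2."""
    D = gamma / q**2                       # diffusion coefficient, m^2/s
    return KB * T / (3 * np.pi * ETA * D)  # Stokes-Einstein diameter, m

# round trip: pick a 1-micron particle, compute its Gamma, recover the size
d_true = 1e-6
D_true = KB * T / (3 * np.pi * ETA * d_true)
gamma_true = D_true * q**2
d_est = diameter_from_gamma(gamma_true)
```

The paper's contribution is replacing the correlation-fitting step with a trained neural network that maps the measured time series to a size estimate directly.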

Keywords: Dynamic light scattering; Artificial neural network; Simulated time-series; Particle size; Cell size; Suspensions; Scattering, Radiation; Saccharomyces cerevisiae; Fermentation; Aqueous solution; Analytical Chemistry; Biochemistry; Instrumentation; Atomic and Molecular Physics and Optics; Electrical and Electronic Engineering
Published in: Sensors
researchProduct

Five Ways in Which Computational Modeling Can Help Advance Cognitive Science

2019

There is a rich tradition of building computational models in cognitive science, but modeling, theoretical, and experimental research are not as tightly integrated as they could be. In this paper, we show that computational techniques—even simple ones that are straightforward to use—can greatly facilitate designing, implementing, and analyzing experiments, and generally help lift research to a new level. We focus on the domain of artificial grammar learning, and we give five concrete examples in this domain for (a) formalizing and clarifying theories, (b) generating stimuli, (c) visualization, (d) model selection, and (e) exploring the hypothesis space.

Keywords: Artificial grammar learning; Artificial language learning; Formal grammars; Computational modeling; Model selection; Bayesian inference; Bayesian modeling; Visualization; Psycholinguistics; Artificial neural network; Cognitive science; Cognitive Neuroscience; Experimental and Cognitive Psychology; Linguistics and Language; Human-Computer Interaction; Artificial Intelligence
Published in: Topics in Cognitive Science
researchProduct

Understanding dynamic scenes

2000

We propose a framework for the representation of visual knowledge in a robotic agent, with special attention to the understanding of dynamic scenes. According to our approach, understanding involves the generation of a high-level, declarative description of the perceived world. Developing such a description requires both bottom-up, data-driven processes that associate symbolic knowledge representation structures with the data coming out of a vision system, and top-down processes in which high-level, symbolic information is in turn employed to drive and further refine the interpretation of a scene. On the one hand, the computer vision community approached this problem in terms of 2D/3D s…

Keywords: Knowledge representation and reasoning; Machine vision; Artificial vision; Conceptual spaces; Perception; Representation levels; Hybrid processing; Robotics; Motion (physics); Action (philosophy); Human–computer interaction; Artificial intelligence; Linguistics and Language; Settore ING-INF/05 - Information Processing Systems
Published in: Artificial Intelligence
researchProduct

Finite-time boundedness for uncertain discrete neural networks with time-delays and Markovian jumps

2014

This paper is concerned with stochastic finite-time boundedness analysis for a class of uncertain discrete-time neural networks with Markovian jump parameters and time-delays. The concepts of stochastic finite-time stability and stochastic finite-time boundedness are first given for neural networks. Then, applying the Lyapunov approach and the linear matrix inequality technique, sufficient criteria on stochastic finite-time boundedness are provided for the class of nominal or uncertain discrete-time neural networks with Markovian jump parameters and time-delays. It is shown that the derived conditions are characterized in terms of the solution to these linear matrix inequalities. Finally, n…

Keywords: Neural networks; Stochastic finite-time boundedness; Markovian jump systems; Linear matrix inequalities; Discrete-time systems; Lyapunov function; Stochastic process; Jump process; Markov chain; Applied mathematics; Cognitive Neuroscience; Artificial Intelligence; Computer Vision and Pattern Recognition; Computer Science Applications; Mathematics
researchProduct

Exponential Transients in Continuous-Time Symmetric Hopfield Nets

2001

We establish a fundamental result in the theory of continuous-time neural computation by showing that so-called continuous-time symmetric Hopfield nets, whose asymptotic convergence is always guaranteed by the existence of a Liapunov function, may in the worst case possess a transient period that is exponential in the network size. The result stands in contrast to, e.g., the use of such network models in combinatorial optimization applications.
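The convergence guarantee invoked here can be checked numerically: with a symmetric weight matrix, the continuous-time Hopfield dynamics du/dt = -u + W tanh(u) + b is gradient-like and settles to a fixed point. The sketch below is a minimal illustration with an arbitrary small network, not the worst-case construction of the paper (whose point is that the transient, while finite, can be exponentially long).

```python
import numpy as np

# Euler integration of a continuous-time Hopfield net
#     du/dt = -u + W @ tanh(u) + b
# with a symmetric W, so a Liapunov function exists and the trajectory
# must converge. Network size and weights are arbitrary illustrations.
rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n))
W = (A + A.T) / 2            # symmetrize to guarantee convergence
b = 0.1 * rng.normal(size=n)

u = rng.normal(size=n)
dt = 0.01
for _ in range(20000):       # integrate to t = 200
    du = -u + W @ np.tanh(u) + b
    u = u + dt * du
```

For generic random weights the transient is short; the paper's result is that adversarially chosen symmetric weights can stretch it to a length exponential in n.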

Keywords: Hopfield nets; Hopfield network; Recurrent neural network; Models of neural computation; Lyapunov function; Stability; Exponential function; Convergence; Combinatorial optimization; Dynamical systems; Applied mathematics; Algorithm; Network model; Mathematics
researchProduct

Adaptive Neural Stabilizing Controller for a Class of Mismatched Uncertain Nonlinear Systems by State and Output Feedback

2015

In this paper, first, an adaptive neural network (NN) state-feedback controller for a class of nonlinear systems with mismatched uncertainties is proposed. Using a radial basis function NN (RBFNN), a bound on the unknown nonlinear functions is approximated, so that no information about the upper bound of the mismatched uncertainties is required. Then, an observer-based adaptive controller based on the RBFNN is designed to stabilize uncertain nonlinear systems whose states cannot be measured. The state-feedback and observer-based controllers are based on Lyapunov and strictly positive real Lyapunov stability theory, respectively, and it is shown that the asymptotic convergence of the closed-loop system to ze…

Keywords: Neural Networks (Computer); Nonlinear control; Nonlinear Dynamics; Adaptive system; Control theory; Stability theory; Lyapunov function; Observer; Upper and lower bounds; Feedback; Computer Simulation; Control and Systems Engineering; Human-Computer Interaction; Electrical and Electronic Engineering; Software; Information Systems
Published in: IEEE Transactions on Cybernetics
researchProduct

Region of interest detection using MLP

2014

A novel technique is proposed for detecting regions of interest in a time series as deviations from its characteristic behavior. The deterministic form of the signal is obtained using a reliably trained MLP neural network with detailed complexity management and cross-validation-based generalization assurance. The proposed technique is demonstrated on simulated and real data.
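The underlying idea, flagging samples the trained predictor cannot explain, can be sketched without the paper's MLP: the stand-in below fits a linear one-step-ahead autoregressive predictor (a deliberate, dependency-free substitute for the MLP) and marks regions where the prediction error is far above its typical level. The synthetic signal and thresholds are illustrative assumptions.

```python
import numpy as np

# Region-of-interest detection as deviation from characteristic behavior.
# A linear AR predictor stands in for the paper's MLP; the anomaly, signal,
# and threshold rule are illustrative choices, not the paper's.
rng = np.random.default_rng(1)
t = np.arange(2000)
signal = np.sin(2 * np.pi * t / 50) + 0.05 * rng.normal(size=t.size)
signal[1200:1250] += 1.5   # injected deviation = the region of interest

ORDER = 10
N = signal.size
# lagged design matrix: predict sample k from the ORDER samples before it
X = np.stack([signal[k - ORDER:k] for k in range(ORDER, N)])
y = signal[ORDER:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

err = np.abs(X @ coef - y)                 # one-step prediction error
threshold = err.mean() + 4 * err.std()     # "far above typical" rule
roi = np.flatnonzero(err > threshold) + ORDER  # indices in the original signal
```

An MLP replacing the linear predictor captures nonlinear structure in the "characteristic behavior", which is where the paper's complexity management and cross-validation come in.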

Keywords: MLP; Neural networks
researchProduct