Search results for "PROBABILITY"

Showing 10 of 3417 documents

A new paradigm for pattern classification: Nearest Border Techniques

2013

Published version of a chapter in the book AI 2013: Advances in Artificial Intelligence; also available from the publisher at http://dx.doi.org/10.1007/978-3-319-03680-9_44. There are many paradigms for pattern classification. In contrast to these, this paper introduces a paradigm that has not previously been reported in the literature, which we shall refer to as the Nearest Border (NB) paradigm. The philosophy for developing such an NB strategy is as follows: given the training data set for each class, we shall first attempt to create a border for each individual class. After that, we advocate that testing is accomplished by assigning the test sample to the class whose border it lies closest to…
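
The abstract is truncated before the construction details, but the two-step recipe it describes can be sketched directly. The illustration below assumes that each class border is learned with a one-class SVM (one of the instantiations suggested by the record's keywords) and that the signed decision function stands in for distance to the border; the data and parameters are hypothetical, not the paper's experimental setup.

    # Sketch of the Nearest Border idea: learn one border per class, then
    # assign a test point to the class whose border it lies closest to.
    # A one-class SVM per class is assumed; its signed decision function
    # serves as a proxy for (negated) distance to the learned border.
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    train = {  # hypothetical two-class training data
        "A": rng.normal(loc=0.0, scale=1.0, size=(100, 2)),
        "B": rng.normal(loc=4.0, scale=1.0, size=(100, 2)),
    }

    # Step 1: create a border for each individual class.
    borders = {c: OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(X)
               for c, X in train.items()}

    # Step 2: assign the test sample to the class whose border scores it
    # highest, i.e. the border it lies closest to (or deepest inside).
    def nearest_border_predict(x):
        scores = {c: svm.decision_function(x.reshape(1, -1))[0]
                  for c, svm in borders.items()}
        return max(scores, key=scores.get)

    print(nearest_border_predict(np.array([0.5, -0.2])))  # expected: "A"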

Keywords: Class (set theory); Training set; Pattern classification; Computer science; Support vector machine (SVM); Centroid; Experimental testing; Outlier; Artificial intelligence; Test sample; Border identification

Adaptive trial design: a general methodology for censored time to event data.

2008

Adaptive designs allow a clinical trial design to be changed according to interim findings without inflating the type I error. The Inverse Normal method can be considered an adaptive generalization of classical group sequential designs. The use of the Inverse Normal method for censored survival data had been demonstrated only for the logrank statistic. However, the logrank statistic is inefficient in the presence of nuisance covariates affecting survival. We demonstrate how the Inverse Normal method can be applied to Cox regression analysis. The required independence between the test statistics of the different stages of the trial can be obtained by two different approaches. One is using the indepen…
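
For context, the inverse normal combination at the heart of such designs is a one-line formula: the stage-wise one-sided p-values are transformed through Φ⁻¹(1 − p_i) and combined with weights that must be fixed before the trial starts, which is what lets the design be adapted at the interim without inflating type I error. A minimal two-stage sketch follows; the weights and p-values are hypothetical, and in the paper's setting the stage-wise p-values would come from Cox regression rather than the logrank statistic.

    # Two-stage inverse normal combination test with prespecified weights:
    # Z = (w1*Phi^{-1}(1-p1) + w2*Phi^{-1}(1-p2)) / sqrt(w1^2 + w2^2),
    # rejecting H0 at one-sided level alpha if Z > Phi^{-1}(1-alpha).
    import math
    from scipy.stats import norm

    w1, w2 = math.sqrt(0.5), math.sqrt(0.5)  # hypothetical prespecified weights
    p1, p2 = 0.04, 0.03                      # hypothetical stage-wise p-values

    z = (w1 * norm.ppf(1 - p1) + w2 * norm.ppf(1 - p2)) / math.sqrt(w1**2 + w2**2)
    print(round(z, 3), z > norm.ppf(1 - 0.025))  # combined Z and test decision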

Keywords: Clinical trials as topic; Randomized controlled trials as topic; Proportional hazards model; Normal distribution; Regression analysis; Survival analysis; Research design; Data interpretation, statistical; Statistics; Covariate; Econometrics; Medicine; Humans; Pharmacology (medical); Computer simulation; Independence (probability theory); Statistical hypothesis testing; Type I and type II errors; General medicine
Source: Contemporary Clinical Trials

Bayesian versus data driven model selection for microarray data

2014

Clustering is one of the most well-known activities in scientific investigation and the object of research in many disciplines, ranging from Statistics to Computer Science. In this beautiful area, one of the most difficult challenges is a particular instance of the model selection problem: the identification of the correct number of clusters in a dataset. In what follows, for ease of reference, we still refer to that instance as model selection. It is an important part of any statistical analysis. The techniques used for solving it are mainly either Bayesian or data-driven, and both are based on internal knowledge; that is, they use information obtained by processing the input data. A…
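
As a concrete instance of the Bayesian side of that comparison, one standard recipe is to fit Gaussian mixture models for a range of candidate cluster counts and keep the count that minimizes the Bayesian information criterion. The sketch below shows this generic recipe on synthetic data; it illustrates the model selection problem itself, not the specific procedures benchmarked in the paper.

    # Estimate the number of clusters k by minimizing BIC over Gaussian
    # mixture fits on a synthetic three-cluster dataset.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(m, 0.5, size=(80, 2)) for m in (0.0, 3.0, 6.0)])

    bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
            for k in range(1, 8)}
    print(min(bics, key=bics.get))  # expected: 3 for this dataset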

Keywords: Clustering; Cluster analysis; Model selection; Bayesian information criterion; Akaike information criterion; Minimum message length; Bayesian probability; Data-driven; Determining the number of clusters in a data set; Machine learning; Data mining; Artificial intelligence; Bioinformatics; Computer science; Settore INF/01 - Informatica

Neural networks with non-uniform embedding and explicit validation phase to assess Granger causality

2015

A challenging problem when studying a dynamical system is to find the interdependencies among its individual components. Several algorithms have been proposed to detect directed dynamical influences between time series. Two of the most widely used approaches are a model-free one (transfer entropy) and a model-based one (Granger causality). Several pitfalls are related to the presence or absence of assumptions in modeling the relevant features of the data. We tried to overcome those pitfalls using a neural network approach in which a model is built without any a priori assumptions. In this sense the method can be seen as a bridge between model-free and model-based approaches. The experiments perfo…
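
The test underlying both approaches can be stated without the neural network machinery: X Granger-causes Y if adding the past of X reduces the error of predicting Y beyond what Y's own past achieves. Below is a linear toy version of that comparison; the paper replaces the linear predictors with neural networks and a validated, non-uniform choice of lags, so the data and lags here are purely hypothetical.

    # Toy Granger-style test: does the past of x help predict y?
    # Compare residual variance of y ~ past(y) against y ~ past(y) + past(x).
    import numpy as np

    rng = np.random.default_rng(2)
    n, lag = 2000, 2
    x = rng.normal(size=n)
    y = np.zeros(n)
    for t in range(lag, n):  # y is driven by the past of x
        y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 2] + 0.1 * rng.normal()

    def residual_var(target, predictors):
        beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
        return np.var(target - predictors @ beta)

    Y = y[lag:]
    own_past = np.column_stack([y[lag - i: n - i] for i in (1, 2)])
    joint = np.column_stack([own_past] + [x[lag - i: n - i] for i in (1, 2)])

    # A ratio well above 1 indicates that x Granger-causes y.
    print(residual_var(Y, own_past) / residual_var(Y, joint))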

Keywords: Granger causality; Transfer entropy; Non-uniform embedding; Embedding; Artificial neural network; Neural networks (computer); Overfitting; Causality; Classification; Entropy (information theory); A priori and a posteriori; Machine learning; Data mining; Computer simulation; Models, theoretical; Probability and statistics; Cognitive neuroscience; Artificial intelligence; Medicine and health sciences; Physics - Data Analysis, Statistics and Probability (physics.data-an); Settore ING-INF/06 - Bioingegneria Elettronica e Informatica

Channel selection in Cognitive Radio Networks: A Switchable Bayesian Learning Automata approach

2013

We consider the problem of a user operating within a Cognitive Radio Network (CRN) that involves N channels, each associated with a Primary User (PU). The problem consists of allocating to a Secondary User (SU) a channel which, at any given time instant, is not being used by a PU. Within our study, we assume that an SU is allowed to perform “channel switching”, i.e., to choose an alternate channel S times (where S + 1 ≤ N) if the previous choice does not lead to a channel that is vacant. The paper first presents a formal probabilistic model for the problem itself, referred to as the Formal Secondary Channel Selection (FSCS) problem, and the characteristics of the FSCS are then analyzed. Ther…
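
Although the abstract is cut off before the solution is described, the Bayesian Learning Automata family it builds on is, at heart, Thompson sampling over the channels' unknown vacancy probabilities, here combined with up to S switches per time instant. The toy sketch below illustrates only that idea; the Beta-posterior bookkeeping and the channel probabilities are hypothetical, not the paper's exact switchable scheme.

    # Toy Bayesian channel selection with switching: keep a Beta posterior
    # per channel's vacancy probability; each slot, rank channels by a
    # posterior sample and switch up to S times if the sensed channel is busy.
    import numpy as np

    rng = np.random.default_rng(3)
    p_vacant = np.array([0.2, 0.5, 0.8, 0.4])  # hypothetical true vacancy rates
    a, b = np.ones(4), np.ones(4)              # Beta(1, 1) priors per channel
    S = 2                                      # allowed switches (S + 1 <= N)

    hits = 0
    for t in range(5000):
        order = np.argsort(-rng.beta(a, b))    # channels ranked by sampled belief
        for ch in order[: S + 1]:              # at most S channel switches
            vacant = rng.random() < p_vacant[ch]
            a[ch] += vacant                    # update posterior with the
            b[ch] += not vacant                # outcome of sensing channel ch
            if vacant:
                hits += 1
                break
    print(hits / 5000)  # fraction of slots where a vacant channel was found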

Keywords: Cognitive radio; Communication channel; Automaton; Bayesian probability; Bayesian inference; Sampling (statistics); Statistical model; Probability vector; Theoretical computer science; Computer science; Artificial intelligence
Source: 2013 IEEE 24th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC)

Social Practices based characters in a Robotic Storytelling System

2020

In this work, we present a robotic storytelling system where the characters are modelled as cognitive agents embodied in Pepper and NAO robots. The characters were designed by exploiting the ACT-R architecture, taking into account the knowledge, behaviours, norms, and expectations typical of social practices, as well as desires resulting from their personality. The characters explain their reasoning processes during the narration through a sort of internal dialogue, which generates a high level of credibility as experienced by the audience.

Keywords: Cognitive science; Cognitive architectures; Cognitive systems; ACT-R; Humanoid robots; Social robotics; Human-robot interaction; Storytelling; Narrative; Dialogical self; Cognition; Embodied cognition; Credibility; Personality; Psychology

Individual Variability and Average Reliability in Parallel Networks of Heterogeneous Biological and Artificial Nanostructures

2013

We simulate the collective electrical response of heterogeneous ensembles of biological and artificial nanostructures whose individual threshold potentials show significant variability. This problem is of current interest because nanotechnology is bound to produce nanostructures with significant experimental variability in their individual physical properties. This diversity is also present in biological systems, which are nevertheless able to process information efficiently. The nanostructures considered are the ion channels of biological membranes, nanowire field-effect transistors, and metallic nanoparticle-based single-electron transistors. These systems are simulated with canonical models…
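
The computational experiment at the core of such a study reduces to a generic skeleton: draw one threshold potential per unit from a distribution with substantial spread, drive all units in parallel with the same input, and read out the averaged collective response. The sketch below shows only that skeleton; the threshold model and its parameters are hypothetical stand-ins for the paper's canonical channel and transistor models.

    # Monte Carlo sketch: N parallel threshold units with heterogeneous
    # thresholds; the collective output is the fraction of units that fire.
    import numpy as np

    rng = np.random.default_rng(4)
    N = 1000
    thresholds = rng.normal(loc=1.0, scale=0.3, size=N)  # hypothetical spread

    def collective_response(v_in):
        """Fraction of units whose threshold is exceeded by the input."""
        return np.mean(v_in > thresholds)

    for v in (0.5, 1.0, 1.5):
        print(v, collective_response(v))
    # A single unit answers 0 or 1 unreliably near v = 1.0; the ensemble
    # average varies smoothly with the input despite individual variability.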

Keywords: Collective behavior; Threshold potential; Parallel algorithm; Nanowire; Field-effect transistor; Canonical model; Nanobiotechnology; Probability distribution; Electronic engineering; Electrical and electronic engineering; Biological system
Source: IEEE Transactions on Nanotechnology

Construction and stability of a close-packed structure observed in thin colloidal crystals

2007

We have characterized a recently discovered close-packed structure of confined charged colloidal spheres. Using different microscopy experiments, the vertically arranged hexagonal planes of the n-hcp⊥ structure are found to evolve continuously from the horizontally oriented stacks of n hexagonal planes (nΔ), following the maximum packing criterion, but to transform discontinuously into a stack of n+1 square planes ((n+1)□). Large, mechanically stable domains with threefold twin structures are regularly observed in the suspended state at packing fractions between 0.4 and 0.58.

Keywords: Colloid; Colloidal crystal; Materials science; Microscopy; Nanotechnology; Molecular physics; Spheres; Perpendicular; Square (algebra); Stability (probability)
Source: Physical Review E

Explicit Upper Bound for Entropy Numbers

2004

We give an explicit upper bound for the entropy numbers of the embedding I : W^{r,p}(Q_l) → C(Q_l), where Q_l = (−l, l)^m ⊂ ℝ^m, r ∈ ℕ, p ∈ (1, ∞), and rp > m.

Keywords: Entropy (information theory); Maximum entropy probability distribution; Min-entropy; Entropy rate; Quantum relative entropy; Joint quantum entropy; Embedding; Upper and lower bounds; Combinatorics; Applied mathematics; Analysis; Mathematics
Source: Zeitschrift für Analysis und ihre Anwendungen

Bayesian hypothesis testing: A reference approach

2002

Summary: For any probability model M = {p(x | θ, ω), θ ∈ Θ, ω ∈ Ω} assumed to describe the probabilistic behaviour of data x ∈ X, it is argued that testing whether or not the available data are compatible with the hypothesis H0 = {θ = θ0} is best considered as a formal decision problem on whether to use (a0), or not to use (a1), the simpler probability model (or null model) M0 = {p(x | θ0, ω), ω ∈ Ω}, where the loss difference L(a0, θ, ω) − L(a1, θ, ω) is proportional to the amount of information δ(θ0, ω) which would be lost if the simplified model M0 were used as a proxy for the assumed model M. For any prior distribution π(θ, ω), the appropriate normative solution is obtained by rejecting the null model M0 wh…
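
For orientation, in the simplest special case of this framework (normal data, known variance, reference prior) the intrinsic test statistic takes the closed form d(θ0 | x) = (1 + z²)/2 with z = √n (x̄ − θ0)/σ, and the null model is rejected when d exceeds a utility-based threshold, conventionally around 2.5 (roughly z ≈ 2) or 5 (roughly z ≈ 3). A small numeric sketch of that special case, with hypothetical data:

    # Intrinsic (reference) test of H0: theta = theta0 for normal data with
    # known sigma: d = (1 + z**2) / 2, z = sqrt(n) * (xbar - theta0) / sigma.
    import math

    theta0, sigma = 0.0, 1.0
    n, xbar = 25, 0.45  # hypothetical sample

    z = math.sqrt(n) * (xbar - theta0) / sigma
    d = (1 + z ** 2) / 2
    print(round(z, 2), round(d, 2), d > 2.5)  # 2.25, 3.03, True -> reject H0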

Keywords: Bayes' theorem; Statistical hypothesis testing; Prior probability; Lindley's paradox; Binomial distribution; Multivariate normal distribution; Distribution (mathematics); Context (language use); Combinatorics; Statistics; Statistics and probability; Statistics, probability and uncertainty; Mathematics