Search results for " and Control"

Showing 10 of 385 documents

Deep Q-Learning With Q-Matrix Transfer Learning for Novel Fire Evacuation Environment

2021

We focus on the important problem of emergency evacuation, which could clearly benefit from reinforcement learning yet has been largely unaddressed by it. Emergency evacuation is a complex task that is difficult to solve with reinforcement learning: an emergency situation is highly dynamic, with many changing variables and complex constraints that make it difficult to train on. In this paper, we propose the first fire evacuation environment for training reinforcement learning agents for evacuation planning. The environment is modelled as a graph capturing the building structure and includes realistic features such as fire spread, uncertainty and bottlenecks. We have implemented the envir…
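The Q-learning update at the heart of this approach can be sketched on a toy graph "building" (the graph, rewards, and hyperparameters below are made up for illustration; the paper's environment, with fire spread, uncertainty and bottlenecks, is far richer):

```python
import random

random.seed(0)

# Hypothetical building graph: room -> list of adjacent rooms; room 3 is the exit.
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [3]}
EXIT = 3

Q = {(s, a): 0.0 for s in graph for a in graph[s]}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(500):
    s = random.choice([0, 1, 2])
    while s != EXIT:
        acts = graph[s]
        # Epsilon-greedy action selection.
        if random.random() < eps:
            a = random.choice(acts)
        else:
            a = max(acts, key=lambda x: Q[(s, x)])
        r = 10.0 if a == EXIT else -1.0               # reward for reaching the exit
        nxt_best = max(Q[(a, b)] for b in graph[a])
        Q[(s, a)] += alpha * (r + gamma * nxt_best - Q[(s, a)])
        s = a

# The greedy policy should route every room toward the exit.
policy = {s: max(graph[s], key=lambda a: Q[(s, a)]) for s in [0, 1, 2]}
```

The learned Q-matrix is exactly the kind of object the paper transfers between environments.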

FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG); Computer Science - Artificial Intelligence (cs.AI); Computer Science - Systems and Control; Systems and Control (eess.SY); FOS: Electrical engineering, electronic engineering, information engineering; Q-learning; reinforcement learning; transfer of learning; overfitting; shortest path problem; emergency evacuation; artificial intelligence; Electrical and Electronic Engineering; Control and Systems Engineering; Computer Science Applications; Human-Computer Interaction; Software; IEEE Transactions on Systems, Man, and Cybernetics: Systems

Model identification and local linear convergence of coordinate descent

2020

For composite nonsmooth optimization problems, the Forward-Backward algorithm achieves model identification (e.g., support identification for the Lasso) after a finite number of iterations, provided the objective function is regular enough. Results concerning coordinate descent are scarcer, and model identification has only been shown for specific estimators, such as the support-vector machine. In this work, we show that cyclic coordinate descent achieves model identification in finite time for a wide class of functions. In addition, we prove explicit local linear convergence rates for coordinate descent. Extensive experiments on various estimators and on real datasets demonstrate that thes…
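Support identification by cyclic coordinate descent can be observed on a small synthetic Lasso problem: after finitely many passes, the set of nonzero coefficients stops changing (a sketch with made-up data, for the objective 0.5*||y - Xb||^2 + lam*||b||_1):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

lam = 0.1 * n
L = (X ** 2).sum(axis=0)            # per-coordinate curvature constants
beta = np.zeros(p)
supports = []
for it in range(100):               # cyclic passes over the coordinates
    for j in range(p):
        r = y - X @ beta + X[:, j] * beta[j]   # residual excluding coordinate j
        z = X[:, j] @ r
        beta[j] = np.sign(z) * max(abs(z) - lam, 0.0) / L[j]  # soft-threshold
    supports.append(tuple(np.flatnonzero(beta)))

# Model identification: the support is constant over the final passes.
final_support = supports[-1]
```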

FOS: Computer and information sciences; FOS: Mathematics; Computer Science - Machine Learning (cs.LG); Statistics - Machine Learning (stat.ML); Mathematics - Optimization and Control (math.OC); Mathematics/Optimization and Control [MATH.MATH-OC]; Mathematics/Statistics [MATH.MATH-ST]

Dual Extrapolation for Sparse Generalized Linear Models

2020

Generalized Linear Models (GLMs) form a wide class of regression and classification models in which the prediction is a function of a linear combination of the input variables. For statistical inference in high dimension, sparsity-inducing regularizations have proven useful while offering statistical guarantees. However, solving the resulting optimization problems can be challenging: even for popular iterative algorithms such as coordinate descent, one needs to loop over a large number of variables. To mitigate this, techniques known as screening rules and working sets diminish the size of the optimization problem at hand, either by progressively removing variables, o…
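The working-set idea mentioned above can be sketched for the Lasso: solve a small subproblem on the features most correlated with the residual, check the optimality (KKT) conditions on the rest, and grow the set until no feature violates them. This illustrates working sets only, not the paper's dual extrapolation; data and set sizes below are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 40, 200
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[[5, 17]] = [3.0, -2.0]
y = X @ beta_true                               # noiseless for simplicity
lam = 0.3 * np.max(np.abs(X.T @ y))

def cd_lasso(X, y, lam, iters=200):
    """Plain cyclic coordinate descent for 0.5*||y-Xb||^2 + lam*||b||_1."""
    p = X.shape[1]
    b = np.zeros(p)
    L = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]
            z = X[:, j] @ r
            b[j] = np.sign(z) * max(abs(z) - lam, 0.0) / L[j]
    return b

ws = list(np.argsort(-np.abs(X.T @ y))[:10])    # initial working set
beta = np.zeros(p)
for _ in range(10):
    beta_ws = cd_lasso(X[:, ws], y, lam)        # solve the small subproblem
    beta = np.zeros(p)
    beta[ws] = beta_ws
    corr = np.abs(X.T @ (y - X @ beta))
    # Features outside the set violating the KKT conditions must be added.
    viol = [j for j in range(p) if j not in ws and corr[j] > lam + 1e-6]
    if not viol:
        break
    ws += viol[:10]
```

Only the (small) working set is ever looped over densely, which is the whole point of the strategy.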

FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG); Statistics - Machine Learning (stat.ML); Statistics/Machine Learning [STAT.ML]; Mathematics/Optimization and Control [MATH.MATH-OC]; extrapolation; working sets; generalized linear models; convex optimization; screening rules; sparse logistic regression; Lasso

Expanding the Active Inference Landscape: More Intrinsic Motivations in the Perception-Action Loop

2018

Active inference is an ambitious theory that treats perception, inference and action selection of autonomous agents under the heading of a single principle. It suggests biologically plausible explanations for many cognitive phenomena, including consciousness. In active inference, action selection is driven by an objective function that evaluates possible future actions with respect to current, inferred beliefs about the world. Active inference at its core is independent of extrinsic rewards, resulting in a high level of robustness across, e.g., different environments or agent morphologies. In the literature, paradigms that share this independence have been summarised under the notion of in…
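As a loose toy illustration of belief-driven, reward-free action selection (not the full free-energy formalism), an agent can pick the action expected to most reduce the entropy of its posterior beliefs; all states, actions and likelihoods below are made up:

```python
import math

belief = {"A": 0.5, "B": 0.5}    # current belief over two hidden states

# Hypothetical likelihoods P(obs=1 | state, action): "probe_A" is informative,
# the other actions tell the agent nothing.
lik = {
    "probe_A": {"A": 0.9, "B": 0.1},
    "probe_B": {"A": 0.5, "B": 0.5},
    "idle":    {"A": 0.5, "B": 0.5},
}

def entropy(b):
    return -sum(p * math.log(p) for p in b.values() if p > 0)

def posterior(b, action, obs):
    # Bayes rule for a binary observation.
    num = {s: b[s] * (lik[action][s] if obs else 1 - lik[action][s]) for s in b}
    z = sum(num.values())
    return {s: v / z for s, v in num.items()}

def expected_posterior_entropy(b, action):
    # Average the posterior entropy over both possible observations.
    p1 = sum(b[s] * lik[action][s] for s in b)   # P(obs = 1)
    return (p1 * entropy(posterior(b, action, 1))
            + (1 - p1) * entropy(posterior(b, action, 0)))

# No extrinsic reward: the objective scores actions against current beliefs.
best = min(lik, key=lambda a: expected_posterior_entropy(belief, a))
```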

FOS: Computer and information sciences; Computer Science - Artificial Intelligence (cs.AI); Computer Science - Systems and Control; Systems and Control (eess.SY); FOS: Electrical engineering, electronic engineering, information engineering; active inference; free energy principle; variational inference; intrinsic motivation; empowerment; predictive information; perception-action loop; universal reinforcement learning; action selection; partially observable Markov decision process; Biomedical Engineering; Artificial Intelligence; Robotics and AI; Cognitive science; ACM classes I.2.0, I.2.6, I.5.0, I.5.1; MSC 62F15, 91B06

Randomized Block Frank–Wolfe for Convergent Large-Scale Learning

2017

Owing to their low-complexity iterations, Frank-Wolfe (FW) solvers are well suited for various large-scale learning tasks. When block-separable constraints are present, randomized block FW (RB-FW) has been shown to further reduce complexity by updating only a fraction of coordinate blocks per iteration. To circumvent the limitations of existing methods, the present work develops step sizes for RB-FW that enable a flexible selection of the number of blocks to update per iteration while ensuring convergence and feasibility of the iterates. To this end, convergence rates of RB-FW are established through computational bounds on a primal sub-optimality measure and on the duality gap. The novel b…
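The vanilla Frank-Wolfe iteration underlying RB-FW can be sketched on a toy simplex-constrained least-squares problem (data and sizes below are made up; the paper's randomized block variant updates only a random subset of coordinate blocks per iteration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
x_star = np.array([0.4, 0.3, 0.2, 0.1, 0.0])
b = A @ x_star                                   # minimize 0.5*||Ax - b||^2

x = np.ones(5) / 5                               # start inside the simplex
gaps = []
for t in range(200):
    grad = A.T @ (A @ x - b)
    s = np.zeros(5)
    s[np.argmin(grad)] = 1.0                     # linear minimization oracle (LMO)
    gaps.append(float(grad @ (x - s)))           # Frank-Wolfe duality gap
    x = x + 2.0 / (t + 2) * (s - x)              # classic O(1/t) step size

# Iterates stay feasible by construction (convex combinations of vertices),
# and the duality gap certifies sub-optimality without projections.
```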

FOS: Computer and information sciences; FOS: Mathematics; Computer Science - Learning (cs.LG); Computer Science - Numerical Analysis; Numerical Analysis (math.NA); Optimization and Control (math.OC); Mathematics - Optimization and Control; mathematical optimization; duality gap; stationary point; support vector machine; convergence; sequence; iterated function; Signal Processing; Electrical and Electronic Engineering; IEEE Transactions on Signal Processing

Online shortest paths with confidence intervals for routing in a time varying random network

2018

The increase in the world's population and rising standards of living are leading to an ever-increasing number of vehicles on the roads, and with it come ever-increasing difficulties in traffic management. Traffic management in transport networks can clearly be optimized by using the information and communication technologies referred to as Intelligent Transport Systems (ITS). The management problem is usually reformulated as finding the shortest path in a time-varying random graph. In this article, an online shortest path computation using stochastic gradient descent is proposed. This routing algorithm for ITS traffic management is based on the online Frank-Wolfe approach.…
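The inner shortest-path computation can be sketched with Dijkstra's algorithm on a toy road network (the graph and travel times below are made up; in the online setting the edge weights are random and would be re-estimated as new observations arrive):

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest path by Dijkstra; graph: node -> [(neighbor, travel_time)]."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Hypothetical road network with current mean travel times.
graph = {"A": [("B", 2.0), ("C", 5.0)],
         "B": [("C", 1.0), ("D", 4.0)],
         "C": [("D", 1.0)]}
path, cost = dijkstra(graph, "A", "D")
```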

FOS: Computer and information sciences; FOS: Mathematics; Computer Science - Data Structures and Algorithms (cs.DS); Mathematics - Optimization and Control (math.OC); mathematical optimization; stochastic process; stochastic gradient descent; approximation algorithm; shortest path problem; random graph; routing; intelligent transportation system; population; sustainability; Computer Science/Software Engineering [INFO.INFO-SE]; Computer Science/Modeling and Simulation [INFO.INFO-MO]; Computer Science/Multiagent Systems [INFO.INFO-MA]; Computer Science/Distributed, Parallel, and Cluster Computing [INFO.INFO-DC]

An LP-based hyperparameter optimization model for language modeling

2018

To find hyperparameters for a machine learning model, algorithms such as grid search or random search explore the space of possible values of the model's hyperparameters. These search algorithms select the solution that minimizes a specific cost function. In language models, perplexity is one of the most popular cost functions. In this study, we propose a fractional nonlinear programming model that finds the optimal perplexity value. The special structure of the model allows us to approximate it by a linear programming model that can be solved using the well-known simplex algorithm. To the best of our knowledge, this is the first attempt to use optimization techniques to find per…
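Perplexity, the cost function discussed above, is the exponential of the average negative log-probability a model assigns to held-out tokens. A minimal computation, with made-up token probabilities:

```python
import math

# Model probability assigned to each held-out token (illustrative values).
probs = [0.2, 0.1, 0.5, 0.25]

# Average negative log-likelihood per token, then exponentiate.
nll = -sum(math.log(p) for p in probs) / len(probs)
perplexity = math.exp(nll)

# Sanity check: a uniform model over a vocabulary of size V has perplexity V.
uniform_ppl = math.exp(-math.log(1 / 8))
```

Equivalently, perplexity is the inverse geometric mean of the token probabilities, which is why lower is better.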

FOS: Computer and information sciences; FOS: Mathematics; Computer Science - Learning (cs.LG); Statistics - Machine Learning (stat.ML); Mathematics - Optimization and Control (math.OC); mathematical optimization; nonlinear programming; linear programming; simplex algorithm; search algorithm; random search; hyperparameter optimization; perplexity; language model; Computation and Language; Theoretical Computer Science; Hardware and Architecture; Software; Information Systems

Fine-tuning the Ant Colony System algorithm through Particle Swarm Optimization

2018

Ant Colony System (ACS) is a distributed (agent-based) algorithm that has been widely studied on the symmetric Travelling Salesman Problem (TSP). The optimal parameters for this algorithm have to be found by trial and error. We use a Particle Swarm Optimization (PSO) algorithm to optimize the ACS parameters on a designed subset of TSP instances. The first goal is to run the hybrid PSO-ACS algorithm on a single instance to find the optimal parameters and optimal solutions for that instance. The second goal is to analyse those sets of optimal parameters in relation to instance characteristics. Computational results have shown good quality solutions for single instances though with high …
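A minimal PSO loop of the kind used to tune continuous parameters can be sketched as follows (the objective, swarm size and coefficients below are illustrative, not the paper's setup, where the objective would be the ACS tour quality as a function of its parameters):

```python
import random

random.seed(0)

def f(x):
    """Illustrative objective: squared distance to the optimum (1, 2)."""
    return (x[0] - 1) ** 2 + (x[1] - 2) ** 2

n_particles, dims = 20, 2
w, c1, c2 = 0.7, 1.5, 1.5            # inertia, cognitive and social weights
pos = [[random.uniform(-5, 5) for _ in range(dims)] for _ in range(n_particles)]
vel = [[0.0] * dims for _ in range(n_particles)]
pbest = [p[:] for p in pos]          # each particle's best-known position
gbest = min(pbest, key=f)[:]         # swarm's best-known position

for _ in range(100):
    for i in range(n_particles):
        for d in range(dims):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i][:]
            if f(pos[i]) < f(gbest):
                gbest = pos[i][:]
```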

FOS: Computer and information sciences; FOS: Mathematics; Computer Science - Neural and Evolutionary Computing (cs.NE); Mathematics - Optimization and Control (math.OC); Mathematics of Computing: Numerical Analysis; Computing Methodologies: Artificial Intelligence

Immunization Strategies Based on the Overlapping Nodes in Networks with Community Structure

2016

Understanding how the network topology affects the spread of an epidemic is a main concern for developing efficient immunization strategies. While there is a great deal of work on the macroscopic topological properties of networks, few studies have been devoted to the influence of the community structure. Furthermore, while communities may overlap in many real-world networks, these studies consider non-overlapping community structures. To gain insight into the influence of the overlapping nodes on the epidemic process, we conduct an empirical evaluation of basic deterministic immunization strategies based on the overlapping nodes.…
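A basic deterministic strategy of this kind can be sketched on a toy graph: immunize (remove) the nodes that belong to more than one community and compare the largest connected component before and after, since a smaller component bounds how far an epidemic can spread (graph and communities below are made up):

```python
from collections import deque

# Two triangle communities bridged by the overlapping nodes 2 and 3.
edges = [(0, 1), (1, 2), (2, 0),        # community 1
         (3, 4), (4, 5), (5, 3),        # community 2
         (2, 3)]                        # bridge between the communities
communities = [{0, 1, 2, 3}, {2, 3, 4, 5}]

def largest_component(nodes, edges):
    """Size of the largest connected component, by breadth-first search."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        if u in adj and v in adj:       # skip edges touching removed nodes
            adj[u].add(v)
            adj[v].add(u)
    seen, best = set(), 0
    for n in adj:
        if n in seen:
            continue
        comp, q = 0, deque([n])
        seen.add(n)
        while q:
            u = q.popleft()
            comp += 1
            for v in adj[u] - seen:
                seen.add(v)
                q.append(v)
        best = max(best, comp)
    return best

nodes = set(range(6))
overlap = set.intersection(*communities)            # the overlapping nodes
before = largest_component(nodes, edges)            # whole graph connected
after = largest_component(nodes - overlap, edges)   # overlap nodes immunized
```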

FOS: Computer and information sciences; Computer Science - Social and Information Networks (cs.SI); Computer Science/Systems and Control [INFO.INFO-SY]; theoretical computer science; complex networks; community structure; overlapping community; connected component; network topology; epidemic model; immunization; attack; diffusion; dynamics

Implicit differentiation for fast hyperparameter selection in non-smooth convex learning

2022

Finding the optimal hyperparameters of a model can be cast as a bilevel optimization problem, typically solved using zero-order techniques. In this work we study first-order methods when the inner optimization problem is convex but non-smooth. We show that the forward-mode differentiation of proximal gradient descent and proximal coordinate descent yields sequences of Jacobians converging toward the exact Jacobian. Using implicit differentiation, we show it is possible to leverage the non-smoothness of the inner problem to speed up the computation. Finally, we provide a bound on the error made on the hypergradient when the inner optimization problem is solved approxim…
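The forward-mode idea can be sketched for the Lasso solved by proximal gradient descent (ISTA): alongside the iterate, propagate its Jacobian with respect to the regularization parameter through every step. The data below is synthetic and this is an illustration of the technique, not the paper's implementation; note how the Jacobian inherits the sparsity of the iterate once the support is identified, which is what the non-smoothness buys.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 8
X = rng.standard_normal((n, p))
y = X @ np.array([1.5, -1.0, 0, 0, 0, 0, 0, 0]) + 0.05 * rng.standard_normal(n)

lam = 0.1 * n                                  # regularization strength
gamma = 1.0 / np.linalg.norm(X, 2) ** 2        # step size 1/L
beta = np.zeros(p)
J = np.zeros(p)                                # J = d(beta)/d(lam)

for _ in range(500):
    z = beta - gamma * X.T @ (X @ beta - y)    # gradient step
    dz = J - gamma * X.T @ (X @ J)             # forward-mode derivative of z
    # Soft-thresholding is differentiable away from its kink: on the active
    # set d/dz = 1 and d/d(lam) = -gamma*sign(z); off it, everything is zero.
    active = np.abs(z) > gamma * lam
    beta = np.where(active, np.sign(z) * (np.abs(z) - gamma * lam), 0.0)
    J = np.where(active, dz - gamma * np.sign(z), 0.0)

# Outside the identified support, both beta and its Jacobian are exactly zero,
# so the hypergradient computation only involves the (small) support.
```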

FOS: Computer and information sciences; FOS: Mathematics; Computer Science - Machine Learning (cs.LG); Statistics - Machine Learning (stat.ML); Mathematics - Optimization and Control (math.OC); Mathematics/Optimization and Control [MATH.MATH-OC]; Mathematics/Statistics [MATH.MATH-ST]; bilevel optimization; hyperparameter selection; hyperparameter optimization; generalized linear models; convex optimization; Lasso