Search results for "550"
Showing 10 of 1192 documents
Flavor Ratio of Astrophysical Neutrinos above 35 TeV in IceCube
2015
A diffuse flux of astrophysical neutrinos above $100\,\mathrm{TeV}$ has been observed at the IceCube Neutrino Observatory. Here we extend this analysis to probe the astrophysical flux down to $35\,\mathrm{TeV}$ and analyze its flavor composition by classifying events as showers or tracks. Taking advantage of lower atmospheric backgrounds for shower-like events, we obtain a shower-biased sample containing 129 showers and 8 tracks collected in three years from 2010 to 2013. We demonstrate consistency with the $(f_e:f_{\mu}:f_\tau)_\oplus\approx(1:1:1)_\oplus$ flavor ratio at Earth commonly expected from the averaged oscillations of neutrinos produced by pion decay in distant astrophysical sou…
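The averaged-oscillation expectation behind the $(1:1:1)_\oplus$ ratio can be sketched numerically: a pion-decay source ratio $(1:2:0)$ is propagated with the averaged probability $P(\nu_\alpha\to\nu_\beta)=\sum_i |U_{\alpha i}|^2 |U_{\beta i}|^2$. The mixing angles below are generic best-fit-like values assumed for illustration, not values from the paper, and the CP phase is set to zero for simplicity.

```python
import numpy as np

# Assumed PMNS mixing angles (illustrative best-fit-like values, delta_CP = 0).
th12, th23, th13 = np.radians([33.4, 49.0, 8.6])
s12, c12 = np.sin(th12), np.cos(th12)
s23, c23 = np.sin(th23), np.cos(th23)
s13, c13 = np.sin(th13), np.cos(th13)

# Real PMNS matrix (CP phase zero).
U = np.array([
    [c12 * c13,                    s12 * c13,                    s13],
    [-s12 * c23 - c12 * s23 * s13, c12 * c23 - s12 * s23 * s13,  s23 * c13],
    [s12 * s23 - c12 * c23 * s13, -c12 * s23 - s12 * c23 * s13,  c23 * c13],
])

# Averaged oscillation probabilities: P[a, b] = sum_i |U_ai|^2 |U_bi|^2.
P = (U**2) @ (U**2).T

source = np.array([1.0, 2.0, 0.0]) / 3.0  # pion-decay flavor ratio (1:2:0)
earth = source @ P
print(earth / earth.sum())  # each component close to 1/3
```

Averaging washes out the source composition almost entirely, which is why the shower/track split is sensitive mainly to large departures from $(1:1:1)_\oplus$.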
Constraints on ultra-high-energy cosmic ray sources from a search for neutrinos above 10 PeV with IceCube
2016
We report constraints on the sources of ultra-high-energy cosmic rays (UHECRs) above $10^{9}$ GeV, based on an analysis of seven years of IceCube data. This analysis efficiently selects very-high-energy neutrino-induced events with deposited energies from $\sim 10^6$ GeV to above $10^{11}$ GeV. Two neutrino-induced events with estimated deposited energies of $(2.6 \pm 0.3) \times 10^6$ GeV and $(7.7 \pm 2.0) \times 10^5$ GeV, the highest neutrino energies observed so far, were detected. The hypothesis that these events arise from the atmospheric background alone is rejected at 3.6$\sigma$. The hypothesis that the observed events are of cosmogenic origin is also rejected at $>$99% CL because of…
Thompson Sampling Guided Stochastic Searching on the Line for Deceptive Environments with Applications to Root-Finding Problems
2017
The multi-armed bandit problem forms the foundation for solving a wide range of on-line stochastic optimization problems through a simple, yet effective mechanism. One casts the problem as that of a gambler who repeatedly pulls one of N slot machine arms, eliciting random rewards. Learning of reward probabilities is then combined with reward maximization by carefully balancing reward exploration against reward exploitation. In this paper, we address a particularly intriguing variant of the multi-armed bandit problem, referred to as the {\it Stochastic Point Location (SPL) Problem}. The gambler is here only told whether the optimal arm (point) lies to the "left" or to the "right" of the…
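As background, the basic bandit mechanism the abstract describes can be sketched with Thompson sampling on Bernoulli arms. The arm probabilities and horizon below are illustrative, and this plain bandit sketch does not implement the paper's SPL variant or its deceptive-environment handling.

```python
import random

# Illustrative arm reward probabilities (not from the paper).
true_probs = [0.2, 0.5, 0.8]
wins = [1] * len(true_probs)    # Beta(1, 1) uniform priors
losses = [1] * len(true_probs)

random.seed(0)
for _ in range(2000):
    # Sample a plausible reward rate for each arm from its Beta posterior...
    samples = [random.betavariate(wins[i], losses[i]) for i in range(len(true_probs))]
    arm = samples.index(max(samples))          # ...and pull the most promising arm.
    reward = random.random() < true_probs[arm]
    wins[arm] += reward
    losses[arm] += 1 - reward

pulls = [wins[i] + losses[i] - 2 for i in range(len(true_probs))]
print(pulls)  # the best arm (index 2) accumulates most of the pulls
```

Posterior sampling balances exploration and exploitation automatically: an under-explored arm keeps a wide posterior and so keeps getting sampled optimistically until the evidence settles.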
Clustering in Recurrent Neural Networks for Micro-Segmentation using Spending Personality
2021
Author's accepted manuscript. © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. Customer segmentation has long been a productive field in banking. However, with new approaches to traditional problems come new opportunities. Fine-grained customer segments are notoriously elusive and one method of obtaining them is through feature extraction. It is possible to assi…
Extending the Tsetlin Machine With Integer-Weighted Clauses for Increased Interpretability
2020
Despite significant effort, building models that are both interpretable and accurate is an unresolved challenge for many pattern recognition problems. In general, rule-based and linear models lack accuracy, while deep learning interpretability is based on rough approximations of the underlying inference. Using a linear combination of conjunctive clauses in propositional logic, Tsetlin Machines (TMs) have shown competitive performance on diverse benchmarks. However, to do so, many clauses are needed, which impacts interpretability. Here, we address the accuracy-interpretability challenge in machine learning by equipping the TM clauses with integer weights. The resulting Integer Weighted TM (…
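A toy illustration of the integer-weighted clause idea: each conjunctive clause over propositional literals carries a signed integer weight, and classification sums the weights of the clauses that fire, so one weighted clause can stand in for several identical unit-weight clauses. The clauses and weights here are hand-crafted for illustration, not learned TM clauses.

```python
def clause_fires(literals, x):
    """A conjunctive clause given as (index, negated) pairs; True iff all literals hold."""
    return all((not x[i]) if negated else x[i] for i, negated in literals)

# Hand-crafted weighted clauses (illustrative, not learned).
weighted_clauses = [
    ([(0, False), (1, False)], +3),   # x0 AND x1 votes strongly for class 1
    ([(0, True)], -2),                # NOT x0 votes against class 1
]

def classify(x):
    vote = sum(w for lits, w in weighted_clauses if clause_fires(lits, x))
    return 1 if vote >= 0 else 0

print(classify([True, True]))   # only the +3 clause fires -> class 1
print(classify([False, True]))  # only the -2 clause fires -> class 0
```

The interpretability gain is that the decision remains a short, readable sum of weighted logical rules rather than an opaque function.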
Can Interpretable Reinforcement Learning Manage Prosperity Your Way?
2022
Personalisation of products and services is fast becoming the driver of success in banking and commerce. Machine learning holds the promise of gaining a deeper understanding of and tailoring to customers’ needs and preferences. Whereas traditional solutions to financial decision problems frequently rely on model assumptions, reinforcement learning is able to exploit large amounts of data to improve customer modelling and decision-making in complex financial environments with fewer assumptions. Model explainability and interpretability present challenges from a regulatory perspective which demands transparency for acceptance; they also offer the opportunity for improved insight into and unde…
Reinforcement Learning with Intrinsic Affinity for Personalized Prosperity Management
2022
The purpose of applying reinforcement learning (RL) to portfolio management is commonly the maximization of profit. The extrinsic reward function used to learn an optimal strategy typically does not take into account any other preferences or constraints. We have developed a regularization method that ensures that strategies have global intrinsic affinities, i.e., different personalities may have preferences for certain asset classes which may change over time. We capitalize on these intrinsic policy affinities to make our RL model inherently interpretable. We demonstrate how RL agents can be trained to orchestrate such individual policies for particular personality profiles and stil…
Reinforcement Learning Your Way: Agent Characterization through Policy Regularization
2022
The increased complexity of state-of-the-art reinforcement learning (RL) algorithms has resulted in an opacity that inhibits explainability and understanding. This has led to the development of several post hoc explainability methods that aim to extract information from learned policies, thus aiding explainability. These methods rely on empirical observations of the policy, and thus aim to generalize a characterization of agents’ behaviour. In this study, we have instead developed a method to imbue agents’ policies with a characteristic behaviour through regularization of their objective functions. Our method guides the agents’ behaviour during learning, which results in a…
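One common way to imbue a policy with a characteristic behaviour through objective regularization is a divergence penalty pulling the learned policy toward a "characteristic" prior policy. The sketch below uses a KL penalty under that assumption; the prior, policies, and penalty weight are illustrative, not the paper's formulation.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for two discrete action distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def regularized_objective(expected_return, policy, prior, lam=0.1):
    # Maximize return while staying close to the characteristic prior policy.
    return expected_return - lam * kl_divergence(policy, prior)

cautious_prior = [0.7, 0.2, 0.1]   # hypothetical prior favouring the "safe" action
policy_a = [0.6, 0.3, 0.1]         # behaves like the prior
policy_b = [0.1, 0.2, 0.7]         # deviates strongly from the prior

same_return = 1.0
print(regularized_objective(same_return, policy_a, cautious_prior))
print(regularized_objective(same_return, policy_b, cautious_prior))
```

With equal expected return, the regularizer ranks the prior-conforming policy higher, which is the mechanism by which training steers agents toward the desired characteristic behaviour.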
Deep Q-Learning With Q-Matrix Transfer Learning for Novel Fire Evacuation Environment
2021
We focus on the important problem of emergency evacuation, which could clearly benefit from reinforcement learning but has remained largely unaddressed. Emergency evacuation is a complex task that is difficult to solve with reinforcement learning, since an emergency situation is highly dynamic, with many changing variables and complex constraints that make it difficult to train on. In this paper, we propose the first fire evacuation environment for training reinforcement learning agents in evacuation planning. The environment is modelled as a graph capturing the building structure. It consists of realistic features like fire spread, uncertainty and bottlenecks. We have implemented the envir…
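A minimal tabular Q-learning sketch on a small graph, loosely echoing the graph-based environment described above; the graph, rewards, and hyperparameters are illustrative assumptions, not the paper's setup (which uses deep Q-learning with Q-matrix transfer).

```python
import random

# Toy building graph: edges are allowed moves, node 3 is the exit.
graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
reward = {3: 10.0}                 # reaching the exit; every other step costs -1
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

Q = {(s, a): 0.0 for s in graph for a in graph[s]}

random.seed(1)
for _ in range(500):
    s = 0
    while graph[s]:                # episode ends at the exit (no outgoing edges)
        actions = graph[s]
        if random.random() < eps:
            a = random.choice(actions)                     # explore
        else:
            a = max(actions, key=lambda n: Q[(s, n)])      # exploit
        r = reward.get(a, -1.0)
        nxt = max((Q[(a, n)] for n in graph[a]), default=0.0)
        Q[(s, a)] += alpha * (r + gamma * nxt - Q[(s, a)])  # Q-learning update
        s = a

best_first_move = max(graph[0], key=lambda n: Q[(0, n)])
print(best_first_move)  # either neighbour; both reach the exit in one step
```

The learned Q-table plays the role the paper's Q-matrix plays at much larger scale; transferring it would mean initializing a related environment's table from these values rather than from zero.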
Towards Responsible AI for Financial Transactions
2020
Author's accepted manuscript. © 2020 IEEE. The application of AI in finance is increasingly dependent on the principles of responsible AI. These principles (explainability, fairness, privacy, accountability, transparency and soundness) form the basis for trust in future AI systems. In this empirical study, we address the first p…