Search results for "Machine learning"
Showing 10 of 1,464 documents
An LP-based hyperparameter optimization model for language modeling
2018
In order to find hyperparameters for a machine learning model, algorithms such as grid search or random search are used over the space of possible values of the model's hyperparameters. These search algorithms select the solution that minimizes a specific cost function. In language models, perplexity is one of the most popular cost functions. In this study, we propose a fractional nonlinear programming model that finds the optimal perplexity value. The special structure of the model allows us to approximate it by a linear programming model that can be solved using the well-known simplex algorithm. To the best of our knowledge, this is the first attempt to use optimization techniques to find per…
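The grid-search baseline this abstract contrasts itself with can be sketched in a few lines: enumerate hyperparameter combinations and keep the one minimizing a cost such as perplexity. The cost function and hyperparameter names below are hypothetical stand-ins, not the paper's model.

```python
import itertools

def toy_perplexity(lr, smoothing):
    # Synthetic bowl-shaped cost with its minimum at lr=0.1, smoothing=0.5.
    # A real run would train a language model and measure held-out perplexity.
    return (lr - 0.1) ** 2 + (smoothing - 0.5) ** 2 + 1.0

# Candidate values for each hyperparameter define the search grid.
grid = {
    "lr": [0.01, 0.1, 1.0],
    "smoothing": [0.1, 0.5, 0.9],
}

# Grid search: score every combination, keep the cost-minimizing one.
best = min(
    itertools.product(grid["lr"], grid["smoothing"]),
    key=lambda cfg: toy_perplexity(*cfg),
)
print(best)  # (0.1, 0.5)
```

The cost of this exhaustive enumeration grows exponentially with the number of hyperparameters, which is what motivates cheaper alternatives such as the LP relaxation the paper proposes.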
The Recycling Gibbs sampler for efficient learning
2018
Monte Carlo methods are essential tools for Bayesian inference. Gibbs sampling is a well-known Markov chain Monte Carlo (MCMC) algorithm, extensively used in signal processing, machine learning, and statistics, employed to draw samples from complicated high-dimensional posterior distributions. The key point for the successful application of the Gibbs sampler is the ability to draw samples efficiently from the full-conditional probability density functions. Since in the general case this is not possible, in order to speed up the convergence of the chain it is required to generate auxiliary samples whose information is eventually disregarded. In this work, we show that these auxiliary sample…
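For context, a minimal standard Gibbs sampler for a bivariate Gaussian with correlation `rho` illustrates the easy case the abstract mentions, where both full conditionals are Gaussian and can be sampled directly. This is the generic textbook scheme, not the Recycling Gibbs variant the paper proposes.

```python
import random

def gibbs_bivariate_normal(rho, n_iters, seed=0):
    # Target: zero-mean bivariate Gaussian with unit variances and
    # correlation rho. Both full conditionals are Gaussian:
    #   x | y ~ N(rho * y, 1 - rho^2)   (and symmetrically for y | x)
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    cond_sd = (1.0 - rho ** 2) ** 0.5
    samples = []
    for _ in range(n_iters):
        x = rng.gauss(rho * y, cond_sd)  # draw from p(x | y)
        y = rng.gauss(rho * x, cond_sd)  # draw from p(y | x)
        samples.append((x, y))
    return samples

chain = gibbs_bivariate_normal(rho=0.8, n_iters=5000)
xs = [s[0] for s in chain[500:]]  # discard burn-in
print(abs(sum(xs) / len(xs)) < 0.2)  # sample mean close to the true mean 0
```

When the full conditionals cannot be sampled directly, inner mechanisms (e.g. rejection or Metropolis-within-Gibbs steps) generate the auxiliary samples that the abstract refers to.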
Gaussianizing the Earth: Multidimensional Information Measures for Earth Data Analysis
2021
Information theory is an excellent framework for analyzing Earth system data because it allows us to characterize uncertainty and redundancy, and is universally interpretable. However, accurately estimating information content is challenging because spatio-temporal data are high-dimensional, heterogeneous, and non-linear. In this paper, we apply multivariate Gaussianization for probability density estimation, which is robust to dimensionality, comes with statistical guarantees, and is easy to apply. In addition, this methodology allows us to estimate information-theoretic measures to characterize multivariate densities: information, entropy, total correlation, and mutual in…
Understanding Climate Impacts on Vegetation with Gaussian Processes in Granger Causality
2020
Global warming is leading to unprecedented changes in our planet, with great societal, economic and environmental implications, especially with the growing demand for biofuels and food. Assessing the impact of climate on vegetation is a pressing need. We approached the attribution problem with a novel nonlinear Granger causal (GC) methodology and used a large data archive of remote sensing satellite products, environmental and climatic variables spatio-temporally gridded over more than 30 years. We generalize kernel Granger causality by considering the variables' cross-relations explicitly in Hilbert spaces, and use the covariance in Gaussian processes. The method generalizes the linear an…
Principal Polynomial Analysis
2014
© 2014 World Scientific Publishing Company. This paper presents a new framework for manifold learning based on a sequence of principal polynomials that capture the possibly nonlinear nature of the data. The proposed Principal Polynomial Analysis (PPA) generalizes PCA by modeling the directions of maximal variance by means of curves instead of straight lines. Contrary to previous approaches, PPA reduces to performing simple univariate regressions, which makes it computationally feasible and robust. Moreover, PPA shows a number of interesting analytical properties. First, PPA is a volume-preserving map, which in turn guarantees the existence of the inverse. Second, such an inverse can be obtained…
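The core idea the abstract describes can be sketched as follows: instead of keeping only a straight PCA line, regress the orthogonal residual coordinate on the leading-direction coordinate with a simple univariate polynomial. The data, polynomial degree, and variable names are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

# Toy 2-D data lying near a parabola: a straight PCA line misses the curvature.
rng = np.random.default_rng(0)
t = rng.uniform(-1, 1, 200)
data = np.column_stack([t, t ** 2]) + rng.normal(0, 0.01, (200, 2))

# Leading principal direction from the covariance eigendecomposition.
centered = data - data.mean(axis=0)
_, vecs = np.linalg.eigh(np.cov(centered.T))
u = vecs[:, -1]                              # direction of maximal variance
proj = centered @ u                          # 1-D coordinate along u
resid = centered @ np.array([-u[1], u[0]])   # orthogonal residual coordinate

# One univariate polynomial regression captures the curvature PCA leaves behind.
coeffs = np.polyfit(proj, resid, deg=2)
fit_err = np.abs(resid - np.polyval(coeffs, proj)).mean()
print(fit_err < 0.05)  # the fitted curve explains the residual well
```

Because each step is a univariate regression against the projection coordinate, the procedure stays cheap and numerically stable, which is the computational advantage the abstract highlights.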
Quantum pattern recognition in photonic circuits
2021
This paper proposes a machine learning method to characterize photonic states via a simple optical circuit and data processing of photon number distributions, such as photonic patterns. The input states consist of two coherent states used as references and a two-mode unknown state to be studied. We successfully trained supervised learning algorithms that can predict the degree of entanglement in the two-mode state as well as perform the full tomography of one photonic mode, obtaining satisfactory values in the considered regression metrics.
Regression of high-dimensional angular momentum states of light
2023
The Orbital Angular Momentum (OAM) of light is an infinite-dimensional degree of freedom with several applications in both classical and quantum optics. However, to fully take advantage of the potential of OAM states, reliable detection platforms to characterize generated states in experimental conditions are needed. Here, we present an approach to reconstruct input OAM states from measurements of the spatial intensity distributions they produce. To obviate issues arising from the intrinsic symmetry of Laguerre-Gauss modes, we employ a pair of intensity profiles per state, projecting it on only two distinct bases, showing how this allows us to uniquely recover input states from the collect…
Supervised Quantum Learning without Measurements
2017
We propose a quantum machine learning algorithm for efficiently solving a class of problems encoded in quantum controlled unitary operations. The central physical mechanism of the protocol is the iteration of a quantum time-delayed equation that introduces feedback in the dynamics and eliminates the necessity of intermediate measurements. The performance of the quantum algorithm is analyzed by comparing the results obtained in numerical simulations with the outcome of classical machine learning methods for the same problem. The use of time-delayed equations enhances the toolbox of the field of quantum machine learning, which may enable unprecedented applications in quantum technologies. The…
Progressive Stochastic Binarization of Deep Networks
2019
A plethora of recent research has focused on improving the memory footprint and inference speed of deep networks by reducing the complexity of (i) numerical representations (for example, by deterministic or stochastic quantization) and (ii) arithmetic operations (for example, by binarization of weights). We propose a stochastic binarization scheme for deep networks that allows for efficient inference on hardware by restricting itself to additions of small integers and fixed shifts. Unlike previous approaches, the underlying randomized approximation is progressive, thus permitting an adaptive control of the accuracy of each operation at run-time. In a low-precision setting, we match the accu…
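A minimal sketch of generic stochastic weight binarization, in the spirit this abstract builds on: each weight `w` in [-1, 1] is rounded to ±1 with a probability chosen so the expectation equals `w`, letting inference use only small-integer additions. This is the standard scheme, not the paper's progressive variant.

```python
import random

def stochastic_binarize(weights, rng):
    # Round each weight to +1 or -1 so that E[binarized] == weight.
    out = []
    for w in weights:
        p_plus = (w + 1.0) / 2.0  # P(+1); linear in w, so the estimate is unbiased
        out.append(1 if rng.random() < p_plus else -1)
    return out

rng = random.Random(42)
w = 0.6
draws = [stochastic_binarize([w], rng)[0] for _ in range(10000)]
print(abs(sum(draws) / len(draws) - w) < 0.05)  # empirical mean approximates w
```

Averaging several independent binarized passes reduces the variance of this unbiased estimate, which is the knob a progressive scheme can turn at run-time to trade accuracy for compute.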
At Your Service: Coffee Beans Recommendation From a Robot Assistant
2020
With advances in the field of machine learning, specifically algorithms for recommendation systems, robot assistants are envisioned to become more present in the hospitality industry. Additionally, the COVID-19 pandemic has also highlighted the need to have more service robots in our everyday lives, to minimise the risk of human-to-human transmission. One such example would be coffee shops, which have become intrinsic to our everyday lives. However, serving an excellent cup of coffee is not a trivial feat as a coffee blend typically comprises rich aromas, indulgent and unique flavours and a lingering aftertaste. Our work addresses this by proposing a computational model which recommends optima…