Search results for "Computer engineering"
Showing 10 of 164 documents
Do Randomized Algorithms Improve the Efficiency of Minimal Learning Machine?
2020
Minimal Learning Machine (MLM) is a recently popularized supervised learning method composed of distance-regression and multilateration steps. Its computational complexity is dominated by the solution of an ordinary least-squares problem, to which several different solvers can be applied. In this paper, we carry out a thorough comparison of candidate solvers, in particular recently proposed randomized algorithms, on a representative set of regression datasets. In addition, we compare MLM with shallow and deep feedforward neural network models and study the effects of the number of observations and the number of features with a special dat…
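The randomized algorithms the abstract alludes to can be illustrated with a simple sketch-and-solve least-squares example; this is a hypothetical NumPy sketch with made-up dimensions, not the paper's solvers or datasets:

```python
import numpy as np

def sketched_least_squares(A, b, sketch_rows, rng=None):
    """Approximate argmin ||Ax - b|| by compressing the tall problem with a
    random Gaussian projection and solving the smaller sketched system.
    Hypothetical illustration of a randomized OLS solver."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    # Gaussian sketch: sketch_rows << m rows, scaled to preserve norms on average.
    S = rng.standard_normal((sketch_rows, m)) / np.sqrt(sketch_rows)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((5000, 20))
b = A @ rng.standard_normal(20) + 0.01 * rng.standard_normal(5000)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
x_sketch = sketched_least_squares(A, b, sketch_rows=400, rng=1)
print(np.linalg.norm(x_exact - x_sketch))  # small when the residual is small
```

The sketch trades a controlled loss of accuracy for a much smaller factorization, which is the basic cost/accuracy trade-off such comparisons quantify.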
GPU-Based Optimisation of 3D Sensor Placement Considering Redundancy, Range and Field of View
2020
This paper presents a novel and efficient solution to the 3D sensor placement problem based on GPU programming and massive parallelisation. Compared to prior art using gradient-search and mixed-integer approaches, the presented method returns optimal or good results in a fraction of the time. The method allows for redundancy, i.e., requiring selected sub-volumes to be covered by at least n sensors. The presented results are for 3D sensors whose visible volume is represented by cones, but the method can easily be extended to sensors with other range and field-of-view shapes, such as 2D cameras and lidars.
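The conic visibility model and the at-least-n redundancy constraint can be checked point-wise with simple vectorized geometry; the following is a hypothetical NumPy sketch (the paper's GPU kernels and cost function are not reproduced here):

```python
import numpy as np

def covered(points, apexes, axes, half_angle, max_range):
    """Boolean (sensors x points) matrix: sensor i sees point j if the point
    lies inside the sensor's cone (axis direction, half-angle) and within
    range. Hypothetical geometry helper, not the paper's implementation."""
    v = points[None, :, :] - apexes[:, None, :]            # (S, P, 3) offsets
    dist = np.linalg.norm(v, axis=-1)
    axes_u = axes / np.linalg.norm(axes, axis=-1, keepdims=True)
    # Cosine of the angle between each offset and the sensor axis.
    cosang = np.einsum('spk,sk->sp', v, axes_u) / np.maximum(dist, 1e-12)
    return (dist <= max_range) & (cosang >= np.cos(half_angle))

def redundancy_ok(points, apexes, axes, half_angle, max_range, n):
    """True where each point is seen by at least n sensors."""
    return covered(points, apexes, axes, half_angle, max_range).sum(axis=0) >= n

# Two sensors facing each other along z both see the midpoint.
points = np.array([[0.0, 0.0, 1.0]])
apexes = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 2.0]])
axes = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
print(redundancy_ok(points, apexes, axes, np.pi / 6, 5.0, 2))  # [ True]
```

On a GPU the same per-(sensor, point) test maps naturally to one thread per pair, which is what makes the massive parallelisation effective.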
A low complexity distributed cluster based algorithm for spatial prediction
2017
Radio environment maps (REMs) can be an essential tool for numerous applications in future 5G wireless networks. In this work, we employ a popular geostatistical method called ordinary kriging to estimate the REM of an area covered by an eNodeB equipped with multiple antennas. Wireless sensors are distributed over the area of interest and organized into adaptive sensor clusters to improve the quality of the channel estimation. In this work, we modify the distributed clustering algorithm proposed in previous work to reduce the complexity of the kriging prediction. Simulations are carried out to detail the formation techniq…
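Ordinary kriging, the predictor used above, solves a small linear system built from a covariance model and a Lagrange multiplier enforcing that the weights sum to one. A minimal 1-D sketch with a hypothetical exponential covariance (the paper's REM model and parameters are not given here):

```python
import numpy as np

def ordinary_kriging(x_obs, y_obs, x_new, range_=1.0, sill=1.0):
    """Minimal ordinary-kriging predictor with an exponential covariance.
    Hypothetical variogram parameters, for illustration only."""
    def cov(a, b):
        d = np.abs(a[:, None] - b[None, :])
        return sill * np.exp(-d / range_)

    n = len(x_obs)
    # Ordinary kriging system: covariance block plus the unbiasedness
    # constraint (weights sum to 1) via a Lagrange multiplier row/column.
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(x_obs, x_obs)
    K[n, n] = 0.0
    k = np.ones(n + 1)
    k[:n] = cov(x_obs, np.array([x_new]))[:, 0]
    w = np.linalg.solve(K, k)
    return w[:n] @ y_obs

x_obs = np.array([0.0, 1.0, 2.0])
y_obs = np.array([1.0, 3.0, 2.0])
print(ordinary_kriging(x_obs, y_obs, 0.0))  # exact interpolation: 1.0
```

The O(n^3) solve over all n sensors is exactly the complexity the clustering step attacks: kriging over small adaptive clusters replaces one large system with several small ones.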
A hybrid short read mapping accelerator
2013
Background: The rapid growth of short-read datasets poses a new challenge to the short-read mapping problem in terms of sensitivity and execution speed. Existing methods often use a restrictive error model to compute alignments quickly, whereas more flexible error models are generally too slow for large-scale applications. A number of short-read mapping software tools have been proposed; however, hardware-based designs are relatively rare. Field-programmable gate arrays (FPGAs) have been successfully used in a number of specific application areas, such as the DSP and communications domains, due to their outstanding parallel data-processing capabilities, making them a compet…
Investigating the Impact of Radiation-Induced Soft Errors on the Reliability of Approximate Computing Systems
2020
Approximate Computing (AxC) is a well-known paradigm that can reduce the computational and power overheads of a multitude of applications at the cost of decreased accuracy. Convolutional Neural Networks (CNNs) have proven particularly suited to AxC because of their inherent resilience to errors. However, implementing AxC techniques may affect the application's intrinsic resilience to errors induced by Single Events in a harsh environment. This work introduces an experimental study of the impact of neutron irradiation on approximate computing techniques applied to the data representation of a CNN.
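The single-event upsets such studies emulate in software are typically modeled as random bit flips in the (approximated, e.g. quantized) data representation. A hypothetical NumPy fault-injection sketch, not the paper's irradiation setup:

```python
import numpy as np

def inject_bit_flips(weights_q, n_flips, rng=None):
    """Flip one random bit in up to n_flips randomly chosen bytes of an int8
    weight tensor, emulating radiation-induced single-event upsets.
    Hypothetical fault model for illustration; duplicate targets overwrite."""
    rng = np.random.default_rng(rng)
    w = weights_q.copy()
    flat = w.view(np.uint8).reshape(-1)          # bit-level view of the weights
    idx = rng.integers(0, flat.size, n_flips)    # which bytes are struck
    bit = rng.integers(0, 8, n_flips).astype(np.uint8)  # which bit in each byte
    flat[idx] ^= np.uint8(1) << bit              # XOR models the upset
    return w

w = np.zeros(100, dtype=np.int8)
w_faulty = inject_bit_flips(w, 5, rng=0)
print((w_faulty != w).sum())  # number of corrupted weights (at most 5)
```

Running inference with `w_faulty` in place of `w` and comparing outputs is the usual way to measure how an approximation changes the network's error resilience.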
DeepEva: A deep neural network architecture for assessing sentence complexity in Italian and English languages
2021
Abstract: Automatic Text Complexity Evaluation (ATE) is a research field that aims to create new methodologies for automating the evaluation of text complexity, that is, the study of text-linguistic features (e.g., lexical, syntactical, morphological) to measure how comprehensible a text is. ATE can positively affect several different contexts, such as Finance, Health, and Education. Moreover, it can support research on Automatic Text Simplification (ATS), an area that studies new methods for transforming a text by changing its lexicon and structure to meet specific reader needs. In this paper, we illustrate an ATE approach named De…
Multi-layer intrusion detection system with ExtraTrees feature selection, extreme learning machine ensemble, and softmax aggregation
2019
Abstract: Recent advances in intrusion detection systems based on machine learning have indeed outperformed other techniques, but they struggle to detect multiple classes of attacks with high accuracy. We propose a method that works in three stages. First, the ExtraTrees classifier is used to select relevant features for each type of attack individually, one feature set per extreme learning machine (ELM). Then, an ensemble of ELMs is used to detect each type of attack separately. Finally, the results of all ELMs are combined using a softmax layer to refine the results and increase the accuracy further. The intuition behind our system is that multi-class classification is quite difficult compared to binary classification. So, we…
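The one-binary-ELM-per-class ensemble with softmax fusion can be sketched in a few lines of NumPy; this is a toy illustration on synthetic data (hypothetical layer sizes and labels, not the paper's configuration, and the ExtraTrees feature-selection stage is omitted):

```python
import numpy as np

def train_elm(X, y, hidden=64, rng=None):
    """Minimal extreme learning machine: fixed random hidden layer,
    output weights solved in closed form by least squares."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)                        # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form readout
    return W, b, beta

def elm_score(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# One binary ELM per "attack" class; a softmax layer fuses the scores.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 6))
labels = (X[:, 0] > 0).astype(int) + 2 * (X[:, 1] > 0).astype(int)  # 4 classes
models = [train_elm(X, (labels == c).astype(float), hidden=64, rng=c)
          for c in range(4)]
scores = np.column_stack([elm_score(X, m) for m in models])
proba = softmax(scores)
pred = proba.argmax(axis=1)
print((pred == labels).mean())  # training accuracy of the toy ensemble
```

Each binary detector only has to separate one class from the rest, which is the intuition stated above; the softmax turns the independent scores into a consistent multi-class decision.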
Neural Classification of HEP Experimental Data
2009
High Energy Physics (HEP) experiments require discriminating a few interesting events among a huge number of background events generated during an experiment. Hierarchical triggering hardware architectures are needed to perform this task in real time. In this paper, three neural network models are studied as candidates for such systems: a modified Multi-Layer Perceptron (MLP) architecture and an E alpha Net architecture are compared against a traditional MLP. Test error below 25% is achieved by all architectures in two different simulation strategies. E alpha Net's test error is 1 to 2% better than that of the other two architectures while using the smaller network topol…
State classification for autonomous gas sample taking using deep convolutional neural networks
2017
Despite recent rapid advances and successful large-scale applications of deep Convolutional Neural Networks (CNNs) on image, video, sound, text, and time-series data, their adoption within the oil and gas industry in particular has been sparse. In this paper, we first present an overview of opportunities for deep CNN methods within the oil and gas industry, followed by details of a novel development in which deep CNNs are used for state classification of an autonomous gas sample-taking procedure utilizing an industrial robot. The experimental results, using a deep CNN containing six layers, show accuracy levels exceeding 99%. In addition, the advantages of using parallel computing with GP…
High temperature solid-catalized transesterification for biodiesel production
2010
Biodiesel has recently become more attractive because of its environmental benefits and the fact that it is made from renewable resources. Biodiesel is a mixture of monoalkyl esters of long-chain fatty acids derived from renewable feedstock such as vegetable oils and animal fats, which consist mainly of fatty acid glycerides. It is produced by transesterification processes in which oil or fat is reacted with a monohydric alcohol in the presence of a catalyst. The transesterification process is affected by the reaction conditions: alcohol-to-oil molar ratio, type of alcohol, type and amount of catalyst, temperature, and purity of reactants. Heterogeneous acid catalysts are quite efficient in promoting the…