Search results for "Computer engineering"

Showing 10 of 164 documents

Do Randomized Algorithms Improve the Efficiency of Minimal Learning Machine?

2020

Minimal Learning Machine (MLM) is a recently popularized supervised learning method composed of distance-regression and multilateration steps. The computational complexity of MLM is dominated by the solution of an ordinary least-squares problem, and several different solvers can be applied to the resulting linear problem. In this paper, a thorough comparison of applicable and recently proposed algorithms, especially randomized ones, is carried out for this problem on a representative set of regression datasets. In addition, we compare MLM with shallow and deep feedforward neural network models and study the effects of the number of observations and the number of features with a special dat…
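The least-squares core described above can be illustrated with a minimal sketch, assuming a Gaussian sketch-and-solve scheme as one representative randomized algorithm (the paper's actual solver set is not reproduced here); all function names and problem sizes are hypothetical:

```python
import numpy as np

def ols_exact(X, Y):
    """Exact ordinary least squares via SVD-based lstsq."""
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return B

def ols_sketched(X, Y, sketch_size, seed=0):
    """Sketch-and-solve randomized OLS: compress the rows with a
    Gaussian random projection, then solve the smaller problem."""
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((sketch_size, X.shape[0])) / np.sqrt(sketch_size)
    B, *_ = np.linalg.lstsq(S @ X, S @ Y, rcond=None)
    return B

# Toy regression problem shaped like MLM's distance-regression step.
rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 20))
B_true = rng.standard_normal((20, 5))
Y = X @ B_true + 0.01 * rng.standard_normal((2000, 5))

print(np.linalg.norm(ols_exact(X, Y) - B_true))          # tiny
print(np.linalg.norm(ols_sketched(X, Y, 400) - B_true))  # a bit larger, cheaper
```

The sketched solve trades a controlled loss of accuracy for working on a 400×20 system instead of a 2000×20 one, which is the kind of accuracy/cost trade-off such a comparison evaluates.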

Keywords: minimal learning machine; distance-based regression; ordinary least squares; randomized algorithms; random projection; singular value decomposition; multilateration; supervised learning; machine learning; approximate algorithms; computational complexity; scalability; feedforward neural networks; artificial intelligence
Source: Machine Learning and Knowledge Extraction
researchProduct

GPU-Based Optimisation of 3D Sensor Placement Considering Redundancy, Range and Field of View

2020

This paper presents a novel and efficient solution to the 3D sensor placement problem based on GPU programming and massive parallelisation. Compared to prior art using gradient-search and mixed-integer based approaches, the method presented in this paper returns optimal or good results in a fraction of the time. The presented method allows for redundancy, i.e., requiring selected sub-volumes to be covered by at least n sensors. The presented results are for 3D sensors whose visible volume is represented by cones, but the method can easily be extended to sensors with other range and field-of-view shapes, such as 2D cameras and lidars.

Keywords: 3D sensor; field of view; range; volume; redundancy; CUDA; general-purpose computing on graphics processing units; computer engineering
Source: 2020 15th IEEE Conference on Industrial Electronics and Applications (ICIEA)

A low complexity distributed cluster based algorithm for spatial prediction

2017

Radio environment maps (REMs) can be an essential tool for numerous applications in future 5G wireless networks. In this work, we employ a popular geostatistical method called ordinary kriging to estimate the REM of an area covered by an eNodeB equipped with multiple antennas. Wireless sensors are distributed over the area of interest and organized into adaptive sensor clusters to improve the quality of the channel estimation. In this work, we modify the distributed clustering algorithm proposed in a previous work to reduce the complexity of the kriging prediction. Simulations are carried out to detail the formation…
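The ordinary-kriging prediction at the heart of this method can be sketched as follows, assuming a simple exponential variogram model and toy signal-strength samples; the model parameters and measurement values are illustrative only, not the paper's configuration:

```python
import numpy as np

def variogram(h, sill=1.0, corr_range=50.0):
    """Hypothetical exponential variogram model."""
    return sill * (1.0 - np.exp(-h / corr_range))

def ordinary_kriging(coords, values, target):
    """Ordinary kriging prediction at one target point. The extra row
    and column hold the Lagrange multiplier enforcing that the
    weights sum to one (unbiasedness constraint)."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.append(variogram(np.linalg.norm(coords - target, axis=1)), 1.0)
    w = np.linalg.solve(A, b)[:n]
    return w @ values

# Four toy sensors on a square, with made-up signal-strength readings.
coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rsrp = np.array([-70.0, -75.0, -72.0, -78.0])
print(ordinary_kriging(coords, rsrp, np.array([5.0, 5.0])))  # -> mean, by symmetry
```

Each prediction requires solving an (n+1)-sized linear system over the sensors in scope, which is why clustering the sensors, as the paper does, directly reduces the prediction complexity.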

Keywords: radio environment maps; distributed channel prediction; kriging; cluster analysis; interpolation; wireless sensor networks; wireless networks; theoretical computer science; computer engineering

A hybrid short read mapping accelerator

2013

Background The rapid growth of short read datasets poses a new challenge to the short read mapping problem in terms of sensitivity and execution speed. Existing methods often use a restrictive error model for computing the alignments to improve speed, whereas more flexible error models are generally too slow for large-scale applications. A number of short read mapping software tools have been proposed. However, designs based on hardware are relatively rare. Field programmable gate arrays (FPGAs) have been successfully used in a number of specific application areas, such as the DSP and communications domains due to their outstanding parallel data processing capabilities, making them a compet…
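As a rough illustration of the restrictive error model mentioned above, a naive substitutions-only short-read mapper can be sketched as follows (real mappers, and the paper's FPGA design, use far more efficient indexing; `map_read` and the sequences are hypothetical):

```python
def map_read(read, reference, max_mismatches=2):
    """Naive short-read mapping under a restrictive error model
    (substitutions only, no indels): slide the read along the
    reference and report every position within the mismatch budget."""
    hits = []
    for i in range(len(reference) - len(read) + 1):
        mm = sum(a != b for a, b in zip(read, reference[i:i + len(read)]))
        if mm <= max_mismatches:
            hits.append((i, mm))  # (alignment position, mismatch count)
    return hits

ref = "ACGTACGTTAGCACGTACGA"
print(map_read("ACGTACGA", ref, max_mismatches=1))  # -> [(0, 1), (12, 0)]
```

Allowing only substitutions keeps the inner loop a fixed-length comparison, which maps naturally onto parallel hardware; supporting indels requires dynamic programming, which is the slower, more flexible model the abstract contrasts it with.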

Keywords: genome; chromosome mapping; DNA sequence analysis; sequence alignment; reference genome; field-programmable gate arrays; digital signal processing; sensitivity; algorithms; software
Source: BMC Bioinformatics (Methodology Article)

Investigating the Impact of Radiation-Induced Soft Errors on the Reliability of Approximate Computing Systems

2020

Approximate Computing (AxC) is a well-known paradigm able to reduce the computational and power overheads of a multitude of applications, at the cost of decreased accuracy. Convolutional Neural Networks (CNNs) have proven to be particularly suited for AxC because of their inherent resilience to errors. However, the implementation of AxC techniques may affect the intrinsic resilience of the application to errors induced by Single Events in a harsh environment. This work introduces an experimental study of the impact of neutron irradiation on approximate computing techniques applied to the data representation of a CNN.
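One common AxC technique on the data representation is reduced-precision fixed-point storage. A minimal sketch of such a representation, together with an emulated single-event bit flip, is shown below; the word width and format are assumptions for illustration, not the paper's actual configuration:

```python
FRAC_BITS = 8                  # fractional bits of the approximate format (assumed)
WIDTH = 16                     # total word width (assumed)
SCALE = 2 ** FRAC_BITS

def quantize(x):
    """Store a value in a reduced-precision fixed-point word."""
    q = round(x * SCALE)
    return max(-(2 ** (WIDTH - 1)), min(q, 2 ** (WIDTH - 1) - 1))

def dequantize(q):
    return q / SCALE

def single_event_upset(q, bit):
    """Emulate a radiation-induced soft error: flip one stored bit."""
    return q ^ (1 << bit)

w = 0.4375                     # toy CNN weight
q = quantize(w)                # stored as the integer 112
for bit in (0, 4, 12):
    faulty = dequantize(single_event_upset(q, bit))
    print(bit, faulty - w)     # low bits: tiny error; high bits: large error
```

Flips in high-order bits produce large numerical errors while low-order flips stay within the approximation noise, which is exactly the interaction between AxC data formats and soft errors that such a study measures.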

Keywords: approximate computing; reliability; resilience; radiation effects; radiation-induced soft errors; neutrons; convolutional neural networks; data representation; microprocessors; embedded systems; microelectronics; radiation physics

DeepEva: A deep neural network architecture for assessing sentence complexity in Italian and English languages

2021

Abstract Automatic Text Complexity Evaluation (ATE) is a research field that aims at creating new methodologies to automate the process of text complexity evaluation, that is, the study of text-linguistic features (e.g., lexical, syntactical, morphological) to measure the degree of comprehensibility of a text. ATE can positively affect several different contexts, such as Finance, Health, and Education. Moreover, it can support research on Automatic Text Simplification (ATS), a research area that deals with the study of new methods for transforming a text by changing its lexicon and structure to meet specific reader needs. In this paper, we illustrate an ATE approach named De…

Keywords: automatic text complexity evaluation; text complexity assessment; text simplification; deep learning; natural language processing; lexicon; support vector machines; gradient boosting; artificial intelligence

Multi-layer intrusion detection system with ExtraTrees feature selection, extreme learning machine ensemble, and softmax aggregation

2019

Abstract Recent advances in intrusion detection systems based on machine learning have indeed outperformed other techniques, but struggle with detecting multiple classes of attacks with high accuracy. We propose a method that works in three stages. First, the ExtraTrees classifier is used to select relevant features for each type of attack, individually for each Extreme Learning Machine (ELM). Then, an ensemble of ELMs is used to detect each type of attack separately. Finally, the results of all ELMs are combined using a softmax layer to refine the results and increase the accuracy further. The intuition behind our system is that multi-class classification is quite difficult compared to binary classification. So, we…
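The three-stage pipeline described above can be sketched roughly as follows, assuming scikit-learn's ExtraTreesClassifier for the importance-based feature selection and a minimal NumPy extreme learning machine; the dataset, layer sizes, and feature counts are illustrative, not the paper's configuration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier

# Toy stand-in for a multi-class attack dataset.
X, y = make_classification(n_samples=600, n_features=30, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
n_classes = 3

def elm_fit(X, t, n_hidden=64, seed=0):
    """Minimal extreme learning machine: random fixed hidden layer,
    output weights solved in closed form by least squares."""
    r = np.random.default_rng(seed)
    W = r.standard_normal((X.shape[1], n_hidden))
    b = r.standard_normal(n_hidden)
    beta, *_ = np.linalg.lstsq(np.tanh(X @ W + b), t, rcond=None)
    return W, b, beta

def elm_score(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

scores = np.zeros((len(y), n_classes))
for c in range(n_classes):
    t = (y == c).astype(float)
    # Stage 1: per-class feature selection via ExtraTrees importances.
    imp = ExtraTreesClassifier(n_estimators=50, random_state=0).fit(X, t).feature_importances_
    cols = np.argsort(imp)[-10:]          # keep the 10 most relevant features
    # Stage 2: one binary ELM per attack class.
    scores[:, c] = elm_score(elm_fit(X[:, cols], t, seed=c), X[:, cols])

# Stage 3: softmax aggregation of the per-class ELM outputs.
probs = np.exp(scores - scores.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
pred = probs.argmax(axis=1)
print("train accuracy:", (pred == y).mean())
```

Each ELM only has to solve an easier one-vs-rest problem on its own feature subset; the softmax layer then reconciles the binary scores into a single multi-class decision.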

Keywords: intrusion detection systems; extreme learning machines; ensemble methods; feature selection; softmax function; binary classification; pattern recognition; machine learning; artificial intelligence
Source: EURASIP Journal on Information Security

Neural Classification of HEP Experimental Data

2009

High Energy Physics (HEP) experiments require discrimination of a few interesting events among a huge number of background events generated during an experiment. Hierarchical triggering hardware architectures are needed to perform these tasks in real time. In this paper, three neural network models are studied as possible candidates for such systems. A modified Multi-Layer Perceptron (MLP) architecture and an EαNet architecture are compared against a traditional MLP. Test error below 25% is achieved by all architectures in two different simulation strategies. EαNet performance is 1 to 2% better on test error with respect to the other two architectures while using the smaller network topol…

Keywords: artificial neural networks; embedded neural networks; intelligent data analysis; experimental data; perceptron; network topology; computer engineering

State classification for autonomous gas sample taking using deep convolutional neural networks

2017

Despite recent rapid advances and successful large-scale application of deep Convolutional Neural Networks (CNNs) using image, video, sound, text and time-series data, their adoption within the oil and gas industry in particular has been sparse. In this paper, we initially present an overview of opportunities for deep CNN methods within the oil and gas industry, followed by details of a novel development in which deep CNNs have been used for state classification of an autonomous gas sample taking procedure utilizing an industrial robot. The experimental results, using a deep CNN containing six layers, show accuracy levels exceeding 99%. In addition, the advantages of using parallel computing with GP…

Keywords: convolutional neural networks; feature extraction; machine learning; industrial robots; state classification; probability distribution; computer engineering
Source: 2017 25th Mediterranean Conference on Control and Automation (MED)

High temperature solid-catalyzed transesterification for biodiesel production

2010

Biodiesel has become more attractive recently because of its environmental benefits and the fact that it is made from renewable resources. Biodiesel is a mixture of monoalkyl esters of long-chain fatty acids derived from renewable feedstocks such as vegetable oils and animal fats, which consist mainly of fatty acid glycerides. It is produced by transesterification processes in which an oil or fat is reacted with a monohydric alcohol in the presence of a catalyst. The transesterification process is affected by reaction conditions: alcohol-to-oil molar ratio, type of alcohol, type and amount of catalyst, temperature, and purity of reactants. Heterogeneous acid catalysts are quite efficient in promoting the…

Keywords: biodiesel; solid-catalyzed transesterification; chemical engineering