Search results for "ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION"

Showing 10 of 982 documents

Integration of large-area optical imagers for biometric recognition and touch in displays

2021

In recent years there has been increasing interest in integrating optical sensing into mobile displays, for instance for biometric fingerprint scanning. There are several routes to incorporating optical fingerprint functionality within the full display area, each with its own benefits and challenges. Here we investigate the different integration routes using large-area, ultra-thin imagers based on organic photodiodes.

display integration; biometrics; Materials science; genetic structures; business.industry; large-area imager; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; Fingerprint recognition; fingerprint scanner; Atomic and Molecular Physics and Optics; eye diseases; Electronic, Optical and Magnetic Materials; AMOLED; Computer vision; Artificial intelligence; sense organs; Electrical and Electronic Engineering; business; optical sensor; OPD; organic photodiode; Journal of the Society for Information Display

A Dataset of Annotated Omnidirectional Videos for Distancing Applications

2021

Omnidirectional (or 360°) cameras are acquisition devices that, in the next few years, could have a big impact on video surveillance applications, research, and industry, as they can record a spherical view of a whole environment from every perspective. This paper presents two new contributions to the research community: the CVIP360 dataset, an annotated dataset of 360° videos for distancing applications, and a new method to estimate the distances of objects in a scene from a single 360° image. The CVIP360 dataset includes 16 videos acquired outdoors and indoors, annotated by adding information about the pedestrians in the scene (bounding boxes) and the distances to the camera of some point…
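The single-image distance estimation rests on equirectangular geometry: in a 360° frame, a pixel row maps directly to an elevation angle, and for points on the ground plane the camera's mounting height then fixes the distance. A minimal sketch of that geometry (an illustration, not the paper's actual method; the camera height and the row-to-angle convention are assumptions):

```python
import numpy as np

def row_to_elevation(v, height):
    """Map an equirectangular pixel row v (0 = top edge) to an elevation
    angle in radians: +pi/2 at the top, -pi/2 at the bottom."""
    return np.pi / 2 - np.pi * v / height

def ground_distance(v, height, cam_height):
    """Distance to a ground-plane point imaged at row v, for a camera
    mounted cam_height metres above the floor."""
    phi = row_to_elevation(v, height)
    if phi >= 0:
        raise ValueError("row is at or above the horizon; not on the ground")
    return cam_height / np.tan(-phi)

# A point imaged 45 degrees below the horizon (v = 3/4 of the image height)
# by a camera 1.5 m above the floor lies 1.5 m away.
d = ground_distance(v=768, height=1024, cam_height=1.5)
print(round(d, 3))  # 1.5
```

Pedestrian distances would then follow from the image row of each bounding box's ground-contact point.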

distancing; Computer science; 360°; Computer applications to medicine. Medical informatics; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; R858-859.7; pedestrian; video dataset; Article; Image (mathematics); Bounding overwatch; Research community; omnidirectional cameras; depth estimation; Photography; Radiology, Nuclear Medicine and Imaging; Computer vision; video surveillance; Electrical and Electronic Engineering; Omnidirectional antenna; TR1-1050; Settore ING-INF/05 - Sistemi Di Elaborazione Delle Informazioni; spherical images; business.industry; Perspective (graphical); Process (computing); QA75.5-76.95; tracking; Computer Graphics and Computer-Aided Design; equirectangular projection; Electronic computers. Computer science; Computer Vision and Pattern Recognition; Artificial intelligence; business; Journal of Imaging

Scatter Search for the Point-Matching Problem in 3D Image Registration

2008

Scatter search is a population-based method that has recently been shown to yield promising outcomes for solving combinatorial and nonlinear optimization problems. Based on formulations originally proposed in the 1960s for combining decision rules and problem constraints, such as the surrogate constraint method, scatter search uses strategies for combining solution vectors that have proved effective in a variety of problem settings. We present a scatter-search implementation designed to find high-quality solutions for the 3D image-registration problem, which has many practical applications. This problem arises in computer vision applications when finding a correspondence or transformation …
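The scatter-search template the abstract describes (diversification into an initial population, a small reference set of elite solutions, combination of solution vectors, reference-set update) can be sketched on a toy 2D rigid registration problem. This is a simplified illustration, not the paper's implementation: the combination operator, parameter ranges, and nearest-neighbour cost are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def transform(points, sol):
    """Apply a rigid 2D transform (theta, tx, ty) to an N x 2 point set."""
    theta, tx, ty = sol
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return points @ R.T + np.array([tx, ty])

def cost(sol, scene, model):
    """Mean distance from each transformed scene point to its nearest model point."""
    moved = transform(scene, sol)
    d = np.linalg.norm(moved[:, None, :] - model[None, :, :], axis=2)
    return d.min(axis=1).mean()

def scatter_search(scene, model, n_init=30, ref_size=5, iters=40):
    # Diversification: spread random solutions over the search range.
    pop = np.column_stack([rng.uniform(-np.pi, np.pi, n_init),
                           rng.uniform(-5, 5, n_init),
                           rng.uniform(-5, 5, n_init)])
    scores = np.array([cost(s, scene, model) for s in pop])
    ref = pop[np.argsort(scores)[:ref_size]]            # reference set
    for _ in range(iters):
        # Combination: mix reference-set pairs along the line joining them.
        trials = np.array([a + rng.uniform(-0.25, 1.25) * (b - a)
                           for i, a in enumerate(ref) for b in ref[i + 1:]])
        pool = np.vstack([ref, trials])
        scores = np.array([cost(s, scene, model) for s in pool])
        ref = pool[np.argsort(scores)[:ref_size]]       # reference update
    return ref[0], cost(ref[0], scene, model)

# Recover a known rigid transform of a toy point set.
scene = rng.uniform(0, 1, (20, 2))
model = transform(scene, (0.6, 1.0, -2.0))
best, err = scatter_search(scene, model)
print("registration error:", err)
```

A full implementation would add the local improvement step that scatter search normally applies to each trial solution; it is omitted here for brevity.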

education.field_of_study; Computer science; Heuristic (computer science); business.industry; Population; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; General Engineering; Image registration; Point set registration; Machine learning; computer.software_genre; Evolutionary computation; Nonlinear programming; Robustness (computer science); Artificial intelligence; education; business; Metaheuristic; Algorithm; computer; INFORMS Journal on Computing

Population and Query Interface for a Content-Based Video Database

2002

In this paper we describe the first full implementation of a content-based indexing and retrieval system for MPEG-2 and MPEG-4 videos. We consider a video as a collection of spatiotemporal segments called video objects; each video object is a sequence of video object planes. A set of representative video object planes is used to index each video object. During database population, the operator manually selects video objects using a semi-automatic outlining tool we developed and inserts some semantic information. Low-level visual features such as color, texture, motion, and geometry are computed automatically. The system has been implemented on a commercial relational DBMS and is based on…
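A low-level colour feature of the kind the system computes per video object plane can be sketched as a quantized histogram index queried by similarity. The bin count and the histogram-intersection measure below are illustrative choices, not the paper's exact features:

```python
import numpy as np

def color_histogram(plane, bins=4):
    """Quantized RGB histogram of one video object plane (H x W x 3, uint8),
    normalised so planes of different sizes are comparable."""
    q = (plane // (256 // bins)).reshape(-1, 3)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    h = np.bincount(idx, minlength=bins ** 3).astype(float)
    return h / h.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical colour distributions."""
    return np.minimum(h1, h2).sum()

# Index two toy "video objects" and query with a plane similar to the first.
red_obj = np.zeros((8, 8, 3), np.uint8); red_obj[..., 0] = 200
blue_obj = np.zeros((8, 8, 3), np.uint8); blue_obj[..., 2] = 200
index = {"red": color_histogram(red_obj), "blue": color_histogram(blue_obj)}
query_h = color_histogram(red_obj.copy())
best = max(index, key=lambda k: histogram_intersection(index[k], query_h))
print(best)  # red
```

In the actual system such feature vectors would be stored as rows in the relational DBMS and ranked at query time.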

education.field_of_study; Motion compensation; Database; Computer science; Population; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; computer.file_format; Object (computer science); Smacker video; computer.software_genre; Video compression picture types; Video tracking; education; Image retrieval; computer

A Method Based on Multi-source Feature Detection for Counting People in Crowded Areas

2019

We propose a crowd counting method based on multi-source feature fusion. Image features are extracted from multiple sources, and the population is estimated through image feature extraction and texture feature analysis, together with crowd-image edge detection. We count people in high-density still images, for instance in city squares, sports fields, and subway stations. Our approach uses a still image taken by a camera on a drone to appraise the count in the population-density image, using several sources of information: HOG, LBP, and Canny features. We furnish separate estimates of counts and other statistical measurements through several types of sources. Support vector machine (SVM) classification an…
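The multi-source pipeline (HOG + LBP + Canny features feeding a learned count regressor) can be sketched end-to-end on synthetic data. Everything here is a simplified stand-in: coarse NumPy approximations replace the real HOG, LBP, and Canny extractors, and plain least-squares regression replaces the paper's SVM.

```python
import numpy as np

def hog_like(img, bins=8):
    """Coarse histogram of gradient orientations (a stand-in for HOG)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    h, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return h / (h.sum() + 1e-9)

def lbp_like(img):
    """Histogram of 8-neighbour local binary pattern codes."""
    c = img[1:-1, 1:-1]
    codes = np.zeros(c.shape, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (n >= c).astype(np.uint8) << bit
    h = np.bincount(codes.ravel(), minlength=256).astype(float)
    return h / h.sum()

def edge_density(img, thresh=20.0):
    """Fraction of strong-gradient pixels (a crude stand-in for Canny)."""
    gy, gx = np.gradient(img.astype(float))
    return np.array([(np.hypot(gx, gy) > thresh).mean()])

def features(img):
    """Fuse the three sources into one feature vector."""
    return np.concatenate([hog_like(img), lbp_like(img), edge_density(img)])

# Toy "crowd" images: `count` bright blobs on a dark background.
rng = np.random.default_rng(0)
def synth(count):
    img = np.zeros((32, 32))
    for _ in range(count):
        y, x = rng.integers(2, 30, 2)
        img[y - 1:y + 2, x - 1:x + 2] = 255
    return img

counts = rng.integers(0, 20, 40)
X = np.stack([features(synth(c)) for c in counts])
A = np.column_stack([X, np.ones(len(X))])       # bias column
w, *_ = np.linalg.lstsq(A, counts, rcond=None)  # least-squares "regressor"
pred = A @ w
print(np.corrcoef(pred, counts)[0, 1] > 0.8)  # True
```

The fit here is evaluated on the training set only; a real experiment would of course hold out test images.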

education.field_of_study; Warning system; business.industry; Feature extraction; Population; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; Regression analysis; Pattern recognition; Image (mathematics); Support vector machine; Artificial intelligence; business; education; Multi-source; Feature detection (computer vision); 2019 IEEE 4th International Conference on Signal and Image Processing (ICSIP)

Shared feature representations of LiDAR and optical images: Trading sparsity for semantic discrimination

2015

This paper studies the level of complementary information conveyed by extremely high resolution LiDAR and optical images. We pursue this goal following an indirect approach via unsupervised spatial-spectral feature extraction. We used a recently presented unsupervised convolutional neural network trained to enforce both population and lifetime sparsity in the feature representation. We derived independent and joint feature representations, and analyzed the sparsity scores and the discriminative power. Interestingly, the obtained results revealed that the RGB+LiDAR representation is no longer sparse, and the derived basis functions merge color and elevation, yielding a set of more expressive…
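The two sparsity notions the abstract invokes are complementary views of the same feature matrix: population sparsity asks how few units respond to each sample, lifetime sparsity how rarely each unit responds across samples. The paper does not specify its score, so as an illustrative choice the sketch below uses Hoyer's (2004) measure:

```python
import numpy as np

def hoyer_sparsity(x):
    """Hoyer (2004) sparsity of a vector: 0 for a uniform vector,
    1 for a vector with a single non-zero entry."""
    n = x.size
    l1, l2 = np.abs(x).sum(), np.linalg.norm(x)
    if l2 == 0:
        return 0.0
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

def population_sparsity(F):
    """Mean sparsity across rows (samples): how few units fire per sample."""
    return float(np.mean([hoyer_sparsity(r) for r in F]))

def lifetime_sparsity(F):
    """Mean sparsity across columns (units): how rarely each unit fires."""
    return float(np.mean([hoyer_sparsity(c) for c in F.T]))

# A one-hot code is maximally sparse in both senses; a constant code is not.
one_hot = np.eye(4)
dense = np.ones((4, 4))
print(population_sparsity(one_hot), population_sparsity(dense))  # 1.0 0.0
```

Computing both scores on the independent versus joint RGB+LiDAR representations is exactly the kind of comparison the abstract reports.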

education.field_of_study; business.industry; Feature extraction; Population; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; Pattern recognition; Convolutional neural network; Lidar; Data visualization; Discriminative model; RGB color model; Computer vision; Artificial intelligence; business; education; Cluster analysis; 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)

Early Television Video Game Tournaments as Sports Spectacles

2020

This article looks at two televised video game tournaments from the 1980s from the viewpoint of the sports spectacle. Through analysis of the television episodes and comparison with the modern eSports scene, the aim is to see whether there were similarities or differences between sports broadcasting and video game broadcasting at the time. The article suggests that the video game tournaments coincidentally adapted the visual choices made in sports broadcasting, which might have affected the style of eSports broadcasting later. nonPeerReviewed

electronic sports (elektroninen urheilu); ComputerApplications_MISCELLANEOUS; video games (videopelit); ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; ComputingMilieux_PERSONALCOMPUTING; televising (televisiointi)

Overview of ghost correction for HDR video stream generation

2015

Most digital cameras use low dynamic range (LDR) image sensors, which can capture only a limited portion of a scene's luminance dynamic range [1], about two orders of magnitude (roughly 256 to 1024 levels). However, the dynamic range of real-world scenes varies over several orders of magnitude (10,000 levels). To overcome this limitation, several methods exist for creating a high dynamic range (HDR) image: an expensive method uses a dedicated HDR image sensor, while low-cost solutions use a conventional LDR image sensor. A large number of low-cost solutions apply temporal exposure bracketing. The HDR image may be constructed with a HDR standard method (an additional step ca…
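The temporal exposure bracketing idea can be sketched as a weighted merge of per-frame radiance estimates z / t. This sketch assumes a linear camera response and a simple hat weighting, and omits both the camera-response calibration and the ghost-correction step that this overview surveys:

```python
import numpy as np

def weight(z):
    """Hat weighting: trust mid-range pixels, distrust near-black/near-white."""
    return 1.0 - np.abs(z / 255.0 - 0.5) * 2.0

def merge_hdr(ldr_images, exposure_times):
    """Merge bracketed LDR frames (uint8) into a radiance map by a
    weighted average of per-frame radiance estimates z / t."""
    num = np.zeros(ldr_images[0].shape, float)
    den = np.zeros_like(num)
    for z, t in zip(ldr_images, exposure_times):
        w = weight(z.astype(float)) + 1e-6   # avoid all-zero weights
        num += w * z.astype(float) / t
        den += w
    return num / den

# A scene radiance of 100 seen at two exposures: both frames should
# vote for the same radiance value, so the merge recovers it exactly.
short = np.full((2, 2), 50, np.uint8)   # radiance 100 at t = 0.5
long_ = np.full((2, 2), 100, np.uint8)  # radiance 100 at t = 1.0
hdr = merge_hdr([short, long_], [0.5, 1.0])
print(np.allclose(hdr, 100.0))  # True
```

Ghosting arises precisely because this per-pixel average assumes the scene did not move between the bracketed frames.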

exposure bracketing; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; bitmap; Graph-Cuts; [SPI.SIGNAL] Engineering Sciences [physics]/Signal and Image processing; GeneralLiterature_MISCELLANEOUS; ghost detection; smart camera; high dynamic range; entropy; real-time algorithm; ComputingMethodologies_COMPUTERGRAPHICS

Experiences from the Use of an Eye-Tracking System in the Wild

2010

Eye-tracking systems have been widely used as a data collection method in human–computer interaction research. Eye-tracking has typically been applied in stationary environments to evaluate the usability of desktop applications. In the mobile context, user studies with eye-tracking are far more infrequent. In this paper, we report our findings from user tests performed with an eye-tracking system in a forest environment. We present some of the most relevant issues that should be considered when planning a mobile study in the wild using eye-tracking as a data collection method. One of the most challenging findings was the difficulty of identifying where the user actually looked in th…

eye-tracking; eye movements (silmän liikkeet); mobile services (mobiilipalvelut); ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; mobile user experience; user experience (käyttäjäkokemus)

Fast Photomosaic

2005

Photomosaic is a technique that transforms an input image into a rectangular grid of thumbnail images while preserving the overall appearance. The typical photomosaic algorithm searches a large database of images for the one picture that best approximates a block of pixels in the main image. Since the quality of the output depends on the size of the database, the bottleneck in every photomosaic algorithm is this searching process. In this paper we present a technique to speed up this critical phase using the Antipole Tree data structure. This improvement allows the use of larger databases without requiring much longer processing time.
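The search step being accelerated can be sketched with a brute-force nearest-average-colour match per block; the paper's contribution is replacing exactly this linear scan with an Antipole Tree, which this toy sketch omits:

```python
import numpy as np

def average_color(img):
    """Mean RGB of an image block."""
    return img.reshape(-1, 3).mean(axis=0)

def build_photomosaic(target, tiles, block=8):
    """Replace each block x block cell of `target` with the tile whose
    average colour is closest. The nearest-neighbour search below is the
    bottleneck an Antipole Tree would accelerate."""
    tile_colors = np.stack([average_color(t) for t in tiles])
    out = np.empty_like(target)
    h, w, _ = target.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            cell = target[y:y + block, x:x + block]
            d = np.linalg.norm(tile_colors - average_color(cell), axis=1)
            out[y:y + block, x:x + block] = tiles[int(d.argmin())]
    return out

# Two flat tiles; a half-red / half-blue target should pick one of each.
red = np.zeros((8, 8, 3), np.uint8); red[..., 0] = 255
blue = np.zeros((8, 8, 3), np.uint8); blue[..., 2] = 255
target = np.concatenate([red, blue], axis=1)   # 8 x 16 image
mosaic = build_photomosaic(target, [red, blue], block=8)
print(np.array_equal(mosaic, target))  # True
```

With N blocks and M tiles the scan above costs O(N·M) distance evaluations, which is why an indexed metric-space search pays off for large tile databases.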

photomosaic (fotomozaika); non-photorealistic rendering (nefotorealistické vykreslování); ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; Antipole tree; image processing and enhancement; image processing (zpracování obrazu)