Search results for "GEP"

Showing 10 of 1,017 documents

Shared feature representations of LiDAR and optical images: Trading sparsity for semantic discrimination

2015

This paper studies the level of complementary information conveyed by extremely high resolution LiDAR and optical images. We pursue this goal following an indirect approach via unsupervised spatial-spectral feature extraction. We used a recently presented unsupervised convolutional neural network trained to enforce both population and lifetime sparsity in the feature representation. We derived independent and joint feature representations, and analyzed their sparsity scores and discriminative power. Interestingly, the obtained results revealed that the RGB+LiDAR representation is no longer sparse, and the derived basis functions merge color and elevation, yielding a set of more expressive…

Keywords: feature extraction, population, pattern recognition, convolutional neural network, LiDAR, data visualization, discriminative model, RGB color model, computer vision, artificial intelligence, cluster analysis, image processing. Venue: 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)

Early Television Video Game Tournaments as Sports Spectacles

2020

This article looks at two televised video game tournaments from the 1980s from the viewpoint of the sports spectacle. Through analysis of the television episodes and comparison with the modern eSports scene, the aim is to see whether there were similarities or differences between sports broadcasting and video game broadcasting at the time. The article suggests that, because of visual choices made in sports broadcasting, the video game tournaments adopted this style, which might have affected the style of eSports broadcasting later. Non-peer-reviewed.

Keywords: electronic sports (eSports), video games, television broadcasting, personal computing

Overview of ghost correction for HDR video stream generation

2015

Most digital cameras use low dynamic range (LDR) image sensors, which can capture only a limited luminance range of the scene [1], about two orders of magnitude (roughly 256 to 1,024 levels). The dynamic range of real-world scenes, however, varies over several orders of magnitude (around 10,000 levels). To overcome this limitation, several methods exist for creating a high dynamic range (HDR) image: an expensive approach uses a dedicated HDR image sensor, while low-cost solutions use a conventional LDR image sensor. A large number of the low-cost solutions apply temporal exposure bracketing. The HDR image may then be constructed with a standard HDR method (an additional step ca…
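The temporal exposure bracketing described above can be sketched as follows. This is a minimal illustration assuming a linear sensor response and a hat-shaped weighting that discounts under- and over-exposed pixels; the function name and the weighting choice are assumptions for illustration, not the method surveyed in the paper (which additionally addresses ghost correction).

```python
def fuse_exposures(images, exposure_times):
    """Fuse bracketed LDR exposures into a per-pixel HDR radiance estimate.

    `images` is a list of flat lists of 8-bit pixel values, all viewing the
    same static scene; `exposure_times` gives each frame's exposure time.
    Assuming a linear sensor, dividing each pixel by its exposure time
    recovers relative scene radiance; a hat weight (peaking at mid-grey)
    downweights clipped or noisy pixels before averaging.
    """
    n_pixels = len(images[0])
    hdr = []
    for i in range(n_pixels):
        num = den = 0.0
        for img, t in zip(images, exposure_times):
            z = img[i] / 255.0                 # normalise to [0, 1]
            w = 1.0 - abs(2.0 * z - 1.0)       # hat weight, peaks at mid-grey
            num += w * (z / t)                 # radiance estimate from this exposure
            den += w
        hdr.append(num / den if den > 0 else 0.0)
    return hdr
```

With two exposures of the same static scene this recovers one radiance value per pixel; a ghost-correction stage then handles pixels where the scene moved between exposures.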

Keywords: exposure bracketing, bitmap, graph cuts, ghost detection, smart camera, high dynamic range, entropy, real-time algorithm, image processing, computer graphics, signal and image processing

Experiences from the Use of an Eye-Tracking System in the Wild

2010

Eye-tracking systems have been widely used as a data collection method in the human–computer interaction research field. Eye-tracking has typically been applied in stationary environments to evaluate the usability of desktop applications. In the mobile context, user studies with eye-tracking are far less frequent. In this paper, we report our findings from user tests performed with an eye-tracking system in a forest environment. We present some of the most relevant issues that should be considered when planning a mobile study in the wild using eye-tracking as a data collection method. One of the most challenging findings was the difficulty of identifying where the user actually looked in th…

Keywords: eye tracking, eye movements, mobile services, mobile user experience, user experience

Fast Photomosaic

2005

Photomosaic is a technique that transforms an input image into a rectangular grid of thumbnail images while preserving the overall appearance. The typical photomosaic algorithm searches a large database of images for the one picture that best approximates a block of pixels in the main image. Since the quality of the output depends on the size of the database, the bottleneck in every photomosaic algorithm is the search process. In this paper we present a technique to speed up this critical phase using the Antipole Tree data structure. This improvement allows the use of larger databases without requiring much longer processing times.
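The search phase described above can be sketched as a naive linear scan over a simple tile feature; this is exactly the loop that an Antipole-tree index replaces with a faster nearest-neighbour query. The mean-colour feature and the function names here are illustrative assumptions, not the paper's implementation.

```python
def average_color(tile):
    """Mean (R, G, B) of a tile given as a list of (r, g, b) pixels."""
    n = len(tile)
    return tuple(sum(p[c] for p in tile) / n for c in range(3))

def best_match(block, thumbnails):
    """Index of the thumbnail whose average colour is closest (squared
    Euclidean distance) to the block's average colour.

    This linear scan is the bottleneck the abstract refers to: its cost
    grows with the database size, which is what a metric index such as
    the Antipole Tree is meant to avoid.
    """
    target = average_color(block)
    feats = [average_color(t) for t in thumbnails]
    dists = [sum((f[c] - target[c]) ** 2 for c in range(3)) for f in feats]
    return min(range(len(dists)), key=lambda i: dists[i])
```

Running `best_match` once per grid block and pasting the chosen thumbnails yields the mosaic; the index only changes how the minimum is found, not which tile wins.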

Keywords: photomosaic, Antipole tree, non-photorealistic rendering, image processing and enhancement

An interactional ‘live eye tracking’ study in autism spectrum disorder : combining qualitative and quantitative approaches in the study of gaze

2017

Recent studies on gaze behaviours in individuals with autism spectrum disorder (ASD) have utilised "live eye tracking." Such research has focused on generating quantitative eye-tracking measurements, which provide limited (if any) qualitative contextual detail about the actual interactions in which gaze occurs. This article presents a novel methodological approach that combines live eye tracking with a form of qualitative interaction analysis, multimodally informed conversation analysis. Drawing on eye tracking and wide-angle video recordings, this combination renders visible some of the functions of gaze, or what gaze "does," in interactional situations. The participants include three children with ASD and th…

Keywords: functions of gaze, gaze shifts, live eye tracking, conversation analysis, autism spectrum disorder, eye tracking, developmental psychology, cognitive psychology. Journal: Qualitative Research in Psychology

A Performance Evaluation of Fusion Techniques for Spatio-Temporal Saliency Detection in Dynamic Scenes

2013

Visual saliency is an important research topic in computer vision, as it helps to focus on regions of interest instead of processing the whole image. Detecting visual saliency in still images has been widely addressed in the literature. However, visual saliency detection in videos is more complicated due to the additional temporal information. A spatio-temporal saliency map is usually obtained by the fusion of a static saliency map and a dynamic saliency map. The way the two maps are fused plays a critical role in the accuracy of the spatio-temporal saliency map. In this paper, we evaluate the performance of different fusion techniques on a large and diverse datas…
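The fusion step described above can be sketched with the simple pixel-wise rules such evaluations commonly compare (mean, max, product). The rule names and the function signature are assumptions for illustration, not the specific fusion schemes benchmarked in the paper.

```python
def fuse_saliency(static_map, dynamic_map, method="mean"):
    """Fuse a static and a dynamic saliency map into a spatio-temporal map.

    Both maps are flat lists of per-pixel saliency values in [0, 1].
    Each rule trades off the two cues differently: `mean` averages them,
    `max` keeps the stronger cue, `product` requires both cues to agree.
    """
    rules = {
        "mean": lambda s, d: 0.5 * (s + d),
        "max": lambda s, d: max(s, d),
        "product": lambda s, d: s * d,
    }
    rule = rules[method]
    return [rule(s, d) for s, d in zip(static_map, dynamic_map)]
```

A performance evaluation like the one described then scores each rule's fused map against ground-truth fixations over a whole dataset.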

Keywords: visual saliency, saliency map, spatio-temporal saliency, fusion, image fusion, performance evaluation, context information, Kadir–Brady saliency detector, computer vision, pattern recognition, artificial intelligence

An exploratory study of gazing behavior during live performance

2009

It is known that the visual information given by performers during a performance serves as a useful channel of communication to the audience. In my previous studies, many performers referred to the importance of gazing behavior or eye contact. The purpose of this research is to explore the role of gaze during live performance by measuring the timing and direction of gazing. The hypotheses are as follows: [1] gazing behavior depends on the musical structure; [2] gazing behavior is used for the communication between performers that is necessary during performance; [3] performers set their gazing direction in order to contribute to the audience's understanding of the music. This research was e…

Keywords: gazing behaviour, performance and communication, human–computer interaction

Schematic eye models to mimic the behavior of the accommodating human eye

2018

A simplified version of the human eye is known as a schematic eye model. Since the first attempts in the middle of the 19th century, numerous new schematic eye models have been introduced, some of which are able to describe the accommodation ability of the human eye. Accommodative schematic eyes are of great interest since they explain the functionality of the human eye and can easily be used for different research purposes. Some of these include the design and testing of multifocal ophthalmic solutions, evaluation of the effect of optical aberrations on retinal image quality, and study of the optical performance of the eye at different distances…

Keywords: schematic eye, human eye, accommodation (ocular), cornea, ocular physiology, refractive errors, intraocular lenses, retinal image, optics, ophthalmology, optometry, surgery

When virtual and real worlds coexist: Visualization and visual system affect spatial performance in augmented reality

2021

New visualization approaches are being actively developed to mitigate the effect of the vergence-accommodation conflict in stereoscopic augmented reality; however, high interindividual variability in spatial performance makes it difficult to predict user gain. To address this issue, we investigated the effects of consistent and inconsistent binocular and focus cues on perceptual matching in the stereoscopic environment of augmented reality, using a head-mounted display driven in multifocal and single focal plane modes. Participants matched the distance of a real object with images projected at three viewing distances, concordant with the display focal planes when driven in the mu…

Keywords: perceptual matching, stereoscopy, depth cues, perception, augmented reality, head-mounted display, binocular and accommodative disorders, emmetropia, depth perception, visualization. Journal: Journal of Vision