RESEARCH PRODUCT
Shared feature representations of LiDAR and optical images: Trading sparsity for semantic discrimination
Carlo Gatta, Gustau Camps-Valls, Manuel Campos-Taberner, Adriana Romero

subject: Feature extraction, Population sparsity, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Pattern recognition, Convolutional neural network, Lidar, Data visualization, Discriminative model, RGB color model, Computer vision, Artificial intelligence, Cluster analysis

description:
This paper studies the level of complementary information conveyed by extremely high resolution LiDAR and optical images. We pursue this goal indirectly, via unsupervised spatial-spectral feature extraction. We use a recently presented unsupervised convolutional neural network trained to enforce both population and lifetime sparsity in the feature representation. We derive independent and joint feature representations, and analyze their sparsity scores and discriminative power. Interestingly, the results reveal that the joint RGB+LiDAR representation is no longer sparse, and that the derived basis functions merge color and elevation, yielding a set of more expressive colored edge filters. The joint feature representation is also more discriminative when used for clustering and topological data visualization.
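The abstract contrasts two sparsity notions: population sparsity (each sample activates few features) and lifetime sparsity (each feature activates for few samples). The paper does not give its scoring formula; the sketch below is a hypothetical illustration that measures both as the fraction of near-zero entries along the corresponding axis of an activation matrix.

```python
import numpy as np

def sparsity_scores(acts, eps=1e-6):
    """Illustrative sparsity scores for an activation matrix.

    acts : (n_samples, n_features) array of feature activations.
    Returns (population, lifetime), each in [0, 1], where 1 means
    fully sparse. Sparsity is measured here as the fraction of
    near-zero activations (an assumption, not the paper's metric).
    """
    inactive = np.abs(acts) < eps
    # population sparsity: per sample, how many features are silent
    population = inactive.mean(axis=1).mean()
    # lifetime sparsity: per feature, how often it is silent
    lifetime = inactive.mean(axis=0).mean()
    return float(population), float(lifetime)

# Toy example: 2 samples x 3 features
acts = np.array([[0.0, 1.0, 0.0],
                 [0.0, 0.0, 2.0]])
pop, life = sparsity_scores(acts)
```

A dense joint RGB+LiDAR representation, as reported in the paper, would show low scores on both measures compared with the independent representations.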
year | journal | country | edition | language
---|---|---|---|---
2015-07-01 | 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS) | | |