RESEARCH PRODUCT
Emergency Analysis: Multitask Learning with Deep Convolutional Neural Networks for Fire Emergency Scene Parsing
Ole-Christoffer Granmo, Jivitesh Sharma, Morten Goodwin

Subject: Parsing; Computer science; Image processing and computer vision; Multi-task learning; Image segmentation; Machine learning; Convolutional neural network; Benchmark (computing); Segmentation; Artificial intelligence; Transfer learning; Situation analysis
In this paper, we introduce a novel application of semantic image segmentation for fire emergency situation analysis. To analyse a fire emergency scene, we propose to use deep convolutional image segmentation networks to identify and classify objects in a scene based on their build material and their vulnerability to catching fire. We introduce our own fire emergency scene segmentation dataset for this purpose. It consists of real-world images with objects annotated on the basis of their build material. We use state-of-the-art segmentation models, DeepLabv3, DeepLabv3+, PSPNet, FCN, SegNet and UNet, to compare and evaluate their performance on the fire emergency scene parsing task. At inference time, we first run only the encoder (backbone) network to determine whether there is a fire in the image. Only if a fire is detected is the decoder activated to segment the emergency scene. This skips unnecessary computation, namely the decoder, when no fire is present. We achieve this by using multitask learning. We show the importance of transfer learning and the difference in performance between models pretrained on different benchmark datasets. The results show that segmentation models can accurately analyse an emergency situation, if properly trained to do so. Our fire emergency scene parsing dataset is available here: https://github.com/cair.
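The conditional inference scheme described above, where the shared encoder always runs but the segmentation decoder is invoked only when the classification head detects a fire, can be sketched as follows. This is a minimal illustrative mock-up, not the authors' implementation: the `encoder`, `fire_classifier` and `decoder` functions, their thresholds, and the toy red-channel heuristic are all hypothetical stand-ins for trained network components.

```python
import numpy as np

def encoder(image):
    # Stand-in for the shared CNN backbone: collapse the image into
    # a per-channel feature vector (a real backbone would be much deeper).
    return image.mean(axis=(0, 1))

def fire_classifier(features, threshold=0.5):
    # Stand-in for the classification head run at inference time.
    # Illustrative heuristic only: flag "fire" when the red channel dominates.
    return features[0] > threshold

def decoder(image):
    # Stand-in for the segmentation decoder: produce a per-pixel class map.
    return (image[..., 0] > image[..., 1]).astype(np.int64)

def parse_scene(image):
    # Multitask inference: the encoder always runs; the decoder runs
    # only when a fire is detected, saving its computation otherwise.
    features = encoder(image)
    if not fire_classifier(features):
        return None                  # decoder skipped entirely
    return decoder(image)

fire_img = np.zeros((4, 4, 3)); fire_img[..., 0] = 0.9   # red-dominant scene
safe_img = np.zeros((4, 4, 3)); safe_img[..., 1] = 0.9   # green-dominant scene

print(parse_scene(safe_img))         # None: no fire, decoder never runs
print(parse_scene(fire_img).shape)   # (4, 4): segmentation mask produced
```

In a real multitask model the two heads would share the backbone's features and be trained jointly, so the classification decision comes almost for free on top of the encoder pass.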
year | journal | country | edition | language
---|---|---|---|---
2021-01-01 | | | |