Multi-exposure image fusion using convolutional neural network (Evrişimli sinir ağı kullanarak çoklu-pozlamalı görüntü birleştirme)



AKBULUT H., ASLANTAŞ V.

Journal of the Faculty of Engineering and Architecture of Gazi University, vol. 38, no. 3, pp. 1439-1451, 2023 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 38, Issue: 3
  • Publication Date: 2023
  • DOI: 10.17341/gazimmfd.1067400
  • Journal Name: Journal of the Faculty of Engineering and Architecture of Gazi University
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, Art Source, Compendex, TR DİZİN (ULAKBİM)
  • Page Numbers: pp. 1439-1451
  • Keywords: convolutional neural network, multi-exposure image fusion, quality metrics, weighted fusion rule
  • Yozgat Bozok University Affiliated: Yes

Abstract

The process of obtaining a single high dynamic range (HDR) image from two or more low dynamic range (LDR) images of the same scene is called multi-exposure image fusion (MEF). In this study, a new MEF method based on the convolutional neural network (CNN), a deep learning (DL) model, is proposed. In the proposed method, a fusion map (fmap) is first computed from the source LDR images using the CNN model. To eliminate the saw-tooth effect in the fused images, a weighting is applied to the fmap. A well-exposed fused image is then constructed using the weighted fmap. The proposed method was applied to MEF datasets widely used in the literature, and the resulting fused images were evaluated with well-known quality metrics. The developed technique was compared with other well-known MEF techniques in terms of both quantitative and visual evaluation. The results demonstrate the feasibility of the proposed technique.
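The pipeline described in the abstract (CNN-produced fusion map → weighting to suppress saw-tooth artifacts → weighted combination of exposures) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the CNN is omitted and the fusion maps are taken as given inputs, and the Gaussian smoothing used for the weighting step (`sigma`) is an assumption standing in for whatever weighting scheme the paper actually uses.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_exposures(images, fmaps, sigma=5.0):
    """Weighted multi-exposure fusion (illustrative sketch).

    images: list of HxW (or HxWx3) float arrays in [0, 1], the source LDR exposures
    fmaps:  list of HxW per-pixel weight maps (e.g. CNN outputs), one per image
    sigma:  Gaussian smoothing applied to each weight map so that hard
            decision boundaries blend smoothly (avoids saw-tooth edges)
    """
    # Smooth each weight map, then normalize so the weights sum to 1 per pixel.
    smoothed = [gaussian_filter(np.asarray(w, dtype=np.float64), sigma) for w in fmaps]
    total = np.sum(smoothed, axis=0) + 1e-12  # guard against divide-by-zero
    weights = [w / total for w in smoothed]

    # Per-pixel weighted combination of the source exposures.
    fused = np.zeros_like(np.asarray(images[0], dtype=np.float64))
    for img, w in zip(images, weights):
        img = np.asarray(img, dtype=np.float64)
        if img.ndim == 3:          # broadcast 2-D weights over color channels
            w = w[..., None]
        fused += w * img
    return fused
```

With binary fusion maps (each pixel assigned entirely to one exposure), the smoothing turns the hard selection into a soft blend near region boundaries, which is the role the weighted fmap plays in the described method.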