Dong, Z., Kamata, S. and Breckon, T.P. (2018) 'Infrared image colorization using S-shape network', in 2018 25th IEEE International Conference on Image Processing (ICIP): October 7–10, 2018, Megaron Athens International Conference Centre, Athens, Greece. Proceedings. Piscataway: IEEE, pp. 2242-2246.
This paper proposes a novel approach for colorizing near infrared (NIR) images using an S-shape network (SNet). The proposed approach is based on an encoder-decoder architecture followed by a secondary assistant network. The encoder-decoder consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. The assistant network is a shallow encoder-decoder that enhances edges and improves the output; the whole model can be trained end-to-end from a few image examples. The trained model requires neither user guidance nor a reference image database. Furthermore, our architecture preserves clear edges within NIR images. Our overall architecture is trained and evaluated on a real-world dataset containing a significant number of road scene images. This dataset was captured by an NIR camera and a corresponding RGB camera to facilitate side-by-side comparison. In the experiments, we demonstrate that our SNet performs well and outperforms contemporary state-of-the-art approaches.
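The S-shape data flow the abstract describes — a main encoder-decoder with a contracting/expanding path, followed by a shallow assistant encoder-decoder that refines the result — can be sketched in miniature. This is a toy 1-D illustration of the wiring only, not the paper's implementation: the `downsample`/`upsample` stand-ins, the chosen depths, and the additive skip fusion are all illustrative assumptions (the real SNet uses convolutional layers on 2-D images).

```python
# Toy 1-D sketch of the S-shape data flow: a main encoder-decoder with
# skip connections, then a shallow "assistant" encoder-decoder that
# refines the first output. All operations here are illustrative
# stand-ins for the paper's convolutional layers.

def downsample(x):
    """Halve resolution by averaging adjacent pairs (stand-in for a strided conv)."""
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]

def upsample(x):
    """Double resolution by repeating each value (stand-in for a transposed conv)."""
    return [v for v in x for _ in range(2)]

def encoder_decoder(x, depth):
    """Contracting path to capture context, then a symmetric expanding
    path whose skip connections enable precise localization."""
    skips = []
    for _ in range(depth):
        skips.append(x)      # remember features for the skip connection
        x = downsample(x)    # contracting path
    for skip in reversed(skips):
        x = upsample(x)                        # expanding path
        x = [a + b for a, b in zip(x, skip)]   # fuse skip features
    return x

def snet(nir):
    # Main network produces the coarse colorization; the shallow
    # assistant network refines it (edge enhancement in the paper).
    coarse = encoder_decoder(nir, depth=3)
    refined = encoder_decoder(coarse, depth=1)
    return refined

signal = [float(i) for i in range(8)]
out = snet(signal)
print(len(out))  # → 8: resolution is preserved end to end
```

The two-stage layout is the point of the sketch: because the assistant is itself a (shallow) encoder-decoder stacked after the main one, the overall computation graph traces an "S" shape, and both stages can be trained jointly end-to-end.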
|Item Type:||Book chapter|
|Full text:||(AM) Accepted Manuscript|
|Publisher Web site:||https://doi.org/10.1109/ICIP.2018.8451230|
|Publisher statement:||© 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.|
|Date accepted:||04 May 2018|
|Date deposited:||11 June 2018|
|Date of first online publication:||06 September 2018|
|Date first made open access:||28 February 2023|