Under-Display Camera (UDC) is a new imaging system that mounts a display screen on top of a traditional digital camera lens. Such a system has two main advantages. First, it follows the product trend of full-screen devices with a larger screen-to-body ratio, which offers a better user experience: without a bezel or extra buttons, users can easily access more functions by directly touching the screen. Second, it enables better human-computer interaction. Placing the camera at the center of the display allows perfect gaze tracking during teleconferencing, which is increasingly relevant for larger display devices such as laptops and TVs.
Unlike pressure or fingerprint sensors, which can be integrated into a display relatively easily, an imaging sensor struggles to maintain its function when mounted behind a display. The imaging quality of the camera is severely degraded by the lower light transmission rate and by diffraction effects, so captured images are noisy and blurry. Therefore, while bringing a better user experience and interaction, UDC may sacrifice the quality of photography, face processing, and other downstream vision tasks.
Enhancing the degraded images is best addressed by learning-based image restoration approaches. Recently, deep restoration models have achieved strong performance on image processing tasks such as denoising, deblurring, deraining, dehazing, super-resolution, and low-light enhancement. However, because existing models are trained on synthetic data with a single degradation type, they can hardly be used to enhance real-world low-quality images that exhibit complicated, combined degradations. To address complicated real degradation with learning-based methods, it is necessary either to collect real paired data or to synthesize near-realistic data by fully understanding the degradation model.
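To make the degradation pipeline concrete, the attenuation-plus-blur-plus-noise process described above can be sketched as a toy synthetic model. The function names, kernel, and parameter values below are illustrative assumptions for exposition only, not the challenge's actual measured degradation model (see the referenced paper for that):

```python
import numpy as np

def conv2(img, kernel):
    """Naive 2D convolution with edge padding (illustrative, not optimized)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def simulate_udc_degradation(img, psf, transmission=0.2, noise_sigma=0.01, seed=0):
    """Toy UDC degradation: y = clip((t * x) conv k + n, 0, 1).

    transmission : assumed fraction of light passing through the display
    psf          : assumed diffraction blur kernel, normalized to sum to 1
    noise_sigma  : assumed std of additive Gaussian sensor noise
    """
    rng = np.random.default_rng(seed)
    attenuated = transmission * img              # lower light transmission
    blurred = conv2(attenuated, psf)             # diffraction blur
    noisy = blurred + rng.normal(0.0, noise_sigma, size=blurred.shape)
    return np.clip(noisy, 0.0, 1.0)

# Toy usage: a flat grey image degraded with a small separable blur kernel.
x = np.full((64, 64), 0.8)
k = np.outer([0.25, 0.5, 0.25], [0.25, 0.5, 0.25])  # kernel sums to 1
y = simulate_udc_degradation(x, k)
```

A learning-based restoration model would then be trained on such (y, x) pairs; in practice the transmission, point spread function, and noise statistics must be measured from the real display-camera stack rather than assumed.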
We hold this image restoration challenge in conjunction with the RLQ'20 Workshop at ECCV'20. We are seeking efficient, high-performance image restoration algorithms for recovering under-display camera images. There are two tracks; participants are encouraged to submit results to both, but attending only one track is also acceptable. The challenge is hosted on CodaLab; please check the following two tracks:
[Preliminary] Participants can validate their methods on the challenge websites for the two tracks, [Track1-TOLED] and [Track2-POLED], and submit results to the validation server. Results submitted during the validation period are used only for evaluation, not for ranking.
[Compulsory] To officially participate in the challenge, the leader of each team should submit results to the testing servers [Track1-TOLED] [Track2-POLED] and, at the same time, email the following to the organizer Yuqian Zhou (zhouyuqian133@gmail.com). The title of the email should be: UDC ECCV 2020 Challenge - TEAM_NAME. The body of the email should include: a) the challenge name, b) the team name, c) the team leader's name and email address, d) the rest of the team members, e) any team members who are also RLQ or UDC organizers, f) the team name and user names on the UDC CodaLab competitions, g) the executable/source code, attached or as download links, h) the factsheet, attached, and i) a download link to the results for all of the test frames. The executable/source code should include trained models or necessary parameters so that we can run it and reproduce the results, along with a README or description explaining how to execute it. The factsheet must be a compiled PDF file.
[Optional] Participants are also encouraged to submit, via the CMT, a paper describing their competition algorithm and any related experiments conducted during the challenge. Both high-performance and novel methods are considered important contributions. Please follow the ECCV'20 format guidelines.
Yuqian Zhou, David Ren, Neil Emerton, Sehoon Lim, Tim Large. "Image Restoration for Under-Display Camera." arXiv preprint arXiv:2003.04857 (2020).