Brief introduction to the challenge

Highly reflective and transparent objects (hereafter "challenging objects", e.g., glasses and mirrors) are very common in our daily lives, yet their unique visual and optical properties severely degrade the performance of many computer vision algorithms in practice.
As shown in Fig. 1, a segmentation algorithm may wrongly segment the reflected instances inside a mirror (Fig. 1(a)), or segment the instances behind a glass pane without being aware that they are actually behind the glass (Fig. 1(b)). Many 3D computer vision tasks (e.g., depth estimation and 3D reconstruction) and 3D cameras also suffer from these challenging objects due to their optical properties (Figs. 1(c-d)). Such objects limit the industrial application of scene-understanding algorithms, so it is essential to detect and segment them.
This challenge provides a challenging-object semantic-segmentation dataset that contains both highly reflective and transparent objects (e.g., mirrors, mirror-like objects, drinking glasses, and transparent plastic). The dataset allows researchers to make substantial progress on challenging-object segmentation, and it can also advance related research areas (e.g., robot grasping and 3D pose estimation).

Important Dates

  • 1. Registration Opens: April 12th, 2021 11:59 PM UTC
  • 2. Training Dataset Available: April 12th, 2021 11:59 PM UTC
  • 3. Stage one
    • a) Testing Dataset Available: April 26th, 2021 11:59 PM UTC
    • b) Submission Deadline: July 14th, 2021 11:59 PM UTC
  • 4. Stage two
    • a) Testing Dataset Available: July 19th, 2021 11:59 PM UTC
    • b) Submission Deadline: July 20th, 2021 11:59 PM UTC
  • 5. Release of the evaluation result: July 30th, 2021 11:59 PM UTC
  • 6. GC sessions, Winners Announcement: September 20th, 2021

Rules for participation

Participants are subject to the following rules:

  • a) You may not sign up from multiple accounts, and therefore you may not submit from multiple accounts.
  • b) Privately sharing code or data outside your team is not permitted. Sharing code is allowed only if it is made available to all participants on the forums.
  • c) The maximum team size is 3.

Criteria for judging a submission

Mean Intersection-over-Union (mIoU) is the evaluation criterion. An Intersection-over-Union (IoU) score is computed for each object category, and the mIoU is the average of these per-category IoU scores. Submissions are ranked by mIoU.
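For reference, the sketch below shows one way to compute per-category IoU and mIoU from integer label maps. It is a minimal illustration, not the official evaluation script: the function name, the class-indexing scheme, and the decision to skip categories absent from both maps are all assumptions.

    import numpy as np

    def mean_iou(pred, gt, num_classes):
        """Illustrative mIoU: the average of per-category IoU scores.

        pred, gt    : integer label maps of identical shape
        num_classes : number of object categories (assumed labels 0..num_classes-1)
        """
        ious = []
        for c in range(num_classes):
            pred_c = (pred == c)
            gt_c = (gt == c)
            union = np.logical_or(pred_c, gt_c).sum()
            if union == 0:
                continue  # category absent from both maps; skipped here (an assumption)
            intersection = np.logical_and(pred_c, gt_c).sum()
            ious.append(intersection / union)
        return float(np.mean(ious))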

Description

This challenge contains two test stages. Each stage uses a different test dataset for evaluation, and each stage has its own leaderboard. The final mIoU score is the weighted sum of the mIoU scores from the two stages: the stage-one mIoU is weighted by 0.3 and the stage-two mIoU by 0.7.
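As a concrete illustration of the weighting (the function name and example values below are hypothetical):

    def final_score(miou_stage1, miou_stage2):
        # Weighted sum as stated above: 0.3 for stage one, 0.7 for stage two.
        return 0.3 * miou_stage1 + 0.7 * miou_stage2

    # e.g., 0.60 at stage one and 0.70 at stage two:
    # 0.3 * 0.60 + 0.7 * 0.70 = 0.18 + 0.49 = 0.67
    print(final_score(0.60, 0.70))  # 0.67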