Evaluation Criteria
Participants are required to output a single non-repeating subtitle sequence for each video. Since adjacent frames of a video may carry the same subtitle, participants must deduplicate these repetitions and concatenate the remaining subtitles in order (a sketch of this step follows).
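As an illustration only, not the organizers' code, here is a minimal sketch of collapsing consecutive duplicate frame-level subtitles; the function name and the per-frame input format are assumptions:

```python
def deduplicate_subtitles(frame_subtitles):
    """Collapse runs of identical subtitles from adjacent frames.

    frame_subtitles: per-frame subtitle strings in temporal order (assumed format).
    Returns the deduplicated subtitle sequence for the video.
    """
    deduped = []
    for text in frame_subtitles:
        # Keep a subtitle only if it differs from the previous frame's subtitle.
        if not deduped or text != deduped[-1]:
            deduped.append(text)
    return deduped

# Example: three frames share the first subtitle, two share the second.
print(deduplicate_subtitles(["你好", "你好", "你好", "再见", "再见"]))
# -> ['你好', '再见']
```

Note that this collapses only exact adjacent repetitions; noisy per-frame recognition outputs may require fuzzy matching, which is beyond this sketch.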
We will evaluate the predicted transcriptions with the character error rate (CER).
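CER is conventionally defined as the edit distance between the prediction and the reference, divided by the reference length. The organizers' scoring script is not published here; the following is a minimal reference sketch under that conventional definition:

```python
def cer(reference, hypothesis):
    """Character error rate: Levenshtein distance / reference length."""
    m, n = len(reference), len(hypothesis)
    # dp[i][j] = edit distance between reference[:i] and hypothesis[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # delete i reference characters
    for j in range(n + 1):
        dp[0][j] = j          # insert j hypothesis characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n] / max(m, 1)  # guard against an empty reference
```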
Note: To avoid ambiguity in the annotations, we perform the following preprocessing before evaluation: 1) English letters are not case sensitive; 2) traditional and simplified Chinese characters are treated as the same label; 3) blank spaces and symbols are removed; 4) illegible videos do not contribute to the evaluation result.
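A sketch of text normalization consistent with rules 1)-3), assuming the third-party opencc package for traditional-to-simplified conversion; the exact character set treated as "symbols" in rule 3) is an assumption, not the organizers' specification:

```python
import re

from opencc import OpenCC  # third-party: e.g. pip install opencc-python-reimplemented

_t2s = OpenCC('t2s')       # traditional -> simplified converter

def normalize(text):
    """Apply preprocessing rules 1)-3) before computing CER."""
    text = text.lower()               # 1) fold English letter case
    text = _t2s.convert(text)         # 2) unify traditional/simplified Chinese
    # 3) drop spaces and symbols; keep digits, Latin letters, and CJK ideographs
    #    (this keep-set is an assumption)
    return re.sub(r'[^0-9a-z\u4e00-\u9fff]', '', text)
```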
Evaluation Plan
On March 12, the organizers will provide the training set with annotations; each track has 50 hours of video data.
Participants are required to develop corresponding models according to the requirements of each track.
On April 22, the organizers will provide the validation set without annotations; each track contains 20 hours of video data.
Participants predict the subtitles of each video and submit the prediction results to the CodaLab website.
The organizers will publish the ranking on the validation set according to the prediction results.
On May 7th, the organizers will provide the test set (containing 5 hours of video data) without annotations.
The top ten participants of each track on the validation set are required, within two days (that is, before May 9th),
to predict the subtitles in the test-set videos and submit the prediction results to the CodaLab website.
The final rank of each track = 50% * rank on the validation set + 50% * rank on the test set.
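For example, under this formula a team ranked 3rd on the validation set and 1st on the test set receives a combined score of 50% * 3 + 50% * 1 = 2, and teams are ordered by this combined score (tie-breaking rules are not specified here).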
The test set will be announced on the official website (icprmsr.github.io) after May 9.
The final ranking of the participants will also be announced on the competition website,
and participants will be notified by email.