How to Participate

Tips

  • 👉 Please refer to our 🐋 TopCoW_Algo_Submission repo on GitHub as a template and guide for the submission process.
  • The validation phase is not used for final evaluation.
    • Please use the validation phase to debug and validate your Docker submission workflow.
    • Better still, test locally with the provided test.sh and use the Try-out Algorithm feature on your algorithm page.
      • Please refer to the README of our submission template repo for more instructions.
  • The final test phases allow only one submission per team per phase.
    • Evaluating the test phases takes much longer than the smaller validation phases (it might take a few hours!).
    • Please plan ahead and do not wait until the last minute for the final test phases.

Account and Team

  • Each participant may be linked to only one account, one team, and one method per track-task.
    • We will exclude duplicate accounts and submissions from the leaderboards.
    • We will exclude anonymous submissions unless we can verify your profile.
  • An organization (lab/company/institute) may split into different teams/submissions.
    • Each team may submit a different approach separately,
    • but only if the methods are sufficiently distinct!
    • We will ask you for clarification if we suspect any team/submission splitting.

Usage of Training Data

Participants may use any other public datasets or private in-house data, or modify the supplied TopCoW 2024 training data, provided that any additional or modified training datasets are disclosed in the description of the submitted algorithm (please see "Publication Policy" below).

Members of the Organizing Institutes

Members of the organizers' direct research groups may participate, and their results may be included in the publications and on the leaderboard. However, they are not eligible for awards.

Award and Result Announcement Policy

The top three teams for segmentation performance in each of the two tracks and three tasks will be publicly named and will receive a certificate, along with a Swiss wooden toy cow 🇨🇭🐄 as a souvenir, at the in-person challenge event.

All participants/teams are invited to prepare a 4-minute presentation/video for the challenge session to present and discuss their methods.

After the public announcement, a detailed analysis of the submitted results will be available upon request.


🏆 📰 Publication Policy

The challenge results will be summarized and published in a journal manuscript. All participants with a (reasonable) submission are invited to contribute to our challenge publication!

In order for us to include you in our paper, please:

  1. Fill out a contact form + a short questionnaire regarding your algorithm submission before the MICCAI event. A link will appear here soon.
We will prepare the questionnaire as a survey form. Stay tuned!

The above are the minimal requirements. You are of course welcome to send us additional paragraphs + figures about your submission.

NOTE: this is required for co-authorship even if you have been announced as a winner or have presented at the event (for the academic value of the summary paper).

👉 Submit BEFORE the MICCAI event on Oct 06 📅

Each submission can have at most three co-authors on the challenge paper. Additional authors from the top submissions may be included upon request, with justification, according to the ICMJE authorship guidelines.

Participating teams may publish their results separately, without any publication embargo.


Last updated on Sep 06, 2024