FAQ

1. How do I match the MRA-CTA pair during the test phase?

The grand-challenge algorithm environment is designed so that only one set of inputs is provided to the algorithm container. In our challenge, that set of inputs is a pair of CTA and MRA images from the same patient, and we have taken care of the matching for you.

Please refer to the documentation of our 👉 TopCoW_Algo_Submission 🐋 repo on GitHub.

2. Is the region of interest (ROI) provided during the test phase as well?

The ROIs of the test set are NOT available to the participants, but are used by our evaluation to calculate the metrics.

The segmentation results will be evaluated only for the CoW region of interest (ROI). We will not assess the segmentation performance on the peripheral and further downstream vessels outside the CoW ROI. Participants should focus on segmenting the CoW vessel components necessary to characterize the CoW angio-architecture.

3. Is the input during the test phase the whole image or the ROI view?

The input available to your algorithm during the test phase is the pair of whole images.

For each test case, we present to your algorithm the whole-brain images of both CT and MR modalities from that patient, i.e. the CTA-MRA pair, regardless of whether your algorithm uses both modalities. If you only need one of them, simply ignore the other modality input.

4. How do you weight the metrics and how will the final ranking be done?

For the MICCAI in-person event and its results announcement, we will simply use an equal weighting of the metrics mentioned on our "Assessment" page. Please visit our GitHub repo 👉 TopCoW_Eval_Metrics 📐 for the evaluation metrics of the tasks. (If we update any metrics, we will announce it in the forum, on the webpages, and in the GitHub README.)

The ranking announced for the event and awards will thus be based on the leaderboard displayed on grand-challenge. The leaderboard uses equal weights for each column (for example, "mean position" is the mean of several columns' positions/ranks), i.e. a 'rank then average' scheme.
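As a minimal sketch, 'rank then average' works like this (the team names and scores are hypothetical, and this sketch ignores ties, which the real leaderboard also handles):

```python
# Hypothetical per-metric scores; higher is better for each metric column.
scores = {
    "team_a": [0.90, 0.75],
    "team_b": [0.85, 0.80],
    "team_c": [0.70, 0.60],
}

def rank_then_average(scores):
    """Rank teams per metric column (1 = best), then average the ranks."""
    teams = list(scores)
    n_metrics = len(next(iter(scores.values())))
    positions = {t: [] for t in teams}
    for m in range(n_metrics):
        # Rank the teams on metric m; higher score means a better (lower) rank.
        ordered = sorted(teams, key=lambda t: scores[t][m], reverse=True)
        for pos, t in enumerate(ordered, start=1):
            positions[t].append(pos)
    # The "mean position": lower is better in the final ranking.
    return {t: sum(p) / len(p) for t, p in positions.items()}

print(rank_then_average(scores))
```

Here team_a wins metric 1 but team_b wins metric 2, so both end up with a mean position of 1.5, illustrating why ties must be broken by an additional rule in practice.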

We will summarize the results and perform post-challenge analysis, in which additional metrics and more advanced ranking analyses will be introduced.

5. Can I take part in just one track and one task?

Yes, definitely. You are welcome to submit to any track or task of your preference. There are 6 rankings in the end.

6. Submission Confidentiality and Availability

According to grand-challenge, "Challenge organisers do not have access to the algorithms. Challenge organisers only have access to the algorithms logs and predictions for the cases in the challenges test/training archive. Challenge organisers cannot use the algorithm on any other cases. Algorithm owners can see who has access to use their algorithm in the Users tab on their algorithm page. No one is ever given access to the algorithms container image."

Having said that, please contact us with your team's information and contact details, so we can reach out to you for the MICCAI event and follow-up publications. (We organizers cannot see your profile's email due to GDPR.)

7. Common pitfalls in failed submissions

  • Watch out for "Time limit exceeded" errors (testing your Docker container locally before submission helps)
  • During inference, cast/convert the input images to float before passing them to your model
    • this prevents the following RuntimeErrors:
      • Input type (torch.cuda.ShortTensor) and weight type (torch.cuda.HalfTensor)
      • result type Float can't be cast to the desired output type Short
  • KeyError: '13': there is no label 13. The labels are 0-12, and then 15.
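A minimal sketch of the float-casting fix and the valid label set, assuming the input arrives as a NumPy array of an integer dtype such as int16 (e.g. via SimpleITK.GetArrayFromImage); the helper name `prepare_input` is ours, not part of the submission template:

```python
import numpy as np

# The CoW segmentation labels: 0-12, then 15 (there is no label 13 or 14).
VALID_LABELS = set(range(13)) | {15}

def prepare_input(image_array: np.ndarray) -> np.ndarray:
    """Cast an integer-typed image volume to float32 before inference.

    CTA/MRA voxels are often stored as int16, which becomes a
    torch.ShortTensor and clashes with float model weights, producing
    errors like "Input type (torch.cuda.ShortTensor) and weight type
    (torch.cuda.HalfTensor)".
    """
    return image_array.astype(np.float32)
```

Casting once at the input boundary is simpler than sprinkling dtype conversions throughout the model code.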

Last updated on Sep 06, 2024