    #756
    20Years
    Member

    Dear Kurt, Is it possible to post the ground truth crack lengths of the validation specimens now? Thanks!

    #760
    kjd02002
    Moderator

    Dear 20Years,

    We will be sharing measured crack lengths for the validation data within the next few days.

    Final team results (scores and ranks) will be posted on 18 August 2019. This is to allow time for the results review and preliminary-winner review process.

    Please note: preliminary winners will be contacted soon.

    #764
    20Years
    Member

    Thanks Kurt! Looking forward to it.

    #768
    cranthena
    Member

    I wonder if the ground truth / measured crack lengths for the validation data have been posted anywhere yet, as per the message above?

    #769
    realai
    Member

    Dear Kurt,
    We have the same question; please post the ground truth crack lengths. Thanks!

    #770
    kjd02002
    Moderator

    Sorry for the delay in posting the actual crack lengths for T7 and T8. They will be posted on a results page soon, but I am sharing them here in the interim:

    Specimen | Cycle | Crack length (mm)
    T7 | 36001 | 0
    T7 | 40167 | 0
    T7 | 44054 | 2.07
    T7 | 47022 | 3.14
    T7 | 49026 | 3.56
    T7 | 51030 | 4.13
    T7 | 53019 | 5.05
    T7 | 55031 | 7.22
    T8 | 40000 | 0
    T8 | 50000 | 0
    T8 | 70000 | 0
    T8 | 74883 | 1.94
    T8 | 76931 | 2.5
    T8 | 89237 | 3.71
    T8 | 92315 | 3.88
    T8 | 96475 | 4.61
    T8 | 98492 | 4.96
    T8 | 100774 | 5.52
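
    For teams that want to sanity-check their own predictions against these values, here is a minimal Python sketch. Note that the challenge's official scoring metric is not restated in this thread, so plain RMSE is used purely as an illustrative stand-in, and the function name is hypothetical:

    import numpy as np

    # Ground-truth crack lengths from the table above: specimen -> {cycle: mm}.
    TRUTH = {
        "T7": {36001: 0.0, 40167: 0.0, 44054: 2.07, 47022: 3.14,
               49026: 3.56, 51030: 4.13, 53019: 5.05, 55031: 7.22},
        "T8": {40000: 0.0, 50000: 0.0, 70000: 0.0, 74883: 1.94,
               76931: 2.50, 89237: 3.71, 92315: 3.88, 96475: 4.61,
               98492: 4.96, 100774: 5.52},
    }

    def rmse(predictions, specimen):
        """RMSE (mm) of predicted crack lengths at the measured cycle counts.
        `predictions` maps cycle count -> predicted crack length in mm."""
        errors = [predictions[c] - actual for c, actual in TRUTH[specimen].items()]
        return float(np.sqrt(np.mean(np.square(errors))))

    # Example: score a constant 3 mm guess at every T7 measurement point.
    guess = {cycle: 3.0 for cycle in TRUTH["T7"]}
    print(f"T7 RMSE for a constant 3 mm guess: {rmse(guess, 'T7'):.2f} mm")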

    #772
    20Years
    Member

    Thanks Kurt!

    When the final results are posted, it would also be interesting to see the scores achieved by each team on the two validation datasets, T7 and T8, respectively.

    #773
    tsinghua_wcy
    Member

    Thanks Kurt!

    According to the posted final results, the ground truth may have been leaked publicly beforehand. Reproducibility should be double-checked before the winners are announced.

    #774
    kjd02002
    Moderator

    Dear tsinghua_wcy,

    Thank you for your message. I appreciate that you’ve brought this to our attention in the interest of a fair competition.

    Please note that preliminary winners of the data challenge must submit explanations of their models, as well as journal papers, before the committee approves the final competition winners. Therefore, submitting the predictions that yield the best score will not, on its own, guarantee a team a top-3 position in the competition.

    We have been working to validate the approaches of all potential winners over the past two weeks, which is why the final results have not yet been posted.

    #775
    tsinghua_wcy
    Member

    Thanks Kurt and the committee for your work. Looking forward to seeing the final results!

    Some tools you might consider for checking reproducibility, which would be better than manuscripts alone:
    1. CodaLab Worksheets: https://worksheets.codalab.org/
    2. Jupyter Notebook
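
    To make the idea concrete, a self-contained submission could be as small as the skeleton below. This is only a sketch: the file names, the CSV layout, and the degree-2 polynomial are hypothetical placeholders for whatever data and model a team actually used; the point is that pinned seeds plus a single entry point let reviewers rerun everything end to end:

    """Hypothetical skeleton of a self-contained, reproducible submission."""
    import random

    import numpy as np

    SEED = 42  # pin every source of randomness so reruns match exactly

    def main():
        random.seed(SEED)
        np.random.seed(SEED)

        # 1. Load the challenge data (file name and layout are placeholders).
        data = np.loadtxt("validation_T7.csv", delimiter=",", skiprows=1)
        cycles, lengths = data[:, 0], data[:, 1]

        # 2. Fit a model; a degree-2 polynomial stands in for the real method.
        coeffs = np.polyfit(cycles, lengths, deg=2)

        # 3. Write predictions so reviewers can diff them against the submission.
        preds = np.polyval(coeffs, cycles)
        np.savetxt("predictions_T7.csv", np.column_stack([cycles, preds]),
                   delimiter=",", header="cycle,predicted_mm", comments="")

    if __name__ == "__main__":
        main()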

    #776
    SyRRA
    Member

    Dear Kurt,

    We’re still waiting to see the final rankings, just to know how well we did compared to the others.
    You mentioned August 18th, and 8 days have passed since then. When do you think you can post the final results?

    Best

    #780
    20Years
    Member

    +1
    How about posting the original scores/rankings first, if more time is still needed to evaluate the winners? Thanks!

    #781
    kjd02002
    Moderator

    Final results are still pending validation/review. However, here are the preliminary results, which will be posted to the website soon:

    RANK | TEAM | SCORE
    1-6 | Angler, KoreanAvengers, pyEstimate, Seoulg, SLUNG, ValyrianAluminumers (alphabetical order) | Not yet announced

    7 | trybest | 24.39
    8 | tsinghua_wcy | 29.72
    9 | HIAI | 31.61
    10 | LDM | 34.97
    11 | PHiMa | 43.46
    12 | TeamBlue | 46.32
    13 | HITF | 48.17
    14 | JCHA | 52.54
    15 | ChoochooTrain | 53.83
    16 | NaN | 55.94
    17 | realai | 58.95
    18 | beat_real | 59.00
    19 | 20Years | 72.56
    20 | Runtime_Terror | 78.57
    21 | CAPE_HM301 | 88.67
    22 | RRML | 96.04
    23 | SyRRA | 128.86
    24 | 553 | 153.86
    25 | ACES | 165.68
    26 | Cracker | 189.35
    27 | FMAKE | 192.39
    28 | Xukunv | 194.88
    29 | NTS01 | 199.43
    30 | cranthena | 239.02
    31 | _______ | 243.54
    32 | Apostov | 253.86
    33 | TeamKawakatsu | 282.46
    34 | U-Q | 526.79
    35 | ISNDE | 594.61
    36 | UTJS-1 | 621.93
    37 | ISX-MPO | 755.49
    38 | UBC-Okanagan | 1140.69
    39 | NukeGrads | 2175.56
    40 | NUTN_DSG_TW | 2381.42
    41 | GTC | 5513.57
    42 | TWT | 6346.47
    43 | dataking | 6346.47
    44 | Arundites | 16571.80
    45 | LIACS | 86762.72
    46 | TPRML | 10^7
    47 | Mizzou | 10^19
    48 | DSBIGINNER333 | > 10^100

    #784
    realai
    Member

    What about asking everyone for a self-contained Jupyter notebook, so we can have a ranking where all teams have released their code? Thanks!
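
    For what it's worth, re-running such notebooks can be automated. A minimal sketch, assuming each team submits a folder containing one notebook (the submissions/ layout and the solution.ipynb name are hypothetical):

    """Batch-verify submitted notebooks by re-executing them in fresh kernels."""
    import pathlib
    import subprocess

    for nb in sorted(pathlib.Path("submissions").glob("*/solution.ipynb")):
        # `jupyter nbconvert --execute` runs the notebook top to bottom;
        # a nonzero exit code means it is not actually self-contained.
        result = subprocess.run(
            ["jupyter", "nbconvert", "--to", "notebook", "--execute",
             "--output", "executed.ipynb", str(nb)],
            capture_output=True, text=True,
        )
        print(f"{nb.parent.name}: {'OK' if result.returncode == 0 else 'FAILED'}")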

    #1685
    _______
    Member

    Hello Kurt,

    Since not everyone can come to the conference, could there be a way for the other competitors to share their ideas and code? By the way, the email address for the student poster session seems to be unavailable now.

    Thank you

  • The forum ‘2019 Data Challenge: General Competition Questions’ is closed to new topics and replies.