Guidelines
A Grand Challenge proposal should include:
- A brief introduction to the challenge;
- The expected impact and significance of the challenge in the research and/or industry community;
- Details of the rules for participation;
- Details of the datasets that will be used, and the state of preparedness of any newly constructed datasets;
- The criteria that will be used to evaluate the submitted systems;
- Details of the challenge organisers, including names, affiliations, and short bios;
- The full challenge schedule, including dates for data release, the submission deadline, and the announcement of results;
- A list of potential participants (indicating confirmed participants if applicable).
Please submit your Grand Challenges proposals here.
- Submission specification: PDF format, file size < 20 MB.
- Please name your file in the following format: <Author Last Name>_<Author First Name>_<Grand Challenges Name>.pdf (e.g., Smith_John_Speech_Coding.pdf).
- If you would like to update your proposal, please upload a new file. The latest version of your uploaded file will be used.
In addition:
- Participation: Grand Challenge organizers should facilitate participation, communication, and impact. Grand Challenge organizers are not allowed to participate in the competition they are organizing.
- Dataset: If applicable to the challenge, organizers should make the competition datasets available to the participants. Organizers are encouraged to provide at least one training dataset with both input and ground truth, and one test dataset without the ground truth; the latter is to be used for final assessment. A secondary test set for evaluation, withheld from participants, is also encouraged to demonstrate the generalizability of the participants' approaches. Organizers are further encouraged to include a standard algorithm/model to serve as a baseline and for illustrative purposes.
- Evaluation: How the results will be evaluated and ranked should be announced along with the challenge description. During the competition, participants and organizers must follow the challenge's criteria carefully. The evaluation methods should be unbiased and transparent to all participants. Regular updates of the rankings are also encouraged, via management platforms such as Kaggle or Piazza, as is the provision of baseline approaches and evaluation metrics. Each Signal Processing Grand Challenge (SPGC) will have around 2-3 months to run the competition and rank/select the winning teams.
- Prize Award: Organizers may provide or solicit prize money for their grand challenge winners, with awards presented during APSIPA ASC 25 in Singapore. The APSIPA organizing committee will not be able to provide monetary awards to the winning teams.
- If organizers wish to submit a challenge overview and the winning teams' papers to the APSIPA ASC 25 proceedings, these papers will also be subject to peer review and must be submitted for review by 30th August 2025. We can extend our early bird registration to this group of grand challenge authors.
- Publication: Authors of winning papers will also be invited to submit a full paper to the open-access APSIPA Transactions on Signal and Information Processing.