AI Challenge
Help satellites weather the storm by participating in the 2025 MIT ARCLab Prize for AI Innovation in Space!
Welcome to Phase 1!
Welcome to Phase 1 of the 2025 MIT ARCLab Prize for AI Innovation in Space! You can now submit your code on a rolling basis for evaluation on the Codabench platform.
Check out our devkit docs for the latest information about the competition dataset, baseline model, submission instructions, tutorials, and more. Click below to join our mailing list and stay up-to-date on challenge announcements.
Announcements
February 6, 2025
Dear AI Challenge Participants,
We hope you’re doing well! We wanted to inform you of some important updates regarding the competition:
- Weekly Office Hours: Starting next week, we will be holding weekly office hours on Zoom every Wednesday from 11:00 AM to 11:30 AM ET. To avoid creating any unfair advantage, we request that discussions about potential approaches to the challenge problem are kept to the forums, but office hours will be a great opportunity to ask clarifying and logistical questions (e.g. about the data or issues with code submission). We encourage you to join if you need assistance!
- Platform Transition to Codabench: As of today, we are transitioning from EvalAI to Codabench for the remainder of the competition. As part of this change, you will now need to submit your code using a conda environment. This new platform will better suit our evaluation needs and will provide a smoother experience moving forward.
With this transition, you will no longer need to update Docker images. Instead, you will manage dependencies using an environment.yml file. We will use this file to install all necessary dependencies for your solution. Specifically, we will create the conda environment using micromamba (due to its high performance) and then execute submission.py, which should remain the entry point/controller of your model.
We strongly encourage you to start adapting your solutions to this new environment and testing them locally to ensure that micromamba can correctly install all dependencies. We will provide a more detailed explanation of the submission process in the coming days as we finish migrating the platform.
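For reference, a minimal environment.yml along the lines micromamba expects might look like the sketch below; the environment name, package list, and versions here are placeholders only, and the official template will be documented in the forthcoming submission instructions.

```yaml
# Hypothetical example only — the required packages and the official template
# will be documented on the STORM-AI wiki.
name: storm-ai-submission        # placeholder environment name
channels:
  - conda-forge
dependencies:
  - python=3.10                  # pin the interpreter version you tested with
  - numpy
  - pandas
  - scikit-learn
  - pip
  - pip:                         # pip-only packages go under the pip key
      - torch
```

Before submitting, you can verify locally that the environment resolves with a command along the lines of micromamba create -f environment.yml (the exact invocation we use will be confirmed when the wiki is updated).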
We understand that this transition may cause some inconvenience, especially at this stage in the competition, and we apologize for the disruption. Unfortunately, EvalAI was unable to resolve the evaluation issues we encountered, and we believe Codabench will be much more reliable for your submissions.
What’s Next?
We will be adding detailed documentation about this change and the conda environment setup to the STORM-AI wiki later this week. You can now access the challenge on Codabench here. We will send out another email when the wiki is updated with all of the new submission instructions, but in the meantime feel free to submit your code to Codabench for evaluation if you are comfortable doing so.
Additionally, we will do our best to provide strong support to help you adapt to the new requirements and platform as smoothly as possible. We encourage you to participate actively in the forums and ask any questions you may have.
Thank you for your understanding and continued participation. We look forward to seeing your submissions and hope to see you at our weekly office hours!
Best regards,
Your AI Challenge Organizers
December 17, 2024
Dear challenge participants,
Welcome to Phase 1 of the 2025 MIT ARCLab Prize for AI Innovation in Space! You can now submit your code on a rolling basis for evaluation on the EvalAI platform. You will not be able to see your scores on the leaderboard quite yet, but we should have full functionality on EvalAI within the next few days.
The training dataset, model submission tutorial, development toolkit, and baseline model are available at the following links:
- Dataset: it currently contains initial satellite orbital states between the years 2000 and 2020 along with a 60-day OMNI2 space weather data fragment and 3 days of orbit-averaged thermospheric densities for each initial state. The corresponding 60-day GOES data fragments will be uploaded before the end of the week. We may also add more test cases to increase the size of the training dataset and will keep you posted.
- Development toolkit: the Satellite Tracking and Orbit Resilience Modeling with AI (STORM-AI) DevKit, accessible on GitHub, which includes tutorials, the baseline model, and a high-fidelity orbit propagator.
- Baseline model: this script implements the baseline model that appears on the current EvalAI leaderboard.
- Model submission tutorial: this section of the wiki discusses Docker containers and example submissions for models coded in Python as well as other helpful information about submitting your model to EvalAI for evaluation.
As a reminder, the STORM-AI Wiki Page provides extensive documentation about the challenge dataset, example code, tutorials, EvalAI submission process, supplemental resources, and FAQs.
If you have any questions about the data or development toolkit, please feel free to reach out via our devkit Q&A. For general inquiries, you can email us at ai_challenge@mit.edu.
We’ll keep you informed via email of any changes to the dataset or EvalAI page over the next few days. Good luck!
Best Regards,
Your AI Challenge Organizers
December 11, 2024
Good evening challenge participants,
We hope that you’re ready for the competition phase because the warm-up period is nearly over! Our baseline model will be available on the devkit this weekend after we make some final adjustments, and you will soon have access to our EvalAI page and the full training dataset.
In preparation for Phase 1, we have also provided a new version of the warm-up dataset that is organized into I/O pairs. We hope this will help streamline the process of training your algorithms. The new format is consistent with our evaluation pipeline, so the training data that will be released during Phase 1 will follow this convention as well. Changes from the V1 data format include:
- Satellite geodetic position coordinates for each set of initial orbital elements have been added to the initial state file (wu001_to_wu715-initial_states.csv)
- GOES & OMNI2 data has been split into 60-day chunks so that each initial state has a corresponding GOES input file and OMNI2 input file with the specified file ID (“wu###” for the warmup dataset)
- SWARM density data has been split into 3-day chunks so that each initial state has a corresponding “forecasted” density file for model validation
- SWARM density data columns that do not appear in the required output format have been removed, meaning that for each initial state, your model should output a 3-day density forecast in the same format as the files in the Sat_Density folder
- The STORM-AI Dropbox folder now has the following structure (a short loading sketch follows the note below):
📁 STORM-AI
├── 📁 warmup
│ ├── 📁 v1
│ └── 📁 v2
│ ├── 📁 GOES
│ ├── 📁 OMNI2
│ ├── 📁 Sat_Density
│ └── 📝 wu001_to_wu715-initial_states.csv
└── 📁 phase_1
***Please note that the reformatted inputs and outputs still contain gaps due to the nature of the source data***
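To give a rough idea of how the new I/O pairing can be consumed, here is a short Python sketch that loads the OMNI2 input and the SWARM density target for a single file ID from the v2 folder; the filename patterns and the simple gap handling shown are assumptions for illustration, not part of the official format:

```python
from pathlib import Path

import pandas as pd

# Hypothetical filename patterns and gap handling — check the wiki and the
# actual CSV headers for the authoritative v2 format.
ROOT = Path("STORM-AI/warmup/v2")

initial_states = pd.read_csv(ROOT / "wu001_to_wu715-initial_states.csv")


def load_pair(file_id: str):
    """Load the OMNI2 input fragment and SWARM density target for one file ID."""
    omni2 = pd.read_csv(ROOT / "OMNI2" / f"{file_id}.csv")          # filename pattern assumed
    density = pd.read_csv(ROOT / "Sat_Density" / f"{file_id}.csv")  # filename pattern assumed
    # The source data contains gaps; interpolating the numeric columns is one simple option.
    num_cols = omni2.select_dtypes("number").columns
    omni2[num_cols] = omni2[num_cols].interpolate(limit_direction="both")
    return omni2, density


# Example: file IDs for the warm-up dataset look like "wu001".
omni2_df, density_df = load_pair("wu001")
print(omni2_df.shape, density_df.shape)
```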
If you have any questions about the data or development toolkit, please feel free to reach out via our devkit Q&A. For general inquiries, you can email us at ai_challenge@mit.edu.
Best Regards,
Your AI Challenge Organizers
November 15, 2024
Dear challenge participants,
Good evening and welcome to the warm-up phase of the 2025 MIT ARCLab Prize for AI Innovation in Space! Phase 0 launched earlier today, and there are a few key updates and resources that we want to make you aware of.
TL;DR: Competition Overview | DevKit | Dataset | Wiki | Q&A
EvalAI Development Status
As in last year’s competition, we will begin accepting and evaluating your submissions through EvalAI at the launch of Phase 1 in December. We previously announced that the new challenge would be available for you to view on the EvalAI platform during this warm-up period; however, we are still working with the EvalAI team to validate our evaluation pipeline. We’ll give you a heads up when we go live on EvalAI, but in the meantime, we have published a comprehensive challenge overview to the ARCLab website at the link above.
STORM-AI Development Kit
To help you get started on your models, we created the Satellite Tracking and Orbit Resilience Modeling with AI (STORM-AI) DevKit on GitHub. The repo currently includes a transformer-based atmospheric density forecaster based on this paper, which was written by an ARCLab alum and was the original inspiration for this year’s challenge problem. In the coming weeks, we will continue to release tutorials and other components of the devkit, including the code for our baseline model that will appear on the Phase 1 leaderboard. We will notify you of new content via email and do our best to respond to any questions or feedback on the devkit Q&A.
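As a rough illustration of this class of architecture (a generic sketch, not the devkit’s actual implementation), a minimal PyTorch encoder that maps a window of space-weather features to a fixed-length sequence of density estimates could look like the following; the feature count and forecast horizon are arbitrary placeholders:

```python
import torch
import torch.nn as nn


class DensityTransformer(nn.Module):
    """Toy sequence-to-sequence density forecaster (illustrative only)."""

    def __init__(self, n_features: int, d_model: int = 64, n_heads: int = 4,
                 n_layers: int = 2, horizon: int = 72):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)   # embed space-weather features
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, horizon)            # predict `horizon` density steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_features) window of historical observations
        h = self.encoder(self.input_proj(x))
        return self.head(h[:, -1, :])                      # forecast from the final time step


# Example: a batch of 8 windows, each 60 steps of 10 features -> (8, 72) forecasts.
model = DensityTransformer(n_features=10)
forecast = model(torch.randn(8, 60, 10))
```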
Warm-Up Dataset
A portion of the public challenge dataset is currently available for download via Dropbox at the link above. It includes a few years of space weather data and initial satellite states that your models should parse as inputs. There is also a history of orbit-averaged atmospheric density values reported by ESA’s SWARM A satellite, which you may use to validate your model. The challenge data formats should change very little from this preliminary dataset and will be finalized well in advance of the Phase 1 launch date. More detailed information about the data is located in the devkit wiki.
STORM-AI Wiki Page
Extensive documentation about the challenge dataset, example code, EvalAI submission process, and supplemental resources is available on the devkit wiki. The latest long-form tutorials, FAQs, and more will live on the wiki throughout the course of the competition, so refer back to it often!
Thank you for being a part of this year’s AI Challenge! We hope that these resources will provide everything you need to start working with the dataset and developing your prediction models. Feel free to reach out with questions about the data or development toolkit via our devkit Q&A, or for general inquiries, email us at ai_challenge@mit.edu.
Best regards,
Your AI Challenge organizers
November 4, 2024
Hello everyone!
Thank you for your interest in the 2025 MIT ARCLab Prize for AI Innovation in Space!
We are excited to announce that we are launching Phase 0 of our space weather challenge on Friday November 15th. This is a warm-up phase, during which you will be able to view the competition’s EvalAI page, a preliminary GitHub devkit, and a portion of the dataset you will be provided for training your algorithms. Last year’s webpage, devkit, and warm-up dataset are good references for what you can expect.
Code submissions will not be accepted until the launch of Phase 1 in mid-December, but the warm-up phase will give you the opportunity to familiarize yourself with the data types and formats you’ll be asked to work with during the challenge.
Read about the challenge objective, phases, and prize breakdown in our launch announcement or contact the organizers at ai_challenge@mit.edu for more information.
Best regards,
Your AI Challenge organizers
Why Space Weather?
In 2024, solar storms lit up the skies with stunning auroras across the United States. But while these displays are captivating to observers on the ground, space weather has the potential to wreak havoc on our global satellite infrastructure. Geomagnetic storms cause rapid heating in Earth’s thermosphere, which can lead to more than a 10x increase in satellite drag in mere hours. In May 2024, the Gannon storm caused the largest mass migration of satellites in history and severely degraded satellite collision avoidance systems worldwide for multiple days (Parker and Linares, 2024). This challenge tackles the urgent need for more efficient and accurate tracking and orbit prediction capabilities for resident space objects in the increasingly crowded near-Earth environment. As space activities expand, the demand for advanced technologies to monitor and manage satellite behavior becomes paramount.
This year’s challenge objective is to develop cutting-edge AI algorithms for nowcasting and forecasting space weather-driven changes in atmospheric density across low Earth orbit using historical space weather observations. The available phenomenology includes solar and geomagnetic space weather indices, measurements of the interplanetary magnetic field, and measured solar wind parameters, which can be used in conjunction with existing empirical atmospheric density models. Participants are provided with a baseline prediction model and spacecraft accelerometer-derived in situ densities and are tasked with training or creating models to forecast atmospheric density.
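As a concrete but heavily simplified illustration of the task setup, the sketch below fits a plain linear regression from a few space-weather indices to orbit-averaged density and then applies it over a 3-day horizon; the column and file names are placeholders, and real submissions will use the official data formats and, most likely, far richer models:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical column and file names for illustration only — use the actual
# headers and files from the STORM-AI training dataset.
FEATURES = ["f10_7_index", "ap_index", "dst_index"]
TARGET = "orbit_mean_density"

train = pd.read_csv("train_features_and_density.csv")       # placeholder filename
train = train.dropna(subset=FEATURES + [TARGET])

model = LinearRegression().fit(train[FEATURES], train[TARGET])

# Forecast: apply the fitted mapping to a 3-day window of (persisted or
# separately predicted) space-weather inputs for one test case.
future_weather = pd.read_csv("test_case_weather_3day.csv")   # placeholder filename
density_forecast = model.predict(future_weather[FEATURES])
print(density_forecast[:5])
```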
Dataset
You can download the challenge dataset here.
The Satellite Tracking and Orbit Resilience Modeling with AI (STORM-AI) dataset contains a collection of historical orbital elements and satellite atmospheric densities, as well as information on magnetic field, plasma, indices, particles, X-Ray flux, and additional derived parameters. All training data is derived from public data sources distributed by organizations that are not affiliated with the AI Challenge, including the ESA, NASA Goddard Space Flight Center, and NOAA.
The dataset consists of a public challenge dataset that can be used to train and develop AI algorithms and a private evaluation dataset of the same type and format. For valid submissions, algorithm inputs must be limited to the phenomenology and data formats present in the public training dataset, but utilizing additional phenomenology or data sources for model validation and development is allowed and encouraged.
Development Toolkit
The STORM-AI DevKit is accessible on GitHub here. It includes the code for the baseline model that appears on the public leaderboard, a high-fidelity orbit propagator, and more. Additionally, the STORM-AI Wiki Page provides extensive documentation about the challenge dataset, example code, tutorials, Codabench submission process, supplemental resources, and FAQs.
Important Dates
The below timeline is subject to change. We recommend signing up for the challenge mailing list to stay up-to-date on key dates and deadlines.
- November 15, 2024: Warm-up phase starts.
- December 17, 2024: Phase 1 of the competition starts. Submissions accepted on a rolling basis.
- March 17, 2025: Phase 1 ends. Top 10 finalists are notified of advancement to Phase 2.
- April 14, 2025: Phase 2 ends. Technical report deadline.
- May 16, 2025: Winners announced.
Prizes
We offer 10 prizes with a total value of USD 25,000 in cash and travel expenses for three presenters to share their results at a technical meeting. Terms and conditions apply. Here is the prize breakdown:
- First place*: USD 10,000 in cash and a trip for one team member to present their results at a technical meeting.
- Second place*: USD 5,000 in cash and a trip for one team member to present their results at a technical meeting.
- Third place*: USD 3,000 in cash and a trip for one team member to present their results at a technical meeting.
- Seal of Excellence (4th – 10th)*: USD 1,000 in cash.
*Terms and conditions: Expenses for travel and accommodations may be reimbursed for one person from each of the first, second, and third place teams. Airfare is reimbursable for economy class and U.S. Flag carrier airlines only. Travelers must submit a budget for approval prior to the trip. Travelers must provide comparison airfare if their trip exceeds the bounds of one day prior to and one day following the designated trip dates. Expenses will be reimbursed after the trip is complete. Cash awards are taxable, and automatic tax withholding will be carried out for nonresidents, while a 1099 will be issued for U.S. residents. Taxes for domestic payments are subject to MIT rules. Taxes for international payments (payments to non-U.S. citizens, including human subjects and recipients of student prizes or awards) are subject to a mandatory 30 percent tax withholding per U.S. government regulations. For some international awards, a reduced cash prize must be awarded due to MIT regulations. All cash prizes will be awarded after the technical meeting. All cash awards are subject to MIT policies and any relevant government policies.
Citations
The challenge dataset contains multiple data sources and should be credited in accordance with the policies of the original data providers. Please refer to the Dataset and Resource sections of the wiki for more information on how to cite the 2025 AI Challenge and the STORM-AI dataset.
Contact Us
For general questions about the challenge, please contact the organizers at ai_challenge@mit.edu. If you have any questions regarding our development kit, you may submit them to our GitHub discussion forum.
Acknowledgement
Research was sponsored by the Department of the Air Force Artificial Intelligence Accelerator and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Department of the Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
© 2024 Massachusetts Institute of Technology.
2024 Leaderboard
Check out the 2024 competition here, and learn more about the results here.
Rank | Team | Phase I Score (F2, norm) | Phase II Score (Q) | Final Score (CS) |
1 | Hawaii2024 | 0.994 | 0.827 | 0.960 |
2 | Millennial-IUP | 1.000 | 0.713 | 0.943 |
3 | QR_Is | 0.979 | 0.787 | 0.941 |
4 | MiseryModel | 0.987 | 0.753 | 0.940 |
5 | K-PAX | 0.951 | 0.653 | 0.892 |
6 | Go4Aero | 0.952 | 0.640 | 0.890 |
7 | FuturifAI | 0.963 | 0.520 | 0.874 |
8 | Astrokinetix | 0.875 | 0.627 | 0.826 |
9 | Colt | 0.935 | 0.293 | 0.807 |