Registration
14th March - 3rd April
Declaration
Before R1
Prepare
Now - 25th March
Round 1
25th March - 8th April
Results
10th April
Finale
25th-26th April
Welcome!

Join the Discord Community
All announcements, mentor access, and team matching happen here.
Step 1
How will you compete?
Choose solo or team before starting the assessment.
How team selection works
- Only one person (the team lead) fills the team form. Teammates are added by their email addresses.
- If your teammate has already added you to their team, this screen will update automatically — you don't need to do anything.
- Each teammate must have their own account (they need to register with their email first).
- ⚠️ Once confirmed, teams cannot be changed. Solo is locked for Round 1 only.
Solo Warrior
Compete individually. You'll work and submit on your own.
Team Up
2–3 members. Only the team lead fills this form.
PROBLEM STATEMENT
Round 1 — Problem Statement
The Task
Build a complete, real-world OpenEnv environment that an AI agent can learn from through the standard step() / reset() / state() API.
Key Requirements at a Glance
Must simulate a real-world task (not games or toys)
Implement full OpenEnv spec: typed models, step()/reset()/state(), openenv.yaml
Minimum 3 tasks with agent graders (easy → medium → hard, scores 0.0–1.0)
Meaningful reward function with partial progress signals
Baseline inference script with reproducible scores
Deploy to Hugging Face Spaces + working Dockerfile
README with environment description, action/observation spaces, setup instructions
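One way to read the "partial progress signals" requirement: the reward should not be all-or-nothing. A minimal sketch — the notion of "checks" and the 0.8/0.2 weighting are this sketch's own assumptions, not part of the spec:

```python
# Illustrative reward shaping with partial-progress signals.
# The 0.8/0.2 split and the "checks" abstraction are assumptions for this
# sketch, not part of the OpenEnv spec or the hackathon rules.
def reward(checks_passed: int, total_checks: int, done: bool) -> float:
    """Score in [0.0, 1.0]: dense credit for partial progress plus a
    completion bonus, instead of a binary pass/fail signal."""
    partial = checks_passed / total_checks
    bonus = 0.2 if done else 0.0
    return min(1.0, round(0.8 * partial + bonus, 4))
```

An agent that passes 2 of 4 checks scores 0.4 instead of 0, which gives the learner a gradient to climb.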
Detailed Requirements
Evaluation Criteria
How Judging Works
Pre-Submission Checklist — all must pass or you're disqualified
HF Space deploys
Automated ping to the Space URL — must return 200 and respond to reset()
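You can mirror this check yourself before the deadline. A rough sketch — the `/health` and `/reset` routes are assumptions based on typical OpenEnv HTTP servers, so confirm your own Space's routes first:

```python
# Rough local mirror of the automated ping. The /health and /reset routes
# are assumptions, not the official checker's exact behavior.
import json
import urllib.request

def check_space(base_url: str) -> bool:
    """True if the Space returns HTTP 200 and reset() answers a POST."""
    base = base_url.rstrip("/")
    with urllib.request.urlopen(base + "/health") as resp:
        if resp.status != 200:
            return False
    req = urllib.request.Request(
        base + "/reset",
        data=json.dumps({}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200
```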
OpenEnv spec compliance
Validate openenv.yaml, typed models, step()/reset()/state() endpoints
Dockerfile builds
Automated docker build on the submitted repo
Baseline reproduces
Run the submitted inference script — must complete without error and produce scores
3+ tasks with graders
Enumerate tasks, run each grader, verify scores in 0.0–1.0 range
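A pre-flight version of this check you can run on your own graders — the task layout (name mapped to a grader function plus a sample input) is an assumption for this sketch, not a required structure:

```python
# Hypothetical pre-flight check over your own graders. The layout
# {name: (grader_fn, sample_input)} is this sketch's assumption.
def check_graders(tasks: dict) -> bool:
    """Run each grader on a sample input and verify the score range."""
    for name, (grader, sample) in tasks.items():
        score = grader(sample)
        if not isinstance(score, float):
            raise TypeError(f"{name}: grader must return a float")
        if not 0.0 <= score <= 1.0:
            raise ValueError(f"{name}: score {score} outside 0.0-1.0")
    return True
```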
Additional Instructions
Before submitting, ensure the following variables are defined in your environment configuration:
API_BASE_URL: The API endpoint for the LLM.
MODEL_NAME: The model identifier to use for inference.
HF_TOKEN: Your Hugging Face API key.
The inference script must be named `inference.py` and placed in the root directory of the project.
Participants must use the OpenAI client for all LLM calls, using the variables above.
Infra Restrictions
The inference script must finish in under 20 minutes.
Make sure your environment and inference can run on a machine with 2 vCPUs and 8 GB of memory.
Validator
Run the pre-submission validation script before submitting.
Sample Inference Script
Pre-Validation Script
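As a rough guess at what such a script checks — based only on the rules above (file layout and variable names); the official script is authoritative:

```python
# Guess at pre-validation checks derived from the stated rules
# (inference.py at the root, a Dockerfile, and the three env variables).
# The official pre-validation script may check more or differently.
import os
import pathlib

def prevalidate(root: str = ".") -> list[str]:
    """Return a list of problems; an empty list means the basics look OK."""
    problems = []
    root_path = pathlib.Path(root)
    if not (root_path / "inference.py").exists():
        problems.append("inference.py missing from project root")
    if not (root_path / "Dockerfile").exists():
        problems.append("Dockerfile missing from project root")
    for var in ("API_BASE_URL", "MODEL_NAME", "HF_TOKEN"):
        if not os.environ.get(var):
            problems.append(f"environment variable {var} is not set")
    return problems
```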
Submission window opens on 28th March
Study material
Preparatory Course
4 modules · ~3.5 hours
Each module: read the README first, then open the notebook in Colab. No local setup needed.
Module 1: Why OpenEnv?
ESSENTIAL FOR ROUND 1
45 min
Module 2: Using Existing Environments
ESSENTIAL FOR ROUND 1
50 min
Module 3: Deploying Environments
ESSENTIAL FOR ROUND 1
45 min
Module 4: Building Your Own Environment
MOST IMPORTANT FOR ROUND 1
60 min
GUIDE
Round 1 Guide
What to Expect
When Round 1 opens, you'll choose 1 of 4–5 problem statements and build an OpenEnv environment around it.
Example of what a problem statement looks like
"Build a mini-game RL environment with clearly defined tasks, automated graders, and reward logic using the OpenEnv framework."
→ Create a mini-game an AI agent can play
→ Define tasks with increasing difficulty
→ Write graders that verify task completion
→ Define reward logic for scoring
→ Package using OpenEnv for automated evaluation
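A toy shape of those steps — a guess-the-number mini-game following the reset()/step()/state() pattern. The class and field names here are this sketch's own, not the real OpenEnv base classes (Module 4 covers the actual scaffolding):

```python
# Toy mini-game sketching the reset()/step()/state() pattern with a
# partial-progress reward. Names are illustrative; the real OpenEnv
# typed models and base classes differ.
import random

class GuessEnv:
    def __init__(self, low=1, high=100, max_turns=10):
        self.low, self.high, self.max_turns = low, high, max_turns
        self.reset()

    def reset(self):
        """Start a new episode with a fresh secret number."""
        self.target = random.randint(self.low, self.high)
        self.turns = 0
        self.done = False
        return {"observation": "guess a number", "reward": 0.0, "done": False}

    def step(self, action: int):
        """Apply one guess; reward scales with closeness to the target."""
        self.turns += 1
        if action == self.target:
            self.done = True
            obs, reward = "correct", 1.0
        else:
            obs = "higher" if action < self.target else "lower"
            # partial-progress signal: closer guesses earn higher reward
            reward = max(0.0, 1.0 - abs(action - self.target) / self.high)
            self.done = self.turns >= self.max_turns
        return {"observation": obs, "reward": reward, "done": self.done}

    def state(self):
        return {"turns": self.turns, "done": self.done}
```

A grader for this environment could simply replay an episode and check that the agent finished within the turn budget, returning a score in 0.0–1.0.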
Evaluation Criteria
Runtime correctness
Runs without errors
Interface compliance
Follows OpenEnv standard
Task design
Clear, realistic, testable
Grading logic
Reward system makes sense
20,000 → 3,000 teams advance
Prerequisites
Install before April 1st.
Required
Python 3.10+
Install 3.10, 3.11, or 3.12.
Git + GitHub account
Push your submission to GitHub or HF.
Hugging Face CLI
Deploy to HF Spaces.
OpenEnv
The framework.
Google Colab
Prep course runs in Colab. Free tier works.
Docker
Isolated container testing.
Recommended
VS Code
Best Python + Docker support
How to Submit
When Round 1 starts:
Step 1
Application Form
Choose 1 of the 4–5 problem statements revealed on the platform.
Step 2
Scaffold
Generate project structure.
Step 3
Build
Define your environment in the generated files.
Step 4
Test locally
Step 5
Deploy
Step 6
Submit
Paste your HF Spaces URL here before the deadline.
Deadline: 8 April 2026, 11:59 PM IST
Step 2
Submit your Assessment
Complete Step 1 first
Problem Statement is live. Build and submit.
Round 1 begins
Submission window opens on 28th March
Deadline: 8 Apr 11:59 PM
NOTE: Only team leaders can make the final submission.
FAQs
Frequently Asked Questions
How does the team/solo declaration work?
Who should fill the team form?
What if someone already added me to their team?
Can I change my team or switch to solo after confirming?
Do I need to complete the prep course?
What happens during Round 1?
Can I update my submission?
How are submissions evaluated?
What framework must be used?
What happens after Round 1?
What do I need to submit?
Where can I get help?
Need help? Reach out to us
help_openenvhackathon@scaler.com
Submission Deadline: 8th April, 11:59 PM
