Drop by our regular Google Meet: dgh-gqxg-mwn at the following times:
- Tuesday, 8-9pm EST
- Friday, 12-1pm EST
Or contact us to be included in the invite. We'll get you going very quickly.
The microprediction Python client includes several classes, notably MicroCrawler, that make it easy to create a self-navigating algorithm: one that wanders from data stream to data stream and (hopefully) finds examples where it excels.
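The self-navigating idea can be illustrated with a toy loop. This is only a sketch of the wandering pattern under made-up assumptions: the stream names, the `crawl` helper, and the scoring scheme below are all hypothetical, and the real MicroCrawler handles navigation for you.

```python
# Toy illustration of self-navigation: keep a (hypothetical) score per stream
# and revisit streams where the algorithm has done well, while occasionally
# decaying scores so other streams get explored too.
def crawl(streams, performance, steps):
    """Visit `steps` streams, preferring those with the best track record."""
    visits = []
    for _ in range(steps):
        # Pick the stream where our score is currently highest.
        best = max(streams, key=lambda s: performance.get(s, 0.0))
        visits.append(best)
        # Decay the score slightly so the crawler eventually explores others.
        performance[best] = performance.get(best, 0.0) - 0.1
    return visits

order = crawl(['stream_a.json', 'stream_b.json'],
              {'stream_a.json': 0.5, 'stream_b.json': 0.4}, 3)
```

The real client tracks performance on live leaderboards rather than a local dictionary, but the explore/exploit trade-off is the same.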
Here's what you need to do:
Simple, eh? If you have created a good time-series prediction algorithm, or simply know of one, you can modify the crawler's sample method to use it. The repository contains an extensive list of crawler examples, some with their own dedicated tutorials.
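A minimal sketch of what such a sample method might look like, assuming the client's convention of submitting 225 scenario values. Treat the function shape, the constant, and the jitter scheme below as illustrative assumptions rather than the library's exact API; in the real client you would subclass MicroCrawler and override its sample method with logic like this.

```python
import random

NUM_PREDICTIONS = 225  # assumed number of scenario values the client submits

def sample(lagged_values, num=NUM_PREDICTIONS):
    """Return `num` scenario values approximating the distribution of the
    next data point: here, recently observed values plus a little jitter."""
    recent = list(lagged_values)[:25] or [0.0]
    spread = (max(recent) - min(recent)) or 1.0
    rng = random.Random(42)  # fixed seed so this sketch is reproducible
    return sorted(rng.choice(recent) + 0.1 * spread * rng.gauss(0, 1)
                  for _ in range(num))
```

Note the return value is a collection of scenarios, not a single point estimate: the contest scoring is distributional, so a good sample method encodes your uncertainty as well as your best guess.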
See our blog article demystifying some of the live data streams. These are real-world challenges, not textbook examples, and some directly impact operations in real time. There is no better preparation for real-world work, and no better way to impress a future employer.
Here you can:
Ready to start?
Whether you are writing a paper, benchmarking a time-series algorithm, assessing an in-house model, or chasing cash prizes, this site, microprediction.com, exists to help you navigate microprediction.org, where the algorithms battle it out.
See the knowledge center for short video tutorials explaining how to run the default algorithm (which crawls from data stream to data stream) and then improve it.
Here you can remain anonymous forever if that's what you wish. Visit private keys for an explanation.
Use the set_repository method on your crawler to establish a link on the leaderboard back to your code.
Microprediction is about democratized community prediction. But prizes don't hurt. See the competitions listing, and good luck!
Congratulations to Excyst Goose, the most recent recipient of a $1,000 prize. Past winners have received interviews at top-tier hedge funds and fintech companies.
Deploying live algorithms that make distributional predictions and gather data will help you stand out from candidates who make offline point estimates using tabular data prepared by someone else. Remember, you're not in Kaggle anymore.
You can also predict data streams that you have created yourself. This is quite a useful pattern, as you can:
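The publish-your-own-stream pattern can be sketched offline like this. The stand-in `publish` function below takes the place of the client's MicroWriter set call, and the stream name, data source, and helper names are all hypothetical, chosen only to make the sketch runnable without a network connection or write key.

```python
import math

def measurements(n):
    """Stand-in data source: a deterministic, noisy-looking wave."""
    return [round(math.sin(0.1 * i) + 0.01 * ((i * 37) % 11), 4)
            for i in range(n)]

published = []

def publish(name, value):
    # Stand-in for publishing one value to a live stream via the client.
    published.append((name, value))

for v in measurements(5):
    publish('my_sensor.json', v)  # stream names conventionally end in .json
```

In a live deployment the loop would run on a schedule, feeding fresh measurements so that the community's algorithms compete to predict your stream for you.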