The publishing docs explain how to send numbers to microprediction using the Python client, R, or the API directly. This act initiates a fight between hundreds of algorithms, authored by different people, that watch microprediction.org. If you'd like these algorithms to serve you, the create_a_stream.py example might be the fastest way to get going; there is also a video demonstrating the creation of a data stream in under ten minutes. Once this is done, you can sit back and wait for the algorithms to arrive.
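To give a feel for the shape of the client calls, here is a minimal sketch of publishing a value with the Python client. The write key and stream name are placeholders; creating a stream requires a write key of sufficient difficulty, as described in the docs.

```python
# Minimal sketch of publishing to microprediction with the Python client.
# The write key and stream name below are placeholders.
from microprediction import MicroWriter

mw = MicroWriter(write_key='REPLACE_WITH_YOUR_WRITE_KEY')

# Publish one value; call this on a schedule (e.g., every few minutes)
# to keep the stream alive. Stream names conventionally end in '.json'.
mw.set(name='my_quantity.json', value=3.14159)
```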
No salespeople to talk to, no email address to surrender. This is an experiment you are welcome to join.
A swarm of fiercely competing algorithms can try to predict your model errors.
Algorithms can draw on existing streams or exogenous data.
Predictions improve as new data, models and talent become available.
A key use of this public prediction API is the ongoing performance analysis of private models used on private data. In this pattern, you publish the residuals: the differences between your model's predictions and the revealed truth. Have you ever wondered whether your "noise" is really noise? The sooner you publish it, the sooner you'll discover if there is a signal you are missing.
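As a sketch of that pattern (the prediction and truth values here stand in for your private code, which never leaves your infrastructure):

```python
# Sketch of the residual-publishing pattern: the model and the raw data
# stay private; only the gap between prediction and truth is published.
from microprediction import MicroWriter

mw = MicroWriter(write_key='REPLACE_WITH_YOUR_WRITE_KEY')  # placeholder

def publish_residual(prediction: float, truth: float) -> None:
    # Publish the surprise, not the data or the model itself.
    mw.set(name='my_model_residuals.json', value=truth - prediction)
```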
Finding capable data scientists is difficult. By soliciting predictions that fit your organization's needs, Microprediction can help you uncover the kind of talent you're seeking. Every Microprediction challenge has its own leaderboard, which can help you identify the contributors whose skill sets best fit your organization.
Why wait?
See also the documentation.
Microprediction systematically evaluates and combines competing prediction models against your data at four time horizons, ranging from roughly a minute to an hour ahead. You'll be able to retrieve the community's predictions at each of these horizons.
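Reading is free and requires no key. Here is a sketch, assuming the reader methods exposed by the microprediction package:

```python
# Sketch: pulling a stream back out; reading requires no write key.
from microprediction import MicroReader

mr = MicroReader()
latest = mr.get_current_value(name='my_quantity.json')    # most recent value
history = mr.get_lagged_values(name='my_quantity.json')   # recent history
print(mr.delays)  # the four horizons in seconds (assumed attribute,
                  # per the package's conventions)
```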
See An Introduction to Z-Streams for an explanation of the mechanics, and Tears of Joy, our blog article illustrating how useful z-streams can be. The swarming algorithms don't just predict your data; they also predict how other algorithms will predict your data, feedback that leads to more accurate predictions!
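For instance, assuming the usual z-stream naming convention (z1~&lt;parent&gt;~&lt;delay&gt;.json) and a hypothetical parent stream called my_quantity.json, a z-stream can be read like any other stream:

```python
# Sketch: reading the z1 z-stream derived from a parent stream at the
# longest horizon. The name assumes the 'z1~<parent>~<delay>.json'
# convention; 'my_quantity.json' is a hypothetical parent stream.
from microprediction import MicroReader

mr = MicroReader()
zvalues = mr.get_lagged_values(name='z1~my_quantity~3555.json')
```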