To be a good Bayesian, you have to be well calibrated, meaning that of the events you assign X% probability, about X% should actually occur. You can use a calibration test to assess a person's calibration. A calibration test consists of a relatively long list of independent trivia-like questions. The test-taker gives an answer and a confidence in that answer (the probability that the answer is right). When the test is finished, the results are compiled into a calibration curve showing how the actual percentage right varied with stated confidence (see figure).
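To make the compilation step concrete, here is a minimal Python sketch of how results might be turned into a calibration curve. The `(confidence, was_correct)` data format and the sample numbers are my own assumptions for illustration, not how any real test stores its results:

```python
from collections import defaultdict

def calibration_curve(results):
    """results: list of (confidence, was_correct) pairs,
    e.g. (0.7, True) means 70% confident and the answer was right."""
    buckets = defaultdict(list)
    for confidence, was_correct in results:
        buckets[confidence].append(was_correct)
    # For each confidence level, the fraction of answers that were right.
    return {conf: sum(answers) / len(answers)
            for conf, answers in sorted(buckets.items())}

# Hypothetical results from a very short test.
results = [(0.5, True), (0.5, False), (0.7, True), (0.7, True),
           (0.9, True), (0.9, False), (0.9, True), (0.9, True)]
for conf, frac in calibration_curve(results).items():
    print(f"claimed {conf:.0%} -> actually right {frac:.0%}")
```

Plotting the claimed confidence against the actual fraction right gives the calibration curve; a perfectly calibrated person lands on the diagonal.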

As you can see, the chart makes it easy to tell how well calibrated you are; most people in most situations are overconfident (green line).

When I first read about calibration tests (6 months or so ago), I grew excited about taking such tests online. I thought that if I could see my own calibration, I could fix it. Unfortunately, I couldn't find anything online at the time. I no longer think that calibrating yourself is easy; it seems extremely difficult. But now Alex Loddengaard and I have created a website where people can take calibration tests: Calibrated Probability Assessment. I should note that Tom McCabe also recently created a set of calibration tests.

Currently, I have two tests on the website. They are both OK, but need work. I have also written up a little information on the calibration literature, which also needs work. I hope to improve the website over time and make it more useful. Anyone who wants to help out by creating test questions, writing explanations, or building the website should contact me.

The main complaint about Calibrated Probability Assessment so far has been that the test is too long, which makes it fatiguing. Unfortunately, 50 questions is just about the minimum for a test with 5 confidence levels, because it takes at least 10 questions to assess 90% confidence: at 90% you expect only one wrong answer in 10, so with fewer questions there is almost nothing to measure.
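To spell out the arithmetic, here is a small sketch. I'm assuming the five levels are 50% through 90% in 10% steps, which is my guess at the levels a test like this would use:

```python
# Assumed: 5 confidence levels (50%-90% in 10% steps), equal questions each.
levels = [0.5, 0.6, 0.7, 0.8, 0.9]
questions_per_level = 10  # 5 levels x 10 questions = the 50-question test

for p in levels:
    expected_misses = questions_per_level * (1 - p)
    print(f"at {p:.0%} confidence: expect {expected_misses:.1f} wrong "
          f"out of {questions_per_level}")
# At 90% you expect only 1.0 miss in 10 questions; with fewer questions
# at that level, perfect calibration and gross overconfidence can
# look identical in the results.
```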

I think there might be some workaround for this. I could assume a linear relationship between stated confidence and actual accuracy, or I could set up a program that emails 10 questions a day, but I still have to think through how to do these.
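Here is a rough sketch of the first workaround: fit a single line through all the answers, so a shorter test can borrow strength across confidence levels instead of estimating each level separately. The fitting method (plain least squares on made-up data) is just one way the idea could be implemented:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of ys = a + b * xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Each answered question contributes (stated confidence, 1 if right else 0).
# These numbers are invented for illustration.
confidences = [0.5, 0.6, 0.6, 0.7, 0.7, 0.8, 0.8, 0.9, 0.9, 0.9]
correct     = [1,   0,   1,   1,   1,   1,   0,   1,   1,   0]
a, b = fit_line(confidences, correct)
print(f"fitted: accuracy = {a:.2f} + {b:.2f} * confidence")
# Perfect calibration would give roughly a = 0 and b = 1; a flatter
# slope (b < 1) hints at overconfidence at the high end.
```

The trade-off is that the linearity assumption could hide a curve that is well calibrated in the middle but badly off at the extremes, which is part of why I still need to think it over.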