Why You Can’t Predict Anything Based on the First 10 Races

The plot below shows the cumulative number of cautions per 100 miles since 2007.  I’m using cautions per 100 miles (rather than raw caution counts) to 1) account for races that were not run to completion; 2) compensate for green-white-checkered finishes; 3) compensate for tracks that have shortened their races; and 4) compensate for changes in the order in which tracks are visited.

Cautions per 100 miles can be thought of as follows:  If the rate is 1.6 cautions per 100 miles, then a 500-mile race would average (1.6*500/100) = 8 cautions.
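To make the bookkeeping concrete, here is a minimal sketch of how the cumulative rate can be computed and then converted into an expected caution count for a race of a given length. The race data and function names below are placeholders for illustration, not actual results.

```python
# Minimal sketch of the cautions-per-100-miles metric described above.
# The race data here are made-up placeholders, not actual season results.

races = [
    {"miles": 500.0, "cautions": 8},
    {"miles": 400.0, "cautions": 5},
    {"miles": 267.0, "cautions": 6},   # e.g. a rain-shortened race
]

def cumulative_rates(races):
    """Cumulative cautions per 100 miles after each race."""
    total_miles = 0.0
    total_cautions = 0
    rates = []
    for race in races:
        total_miles += race["miles"]
        total_cautions += race["cautions"]
        rates.append(100.0 * total_cautions / total_miles)
    return rates

def expected_cautions(rate_per_100, race_miles):
    """Convert a cautions-per-100-miles rate into an expected caution count."""
    return rate_per_100 * race_miles / 100.0

print(cumulative_rates(races))
print(expected_cautions(1.6, 500))   # 1.6 * 500/100 = 8 expected cautions
```

Running this prints the cumulative rate after each (made-up) race, plus the expected caution count for a 500-mile race at a rate of 1.6, which works out to 8.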

The results are sort of interesting:

Things to notice:

1)  All of the final values for cautions per 100 miles end up between 1.8 and 2.4, even though the values at the start of the year ranged from 1.4 to 3.1.

2)  The data for the first 10 races change wildly with each race.  They don’t start to converge toward their final values until at least 15 races into the season.  I suspect that if you plotted a driver’s standing in the points as a function of the number of races run, you would see the same behavior.  Why?  As the total number of miles run increases, the cautions from any single race become a smaller and smaller fraction of the cumulative total, so each new race moves the average less and less.  (The simulation sketch after this list illustrates the effect.)

3)  Despite the decreasing fluctuations, there are still quite a few noticeable jumps upward.  When I saw them, I immediately thought:  Ah, there’s Bristol.  But closer inspection showed I was wrong.  The big troublemakers are Richmond and Martinsville, which together account for the largest number of upward jumps.

4)  There seems to be a significant difference in caution rates between 2008/2009 and 2010/2011.  Anyone want to venture a guess as to what is responsible?
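To see why the curves settle down, here is a small simulation sketch. It assumes, purely for illustration, that the caution count in each 400-mile race is Poisson-distributed around a “true” rate of 2.0 cautions per 100 miles; the numbers are made up, but the shrinking influence of each additional race on the cumulative rate is the point.

```python
# Sketch: why the cumulative caution rate settles down over a season.
# Caution counts per race are drawn from a Poisson distribution with a
# made-up mean; the values are illustrative, not real NASCAR data.
import math
import random

random.seed(42)

def sample_poisson(lam):
    """Poisson sampler (Knuth's method), so no external packages are needed."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

race_miles = 400.0              # assume every race is 400 miles, for simplicity
true_rate = 2.0                 # "true" cautions per 100 miles in this toy model
mean_cautions = true_rate * race_miles / 100.0

total_miles, total_cautions = 0.0, 0
for race_number in range(1, 37):        # a 36-race season
    total_miles += race_miles
    total_cautions += sample_poisson(mean_cautions)
    rate = 100.0 * total_cautions / total_miles
    print(f"Race {race_number:2d}: cumulative rate = {rate:.2f} cautions per 100 miles")
```

Early in the season, a single caution-heavy race can shift the cumulative rate noticeably; by the back half of the season the same race barely moves it, which is the flattening you see in the plot.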
