One of my friends has decided to take up CFA, but she hasn't decided whether to take the June or December paper for Level I. Time wasn't the main issue - she was more concerned about the historical passing rate for each paper, and wanted to sit for the one with a higher passing rate.
CFA Institute actually publishes its exam results going all the way back to 1963, so there is no lack of historical data for a proper analysis. However, CFA Institute only started offering two sittings for Level I in 2003, so our sample size is limited to eight years (the December 2011 results are not out yet).
We can use statistics to determine whether there is any significant difference between the June and December passing rates for Level I. Here's the relevant extract of our data, in the format Year: June passing rate / December passing rate.
2003: 42% / 40%
2004: 34% / 36%
2005: 36% / 34%
2006: 40% / 39%
2007: 40% / 39%
2008: 35% / 35%
2009: 46% / 34%
2010: 42% / 36%
The full report can be found here.
For this, the statistical distribution to use would be the Student's t-distribution for paired samples, with the following hypotheses:
H0 : There is no difference between the passing rates.
H1: There is a difference between the passing rates.
The formula to be used is:

t = (XD - μ0) / (sD / √n)

where XD refers to the average of the differences, sD is the standard deviation of the differences, n is the sample size, and μ0 refers to the amount of difference being tested for. Since we are testing whether there is any difference at all, μ0 is zero. Our sample size n is 8, so the degrees of freedom are n - 1 = 7.
So we start by calculating the differences between the passing rates (June minus December), which gives us:
2003: 2%
2004: -2%
2005: 2%
2006: 1%
2007: 1%
2008: 0%
2009: 12%
2010: 6%
The average of the differences, XD, is 2.75, and their standard deviation, sD, is 4.367085. Plugging these into the formula above gives t = 1.781. Since we are testing whether there is any difference at all (as opposed to one rate being larger than the other), this is a two-tailed test. At the 5% level of significance with 7 degrees of freedom, the critical value is 2.365 (refer to the t-distribution table here). Our test statistic must exceed this figure in absolute value in order for us to reject the null hypothesis H0.
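As a sanity check on the arithmetic, the whole calculation can be reproduced in a few lines of Python (a minimal sketch using only the standard library; the differences are in percentage points):

```python
import math

# Differences in passing rates (June minus December), 2003-2010
diffs = [2, -2, 2, 1, 1, 0, 12, 6]

n = len(diffs)                      # sample size, 8
mean_d = sum(diffs) / n             # XD = 2.75
# Sample variance uses n - 1 in the denominator (Bessel's correction)
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
sd_d = math.sqrt(var_d)             # sD ≈ 4.367085
t = (mean_d - 0) / (sd_d / math.sqrt(n))   # μ0 = 0

print(round(mean_d, 2), round(sd_d, 6), round(t, 3))  # 2.75 4.367085 1.781
```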
Since 1.781 < 2.365, there is insufficient evidence to reject the null hypothesis, and we conclude that there is no significant difference between the CFA Level I passing rates for June and December. Of course, the differences in 2009 and 2010 are noticeably bigger, so someone relying on those two years alone would feel compelled to take the June paper. I will update this entry when the December 2011 results are announced.
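For those who would rather not dig through a printed table, the critical value can also be computed directly; a short sketch assuming SciPy is installed:

```python
from scipy.stats import t as t_dist

alpha = 0.05
df = 7
# Two-tailed test: split alpha across both tails, so look up the 97.5th percentile
critical = t_dist.ppf(1 - alpha / 2, df)
print(round(critical, 3))  # 2.365, matching the table value
```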
Update as of 29th Jan 2012:
The December 2011 results for Level I have just been released, and the passing rate was 38%, compared to 39% for June 2011. Redoing the test gives us the following updated values:
XD = 2.556
sD = 4.126473
n = 9
test statistic t = 1.858
critical value (from table) = 2.306
degrees of freedom = 8
Since 1.858 < 2.306, the original conclusion remains unchanged. This could actually have been anticipated without redoing the test, as the additional data point of a 1% difference is clearly not significant.
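The updated test can be redone just as quickly on the raw passing rates; a minimal sketch using Python's standard library `statistics` module:

```python
from math import sqrt
from statistics import mean, stdev

# Level I passing rates (%) for 2003-2011, in year order
june = [42, 34, 36, 40, 40, 35, 46, 42, 39]
december = [40, 36, 34, 39, 39, 35, 34, 36, 38]

diffs = [j - d for j, d in zip(june, december)]
n = len(diffs)                              # 9 paired observations
# stdev computes the sample standard deviation (n - 1 denominator)
t = mean(diffs) / (stdev(diffs) / sqrt(n))  # paired t statistic, μ0 = 0
print(round(t, 3))                          # 1.858, below the critical 2.306
```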