Keywords: committees

Analysis of Committees[1]

The committee is a natural idea for combining several models (or rather, for combining the outputs of several models). For example, we can combine all the models by averaging their predictions:

y_{COM}(x) = \frac{1}{M}\sum_{m=1}^M y_m(x)\tag{1}

However, we want to analyze whether this averaged prediction is better than that of a single model.
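As a minimal sketch, the committee prediction of equation (1) is just an average of the individual models' outputs. The `models` list and its `.predict(x)` interface here are hypothetical, chosen only for illustration:

```python
import numpy as np

def committee_predict(models, x):
    """Committee prediction: the average (1/M) * sum_m y_m(x) over M models.

    `models` is assumed to be any iterable of objects exposing a
    .predict(x) method; this interface is illustrative, not prescribed
    by the text.
    """
    return np.mean([model.predict(x) for model in models], axis=0)
```

Any collection of trained regressors with a common predict interface can be dropped into this average unchanged.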

To compare the committee with a single model, we first need a criterion by which to decide which model is better. Assume that the true generator of the training data is a function h(x) plus noise, so that a target t is produced as:

t = h(x) + \epsilon\tag{2}

Then the prediction of the mth model, for m=1,2,\cdots,M, can be represented as:

y_m(x) = h(x) + \epsilon_m(x)\tag{3}

and the average sum-of-squares error is a natural criterion. For a single model the error is:

\mathbb{E}_x[(y_m(x)-h(x))^2] = \mathbb{E}_x[\epsilon_m(x)^2] \tag{4}

where \mathbb{E}_x[\cdot] denotes the frequentist expectation over x. To make the criterion more concrete, we consider the average of this error over the M models:

E_{AV} = \frac{1}{M}\sum_{m=1}^M\mathbb{E}_x[\epsilon_m(x)^2]\tag{5}

On the other hand, by equations (1) and (3), the committee has the error:

\begin{aligned} E_{COM}&=\mathbb{E}_x\left[\left(\frac{1}{M}\sum_{m=1}^M y_m(x)-h(x)\right)^2\right] \\ &=\mathbb{E}_x\left[\left\{\frac{1}{M}\sum_{m=1}^M\epsilon_m(x)\right\}^2\right] \end{aligned} \tag{6}

Now assume that the random variables \epsilon_m(x) for m=1,2,\cdots,M have zero mean and are mutually uncorrelated, so that:

\begin{aligned} \mathbb{E}_x[\epsilon_m(x)]&=0 \\ \mathbb{E}_x[\epsilon_m(x)\epsilon_l(x)]&=0, \quad m\neq l \end{aligned} \tag{7}

Then, substituting equation (7) into equation (6), the cross terms \mathbb{E}_x[\epsilon_m(x)\epsilon_l(x)] vanish and we get:

E_{COM} = \frac{1}{M^2}\sum_{m=1}^M\mathbb{E}_x[\epsilon_m(x)^2]\tag{8}
According to equations (5) and (8):

E_{COM} = \frac{1}{M}E_{AV}\tag{9}
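Under the uncorrelated-error assumption, the committee error thus drops by a factor of 1/M. This can be checked numerically with a Monte Carlo sketch (the values of M, N, and the seed are arbitrary choices for illustration):

```python
import numpy as np

# Monte Carlo check of E_COM = E_AV / M when the errors epsilon_m(x)
# are zero-mean and uncorrelated (here: independent standard normals).
rng = np.random.default_rng(0)
M, N = 10, 200_000                        # number of models, sample points
eps = rng.normal(0.0, 1.0, size=(M, N))   # independent -> uncorrelated errors

E_AV = np.mean(eps**2)                    # (1/M) sum_m E[eps_m^2]
E_COM = np.mean(eps.mean(axis=0)**2)      # E[((1/M) sum_m eps_m)^2]

# E_COM should be close to E_AV / M for large N.
```

With more samples N, the two estimates agree ever more closely, since the cross terms average out to zero.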
All the mathematics above rests on the assumption that the errors of the individual models are uncorrelated. In practice, however, they are usually highly correlated, and the reduction in error is generally small. Nevertheless, the relation:

ECOMEAV(10)E_{COM}\leq E_{AV}\tag{10}

always holds, as it follows from Jensen's inequality applied to the convex function f(\epsilon)=\epsilon^2. Boosting was later developed to make the combined model more powerful.
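A small sketch shows this behavior with correlated errors: the inequality (10) still holds, but the improvement over a single model is far smaller than the factor 1/M. The correlation model below, a shared noise component plus per-model noise, is an illustrative assumption, not taken from the text:

```python
import numpy as np

# With correlated errors, E_COM <= E_AV still holds, but the committee
# gains much less than the 1/M factor of the uncorrelated case.
rng = np.random.default_rng(1)
M, N, rho = 10, 100_000, 0.9              # models, samples, error correlation
shared = rng.normal(size=N)               # component common to all models
own = rng.normal(size=(M, N))             # component unique to each model
eps = np.sqrt(rho) * shared + np.sqrt(1 - rho) * own
# Each eps_m has unit variance and corr(eps_m, eps_l) = rho for m != l.

E_AV = np.mean(eps**2)
E_COM = np.mean(eps.mean(axis=0)**2)
# Expect E_COM close to rho + (1 - rho)/M, i.e. well above E_AV / M.
```

As rho approaches 1, the committee error approaches the single-model error, which is why diverse (less correlated) models are what makes averaging worthwhile.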


  1. Bishop, Christopher M. Pattern Recognition and Machine Learning. Springer, 2006. ↩︎