Linear Classification
There is a discriminant function $\delta_k(x)$ for each class $k$.
Classification rule: $\hat{G}(x) = \arg\max_k \delta_k(x)$.
In higher-dimensional space the decision boundaries are piecewise hyperplanar.
Remember that the 0-1 loss function led to the classification rule $\hat{G}(x) = \arg\max_k \Pr(G = k \mid X = x)$.
So $\Pr(G = k \mid X = x)$ can serve as $\delta_k(x)$.
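As an illustration, here is a minimal Python sketch of this rule, assuming hypothetical linear discriminants $\delta_k(x) = w_k^\top x + b_k$; the weights and intercepts below are made up for the example:

```python
import numpy as np

# A sketch of the rule G^(x) = argmax_k delta_k(x) with hypothetical
# linear discriminants delta_k(x) = w_k . x + b_k for K = 3 classes.
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, -1.0]])   # made-up weight vectors w_k
b = np.array([0.0, 0.0, 0.5])  # made-up intercepts b_k

def classify(x):
    """Assign x to the class whose discriminant delta_k(x) is largest."""
    deltas = W @ x + b          # delta_k(x) for every class k at once
    return int(np.argmax(deltas))

x = np.array([0.2, 0.7])
print(classify(x))  # -> 1: delta_1(x) = 0.7 beats 0.2 and -0.4
```

With linear discriminants, the boundary between classes $k$ and $\ell$ is the set $\{x : \delta_k(x) = \delta_\ell(x)\}$, a hyperplane, which is why the overall decision boundaries are piecewise hyperplanar.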
Bayes Classifier
The marginal distribution of $G$ is specified as the PMF $p_G(g)$, $g = 1, 2, \ldots, K$.
$f_{X|G}(x \mid G = g)$ gives the conditional distribution of $X$ for $G = g$.
The training set $(x_i, g_i)$, $i = 1, \ldots, N$, consists of independent samples from the joint distribution $f_{X,G}(x, g)$:
– $f_{X,G}(x, g) = p_G(g)\, f_{X|G}(x \mid G = g)$
The loss of predicting $\hat{G}$ for $G$ is $L(\hat{G}, G)$.
Classification goal: minimize the expected loss
– $E_{X,G}\, L(\hat{G}(X), G) = E_X\big(E_{G|X}\, L(\hat{G}(X), G)\big)$
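For 0-1 loss, minimizing the inner expectation $E_{G|X}$ pointwise in $x$ reduces to picking the class with the largest posterior, proportional to $p_G(g)\, f_{X|G}(x \mid G = g)$; the shared normalizer $f_X(x)$ does not affect the argmax. A minimal sketch, assuming hypothetical 1-D Gaussian class-conditional densities and made-up priors:

```python
import numpy as np
from scipy.stats import norm

# A sketch of the Bayes rule under 0-1 loss, assuming hypothetical
# 1-D Gaussian class-conditional densities f_{X|G}(x|g) and priors p_G(g).
priors = np.array([0.5, 0.3, 0.2])   # p_G(g) for g = 0, 1, 2 (made up)
means = np.array([-2.0, 0.0, 2.5])   # class-conditional means
sds = np.array([1.0, 1.0, 1.5])      # class-conditional std devs

def bayes_classify(x):
    """Pick the class maximizing p_G(g) * f_{X|G}(x|g)."""
    # Unnormalized posteriors; dividing by f_X(x) would not change the argmax.
    post = priors * norm.pdf(x, loc=means, scale=sds)
    return int(np.argmax(post))

print(bayes_classify(0.3))  # -> 1: class g = 1 has the largest posterior at x = 0.3
```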