Abstract
We study the problem of learning Bayesian classifiers (BCs) when the true class label of the training instances is not known and is instead replaced, for each instance, by a probability distribution over the class labels. This scenario can arise, e.g., when a group of experts is asked to individually provide a class label for each instance. We particularize the generalized expectation maximization (GEM) algorithm of [1] to learn BCs of different structural complexities: naive Bayes, averaged one-dependence estimators, and general conditional linear Gaussian classifiers. An evaluation on eight datasets shows that BCs learned with GEM outperform those learned with either the classical expectation maximization algorithm or potentially wrong class labels. The BCs achieve results similar to those of the multivariate Gaussian classifier without having to estimate full covariance matrices.
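As a minimal sketch of this setting (not the paper's implementation), the Python snippet below fits a discrete naive Bayes classifier from soft labels by treating each instance's label distribution as fractional counts in the maximum-likelihood estimates; all function and parameter names are hypothetical.

```python
import numpy as np

def fit_naive_bayes_soft(X, label_probs, n_classes, n_values, alpha=1.0):
    """Fit a discrete naive Bayes classifier from soft labels.

    X           : (n, d) array of integer feature values.
    label_probs : (n, k) array; row i is the experts' probability
                  distribution over the k class labels for instance i.
    n_values    : number of distinct values each feature can take.
    alpha       : Laplace smoothing parameter.
    Each instance contributes fractional counts weighted by its
    label distribution instead of a single hard label.
    """
    n, d = X.shape
    # Class priors: expected class counts under the soft labels.
    priors = (label_probs.sum(axis=0) + alpha) / (n + alpha * n_classes)
    # Conditional tables theta[j][c, v] = P(X_j = v | C = c).
    theta = []
    for j in range(d):
        counts = np.full((n_classes, n_values), alpha)
        for v in range(n_values):
            counts[:, v] += label_probs[X[:, j] == v].sum(axis=0)
        theta.append(counts / counts.sum(axis=1, keepdims=True))
    return priors, theta

def predict(x, priors, theta):
    """Return the posterior distribution over classes for one instance x."""
    log_post = np.log(priors)
    for j, v in enumerate(x):
        log_post += np.log(theta[j][:, v])
    post = np.exp(log_post - log_post.max())  # stabilized softmax
    return post / post.sum()
```

This corresponds to a single M-step with the expert-provided distributions held fixed; an EM-style scheme would alternate it with an E-step that recomputes the per-instance class posteriors under the current model before refitting.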
International | Yes
Book Edition |
Book Publisher | Springer
ISBN | 978-3-642-40642-3
Series ISSN | 0302-9743
Book title | Advances in Artificial Intelligence, Lecture Notes in Artificial Intelligence 8109
From page | 139
To page | 148