Abstract
In this paper, we extend the use of moderated outputs to the support vector machine (SVM) by making use of a relationship between the SVM and the evidence framework. The moderated output is more in line with the Bayesian idea that the posterior weight distribution should be taken into account when making predictions, and it also alleviates the usual tendency to assign overly high confidence to the estimated class memberships of test patterns. Moreover, the moderated output derived here can be taken as an approximation to the posterior class probability. Hence, meaningful rejection thresholds can be assigned and outputs from several networks can be directly compared. Experimental results on both artificial and real-world data are also discussed.
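The following is a minimal sketch of the general moderation idea from the evidence framework that the abstract refers to: instead of passing the most-probable activation straight through a logistic output, the output is marginalised over an (approximately Gaussian) distribution of the activation, which pulls uncertain predictions towards 0.5. The function names, the use of the SVM decision value as the activation, and the availability of an approximate predictive variance `a_var` are assumptions for illustration; the paper's exact derivation of that variance for the SVM is not reproduced here.

```python
import numpy as np

def sigmoid(a):
    """Logistic output function."""
    return 1.0 / (1.0 + np.exp(-a))

def moderated_output(a_mean, a_var):
    """Moderated (marginalised) class probability for one test pattern.

    a_mean : most-probable activation (e.g. the SVM decision value) -- assumption
    a_var  : approximate predictive variance of that activation -- assumption

    Uses the standard evidence-framework approximation
        P(class | x) ~= sigmoid(kappa(s^2) * a),
        kappa(s^2) = (1 + pi * s^2 / 8) ** (-1/2),
    which shrinks the output towards 0.5 as the activation becomes more uncertain.
    """
    kappa = 1.0 / np.sqrt(1.0 + np.pi * a_var / 8.0)
    return sigmoid(kappa * a_mean)

# Same activation, increasing uncertainty: the probability moves towards 0.5.
for s2 in (0.0, 1.0, 10.0):
    print(f"variance={s2:5.1f}  P(class|x)={moderated_output(2.0, s2):.3f}")
```

Because the moderated value behaves like an approximate posterior class probability, a fixed rejection threshold (say, reject if the probability is below 0.9) becomes meaningful, and outputs from different classifiers can be compared on the same scale.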
| Original language | English |
| --- | --- |
| Pages (from-to) | 1018-1031 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Neural Networks |
| Volume | 10 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - Sept 1999 |
| Externally published | Yes |
Scopus Subject Areas
- Software
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence
User-Defined Keywords
- Bayesian
- evidence framework
- moderated output
- support vector machine