Moderating the outputs of support vector machine classifiers

J.T.-Y. Kwok

Research output: Contribution to journal › Journal article › peer-review

141 Citations (Scopus)


In this paper, we extend the use of moderated outputs to the support vector machine (SVM) by making use of a relationship between SVM and the evidence framework. The moderated output is more in line with the Bayesian idea that the posterior weight distribution should be taken into account upon prediction, and it also alleviates the usual tendency of assigning overly high confidence to the estimated class memberships of the test patterns. Moreover, the moderated output derived here can be taken as an approximation to the posterior class probability. Hence, meaningful rejection thresholds can be assigned and outputs from several networks can be directly compared. Experimental results on both artificial and real-world data are also discussed.
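As a rough illustration of the idea, the evidence framework moderates a classifier's output by averaging the sigmoid over the posterior weight distribution; a standard approximation (due to MacKay) replaces that average with a sigmoid whose activation is shrunk by the predictive variance. The sketch below is not the paper's derivation for SVMs, just a minimal, self-contained Python illustration that assumes a decision value `f` and a predictive variance `s2` are already available; all names are hypothetical.

```python
import math

def sigmoid(a):
    """Logistic function mapping an activation to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-a))

def moderated_output(f, s2):
    """Moderate a decision value f by the predictive variance s2.

    Uses MacKay's approximation: the Gaussian-averaged sigmoid is roughly
    sigmoid(kappa(s2) * f) with kappa(s2) = (1 + pi * s2 / 8) ** -0.5.
    As the posterior uncertainty s2 grows, kappa shrinks the activation
    toward 0, pulling the estimated class probability toward 0.5 and so
    tempering overconfident predictions.
    """
    kappa = (1.0 + math.pi * s2 / 8.0) ** -0.5
    return sigmoid(kappa * f)

def predict_with_reject(f, s2, threshold=0.8):
    """Treat the moderated output as an approximate posterior class
    probability, so a meaningful rejection threshold can be applied."""
    p = moderated_output(f, s2)
    if max(p, 1.0 - p) < threshold:
        return None  # reject: confidence below threshold
    return 1 if p >= 0.5 else 0

# A confident raw output is tempered when the variance is large:
p_raw = sigmoid(3.0)                  # high confidence, ignores uncertainty
p_mod = moderated_output(3.0, 10.0)   # pulled toward 0.5 by the variance
```

Because the moderated value behaves like a probability, outputs from several such classifiers can be compared on a common scale, which is the practical payoff the abstract highlights.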

Original language: English
Pages (from-to): 1018-1031
Number of pages: 14
Journal: IEEE Transactions on Neural Networks
Issue number: 5
Publication status: Published - Sept 1999
Externally published: Yes

Scopus Subject Areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

User-Defined Keywords

  • Bayesian
  • evidence framework
  • moderated output
  • support vector machine
