The paper by Wang et al. (2005) introduced the fuzzy SVM but did not address how to obtain the membership values. This paper fills that gap.

The paper surveys prior work and reports that one way to assign memberships is to measure the distance between each data point and its class center. For the two class labels $\pm 1$, the training samples $(x_i, y_i)$ are used to find the radius of each class:

$$r_+ = \max_{\{i \,:\, y_i = +1\}} \lVert x_+ - x_i \rVert, \qquad r_- = \max_{\{i \,:\, y_i = -1\}} \lVert x_- - x_i \rVert$$

where $x_+$ and $x_-$ are the means of the $x_i$ in the respective classes. The fuzzy membership of each sample is then given by

$$s_i = \begin{cases} 1 - \dfrac{\lVert x_+ - x_i \rVert}{r_+ + \delta}, & y_i = +1 \\[4pt] 1 - \dfrac{\lVert x_- - x_i \rVert}{r_- + \delta}, & y_i = -1 \end{cases}$$

The above operates in the input space; the small constant $\delta > 0$ prevents $s_i = 0$. It extends directly to a feature space by replacing $x_i$ with $\phi(x_i)$.
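As a concrete sketch, the class-center scheme above takes only a few lines of NumPy (the function name and the `delta` default are illustrative, not from the paper):

```python
import numpy as np

def class_center_membership(X, y, delta=1e-3):
    """Fuzzy membership from distance to the class mean (input space).

    X: (n, d) feature matrix; y: labels in {+1, -1}.
    For each class c, s_i = 1 - ||x_c - x_i|| / (r_c + delta),
    where x_c is the class mean and r_c the class radius (max distance).
    """
    s = np.empty(len(y), dtype=float)
    for c in (+1, -1):
        mask = (y == c)
        center = X[mask].mean(axis=0)          # class center x_c
        dist = np.linalg.norm(X[mask] - center, axis=1)
        r = dist.max()                         # class radius r_c
        s[mask] = 1.0 - dist / (r + delta)
    return s
```

Because every in-class distance satisfies $d_i \le r_c < r_c + \delta$, all memberships land in $(0, 1]$.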

However, this paper proposes to use support vector data description (SVDD) alongside the SVM to find the memberships. Where SVM finds a separating hyperplane, SVDD finds the hypersphere of minimum volume that encloses the target objects. The SVDD formulation is as follows:

$$\min_{R,\, a,\, \xi} \; R^2 + C \sum_i \xi_i \quad \text{s.t.} \quad \lVert \phi(x_i) - a \rVert^2 \le R^2 + \xi_i, \quad \xi_i \ge 0 \;\; \forall i$$

This paper suggests running SVDD on the same training data to find the class center $a$ and radius $R$, and then using the distance to the class center, scaled by the radius, as the membership:

$$s_i = 1 - \frac{\lVert \phi(x_i) - a \rVert}{R + \delta}$$

The above operates in the feature space. The paper argues that using the SVDD hypersphere to determine the membership is more explicit and interpretable.
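A minimal sketch of this idea, assuming a linear kernel so that the center and radius live in the input space (the dual is solved with SciPy's SLSQP; function names, tolerances, and the `delta` default are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import minimize

def svdd_fit(X, C=1.0):
    """Minimum-enclosing-hypersphere (SVDD), linear kernel.

    Solves the SVDD dual with a generic QP-capable solver:
        min_alpha  alpha' K alpha - alpha' diag(K)
        s.t.       sum(alpha) = 1,  0 <= alpha_i <= C
    Returns the center a = sum_i alpha_i x_i and the radius R.
    """
    n = len(X)
    K = X @ X.T
    diagK = np.diag(K)

    def obj(alpha):
        return alpha @ K @ alpha - alpha @ diagK

    def grad(alpha):
        return 2.0 * K @ alpha - diagK

    cons = {"type": "eq", "fun": lambda a: a.sum() - 1.0}
    res = minimize(obj, np.full(n, 1.0 / n), jac=grad, method="SLSQP",
                   bounds=[(0.0, C)] * n, constraints=cons)
    alpha = res.x
    center = alpha @ X
    # R is the distance from an on-boundary support vector (0 < alpha_i < C);
    # average over all such points for numerical stability
    sv = (alpha > 1e-6) & (alpha < C - 1e-6)
    idx = np.where(sv)[0] if sv.any() else np.array([alpha.argmax()])
    R = np.mean(np.linalg.norm(X[idx] - center, axis=1))
    return center, R

def svdd_membership(X, center, R, delta=1e-3):
    """Membership 1 - d_i / (R + delta), d_i the distance to the SVDD center."""
    d = np.linalg.norm(X - center, axis=1)
    return 1.0 - d / (R + delta)
```

With a nonlinear kernel the center $a$ has no explicit preimage, so $\lVert \phi(x_i) - a \rVert$ would instead be evaluated through the kernel trick; the linear case here keeps the sketch short.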

## Bibliographic data

@article{shi2016credit,
author = "Jian Shi and Benlian Xu",
title = "Credit Scoring by Fuzzy Support Vector Machines with a Novel Membership Function",
journal = "Journal of Risk and Financial Management",
volume = "9",
number = "13",
year = "2016",
doi = "10.3390/jrfm9040013",
}