Link to original content: https://doi.org/10.3390/e21050489
Article

Discriminative Structure Learning of Bayesian Network Classifiers from Training Dataset and Testing Instance

1. Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
2. College of Computer Science and Technology, Jilin University, Changchun 130012, China
3. Faculty of Science, Engineering & Built Environment, Deakin University, Burwood, VIC 3125, Australia
* Author to whom correspondence should be addressed.
Entropy 2019, 21(5), 489; https://doi.org/10.3390/e21050489
Submission received: 12 February 2019 / Revised: 29 April 2019 / Accepted: 6 May 2019 / Published: 13 May 2019
(This article belongs to the Special Issue Bayesian Inference and Information Theory)

Abstract:
Over recent decades, the rapid growth in data has made ever more urgent the quest for highly scalable Bayesian networks with better classification performance and expressivity (that is, the capacity to describe dependence relationships between attributes in different situations). To reduce the search space of possible attribute orders, the k-dependence Bayesian classifier (KDB) simply applies mutual information to sort attributes. This sorting strategy is very efficient, but it neglects the conditional dependencies between attributes and is therefore sub-optimal. In this paper, we propose a novel sorting strategy and extend KDB from a single restricted network to unrestricted ensemble networks, i.e., the unrestricted k-dependence Bayesian classifier (UKDB), in terms of Markov blanket analysis and target learning. Target learning is a framework that takes each unlabeled testing instance P as a target and builds a specific Bayesian network classifier (BNC), BNC_P, to complement the classifier BNC_T learned from the training data T. UKDB introduces two sub-models, UKDB_P and UKDB_T, to flexibly describe, respectively, the change in dependence relationships across different testing instances and the robust dependence relationships implicated in the training data. Both use UKDB as the base classifier and apply the same learning strategy while modeling different parts of the data space, so they are complementary in nature. Extensive experimental results on the Wisconsin breast cancer database (as a case study) and 10 other datasets, involving classifiers of different structural complexity such as Naive Bayes (0-dependence), tree-augmented Naive Bayes (1-dependence) and KDB (arbitrary k-dependence), demonstrate the effectiveness and robustness of the proposed approach.

1. Introduction

Since 1995, researchers have proposed embedding machine-learning techniques into computer-aided systems, such as medical diagnosis systems [1,2,3,4]. Andres et al. [5] proposed an ensemble of fuzzy systems and evolutionary algorithms for breast cancer diagnosis, which can evaluate the confidence level with which the system responds and clarify the working mechanism by which it derives its outputs. Huang et al. [6] constructed a hybrid SVM-based strategy with feature selection to find the important risk factors for breast cancer. Generally speaking, without domain-specific expertise in medicine, researchers in data mining prefer models with high classification accuracy and low computational complexity. By contrast, laypeople (including patients and their relatives) hope that the models can simultaneously offer high-level interpretability. Bayesian network classifiers (BNCs) are such models: they graphically describe the conditional dependence between attributes (or variables) and are considered among the most promising graph models [7,8]. They can mine statistical knowledge from data and infer under conditions of uncertainty [9,10]. BNCs, from 0-dependence Naive Bayes (NB) [11] to 1-dependence tree-augmented Naive Bayes (TAN) [12], and then to the arbitrary k-dependence Bayesian classifier (KDB) [13], can represent knowledge with complex or simple network structures. KDB can theoretically represent conditional dependence relationships of arbitrary complexity. However, this approach is not effective for some specific cases: the model learned from training data will not necessarily fit every testing instance. If it did, its bias and variance would both always be 0, which contradicts the bias-variance dilemma [14]. In the case of breast cancer, for different specific cases, the dependence relationships between attributes may be different.
For BNCs, conditional mutual information (CMI) [15], I(X_i;X_j|C), is commonly used to measure the conditional dependence relationship between attributes X_i and X_j given the class variable C:

$$I(X_i;X_j|C)=\sum_{x_i\in X_i}\sum_{x_j\in X_j}\sum_{c\in C}P(x_i,x_j,c)\log\frac{P(x_i,x_j|c)}{P(x_i|c)P(x_j|c)}=\sum_{x_i\in X_i}\sum_{x_j\in X_j}\sum_{c\in C}I(x_i;x_j|c).\quad(1)$$
I(X_i;X_j|C) measures the conditional dependence between attributes X_i and X_j given class C. Correspondingly, I(x_i;x_j|c) measures the conditional dependence between them when they take specific values. When P(x_i,x_j|c) > P(x_i|c)P(x_j|c), i.e., log(P(x_i,x_j|c)/(P(x_i|c)P(x_j|c))) > 0, we have I(x_i;x_j|c) > 0 and the relationship between the attribute values x_i and x_j can be considered conditionally dependent. Conversely, when P(x_i,x_j|c) < P(x_i|c)P(x_j|c), i.e., log(P(x_i,x_j|c)/(P(x_i|c)P(x_j|c))) < 0, we have I(x_i;x_j|c) < 0 and we argue that the relationship between the attribute values x_i and x_j can be considered conditionally independent. When P(x_i,x_j|c) = P(x_i|c)P(x_j|c) and I(x_i;x_j|c) = 0, the relationship between the attribute values x_i and x_j is just at the turning point from conditional dependence to conditional independence. On the dataset WBC (breast cancer), I(X_1;X_2|C) achieves the largest CMI value (0.4733) among all attribute pairs. The distribution of I(x_i;x_j|c) over the different attribute-value pairs of X_1 and X_2 is shown in Figure 1. As Figure 1 shows, the relationship between attributes X_1 and X_2 is dependent in general, because the positive values of I(x_1;x_2|c), which represent conditional dependence, account for a high proportion of all the values, and some I(x_1;x_2|c) values are especially large. On the other hand, there also exist some negative values of I(x_1;x_2|c) that represent conditional independence, i.e., the dependence relationship may vary rather than remain invariant when the attributes take different values. However, general BNCs (like NB, TAN and KDB), which build only one model to fit the training instances, cannot capture this difference and cannot represent the dependence relationships flexibly.
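To make the value-level analysis concrete, the following sketch (illustrative only; the function names are our own, and the probabilities are plain relative frequencies without smoothing) estimates the pointwise terms I(x_i;x_j|c) of Equation (1) from three data columns. Positive terms mark conditionally dependent value pairs, negative terms conditionally independent ones, and their sum is the empirical CMI:

```python
from collections import Counter
from math import log

def cmi_terms(xi_col, xj_col, c_col):
    """Pointwise terms I(xi; xj | c); their sum equals the empirical I(Xi; Xj | C)."""
    n = len(c_col)
    joint = Counter(zip(xi_col, xj_col, c_col))   # counts of (xi, xj, c)
    xc = Counter(zip(xi_col, c_col))              # counts of (xi, c)
    yc = Counter(zip(xj_col, c_col))              # counts of (xj, c)
    cc = Counter(c_col)                           # counts of c
    terms = {}
    for (xi, xj, c), cnt in joint.items():
        # ratio = P(xi,xj|c) / (P(xi|c) * P(xj|c)), by relative frequency
        ratio = (cnt / cc[c]) / ((xc[(xi, c)] / cc[c]) * (yc[(xj, c)] / cc[c]))
        terms[(xi, xj, c)] = (cnt / n) * log(ratio)
    return terms

def cmi(xi_col, xj_col, c_col):
    """Empirical conditional mutual information I(Xi; Xj | C)."""
    return sum(cmi_terms(xi_col, xj_col, c_col).values())
```

When the two columns are copies of each other, every pointwise term is positive; when they vary independently within each class, all terms vanish.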
To meet the needs of experts in machine learning and in medicine as well as of laypeople (including patients and their relatives), and to address the breast cancer problem mentioned above, we propose a novel sorting strategy and extend KDB from a single restricted network to unrestricted ensemble networks, i.e., the unrestricted k-dependence Bayesian classifier (UKDB), in terms of Markov blanket analysis and target learning. Target learning [16] is a framework that takes each unlabeled testing instance P as a target and builds a specific Bayesian model BNC_P to complement the BNC_T learned from the training data T.
To clarify the basic idea of UKDB, we introduce two concepts. "Domain knowledge" expresses a general knowledge framework learned from the training data; it focuses on describing interdependencies between attributes, such as attributes A_1 and B_1. "Personalized knowledge" expresses a specific knowledge framework learned from the attribute values in the testing instance, such as the attribute values A_1 = a_1 and B_1 = b_1. Taking breast cancer as an example, there is a strong correlation between the attributes "Clump Thickness" and "Uniformity of Cell Size" (the corresponding CMI achieves the maximum value, 0.4733), which can be considered domain knowledge. In contrast, for a testing instance with attribute values "Clump Thickness = 1" and "Uniformity of Cell Size = 3", the dependence relationship between those attribute values is approximately independent (the corresponding CMI value is 0.0002), which can be regarded as personalized knowledge. Personalized knowledge, with its clear expressivity (the capacity to describe dependence relationships between attributes in different situations) and tight coupling (the capacity to describe the most significant dependencies between attributes), makes ever more urgent the quest for highly scalable learners.
UKDB contains two sub-models: UKDB_T and UKDB_P. UKDB_T is learned from the training data T; it can be thought of as a spectrum of dependencies and is a statistical form of domain knowledge. UKDB_P is a specific BNC that mines the personalized knowledge implicated in each single testing instance P, i.e., the specific knowledge that describes the conditional dependencies between the attribute values in that instance. UKDB_P and UKDB_T apply the same strategy to build the network structure, but they apply different probability distributions and target different data spaces, so they are complementary in nature. That is, in contrast to a restricted BNC such as KDB, UKDB can discriminatively learn different unrestricted Bayesian network structures to represent different knowledge from the training dataset and the testing instance, respectively.
The Wisconsin breast cancer (WBC) database [17] is commonly used as a benchmark dataset [1,2,3,4] and is also selected in our main experiments as a case study to demonstrate personalized Bayesian network (BN) structures. The case study on the WBC database, as well as an extensive experimental comparison on 10 additional UCI datasets involving several benchmark BNCs, shows the advantages of the proposed approach.

2. Bayesian Network and Markov Blanket

All the symbols used in this paper are listed in Table 1. We wish to build a Bayesian network classifier from a labeled training dataset T such that the classifier can estimate the probability P(c|x) and assign a discrete class label c ∈ Ω_C to a testing instance x = (x_1, …, x_n). BNs are powerful tools for knowledge representation and inference under conditions of uncertainty. A BN consists of two parts: a qualitative part in the form of a directed acyclic graph, in which each node represents a variable in the training data and the directed edges between pairs of nodes represent dependence relationships between them; and a quantitative part based on local probability distributions that specify those dependence relationships. Even though BNs can deal with continuous variables, we exclusively discuss BNs with discrete nodes in this paper. Directed edges represent statistical or causal dependencies among the variables, and their directions define the parent-child relationships. For example, given an edge X → Y, X is the parent node of Y, and Y is the child node.
A node is conditionally independent of every other node in the graph given its parents (X_p), its children (X_c), and the other parents of its children (X_cp). The set {X_p, X_c, X_cp} forms the Markov blanket of the node [7], which contains all the information necessary to describe the relationships between that node and the other nodes. BNCs are a special type of BN. By applying different learning strategies, BNCs encode the dependence relationships between the predictive attributes X = {X_1, …, X_n} and the class variable C. Thus, the Markov blanket of the variable C provides the knowledge necessary for classification.
Suppose that X is divided into three parts, i.e., X = {X_p, X_c, X_cp}; the joint probability distribution P(x,c) can then be written via the chain rule,

$$P(\mathbf{x},c)=P(x_p,x_{cp},x_c,c)=P(x_p)P(c|x_p)P(x_{cp}|x_p,c)P(x_c|x_{cp},x_p,c)\quad(2)$$
The unrestricted BNC shown in Figure 2, which corresponds to Equation (2), is a full Bayesian classifier (i.e., it asserts no independencies). Learning such an unrestricted model is an NP-hard problem.
NB is the simplest of the BNCs. Given the class variable C, the predictive attributes are supposed to be conditionally independent of one another, i.e.,
$$P_{\mathrm{NB}}(\mathbf{x}|c)=\prod_{i=1}^{n}P(x_i|c).\quad(3)$$
Even though this assumption rarely holds, NB's classification performance is competitive with some benchmark algorithms, e.g., decision trees, due to its insensitivity to changes in the training data and its approximate estimation of the conditional probabilities P(x_i|c) [10]. Figure 3 shows the structure of NB. In contrast to Figure 2, there are no edges between attribute nodes in NB, so it represents 0 conditional dependencies. The conditional independence assumption is obviously too strict to hold in reality; when complex attribute dependencies are present, it results in classification bias.
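A minimal sketch of NB under this assumption might look as follows (our own illustrative code, not the paper's implementation; the Laplace smoothing with alpha = 1 is an added assumption). It fits per-class value counts in one pass and classifies by the argmax of log P(c) + Σ log P(x_i|c):

```python
from collections import Counter, defaultdict
from math import log

def nb_fit(X, y):
    """One pass over the data: class counts, per-(attribute, class) value counts,
    and the observed domain of each attribute."""
    class_count = Counter(y)
    value_count = defaultdict(Counter)   # (attr index, class) -> Counter of values
    domain = defaultdict(set)            # attr index -> set of observed values
    for row, c in zip(X, y):
        for i, v in enumerate(row):
            value_count[(i, c)][v] += 1
            domain[i].add(v)
    return class_count, value_count, domain

def nb_predict(x, class_count, value_count, domain, alpha=1.0):
    """argmax_c log P(c) + sum_i log P(x_i | c), with Laplace smoothing."""
    n = sum(class_count.values())
    best, best_score = None, float("-inf")
    for c, cc in class_count.items():
        score = log(cc / n)
        for i, v in enumerate(x):
            score += log((value_count[(i, c)][v] + alpha)
                         / (cc + alpha * len(domain[i])))
        if score > best_score:
            best, best_score = c, score
    return best
```

The smoothing keeps unseen attribute values from zeroing out an entire class posterior, which matters on small datasets such as WBC.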
TAN relaxes the independence assumption and extends NB from a 0-dependence tree to a 1-dependence maximum weighted spanning tree [12]. The joint probability for TAN then becomes

$$P_{\mathrm{TAN}}(\mathbf{x},c)=P(c)P(x_1|c)\prod_{i=2}^{n}P(x_i|c,x_j),\quad(4)$$
where X_j is the parent attribute of X_i. The constraint on the number of parents requires that only the most significant conditional dependencies, i.e., 0 + 1 + ⋯ + 1 = n − 1 of them, be represented. By comparing CMI values, edges between X_i and X_j are added to the network in turn to build a maximum spanning tree. When the conditional independence assumption does not hold, TAN is expected to achieve better classification performance than NB. An example of TAN is shown in Figure 4.
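The maximum weighted spanning tree at the heart of TAN can be sketched with Prim's algorithm over a precomputed CMI table (illustrative code; the `weight` dictionary is an assumption of ours, holding I(X_i;X_j|C) for each attribute pair):

```python
def tan_tree(n_attrs, weight):
    """Maximum weighted spanning tree over attributes 0..n_attrs-1 (Prim's
    algorithm). weight[(i, j)] = I(Xi; Xj | C); returns parent[i], with the
    root's parent set to None."""
    in_tree = {0}
    parent = {0: None}
    while len(in_tree) < n_attrs:
        best = None
        for i in in_tree:
            for j in range(n_attrs):
                if j not in in_tree:
                    w = weight.get((i, j), weight.get((j, i), 0.0))
                    if best is None or w > best[0]:
                        best = (w, i, j)
        _, i, j = best
        parent[j] = i      # add the heaviest edge crossing the cut
        in_tree.add(j)
    return parent
```

Each attribute thus receives exactly one attribute parent besides the class, matching the n − 1 dependencies noted above.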
KDB can represent an arbitrary degree of dependence and control its bias/variance trade-off with a single parameter, k. By comparing mutual information (MI) I(X_i;C) [15], attributes are sorted in descending order and enter the network structure in turn.

$$I(X_i;C)=\sum_{x_i\in X_i}\sum_{c\in C}P(x_i,c)\log\frac{P(x_i,c)}{P(x_i)P(c)}\quad(5)$$
To control the structure complexity, each attribute X_i is required to have no more than k parent attributes. Thus, each of the first k + 1 attributes in the order indiscriminately selects all the attributes already in the model as its parents. Each of the remaining attributes selects the k parent attributes that correspond to the highest values of I(X_i;X_j|C), where X_j ranks before X_i.
Suppose that the attribute order is {X_1, …, X_n}; the joint probability for KDB then becomes

$$P_{\mathrm{KDB}}(\mathbf{x},c)=P(c)\prod_{i=1}^{n}P(x_i|c,\pi_{x_i})\quad(6)$$
where π_{x_i} = {X_{i1}, …, X_{ij}} is the set of j parent attributes of X_i in the structure, with j = min{i − 1, k}. KDB can represent nk − k(k + 1)/2 conditional dependencies. When k = 1, KDB represents the same number of conditional dependencies as TAN. As k increases, KDB can represent increasingly many conditional dependencies. Figure 5 shows an example of KDB with k = 2.
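KDB's two-step structure learning — sort attributes by I(X_i;C), then give each attribute up to k already-inserted parents with the largest CMI — can be sketched as follows (illustrative only; `mi` and `cmi` are assumed precomputed, with `cmi` keyed by unordered attribute pairs):

```python
def kdb_structure(mi, cmi, n_attrs, k):
    """KDB parent assignment. mi[i] = I(Xi; C); cmi[(min(i,j), max(i,j))] =
    I(Xi; Xj | C). Returns the attribute order and a dict of parent lists."""
    # Step 1: sort attributes by MI with the class, descending.
    order = sorted(range(n_attrs), key=lambda i: -mi[i])
    parents = {}
    for pos, xi in enumerate(order):
        # Step 2: among attributes already in the model, keep the k with
        # the largest CMI to xi (the first k+1 attributes get them all).
        preceding = order[:pos]
        preceding.sort(key=lambda xj: -cmi.get((min(xi, xj), max(xi, xj)), 0.0))
        parents[xi] = preceding[:k]
    return order, parents
```

For k = 1 this reduces to a tree-shaped augmentation, consistent with the dependency count above.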
Since KDB can describe dependence relationships of arbitrary degree and thus demonstrates great flexibility, researchers have proposed many important refinements to improve its performance [18,19,20,21]. Pernkopf and Bilmes [22] proposed a greedy heuristic strategy that determines the attribute order by comparing I(C;X_i|X_j), where X_j ranks before X_i in the order, i.e., j < i. Taheri et al. [23] proposed building a dynamic structure without specifying k a priori and proved that the resulting BNC is optimal.

3. The UKDB Algorithm

According to the generative approach, the restricted BNCs, which take the class variable C as the common parent of all predictive attributes, define a unique joint probability distribution P(x,c) in the form of a chain rule of lower-order conditional probabilities,

$$P(\mathbf{x},c)=P(c)P(x_1|c)P(x_2|x_1,c)\cdots P(x_n|x_1,\ldots,x_{n-1},c).\quad(7)$$
The corresponding classification rule is
$$c^{*}=\arg\max_{c}P(\mathbf{x},c)=\arg\max_{c}P(c)P(x_1|c)\cdots P(x_n|x_1,\ldots,x_{n-1},c).\quad(8)$$
To maximize P(x,c), an ideal condition is that each factor P(x_i|x_1,…,x_{i−1},c) is maximized. In other words, X_i should be strongly dependent on its parents, especially on the class variable C. Given a limited number of training instances, the reliability of the conditional probability estimate P(x_i|Π_i,c) increases as the dependence between X_i and its parent attributes increases. To achieve a trade-off between classification performance and structure complexity, only a limited number of dependence relationships are represented by BNs such as KDB. The classification rule for KDB then becomes
$$c^{*}=\arg\max_{c}\hat{P}(\mathbf{x},c)=\arg\max_{c}P(c)\prod_{i=1}^{n}P(x_i|\Pi_i,c),\quad(9)$$
where Π_i is a subset of {X_1, …, X_{i−1}} and contains at most k attributes. Obviously, P(x,c) ≠ P̂(x,c) in general. No matter what the attribute order is, the full BNC represents the same joint distribution P(x,c). In contrast, from Equation (9) we can see that for different attribute orders the candidate parents of X_i may differ greatly, so the joint distributions P̂(x,c) represented by KDBs learned from different attribute orders are not necessarily the same. The key issue for structure learning of a restricted BNC is how to describe the most significant conditional dependence relationships among the predictive attributes, or more precisely, the relationships between X_i and its parent attributes X_j (j < i). However, KDB sorts the attributes in descending order of I(X_i;C), which considers only the dependence between X_i and the class variable C while neglecting the conditional dependencies between X_i and its parents. If the first few attributes in the order are relatively independent of one another, the robustness of the network structure is damaged from the very beginning of structure learning. To address this issue, UKDB selects, from the viewpoint of the Markov blanket, the parents of the variable C, denoted X_p, which are also the parents of the other attributes; strong conditional dependence relationships should exist between X_p and the other attributes. Since k corresponds to the maximum allowable degree of attribute dependence, the number of attributes in X_p is k.
Suppose that the attribute set X_p contains the k attributes {X_{n−k+1}, …, X_n} and that the order of the attributes in X is {X_p, X_1, …, X_{n−k}}; Formula (7) can then be rewritten as

$$P(\mathbf{x},c)=P(x_p)\,P(c|x_p)\,P(x_1|x_p,c)\cdots P(x_{n-k}|x_p,x_1,\ldots,x_{n-k-1},c)\quad(10)$$
The relationships between X i and its parents corresponding to Equations (7) and (10) are shown in Table 2.
Since P(x_p) is irrelevant to the classification,

$$P(c,\mathbf{x})\propto P(c|x_p)\,P(x_1|x_p,c)\cdots P(x_{n-k}|x_p,x_1,\ldots,x_{n-k-1},c)\quad(11)$$
Thus, UKDB uses the following formula for classification,
$$c^{*}=\arg\max_{c}\check{P}(\mathbf{x},c)=\arg\max_{c}P(c|x_p)\prod_{i=1}^{n-k}P(x_i|\check{\Pi}_i,c),\quad(12)$$
where Π̌_i is a subset of {X_p, X_1, …, X_{i−1}} and contains k attributes. For any attribute X_i (X_i ∈ X_p), X_i is a parent of the other attributes, so there should exist strong conditional dependencies, or tight coupling, between them. To this end, we sort the attributes by the sum of CMI (SCMI), where SCMI(X_i) = Σ_j I(X_i;X_j|C) (X_i ≠ X_j). The first k attributes in the order, i.e., those with the largest SCMI, are selected as X_p. To control the structure complexity, UKDB also requires that X_i select at most k parents from Π_i, as shown in Table 2. The attribute sets X_c and X_cp are determined thereafter. Figure 6 shows two examples of UKDB for k = 1 and k = 2.
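The SCMI-based selection of X_p can be sketched as follows (illustrative code; the `cmi` dictionary is an assumption of ours, mapping unordered attribute pairs to I(X_i;X_j|C)):

```python
def select_xp(cmi, n_attrs, k):
    """Rank attributes by SCMI(Xi) = sum over j != i of I(Xi; Xj | C);
    the top k become X_p, the rest keep their SCMI order."""
    def scmi(i):
        return sum(cmi.get((min(i, j), max(i, j)), 0.0)
                   for j in range(n_attrs) if j != i)
    order = sorted(range(n_attrs), key=lambda i: -scmi(i))
    return order[:k], order[k:]
```

Unlike KDB's MI-based ordering, this criterion promotes attributes that are strongly coupled to the rest of the attribute set, which is what being a common parent requires.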
In the real world, assuming the same dependence relationships between attributes regardless of the values they take may lead to wrong diagnosis or therapy. Consider the attributes Sex and Pregnant: Sex = "Female" and Pregnant = "Yes" are highly related, whereas Sex = "Female" and Pregnant = "No" also holds for some instances. Obviously, the treatment of breast cancer during pregnancy should differ from that during non-pregnancy. CMI can weigh the conditional dependency between Sex and Pregnant, but it cannot discriminately weigh the dependencies when these two attributes take different values. Target learning takes each testing instance P = {x_1, …, x_n, c = ?} as a target and tries to mine the dependence relationships between these attribute values [16]. From Equations (1) and (5), we have the following equations:
$$I(X_i;C)=\sum_{x_i\in X_i}I(x_i;C),\qquad I(X_i;X_j|C)=\sum_{x_i\in X_i}\sum_{x_j\in X_j}I(x_i;x_j|C)\quad(13)$$
where
$$I(x_i;C)=\sum_{c\in C}P(c,x_i)\log\frac{P(c,x_i)}{P(c)P(x_i)},\qquad I(x_i;x_j|C)=\sum_{c\in C}P(x_i,x_j,c)\log\frac{P(x_i,x_j|c)}{P(x_i|c)P(x_j|c)}\quad(14)$$
MI and CMI, as defined above, measure the average dependence between attributes implicated in the training data. In contrast, local mutual information (LMI) I(x_i;C) and conditional local mutual information (CLMI) I(x_i;x_j|C) weigh the direct dependence and the conditional dependence between the attribute values implicated in each instance [16,24]. Similarly, we sort the attribute values by the sum of CLMI (SCLMI), where SCLMI(x_i) = Σ_j I(x_i;x_j|C) (x_i ≠ x_j).
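A frequency-count sketch of CLMI for one pair of attribute values (the second definition in Equation (14)) might look as follows (illustrative only; zero-count terms are skipped, which also mirrors how UKDB_P simply ignores dependencies involving unseen or missing values):

```python
from math import log

def clmi(xi_val, xj_val, xi_col, xj_col, c_col):
    """I(xi; xj | C) for one pair of attribute VALUES:
    sum over c of P(xi, xj, c) * log( P(xi,xj|c) / (P(xi|c) P(xj|c)) )."""
    n = len(c_col)
    total = 0.0
    for c in set(c_col):
        nc = sum(1 for cc in c_col if cc == c)
        nxy = sum(1 for a, b, cc in zip(xi_col, xj_col, c_col)
                  if a == xi_val and b == xj_val and cc == c)
        nx = sum(1 for a, cc in zip(xi_col, c_col) if a == xi_val and cc == c)
        ny = sum(1 for b, cc in zip(xj_col, c_col) if b == xj_val and cc == c)
        if nxy == 0 or nx == 0 or ny == 0:
            continue  # empty cells contribute nothing to the sum
        total += (nxy / n) * log((nxy / nc) / ((nx / nc) * (ny / nc)))
    return total
```

Summing `clmi` over all pairs that involve a given value x_i yields its SCLMI score for the value ordering described above.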
For Bayesian inference, LMI refers to the event X_i = x_i and can be used to measure the expected mutual dependence between X_i and C after observing X_i = x_i. CLMI can be used to weigh the conditional dependence between the attribute values x_i and x_j while considering all possible values of the class variable C.
From Equations (1) and (5), to compute I(X_i;C) or I(X_i;X_j|C), all possible values of attribute X_i need to be considered. If there are missing or unknown values for attributes X_i and X_j in any instance, they will be replaced by some values, and noise may be artificially introduced into the computation of I(X_i;C) or I(X_i;X_j|C). These missing or unknown values are regarded as noisy because the conditional dependence relationships between them and the other, non-noisy attribute values may be incorrectly measured. If the noisy part accounts for only a small portion relative to the non-noisy part, the dependence relationships learned from the training data may still be of high confidence and the network structure of UKDB_T may still be robust. In contrast, from the definitions of LMI and CLMI (Equation (14)) we can see that for a specific instance x, computing I(x_i;C) or I(x_i;x_j|C) requires considering only the attribute values in x; computations involving noisy values are simply not needed. Thus, neglecting these noisy conditional dependence relationships may make the network structure of UKDB_P more robust.
We propose to use the Markov blanket and target learning to build an ensemble of two unrestricted BNCs, UKDB_T and UKDB_P. UKDB_T and UKDB_P learn from different parts of the data space while their learning procedures are almost the same, so they are complementary in nature. In the training phase, by calculating MI and CMI, UKDB_T describes the global conditional dependencies implicated in the training data T. Correspondingly, in the classification phase, by calculating LMI and CLMI, UKDB_P describes the local conditional dependencies implicated in the unlabeled testing instance P. Breiman [25] revealed that ensemble learning brings improvement in accuracy only to "unstable" learning algorithms, in the sense that small variations in the training set would lead them to produce very different models. UKDB_T and UKDB_P are such algorithms. UKDB_T tries to learn the certain domain knowledge implicated in the training dataset, but that domain knowledge may not describe the conditional dependencies in the testing instance P; this may cause overfitting on the training set and underfitting on the testing instance. In contrast, UKDB_P can describe the conditional dependencies implicated in the testing instance P, but this personalized knowledge is uncertain since the class label of P is unknown; this may cause underfitting on the training set and overfitting on the testing instance. Thus, an ensemble of UKDB_T and UKDB_P may be much more appropriate for making the final prediction.
The learning procedure of UKDB_T is described in Algorithm 1:
Algorithm 1: The UKDB T algorithm
Since the class label of the testing instance P is unknown, we take all possible class labels from the training set T. Assuming that the probability that the testing instance P belongs to class c is 1/m for each c ∈ {c_1, …, c_m}, there will be m "pseudo" instances. By adding these m "pseudo" instances to the training set T, we can estimate the joint or conditional probabilities between arbitrary attribute-value pairs using Equation (14), thereby achieving the aim of learning the conditional dependencies implicated in a testing instance P.
The learning procedure of UKDB_P is shown in Algorithm 2, where "?" represents a missing value in the dataset. To estimate the marginal and joint probabilities P(c), P(x_i,c) and P(x_i,x_j,c), UKDB needs one pass through the training data at training time to collect the base statistics of co-occurrence counts. Calculating MI and CMI needs O(Nmnv) and O(Nm(nv)^2) time, respectively, where N is the number of training instances, m is the number of classes, n is the number of attributes and v is the average number of values that a discrete attribute may take. The parent-assignment procedure for each attribute needs O(n^2 log n) time. Thus, the time complexity for UKDB_T to build the network structure is O(Nm(nv)^2). Since UKDB_P only needs to consider the attribute values in the testing instance, calculating LMI and CLMI needs O(Nmn) and O(Nmn^2) time, respectively. The parent-assignment procedure in UKDB_P needs the same O(n^2 log n) time, so the time complexity for UKDB_P is only O(Nmn^2). UKDB_T and UKDB_P use different variations of P(x,c) to classify each single instance, and the corresponding time complexities are the same, O(mnk).
Algorithm 2: The UKDB P algorithm
UKDB_T, learned from the training data T, describes the general conditional dependencies; thus, UKDB_T corresponds to the domain knowledge that may be suitable for most cases. In contrast, UKDB_P, learned from the testing instance P, describes local conditional dependencies with uncertainty, because all class labels are considered; thus, UKDB_P corresponds to the personalized knowledge that may be suitable for P only [16].
When facing an unexpected case, it is difficult to judge which kind of knowledge should be given priority. Personalized knowledge may provide statistical information that the expert does not recognize and help him use the domain knowledge to confirm or rule out a decision. For different cases, the weights of UKDB_P and UKDB_T may differ greatly. In this paper, without any prior knowledge, we simply use the uniformly weighted average instead of a nonuniformly weighted one. The final probability estimate for the ensemble of UKDB_T and UKDB_P is
$$\hat{P}(c|\mathbf{x})=\frac{P(c|\mathbf{x},\mathrm{UKDB}_T)+P(c|\mathbf{x},\mathrm{UKDB}_P)}{2}.\quad(15)$$
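Equation (15) amounts to averaging the two sub-models' posteriors before taking the argmax; a minimal sketch (our own helper names; the posteriors are assumed to be given as dictionaries over the same class labels) is:

```python
def ensemble_posterior(post_t, post_p):
    """Uniformly weighted average of UKDB_T and UKDB_P class posteriors."""
    return {c: (post_t[c] + post_p[c]) / 2 for c in post_t}

def ensemble_predict(post_t, post_p):
    """Final class label: argmax of the averaged posterior."""
    avg = ensemble_posterior(post_t, post_p)
    return max(avg, key=avg.get)
```

Replacing the fixed weight 1/2 by instance-dependent weights would recover the nonuniform variant mentioned above.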

4. Results and Discussion

4.1. Data

Breast cancer is the leading life-threatening cancer for women, especially those aged between 40 and 55 in the US and Europe [26]. The American Cancer Society (ACS) estimated [27] that in 2017 about 252,000 women were diagnosed with invasive breast cancer and over 60,000 with noninvasive breast cancer. Sometimes it is too late for these women to be treated, since no obvious symptoms appear before diagnosis, and about 12.8% of them will die of breast cancer after diagnosis [27]. Thus, there is a strong demand for improved classification/detection systems in the medical science community.
Dr William H. Wolberg collected data relevant to breast cancer during his time at the University of Wisconsin-Madison Hospitals from 1989 to 1991 and provided the data to the UCI machine learning repository [17]. The WBC database is relatively small, containing only 699 instances of breast cancer. Of these, 458 (65.5%) instances are benign and 241 (34.5%) are malignant. Each instance has 10 predictive attributes; a detailed description of the attributes is given in Table 3. Please note that some instances have missing values. In addition, the attribute "Sample code number" is not considered in the experimental study because it is an ID number and is not helpful for classification.
In the last decade, larger datasets have ceased to be scarce resources [28,29,30]. Larger data quantities can make the estimation of conditional probabilities more accurate, and BNCs need higher-degree representations of attribute dependence and more accurate estimation of probability distributions to deal with them. Ten large datasets (size > 3000) with different numbers of attributes (n ≥ 10) were selected from the UCI machine learning repository [17] for the experimental study. Table 4 describes the details of each dataset, including the number of instances, attributes and classes.

4.2. Evaluation Function

In machine learning, zero-one loss [31] is one of the standard measures for evaluating classification performance. The bias-variance decomposition [32] of zero-one loss can help analyze the expected generalization error of trained models, and achieving a bias-variance trade-off is a key issue in supervised learning. Zero-one loss measures the extent to which a classifier correctly identifies the class label of an unlabeled instance. Given M testing instances, the zero-one loss is calculated as follows:
$$\xi(c,\hat{c})=\frac{\sum_{i=1}^{M}\{1-\delta(c_i,\hat{c}_i)\}}{M},\quad(16)$$
where c_i and ĉ_i are the true and predicted class labels of the i-th instance, respectively, and δ(c_i, ĉ_i) = 1 if c_i = ĉ_i and 0 otherwise. When dealing with highly imbalanced datasets, where the "positive" class has a very low proportion compared to the "negative" class, the F1 score helps to judge whether the classifier is biased towards the majority class. The F1 score is defined as follows,
$$F_1=\frac{2TP}{2TP+FP+FN},\quad(17)$$
where TP is the number of positive instances that have been classified correctly, FP is the number of negative instances that have been misclassified as positive, and FN is the number of positive instances that have been misclassified as negative.
We also introduce the ROC (Receiver Operating Characteristic) curve [33,34] to evaluate the performance of machine-learning algorithms. The ROC curve is created by plotting the true-positive rate (TPR) against the false-positive rate (FPR) at various threshold settings. The TPR is also known as sensitivity or recall. The FPR is also known as the fall-out or probability of false alarm and can be calculated as (1 − specificity), where specificity is the true-negative rate (TNR). The formulas involved are defined as follows:
TPR = \frac{TP}{TP+FN}
TNR = \frac{TN}{TN+FP}
FPR = \frac{FP}{FP+TN} = 1 - TNR
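These rates can be traced over a range of decision thresholds to produce the points of an ROC curve; a minimal sketch, assuming each score is the classifier's posterior probability of the positive class (function name ours):

```python
def roc_points(scores, labels, thresholds):
    """Return (FPR, TPR) pairs: at each threshold, predict positive when score >= threshold."""
    points = []
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        tpr = tp / (tp + fn)   # sensitivity / recall
        fpr = fp / (fp + tn)   # 1 - specificity (TNR)
        points.append((fpr, tpr))
    return points
```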
We compared the proposed algorithm with k = 1, 2 against several benchmark classifiers [12,13,23] from the literature. The statistical results of all evaluation functions over 20 rounds of 10-fold cross validation are shown in Table 5; in each fold, 9/10 of the data was used for training and 1/10 for testing. All experiments were conducted on a desktop computer with an Intel(R) Xeon(R) CPU X5680 @ 3.33 GHz (64 bits) and 8192 MiB of memory. For training data, missing values of qualitative attributes are replaced with modes and those of quantitative attributes with means computed from the training data [35,36,37]. For testing data, UKDB P handles missing values in a natural way, by not considering the dependence relationships related to missing values; removing such noisy dependence relationships mitigates their negative effect, and the learned network structure may be more robust.
Sampling is one of the main methods for handling imbalanced datasets, following two approaches: undersampling and oversampling [38,39,40]. Undersampling decreases the size of the majority class; oversampling, by contrast, balances the class distribution by increasing the minority class. Since undersampling may cause the classifier to miss important concepts pertaining to the majority class, we conduct all experiments with oversampling. In the preprocessing stage, we randomly select minority-class instances and augment the original set by replicating them, so the number of minority-class instances increases and the class distribution is balanced accordingly.
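The replication-based oversampling described above can be sketched as follows (illustrative Python; `oversample` and its seeding are our naming, not the paper's code):

```python
import random
from collections import Counter

def oversample(instances, labels, seed=0):
    """Replicate randomly selected minority-class instances until every class
    reaches the size of the majority class."""
    rng = random.Random(seed)
    counts = Counter(labels)
    majority = max(counts.values())
    out_x, out_y = list(instances), list(labels)
    for cls, n in counts.items():
        pool = [x for x, y in zip(instances, labels) if y == cls]
        for _ in range(majority - n):   # add replicas for minority classes only
            out_x.append(rng.choice(pool))
            out_y.append(cls)
    return out_x, out_y
```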
We also employ Win/Draw/Loss records to summarize the experimental results. Cell [i, j] in each table contains the number of datasets for which the BNC in the i-th row performs better (Win), equally well (Draw) or worse (Loss) than the one in the j-th column. In the following experiments, we assess a difference as significant if the outcome of a one-tailed binomial sign test is less than 0.05.
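The one-tailed binomial sign test used for these Win/Draw/Loss comparisons can be computed exactly under the null hypothesis that wins and losses are equally likely (draws discarded); a minimal sketch:

```python
from math import comb

def sign_test_p(wins, losses):
    """One-tailed binomial sign test: probability of observing >= `wins`
    successes out of wins + losses trials when P(win) = 0.5."""
    n = wins + losses
    return sum(comb(n, k) for k in range(wins, n + 1)) / 2 ** n
```

For example, 9 wins against 1 loss yields p ≈ 0.0107 < 0.05 and would be assessed as significant under this criterion.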

4.3. Experimental Study on WBC Dataset

From Table 5 we can see that, except for NB, UKDB (k = 1) has a clear prediction advantage over the other algorithms in terms of zero-one loss, and UKDB (k = 2) achieves a slightly better F 1 score than the other algorithms. Although NB achieves lower error than the other algorithms on WBC, this is just a special case. As Sahami [13] argued, optimal Bayesian accuracy can be expected if more of the "right" dependencies are captured. In most cases, BNCs with simple structure perform worse than those with complex structure. We further demonstrate this in Section 4.4.3.
UKDB T , which is learned from all training instances, describes the general conditional dependence relationships. However, only some of these relationships, not all of them, may hold for a given instance. In contrast, UKDB P encodes the most probable local conditional dependencies implicated in a single testing instance. UKDB exploits the knowledge learned from both the training set and the testing instance through its aggregating mechanism. If UKDB T and UKDB P are complementary for classification, ideally they should focus on different key points. To verify this, we take an instance from the WBC dataset as a case study; its details are as follows,
P = { x 1 = 9 , x 2 = 5 , x 3 = 8 , x 4 = 1 , x 5 = 2 , x 6 = 3 , x 7 = 2 , x 8 = 1 , x 9 = 5 }
By comparing the MI values I ( X i ; C ) , X ¯ = { X 2 , X 3 , X 6 } are the first three key attributes for UKDB T , whereas by comparing I ( x i ; C ) , X ^ = { X 4 , X 5 , X 8 } are the first three for UKDB P . The marginal probabilities of each attribute value in P are shown in Table 6. From Table 6, for any attribute values x i ( X i ∈ X ^ ) and x j ( X j ∈ X ¯ ), P ( x i ) > P ( x j ) always holds. Then for attribute X k , it is more likely that P ( x k | x i , c ) > P ( x k | x j , c ) ( k ≠ i and k ≠ j ). To maximize the joint probability P ( x , c ) , as (10) suggests, an ideal condition is that each underlying conditional probability is maximized. Obviously, UKDB P can achieve a more reasonable attribute order.
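One hedged reading of the two sorting criteria is that UKDB T ranks attributes by the full mutual information I(X_i;C), while UKDB P restricts the same sum to the single observed value x_i. A sketch under that assumption, with probability estimates supplied as dictionaries (names ours):

```python
from math import log

def mutual_information(joint, px, pc):
    """I(X;C) = sum over all (x, c) of P(x,c) * log[P(x,c) / (P(x) P(c))]."""
    return sum(p * log(p / (px[x] * pc[c]))
               for (x, c), p in joint.items() if p > 0)

def local_mi(x, joint, px, pc):
    """I(x;C): the same sum restricted to one observed attribute value x,
    used here as a per-instance ranking criterion."""
    return sum(p * log(p / (px[xv] * pc[c]))
               for (xv, c), p in joint.items() if xv == x and p > 0)
```

Ranking attributes by `mutual_information` gives a single global order, while ranking by `local_mi` for the values of a particular testing instance can give a different, instance-specific order.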
Generally, as Figure 7 shows, dependencies in BNCs can be divided into two types: direct dependence relationships (indicated in Figure 7a by solid lines), such as that between variables U and V; and conditional dependence relationships (indicated in Figure 7b by dotted lines), such as that between variables V and W given U. To interpret the effect of these dependency types on UKDB, a simulation study was carried out on the WBC dataset.
Figure 8 and Figure 9 respectively show the network structures of UKDB T and UKDB P on the WBC dataset when k = 1 , where UKDB P is based on testing instance P . The parent attribute of the class variable is annotated in black. We can clearly see the differences in direct and conditional dependencies between them. For UKDB T , attribute X 8 and class C have direct dependence relationships with the other attributes, and X 2 is the key attribute that has conditional dependence relationships with almost all the other attributes. In contrast, for UKDB P , X 3 and C have direct dependence relationships with the other attributes, and X 4 plays the main role instead, being the common parent of only 3 out of the 8 other attributes. Figure 10 presents another structure, for the testing instance P = { 5 , 3 , 3 , 3 , 6 , 10 , 3 , 1 , 1 } , that differs from the structure obtained for instance P = { 9 , 5 , 8 , 1 , 2 , 3 , 2 , 1 , 5 } . These examples illustrate that the personalized structure (e.g., Figure 9) generated by our target learning for a given testing instance is discriminative not only with respect to the domain structure (e.g., Figure 8) but also with respect to personalized structures (e.g., Figure 10) learned from other testing instances. In the next section, we show that an ensemble of these discriminative BNCs can use the knowledge learned from the training set and the testing instances to achieve better classification performance.

4.4. Further Experiments on Other Datasets

4.4.1. The Effect of Values of k

We first compared the classification performance of KDB and UKDB with the same values of k. Since hardware resources are limited and the size of the probability table grows exponentially as k increases, to achieve a trade-off between classification performance and efficiency we compared KDB and UKDB with k = 1 and k = 2 on 10 datasets (described in Table 4). The detailed zero-one loss results can be found in Table A1 in Appendix A.
As shown in Table 7, for UKDB, the model with k = 2 achieves significant advantages over the one with k = 1 , with a Win/Draw/Loss record of 6/2/2. Only two datasets, Dis and Mushroom, have higher zero-one loss with UKDB ( k = 2 ), which indicates that UKDB ( k = 2 ) seldom performs worse than UKDB ( k = 1 ). Moreover, for many datasets UKDB ( k = 2 ) substantially improves on UKDB ( k = 1 ); for example, zero-one loss decreases from 0.0644 to 0.0414 on the dataset Chess.

4.4.2. The Effect of Missing Values

As mentioned above, for training data, missing values of qualitative attributes are replaced with modes and those of quantitative attributes with means computed from the training data [35,36,37], while for testing data UKDB P simply does not consider the dependence relationships related to missing values; removing such noisy dependence relationships mitigates their negative effect and may make the learned network structure more robust.
In this section, to prove that UKDB can mitigate the negative effect caused by missing values in testing instances, we present a simulation experiment investigating the effect of missing values on UKDB. We choose the datasets in Table 4 that contain no missing values; three datasets satisfy this condition: Chess, Magic and Spambase. To compare the algorithms in a controlled situation, when classifying testing instances we manually and randomly delete 5% of the attribute values in each instance.
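The controlled deletion of attribute values can be sketched as follows (illustrative Python; the 5% fraction matches the experiment, while the function name and missing-value marker are ours):

```python
import random

def mask_values(instance, frac=0.05, missing=None, seed=0):
    """Randomly replace `frac` of the attribute values of one instance
    with a missing-value marker (at least one value is always masked)."""
    rng = random.Random(seed)
    n_delete = max(1, round(frac * len(instance)))
    idx = rng.sample(range(len(instance)), n_delete)
    out = list(instance)
    for i in idx:
        out[i] = missing
    return out
```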
Table 8 shows the detailed zero-one loss of UKDB ( k = 2 ) on the two sets of data, with and without missing values. As can be seen, although some attribute values of the testing instances have been deleted, the zero-one loss on these 3 datasets is similar to that without missing values (we assess a difference as significant if the outcome of a one-tailed binomial sign test is less than 0.05), i.e., UKDB can mitigate the negative effect caused by missing values in testing instances.

4.4.3. The Effect of Criterion Used to Measure the Strength of the Dependence between the Variables

Our proposed algorithm, UKDB, uses MI and CMI (or LMI and CLMI) to measure the strength of the dependence between attributes; other criteria could also be used. Since the efficiency of UKDB depends on the efficiency of these criteria, we use another pair, pointwise mutual information (PMI) and pointwise conditional mutual information (PCMI), to compare and to show in which situations MI and CMI are more (or less) efficient. In contrast to MI and CMI, which average over all possible events, PMI and PCMI refer to single events [41]. In computational linguistics, PMI and PCMI have been used for finding collocations and associations between words [41]. They are calculated as follows:
\mathrm{PMI}(x;c) = \log\frac{P(x,c)}{P(x)P(c)}.
\mathrm{PCMI}(x_i;x_j|c) = \log\frac{P(x_i,x_j|c)}{P(x_i|c)P(x_j|c)}.
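Given the relevant (conditional) probability estimates, both pointwise quantities reduce to a single logarithm each; a minimal sketch with illustrative names:

```python
from math import log

def pmi(p_xc, p_x, p_c):
    """PMI(x;c) = log[ P(x,c) / (P(x) P(c)) ] for a single pair of events."""
    return log(p_xc / (p_x * p_c))

def pcmi(p_xixj_c, p_xi_c, p_xj_c):
    """PCMI(x_i; x_j | c) = log[ P(x_i,x_j|c) / (P(x_i|c) P(x_j|c)) ]."""
    return log(p_xixj_c / (p_xi_c * p_xj_c))
```

Unlike MI and CMI, no summation over the event space is needed, which is why the pointwise criteria are cheaper to evaluate.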
Table 9 shows the Win/Draw/Loss comparison of UKDB ( k = 2 ) with {MI, CMI} and with {PMI, PCMI}; the corresponding detailed results can be found in Table A2 in Appendix A. As can be seen, UKDB ( k = 2 ) with {MI, CMI} achieves lower error more often than the one with {PMI, PCMI}. To compare the efficiency of UKDB ( k = 2 ) with the different information-based criteria, we present the average computational time for both variants in Table 10. The results in Table 10 reinforce what the orders of complexity of the two algorithms indicate, i.e., UKDB ( k = 2 ) with {MI, CMI} needs more time to build the model than the one with {PMI, PCMI} on most datasets. For example, on the dataset Census-Income, UKDB with {PMI, PCMI} is almost 1.84 times faster than the one with {MI, CMI} (highlighted in bold in the table). Thus, although UKDB with {PMI, PCMI} is more efficient in terms of average computational time, UKDB with {MI, CMI} achieves better classification performance in terms of zero-one loss at the cost of a modest increase in computational time.

4.4.4. UKDB vs. NB, TAN and KDB

Although NB ranked highest among all algorithms on the WBC database in terms of zero-one loss and F1, its conditional independence assumption does not hold in most cases; furthermore, many researchers have found that more general algorithms perform better than NB in most cases [12,13,18,19,20]. Thus, more general algorithms are necessary even if NB works best in some cases.
In this section, we demonstrate that the advantages of UKDB are due to its flexible high-dependence representation when dealing with large datasets. Since UKDB with k = 2 achieves lower zero-one loss more often than with k = 1 , we compare UKDB ( k = 2 ) with lower-dependence BNCs, i.e., NB (0-dependence) and TAN (1-dependence). The experimental results of KDB (2-dependence when k = 2 ) are also shown for reference. The detailed results of average zero-one loss, bias and variance on the 10 datasets (described in Table 4) are presented in Appendix A.
Table 11 shows the corresponding Win/Draw/Loss comparison results of different BNCs.
The zero-one loss results in Table 11 reveal patterns that confirm the hypothesis proposed above. As can be seen, TAN performs better than NB on 8 datasets and never worse; KDB performs better than TAN on 5 datasets and never worse; and UKDB performs best among all classifiers. This confirms that the superior classification performance of NB on the WBC dataset is just a special case. NB, TAN, KDB and UKDB represent different degrees of dependence relationships; in general, as structure complexity increases, higher-dependence BNCs enjoy a significant classification advantage over lower-dependence BNCs in most cases.
From Table 11, in terms of bias, TAN still performs better than NB, and KDB performs better than TAN; however, the advantage of UKDB over KDB is less pronounced. Higher-dependence BNCs can represent more conditional dependencies, which in general helps these models approximate the correct value of the conditional probability P ( x i | Π i , c ) . In terms of variance, NB achieves the lowest variance because it involves no structure learning and its structure remains the same regardless of changes in the training data. TAN performs better than KDB on 5 datasets and worse on 3. UKDB performs better than TAN on 5 datasets and worse on 3, and better than KDB on 7 datasets and worse on 2. This indicates that the robustness of UKDB is second only to NB, and that overall UKDB compares favorably with TAN and KDB in terms of both bias and variance. A simple network structure may result in underfitting, whereas a complex one may result in overfitting, and it is very difficult for a BNC to achieve a trade-off between structure complexity and classification performance. However, mining the possible dependence relationships implicated in the testing instance helps alleviate the negative effect of overfitting while improving classification accuracy.
To attest the superiority of UKDB, we use the Friedman test [42] to compare all alternative algorithms on the 10 datasets in Table 4. The null hypothesis of the Friedman test is that there is no difference in average ranks. With 4 algorithms and 10 datasets, the test statistic is distributed according to the F distribution with 4 − 1 = 3 and ( 4 − 1 ) × ( 10 − 1 ) = 27 degrees of freedom. The critical value of F ( 3 , 27 ) for α = 0.05 is 2.9603. The Friedman test result for zero-one loss is 22.25 > 2.9603 with p < 0.001 ; hence we reject the null hypothesis, that is, the four algorithms are not equivalent in terms of zero-one loss. The average ranks of zero-one loss of the classifiers are {NB (3.8000), TAN (2.8000), KDB (2.2000), UKDB (1.2000)}, and the minimum required difference of mean rank is 0.6701, i.e., UKDB ranks better than the other algorithms, followed by KDB, TAN and NB. UKDB has a statistically significant difference with NB, TAN and KDB.
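The average ranks and the F-distributed test statistic can be reproduced from a datasets-by-algorithms loss table via the Iman-Davenport correction of the Friedman chi-square; a sketch that, for simplicity, ignores tied ranks (function name ours):

```python
def friedman_f(loss_table):
    """From an (n datasets x k algorithms) loss table: rank the algorithms
    on each dataset (1 = lowest loss), average the ranks per algorithm,
    then convert the Friedman chi-square into the F-distributed statistic
    with (k - 1) and (k - 1)(n - 1) degrees of freedom."""
    n, k = len(loss_table), len(loss_table[0])
    ranks = [[0.0] * k for _ in range(n)]
    for i, row in enumerate(loss_table):
        order = sorted(range(k), key=lambda j: row[j])
        for r, j in enumerate(order, start=1):
            ranks[i][j] = r                     # no tie handling in this sketch
    avg = [sum(ranks[i][j] for i in range(n)) / n for j in range(k)]
    chi2 = 12 * n / (k * (k + 1)) * (sum(r * r for r in avg) - k * (k + 1) ** 2 / 4)
    f_stat = (n - 1) * chi2 / (n * (k - 1) - chi2)
    return avg, f_stat
```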
The ROC curves for NB, TAN, KDB ( k = 2 ) and UKDB ( k = 2 ) on the 10 datasets are presented in Figure 11. The X-axis represents (1 − specificity) and the Y-axis represents sensitivity. The area under the curve (AUC) is an effective combined measure of sensitivity and specificity for assessing the inherent validity of a diagnostic test [33]; an AUC closer to 1 indicates better performance. According to the AUC values, UKDB performs better than the other algorithms more often, especially on the datasets Adult, Chess, Magic, Musk and Sick. Compared with KDB, UKDB achieves similar AUC values on 4 datasets (Dis, Hypothyroid, Mushroom and Spambase); overall, UKDB also has significant advantages over NB, TAN and KDB in terms of ROC curves.
To further demonstrate the performance of UKDB over KDB, we employ the goal difference (GD) [19,21]. Given two classifiers A and B, the value of G D is computed as follows:
GD(A;B|T) = |win| - |loss|,
where T is the set of datasets, and | w i n | and | l o s s | are the numbers of datasets on which A performs better or worse than B, respectively.
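GD reduces to a paired comparison of per-dataset losses; a minimal sketch in which lower loss counts as a win (function name ours):

```python
def goal_difference(loss_a, loss_b):
    """GD(A;B|T) = |win| - |loss| over paired per-dataset losses;
    datasets with equal loss count as draws and contribute nothing."""
    wins = sum(1 for a, b in zip(loss_a, loss_b) if a < b)
    losses = sum(1 for a, b in zip(loss_a, loss_b) if a > b)
    return wins - losses
```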
Figure 12 shows the fitting curve of G D (UKDB;KDB| S t ) in terms of 0-1 loss. The X-axis shows the indexes of the datasets, referred to as t, corresponding to those in Table 4, and the Y-axis shows the value of G D (UKDB;KDB| S t ), where S t = { D m | m ≤ t } and D m is the dataset with index m. As can be seen, UKDB enjoys significant advantages over KDB in terms of 0-1 loss when the number of instances is ≤4000 (3 wins and 1 draw) or >10,000 (3 wins); otherwise the advantage is not significant (2 draws and 1 loss).
Figure 13 shows the fitting curve of G D (UKDB;KDB| S n ) in terms of 0-1 loss. The X-axis shows the number of attributes of the datasets, referred to as n, corresponding to those in Table 4, and the Y-axis shows the value of G D (UKDB;KDB| S n ), where S n = { D n ′ | n ′ ≤ n } and D n ′ is a dataset with n ′ attributes. We can see that when the number of attributes is >22, the advantage of UKDB over KDB is significant in terms of 0-1 loss (4 wins and 3 draws); otherwise the advantage is not significant (2 wins and 1 loss).

4.4.5. UKDB vs. Target Learning

Target learning [16] is a framework that takes each unlabeled testing instance P as a target and builds a specific Bayesian model BNC P to complement BNC T learned from training data T . It uses TAN and KDB, respectively, as the base classifier to clarify the superiority of target learning (the resulting classifiers are referred to as TANe and KDBe).
We have conducted experiments with TANe and KDBe ( k = 2 ) on 10 datasets (described in Table 4). The detailed zero-one loss results of all alternative algorithms are presented in Table A6 in Appendix A. Table 12 shows the Win/Draw/Loss comparison of TANe, KDBe and UKDB ( k = 2 ) in terms of zero-one loss. As can be seen, UKDB achieves lower zero-one loss more often than TANe and KDBe; for example, the decrease from 0.0701 ± 0.0058 (TANe) or 0.0463 ± 0.0089 (KDBe) to 0.0414 ± 0.0061 (UKDB) on the dataset Chess.
The Friedman test was also performed for these three algorithms on the 10 datasets. The final result is 5.6862 > F ( 2 , 18 ) = 3.5546 with p < 0.001 . This means that at α = 0.05 , there is evidence to reject the null hypothesis that all algorithms are equivalent. The average ranks of zero-one loss of these three algorithms are {TANe (2.4500), KDBe (2.2500), UKDB (1.3000)}, and the minimum required difference of mean rank is 0.7655, which demonstrates that UKDB has a statistically significant difference with TANe and KDBe.

4.4.6. UKDB vs. ETAN

de Campos et al. [43] proposed an extended version of TAN, ETAN, which does not require attributes to be connected to the class. Based on a modification of Edmonds' algorithm, its structure learning procedure explores a superset of the structures considered by TAN, yet achieves global optimality of the learning score function very efficiently.
Since ETAN shares similarities with UKDB ( k = 1 ), we have conducted experiments with it on the 10 datasets (described in Table 4). The detailed zero-one loss results can be found in Table A7 in Appendix A, and the Win/Draw/Loss comparison is presented in Table 13. As can be seen, UKDB obtains lower error than ETAN more often than the reverse. Although ETAN is an efficient algorithm with an unrestricted Bayesian network structure similar to that of UKDB ( k = 1 ), it is a single model, whereas UKDB is an ensemble.
The corresponding Friedman test result for these three algorithms on the 10 datasets is 4.0435 > F ( 2 , 18 ) = 3.5546 with p < 0.001 . The corresponding average ranks in terms of zero-one loss are {ETAN (2.5000), UKDB ( k = 1 ) (2.1000), UKDB ( k = 2 ) (1.3000)}, and the minimum required difference of mean rank is 0.8227, which demonstrates that UKDB ( k = 2 ) ranks better than the other algorithms, followed by UKDB ( k = 1 ) and ETAN. UKDB ( k = 2 ) has a statistically significant difference with ETAN.

5. Conclusions

In this paper, we have proposed to extend KDB from a restricted BNC to an unrestricted one by applying the Markov blanket. The final classifier, called UKDB, demonstrates better classification performance with high expressivity, enhanced robustness and tight coupling. For each testing instance P , an appropriate local Bayesian classifier UKDB P is built using the same learning strategy as that of UKDB T learned from training data T . Compared with other state-of-the-art BNCs, the novelty of UKDB is that it uses the information mined from labeled and unlabeled data to make joint decisions. From the case study we can see that, given testing instances P 1 and P 2 , the weights of dependence relationships between the same pairs of attribute values may differ, which makes the topology of UKDB P 1 distinct from that of UKDB P 2 . Moreover, since a model is learned directly from the data of a given field, it can express only part of the domain knowledge: datasets cover only part of the field, and statistically mined knowledge may contradict expert knowledge. Some of the mined knowledge may not conform to the knowledge of medical experts, which calls for expert validation. Thus, given medical expertise, the network structures of UKDB P and UKDB T could be further improved.
Given a limited number of instances, the accuracy of probability estimation determines the robustness of dependence relationships, and in turn the structure complexity of BNCs. The characteristic of tight coupling helps UKDB improve probability estimation. UKDB has been compared experimentally with some state-of-the-art BNCs of different structure complexities. Although KDB and UKDB have the same structure complexity, UKDB shows a clear advantage over KDB in terms of classification accuracy (zero-one loss) and robustness (bias and variance). The independence assumption of NB rarely holds for all instances but may hold for a specific instance. However, high-dependence BNCs, e.g., TAN, KDB and UKDB, focus on the interdependence between attributes but disregard the independence between attribute values. If the independence within a testing instance can be measured and identified, UKDB P can provide a much more competitive representation.
Target learning relies on dependence evaluation when attributes take specific values. Because the proposed UKDB P is based on UKDB, it needs enough data to learn accurate conditional probabilities during structure learning. Thus, in practical applications, inaccurate estimates of the conditional probability for some attribute values, e.g., P ( x i | Π , c ) , may propagate noise into the estimate of the joint probability P ( c , x ) . This situation is more pronounced on datasets with fewer attributes. Therefore, our further research is to determine the appropriate conditional probability estimates needed for this purpose and to seek alternative methods, e.g., Laplace correction.

Author Contributions

All authors have contributed to the study and preparation of the article. All authors have read and approved the final manuscript.

Funding

This work was supported by the National Science Foundation of China (Grant No. 61272209 and No. 61872164).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1 shows the detailed zero-one loss results of KDB ( k = 1 ) , KDB ( k = 2 ) , UKDB ( k = 1 ) and UKDB ( k = 2 ) on the 10 datasets (described in Table 4). Table A2 shows the detailed zero-one loss results of UKDB ( k = 2 ) with {MI, CMI} and {PMI, PCMI} on the 10 datasets. Table A3, Table A4 and Table A5 show the detailed experimental results of average zero-one loss, bias and variance for NB, TAN, KDB and UKDB ( k = 2 ) on the 10 datasets, respectively. Table A6 shows the detailed zero-one loss results of TANe, KDBe and UKDB. Finally, Table A7 shows the detailed zero-one loss results of ETAN, UKDB ( k = 1 ) and UKDB ( k = 2 ) .
Table A1. Detailed zero-one loss results of KDB (k = 1), KDB (k = 2), UKDB (k = 1) and UKDB (k = 2). The lowest results from all these BNCs are highlighted in bold.
DatasetKDB (k = 1)KDB (k = 2)UKDB (k = 1)UKDB (k = 2)
Adult0.1758 ± 0.00410.1852 ± 0.00360.1676 ± 0.00350.1537 ± 0.0045
Census-Income0.0736 ± 0.00200.0767 ± 0.00190.0740 ± 0.00370.0689 ± 0.0022
Dis0.0186 ± 0.00470.0196 ± 0.00470.0141 ± 0.00270.0177 ± 0.0047
Hypothyroid0.0127 ± 0.00490.0124 ± 0.00430.0121 ± 0.00620.0112 ± 0.0065
Chess0.0998 ± 0.00230.0491 ± 0.00390.0644 ± 0.00230.0414 ± 0.0061
MAGIC0.2042 ± 0.01050.2139 ± 0.01100.2021 ± 0.00810.1987 ± 0.0101
Mushroom0.0007 ± 0.00070.0007 ± 0.00090.0007 ± 0.00120.0009 ± 0.0004
Musk0.0713 ± 0.01570.0684 ± 0.01660.0706 ± 0.01970.0654 ± 0.0167
Sick0.0241 ± 0.00710.0264 ± 0.00700.0266 ± 0.00620.0262 ± 0.0067
Spambase0.0865 ± 0.01190.0742 ± 0.01420.0810 ± 0.01260.0732 ± 0.0144
Table A2. Detailed zero-one loss results of UKDB (k = 2) with {MI, CMI} and {PMI, PCMI}. The lowest results from all these BNCs are highlighted in bold.
DatasetUKDB (k = 2) with {MI, CMI}UKDB (k = 2) with {PMI, PCMI}
Adult0.1537 ± 0.00450.1569 ± 0.0039
Census-Income0.0689 ± 0.00220.0729 ± 0.0019
Dis0.0177 ± 0.00470.0181 ± 0.0047
Hypothyroid0.0112 ± 0.00650.0116 ± 0.0047
Chess0.0414 ± 0.00610.0438 ± 0.0028
Magic0.1987 ± 0.01010.2872 ± 0.0107
Mushroom0.0009 ± 0.00040.0010 ± 0.0004
Musk0.0654 ± 0.01670.0701 ± 0.0360
Sick0.0262 ± 0.00670.0267 ± 0.0070
Spambase0.0732 ± 0.01440.0743 ± 0.0126
Table A3. Experimental results of average zero-one loss for 10-cross validation. The lowest results from all these BNCs are highlighted in bold.
DatasetNBTANKDB (k = 2)UKDB (k = 2)
Adult0.1840 ± 0.00410.1765 ± 0.00390.1852 ± 0.00360.1537 ± 0.0045
Census-Income0.1739 ± 0.00220.0736 ± 0.00220.0767 ± 0.00190.0689 ± 0.0022
Dis0.0251 ± 0.00770.0197 ± 0.00540.0196 ± 0.00470.0177 ± 0.0047
Hypothyroid0.0144 ± 0.00430.0128 ± 0.00480.0124 ± 0.00430.0112 ± 0.0065
Chess0.1354 ± 0.00510.0853 ± 0.00920.0491 ± 0.00390.0414 ± 0.0061
MAGIC0.2396 ± 0.00690.2149 ± 0.00980.2139 ± 0.01100.1987 ± 0.0101
Mushroom0.0480 ± 0.00360.0008 ± 0.00040.0007 ± 0.00090.0009 ± 0.0004
Musk0.1222 ± 0.06960.0890 ± 0.01320.0684 ± 0.01660.0654 ± 0.0167
Sick0.0290 ± 0.00580.0296 ± 0.00610.0264 ± 0.00700.0262 ± 0.0067
Spambase0.1069 ± 0.01270.0827 ± 0.01000.0742 ± 0.01420.0732 ± 0.0144
Table A4. Experimental results of average bias for 10-cross validation. The lowest results from all these BNCs are highlighted in bold.
DatasetNBTANKDB (k = 2)UKDB (k = 2)
Adult0.16490.13120.12200.1127
Census-Income0.23030.05440.04210.0396
Dis0.01650.01930.01910.0190
Hypothyroid0.01160.01040.00960.0083
Chess0.11070.07020.04170.0401
MAGIC0.21110.12520.12410.1203
Mushroom0.02370.00010.00010.0001
Musk0.18470.15600.15350.1582
Sick0.02460.02070.01980.0193
Spambase0.09290.05700.04970.0532
Table A5. Experimental results of average variance for 10-cross validation. The lowest results from all these BNCs are highlighted in bold.
DatasetNBTANKDB (k = 2)UKDB (k = 2)
Adult0.00690.01650.02850.0164
Census-Income0.00520.01010.01100.0060
Dis0.00010.00050.00110.0002
Hypothyroid0.00170.00210.00240.0026
Chess0.01860.01020.01110.0096
MAGIC0.01740.04900.04910.0457
Mushroom0.00430.00020.00020.0002
Musk0.11080.11800.13200.1169
Sick0.00470.00510.00430.0048
Spambase0.00920.01580.02140.0152
Table A6. Detailed zero-one loss results of TANe, KDBe (k = 2) and UKDB (k = 2). The lowest results from all these BNCs are highlighted in bold.
DatasetTANeKDBe (k = 2)UKDB (k = 2)
Adult0.1554 ± 0.00370.1601 ± 0.00390.1537 ± 0.0045
Census-Income0.0784 ± 0.00300.0729 ± 0.00270.0689 ± 0.0022
Dis0.0180 ± 0.00410.0185 ± 0.00340.0177 ± 0.0047
Hypothyroid0.0120 ± 0.00550.0120 ± 0.00560.0112 ± 0.0065
Chess0.0701 ± 0.00580.0463 ± 0.00890.0414 ± 0.0061
Magic0.2177 ± 0.00900.2157 ± 0.00970.1987 ± 0.0101
Mushroom0.0008 ± 0.00080.0010 ± 0.00110.0009 ± 0.0004
Musk0.0828 ± 0.01650.0749 ± 0.01900.0654 ± 0.0167
Sick0.0320 ± 0.00620.0272 ± 0.00710.0262 ± 0.0067
Spambase0.0849 ± 0.01130.0755 ± 0.01320.0732 ± 0.0144
Table A7. Detailed zero-one loss results of ETAN, UKDB (k = 1) and UKDB (k = 2). The lowest results from all these BNCs are highlighted in bold.
DatasetETANUKDB (k = 1)UKDB (k = 2)
Adult0.1180 ± 0.00430.1676 ± 0.00350.1537 ± 0.0045
Census-Income0.0733 ± 0.00170.0740 ± 0.00370.0689 ± 0.0022
Dis0.0194 ± 0.00400.0141 ± 0.00270.0177 ± 0.0047
Hypothyroid0.0113 ± 0.00490.0121 ± 0.00620.0112 ± 0.0065
Chess0.0746 ± 0.00540.0644 ± 0.00230.0414 ± 0.0061
Magic0.2157 ± 0.01120.2021 ± 0.00810.1987 ± 0.0101
Mushroom0.0008 ± 0.00090.0007 ± 0.00120.0009 ± 0.0004
Musk0.0789 ± 0.01820.0706 ± 0.01970.0654 ± 0.0167
Sick0.0281 ± 0.00800.0266 ± 0.00620.0262 ± 0.0067
Spambase0.0838 ± 0.01370.0810 ± 0.01260.0732 ± 0.0144

References

  1. Abonyi, J.; Szeifert, F. Supervised fuzzy clustering for the identification of fuzzy classifiers. Pattern Recognit. Lett. 2003, 24, 2195–2207. [Google Scholar] [CrossRef]
  2. Ubeyli, E.D. A mixture of experts network structure for breast cancer diagnosis. J. Med. Syst. 2005, 29, 569–579. [Google Scholar] [CrossRef] [PubMed]
  3. Ubeyli, E.D. Implementing automated diagnostic systems for breast cancer detection. Expert Syst. Appl. 2006, 33, 1054–1062. [Google Scholar] [CrossRef]
  4. Wolberg, W.H.; Street, W.N.; Mangasarian, O.L. Image analysis and machine learning applied to breast cancer diagnosis and prognosis. Anal. Quant. Cytol. Histol. 1995, 17, 77–87. [Google Scholar]
  5. Andres, C.; Reyes, P.; Sipper, M. A fuzzy-genetic approach to breast cancer diagnosis. Artif. Intell. Med. 1999, 17, 131–155. [Google Scholar]
  6. Huang, C.L.; Liao, H.C.; Chen, M.C. Prediction model building and feature selection with support vector machines in breast cancer diagnosis. Expert Syst. Appl. 2006, 34, 578–587.
  7. Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference; Morgan Kaufmann: Palo Alto, CA, USA, 1988.
  8. Webb, G.I.; Boughton, J.R.; Zheng, F.; Ting, K.M.; Salem, H. Learning by extrapolation from marginal to full-multivariate probability distributions: Decreasingly naive Bayesian classification. Mach. Learn. 2012, 86, 233–272.
  9. Wu, J.; Cai, Z. A naive Bayes probability estimation model based on self-adaptive differential evolution. J. Intell. Inf. Syst. 2014, 42, 671–694.
  10. Webb, G.I.; Boughton, J.R.; Wang, Z.H. Not So Naive Bayes: Aggregating One-Dependence Estimators. Mach. Learn. 2005, 58, 5–24.
  11. Duda, R.O.; Hart, P.E. Pattern Classification and Scene Analysis; Wiley: New York, NY, USA, 1973.
  12. Friedman, N.; Geiger, D.; Goldszmidt, M. Bayesian network classifiers. Mach. Learn. 1997, 29, 131–163.
  13. Sahami, M. Learning limited dependence Bayesian classifiers. In Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining, Portland, OR, USA, 2–4 August 1996; pp. 335–338.
  14. Gigerenzer, G.; Brighton, H. Homo heuristicus: Why biased minds make better inferences. Top. Cognit. Sci. 2009, 1, 107–143.
  15. Shannon, C.E. The Mathematical Theory of Communication; University of Illinois Press: Champaign, IL, USA, 1949.
  16. Wang, L.M.; Chen, S.; Mammadov, M. Target Learning: A Novel Framework to Mine Significant Dependencies for Unlabeled Data. In Pacific-Asia Conference on Knowledge Discovery and Data Mining; Springer: Cham, Switzerland, 2018; pp. 106–117.
  17. Murphy, P.M.; Aha, D.W. UCI Repository of Machine Learning Databases. 1995. Available online: http://archive.ics.uci.edu/ml/datasets.html (accessed on 1 February 2019).
  18. Wang, L.M.; Zhao, H.Y. Learning a Flexible K-Dependence Bayesian Classifier from the Chain Rule of Joint Probability Distribution. Entropy 2015, 17, 3766–3786.
  19. Duan, Z.Y.; Wang, L.M. K-Dependence Bayesian Classifier Ensemble. Entropy 2017, 19, 651.
  20. Arias, J.; Gámez, J.A.; Puerta, J.M. Scalable learning of k-dependence Bayesian classifiers under MapReduce. In Proceedings of the 2015 IEEE Trustcom/BigDataSE/ISPA, Helsinki, Finland, 20–22 August 2015; Volume 2, pp. 25–32.
  21. Liu, Y.; Wang, L.M.; Sun, M.H. Efficient Heuristics for Structure Learning of k-Dependence Bayesian Classifier. Entropy 2018, 20, 897.
  22. Pernkopf, F. Bayesian network classifiers versus selective k-NN classifier. Pattern Recognit. 2005, 38, 1–10.
  23. Taheri, S.; Mammadov, M. Structure learning of Bayesian Networks using global optimization with applications in data classification. Optim. Lett. 2015, 9, 931–948.
  24. Wang, L.M.; Zhao, H.Y.; Sun, M.H.; Ning, Y. General and Local: Averaged k-Dependence Bayesian Classifiers. Entropy 2015, 17, 4134–4154.
  25. Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140.
  26. Chen, H.L.; Yang, B.; Wang, G.; Wang, S.J.; Liu, J.; Liu, D.Y. Support vector machine based diagnostic system for breast cancer using swarm intelligence. J. Med. Syst. 2012, 36, 2505–2519.
  27. American Cancer Society: About Breast Cancer. 2017. Available online: https://www.cancer.org/content/dam/CRC/PDF/Public/8577.00.pdf (accessed on 15 January 2019).
  28. Erhan, D.; Bengio, Y.; Courville, A.; Manzagol, P.A.; Vincent, P.; Bengio, S. Why does unsupervised pre-training help deep learning? J. Mach. Learn. Res. 2010, 11, 625–660.
  29. Sariyar, M.; Borg, A.; Pommerening, K. Controlling false match rates in record linkage using extreme value theory. J. Biomed. Inform. 2011, 44, 648–654.
  30. Agarwal, A.; Chapelle, O.; Dudík, M.; Langford, J. A reliable effective terascale linear learning system. J. Mach. Learn. Res. 2014, 15, 1111–1133.
  31. Duda, R.; Hart, P.; Stork, D.G. Pattern Classification; John Wiley and Sons: Hoboken, NJ, USA, 2012.
  32. Domingos, P. A Unified Bias-Variance Decomposition for Zero-One and Squared Loss. In Proceedings of the 17th National Conference on Artificial Intelligence, Austin, TX, USA, 31 July–2 August 2000; pp. 564–569.
  33. Hanley, J.A.; McNeil, B.J. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 1982, 143, 29–36.
  34. Fukunaga, K. Introduction to Statistical Pattern Recognition; Elsevier: Amsterdam, The Netherlands, 2013.
  35. Jiang, L.X.; Zhang, H.; Cai, Z.H.; Wang, D.H. Weighted average of one-dependence estimators. J. Exp. Theor. Artif. Intell. 2012, 24, 219–230.
  36. Yang, Y.; Webb, G.I.; Cerquides, J.; Korb, K.; Boughton, J.; Ting, K.M. To select or to weigh: A comparative study of model selection and model weighing for SPODE ensembles. In European Conference on Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006; pp. 533–544.
  37. Zheng, F.; Webb, G.I.; Suraweera, P.; Zhu, L.G. Subsumption resolution: An efficient and effective technique for semi-naive Bayesian learning. Mach. Learn. 2012, 87, 93–125.
  38. Kubat, M.; Matwin, S. Addressing the curse of imbalanced training sets: One-sided selection. In Proceedings of the Fourteenth International Conference on Machine Learning, Nashville, TN, USA, 8–12 July 1997; Volume 97, pp. 179–186.
  39. Lewis, D.D.; Catlett, J. Heterogeneous uncertainty sampling for supervised learning. In Proceedings of the Eleventh International Conference on Machine Learning, San Francisco, CA, USA, 10–13 July 1994; pp. 148–156.
  40. Ling, C.X.; Li, C. Data mining for direct marketing: Problems and solutions. In Proceedings of the Fourth International Conference on Knowledge Discovery and Data Mining (KDD-98), New York, NY, USA, 27–31 August 1998; Volume 98, pp. 73–79.
  41. Church, K.W.; Hanks, P. Word association norms, mutual information, and lexicography. Comput. Linguist. 1990, 16, 22–29.
  42. Demšar, J. Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 2006, 7, 1–30.
  43. De Campos, C.P.; Corani, G.; Scanagatta, M.; Cuccu, M.; Zaffalon, M. Learning extended tree augmented naive structures. Int. J. Approx. Reason. 2016, 68, 153–163.
Figure 1. The distribution of I(x_i; x_j | c) between attributes X_1 and X_2 on dataset WBC.
Figure 2. Unrestricted Bayesian classifier corresponding to joint probability distribution.
Figure 3. An example of Naive Bayes.
Figure 4. An example of Tree augmented Naive Bayes.
Figure 5. An example of k-dependence Bayesian classifier when k = 2.
Figure 6. Two examples of UKDB when k = 1 and k = 2 .
Figure 7. The dependency types in BNCs.
Figure 8. The network structure of UKDB_T corresponding to the breast cancer dataset.
Figure 9. The network structure of UKDB_P corresponding to testing instance P = {9,5,8,1,2,3,2,1,5} in the breast cancer dataset.
Figure 10. The network structure of UKDB_P corresponding to testing instance P′ = {5,3,3,3,6,10,3,1,1} in the breast cancer dataset.
Figure 11. The ROC curves for NB, TAN, KDB (k = 2) and UKDB (k = 2) on 10 datasets.
Figure 12. The fitting curve of G_D(UKDB; KDB | S_t) in terms of 0-1 loss.
Figure 13. The fitting curve of G_D(UKDB; KDB | S_n) in terms of 0-1 loss.
Table 1. List of symbols used.
Notation | Description
P(·) | probability estimation
X_i | predictive attribute (or variable)
x_i | discrete values for attribute X_i
x = (x_1, …, x_n) | an instance, an n-dimensional vector
C | class variable
c | discrete values for C
Ω_C | set of labels of the class variable C
N | number of training instances
M | number of testing instances
n | number of predictive attributes
D = (<x_1, c_1>, …, <x_N, c_N>) | training dataset
<x_i, c_i> | the i-th training instance with the corresponding class label
Table 2. The relationships between X_i and its parents corresponding to the restricted and unrestricted BNC.
X_i | Π_i (restricted BNC) | X_i | Π_i (unrestricted BNC)
C | {} | C | {X_p}
X_1 | {C} | X_1 | {X_p, C}
X_2 | {X_1, C} | X_2 | {X_p, X_1, C}
X_3 | {X_1, X_2, C} | X_3 | {X_p, X_1, X_2, C}
⋮
X_n | {X_1, X_2, …, X_{n-1}, C} | X_n | {X_p, X_1, …, X_{n-1}, C}
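The parent sets in the table follow a fixed pattern: each attribute takes the class C plus higher-ranked attributes as parents, and the unrestricted variant adds the extra parent X_p. A hedged sketch of building such parent sets with a KDB-style limit of k attribute parents; for brevity it takes the k immediately preceding attributes, whereas KDB proper selects the k predecessors with the highest conditional mutual information:

```python
# Simplified sketch (not the authors' code) of KDB-style parent-set
# construction. `attributes` is assumed to be already sorted by mutual
# information with the class; `extra_parent` models the unrestricted X_p.
def kdb_parent_sets(attributes, k, extra_parent=None):
    parents = {}
    for i, attr in enumerate(attributes):
        # Simplification: take the k immediately preceding attributes;
        # KDB proper picks the k predecessors maximizing CMI with this one.
        preds = attributes[max(0, i - k):i]
        parents[attr] = ([extra_parent] if extra_parent else []) + preds + ["C"]
    return parents

# Restricted structure with four attributes and k = 2:
# X1 -> [C], X2 -> [X1, C], X3 -> [X1, X2, C], X4 -> [X2, X3, C]
print(kdb_parent_sets(["X1", "X2", "X3", "X4"], k=2))
# Unrestricted structure adds Xp to every parent set:
print(kdb_parent_sets(["X1", "X2", "X3", "X4"], k=2, extra_parent="Xp"))
```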
Table 3. Attributes in WBC database.
Attribute | Type | Explanation | Symbol
Sample code number | Discrete | code number | –
Clump Thickness | Discrete | [1,10] | X_1
Cell Size | Discrete | [1,10] | X_2
Cell Shape | Discrete | [1,10] | X_3
Marginal Adhesion | Discrete | [1,10] | X_4
Epithelial Cell Size | Discrete | [1,10] | X_5
Bare Nuclei | Discrete | [1,10] | X_6
Bland Chromatin | Discrete | [1,10] | X_7
Normal Nucleoli | Discrete | [1,10] | X_8
Mitoses | Discrete | [1,10] | X_9
Class | Binary | 2 for benign, 4 for malignant | C
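Each record in the WBC data file concatenates the sample code number, the nine discrete attributes above, and the class label as comma-separated fields, with '?' marking a missing value. A small parsing sketch; the sample lines are illustrative of the format rather than quoted from the database:

```python
# Sketch parsing one WBC-format record: code number, nine attributes in
# [1, 10] ('?' = missing), and the class label (2 = benign, 4 = malignant).
def parse_wbc_record(line):
    fields = line.strip().split(",")
    code = fields[0]
    attrs = [None if f == "?" else int(f) for f in fields[1:10]]
    label = "benign" if fields[10] == "2" else "malignant"
    return code, attrs, label

code, attrs, label = parse_wbc_record("1000025,5,1,1,1,2,1,3,1,1,2")
print(code, attrs, label)  # 1000025 [5, 1, 1, 1, 2, 1, 3, 1, 1] benign
```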
Table 4. Datasets.
No. | Dataset | Instance | Attribute | Class
1 | Hypothyroid | 3163 | 25 | 2
2 | Chess | 3196 | 36 | 2
3 | Dis | 3772 | 29 | 2
4 | Sick | 3772 | 29 | 2
5 | Spambase | 4601 | 57 | 2
6 | Musk | 6598 | 166 | 2
7 | Mushroom | 8124 | 22 | 2
8 | Magic | 19,020 | 10 | 2
9 | Adult | 48,842 | 14 | 2
10 | Census-Income | 299,285 | 41 | 2
Table 5. Comparison of various algorithms from literature based on the WBC dataset.
Algorithms | Reference | Zero-One Loss | F1 Score
NB | Duda and Hart (1973) [11] | 0.0258 | 0.8006
TAN | Friedman et al. (1997) [12] | 0.0429 | 0.7858
KDB (k = 1) | Sahami (1996) [13] | 0.0485 | 0.7865
KDB (k = 2) | Sahami (1996) [13] | 0.0521 | 0.7869
UKDB (k = 1) | – | 0.0301 | 0.7917
UKDB (k = 2) | – | 0.0385 | 0.7932
Table 6. Attribute values in P and corresponding marginal probabilities.
x_i | x_1 = 9 | x_2 = 5 | x_3 = 8 | x_4 = 1 | x_5 = 2 | x_6 = 3 | x_7 = 2 | x_8 = 1 | x_9 = 5
P(x_i) | 0.0200 | 0.0429 | 0.0401 | 0.5823 | 0.5522 | 0.0401 | 0.2375 | 0.6338 | 0.0086
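Low marginals such as P(x_1 = 9) = 0.0200 flag attribute values that are rare in the training data, which is exactly the situation where the instance-specific model UKDB_P is expected to complement UKDB_T. A sketch of how such marginals can be estimated; the counts are illustrative and the add-one smoothing choice is an assumption, not necessarily the authors' estimator:

```python
# Illustrative sketch: marginal probability of an attribute value with
# add-one (Laplace) smoothing. Counts below are made up, not from WBC.
from collections import Counter

def marginal_probability(column_values, value, n_distinct):
    """P(value) with add-one smoothing over n_distinct possible values."""
    counts = Counter(column_values)
    return (counts[value] + 1) / (len(column_values) + n_distinct)

# Example: an attribute ranging over [1, 10]; the unseen value 9 gets a
# small but nonzero marginal probability.
column = [1, 1, 2, 3, 1, 10, 2, 1, 3, 5]
print(round(marginal_probability(column, 9, n_distinct=10), 4))  # (0+1)/(10+10) = 0.05
```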
Table 7. Win/Draw/Loss comparison results of UKDB (k = 1) and UKDB (k = 2) in terms of zero-one loss.
Win/Draw/Loss | UKDB (k = 1)
UKDB (k = 2) | 6/2/2
Table 8. Detailed results of UKDB (k = 2) on two sets of data with and without missing values in terms of zero-one loss.
Dataset | Results with Missing Values | Results without Missing Values
Chess | 0.04247 ± 0.0071 | 0.0414 ± 0.0061
Magic | 0.2001 ± 0.0203 | 0.1987 ± 0.0101
Spambase | 0.0760 ± 0.0153 | 0.0732 ± 0.0144
Table 9. Win/Draw/Loss comparison results of UKDB (k = 2) with {MI, CMI} and {PMI, PCMI}.
Win/Draw/Loss | UKDB (k = 2) with {PMI, PCMI}
UKDB (k = 2) with {MI, CMI} | 5/5/0
Table 10. Average computational time of UKDB (k = 2) with {MI, CMI} and {PMI, PCMI}.
Datasets | Time (s): UKDB (k = 2) with {MI, CMI} | Time (s): UKDB (k = 2) with {PMI, PCMI}
Hypothyroid | 0.1139 | 0.0688
Chess | 0.0641 | 0.0368
Dis | 0.1969 | 0.1172
Sick | 0.1999 | 0.1203
Spambase | 2.0921 | 1.1009
Musk | 9.4360 | 4.7562
Mushroom | 0.2656 | 0.1631
Magic | 0.1420 | 0.1117
Adult | 0.7436 | 0.5131
Census-Income | 83.6734 | 45.5719
Average | 9.6928 | 5.2560
Table 11. The Win/Draw/Loss comparison results of different BNCs in terms of zero-one loss, Bias and Variance.
Classifier | NB | TAN | KDB (k = 2)
0-1 loss:
TAN | 8-2-0 | |
KDB (k = 2) | 9-1-0 | 5-5-0 |
UKDB (k = 2) | 10-0-0 | 7-2-1 | 6-3-1
Bias:
TAN | 9-0-1 | |
KDB (k = 2) | 9-0-1 | 5-5-0 |
UKDB (k = 2) | 9-0-1 | 6-4-0 | 2-8-0
Variance:
TAN | 3-0-7 | |
KDB (k = 2) | 4-0-6 | 3-2-5 |
UKDB (k = 2) | 3-2-5 | 5-3-2 | 7-1-2
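Win/Draw/Loss records such as those above can be tallied by pairwise comparison of per-dataset results. A minimal sketch with made-up loss values (lower zero-one loss wins):

```python
# Illustrative sketch of tallying a Win/Draw/Loss record for classifier A
# against classifier B from aligned per-dataset zero-one losses (made up).
def win_draw_loss(results_a, results_b):
    """Count datasets where A beats, ties with, or loses to B (lower is better)."""
    win = sum(a < b for a, b in zip(results_a, results_b))
    draw = sum(a == b for a, b in zip(results_a, results_b))
    loss = sum(a > b for a, b in zip(results_a, results_b))
    return win, draw, loss

losses_a = [0.04, 0.20, 0.07, 0.03]
losses_b = [0.06, 0.20, 0.08, 0.02]
print(win_draw_loss(losses_a, losses_b))  # (2, 1, 1)
```

In published comparisons, raw counts like these are usually backed by statistical tests over multiple datasets, such as those surveyed by Demšar [42].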
Table 12. Win/Draw/Loss comparison results of TANe, KDBe and UKDB (k = 2) in terms of zero-one loss.
Win/Draw/Loss | TANe | KDBe (k = 2)
UKDB (k = 2) | 7/2/1 | 6/4/0
Table 13. Win/Draw/Loss comparison results of ETAN, UKDB (k = 1) and UKDB (k = 2) in terms of zero-one loss.
Win/Draw/Loss | ETAN | UKDB (k = 1)
UKDB (k = 1) | 6/2/2 |
UKDB (k = 2) | 7/1/2 | 6/2/2

Wang, L.; Liu, Y.; Mammadov, M.; Sun, M.; Qi, S. Discriminative Structure Learning of Bayesian Network Classifiers from Training Dataset and Testing Instance. Entropy 2019, 21, 489. https://doi.org/10.3390/e21050489