Abstract
Self-training methods form a family of semi-supervised techniques that use a supervised learner to assign class labels to unlabeled examples; the resulting model can then predict the class of unseen domain objects. Most supervised methods used inside self-training are inductive. In this paper we propose instead to use the lazy learning method LID to assign classes to the unlabeled examples. A lazy approach such as LID reasons by similarity around the labeled examples; thus, when an unlabeled example is assigned to a class, we know that it shares relevant features with some of the labeled examples.
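To make the procedure concrete, here is a minimal sketch of self-training with a lazy base learner. The abstract does not detail LID's symbolic similitude terms, so a k-nearest-neighbor classifier stands in for the lazy learner; the function name self_train_lazy and the confidence threshold are illustrative assumptions, not part of the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def self_train_lazy(X_labeled, y_labeled, X_unlabeled,
                    k=3, confidence=1.0, max_rounds=10):
    """Self-training with a lazy base classifier (k-NN stand-in for LID).

    Each round, the lazy learner classifies the unlabeled pool; examples
    whose neighborhood vote reaches `confidence` are promoted into the
    labeled set with their predicted class.
    """
    X_l, y_l = np.asarray(X_labeled), np.asarray(y_labeled)
    X_u = np.asarray(X_unlabeled)

    for _ in range(max_rounds):
        if len(X_u) == 0:
            break
        clf = KNeighborsClassifier(n_neighbors=k).fit(X_l, y_l)
        proba = clf.predict_proba(X_u)           # neighborhood vote fractions
        pred = clf.classes_[proba.argmax(axis=1)]
        sure = proba.max(axis=1) >= confidence   # near-unanimous neighbors only

        if not sure.any():                       # nothing confident: stop early
            break
        # promote confidently labeled examples into the labeled set
        X_l = np.vstack([X_l, X_u[sure]])
        y_l = np.concatenate([y_l, pred[sure]])
        X_u = X_u[~sure]

    # final model trained on the enlarged labeled set, plus leftover pool
    return KNeighborsClassifier(n_neighbors=k).fit(X_l, y_l), X_u
```

With confidence=1.0 an example is adopted only when all k neighbors agree, which mirrors the requirement that a newly labeled example demonstrably shares relevant features with some labeled examples.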
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
Cite this paper
Armengol, E. (2013). A Lazy Learning Approach for Self-training. In: Torra, V., Narukawa, Y., Navarro-Arribas, G., Megías, D. (eds) Modeling Decisions for Artificial Intelligence. MDAI 2013. Lecture Notes in Computer Science, vol 8234. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-41550-0_11
DOI: https://doi.org/10.1007/978-3-642-41550-0_11
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-41549-4
Online ISBN: 978-3-642-41550-0