Abstract
Real-world problems often require purely deductive reasoning to be supported by other techniques that can cope with noise, in the form of incomplete and uncertain data. Abductive inference tackles incompleteness by guessing unknown information, provided that the guess complies with given constraints. Probabilistic reasoning tackles uncertainty by weakening the sharp logical approach. This work aims at bringing the two together, and at further extending the expressive power of the resulting framework, called Probabilistic Expressive Abductive Logic Programming (PEALP). It adopts a Logic Programming perspective, introducing several kinds of constraints and allowing a degree of strength to be set on their validity. Procedures to handle both extensions, compatible with standard abductive and probabilistic frameworks, are also provided.
Notes
By extension, according to foundational literature (Kakas et al. 1992), literals built on abducible predicates are also called abducibles, meaning that they can be abduced. So, an abducible predicate is a kind of claim that may be abduced, while an abducible literal is a specific claim of that kind that may be abduced.
While generally defining ICs as formulas, with a few exceptions the discussion in Kakas et al. (1992) only considers ICs in the form of ‘plain’ denials or, using Truth Maintenance terminology, nogoods, i.e., negations of conjunctions of literals, possibly due to their straightforward representation as Logic Programming goals (\(\leftarrow l_{1},\dots ,l_{n},\neg l_{n + 1},\dots ,\neg l_{n+m}\)). As such, since goals are clauses, and FOL clauses are universally quantified, ICs are universally quantified in FOL, as well.
The original framework requires that abducible predicates have no definition in P. This requirement may be relaxed with a simple representational trick.
Note that the calls to the abductive derivation are meant to be executed non-deterministically; when successful, each returns a suitable partial \({\Delta }_{O}\), which is combined with the partial \({\Delta }_{O}\)'s found in other derivation steps to give the overall \({\Delta }\) returned by a successful run of the complete algorithm.
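As an illustrative sketch only (not the paper's actual proof procedure, and with made-up literals), the combination of partial \({\Delta }_{O}\)'s can be modeled by enumerating one partial explanation per derivation step and taking the union of each choice:

```python
from itertools import product

# Hypothetical partial explanations returned by the non-deterministic
# abductive derivation steps: each step may succeed in several ways,
# each way yielding its own partial Delta_O (a set of abduced literals).
step1_deltas = [{"flat(stan)"}, {"academic(stan)"}]
step2_deltas = [{"academic(stan)", "young(stan)"}]

def combine(*steps):
    """Enumerate every overall Delta obtained by choosing one partial
    Delta_O per derivation step and taking the union of the choices."""
    for choice in product(*steps):
        yield set().union(*choice)

overall = list(combine(step1_deltas, step2_deltas))
# Two overall explanations, one per way step 1 can succeed.
```

In a real abductive proof procedure the choices are explored via backtracking rather than enumerated eagerly; the sketch only shows how the partial sets compose.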
Actually, this behavior is inefficient: if any literal in the xor constraint (say, \(l_{i}\)) was already known or abduced to be true before starting the consistency check on the constraint, all proofs based on the previous abduction of other literals \(l_{j}\) with \(j < i\) would fail when encountering \(l_{i}\), causing backtracking until the turn of \(l_{i}\) itself comes in the consistency check. This might be optimized by first checking whether any literal in the xor constraint is already known or abduced to be true: if there is exactly one such literal, then the constraint is automatically satisfied without any further abduction; if there are several, then the constraint immediately fails; otherwise, the described procedure must be started. Anyway, as said, optimization of the proposed approach is not the subject of this paper.
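The suggested pre-check can be sketched as follows (a minimal illustration with hypothetical names; the full abduction-based consistency procedure itself is not shown):

```python
def precheck_xor(literals, known_true):
    """Pre-check an xor constraint before starting the full consistency
    check, as suggested above.

    literals   -- the literals occurring in the xor constraint
    known_true -- literals already known or abduced to be true

    Returns "satisfied" if exactly one literal is already true (the
    constraint holds with no further abduction), "failed" if several are
    (xor is violated), and "proceed" if none is (the full procedure must
    then be started)."""
    already_true = [l for l in literals if l in known_true]
    if len(already_true) == 1:
        return "satisfied"
    if len(already_true) > 1:
        return "failed"
    return "proceed"
```

For example, with constraint literals `["a", "b", "c"]`, a store in which only `b` holds satisfies the constraint immediately, while one containing both `a` and `b` fails it without any backtracking.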
We prefer calling it a ‘strength’, using a more neutral term than ‘likelihood’, because often these values express an intuitive degree of validity of the constraint, rather than a theoretically founded computation of its likelihood of being true. Indeed, as quite usual in the probabilistic logic programming setting, these values are manually set by the programmers of the logic theory. Of course, it might be possible to devise procedures that try to assess these values from the available data, but it is a line of research on its own and is outside of the scope of this paper, which focuses instead on the exploitation of the given values.
The assessment of the likelihood \(p(\delta )\) that abducible \(\delta \) is true is outside the scope of this paper. E.g., as a naive approach, one might assume that this is the a priori probability that the predicate on which \(\delta \) is built is true.
Using (3), the actual probability of an abductive explanation \(\overline {E} \in \mathcal {W}_{G}\) relative to the set \(\mathcal {W}_{G}\) of all possible worlds can be computed using the following normalization:
$$p^{\prime}(\overline{E}) = \frac{p(\overline{E})}{{\sum}_{E \in \mathcal{W}_{G}} p(E)}$$

Future work will deal specifically with the definition of a formal probability distribution for the assessment of the likelihood of possible worlds.
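As a concrete illustration of the normalization, with made-up (not from the paper) unnormalized scores \(p(E)\) for three possible worlds:

```python
def normalize(explanations):
    """Turn the unnormalized scores p(E) of the abductive explanations in
    the set of possible worlds W_G into probabilities p'(E) = p(E) / sum,
    so that the results sum to 1."""
    total = sum(explanations.values())
    return {e: p / total for e, p in explanations.items()}

# Hypothetical unnormalized scores, e.g. each obtained as a product of
# the strengths of the abducibles in the explanation.
worlds = {"E1": 0.30, "E2": 0.15, "E3": 0.05}
p_norm = normalize(worlds)
# E1 gets 0.30 / 0.50 = 0.6 of the probability mass, and so on.
```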
References
Alberti, M., Bellodi, E., Cota, G., Lamma, E., Riguzzi, F., Zese, R. (2016). Probabilistic constraint logic theories. In Hommersom, A, & Abdallah, S (Eds.) Proceedings of the 3rd International Workshop on Probabilistic Logic Programming (PLP-2016), co-located with 26th International Conference on Inductive Logic Programming (ILP 2016), CEUR Workshop Proceedings, (Vol. 1661 pp. 15–28).
Arvanitis, A., Muggleton, S.H., Chen, J., Watanabe, H. (2006). Abduction with stochastic logic programs based on a possible worlds semantics. In Short Paper Proceedings of the 16th International Conference on Inductive Logic Programming (ILP-06), University of Coruña.
Christiansen, H. (2008). Implementing probabilistic abductive logic programming with constraint handling rules. In Schrijvers, T., & Frühwirth, T. (Eds.) Constraint Handling Rules, Lecture Notes in Computer Science, vol. 5388, Springer (pp. 85–118).
Clark, K.L. (1978). Negation as failure. In Gallaire, H., & Minker, J. (Eds.) Logic and Databases, Plenum Press (pp. 293–322).
De Raedt, L., & Kersting, K. (2008). Probabilistic inductive logic programming. In Probabilistic Inductive Logic Programming, Springer, Lecture Notes in Artificial Intelligence, (Vol. 4911 pp. 1–27).
De Raedt, L., Kimmig, A., Toivonen, H. (2007). ProbLog: A probabilistic Prolog and its application in link discovery. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI-07) (pp. 2462–2467).
Denecker, M., & De Schreye, D. (1992). SLDNFA: An abductive procedure for normal abductive programs. In Proceedings of the Joint International Conference and Symposium on Logic Programming (JICSLP), MIT Press (pp. 686–700).
Fierens, D., Van den Broeck, G., Bruynooghe, M., De Raedt, L. (2012). Constraints for probabilistic logic programming. In Roy, D., Mansinghka, V., Goodman, N. (Eds.) Proceedings of the NIPS Probabilistic Programming Workshop.
Fung, T.H., & Kowalski, R.A. (1997). The IFF proof procedure for abductive logic programming. The Journal of Logic Programming, 33, 151–165.
Getoor, L.C. (2002). Learning statistical models from relational data. PhD thesis, Stanford University, Stanford, CA, USA. AAI3038093.
Kakas, A., & Mancarella, P. (1990a). Abductive logic programming. In Proceedings of the NACLP Workshop on Non-Monotonic Reasoning and Logic Programming.
Kakas, A., & Mancarella, P. (1990b). Database updates through abduction. In Proceedings of the 16th VLDB, Morgan Kaufmann (pp. 650–661).
Kakas, A.C., & Mancarella, P. (1990c). On the relation of truth maintenance and abduction. In Proceedings of the 1st Pacific Rim International Conference on Artificial Intelligence.
Kakas, A.C., Kowalski, R.A., Toni, F. (1992). Abductive logic programming. Journal of Logic and Computation, 2, 719–770.
Kate, R.J., & Mooney, R.J. (2009). Probabilistic abduction using Markov logic networks. In Proceedings of the IJCAI-09 Workshop on Plan, Activity and Intent Recognition (PAIR-09), Pasadena, CA.
Lloyd, J. (1987). Foundations of Logic Programming, 2nd edn. Springer.
Muggleton, S. (1996). Stochastic logic programs. In De Raedt, L. (Ed.) Advances in Inductive Logic Programming, Frontiers in Artificial Intelligence and Applications, (Vol. 32). IOS Press.
Nilsson, N. (1986). Probabilistic logic. Artificial Intelligence, 28, 71–87.
Poole, D. (1993). Probabilistic Horn abduction and Bayesian networks. Artificial Intelligence, 64(1), 81–129.
Raghavan, S.V. (2011). Bayesian abductive logic programs: a probabilistic logic for abductive reasoning. In Walsh, T. (Ed.) Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI-11) (pp. 2840–2841).
Richardson, M., & Domingos, P. (2006). Markov logic networks. Machine Learning, 62, 107–136.
Rotella, F., & Ferilli, S. (2013). Probabilistic abductive logic programming using possible worlds. In Proceedings of the 10th Italian Convention on Computational Logic (CILC-2013), Central Europe (CEUR) Workshop Proceedings, (Vol. 1068 pp. 131–145).
Sato, T. (2002). EM learning for symbolic-statistical models in statistical abduction. In Progress in Discovery Science: Final Report of the Japanese Discovery Science Project, Springer (pp. 189–200).
Vennekens, J., Verbaeten, S., Bruynooghe, M. (2004). Logic programs with annotated disjunctions. In Demoen, B., & Lifschitz, V. (Eds.) Logic Programming (pp. 431–445). Berlin: Springer.
Cite this article
Ferilli, S. Extending expressivity and flexibility of abductive logic programming. J Intell Inf Syst 51, 647–672 (2018). https://doi.org/10.1007/s10844-018-0531-6