Logical dynamics of belief change in the community


Wherever we are, it is our friends that make our world. –Henry Drummond (1851-1897)

Abstract

In this paper we explore the relationship between norms of belief revision that may be adopted by members of a community and the resulting dynamic properties of the distribution of beliefs across that community. We show that at a qualitative level many aspects of social belief change can be obtained from a very simple model, which we call ‘threshold influence’. In particular, we focus on the question of what makes the beliefs of a community stable under various dynamical situations. We also consider refinements and alternatives to the ‘threshold’ model, the most significant of which is to consider changes to plausibility judgements rather than mere beliefs. We show first that some such change is mandated by difficult problems with belief-based dynamics related to the need to decide on an order in which different beliefs are considered. Secondly, we show that the resulting plausibility-based account results in a deterministic dynamical system that is non-deterministic at the level of beliefs.


Notes

  1. Proponents of any one theory of belief change may read \(Rp\) and \(Cp\) according to their favorite theory. The AGM account of Alchourrón et al. (1985) is certainly good enough for our purposes, but nothing we say here will depend too much on the details.

  2. Although success is accepted as a postulate of many accounts of belief change, including AGM, it does impose some limitations. In particular, many higher-order propositions such as the Moore-like propositional form “\(p\) but I do not believe \(p\)” are problematic.

  3. The general framework here is dynamic logic, in which an expression of the form \([\pi ]\varphi \) is a formula that means ‘after performing action \(\pi \), \(\varphi \) is the case’.

  4. Thanks to Zoé Christoff for bringing to our attention a very nice class of propositions showing why one would not always want to assume that one’s friends are authorities. Suppose I do not believe \(p\) and believe that I don’t believe it, but all my friends believe that I do, i.e. they believe that I believe \(p\). It would be odd (to say the least) if I were to be influenced to revise my belief so as to come to believe that I do believe \(p\)!

  5. The order of the positive and negative clauses is unimportant under our assumption that one cannot be both influenced to believe \(p\) and influenced to believe \(\lnot p\).

  6. This is an obvious consequence of standard axiomatic presentations of PDL, such as Definition 4.78 in Blackburn et al. (2001). For further details, see our Girard et al. (2012), which is a generalisation of other systems of dynamic epistemic logic such as Baltag et al. (1998) and Van Benthem et al. (2006).

  7. The technical details of this language will not be relevant to our present purposes, so we will not go into them here, referring the reader to Seligman et al. (2011) for further details.

  8. Analysis of social logical dynamics by finite state automata was used in Liang and Seligman (2011) to show that some interesting dynamic properties (such as the eventual convergence to a stable distribution) can be expressed in terms of operators similar to those we are considering here, but in the domain of preference rather than belief. Here we are providing a slightly more general characterisation for belief change, which does not depend on any particular account of revision, contraction, strong or weak influence. As in Liang and Seligman (2011), it is important to realise that the machine is not the definition of a dynamical system but a tool to analyse what is already implicit in the definition of the logical operators.

  9. This account of strong and weak influence is more-or-less parallel to that given for preference dynamics in Liang and Seligman (2011).

  10. Here we assume that the agents revise their beliefs simultaneously. An alternative perspective is to let each agent revise her belief in a certain sequential order, corresponding to the scenario in which each agent acts upon others’ beliefs at a different time. The sketch below illustrates the contrast.
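
    As an illustration, here is a minimal Python sketch (all names are ours, purely illustrative) contrasting the two perspectives. With simultaneous revision every agent reads the old belief state, so no ordering is needed; with sequential revision the outcome can depend on the order chosen.

        # `rule(a, state, friends)` stands for any revision policy,
        # e.g. a threshold influence rule; `state` maps agents to beliefs.

        def revise_simultaneously(state, friends, rule):
            # every agent reads the old state, so update order is irrelevant
            return {a: rule(a, state, friends) for a in state}

        def revise_sequentially(state, friends, rule, order):
            # each agent reads a state already changed by earlier agents,
            # so different orders can yield different outcomes
            current = dict(state)
            for a in order:
                current[a] = rule(a, current, friends)
            return current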

  11. The four clauses describe the four cases in which an agent would stay in an automaton state, and so not change the status of her belief.

  12. For example, if \(d\) is initially friends with another believer, \(e\), who is not connected to \(a,\,b\) or \(c\), then \(d\) will be immune to change.

  13. This method of aggregation has been well-studied, although mainly with regard to preference rather than belief. Our ordering of friends is what is known as a ‘priority graph’ in Andréka et al. (2002), and the method itself is known as ‘lexicographic aggregation’. To see the connection with dictionaries, think of a pair of words (of equal length) and the order in which they are listed. Word \(X\) comes before word \(Y\) just in case for every letter in \(Y\) that comes before the corresponding letter in word \(X\) (in alphabetic order), there is an earlier letter in \(Y\) that comes after the corresponding letter in word \(X\). If the words are not of equal length, this definition can still be made to work by padding the shorter word with extra ‘space’ characters, which are considered to come before all the letters of the alphabet.
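
    As an illustration, the following Python sketch (ours, purely illustrative) implements this comparison, padding the shorter word with spaces, which sort before every letter:

        def comes_before(x, y):
            # X comes (weakly) before Y iff wherever Y's letter precedes X's,
            # some earlier position has X's letter preceding Y's.
            n = max(len(x), len(y))
            x, y = x.ljust(n), y.ljust(n)   # pad with trailing spaces
            for i in range(n):
                if y[i] < x[i] and not any(x[j] < y[j] for j in range(i)):
                    return False
            return True

        assert comes_before("cat", "cattle")    # padding handles unequal lengths
        assert comes_before("abd", "ace")       # decided at the first difference
        assert not comes_before("ace", "abd")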

  14. The normal modal operator defined over the ‘better friend’ relation, \(Q\varphi \psi \), means ‘for all my friends who \(\varphi \), every better friend \(\psi \)’. But there is no way of defining \(P\) in terms of \(Q\).

  15. To see that this additional assumption is non-trivial, suppose I have one best friend, \(a\), and three other friends, \(b,\,c\) and \(d\), with \(b\) a better friend than \(d\). If \(b\) and \(c\) are incomparable, then neither is a better friend of mine than the other. But then \(a\) has rank 1, \(b\) and \(c\) have rank 2, and \(d\) has rank 3. This implies that \(c\) is a better friend of mine than \(d\), which is an inference we could not make without the ranking assumption.

  16. Since friendship is assumed to be irreflexive, there must be at least two agents in order for there to be any friends at all.

  17. The two axioms look superficially very different, but the first has an equivalent form that displays the difference more clearly:

    $$\begin{aligned} S\varphi \quad \leftrightarrow \quad \bigwedge _{i\le N}F_i(\lnot B\varphi \rightarrow \bigvee _{j<i}\langle F_j\rangle {B\varphi })\wedge \langle F\rangle B\varphi \end{aligned}$$

    This expresses the apparently weaker condition that for every friend who does not believe \(\varphi \), I have a better friend who does. But for this to be false, I must have a friend who doesn’t believe \(\varphi \) and no better friend who does. But then either that friend is a best friend, or I have a best friend (and so a better friend) who does not believe \(\varphi \), preventing strong influence.

  18. The form of the axiom for strong influence adapts the alternative axiom of the ranked version given in Footnote 17.

  19. For more on entrenched belief change, see Nayak et al. (1996) and Rott (2003).

  20. This proposal raises certain problems, especially concerning the transitivity of plausibility judgements. We will address these below.

  21. More precisely, plausibility influence is the operation that transforms the plausibility judgements of all agents in such a way that agent \(a\) deems \(v\) to be at least as plausible as \(u\) iff the pair \(\langle u,v \rangle \) is in the set

    $$\begin{aligned} \Big (\le _a \cup \bigcap _{a\asymp b} \le _b\Big )\setminus \bigcap _{a\asymp b}\not \le _b \end{aligned}$$

    where \(x\asymp y\) means that \(x\) is friends with \(y\). Note that the order in which the operations of adding and subtracting from the relation are performed is not important because, with at least one friend, it can never be that all my friends both do and do not regard \(v\) as at least as plausible as \(u\).
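
    Read this way, the operation is a pair of set operations, as in the following illustrative Python sketch (names are ours, not the paper’s):

        from functools import reduce

        def plausibility_influence(leq, friends, a, all_pairs):
            # leq[x]: set of pairs (u, v) such that x deems v at least as plausible as u
            accepted = reduce(set.intersection, (leq[b] for b in friends[a]))
            rejected = reduce(set.intersection, (all_pairs - leq[b] for b in friends[a]))
            # accepted and rejected are disjoint given at least one friend,
            # so the order of union and difference does not matter
            return (leq[a] | accepted) - rejected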

  22. As before, we will avoid cluttering our diagrams by assuming that if none of the conditions for a transition applies then the automaton stays in its current state.

  23. Some models have an additional immune state, but agents that are immune are no longer relevant to the spread, as they cannot infect anybody else, nor can they contract the disease again, and so they are simply taken out of the equation.

  24. For his results, Morris considers only infinite populations. This is unacceptable from our point of view, but necessary for his mathematical analysis. We ignore this difference for the sake of comparison.

References

  • Alchourrón, C., Gärdenfors, P., & Makinson, D. (1985). On the logic of theory change: Partial meet contraction and revision functions. Journal of Symbolic Logic, 50, 510–530.

  • Andréka, H., Ryan, M., & Schobbens, P. Y. (2002). Operators and laws for combining preferential relations. Journal of Logic and Computation, 12, 12–53.

  • Bailey, N. (1975). The mathematical theory of infectious diseases and its applications. High Wycombe: Charles Griffin & Company.

  • Bala, V., & Goyal, S. (1996). Learning from neighbors. Econometric Institute Reports. Erasmus University. http://books.google.co.nz/books?id=3XrESgAACAAJ

  • Baltag, A., Moss, L., & Solecki, S. (1998). The logic of public announcements, common knowledge and private suspicions. In Proceedings of TARK 1998.

  • Baltag, A., & Smets, S. (2008). A qualitative theory of dynamic interactive belief revision. In G. Bonanno, W. van der Hoek, & M. Wooldridge (Eds.), Logic and the foundations of game and decision theory, Texts in Logic and Games (Vol. 3). Amsterdam: Amsterdam University Press.

  • Bass, F. (1969). A new product growth for model consumer durables. Management Science, 15(5), 215–227.

  • Blackburn, P., de Rijke, M., & Venema, Y. (2001). Modal logic. Cambridge: Cambridge University Press.

  • Burgess, J. (1984). Basic tense logic. In Handbook of philosophical logic (2nd ed.). Dordrecht: D. Reidel.

  • DeGroot, M. (1974). Reaching a consensus. Journal of the American Statistical Association, 69(345), 118–121.

  • Domingos, P., & Richardson, M. (2002). Mining the network value of customers. In Proceedings of the Seventh International Conference on Knowledge Discovery and Data Mining (pp. 57–66). ACM Press.

  • French, J. (1956). A formal theory of social power. Psychological Review, 63(3), 181–194.

  • Gilbert, N., & Troitzsch, K. (2005). Simulation for the social scientist (2nd ed.). Open University Press.

  • Girard, P., Seligman, J., & Liu, F. (2012). General dynamic dynamic logic. In T. Bolander, T. Braüner, S. Ghilardi, & L. S. Moss (Eds.), Advances in modal logic (pp. 239–260). London: College Publications.

  • Golub, B., & Jackson, M. (2010). Naive learning in social networks and the wisdom of crowds. American Economic Journal: Microeconomics, 2(1), 112–149.

  • Jackson, M. O. (2008). Social and economic networks. Princeton: Princeton University Press.

  • Kempe, D., Kleinberg, J., & Tardos, E. (2003). Maximizing the spread of influence through a social network. In Proceedings of the 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 137–146). ACM Press.

  • Kermack, W. O., & McKendrick, A. G. (1927). A contribution to the mathematical theory of epidemics. Proceedings of the Royal Society of London, A, 115, 700–721.

  • Liang, Z., & Seligman, J. (2011). A logical model of the dynamics of peer pressure. Electronic Notes in Theoretical Computer Science, 278, 275–288.

  • Morris, S. (2000). Contagion. The Review of Economic Studies, 67(1), 57–78.

  • Mossel, E., Neeman, J., & Tamuz, O. (2012). Majority dynamics and aggregation of information in social networks. arXiv preprint arXiv:1207.0893.

  • Nayak, A., Nelson, P., & Polansky, H. (1996). Belief change as change in epistemic entrenchment. Synthese, 109(2), 143–174.

  • Rott, H. (2003). Basic entrenchment. Studia Logica, 73, 257–280.

  • Seligman, J., Liu, F., & Girard, P. (2011). Logic in the community. In M. Banerjee & A. Seth (Eds.), Logic and its applications: 4th Indian Conference, ICLA 2011, Delhi, India, January 5–11, 2011, Proceedings, Lecture Notes in Computer Science (Vol. 6521, pp. 178–188).

  • Seligman, J., Liu, F., & Girard, P. (2013). Facebook and the epistemic logic of friendship. In B. C. Schipper (Ed.), Proceedings of the 14th Conference on Theoretical Aspects of Rationality and Knowledge (pp. 229–238). Chennai, India.

  • van Benthem, J. (2007). Dynamic logic for belief revision. Journal of Applied Non-Classical Logic, 17, 129–156.

  • van Benthem, J. (2011). Logical dynamics of information and interaction. Cambridge: Cambridge University Press.

  • van Benthem, J., van Eijck, J., & Kooi, B. (2006). Logics of communication and change. Information and Computation, 204(11), 1620–1662.

  • van Ditmarsch, H. (2005). Prolegomena to dynamic logic for belief revision. Synthese (Knowledge, Rationality & Action), 147, 229–275.

  • Veltman, F. (1985). Logics for conditionals. Ph.D. thesis, University of Amsterdam.


Acknowledgments

We would like to thank Frank Zenker and Carlo Proietti for their efforts in putting together this volume. A previous version of the paper was presented at the LOGICIC kick-off workshop: Belief Change in Social Context in Amsterdam, at the Centre for Mathematical Social Science seminar series in Auckland, New Zealand, and at the Workshop on Knowledge Representation and Reasoning in Guiyang, China. We would like to thank the participants of each event, and in particular Christian List, Zoé Christoff, Chenwei Shi, Shaun White and Mark C. Wilson for valuable comments and discussions. Finally, we would like to thank the anonymous referees for their useful comments. Fenrong Liu is supported by the Project of National Social Science Foundation of China (No. 13AZX018) and Tsinghua University Initiative Scientific Research Program (20131089292).

Author information

Correspondence to Fenrong Liu.

Appendix

In this appendix, we compare our model of belief influence with established research on social networks. We base our discussion on chapters 7–9 of Jackson (2008), which present the studies we think are most relevant to ours. We divide the discussion into three parts, looking at models of diffusion (or contagion), learning, and games on networks. One obvious aspect of our approach, in contrast to all those presented in Jackson (2008), is our reliance on a doxastic logical language of social interactions. Crucially, our aim is a logical one: to study logical patterns of reasoning about belief influence, not to provide descriptively adequate models of social phenomena. In this respect, our use of a formal language is a significant departure from the more established results in the literature. Nevertheless, there are clear similarities between our models and those studied in Jackson (2008), and this deserves comment.

Diffusion

In our models, we are interested in the effect of unilateral belief changes and how they can propagate through a community. One typical way of analysing the diffusion of information in a network is to model it as the spread of a disease in a community. These models have networks of agents that can be in one of two states, healthy or infected, and agents can contract the disease when they are exposed, with a non-negative probability of infection (footnote 23).

A natural question for us is whether belief influence can display the viral patterns commonly observed in diffusion through networks. The analogy between a disease and a belief would have to be framed so that a belief is like a virus. But to model that we would have to bias influence in favour of the viral belief, whereas our approach is more symmetric between believing something and believing its negation. Furthermore, our third belief state of being undecided is left out of the analogy. One option would be to link it to immunity, but this would not be right, as strong influence may still change an undecided agent into a believer. Setting these concerns aside, let us look a bit closer at models of viral diffusion.

An early influential model of diffusion is the Bass model (Bass 1969), which has no explicit social network component. The models of diffusion most relevant to belief influence are those known as SIR (“susceptible, infected, removed”, cf. Kermack and McKendrick 1927) and SIS (“susceptible, infected, susceptible”, cf. Bailey 1975). In each model, agents can be in one of two states: “susceptible”, in which they may become infected, and “infected”, once they contract the disease. In SIR models, agents that have been infected are eventually removed from the network when they are cured of the disease, at which point they are no longer contagious, nor can they contract the disease again. This accounts for the “removed” part of SIR, modelling diseases such as chickenpox. But it does not match belief influence: consider, for instance, a community in flux, in which agents keep changing their beliefs indefinitely. In SIS models, by contrast, agents become susceptible again after having been infected, modelling diseases such as the flu. In SIS models, edges of the network represent the possibility of physical contact between agents, and so the chance of an agent being infected increases with the number of its network neighbours (its “degree”).
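
For concreteness, a minimal SIS-style step might look as follows in Python. This is our own illustrative sketch, not a model from the literature, with `beta` an infection probability per infected neighbour and `delta` a recovery probability:

    import random

    def sis_step(infected, neighbours, beta=0.1, delta=0.05):
        new_infected = set()
        for agent, friends in neighbours.items():
            if agent in infected:
                if random.random() >= delta:           # fails to recover
                    new_infected.add(agent)
            else:
                sick = sum(1 for f in friends if f in infected)
                if random.random() < 1 - (1 - beta) ** sick:
                    new_infected.add(agent)            # risk grows with degree
        return new_infected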

One drawback of adapting SIS to model belief propagation is that, in any finite system, the infection will eventually stop entirely (see Jackson 2008, p. 199 and Exercise 7.5). Thus to keep flux going in SIS models, an infinite number of agents is required, which is an unacceptable assumption for us. Furthermore, as we already noted above, our models have three belief states that react to influence differently, with the undecided state changeable only in cases of strong influence. But none of the states is favoured over the others. Contagion is a better metaphor for the propagation of information than for belief. Once an agent receives a piece of information, it cannot be taken from her (except perhaps by her forgetting, which is an additional complexity). This marks an important distinction between the diffusion of information in a network and the propagation of a belief state in a network via influence. In summary, diffusion concerns the spread of a disease, a piece of information, or even a behaviour throughout a network; it has some superficial similarities with belief influence, but very different dynamic properties.

Learning

Another common theme of social network research is the influence of network structure on learning and consensus of beliefs. The kind of question asked here is whether, and under which conditions, a community will come to share a belief, or who has the most influence in such propagation. A recent influential model is the Bayesian model of observational learning by Bala and Goyal (1996), in which agents learn from their own experience and that of their neighbours in decision making. The main result from Bala and Goyal is that “in a connected society, local learning ensures that all agents obtain the same payoffs in the long run. Thus, if actions have different payoffs, then all agents choose the same action, and social conformism obtains.” The intuition behind this result is that when agents have access to the outcomes of their neighbours’ actions, they will adopt the action that produced the best outcomes, until every agent converges on the same action. A question that naturally arises is whether the agents will converge on the optimal action. In doxastic terms, if the actions are choices amongst alternative beliefs, then the prediction is that a connected society would eventually come to a consensus on what to believe. With learning, one could ask whether the society has come to learn the “right belief”. This approach is hardly consistent with belief influence, even though patterns of influence from neighbours are an important component of both kinds of model. One obvious difference is that influence models can be stable without unanimity.

An earlier model of imitation and social influence that sits more naturally with belief influence is the DeGroot model (DeGroot 1974). In this model, the initial configuration is a group of agents who each have their own opinion on a topic, expressed as a vector of probabilities \(p(0) = (p_1(0),\ldots , p_n(0))\). On top of that, each agent weights the agents (herself included), with the weights summing to 1. For instance, with three agents \(\{1,2,3\}\), agent 1 may weigh the agents as the vector \((2/3, 0, 1/3)\), meaning that she has no confidence in agent 2 and is twice as confident in herself as she is in agent 3. The dynamics of the system is then captured by a directed, weighted, non-negative \(n \times n\) matrix \(T\), in which entry \(T_{ij}\) is the weight attributed to agent \(j\) by agent \(i\). The matrix operates on the vectors of probabilities: \(p(t) = Tp(t-1) = T^tp(0)\), so \(p_i(t)\) represents the degree of belief of agent \(i\) at time \(t\). In DeGroot models, unlike those of Bala and Goyal, it is not the case that all initial conditions lead to consensus. As in our approach, it is possible to characterise whether or not the distribution of opinions across the network will stabilise (cf. Golub and Jackson 2010).
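
The DeGroot update is just repeated matrix multiplication, as the following Python sketch shows. The first row of \(T\) encodes the weighting example above; the other rows are our own illustrative choices.

    import numpy as np

    T = np.array([[2/3, 0.0, 1/3],    # agent 1's weights from the example
                  [1/3, 1/3, 1/3],    # illustrative weights for agent 2
                  [0.0, 1/2, 1/2]])   # illustrative weights for agent 3

    p = np.array([1.0, 0.0, 0.5])     # initial degrees of belief p(0)

    for _ in range(50):               # p(t) = T p(t-1) = T^t p(0)
        p = T @ p

    print(p)  # for this (connected, aperiodic) T the opinions approach consensus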

An adaptation of this model that is similar to our threshold version of strong influence, called “majority dynamics”, is investigated in Mossel et al. (2012). Majority dynamics works over a set of two opinions, say 0 and 1, and at each iteration each agent adopts the opinion of the majority of her friends. One can see this as a discretisation of the DeGroot model in which agents can only choose between two alternatives instead of the whole range between 0 and 1. As a result, “the probability of choosing the correct alternative approaches one as the size of the smallest social type approaches infinity, with a polynomial dependence.”
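
A majority-dynamics step, and the iteration to a stable distribution, can be sketched as follows (our illustrative Python; ties are resolved here by keeping the current opinion, one of several possible conventions):

    def majority_step(opinion, friends):
        new = {}
        for a, nbrs in friends.items():
            ones = sum(opinion[b] for b in nbrs)
            if 2 * ones > len(nbrs):
                new[a] = 1
            elif 2 * ones < len(nbrs):
                new[a] = 0
            else:
                new[a] = opinion[a]            # tie: no change
        return new

    def run_to_stability(opinion, friends, max_steps=100):
        for _ in range(max_steps):
            new = majority_step(opinion, friends)
            if new == opinion:                 # stable distribution reached
                return new
            opinion = new
        return opinion                         # may instead cycle (period 2)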

Although some similarities may be found with what we call “strong belief influence”, there are important dissimilarities. One general issue is that all these models work by transforming agents’ degrees of belief according to weightings, whereas we parametrise our account with any version of propositional belief revision (and contraction). It is also not obvious how one would differentiate between weak and strong influence, nor how one would interpret \(Bp,\,B\lnot p\) and \(Up\) in those models. A further indication that our update rules operate differently is that our notion of convergence, namely stability, does not always yield consensus. In the other direction, however, our approach raises interesting questions for learning models. The distinction between weak and strong influence implements the idea of degrees of influence using conventional distinctions in the theory of belief revision. How would one integrate this distinction, formalise it at the level of beliefs, and introduce dynamic rules in learning models? Would networks still converge under the same conditions as before, or would we need different necessary and sufficient conditions for convergence?

Games on networks

Another topic that has been studied extensively is the influence networks may have on individual behaviour, or actions, for instance in buying or selling products, or engaging in some communal activities. One way of formalising and analysing this is by interpreting them as games, in the sense of Game Theory, and looking for equilibria. Since the actions we take often depend on what others do, game analysis of network interaction can reveal intricate patterns of strategies on networks. And in contrast to what we’ve seen in learning, it may sometimes be more profitable for agents to adopt opposite actions to those of their neighbours, and here games are again useful in making these distinctions. A general setting for games on networks is a probabilistic analysis of how agents react to their neighbours’ actions. Game theoretic notions such as best-response then naturally give rise to studies of equilibria.

Could belief influence be analysed as a strategic interpretation of doxastic actions? For instance, we may say that if all my friends believe \(p\), then I am better off adopting a belief in \(p\). Some pragmatic considerations might motivate such an interpretation, for instance in argumentative scenarios, in which idiosyncratic beliefs are deemed implausible and thus unusable. More concretely, one could give the following game interpretation to our models: assign a utility of 1 if all my friends agree with me (they and I believe \(p\) or believe \(\lnot p\)), 0 if some of my friends agree with me and some disagree, and \(-1\) if all of them disagree with me. One could then look at equilibria in this game and see whether they coincide with our stable configurations. We leave this for future research, but sketch the idea below.
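
Here is an illustrative Python sketch (ours) of that utility assignment and a naive equilibrium check, with beliefs encoded as True (believe \(p\)) and False (believe \(\lnot p\)):

    def utility(a, belief, friends):
        agree = [belief[b] == belief[a] for b in friends[a]]
        if all(agree):
            return 1          # all friends agree with me
        if not any(agree):
            return -1         # all friends disagree with me
        return 0              # mixed

    def is_equilibrium(belief, friends):
        # no agent can raise her utility by unilaterally switching belief
        for a in belief:
            flipped = dict(belief)
            flipped[a] = not flipped[a]
            if utility(a, flipped, friends) > utility(a, belief, friends):
                return False
        return True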

It is no surprise, then, that we can find game models on networks that come very close to our idea of influence. We focus on one such model, that of Morris (2000). Morris is interested in the contagion of actions in populations, with emphasis on local interaction games, that is, games in which agents react to the actions of a selected number of agents from the population (footnote 24). Each agent has a choice between two actions, 0 and 1, and there is a threshold such that, at each tick of a clock, an agent will adopt an action if the proportion of her neighbours taking it is greater than the threshold. This is the same definition as our strong influence with threshold. Say that a subset \(S\) of a network is \(r\)-cohesive if each agent in \(S\) has at least a proportion \(r\) of her neighbours in \(S\). The main theorem Morris proved is that both actions 0 and 1 are played by different subsets of the network in equilibrium iff there is some non-empty, strict subset of players \(S\) that is \(q\)-cohesive and whose complement is \((1-q)\)-cohesive. These conditions would, for us, also yield stable networks, ignoring weak influence. Now, for the same reasons as above, our approach raises interesting issues for Morris’s model. What would happen to the characterisation of stability if one could distinguish between weak and strong influence? We leave this question open for future research.
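
On our reading of these definitions, cohesiveness and Morris’s coexistence condition can be checked by brute force on small networks; the following Python sketch is illustrative only:

    from itertools import combinations

    def is_cohesive(S, neighbours, r):
        # every member of S has at least a proportion r of her neighbours in S
        return all(sum(1 for b in neighbours[a] if b in S) >= r * len(neighbours[a])
                   for a in S)

    def coexistence_possible(agents, neighbours, q):
        # some proper non-empty subset is q-cohesive while its
        # complement is (1 - q)-cohesive
        agents = list(agents)
        for k in range(1, len(agents)):
            for S in map(set, combinations(agents, k)):
                if is_cohesive(S, neighbours, q) and \
                   is_cohesive(set(agents) - S, neighbours, 1 - q):
                    return True
        return False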

In conclusion, our belief influence model is novel in applying the distinction introduced in Liang and Seligman (2011) between weak and strong doxastic influence to the case of doxastic influence in social networks. It would be an interesting future project to see how this distinction would impact on famous results like the ones we’ve considered in this brief discussion.


Cite this article

Liu, F., Seligman, J. & Girard, P. Logical dynamics of belief change in the community. Synthese 191, 2403–2431 (2014). https://doi.org/10.1007/s11229-014-0432-3
