Latent Relational Point Process: Network Reconstruction from Discrete Event Data

  • Conference paper
Database and Expert Systems Applications (DEXA 2022)

Abstract

Digital interactions, such as Wi-Fi access, financial transactions, and social media activities, leave crumbs in their path. The vast quantities of fine-granularity data generated by complex real-world relational systems offer an unprecedented alternative to expensive and laborious surveys and studies for researchers who want to learn and understand the underlying network models. Point processes are well suited to modelling the discrete event data commonly observed from the digital interactions of today's complex systems. In this work, we present a latent relational point process framework for recovering the posterior probability of latent relations from discrete event data effectively and efficiently. Our proposed framework comprises a general definition of the latent relational point process, an algorithm for fitting the parameters of an evolutionary version of the model to the data, and goodness-of-fit tests to quantify the suitability of the model to the data. The proposed framework is evaluated on the task of modelling a social network from observations of social interactions.

Notes

  1. A Poisson process is always simple if its intensity measure \(\lambda \) is diffuse, i.e. \(\forall t \in \mathbb {X}, \lambda \{t\} = 0\); see Proposition 6.9 in [8]. When \(|\mathcal {K}_{\mathbb {V}}|\) is finite, the ground intensity is also diffuse, since \(\lambda _g \{t\} = |E| \, \lambda _1 \{t\} + (|\mathcal {K}_{\mathbb {V}}| - |E|) \, \lambda _2 \{t\} = |E| \cdot 0 + (|\mathcal {K}_{\mathbb {V}}| - |E|) \cdot 0 = 0\).

References

  1. Brugere, I., Gallagher, B., Berger-Wolf, T.Y.: Network structure inference, a survey: motivations, methods, and applications. ACM Comput. Surv. 51(2), 1–39 (2018). https://doi.org/10.1145/3154524

  2. Daley, D.J., Vere-Jones, D.: An Introduction to the Theory of Point Processes: Volume I: Elementary Theory and Methods. Probability and Its Applications, 2nd edn. Springer, New York (2003). https://doi.org/10.1007/b97277

  3. Daley, D.J., Vere-Jones, D.: An Introduction to the Theory of Point Processes: Volume II: General Theory and Structure, 2nd edn. Springer, New York (2007)

  4. Eagle, N., Pentland, A.: Reality mining: sensing complex social systems. Pers. Ubiquitous Comput. 10(4), 255–268 (2006). https://doi.org/10.1007/s00779-005-0046-3

  5. Farajtabar, M., Wang, Y., Gomez-Rodriguez, M., Li, S., Zha, H., Song, L.: COEVOLVE: a joint point process model for information diffusion and network evolution. J. Mach. Learn. Res. 18(1) (2017)

  6. Kolaczyk, E.D.: Statistical Analysis of Network Data: Methods and Models. Springer, New York (2009). https://doi.org/10.1007/978-0-387-88146-1

  7. Krings, G., Karsai, M., Bernhardsson, S., Blondel, V.D., Saramäki, J.: Effects of time window size and placement on the structure of an aggregated communication network. EPJ Data Sci. 1(1), 1–16 (2012). https://doi.org/10.1140/epjds4

  8. Last, G., Penrose, M.: Lectures on the Poisson Process, 1st edn. Cambridge University Press, Cambridge (2017)

  9. Lee, S.Y.: Gibbs sampler and coordinate ascent variational inference: a set-theoretical review. Commun. Stat. Theory Methods 51(6), 1549–1568 (2021). https://doi.org/10.1080/03610926.2021.1921214

  10. Linderman, S.W., Wang, Y., Blei, D.M.: Bayesian inference for latent Hawkes processes. In: Conference on Neural Information Processing Systems (NIPS 2017) (2017)

  11. Lorch, L., et al.: Quantifying the effects of contact tracing, testing, and containment measures in the presence of infection hotspots. arXiv:2004.07641 (2020). http://arxiv.org/abs/2004.07641

  12. Masuda, N., Takaguchi, T., Sato, N., Yano, K.: Self-exciting point process modeling of conversation event sequences. In: Holme, P., Saramäki, J. (eds.) Temporal Networks. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-36461-7_12

  13. Mei, H., Eisner, J.M.: The neural Hawkes process: a neurally self-modulating multivariate point process. In: Conference on Neural Information Processing Systems (NIPS 2017) (2017)

  14. Newman, M.E.J.: Estimating network structure from unreliable measurements. Phys. Rev. E 98(6), 062321 (2018). https://doi.org/10.1103/PhysRevE.98.062321

  15. Newman, M.E.J.: Network structure from rich but noisy data. Nat. Phys. 14(6), 542–545 (2018). https://doi.org/10.1038/s41567-018-0076-1

  16. Pernice, V., Staude, B., Cardanobile, S., Rotter, S.: How structure determines correlations in neuronal networks. PLOS Comput. Biol. 7(5), e1002059 (2011). https://doi.org/10.1371/journal.pcbi.1002059

  17. Ribeiro, B., Perra, N., Baronchelli, A.: Quantifying the effect of temporal resolution on time-varying networks. Sci. Rep. 3(1), 1–5 (2013). https://doi.org/10.1038/srep03006

  18. Robins, G., Pattison, P., Kalish, Y., Lusher, D.: An introduction to exponential random graph (p*) models for social networks. Soc. Netw. 29(2), 173–191 (2007). https://doi.org/10.1016/j.socnet.2006.08.002

  19. Young, J.G., Cantwell, G.T., Newman, M.E.J.: Bayesian inference of network structure from unreliable data. J. Complex Netw. 8(6), cnaa046 (2021). https://doi.org/10.1093/comnet/cnaa046

  20. Young, J.G., Valdovinos, F.S., Newman, M.E.J.: Reconstruction of plant–pollinator networks from observational data. Nat. Commun. 12(1), 1–12 (2021). https://doi.org/10.1038/s41467-021-24149-x

Acknowledgement

This research is partially supported by the National Research Foundation, Prime Minister’s Office, Singapore, under its Campus for Research Excellence and Technological Enterprise (CREATE) programme as part of the programme DesCartes and by the Ministry of Education, Singapore, under its Academic Research Fund Tier 2 grant call (Award ref: MOE-T2EP50120-0019). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of the National Research Foundation or of the Ministry of Education, Singapore.

Author information

Correspondence to Guilherme Augusto Zagatti.

A Appendix

1.1 A.1 The Lower Bound of the Posterior

The objective of this section is to derive the lower bound in Eq. 5, using the duality formula from [9]:

$$\begin{aligned} \begin{aligned}&\log \Pr (\theta \mid H_{T^-}) \\&= \log \left[ \int _{\mathbb {E}} \mathcal {L}(H_{T^-} \mid E, \theta ) \, d\mu _{\mathcal {E}} \, (E \mid \theta ) \right] + \log \Pr (\theta ) - \log \Pr (H_{T^-}) \\&\ge \int _{\mathbb {E}} \ell (H_{T^-} \mid E, \theta ) \, d\hat{\mu }_{\mathcal {E}} \, (E) - {\text {KL}}(\hat{\mu }_{\mathcal {E}} \Vert \mu _{\mathcal {E}}) + \log \Pr (\theta ) - \log \Pr (H_{T^-}) \\&= \int _{\mathbb {E}} \ell (H_{T^-} \mid E, \theta ) \hat{f}_{\mathcal {E}} \, (E) \, d\iota (\epsilon ) - \int _{\mathbb {E}} \log \left( \frac{\hat{f}_{\mathcal {E}} \, (E)}{f_{\mathcal {E}} \, (E \mid \theta )} \right) \hat{f}_{\mathcal {E}} \, (E) \, d\iota (\epsilon ) \\&\quad + \log \Pr (\theta ) - \log \Pr (H_{T^-}) \\ \end{aligned} \end{aligned}$$
(16)
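The inequality in the third line is the only non-trivial step. It is an instance of the Gibbs (Donsker–Varadhan) variational formula for the logarithm of an exponential integral, restated here in the notation of this section (a paraphrase of the duality formula of [9], not a quotation):

$$\begin{aligned} \log \int _{\mathbb {E}} e^{\ell (H_{T^-} \mid E, \theta )} \, d\mu _{\mathcal {E}} \, (E \mid \theta ) = \sup _{\hat{\mu }_{\mathcal {E}} \ll \mu _{\mathcal {E}}} \left\{ \int _{\mathbb {E}} \ell (H_{T^-} \mid E, \theta ) \, d\hat{\mu }_{\mathcal {E}} \, (E) - {\text {KL}}(\hat{\mu }_{\mathcal {E}} \Vert \mu _{\mathcal {E}}) \right\} \end{aligned}$$

Since \(\mathcal {L} = e^{\ell }\), fixing any admissible surrogate measure \(\hat{\mu }_{\mathcal {E}}\) instead of taking the supremum yields the inequality; Appendix A.2 identifies the surrogate that attains it.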

1.2 A.2 The Surrogate is the Posterior Distribution

The objective of this section is to show that the surrogate distribution \(\hat{f}_{\mathcal {E}} \, (E)\) in Eq. 7 is the posterior \(\Pr (E \mid H_{T^-}, \theta )\). Applying Bayes' theorem to Eq. 7, we find that it equals:

$$\begin{aligned} \frac{\frac{\Pr (E, \theta \mid H_{T^-}) \Pr (H_{T^-})}{f_{\mathcal {E}} \, (E \mid \theta ) \Pr (\theta )} \; f_{\mathcal {E}} \, (E \mid \theta )}{\int _{\mathbb {E}} \frac{\Pr (E, \theta \mid H_{T^-}) \Pr (H_{T^-})}{f_{\mathcal {E}} \, (E \mid \theta ) \Pr (\theta )} f_{\mathcal {E}} \, (E \mid \theta ) \, d\iota (\epsilon )} = \frac{\Pr (E, \theta \mid H_{T^-})}{\Pr (\theta \mid H_{T^-})} = \Pr (E \mid H_{T^-}, \theta ) \end{aligned}$$
(17)

Hence the lower bound in Eq. 16 is tight precisely when the surrogate equals this posterior.

1.3 A.3 Simple-Pair Model: Gilbert Graph

We obtain the conditional log-likelihood of the observed data by plugging Eqs. 11 and 12 into Eq. 2 and taking the logarithm:

$$\begin{aligned} \begin{aligned} \ell (H_{T^-} \mid A, \theta )&= \sum _{n=1}^N \left[ \log \lambda _g + \log (a_n \lambda _1 + (1 - a_n) \lambda _2) - \log \lambda _g \right] - \int _0^T \lambda _g \, du \\&= \sum _{i < j} \left[ a_{i,j} ( N_{i,j} \log \lambda _1 - T \lambda _1) + (1 - a_{i, j}) ( N_{i, j} \log \lambda _2 - T \lambda _2) \right] \end{aligned} \end{aligned}$$
(18)

Above, we are able to split the \(\log \) because either \(a_n = 1\) or \(a_n = 0\). Note also that the \(N\) events can be regrouped by pair, i.e. \(N = \sum _{i < j} N_{i,j}\), and that \(|E| = \sum _{i < j} a_{i,j}\) and \(|\mathcal {K}_{\mathbb {V}}| - |E| = \sum _{i < j} (1 - a_{i,j})\).

By plugging Eqs. 10 and 18 into Eq. 6 and assuming uniform priors such that \(\Pr (\theta )\) is a constant, we find expressions for the maximisation step:

$$\begin{aligned} {\left\{ \begin{array}{ll} \begin{aligned} &{}\sum _A \left[ \sum _{i< j} a_{i,j} \left( \frac{N_{i,j}}{\hat{\lambda }_1} - T \right) \right] \hat{f}_{\mathcal {E}} \, (A) + 0 = 0 \\ &{}\sum _A \left[ \sum _{i< j} (1 - a_{i,j}) \left( \frac{N_{i,j}}{\hat{\lambda }_2} - T \right) \right] \hat{f}_{\mathcal {E}} \, (A) + 0 = 0 \\ &{}\sum _A \left[ \frac{\sum _{i< j} a_{i,j}}{p} - \frac{\sum _{i < j} (1-a_{i,j})}{1-p} \right] \hat{f}_{\mathcal {E}} \, (A) + 0 = 0 \end{aligned} \end{array}\right. } \end{aligned}$$
(19)

Let \(\hat{f}_{\mathcal {E}} \, (a_{i,j}) = \sum _A a_{i,j} \hat{f}_{\mathcal {E}} \, (A) = \Pr (a_{i,j} = 1 \mid H_{T^-}, \theta )\), then we can simplify the first equality in Eq. 19 to find an expression for updating \(\lambda _1\):

$$\begin{aligned} \sum _{i< j} \left( \frac{N_{i,j}}{\hat{\lambda }_1} - T \right) \sum _A a_{i,j} \hat{f}_{\mathcal {E}} \, (A) = 0 \iff \hat{\lambda }_1 = \frac{\sum _{i< j} \hat{f}_{\mathcal {E}} \, (a_{i,j}) N_{i,j}}{T \sum _{i < j} \hat{f}_{\mathcal {E}} \, (a_{i,j})} \end{aligned}$$
(20)

Likewise, we find the remaining estimates of Eq. 13; note that \(\left( {\begin{array}{c}|\mathbb {V}|\\ 2\end{array}}\right) = \sum _{i < j} 1\).
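For completeness, applying the same manipulation as in Eq. 20 to the second and third equalities of Eq. 19 gives the remaining updates; these should coincide with Eq. 13 of the main text, which this excerpt does not reproduce:

$$\begin{aligned} \hat{\lambda }_2 = \frac{\sum _{i < j} \left( 1 - \hat{f}_{\mathcal {E}} \, (a_{i,j}) \right) N_{i,j}}{T \sum _{i < j} \left( 1 - \hat{f}_{\mathcal {E}} \, (a_{i,j}) \right) }, \qquad \hat{p} = \frac{\sum _{i < j} \hat{f}_{\mathcal {E}} \, (a_{i,j})}{\left( {\begin{array}{c}|\mathbb {V}|\\ 2\end{array}}\right) } \end{aligned}$$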

Finally, we obtain Eq. 14 by plugging the parameter estimates of Eq. 13, the network prior from Eq. 10 and the likelihood from Eq. 18 into Eq. 7:

$$\begin{aligned} \begin{aligned} \hat{f}_{\mathcal {E}} \, (A)&= \frac{\prod _{i< j} \left( \hat{\lambda }_1^{N_{i,j}} \exp (-T \hat{\lambda }_1) \, \hat{p} \right) ^{a_{i,j}} \left( \hat{\lambda }_2^{N_{i,j}} \exp (-T \hat{\lambda }_2) \, (1-\hat{p}) \right) ^{(1-a_{i,j})} }{\sum _A \prod _{i< j} \left( \hat{\lambda }_1^{N_{i,j}} \exp (-T \hat{\lambda }_1) \, \hat{p} \right) ^{a_{i,j}} \left( \hat{\lambda }_2^{N_{i,j}} \exp (-T \hat{\lambda }_2) \, (1-\hat{p}) \right) ^{(1-a_{i,j})}} \\&= \prod _{i< j} \frac{\left( \hat{\lambda }_1^{N_{i,j}} \exp (-T \hat{\lambda }_1) \, \hat{p} \right) ^{a_{i,j}} \left( \hat{\lambda }_2^{N_{i,j}} \exp (-T \hat{\lambda }_2) \, (1-\hat{p}) \right) ^{(1-a_{i,j})} }{ \left( \hat{\lambda }_1^{N_{i,j}} \exp (-T \hat{\lambda }_1) \, \hat{p} \right) + \left( \hat{\lambda }_2^{N_{i,j}} \exp (-T \hat{\lambda }_2) \, (1-\hat{p}) \right) } \\&= \prod _{i < j} \left[ \hat{f}_{\mathcal {E}} \, (a_{i,j}) \right] ^{a_{i, j}} \left[ 1 - \hat{f}_{\mathcal {E}} \, (a_{i,j}) \right] ^{(1 - a_{i,j})} \end{aligned} \end{aligned}$$
(21)
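Equations 20 and 21, together with the updates above, define an EM-style fixed-point iteration for the simple-pair model: alternate between computing the per-pair edge posteriors \(\hat{f}_{\mathcal {E}} \, (a_{i,j})\) (Eq. 21) and re-estimating \((\lambda _1, \lambda _2, p)\) (Eq. 20 and its counterparts). The following NumPy sketch illustrates this loop on synthetic data. It is not the authors' implementation: the function name em_simple_pair, the initialisation heuristic, and the convergence test are our own assumptions; only the E- and M-steps follow the equations.

```python
import numpy as np

def em_simple_pair(N, T, n_iter=500, tol=1e-10):
    """EM-style iteration for the simple-pair (Gilbert graph) model.

    N : 1-D array with one event count N_{i,j} per node pair i < j.
    T : length of the observation window [0, T).
    Returns (q, lam1, lam2, p), where q[k] approximates the edge
    posterior f_hat(a_{i,j}) of Eq. 21 for pair k.
    """
    N = np.asarray(N, dtype=float)
    eps = 1e-12
    # Heuristic initialisation (an assumption, not from the paper):
    # start the edge rate above and the non-edge rate below the mean rate.
    lam1 = 1.5 * N.mean() / T + eps
    lam2 = 0.5 * N.mean() / T + eps
    p = 0.5
    for _ in range(n_iter):
        # E-step (Eq. 21): Bernoulli posterior per pair, computed in log space.
        log_w1 = N * np.log(lam1) - T * lam1 + np.log(p)
        log_w2 = N * np.log(lam2) - T * lam2 + np.log1p(-p)
        q = np.exp(log_w1 - np.logaddexp(log_w1, log_w2))
        # M-step (Eq. 20 and its counterparts): posterior-weighted MLEs.
        lam1_next = max(float(q @ N) / (T * float(q.sum()) + eps), eps)
        lam2_next = max(float((1 - q) @ N) / (T * float((1 - q).sum()) + eps), eps)
        p_next = float(np.clip(q.mean(), eps, 1 - eps))
        converged = max(abs(lam1_next - lam1), abs(lam2_next - lam2),
                        abs(p_next - p)) < tol
        lam1, lam2, p = lam1_next, lam2_next, p_next
        if converged:
            break
    # Final E-step so the returned posteriors match the returned parameters.
    log_w1 = N * np.log(lam1) - T * lam1 + np.log(p)
    log_w2 = N * np.log(lam2) - T * lam2 + np.log1p(-p)
    q = np.exp(log_w1 - np.logaddexp(log_w1, log_w2))
    return q, lam1, lam2, p

if __name__ == "__main__":
    # Synthetic check: latent Gilbert graph, homogeneous Poisson pair processes.
    rng = np.random.default_rng(0)
    n_pairs, T = 30 * 29 // 2, 10.0
    a_true = rng.random(n_pairs) < 0.1               # latent adjacency, p = 0.1
    N = rng.poisson(T * np.where(a_true, 2.0, 0.2))  # lam1 = 2.0, lam2 = 0.2
    q, lam1, lam2, p = em_simple_pair(N, T)
    acc = float(((q > 0.5) == a_true).mean())
    print(f"lam1={lam1:.3f} lam2={lam2:.3f} p={p:.3f} edge recovery={acc:.3f}")
```

With \(T \lambda _1 = 20\) expected events per edge versus \(T \lambda _2 = 2\) per non-edge, the per-pair posteriors separate sharply and the recovered adjacency should be nearly exact; the regime where the two rates are close is where the full framework's priors and goodness-of-fit tests matter.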

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Zagatti, G.A., Ng, SK., Bressan, S. (2022). Latent Relational Point Process: Network Reconstruction from Discrete Event Data. In: Strauss, C., Cuzzocrea, A., Kotsis, G., Tjoa, A.M., Khalil, I. (eds) Database and Expert Systems Applications. DEXA 2022. Lecture Notes in Computer Science, vol 13427. Springer, Cham. https://doi.org/10.1007/978-3-031-12426-6_3

  • DOI: https://doi.org/10.1007/978-3-031-12426-6_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-12425-9

  • Online ISBN: 978-3-031-12426-6

  • eBook Packages: Computer Science, Computer Science (R0)
