Steve Hanneke
2020 – today
- 2024
- [c66] Pramith Devulapalli, Steve Hanneke: The Dimension of Self-Directed Learning. ALT 2024: 544-573
- [c65] Steve Hanneke, Aryeh Kontorovich, Guy Kornowski: Efficient Agnostic Learning with Average Smoothness. ALT 2024: 719-731
- [c64] Idan Attias, Steve Hanneke, Alkis Kalavasis, Amin Karbasi, Grigoris Velegkas: Universal Rates for Regression: Separations between Cut-Off and Absolute Loss. COLT 2024: 359-405
- [c63] Zachary Chase, Bogdan Chornomaz, Steve Hanneke, Shay Moran, Amir Yehudayoff: Dual VC Dimension Obstructs Sample Compression by Embeddings. COLT 2024: 923-946
- [c62] Steve Hanneke: The Star Number and Eluder Dimension: Elementary Observations About the Dimensions of Disagreement. COLT 2024: 2308-2359
- [c61] Steve Hanneke, Shay Moran, Tom Waknine: List Sample Compression and Uniform Convergence. COLT 2024: 2360-2388
- [c60] Steve Hanneke, Shay Moran, Tom Waknine: Open problem: Direct Sums in Learning Theory. COLT 2024: 5325-5329
- [c59] Steve Hanneke, Kasper Green Larsen, Nikita Zhivotovskiy: Revisiting Agnostic PAC Learning. FOCS 2024: 1968-1982
- [c58] Simone Fioravanti, Steve Hanneke, Shay Moran, Hilla Schefler, Iska Tsubari: Ramsey Theorems for Trees and a General 'Private Learning Implies Online Learning' Theorem. FOCS 2024: 1983-2009
- [c57] Idan Attias, Steve Hanneke, Aryeh Kontorovich, Menachem Sadigurschi: Agnostic Sample Compression Schemes for Regression. ICML 2024
- [i60] Yuval Filmus, Steve Hanneke, Idan Mehalel, Shay Moran: Bandit-Feedback Online Multiclass Classification: Variants and Tradeoffs. CoRR abs/2402.07453 (2024)
- [i59] Pramith Devulapalli, Steve Hanneke: The Dimension of Self-Directed Learning. CoRR abs/2402.13400 (2024)
- [i58] Steve Hanneke, Shay Moran, Tom Waknine: List Sample Compression and Uniform Convergence. CoRR abs/2403.10889 (2024)
- [i57] Zachary Chase, Bogdan Chornomaz, Steve Hanneke, Shay Moran, Amir Yehudayoff: Dual VC Dimension Obstructs Sample Compression by Embeddings. CoRR abs/2405.17120 (2024)
- [i56] Simone Fioravanti, Steve Hanneke, Shay Moran, Hilla Schefler, Iska Tsubari: Ramsey Theorems for Trees and a General 'Private Learning Implies Online Learning' Theorem. CoRR abs/2407.07765 (2024)
- [i55] Steve Hanneke, Kasper Green Larsen, Nikita Zhivotovskiy: Revisiting Agnostic PAC Learning. CoRR abs/2407.19777 (2024)
- [i54] Steve Hanneke, Samory Kpotufe: A More Unified Theory of Transfer Learning. CoRR abs/2408.16189 (2024)
- [i53] Steve Hanneke, Kun Wang: A Complete Characterization of Learnability for Stochastic Noisy Bandits. CoRR abs/2410.09597 (2024)
- [i52] Idan Attias, Steve Hanneke, Arvind Ramaswami: Sample Compression Scheme Reductions. CoRR abs/2410.13012 (2024)
- [i51] Steve Hanneke, Vinod Raman, Amirreza Shaeiri, Unique Subedi: Multiclass Transductive Online Learning. CoRR abs/2411.01634 (2024)
- 2023
- [c56] Yuval Filmus, Steve Hanneke, Idan Mehalel, Shay Moran: Optimal Prediction Using Expert Advice and Randomized Littlestone Dimension. COLT 2023: 773-836
- [c55] Nataly Brukhim, Steve Hanneke, Shay Moran: Improper Multiclass Boosting. COLT 2023: 5433-5452
- [c54] Steve Hanneke, Shay Moran, Qian Zhang: Universal Rates for Multiclass Learning. COLT 2023: 5615-5681
- [c53] Steve Hanneke, Shay Moran, Vinod Raman, Unique Subedi, Ambuj Tewari: Multiclass Online Learning and Uniform Convergence. COLT 2023: 5682-5696
- [c52] Steve Hanneke, Samory Kpotufe, Yasaman Mahdaviyeh: Limits of Model Selection under Transfer Learning. COLT 2023: 5781-5812
- [c51] Steve Hanneke, Liu Yang: Bandit Learnability can be Undecidable. COLT 2023: 5813-5849
- [c50] Olivier Bousquet, Steve Hanneke, Shay Moran, Jonathan Shafer, Ilya O. Tolstikhin: Fine-Grained Distribution-Dependent Learning Curves. COLT 2023: 5890-5924
- [c49] Idan Attias, Steve Hanneke: Adversarially Robust PAC Learnability of Real-Valued Functions. ICML 2023: 1172-1199
- [c48] Idan Attias, Steve Hanneke, Alkis Kalavasis, Amin Karbasi, Grigoris Velegkas: Optimal Learners for Realizable Regression: PAC Learning and Online Learning. NeurIPS 2023
- [c47] Maria-Florina Balcan, Steve Hanneke, Rattana Pukdee, Dravyansh Sharma: Reliable learning in challenging environments. NeurIPS 2023
- [c46] Surbhi Goel, Steve Hanneke, Shay Moran, Abhishek Shetty: Adversarial Resilience in Sequential Prediction via Abstention. NeurIPS 2023
- [c45] Steve Hanneke, Shay Moran, Jonathan Shafer: A Trichotomy for Transductive Online Learning. NeurIPS 2023
- [c44] Guy Kornowski, Steve Hanneke, Aryeh Kontorovich: Near-optimal learning with average Hölder smoothness. NeurIPS 2023
- [i50] Moïse Blanchard, Steve Hanneke, Patrick Jaillet: Contextual Bandits and Optimistically Universal Learning. CoRR abs/2301.00241 (2023)
- [i49] Steve Hanneke, Aryeh Kontorovich, Guy Kornowski: Near-optimal learning with average Hölder smoothness. CoRR abs/2302.06005 (2023)
- [i48] Moïse Blanchard, Steve Hanneke, Patrick Jaillet: Non-stationary Contextual Bandits and Universal Learning. CoRR abs/2302.07186 (2023)
- [i47] Yuval Filmus, Steve Hanneke, Idan Mehalel, Shay Moran: Optimal Prediction Using Expert Advice and Randomized Littlestone Dimension. CoRR abs/2302.13849 (2023)
- [i46] Maria-Florina Balcan, Steve Hanneke, Rattana Pukdee, Dravyansh Sharma: Reliable Learning for Test-time Attacks and Distribution Shift. CoRR abs/2304.03370 (2023)
- [i45] Steve Hanneke, Samory Kpotufe, Yasaman Mahdaviyeh: Limits of Model Selection under Transfer Learning. CoRR abs/2305.00152 (2023)
- [i44] Surbhi Goel, Steve Hanneke, Shay Moran, Abhishek Shetty: Adversarial Resilience in Sequential Prediction via Abstention. CoRR abs/2306.13119 (2023)
- [i43] Steve Hanneke, Shay Moran, Qian Zhang: Universal Rates for Multiclass Learning. CoRR abs/2307.02066 (2023)
- [i42] Idan Attias, Steve Hanneke, Alkis Kalavasis, Amin Karbasi, Grigoris Velegkas: Optimal Learners for Realizable Regression: PAC Learning and Online Learning. CoRR abs/2307.03848 (2023)
- [i41] Steve Hanneke, Aryeh Kontorovich, Guy Kornowski: Efficient Agnostic Learning with Average Smoothness. CoRR abs/2309.17016 (2023)
- [i40] Steve Hanneke, Shay Moran, Jonathan Shafer: A Trichotomy for Transductive Online Learning. CoRR abs/2311.06428 (2023)
- 2022
- [c43] Omar Montasser, Steve Hanneke, Nathan Srebro: Transductive Robust Learning Guarantees. AISTATS 2022: 11461-11471
- [c42] Moïse Blanchard, Romain Cosson, Steve Hanneke: Universal Online Learning with Unbounded Losses: Memory Is All You Need. ALT 2022: 107-127
- [c41] Steve Hanneke: Universally Consistent Online Learning with Arbitrarily Dependent Responses. ALT 2022: 488-497
- [c40] Maria-Florina Balcan, Avrim Blum, Steve Hanneke, Dravyansh Sharma: Robustly-reliable learners under poisoning attacks. COLT 2022: 4498-4534
- [c39] Idan Attias, Steve Hanneke, Yishay Mansour: A Characterization of Semi-Supervised Adversarially Robust PAC Learnability. NeurIPS 2022
- [c38] Steve Hanneke, Amin Karbasi, Mohammad Mahmoody, Idan Mehalel, Shay Moran: On Optimal Learning Under Targeted Data Poisoning. NeurIPS 2022
- [c37] Steve Hanneke, Amin Karbasi, Shay Moran, Grigoris Velegkas: Universal Rates for Interactive Learning. NeurIPS 2022
- [c36] Omar Montasser, Steve Hanneke, Nati Srebro: Adversarially Robust Learning: A Generic Minimax Optimal Learner and Characterization. NeurIPS 2022
- [i39] Moïse Blanchard, Romain Cosson, Steve Hanneke: Universal Online Learning with Unbounded Losses: Memory Is All You Need. CoRR abs/2201.08903 (2022)
- [i38] Idan Attias, Steve Hanneke, Yishay Mansour: A Characterization of Semi-Supervised Adversarially-Robust PAC Learnability. CoRR abs/2202.05420 (2022)
- [i37] Maria-Florina Balcan, Avrim Blum, Steve Hanneke, Dravyansh Sharma: Robustly-reliable learners under poisoning attacks. CoRR abs/2203.04160 (2022)
- [i36] Steve Hanneke: Universally Consistent Online Learning with Arbitrarily Dependent Responses. CoRR abs/2203.06046 (2022)
- [i35] Idan Attias, Steve Hanneke: Adversarially Robust Learning of Real-Valued Functions. CoRR abs/2206.12977 (2022)
- [i34] Olivier Bousquet, Steve Hanneke, Shay Moran, Jonathan Shafer, Ilya O. Tolstikhin: Fine-Grained Distribution-Dependent Learning Curves. CoRR abs/2208.14615 (2022)
- [i33] Omar Montasser, Steve Hanneke, Nathan Srebro: Adversarially Robust Learning: A Generic Minimax Optimal Learner and Characterization. CoRR abs/2209.07369 (2022)
- [i32] Steve Hanneke, Amin Karbasi, Mohammad Mahmoody, Idan Mehalel, Shay Moran: On Optimal Learning Under Targeted Data Poisoning. CoRR abs/2210.02713 (2022)
- 2021
- [j14] Steve Hanneke: Learning Whenever Learning is Possible: Universal Learning under General Stochastic Processes. J. Mach. Learn. Res. 22: 130:1-130:116 (2021)
- [c35] Steve Hanneke, Liu Yang: Toward a General Theory of Online Selective Sampling: Trading Off Mistakes and Queries. AISTATS 2021: 3997-4005
- [c34] Steve Hanneke, Aryeh Kontorovich: Stable Sample Compression Schemes: New Applications and an Optimal SVM Margin Bound. ALT 2021: 697-721
- [c33] Avrim Blum, Steve Hanneke, Jian Qian, Han Shao: Robust learning under clean-label attack. COLT 2021: 591-634
- [c32] Steve Hanneke, Roi Livni, Shay Moran: Online Learning with Simple Predictors and a Combinatorial Characterization of Minimax in 0/1 Games. COLT 2021: 2289-2314
- [c31] Omar Montasser, Steve Hanneke, Nathan Srebro: Adversarially Robust Learning with Unknown Perturbation Sets. COLT 2021: 3452-3482
- [c30] Steve Hanneke: Open Problem: Is There an Online Learning Algorithm That Learns Whenever Online Learning Is Possible? COLT 2021: 4642-4646
- [c29] Noga Alon, Steve Hanneke, Ron Holzman, Shay Moran: A Theory of PAC Learnability of Partial Concept Classes. FOCS 2021: 658-671
- [c28] Olivier Bousquet, Steve Hanneke, Shay Moran, Ramon van Handel, Amir Yehudayoff: A theory of universal learning. STOC 2021: 532-541
- [i31] Steve Hanneke, Roi Livni, Shay Moran: Online Learning with Simple Predictors and a Combinatorial Characterization of Minimax in 0/1 Games. CoRR abs/2102.01646 (2021)
- [i30] Omar Montasser, Steve Hanneke, Nathan Srebro: Adversarially Robust Learning with Unknown Perturbation Sets. CoRR abs/2102.02145 (2021)
- [i29] Avrim Blum, Steve Hanneke, Jian Qian, Han Shao: Robust learning under clean-label attack. CoRR abs/2103.00671 (2021)
- [i28] Noga Alon, Steve Hanneke, Ron Holzman, Shay Moran: A Theory of PAC Learnability of Partial Concept Classes. CoRR abs/2107.08444 (2021)
- [i27] Steve Hanneke: Open Problem: Is There an Online Learning Algorithm That Learns Whenever Online Learning Is Possible? CoRR abs/2107.09542 (2021)
- [i26] Omar Montasser, Steve Hanneke, Nathan Srebro: Transductive Robust Learning Guarantees. CoRR abs/2110.10602 (2021)
- 2020
- [j13] Steve Hanneke, Lev Reyzin: Special issue on ALT 2017: Guest Editors' Introduction. Theor. Comput. Sci. 808: 1 (2020)
- [c27] Olivier Bousquet, Steve Hanneke, Shay Moran, Nikita Zhivotovskiy: Proper Learning, Helly Number, and an Optimal SVM Bound. COLT 2020: 582-609
- [c26] Steve Hanneke: Learning Whenever Learning is Possible: Universal Learning under General Stochastic Processes. ITA 2020: 1-95
- [c25] Steve Hanneke, Aryeh Kontorovich, Sivan Sabato, Roi Weiss: Universal Bayes Consistency in Metric Spaces. ITA 2020: 1-33
- [c24] Omar Montasser, Steve Hanneke, Nati Srebro: Reducing Adversarially Robust Learning to Non-Robust PAC Learning. NeurIPS 2020
- [i25] Steve Hanneke, Samory Kpotufe: On the Value of Target Data in Transfer Learning. CoRR abs/2002.04747 (2020)
- [i24] Olivier Bousquet, Steve Hanneke, Shay Moran, Nikita Zhivotovskiy: Proper Learning, Helly Number, and an Optimal SVM Bound. CoRR abs/2005.11818 (2020)
- [i23] Steve Hanneke, Samory Kpotufe: A No-Free-Lunch Theorem for MultiTask Learning. CoRR abs/2006.15785 (2020)
- [i22] Omar Montasser, Steve Hanneke, Nathan Srebro: Reducing Adversarially Robust Learning to Non-Robust PAC Learning. CoRR abs/2010.12039 (2020)
- [i21] Olivier Bousquet, Steve Hanneke, Shay Moran, Ramon van Handel, Amir Yehudayoff: A Theory of Universal Learning. CoRR abs/2011.04483 (2020)
- [i20] Steve Hanneke, Aryeh Kontorovich: Stable Sample Compression Schemes: New Applications and an Optimal SVM Margin Bound. CoRR abs/2011.04586 (2020)
2010 – 2019
- 2019
- [j12] Steve Hanneke, Aryeh Kontorovich: Optimality of SVM: Novel proofs and tighter bounds. Theor. Comput. Sci. 796: 99-113 (2019)
- [c23] Steve Hanneke, Liu Yang: Statistical Learning under Nonstationary Mixing Processes. AISTATS 2019: 1678-1686
- [c22] Steve Hanneke, Aryeh Kontorovich, Menachem Sadigurschi: Sample Compression for Real-Valued Learners. ALT 2019: 466-488
- [c21] Steve Hanneke, Aryeh Kontorovich: A Sharp Lower Bound for Agnostic Learning with Sample Compression Schemes. ALT 2019: 489-505
- [c20] Omar Montasser, Steve Hanneke, Nathan Srebro: VC Classes are Adversarially Robustly Learnable, but Only Improperly. COLT 2019: 2512-2530
- [c19] Steve Hanneke, Samory Kpotufe: On the Value of Target Data in Transfer Learning. NeurIPS 2019: 9867-9877
- [i19] Omar Montasser, Steve Hanneke, Nathan Srebro: VC Classes are Adversarially Robustly Learnable, but Only Improperly. CoRR abs/1902.04217 (2019)
- [i18] Steve Hanneke, Aryeh Kontorovich, Sivan Sabato, Roi Weiss: Universal Bayes consistency in metric spaces. CoRR abs/1906.09855 (2019)
- 2018
- [j11] Liu Yang, Steve Hanneke, Jaime G. Carbonell: Bounds on the minimax rate for estimating a prior over a VC class from independent learning tasks. Theor. Comput. Sci. 716: 124-140 (2018)
- [j10] Nikita Zhivotovskiy, Steve Hanneke: Localization of VC classes: Beyond local Rademacher complexities. Theor. Comput. Sci. 742: 27-49 (2018)
- [j9] Steve Hanneke, Liu Yang: Testing piecewise functions. Theor. Comput. Sci. 745: 23-35 (2018)
- [c18] Steve Hanneke, Adam Tauman Kalai, Gautam Kamath, Christos Tzamos: Actively Avoiding Nonsense in Generative Models. COLT 2018: 209-227
- [i17] Steve Hanneke, Adam Kalai, Gautam Kamath, Christos Tzamos: Actively Avoiding Nonsense in Generative Models. CoRR abs/1802.07229 (2018)
- [i16] Steve Hanneke, Aryeh Kontorovich: A New Lower Bound for Agnostic Learning with Sample Compression Schemes. CoRR abs/1805.08140 (2018)
- [i15] Steve Hanneke, Aryeh Kontorovich, Menachem Sadigurschi: Sample Compression for Real-Valued Learners. CoRR abs/1805.08254 (2018)
- [i14] Steve Hanneke, Aryeh Kontorovich, Menachem Sadigurschi: Agnostic Sample Compression for Linear Regression. CoRR abs/1810.01864 (2018)
- 2017
- [e1] Steve Hanneke, Lev Reyzin: International Conference on Algorithmic Learning Theory, ALT 2017, 15-17 October 2017, Kyoto University, Kyoto, Japan. Proceedings of Machine Learning Research 76, PMLR 2017 [contents]
- [i13] Amit Dhurandhar, Steve Hanneke, Liu Yang: Learning with Changing Features. CoRR abs/1705.00219 (2017)
- [i12] Steve Hanneke: Learning Whenever Learning is Possible: Universal Learning under General Stochastic Processes. CoRR abs/1706.01418 (2017)
- [i11] Steve Hanneke, Liu Yang: Testing Piecewise Functions. CoRR abs/1706.07669 (2017)
- 2016
- [j8] Steve Hanneke: The Optimal Sample Complexity of PAC Learning. J. Mach. Learn. Res. 17: 38:1-38:15 (2016)
- [j7] Steve Hanneke: Refined Error Bounds for Several Learning Algorithms. J. Mach. Learn. Res. 17: 135:1-135:55 (2016)
- [c17] Nikita Zhivotovskiy, Steve Hanneke: Localization of VC Classes: Beyond Local Rademacher Complexities. ALT 2016: 18-33
- 2015
- [j6] Yair Wiener, Steve Hanneke, Ran El-Yaniv: A compression technique for analyzing disagreement-based active learning. J. Mach. Learn. Res. 16: 713-745 (2015)
- [j5] Steve Hanneke, Liu Yang: Minimax analysis of active learning. J. Mach. Learn. Res. 16: 3487-3602 (2015)
- [c16] Steve Hanneke, Varun Kanade, Liu Yang: Learning with a Drifting Target Concept. ALT 2015: 149-164
- [c15] Liu Yang, Steve Hanneke, Jaime G. Carbonell: Bounds on the Minimax Rate for Estimating a Prior over a VC Class from Independent Learning Tasks. ALT 2015: 270-284
- [i10] Steve Hanneke, Varun Kanade, Liu Yang: Learning with a Drifting Target Concept. CoRR abs/1505.05215 (2015)
- [i9] Liu Yang, Steve Hanneke, Jaime G. Carbonell: Bounds on the Minimax Rate for Estimating a Prior over a VC Class from Independent Learning Tasks. CoRR abs/1505.05231 (2015)
- [i8] Steve Hanneke: The Optimal Sample Complexity of PAC Learning. CoRR abs/1507.00473 (2015)
- [i7] Steve Hanneke: Refined Error Bounds for Several Learning Algorithms. CoRR abs/1512.07146 (2015)
- [i6] Steve Hanneke, Tommi S. Jaakkola, Liu Yang: Statistical Learning under Nonstationary Mixing Processes. CoRR abs/1512.08064 (2015)
- 2014
- [j4] Steve Hanneke: Theory of Disagreement-Based Active Learning. Found. Trends Mach. Learn. 7(2-3): 131-309 (2014)
- [i5] Yair Wiener, Steve Hanneke, Ran El-Yaniv: A Compression Technique for Analyzing Disagreement-Based Active Learning. CoRR abs/1404.1504 (2014)
- [i4] Steve Hanneke, Liu Yang: Minimax Analysis of Active Learning. CoRR abs/1410.0996 (2014)
- 2013
- [j3] Liu Yang, Steve Hanneke, Jaime G. Carbonell: A theory of transfer learning with applications to active learning. Mach. Learn. 90(2): 161-189 (2013)
- [c14] Liu Yang, Steve Hanneke: Activized Learning with Uniform Classification Noise. ICML (2) 2013: 370-378
- 2012
- [j2] Steve Hanneke: Activized Learning: Transforming Passive to Active with Improved Label Complexity. J. Mach. Learn. Res. 13: 1469-1587 (2012)
- [c13] Maria-Florina Balcan, Steve Hanneke: Robust Interactive Learning. COLT 2012: 20.1-20.34
- [i3] Steve Hanneke, Liu Yang: Surrogate Losses in Passive and Active Learning. CoRR abs/1207.3772 (2012)
- 2011
- [c12] Liu Yang, Steve Hanneke, Jaime G. Carbonell: Identifiability of Priors from Bounded Sample Sizes with Applications to Transfer Learning. COLT 2011: 789-806
- [c11] Liu Yang, Steve Hanneke, Jaime G. Carbonell: The Sample Complexity of Self-Verifying Bayesian Active Learning. AISTATS 2011: 816-822
- [i2] Steve Hanneke: Activized Learning: Transforming Passive to Active with Improved Label Complexity. CoRR abs/1108.1766 (2011)
- [i1] Maria-Florina Balcan, Steve Hanneke: Robust Interactive Learning. CoRR abs/1111.1422 (2011)
- 2010
- [j1] Maria-Florina Balcan, Steve Hanneke, Jennifer Wortman Vaughan: The true sample complexity of active learning. Mach. Learn. 80(2-3): 111-139 (2010)
- [c10] Liu Yang, Steve Hanneke, Jaime G. Carbonell: Bayesian Active Learning Using Arbitrary Binary Valued Queries. ALT 2010: 50-58
- [c9] Steve Hanneke, Liu Yang: Negative Results for Active Learning with Convex Losses. AISTATS 2010: 321-325
2000 – 2009
- 2009
- [c8] Steve Hanneke: Adaptive Rates of Convergence in Active Learning. COLT 2009
- [c7] Steve Hanneke, Eric P. Xing: Network Completion and Survey Sampling. AISTATS 2009: 209-215
- 2008
- [c6] Maria-Florina Balcan, Steve Hanneke, Jennifer Wortman: The True Sample Complexity of Active Learning. COLT 2008: 45-56
- 2007
- [c5] Steve Hanneke: Teaching Dimension and the Complexity of Active Learning. COLT 2007: 66-81
- [c4] Fan Guo, Steve Hanneke, Wenjie Fu, Eric P. Xing: Recovering temporally rewiring networks: a model-based approach. ICML 2007: 321-328
- [c3] Steve Hanneke: A bound on the label complexity of agnostic active learning. ICML 2007: 353-360
- 2006
- [c2] Steve Hanneke, Eric P. Xing: Discrete Temporal Models of Social Networks. SNA@ICML 2006: 115-125
- [c1] Steve Hanneke: An analysis of graph cut size for transductive learning. ICML 2006: 393-399