30th COLT 2017: Amsterdam, The Netherlands
- Satyen Kale, Ohad Shamir:
Proceedings of the 30th Conference on Learning Theory, COLT 2017, Amsterdam, The Netherlands, 7-10 July 2017. Proceedings of Machine Learning Research 65, PMLR 2017
Preface
- Satyen Kale, Ohad Shamir:
Preface: Conference on Learning Theory (COLT), 2017. 1-3
Papers
- Alekh Agarwal, Akshay Krishnamurthy, John Langford, Haipeng Luo, Robert E. Schapire:
Open Problem: First-Order Regret Bounds for Contextual Bandits. 4-7
- Benjamin Fish, Lev Reyzin:
Open Problem: Meeting Times for Learning Random Automata. 8-11
- Alekh Agarwal, Haipeng Luo, Behnam Neyshabur, Robert E. Schapire:
Corralling a Band of Bandit Algorithms. 12-38
- Arpit Agarwal, Shivani Agarwal, Sepehr Assadi, Sanjeev Khanna:
Learning with Limited Rounds of Adaptivity: Coin Tossing, Multi-Armed Bandits, and Ranking from Pairwise Comparisons. 39-75
- Shipra Agrawal, Vashist Avadhanula, Vineet Goyal, Assaf Zeevi:
Thompson Sampling for the MNL-Bandit. 76-78
- Anima Anandkumar, Yuan Deng, Rong Ge, Hossein Mobahi:
Homotopy Analysis for Tensor PCA. 79-104
- Alexandr Andoni, Daniel J. Hsu, Kevin Shi, Xiaorui Sun:
Correspondence retrieval. 105-126
- Pranjal Awasthi, Avrim Blum, Nika Haghtalab, Yishay Mansour:
Efficient PAC Learning from the Crowd. 127-150
- Mitali Bafna, Jonathan R. Ullman:
The Price of Selection in Differential Privacy. 151-168
- Sivaraman Balakrishnan, Simon S. Du, Jerry Li, Aarti Singh:
Computationally Efficient Robust Sparse Estimation in High Dimensions. 169-212
- Maria-Florina Balcan, Vaishnavh Nagarajan, Ellen Vitercik, Colin White:
Learning-Theoretic Foundations of Algorithm Configuration for Combinatorial Partitioning Problems. 213-274
- Eric Balkanski, Yaron Singer:
The Sample Complexity of Optimizing a Convex Function. 275-301
- Avrim Blum, Yishay Mansour:
Efficient Co-Training of Linear Separators under Weak Dependence. 302-318
- Nicolas Brosse, Alain Durmus, Eric Moulines, Marcelo Pereyra:
Sampling from a log-concave distribution with compact support with proximal Langevin Monte Carlo. 319-342
- Victor-Emmanuel Brunel, Ankur Moitra, Philippe Rigollet, John C. Urschel:
Rates of estimation for determinantal point processes. 343-345
- Nader H. Bshouty, Dana Drachsler-Cohen, Martin T. Vechev, Eran Yahav:
Learning Disjunctions of Predicates. 346-369
- Clément L. Canonne, Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart:
Testing Bayesian Networks. 370-448
- Sebastian Casalaina-Martin, Rafael M. Frongillo, Tom Morgan, Bo Waggoner:
Multi-Observation Elicitation. 449-464
- Nicolò Cesa-Bianchi, Pierre Gaillard, Claudio Gentile, Sébastien Gerchinovitz:
Algorithmic Chaining and the Role of Partial Feedback in Online Nonparametric Learning. 465-481
- Lijie Chen, Anupam Gupta, Jian Li, Mingda Qiao, Ruosong Wang:
Nearly Optimal Sampling Algorithms for Combinatorial Pure Exploration. 482-534
- Lijie Chen, Jian Li, Mingda Qiao:
Towards Instance Optimal Bounds for Best Arm Identification. 535-592
- Yeshwanth Cherapanamjeri, Prateek Jain, Praneeth Netrapalli:
Thresholding Based Outlier Robust PCA. 593-628
- Alon Cohen, Tamir Hazan, Tomer Koren:
Tight Bounds for Bandit Combinatorial Optimization. 629-642
- Ashok Cutkosky, Kwabena Boahen:
Online Learning Without Prior Information. 643-677
- Arnak S. Dalalyan:
Further and stronger analogy between sampling and optimization: Langevin Monte Carlo and gradient descent. 678-689
- Amit Daniely:
Depth Separation for Neural Networks. 690-696
- Constantinos Daskalakis, Qinxuan Pan:
Square Hellinger Subadditivity for Bayesian Networks and its Applications to Identity Testing. 697-703
- Constantinos Daskalakis, Christos Tzamos, Manolis Zampetakis:
Ten Steps of EM Suffice for Mixtures of Two Gaussians. 704-710
- Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart:
Learning Multivariate Log-concave Distributions. 711-727
- Vitaly Feldman, Thomas Steinke:
Generalization for Adaptively-chosen Estimators via Stable Median. 728-757
- Moran Feldman, Christopher Harshaw, Amin Karbasi:
Greed Is Good: Near-Optimal Submodular Maximization via Greedy Optimization. 758-784
- Vitaly Feldman:
A General Characterization of the Statistical Query Complexity. 785-830
- Nicolas Flammarion, Francis R. Bach:
Stochastic Composite Least-Squares Regression with Convergence Rate $O(1/n)$. 831-875
- Dylan J. Foster, Alexander Rakhlin, Karthik Sridharan:
ZigZag: A New Approach to Adaptive Online Learning. 876-924
- Rafael M. Frongillo, Andrew B. Nobel:
Memoryless Sequences for Differentiable Losses. 925-939
- David Gamarnik, Quan Li, Hongyi Zhang:
Matrix Completion from $O(n)$ Samples in Linear Time. 940-947
- David Gamarnik, Ilias Zadik:
High Dimensional Regression with Binary Coefficients. Estimating Squared Error and a Phase Transition. 948-953
- Debarghya Ghoshdastidar, Maurilio Gutzeit, Alexandra Carpentier, Ulrike von Luxburg:
Two-Sample Tests for Large Random Graphs Using Network Statistics. 954-977
- Amir Globerson, Roi Livni, Shai Shalev-Shwartz:
Effective Semisupervised Learning on Manifolds. 978-1003
- Surbhi Goel, Varun Kanade, Adam R. Klivans, Justin Thaler:
Reliably Learning the ReLU in Polynomial Time. 1004-1042
- Alon Gonen, Shai Shalev-Shwartz:
Fast Rates for Empirical Risk Minimization of Strict Saddle Problems. 1043-1063
- Nick Harvey, Christopher Liaw, Abbas Mehrabian:
Nearly-tight VC-dimension bounds for piecewise linear neural networks. 1064-1068
- Avinatan Hassidim, Yaron Singer:
Submodular Optimization under Noise. 1069-1122
- David P. Helmbold, Philip M. Long:
Surprising properties of dropout in deep networks. 1123-1146
- Lunjia Hu, Ruihan Wu, Tianhong Li, Liwei Wang:
Quadratic Upper Bound for Recursive Teaching Dimension of Finite VC Classes. 1147-1156
- Bin Hu, Peter Seiler, Anders Rantzer:
A Unified Analysis of Stochastic Optimization Methods Using Jump System Theory and Quadratic Constraints. 1157-1189
- Ravindran Kannan, Santosh S. Vempala:
The Hidden Hubs Problem. 1190-1213
- Michael J. Kearns, Zhiwei Steven Wu:
Predicting with Distributions. 1214-1241
- Tomer Koren, Roi Livni, Yishay Mansour:
Bandits with Movement Costs and Adaptive Pricing. 1242-1268
- Joon Kwon, Vianney Perchet, Claire Vernade:
Sparse Stochastic Bandits. 1269-1270
- Holden Lee, Rong Ge, Tengyu Ma, Andrej Risteski, Sanjeev Arora:
On the Ability of Neural Nets to Express Distributions. 1271-1296
- Marc Lelarge, Léo Miolane:
Fundamental limits of symmetric low-rank matrix estimation. 1297-1301
- Jerry Li, Ludwig Schmidt:
Robust and Proper Learning for Mixtures of Gaussians via Systems of Polynomial Inequalities. 1302-1382
- Andrea Locatelli, Alexandra Carpentier, Samory Kpotufe:
Adaptivity to Noise Parameters in Nonparametric Active Learning. 1383-1416
- Shachar Lovett, Jiapeng Zhang:
Noisy Population Recovery from Unknown Noise. 1417-1431
- Pasin Manurangsi, Aviad Rubinstein:
Inapproximability of VC Dimension and Littlestone's Dimension. 1432-1460
- Andreas Maurer:
A Second-order Look at Stability and Generalization. 1461-1475
- Song Mei, Theodor Misiakiewicz, Andrea Montanari, Roberto Imbuzeiro Oliveira:
Solving SDPs for synchronization and MaxCut problems via the Grothendieck inequality. 1476-1515
- Dana Moshkovitz, Michal Moshkovitz:
Mixing Implies Lower Bounds for Space Bounded Learning. 1516-1566
- Gergely Neu, Vicenç Gómez:
Fast rates for online learning in Linearly Solvable Markov Decision Processes. 1567-1588
- Yury Polyanskiy, Ananda Theertha Suresh, Yihong Wu:
Sample complexity of population recovery. 1589-1618
- Aaron Potechin, David Steurer:
Exact tensor completion with sum-of-squares. 1619-1673
- Maxim Raginsky, Alexander Rakhlin, Matus Telgarsky:
Non-convex learning via Stochastic Gradient Langevin Dynamics: a nonasymptotic analysis. 1674-1703
- Alexander Rakhlin, Karthik Sridharan:
On Equivalence of Martingale Tail Bounds and Deterministic Regret Inequalities. 1704-1722
- Jonathan Scarlett, Ilija Bogunovic, Volkan Cevher:
Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization. 1723-1742
- Yevgeny Seldin, Gábor Lugosi:
An Improved Parametrization and Analysis of the EXP3++ Algorithm for Stochastic and Adversarial Bandits. 1743-1759
- Tselil Schramm, David Steurer:
Fast and robust tensor decomposition with applications to dictionary learning. 1760-1793
- Max Simchowitz, Kevin G. Jamieson, Benjamin Recht:
The Simulator: Understanding Adaptive Sampling in the Moderate-Confidence Regime. 1794-1834
- Salil P. Vadhan:
On Learning vs. Refutation. 1835-1848
- Daniel Vainsencher, Shie Mannor, Huan Xu:
Ignoring Is a Bliss: Learning with Large Noise Through Reweighting-Minimization. 1849-1881
- Jialei Wang, Weiran Wang, Nathan Srebro:
Memory and Communication Efficient Distributed Stochastic Optimization with Minibatch Prox. 1882-1919
- Blake E. Woodworth, Suriya Gunasekar, Mesrob I. Ohannessian, Nathan Srebro:
Learning Non-Discriminatory Predictors. 1920-1953
- Lijun Zhang, Tianbao Yang, Rong Jin:
Empirical Risk Minimization for Stochastic Convex Optimization: $O(1/n)$- and $O(1/n^2)$-type of Risk Bounds. 1954-1979
- Yuchen Zhang, Percy Liang, Moses Charikar:
A Hitting Time Analysis of Stochastic Gradient Langevin Dynamics. 1980-2022
- Nikita Zhivotovskiy:
Optimal learning via local entropies and sample compression. 2023-2065