Partner: A. Kozachinskiy


Conference papers
1. Delle Rose V., Kozachinskiy A., Steifer T., Effective Littlestone Dimension, 36th International Conference on Algorithmic Learning Theory, 2025-02-24/02-27, Milan (IT), No.272:405-417, pp.1-13, 2025
Abstract:

Delle Rose et al. (COLT’23) introduced an effective version of the Vapnik-Chervonenkis dimension and showed that it characterizes improper PAC learning with total computable learners. In this paper, we introduce and study an analogous effectivization of the notion of Littlestone dimension. Finite effective Littlestone dimension is a necessary condition for computable online learning, but not a sufficient one, which we establish already for classes of effective Littlestone dimension 2. However, the effective Littlestone dimension equals the optimal mistake bound for computable learners in two special cases: (a) for classes of Littlestone dimension 1, and (b) when the learner receives, as additional information, an upper bound on the numbers to be guessed. Interestingly, finite effective Littlestone dimension also guarantees that the class consists only of computable functions.
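For a finite class over a finite domain, the classical (non-effective) Littlestone dimension admits a standard recursive characterization via mistake trees, which can be computed directly. The sketch below only illustrates that recursion; it does not implement the effective version studied in the paper, and all names are ours.

```python
from itertools import product

def ldim(H, domain):
    """Littlestone dimension of a finite class: each hypothesis h is a
    tuple of {0,1}-labels indexed by the points of `domain`.
    Exponential-time; intended only as an illustration."""
    H = list(H)
    if not H:
        return -1          # convention for the empty class
    if len(H) == 1:
        return 0
    best = 0
    for x in domain:
        # split the class by the label assigned to x
        H0 = [h for h in H if h[x] == 0]
        H1 = [h for h in H if h[x] == 1]
        if H0 and H1:      # x can serve as the root of a mistake tree
            best = max(best, 1 + min(ldim(H0, domain), ldim(H1, domain)))
    return best

# The class of ALL binary functions on a 3-point domain has dimension 3.
full = list(product([0, 1], repeat=3))
```

The recursion terminates because both restrictions `H0` and `H1` are strict subsets of `H` whenever the branch is taken.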

Keywords:

online learning, Littlestone dimension, computability

Affiliations:
Delle Rose V.-University of Siena (IT)
Kozachinskiy A.-other affiliation
Steifer T.-IPPT PAN
2. Kozachinskiy A., Shen A., Steifer T., Optimal Bounds for Dissatisfaction in Perpetual Voting, 39-AAAI, Thirty-Ninth AAAI Conference on Artificial Intelligence, 2025-02-25/03-04, Philadelphia (US), DOI: 10.1609/aaai.v39i13.33529, No.39(13), pp.13977-13984, 2025
Abstract:

In perpetual voting, multiple decisions are made at different moments in time. Taking the history of previous decisions into account allows us to satisfy properties such as proportionality over periods of time. In this paper, we consider the following question: is there a perpetual approval voting method that guarantees that no voter is dissatisfied too many times? We identify a sufficient condition on voter behavior, which we call the 'bounded conflicts' condition, under which sublinear growth of dissatisfaction is possible. We provide a tight upper bound on the growth of dissatisfaction under bounded conflicts, using techniques from Kolmogorov complexity. We also observe that approval voting with binary choices mimics the machine-learning setting of prediction with expert advice. This allows us to present a voting method with sublinear guarantees on dissatisfaction under bounded conflicts, based on standard techniques from prediction with expert advice.
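The connection to prediction with expert advice can be illustrated with the classical weighted-majority scheme; this is a standard textbook sketch, not the specific voting method of the paper, and the function name and loss model are our choices.

```python
def weighted_majority(expert_preds, outcomes, eta=0.5):
    """Binary prediction with expert advice via weighted majority.
    expert_preds[t][i] is expert i's {0,1}-prediction in round t;
    outcomes[t] is the realized outcome. Returns the learner's mistakes."""
    w = [1.0] * len(expert_preds[0])
    mistakes = 0
    for preds, y in zip(expert_preds, outcomes):
        # weighted vote in favor of outcome 1
        vote1 = sum(wi for wi, p in zip(w, preds) if p == 1)
        guess = 1 if vote1 >= sum(w) / 2 else 0
        mistakes += guess != y
        # multiplicatively shrink the weight of every expert that erred
        w = [wi * (1 - eta) if p != y else wi for wi, p in zip(w, preds)]
    return mistakes
```

With a perfect expert present, the learner's mistake count stays within a constant factor of logarithmic in the number of experts, which is the kind of sublinear guarantee alluded to above.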

Affiliations:
Kozachinskiy A.-other affiliation
Shen A.-other affiliation
Steifer T.-IPPT PAN
3. Kozachinskiy A., Steifer T., Simple Online Learning with Consistent Oracle, COLT 2024, 37th Annual Conference on Learning Theory, 2024-06-30/07-03, Edmonton (CA), Vol.247, pp.1-16, 2024
Abstract:

We consider online learning in the model where a learning algorithm can access the class only via a consistent oracle: an oracle that, at any moment, can give a function from the class agreeing with all examples seen so far. This model was recently considered by Assos et al. (COLT’23). It is motivated by the fact that standard methods of online learning rely on computing the Littlestone dimension of subclasses, a computationally intractable problem. Assos et al. gave an online learning algorithm in this model that makes at most C^d mistakes on classes of Littlestone dimension d, for an unspecified absolute constant C > 0. We give a novel algorithm that makes at most O(256^d) mistakes. Our proof is significantly simpler and uses only very basic properties of the Littlestone dimension. We also show that there exists no algorithm in this model that makes fewer than 3^d mistakes. Our algorithm (as well as that of Assos et al.) resolves an open problem posed by Hasrati and Ben-David (ALT’23): namely, it demonstrates that every class of finite Littlestone dimension with a recursively enumerable representation admits a computable online learner (which may be undefined on unrealizable samples).
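The oracle access model can be sketched as follows for a finite class. The naive majority-over-pool learner below only illustrates the interface (the learner never inspects the class directly, only queries the oracle); it carries none of the paper's mistake bounds, and all names are ours.

```python
def make_consistent_oracle(H):
    """Consistent oracle for a finite class H (hypotheses as label tuples):
    given the examples seen so far, return some h in H agreeing with all
    of them, or None if no such h exists."""
    def oracle(examples):
        for h in H:
            if all(h[x] == y for x, y in examples):
                return h
        return None
    return oracle

def pool_learner(oracle, stream):
    """Naive online learner with oracle access only: keep a pool of
    oracle-supplied hypotheses, predict by majority vote, and ask the
    oracle for a fresh consistent hypothesis after every mistake."""
    seen = []
    pool = [oracle(seen)]
    mistakes = 0
    for x, y in stream:
        vote1 = sum(h[x] for h in pool)
        guess = 1 if 2 * vote1 >= len(pool) else 0
        if guess != y:
            mistakes += 1
            seen.append((x, y))
            h = oracle(seen)
            if h is not None:   # the stream was realizable so far
                pool.append(h)
    return mistakes

# Thresholds on {0, 1, 2}: h_k(x) = 1 iff x >= k.
thresholds = [(1, 1, 1), (0, 1, 1), (0, 0, 1), (0, 0, 0)]
```

Note that the learner stores only oracle answers and its own counterexamples, which is exactly the kind of access the model permits.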

Keywords:

Online learning, consistent oracle, Littlestone dimension

Affiliations:
Kozachinskiy A.-other affiliation
Steifer T.-IPPT PAN
4. Delle Rose V., Kozachinskiy A., Rojas C., Steifer T., Find a witness or shatter: the landscape of computable PAC learning, COLT 2023, The Thirty Sixth Annual Conference on Learning Theory, 2023-07-12/07-15, Bangalore (IN), No.195, pp.1-14, 2023
Abstract:

This paper contributes to the study of CPAC learnability, a computable version of PAC learning, by solving three open questions from recent papers. Firstly, we prove that every improperly CPAC learnable class is contained in a class which is properly CPAC learnable with polynomial sample complexity. This confirms a conjecture by Agarwal et al. (COLT 2021). Secondly, we show that there exists a decidable class of hypotheses which is properly CPAC learnable, but only with uncomputably fast-growing sample complexity. This solves a question from Sterkenburg (COLT 2022). Finally, we construct a decidable class of finite Littlestone dimension which is not improperly CPAC learnable, strengthening a recent result of Sterkenburg (2022) and answering a question posed by Hasrati and Ben-David (ALT 2023). Together with previous work, our results provide a complete landscape for the learnability problem in the CPAC setting.

Keywords:

PAC learnability, CPAC learnability, VC dimension, Littlestone dimension, computability, foundations of machine learning

Affiliations:
Delle Rose V.-University of Siena (IT)
Kozachinskiy A.-other affiliation
Rojas C.-other affiliation
Steifer T.-IPPT PAN