Journal: Journal of Machine Learning Research
Abbreviation: J. Mach. Learn. Res.
Publisher: Microtome Publishing
88 results
Search Results
Publications 1 - 10 of 88
- Distributional Random Forests: Heterogeneity Adjustment and Multivariate Distributional Regression
  Item type: Journal Article
  Journal of Machine Learning Research
  Ćevid, Domagoj; Michel, Loris; Näf, Jeffrey; et al. (2022)
  Random Forest (Breiman, 2001) is a successful and widely used regression and classification algorithm. Part of its appeal, and a reason for its versatility, is its (implicit) construction of a kernel-type weighting function on the training data, which can also be used for targets other than the original mean estimation. We propose a novel forest construction for multivariate responses based on their joint conditional distribution, independent of the estimation target and the data model. It uses a new splitting criterion based on the MMD distributional metric, which is suitable for detecting heterogeneity in multivariate distributions. The induced weights define an estimate of the full conditional distribution, which in turn can be used for arbitrary and potentially complicated targets of interest. The method is very versatile and convenient to use, as we illustrate on a wide range of examples. The code is available as the Python and R packages drf.
  Keywords: causality, distributional regression, fairness, maximum mean discrepancy, random forests, two-sample testing
- Weisfeiler-Lehman Graph Kernels
  Item type: Journal Article
  Journal of Machine Learning Research
  Shervashidze, Nino; Schweitzer, Pascal; van Leeuwen, Erik J.; et al. (2011)
- Convex Reinforcement Learning in Finite Trials
  Item type: Journal Article
  Journal of Machine Learning Research
  Mutti, Mirco; De Santi, Riccardo; De Bartolomeis, Piersilvio; et al. (2023)
  Convex Reinforcement Learning (RL) is a recently introduced framework that generalizes the standard RL objective to any convex (or concave) function of the state distribution induced by the agent's policy. This framework subsumes several applications of practical interest, such as pure exploration, imitation learning, and risk-averse RL, among others. However, the previous convex RL literature implicitly evaluates the agent's performance over infinite realizations (or trials), while most applications require excellent performance over a handful of trials, or even just one. To meet this practical demand, we formulate convex RL in finite trials, where the objective is any convex function of the empirical state distribution computed over a finite number of realizations. In this paper, we provide a comprehensive theoretical study of the setting, including an analysis of the importance of non-Markovian policies for achieving optimality, as well as a characterization of the computational and statistical complexity of the problem in various configurations.
- Graph Kernels
  Item type: Journal Article
  Journal of Machine Learning Research
  Vishwanathan, S.V.N.; Schraudolph, Nicol N.; Kondor, Risi; et al. (2010)
- Morpho-MNIST: Quantitative Assessment and Diagnostics for Representation Learning
  Item type: Journal Article
  Journal of Machine Learning Research
  Castro, Daniel C.; Tan, Jeremy; Kainz, Bernhard; et al. (2019)
  Revealing latent structure in data is an active field of research, having introduced exciting technologies such as variational autoencoders and adversarial networks, and is essential to push machine learning towards unsupervised knowledge discovery. However, a major challenge is the lack of suitable benchmarks for an objective and quantitative evaluation of learned representations. To address this issue we introduce Morpho-MNIST, a framework that aims to answer: "to what extent has my model learned to represent specific factors of variation in the data?" We extend the popular MNIST dataset by adding a morphometric analysis enabling quantitative comparison of trained models, identification of the roles of latent variables, and characterisation of sample diversity. We further propose a set of quantifiable perturbations to assess the performance of unsupervised and supervised methods on challenging tasks such as outlier detection and domain adaptation. Data and code are available at https://github.com/dccastro/Morpho-MNIST.
- A variational approach to path estimation and parameter inference of hidden diffusion processes
  Item type: Journal Article
  Journal of Machine Learning Research
  Sutter, Tobias; Ganguly, Arnab; Koeppl, Heinz (2016)
- Multi-Player Bandits: The Adversarial Case
  Item type: Journal Article
  Journal of Machine Learning Research
  Alatur, Pragnya; Levy, Kfir Y.; Krause, Andreas (2020)
  We consider a setting where multiple players sequentially choose among a common set of actions (arms). Motivated by an application to cognitive radio networks, we assume that players incur a loss upon colliding, and that communication between players is not possible. Existing approaches assume that the system is stationary. Yet this assumption is often violated in practice, e.g., due to signal strength fluctuations. In this work, we design the first multi-player bandit algorithm that provably works in arbitrarily changing environments, where the losses of the arms may even be chosen by an adversary. This resolves an open problem posed by Rosenski et al. (2016).
- Parallelizing Exploration-Exploitation Tradeoffs in Gaussian Process Bandit Optimization
  Item type: Journal Article
  Journal of Machine Learning Research
  Desautels, Thomas; Krause, Andreas; Burdick, Joel W. (2014)
- The xyz algorithm for fast interaction search in high-dimensional data
  Item type: Journal Article
  Journal of Machine Learning Research
  Thanei, Gian-Andrea; Meinshausen, Nicolai; Shah, Rajen D. (2018)
  When performing regression on a data set with p variables, it is often of interest to go beyond using main linear effects and include interactions as products between individual variables. For small-scale problems, these interactions can be computed explicitly, but doing so naively leads to a computational complexity of at least O(p^2). This cost can be prohibitive if p is very large. We introduce a new randomised algorithm that is able to discover interactions with high probability and, under mild conditions, has a runtime that is subquadratic in p. We show that strong interactions can be discovered in almost linear time, whilst finding weaker interactions requires O(p^α) operations for 1 < α < 2, depending on their strength. The underlying idea is to transform interaction search into a closest-pair problem, which can be solved efficiently in subquadratic time. The algorithm is called xyz and is implemented in the language R. We demonstrate its efficiency for application to genome-wide association studies, where more than 10^11 interactions can be screened in under 280 seconds with a single-core 1.2 GHz CPU.
- A Practical Scheme and Fast Algorithm to Tune the Lasso With Optimality Guarantees
  Item type: Journal Article
  Journal of Machine Learning Research
  Chichignoud, Michael; Lederer, Johannes; Wainwright, Martin J. (2016)
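The Convex Reinforcement Learning in Finite Trials abstract above contrasts objectives defined on the empirical state distribution of finitely many trials with their infinite-trial counterparts. The toy sketch below illustrates that object on a simple Markov chain; the chain, the entropy objective, and all names are illustrative assumptions, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, horizon, n_trials = 4, 10, 5

def rollout(policy, rng):
    """Sample one trajectory in a toy Markov chain; `policy` here is
    just a fixed transition matrix (hypothetical, for illustration)."""
    s, visits = 0, np.zeros(n_states)
    for _ in range(horizon):
        visits[s] += 1
        s = rng.choice(n_states, p=policy[s])
    return visits / horizon  # empirical state distribution of one trial

def entropy(d):
    d = d[d > 0]
    return -np.sum(d * np.log(d))

uniform = np.full((n_states, n_states), 1 / n_states)

# Empirical state distribution pooled over a finite number of trials.
d_hat = np.mean([rollout(uniform, rng) for _ in range(n_trials)], axis=0)

# Finite-trial objective: a concave function (entropy) of d_hat itself.
# By Jensen's inequality, E[entropy(d_hat)] <= entropy(E[d_hat]), which
# is one way to see why finite-trial and infinite-trial objectives differ.
print(round(entropy(d_hat), 3))
```

The point of the sketch is that the objective is evaluated on the random quantity `d_hat`, so its expectation over trials need not equal the objective evaluated at the expected distribution.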
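The xyz abstract above reduces interaction search to a closest-pair problem. The sketch below illustrates that reduction in a simplified form for data in {-1, +1}: since y·x_j·x_k = 1 exactly when (y·x_j) equals x_k, a strong interaction makes a column of Z := y∘X nearly identical to a column of X, and colliding random projections reveal such pairs without scanning all O(p^2) combinations. All function names and the bucketing-by-random-rows step are illustrative simplifications, not the authors' R implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data in {-1, +1}: y is driven by the interaction x_3 * x_7.
n, p = 500, 50
X = rng.choice([-1, 1], size=(n, p))
y = X[:, 3] * X[:, 7]

# y * x_j * x_k = 1  iff  (y * x_j) == x_k, so a strong interaction (j, k)
# makes column j of Z close to column k of X in Hamming distance.
Z = y[:, None] * X

def xyz_candidates(X, Z, n_projections=20, subsample=20, rng=rng):
    """Project all columns onto a random subset of rows; columns of Z
    and X that agree on a whole projection become candidate interaction
    pairs, found by hashing rather than by an all-pairs scan."""
    candidates = set()
    n = X.shape[0]
    for _ in range(n_projections):
        rows = rng.choice(n, size=subsample, replace=False)
        buckets = {}
        for j in range(Z.shape[1]):
            buckets.setdefault(tuple(Z[rows, j]), []).append(j)
        for k in range(X.shape[1]):
            for j in buckets.get(tuple(X[rows, k]), []):
                candidates.add((j, k))
    return candidates

pairs = xyz_candidates(X, Z)
# Rank the few candidates by exact interaction strength on the full data.
best = max(pairs, key=lambda jk: np.mean(y * X[:, jk[0]] * X[:, jk[1]]))
print(sorted(best))  # -> [3, 7], the planted pair
```

Each projection costs O(p) hashing work instead of O(p^2) pair comparisons, which is the essence of the subquadratic runtime claimed in the abstract; the paper's actual analysis of how many projections are needed is more refined than this sketch.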