Journal: Journal of Machine Learning Research

Abbreviation

J. Mach. Learn. Res.

Publisher

Microtome Publishing

ISSN

1532-4435
1533-7928

Search Results

Publications 1 - 10 of 88
  • Ćevid, Domagoj; Michel, Loris; Näf, Jeffrey; et al. (2022)
    Journal of Machine Learning Research
    Random Forest (Breiman, 2001) is a successful and widely used regression and classification algorithm. Part of its appeal and reason for its versatility is its (implicit) construction of a kernel-type weighting function on training data, which can also be used for targets other than the original mean estimation. We propose a novel forest construction for multivariate responses based on their joint conditional distribution, independent of the estimation target and the data model. It uses a new splitting criterion based on the MMD distributional metric, which is suitable for detecting heterogeneity in multivariate distributions. The induced weights define an estimate of the full conditional distribution, which in turn can be used for arbitrary and potentially complicated targets of interest. The method is very versatile and convenient to use, as we illustrate on a wide range of examples. The code is available as the drf package for Python and R. Keywords: causality, distributional regression, fairness, Maximum Mean Discrepancy, Random Forests, two-sample testing
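The MMD splitting criterion named in the abstract can be illustrated with a short, self-contained sketch (this is not the drf implementation): a biased estimate of the squared Maximum Mean Discrepancy between two multivariate samples under a Gaussian kernel, which grows when the samples come from different distributions — the heterogeneity signal a candidate split would be scored on. The kernel bandwidth and sample sizes below are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and the rows of Y."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * bandwidth**2))

def mmd2(X, Y, bandwidth=1.0):
    """Biased estimate of the squared MMD between samples X and Y."""
    kxx = gaussian_kernel(X, X, bandwidth).mean()
    kyy = gaussian_kernel(Y, Y, bandwidth).mean()
    kxy = gaussian_kernel(X, Y, bandwidth).mean()
    return kxx + kyy - 2 * kxy

rng = np.random.default_rng(0)
# Two samples from the same distribution vs. a mean-shifted one:
same = mmd2(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
diff = mmd2(rng.normal(0, 1, (200, 2)), rng.normal(2, 1, (200, 2)))
# the shifted pair yields a much larger MMD, so a split separating
# the two groups would be preferred
```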
  • Weisfeiler-Lehman Graph Kernels
    Item type: Journal Article
    Shervashidze, Nino; Schweitzer, Pascal; van Leeuwen, Erik J.; et al. (2011)
    Journal of Machine Learning Research
  • Mutti, Mirco; De Santi, Riccardo; De Bartolomeis, Piersilvio; et al. (2023)
    Journal of Machine Learning Research
    Convex Reinforcement Learning (RL) is a recently introduced framework that generalizes the standard RL objective to any convex (or concave) function of the state distribution induced by the agent's policy. This framework subsumes several applications of practical interest, such as pure exploration, imitation learning, and risk-averse RL, among others. However, the previous convex RL literature implicitly evaluates the agent's performance over infinite realizations (or trials), while most of the applications require excellent performance over a handful of trials, or even just one. To meet this practical demand, we formulate convex RL in finite trials, where the objective is any convex function of the empirical state distribution computed over a finite number of realizations. In this paper, we provide a comprehensive theoretical study of the setting, which includes an analysis of the importance of non-Markovian policies to achieve optimality, as well as a characterization of the computational and statistical complexity of the problem in various configurations.
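The finite-trials objective described above — a convex (here, concave) function of the empirical state distribution over a finite number of realizations — can be sketched in a few lines. The toy random-walk environment, horizon, and entropy objective (as in pure exploration) below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, horizon, n_trials = 4, 10, 5

def run_episode():
    """Random-walk episode on a ring of states; returns the visited states."""
    s, visits = 0, []
    for _ in range(horizon):
        visits.append(s)
        s = (s + rng.choice([-1, 1])) % n_states
    return visits

# Empirical state distribution over a finite number of trials:
counts = np.zeros(n_states)
for _ in range(n_trials):
    for s in run_episode():
        counts[s] += 1
d_hat = counts / counts.sum()

# Concave objective of the empirical distribution (entropy, pure exploration):
entropy = -np.sum(d_hat * np.log(d_hat + 1e-12))
```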
  • Graph Kernels
    Item type: Journal Article
    Vishwanathan, S.V.N.; Schraudolph, Nicol N.; Kondor, Risi; et al. (2010)
    Journal of Machine Learning Research
  • Castro, Daniel C.; Tan, Jeremy; Kainz, Bernhard; et al. (2019)
    Journal of Machine Learning Research
    Revealing latent structure in data is an active field of research, having introduced exciting technologies such as variational autoencoders and adversarial networks, and is essential to push machine learning towards unsupervised knowledge discovery. However, a major challenge is the lack of suitable benchmarks for an objective and quantitative evaluation of learned representations. To address this issue we introduce Morpho-MNIST, a framework that aims to answer: “to what extent has my model learned to represent specific factors of variation in the data?” We extend the popular MNIST dataset by adding a morphometric analysis enabling quantitative comparison of trained models, identification of the roles of latent variables, and characterisation of sample diversity. We further propose a set of quantifiable perturbations to assess the performance of unsupervised and supervised methods on challenging tasks such as outlier detection and domain adaptation. Data and code are available at https://github.com/dccastro/Morpho-MNIST.
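A perturbation of the kind the abstract mentions could look like the following minimal sketch — stroke thickening implemented as binary morphological dilation. The 8×8 toy glyph and 3×3 structuring element are illustrative assumptions, not the Morpho-MNIST code.

```python
import numpy as np

def dilate(img):
    """One step of binary dilation with a 3x3 neighbourhood."""
    padded = np.pad(img, 1)
    out = np.zeros_like(img)
    # OR together all nine shifted copies of the image:
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out

glyph = np.zeros((8, 8), dtype=np.uint8)
glyph[1:7, 4] = 1          # a thin vertical stroke, standing in for a digit
thick = dilate(glyph)      # the stroke is thickened from width 1 to width 3
```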
  • Sutter, Tobias; Ganguly, Arnab; Koeppl, Heinz (2016)
    Journal of Machine Learning Research
  • Alatur, Pragnya; Levy, Kfir Y.; Krause, Andreas (2020)
    Journal of Machine Learning Research
    We consider a setting where multiple players sequentially choose among a common set of actions (arms). Motivated by an application to cognitive radio networks, we assume that players incur a loss upon colliding, and that communication between players is not possible. Existing approaches assume that the system is stationary. Yet this assumption is often violated in practice, e.g., due to signal strength fluctuations. In this work, we design the first multi-player bandit algorithm that provably works in arbitrarily changing environments, where the losses of the arms may even be chosen by an adversary. This resolves an open problem posed by Rosenski et al. (2016).
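As background for the adversarial setting above, here is a minimal single-player EXP3 sketch — the classic adversarial-bandit building block, not the authors' multi-player algorithm, which must additionally handle collisions without communication. The loss sequence and learning rate below are illustrative assumptions.

```python
import numpy as np

def exp3(losses, eta, seed=0):
    """Run EXP3 on an adversarially chosen (T, K) array of losses in [0, 1]."""
    rng = np.random.default_rng(seed)
    T, K = losses.shape
    weights = np.ones(K)
    total_loss = 0.0
    for t in range(T):
        probs = weights / weights.sum()
        arm = rng.choice(K, p=probs)
        total_loss += losses[t, arm]
        # exponential update with an importance-weighted loss estimate
        weights[arm] *= np.exp(-eta * losses[t, arm] / probs[arm])
    return total_loss

T, K = 2000, 3
rng = np.random.default_rng(1)
losses = rng.uniform(0, 1, (T, K))
losses[:, 2] *= 0.2                          # arm 2 is consistently better
eta = np.sqrt(2 * np.log(K) / (T * K))       # theory-guided learning rate
regret = exp3(losses, eta) - losses[:, 2].sum()
```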
  • Desautels, Thomas; Krause, Andreas; Burdick, Joel W. (2014)
    Journal of Machine Learning Research
  • Thanei, Gian-Andrea; Meinshausen, Nicolai; Shah, Rajen D. (2018)
    Journal of Machine Learning Research
    When performing regression on a data set with p variables, it is often of interest to go beyond using main linear effects and include interactions as products between individual variables. For small-scale problems, these interactions can be computed explicitly but this leads to a computational complexity of at least O(p^2) if done naively. This cost can be prohibitive if p is very large. We introduce a new randomised algorithm that is able to discover interactions with high probability and under mild conditions has a runtime that is subquadratic in p. We show that strong interactions can be discovered in almost linear time, whilst finding weaker interactions requires O(p^α) operations for 1 < α < 2 depending on their strength. The underlying idea is to transform interaction search into a closest pair problem which can be solved efficiently in subquadratic time. The algorithm is called xyz and is implemented in the language R. We demonstrate its efficiency for application to genome-wide association studies, where more than 10^11 interactions can be screened in under 280 seconds with a single-core 1.2 GHz CPU.
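The closest-pair reduction at the heart of the method can be sketched as follows (an illustrative toy, not the xyz package): for data coded in {-1, +1}, a strong interaction X_j · X_k ≈ Y makes the columns Z_j = Y · X_j and X_k nearly identical, so candidate pairs can be read off adjacent entries after sorting a single random 1-D projection, instead of scanning all O(p^2) pairs. The planted interaction, sample sizes, and tolerance are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 40
X = rng.choice([-1.0, 1.0], size=(n, p))
Y = X[:, 3] * X[:, 17]          # planted interaction between columns 3 and 17

# Since the entries are +/-1, Y * X[:, 3] == X[:, 17] exactly, so the
# interaction becomes a matching pair of columns between Z and X.
Z = Y[:, None] * X

r = rng.normal(size=n)                  # one random projection direction
vals = np.concatenate([r @ Z, r @ X])   # project all 2p columns to 1-D
ids = np.concatenate([np.arange(p), np.arange(p)])
grp = np.array([0] * p + [1] * p)       # 0 = column of Z, 1 = column of X

# Sort once; matching columns land next to each other.
order = np.argsort(vals, kind="stable")
candidates = [
    (ids[a], ids[b]) if grp[a] == 0 else (ids[b], ids[a])
    for a, b in zip(order[:-1], order[1:])
    if grp[a] != grp[b] and abs(vals[a] - vals[b]) < 1e-9
]
```

A single projection suffices here because the planted interaction is exact; weaker interactions would need several independent projections, which is where the O(p^α) trade-off in the abstract comes from.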
  • Chichignoud, Michael; Lederer, Johannes; Wainwright, Martin J. (2016)
    Journal of Machine Learning Research