Journal: Pattern Recognition Letters

Abbreviation

Pattern Recognit. Lett.

Publisher

Elsevier

ISSN

0167-8655
1872-7344

Search Results

Publications 1–10 of 13
  • Alvén, Jennifer; Kahl, Fredrik; Landgren, Matilda; et al. (2019)
    Pattern Recognition Letters
  • Ignatov, Dmitry; Ignatov, Andrey (2020)
    Pattern Recognition Letters
    Despite the growing popularity of deep learning technologies, high memory requirements and power consumption substantially limit their application in mobile and IoT settings. While binary convolutional networks can alleviate these problems, the limited bitwidth of their weights often leads to a significant degradation of prediction accuracy. In this paper, we present a method for training binary networks that maintains a stable, predefined level of their information capacity throughout the training process by applying a Shannon entropy-based penalty to the convolutional filters. Experiments conducted on the SVHN, CIFAR and ImageNet datasets demonstrate that the proposed approach yields statistically significant improvements in the accuracy of binary networks. (© 2020 Elsevier Ltd)
  • Ahammer, Helmut; Kröpfl, Julia M.A.; Hackl, Christoph M.; et al. (2008)
    Pattern Recognition Letters
  • Guillamet, D.; Vitria, J.; Schiele, B. (2003)
    Pattern Recognition Letters
  • Guyon, Isabelle; Li, Jiwen; Mader, Theodor; et al. (2007)
    Pattern Recognition Letters
  • Vandaele, Robin; Rieck, Bastian Alexander; Saeys, Yvan; et al. (2021)
    Pattern Recognition Letters
    The rising field of Topological Data Analysis (TDA) provides a new approach to learning from data through persistence diagrams: topological signatures that quantify topological properties of data in a comparable manner. For point clouds, these diagrams are often derived from the Vietoris-Rips filtration, based on the metric with which the data is equipped, which allows one to deduce topological patterns such as components and cycles of the underlying space. For metric trees, however, these diagrams often fail to capture other crucial topological properties, such as leaves and multifurcations. Prior methods and results for persistent homology that attempt to overcome this issue mainly target Rips graphs, which are often unfavorable when the density across the point cloud is non-uniform. We therefore introduce a new theoretical foundation for learning a wider variety of topological patterns through any given graph. For particularly powerful functions defining persistence diagrams that summarize topological patterns, including the normalized centrality and eccentricity, we prove a new stability result that explicitly bounds the bottleneck distance between the true and empirical diagrams for metric trees. This bound is tight when the metric distortion induced by the graph and its maximal edge weight are small. Through a case study of gene expression data, we demonstrate that our newly introduced diagrams provide novel quality measures and insights into cell trajectory inference.
  • Bermudez-Cameo, Jesus; Saurer, Olivier; Lopez-Nicolas, Gonzalo; et al. (2017)
    Pattern Recognition Letters
  • Timofte, Radu; Van Gool, Luc (2014)
    Pattern Recognition Letters
  • Roth, Wolfgang; Peharz, Robert; Tschiatschek, Sebastian; et al. (2018)
    Pattern Recognition Letters
  • Srivastava, Amber; Velicheti, Raj K.; Salapaka, Srinivasa M. (2022)
    Pattern Recognition Letters
    Many studies involving large Markov chains require determining a smaller representative (aggregated) chain. Each superstate in the representative chain represents a group of related states in the original Markov chain. Typically, the choice of the number of superstates in the aggregated chain is ambiguous and based on limited prior knowledge. This paper presents a structured methodology for determining the best candidate for the number of superstates. It achieves this by comparing aggregated chains of different sizes. To facilitate this comparison, a new quantity called the heterogeneity of a superstate is developed and subsequently used to establish the notion of the marginal return of an aggregated chain. In particular, the marginal return captures the decrease in heterogeneity upon a unit increase in the number of superstates in the aggregated chain. The Maximum Entropy Principle (MEP) from statistical mechanics justifies the developed notion of marginal return, as well as the quantification of heterogeneity. Through simulations on synthetic Markov chains, where the number of superstates is known a priori, it is observed that the aggregated chain with the largest marginal return identifies this number. For Markov chains that model real-life scenarios, it is shown that the aggregated model with the largest marginal return identifies an inherent structure unique to the scenario being modelled, thus substantiating the efficacy of the proposed methodology.
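The Shannon entropy penalty described in the Ignatov & Ignatov (2020) abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the quadratic penalty form are assumptions, and in an actual training loop the penalty would be applied per convolutional filter as an additional loss term on relaxed (real-valued) weights.

```python
import math

def sign_entropy(weights):
    """Empirical Shannon entropy (in bits) of the sign distribution of a
    filter's weights. A maximally informative binary filter has entropy 1.0
    (equal +1/-1 counts); a collapsed filter (all one sign) has entropy 0.0."""
    n = len(weights)
    p = sum(1 for w in weights if w >= 0) / n  # fraction of non-negative weights
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def capacity_penalty(weights, target=1.0, strength=0.1):
    """Illustrative penalty that grows as the filter's information capacity
    drifts away from a predefined target entropy level."""
    return strength * (target - sign_entropy(weights)) ** 2
```

A balanced filter such as `[1, -1, 1, -1]` attains the full 1-bit capacity and incurs zero penalty, while a filter whose signs collapse is pushed back toward the target level.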
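The eccentricity function named in the Vandaele et al. (2021) abstract is one of the diagram-defining functions on a graph. A minimal sketch for an unweighted, connected graph is below; the function names are hypothetical, and the paper works with weighted metric graphs rather than hop counts.

```python
from collections import deque

def eccentricities(adj):
    """Eccentricity of each node: the maximum shortest-path distance
    (hop count) to any other node, computed by one BFS per node.
    `adj` maps each node to a list of its neighbors."""
    def max_bfs_depth(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return max(dist.values())
    return {u: max_bfs_depth(u) for u in adj}
```

On a path graph 0–1–2–3 this yields eccentricity 3 at the endpoints and 2 in the middle; sublevel sets of such a function are what the persistence diagram then summarizes.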
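The heterogeneity and marginal-return quantities in the Srivastava et al. (2022) abstract can be illustrated with a simple stand-in definition. The paper derives its own MEP-based quantities, so the row-variance heterogeneity used here is an assumption for illustration only: it measures how much the transition rows of the states grouped into a superstate disagree.

```python
def heterogeneity(P, group):
    """Illustrative heterogeneity of one superstate: mean squared deviation
    of its member states' transition-probability rows (rows of P) from the
    group's average row. Zero when all member rows are identical."""
    n = len(P)
    mean_row = [sum(P[i][j] for i in group) / len(group) for j in range(n)]
    return sum((P[i][j] - mean_row[j]) ** 2
               for i in group for j in range(n)) / len(group)

def total_heterogeneity(P, partition):
    """Sum of heterogeneities over all superstates of a partition."""
    return sum(heterogeneity(P, g) for g in partition)

def marginal_return(P, coarser, finer):
    """Decrease in total heterogeneity when moving from a coarser partition
    to a finer one (i.e., when adding superstates)."""
    return total_heterogeneity(P, coarser) - total_heterogeneity(P, finer)
```

For a chain with two identical rows and one distinct row, grouping the identical rows gives zero heterogeneity, so refining a one-superstate partition into that grouping produces a strictly positive marginal return.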