Journal: Pattern Recognition Letters
Abbreviation
Pattern Recognit. Lett.
Publisher
Elsevier
Search Results
13 results
Publications 1 - 10 of 13
- Shape-aware label fusion for multi-atlas frameworks
  Item type: Journal Article. Pattern Recognition Letters. Alvén, Jennifer; Kahl, Fredrik; Landgren, Matilda; et al. (2019)
- Controlling information capacity of binary neural network
  Item type: Journal Article. Pattern Recognition Letters. Ignatov, Dmitry; Ignatov, Andrey (2020)
  Despite the growing popularity of deep learning technologies, high memory requirements and power consumption essentially limit their application in mobile and IoT areas. While binary convolutional networks can alleviate these problems, the limited bitwidth of weights often leads to a significant degradation of prediction accuracy. In this paper, we present a method for training binary networks that maintains a stable, predefined level of their information capacity throughout the training process by applying a Shannon-entropy-based penalty to convolutional filters. The results of experiments conducted on the SVHN, CIFAR and ImageNet datasets demonstrate that the proposed approach yields statistically significant improvements in the accuracy of binary networks. (© 2020 Elsevier Ltd)
  (A minimal code sketch of such an entropy penalty appears after this listing.)
- Image statistics and data mining of anal intraepithelial neoplasia
  Item type: Journal Article. Pattern Recognition Letters. Ahammer, Helmut; Kröpfl, Julia M.A.; Hackl, Christoph M.; et al. (2008)
- Introducing a weighted non-negative matrix factorization for image classification
  Item type: Journal Article. Pattern Recognition Letters. Guillamet, D.; Vitria, J.; Schiele, B. (2003)
- Competitive baseline methods set new standards for the NIPS 2003 feature selection benchmark
  Item type: Journal Article. Pattern Recognition Letters. Guyon, Isabelle; Li, Jiwen; Mader, Theodor; et al. (2007)
- Stable topological signatures for metric trees through graph approximations
  Item type: Journal Article. Pattern Recognition Letters. Vandaele, Robin; Rieck, Bastian Alexander; Saeys, Yvan; et al. (2021)
  The rising field of Topological Data Analysis (TDA) provides a new approach to learning from data through persistence diagrams, which are topological signatures that quantify topological properties of data in a comparable manner. For point clouds, these diagrams are often derived from the Vietoris-Rips filtration, based on the metric with which the data is equipped, which allows one to deduce topological patterns such as components and cycles of the underlying space. In metric trees, however, these diagrams often fail to capture other crucial topological properties, such as the present leaves and multifurcations. Prior methods and results for persistent homology that attempt to overcome this issue mainly target Rips graphs, which are often unfavorable in the case of non-uniform density across the point cloud. We therefore introduce a new theoretical foundation for learning a wider variety of topological patterns through any given graph. For particularly powerful functions defining persistence diagrams that summarize topological patterns, including the normalized centrality or eccentricity, we prove a new stability result, explicitly bounding the bottleneck distance between the true and empirical diagrams for metric trees. This bound is tight if the metric distortion obtained through the graph and its maximal edge weight are small. Through a case study of gene expression data, we demonstrate that our newly introduced diagrams provide novel quality measures and insights into cell trajectory inference.
  (A toy sketch of an eccentricity-based filtration on a graph appears after this listing.)
- Exploiting line metric reconstruction from non-central circular panoramas
  Item type: Journal Article. Pattern Recognition Letters. Bermudez-Cameo, Jesus; Saurer, Olivier; Lopez-Nicolas, Gonzalo; et al. (2017)
- Adaptive and Weighted Collaborative Representations for image classification
  Item type: Conference Paper. Pattern Recognition Letters. Timofte, Radu; Van Gool, Luc (2014)
- Hybrid generative-discriminative training of Gaussian mixture models
  Item type: Journal Article. Pattern Recognition Letters. Roth, Wolfgang; Peharz, Robert; Tschiatschek, Sebastian; et al. (2018)
- On the choice of number of superstates in the aggregation of Markov chains
  Item type: Journal Article. Pattern Recognition Letters. Srivastava, Amber; Velicheti, Raj K.; Salapaka, Srinivasa M. (2022)
  Many studies involving large Markov chains require determining a smaller representative (aggregated) chain. Each superstate in the representative chain represents a group of related states in the original Markov chain. Typically, the choice of the number of superstates in the aggregated chain is ambiguous and based on limited prior know-how. This paper presents a structured methodology for determining the best candidate for the number of superstates. It achieves this by comparing aggregated chains of different sizes. To facilitate this comparison, a new quantity called the heterogeneity of a superstate is developed and subsequently used to establish the notion of the marginal return of an aggregated chain. In particular, the marginal return captures the decrease in heterogeneity upon a unit increase in the number of superstates in the aggregated chain. The Maximum Entropy Principle (MEP), from statistical mechanics, justifies the developed notion of marginal return as well as the quantification of heterogeneity. Through simulations on synthetic Markov chains, where the number of superstates is known a priori, it is observed that the aggregated chain with the largest marginal return identifies this number. In the case of Markov chains that model real-life scenarios, it is shown that the aggregated model with the largest marginal return identifies an inherent structure unique to the scenario being modelled, thus substantiating the efficacy of the proposed methodology.
  (A toy sketch of this marginal-return selection appears after this listing.)
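The first sketch below illustrates the kind of Shannon-entropy-based capacity penalty described in the Ignatov and Ignatov (2020) abstract: a differentiable surrogate for each filter's fraction of positive weight signs is pushed toward a target entropy. This is a minimal illustration assuming PyTorch; the soft sign relaxation, the penalty form, the target value and the weighting are assumptions for illustration, not the authors' exact formulation.

```python
import torch

def binary_entropy(p, eps=1e-8):
    """Shannon entropy (in bits) of a Bernoulli variable with parameter p."""
    p = p.clamp(eps, 1.0 - eps)
    return -(p * torch.log2(p) + (1.0 - p) * torch.log2(1.0 - p))

def capacity_penalty(latent_weight, target_entropy=1.0, temperature=0.1):
    """Push each filter's (soft) sign distribution toward a target entropy.

    latent_weight: real-valued latent weights of a binarized conv layer,
                   shape (out_channels, in_channels, kH, kW).
    target_entropy: desired capacity per weight in bits (1.0 corresponds
                    to an even split of +1 / -1 signs).
    temperature: sharpness of the differentiable sign relaxation.
    """
    # Soft fraction of positive signs per output filter (differentiable).
    p_pos = torch.sigmoid(latent_weight.flatten(1) / temperature).mean(dim=1)
    ent = binary_entropy(p_pos)
    return ((ent - target_entropy) ** 2).mean()

# Hypothetical use inside a training loop:
#   reg = sum(capacity_penalty(m.weight) for m in model.modules()
#             if isinstance(m, torch.nn.Conv2d))
#   loss = task_loss + 0.01 * reg
```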
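The Vandaele et al. (2021) abstract builds persistence diagrams from functions such as eccentricity on a graph approximation of a metric tree, so that leaves and multifurcations become visible. The sketch below is a toy illustration only: it computes the 0-dimensional sublevel-set persistence of negated hop-count eccentricity on a small networkx tree with a basic union-find; the example graph, the choice of function and the helper names are assumptions, not the paper's construction or code.

```python
import networkx as nx

def sublevel_persistence_0d(graph, f):
    """0-dimensional persistence of the sublevel filtration of a vertex
    function f on a graph, via union-find with the elder rule.

    Returns a list of (birth, death) pairs; essential classes get
    death = max filtration value.
    """
    parent, birth, diagram, added = {}, {}, [], set()

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u

    for v in sorted(graph.nodes, key=lambda x: f[x]):
        parent[v], birth[v] = v, f[v]
        added.add(v)
        for u in graph.neighbors(v):
            if u not in added:
                continue
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            # Elder rule: the component born later dies at f[v].
            young, old = (ru, rv) if birth[ru] > birth[rv] else (rv, ru)
            diagram.append((birth[young], f[v]))
            parent[young] = old
    fmax = max(f.values())
    diagram.extend((birth[r], fmax) for r in {find(v) for v in graph.nodes})
    return diagram

# Toy stand-in for a metric tree: leaves have maximal eccentricity, so
# negating it makes them appear (be born) first in the filtration.
G = nx.balanced_tree(2, 3)
ecc = nx.eccentricity(G)
print(sublevel_persistence_0d(G, {v: -ecc[v] for v in ecc}))
```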
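The Srivastava et al. (2022) abstract selects the number of superstates by inspecting the marginal return, i.e. the decrease in heterogeneity per added superstate. The sketch below mimics that selection rule on a toy chain with an obvious block structure; it uses k-means on the transition rows as a stand-in for the MEP-based aggregation, and the heterogeneity proxy, function names and constants are assumptions rather than the paper's definitions.

```python
import numpy as np
from sklearn.cluster import KMeans

def heterogeneity(P, labels):
    """Within-superstate dispersion of transition rows: mean squared distance
    of each row of P to the mean row of its superstate (a proxy measure,
    not the paper's exact definition)."""
    total = 0.0
    for c in np.unique(labels):
        rows = P[labels == c]
        total += ((rows - rows.mean(axis=0)) ** 2).sum()
    return total / len(P)

def marginal_returns(P, k_max):
    """Aggregate the chain for k = 1..k_max superstates (here by k-means on
    the transition rows) and return the decrease in heterogeneity for each
    unit increase in k."""
    h = []
    for k in range(1, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(P)
        h.append(heterogeneity(P, labels))
    return {k: h[k - 2] - h[k - 1] for k in range(2, k_max + 1)}

# Toy 9-state chain with three clearly separated 3-state blocks.
rng = np.random.default_rng(0)
base = rng.dirichlet(np.ones(3), size=3)        # one prototype row per block
P = np.zeros((9, 9))
for i in range(9):
    g = i // 3
    row = np.full(9, 0.1 / 9)                   # small uniform leak
    row[3 * g:3 * g + 3] += 0.9 * base[g]       # block-local transitions
    row += rng.uniform(0.0, 0.02, size=9)       # mild per-state variation
    P[i] = row / row.sum()

# The marginal return collapses once k exceeds the number of blocks (3 here).
print(marginal_returns(P, k_max=6))
```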