Search Results
Nonlinear 1-Bit Precoding for Massive MU-MIMO with Higher-Order Modulation
(2016) 2016 50th Asilomar Conference on Signals, Systems and Computers. Conference Paper.
Massive multi-user (MU) multiple-input multiple-output (MIMO) is widely believed to be a core technology for the upcoming fifth-generation (5G) wireless communication standards. The use of low-precision digital-to-analog converters (DACs) in MU-MIMO base stations is of interest because it reduces the power consumption, system costs, and raw baseband data rates. In this paper, we develop novel algorithms for downlink precoding in massive ...
Decentralized Data Detection for Massive MU-MIMO on a Xeon Phi Cluster
(2016) 2016 50th Asilomar Conference on Signals, Systems and Computers. Conference Paper.
Conventional centralized data detection algorithms for massive multi-user multiple-input multiple-output (MU-MIMO) systems, such as minimum mean square error (MMSE) equalization, result in excessively high raw baseband data rates and computing complexity at the centralized processing unit. Hence, practical base-station (BS) designs for massive MU-MIMO that rely on state-of-the-art hardware processors and I/O interconnect standards must ...
Estimating Sparse Signals with Smooth Support via Convex Programming and Block Sparsity
(2016) Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016). Conference Paper.
Conventional algorithms for sparse signal recovery and sparse representation rely on l1-norm regularized variational methods. However, when applied to the reconstruction of sparse images, i.e., images where only a few pixels are non-zero, simple l1-norm-based methods ignore potential correlations in the support between adjacent pixels. In a number of applications, one is interested in images that are not only sparse, but also have a support ...
Headless Horseman: Adversarial Attacks on Transfer Learning Models
(2020) ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Conference Paper.
Transfer learning facilitates the training of task-specific classifiers using pre-trained models as feature extractors. We present a family of transferable adversarial attacks against such classifiers, generated without access to the classification head; we call these headless attacks. We first demonstrate successful transfer attacks against a victim network using only its feature extractor. This motivates the introduction of a label-blind ...
MSE-Optimal Neural Network Initialization via Layer Fusion
(2020) 2020 54th Annual Conference on Information Sciences and Systems (CISS). Conference Paper.
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks. However, the use of stochastic gradient descent combined with the nonconvexity of the underlying optimization problems renders parameter learning susceptible to initialization. To address this issue, a variety of methods that rely on random parameter initialization or knowledge distillation have been proposed in the past. In this ...
PhaseLin: Linear Phase Retrieval
(2018) 2018 52nd Annual Conference on Information Sciences and Systems (CISS). Conference Paper.
Phase retrieval deals with the recovery of complex- or real-valued signals from magnitude measurements. As shown recently, the method PhaseMax enables phase retrieval via convex optimization and without lifting the problem to a higher dimension. To succeed, PhaseMax requires an initial guess of the solution, which can be calculated via spectral initializers. In this paper, we show that with the availability of an initial guess, phase ...
Transferable Clean-Label Poisoning Attacks on Deep Neural Nets
(2019) Proceedings of Machine Learning Research: Proceedings of the 36th International Conference on Machine Learning, Long Beach, California, PMLR 97, 2019. Conference Paper.
Dealbreaker: A Nonlinear Latent Variable Model for Educational Data
(2016) Proceedings of Machine Learning Research: Proceedings of the International Conference on Machine Learning, 20-22 June 2016, New York, New York, USA. Conference Paper.
Statistical models of student responses on assessment questions, such as those in homeworks and exams, enable educators and computer-based personalized learning systems to gain insights into students' knowledge using machine learning. Popular student-response models, including the Rasch model and item response theory models, represent the probability of a student answering a question correctly using an affine function of latent factors. ...
WrapNet: Neural Net Inference with Ultra-Low-Precision Arithmetic
(2021) International Conference on Learning Representations (ICLR 2021). Conference Paper.
Low-precision neural networks represent both weights and activations with few bits, drastically reducing the multiplication complexity. Nonetheless, these products are accumulated using high-precision (typically 32-bit) additions, an operation that dominates the arithmetic complexity of inference when using extreme quantization (e.g., binary weights). To further optimize inference, we propose WrapNet that adapts neural networks to use ...