Browsing by Author "Smith, Shaden"
Now showing 1 - 9 of 9
Item: A Medium-Grained Algorithm for Distributed Sparse Tensor Factorization (2016-05-02)
Smith, Shaden; Karypis, George
Modeling multi-way data can be accomplished using tensors, which are data structures indexed along three or more dimensions. Tensors are increasingly used to analyze extremely large and sparse multi-way datasets in life sciences, engineering, and business. The canonical polyadic decomposition (CPD) is a popular tensor factorization for discovering latent features and is most commonly found via the method of alternating least squares (CPD-ALS). The computational time and memory required to compute the CPD limit the size and dimensionality of the tensors that can be solved on a typical workstation, making distributed solution approaches the only viable option. Most methods for distributed-memory systems have focused on distributing the tensor in a coarse-grained, one-dimensional fashion that prohibitively requires the dense matrix factors to be fully replicated on each node. Recent work overcomes this limitation by using a fine-grained decomposition of the tensor nonzeros, at the cost of computationally expensive hypergraph partitioning. To that end, we present a medium-grained decomposition that avoids complete factor replication and communication, while eliminating the need for expensive pre-processing steps. We use a hybrid MPI+OpenMP implementation that exploits multi-core architectures with a low memory footprint. We theoretically analyze the scalability of the coarse-, medium-, and fine-grained decompositions and experimentally compare them across a variety of datasets. Experiments show that the medium-grained decomposition reduces communication volume by 36-90% compared to the coarse-grained decomposition, is 41-76x faster than a state-of-the-art MPI code, and is 1.5-5.0x faster than the fine-grained decomposition with 1024 cores.

Item: Accelerating the Tucker Decomposition with Compressed Sparse Tensors (2017-09-04)
Smith, Shaden; Karypis, George
The Tucker decomposition is a higher-order analogue of the singular value decomposition and is a popular method of performing analysis on multi-way data (tensors). Computing the Tucker decomposition of a sparse tensor is demanding in terms of both memory and computational resources. The primary kernel of the factorization is a chain of tensor-matrix multiplications (TTMc). State-of-the-art algorithms accelerate the underlying computations by trading off memory to memoize the intermediate results of TTMc in order to reuse them across iterations. We present an algorithm based on a compressed data structure for sparse tensors and show that many computational redundancies during TTMc can be identified and pruned without the memory overheads of memoization. In addition, our algorithm can further reduce the number of operations by exploiting an additional amount of user-specified memory. We evaluate our algorithm on a collection of real-world and synthetic datasets and demonstrate up to 20.7x speedup while using 28.5x less memory than the state-of-the-art parallel algorithm.
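As a rough illustration of the TTMc kernel referenced in the Tucker abstract above, here is a minimal NumPy sketch that forms the mode-1 result Y[i, r2, r3] = sum over nonzeros of x_ijk * B[j, r2] * C[k, r3] directly from a coordinate (COO) list of nonzeros. All names (coords, vals, B, C) are illustrative assumptions, not the authors' implementation, which additionally compresses the tensor and prunes redundant work.

```python
# Minimal sketch of a mode-1 TTMc for a sparse COO tensor (assumed names).
import numpy as np

def ttmc_mode1_coo(coords, vals, B, C, I):
    """coords: (nnz, 3) int array of (i, j, k) indices; vals: (nnz,) values."""
    R2, R3 = B.shape[1], C.shape[1]
    Y = np.zeros((I, R2, R3))
    for (i, j, k), v in zip(coords, vals):
        # accumulate the scaled outer product of the two factor rows
        Y[i] += v * np.outer(B[j], C[k])
    return Y

# toy 2x2x2 tensor with two nonzeros
coords = np.array([[0, 1, 0], [1, 0, 1]])
vals = np.array([2.0, -1.0])
B = np.random.rand(2, 3)   # mode-2 factor, rank 3
C = np.random.rand(2, 4)   # mode-3 factor, rank 4
Y = ttmc_mode1_coo(coords, vals, B, C, I=2)  # shape (2, 3, 4)
```

Memoization-based methods save and reuse the per-slice intermediates that this direct loop recomputes each iteration, which is the memory/operation trade-off the abstract describes.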
Item: Algorithms for Large-Scale Sparse Tensor Factorization (2019-04)
Smith, Shaden
Tensor factorization is a technique for analyzing data that features interactions along three or more axes, or modes. Many fields such as retail, health analytics, and cybersecurity utilize tensor factorization to gain useful insights and make better decisions. The tensors that arise in these domains are increasingly large, sparse, and high dimensional. Factoring these tensors is computationally expensive, if not infeasible. The ubiquity of multi-core processors and large-scale clusters motivates the development of scalable parallel algorithms to facilitate these computations. However, sparse tensor factorizations often achieve only a small fraction of potential performance due to challenges including data-dependent parallelism and memory accesses, high memory consumption, and frequent fine-grained synchronizations among compute cores. This thesis presents a collection of algorithms for factoring sparse tensors on modern parallel architectures. This work is focused on developing algorithms that are scalable while being memory- and operation-efficient. We address a number of challenges across various forms of tensor factorizations and emphasize results on large, real-world datasets.

Item: Big Data and Recommender Systems (2016-09-12)
Anastasiu, David C.; Christakopoulou, Evangelia; Smith, Shaden; Sharma, Mohit; Karypis, George
Recommender systems are ubiquitous in today's marketplace and have great commercial importance, as evidenced by the large number of companies that sell recommender systems solutions. Successful recommender systems use past product purchase and satisfaction data to make high-quality personalized recommendations. The vast amounts of data available to recommender systems today force a total re-evaluation of the methods used to compute recommendations. In this paper, we provide an overview of recommender systems in the era of Big Data. We highlight prevailing recommendation algorithms and how they have been adapted to operate in parallel and distributed computing environments. Within the recommender systems context, we focus our discussion on two specific challenges: how to scale up finding nearest neighbors and how to scale latent factor recommendation methods.

Item: Big Data Frequent Pattern Mining (2014-07-09)
Anastasiu, David C.; Iverson, Jeremy; Smith, Shaden; Karypis, George
Frequent pattern mining is an essential data mining task, with a goal of discovering knowledge in the form of repeated patterns. Many efficient pattern mining algorithms have been discovered in the last two decades, yet most do not scale to the type of data we are presented with today, the so-called "Big Data". Scalable parallel algorithms hold the key to solving the problem in this context. In this chapter, we review recent advances in parallel frequent pattern mining, analyzing them through the Big Data lens. We identify three areas as challenges to designing parallel frequent pattern mining algorithms: memory scalability, work partitioning, and load balancing. With these challenges as a frame of reference, we extract and describe key algorithmic design patterns from the wealth of research conducted in this domain.
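As a toy illustration of the work-partitioning challenge discussed in the chapter abstract above, here is a small Python sketch of the count-distribution idea used by many parallel frequent pattern miners: partition the transactions, count candidate itemsets locally in each partition, then merge the partial counts and keep itemsets meeting the minimum support. The function names and the chunking scheme are assumptions for illustration only; real systems distribute the partitions across nodes and handle far larger candidate sets.

```python
# Toy count-distribution sketch for frequent pairs (assumed names).
from collections import Counter
from itertools import combinations

def local_counts(transactions, size=2):
    c = Counter()
    for t in transactions:
        c.update(combinations(sorted(set(t)), size))
    return c

def frequent_itemsets(transactions, min_support, n_parts=4, size=2):
    # split the data into partitions; each call to local_counts is
    # independent and could run on a separate worker or node
    parts = [transactions[i::n_parts] for i in range(n_parts)]
    total = Counter()
    for p in parts:
        total.update(local_counts(p, size))
    return {iset: n for iset, n in total.items() if n >= min_support}

data = [["a", "b", "c"], ["a", "c"], ["b", "c"], ["a", "b", "c", "d"]]
print(frequent_itemsets(data, min_support=3))  # {('a', 'c'): 3, ('b', 'c'): 3}
```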
Item: DMS: Distributed Sparse Tensor Factorization with Alternating Least Squares (2015-05-12)
Smith, Shaden; Karypis, George
Tensors are data structures indexed along three or more dimensions. Tensors have found increasing use in domains such as data mining and recommender systems, where dimensions can have enormous length and the resulting data are very sparse. The canonical polyadic decomposition (CPD) is the most popular tensor factorization for discovering latent features and is most commonly found via the method of alternating least squares (CPD-ALS). Factoring large, sparse tensors is a computationally challenging task which can no longer be done in the memory of a typical workstation. State-of-the-art methods for distributed-memory systems have focused on decomposing the tensor in a one-dimensional (1D) fashion that prohibitively requires the dense matrix factors to be fully replicated on each node. To address this limitation, we present DMS, a novel distributed CPD-ALS algorithm. DMS utilizes a 3D decomposition that avoids complete factor replication and communication. DMS has a hybrid MPI+OpenMP implementation that utilizes multi-core architectures with a low memory footprint. We theoretically evaluate DMS against leading CPD-ALS methods and experimentally compare them across a variety of datasets. Our 3D decomposition reduces communication volume by 74% on average and is over 35x faster than state-of-the-art MPI code on a tensor with 1.7 billion nonzeros.

Item: Sparse Tensor Factorization on Many-Core Processors with High-Bandwidth Memory (2017-06-05)
Smith, Shaden; Park, Jongsoo; Karypis, George
HPC systems are increasingly used for data-intensive computations which exhibit irregular memory accesses, non-uniform work distributions, large memory footprints, and high memory bandwidth demands. To address these challenging demands, HPC systems are turning to many-core architectures that feature a large number of energy-efficient cores backed by high-bandwidth memory. These features are exemplified in Intel's recent Knights Landing many-core processor (KNL), which typically has 68 cores and 16GB of on-package multi-channel DRAM (MCDRAM). This work investigates how the novel architectural features offered by KNL can be used in the context of decomposing sparse, unstructured tensors using the canonical polyadic decomposition (CPD). The CPD is used extensively to analyze large multi-way datasets arising in various areas including precision healthcare, cybersecurity, and e-commerce. Towards this end, we (i) develop problem decompositions for the CPD which are amenable to hundreds of concurrent threads while maintaining load balance and low synchronization costs; and (ii) explore the utilization of architectural features such as MCDRAM. Using one KNL processor, our algorithm achieves up to 1.8x speedup over a dual-socket Intel Xeon system with 44 cores.
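Several of the abstracts above revolve around CPD-ALS, so a compact sketch of a single ALS update may help make the kernel concrete: compute the matricized-tensor-times-Khatri-Rao-product (MTTKRP) for one mode of a sparse COO tensor and solve the resulting normal equations. Variable names are illustrative assumptions; production codes such as DMS or SPLATT parallelize and distribute this step rather than looping over nonzeros in Python.

```python
# Minimal sketch of one CPD-ALS mode-1 update for a sparse COO tensor:
# M[i, :] = sum over nonzeros (i, j, k) of x_ijk * (B[j, :] * C[k, :]),
# then A = M (B^T B * C^T C)^+  (assumed names, not the authors' code).
import numpy as np

def cpd_als_update_mode1(coords, vals, B, C, I):
    R = B.shape[1]
    M = np.zeros((I, R))
    for (i, j, k), v in zip(coords, vals):
        M[i] += v * (B[j] * C[k])          # MTTKRP row accumulation
    gram = (B.T @ B) * (C.T @ C)           # Hadamard product of Gram matrices
    return M @ np.linalg.pinv(gram)        # new mode-1 factor A

coords = np.array([[0, 0, 0], [1, 1, 1], [0, 1, 1]])
vals = np.array([1.0, 3.0, 0.5])
B, C = np.random.rand(2, 2), np.random.rand(2, 2)
A = cpd_als_update_mode1(coords, vals, B, C, I=2)   # shape (2, 2)
```

A full CPD-ALS iteration cycles this update over every mode until the fit converges; the MTTKRP loop is the bottleneck that the papers above accelerate, distribute, and compress.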
Item: SPLATT: Efficient and Parallel Sparse Tensor-Matrix Multiplication (2015-05-13)
Smith, Shaden; Ravindran, Niranjay; Sidiropoulos, Nicholas D.; Karypis, George
Multi-dimensional arrays, or tensors, are increasingly found in fields such as signal processing and recommender systems. Real-world tensors can be enormous in size and often very sparse. There is a need for efficient, high-performance tools capable of processing the massive sparse tensors of today and the future. This paper introduces SPLATT, a C library with shared-memory parallelism for three-mode tensors. SPLATT contains algorithmic improvements over competing state-of-the-art tools for sparse tensor factorization. SPLATT has a fast, parallel method of multiplying a matricized tensor by a Khatri-Rao product, which is a key kernel in tensor factorization methods. SPLATT uses a novel data structure that exploits the sparsity patterns of tensors. This data structure has a small memory footprint similar to competing methods and allows for the computational improvements featured in our work. We also present a method of finding cache-friendly reorderings and utilizing them with a novel form of cache tiling. To our knowledge, this is the first work to investigate reordering and cache tiling in this context. SPLATT averages almost 30x speedup compared to our baseline when using 16 threads and reaches over 80x speedup on NELL-2.

Item: Tensor-Matrix Products with a Compressed Sparse Tensor (2015-10-12)
Smith, Shaden; Karypis, George
The Canonical Polyadic Decomposition (CPD) of tensors is a powerful tool for analyzing multi-way data and is used extensively to analyze very large and extremely sparse datasets. The bottleneck of computing the CPD is multiplying a sparse tensor by several dense matrices. Algorithms for tensor-matrix products fall into two classes. The first class saves floating point operations by storing a compressed tensor for each dimension of the data. These methods are fast but suffer high memory costs. The second class uses a single uncompressed tensor at the cost of additional floating point operations. In this work, we bridge the gap between the two approaches and introduce the compressed sparse fiber (CSF), a data structure for sparse tensors, along with a novel parallel algorithm for tensor-matrix multiplication. CSF offers similar operation reductions as existing compressed methods while using only a single tensor structure. We validate our contributions with experiments comparing against state-of-the-art methods on a diverse set of datasets. Our work uses 58% less memory than the state-of-the-art while achieving 81% of the parallel performance on 16 threads.
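To make the fiber-compression idea from the CSF abstract above more tangible, here is a rough Python sketch that collapses repeated index prefixes of a sorted coordinate list into pointer arrays, level by level, in the spirit of CSR applied recursively. The field names and layout are assumptions for illustration and are not SPLATT's actual API or storage format.

```python
# Rough sketch of a CSF-style nesting for a 3-mode sparse tensor
# (assumed field names; not the library's real data structure).
import numpy as np

def build_csf(coords, vals):
    """coords: (nnz, 3) int array of (i, j, k) indices; vals: (nnz,) values."""
    order = np.lexsort((coords[:, 2], coords[:, 1], coords[:, 0]))
    coords, vals = coords[order], vals[order]
    i_ids, i_ptr, j_ids, j_ptr = [], [], [], []
    prev = (None, None)
    for n, (i, j, _) in enumerate(coords):
        if (i, j) != prev:                  # a new (i, j) fiber starts here
            if i != prev[0]:                # a new i slice starts here
                i_ids.append(int(i))
                i_ptr.append(len(j_ids))    # index of this slice's first fiber
            j_ids.append(int(j))
            j_ptr.append(n)                 # index of this fiber's first nonzero
            prev = (i, j)
    i_ptr.append(len(j_ids))
    j_ptr.append(len(vals))
    return dict(i_ids=i_ids, i_ptr=i_ptr, j_ids=j_ids, j_ptr=j_ptr,
                k_ids=coords[:, 2].tolist(), vals=vals.tolist())

# toy tensor with four nonzeros; slice i=0 holds two fibers, slice i=2 holds one
coords = np.array([[0, 0, 0], [0, 1, 1], [0, 0, 2], [2, 0, 0]])
vals = np.array([1.0, 3.0, 2.0, 4.0])
csf = build_csf(coords, vals)
print(csf["i_ids"], csf["i_ptr"])   # [0, 2] [0, 2, 3]
print(csf["j_ids"], csf["j_ptr"])   # [0, 1, 0] [0, 2, 3, 4]
```

Each distinct i owns a contiguous range of (i, j) fibers and each fiber owns a contiguous range of nonzeros, so repeated i and (i, j) prefixes are stored once rather than once per nonzero, which is the memory/operation middle ground the abstract describes.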