NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Fast and Memory-efficient Low Rank Approximation of Massive Graphs

Invited talk at the Sparse Representation and Low-rank Approximation Workshop, NIPS 2011.

Title: Fast and Memory-efficient Low Rank Approximation of Massive Graphs
Speaker: Inderjit Dhillon, University of Texas at Austin

Abstract: Social network analysis requires us to perform a variety of analysis tasks, including summarization and prediction, on massive graphs. In this talk, I will present a fast and memory-efficient procedure called clustered low rank matrix approximation for massive graphs. The procedure involves a fast clustering of the graph followed by approximation of each cluster separately using existing methods, e.g., the singular value decomposition or stochastic algorithms. The clusterwise approximations are then extended to approximate the entire graph. This approach has several benefits:

  • important structure of the graph is preserved due to the clustering;
  • accurate low rank approximations are achieved;
  • the procedure is efficient both in terms of computational speed and memory usage.

Further, we generalize stochastic algorithms into the clustered low rank approximation framework and present theoretical bounds for the approximation error. Finally, a set of experiments shows that our methods outperform existing dimensionality reduction algorithms on massive social networks.
Length: 36:13
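The procedure the abstract describes — cluster the graph, approximate each cluster's block with a truncated SVD, then extend the clusterwise bases to the whole graph — can be sketched in a few lines of NumPy. This is a minimal illustration, not the speaker's implementation: it assumes a symmetric adjacency matrix, a precomputed clustering (the hypothetical `labels` array), dense linear algebra, and a plain SVD per cluster rather than the stochastic variants mentioned in the talk.

```python
import numpy as np

def clustered_low_rank(A, labels, rank):
    """Sketch of clustered low-rank approximation of a symmetric matrix A.

    1. For each cluster c, compute a rank-`rank` orthonormal basis U_c
       of the diagonal block A[c, c] via SVD.
    2. Stack the bases into a block-diagonal matrix U_bar (orthonormal
       columns, since the per-cluster bases have disjoint row supports).
    3. Extend to the whole graph: A ~= U_bar @ M @ U_bar.T with
       M = U_bar.T @ A @ U_bar, which also captures the cross-cluster
       (off-diagonal) blocks.
    """
    n = A.shape[0]
    clusters = np.unique(labels)
    U_bar = np.zeros((n, rank * len(clusters)))
    for i, c in enumerate(clusters):
        idx = np.flatnonzero(labels == c)
        block = A[np.ix_(idx, idx)]            # diagonal block for cluster c
        U, _, _ = np.linalg.svd(block)
        U_bar[idx, i * rank:(i + 1) * rank] = U[:, :rank]
    M = U_bar.T @ A @ U_bar                    # small (rank*k x rank*k) core
    return U_bar, M

# Toy usage: a random graph with two planted dense clusters.
rng = np.random.default_rng(0)
labels = np.array([0] * 20 + [1] * 20)
A = (rng.random((40, 40)) < 0.05).astype(float)   # sparse cross-cluster edges
A[:20, :20] = rng.random((20, 20)) < 0.6          # dense cluster 0
A[20:, 20:] = rng.random((20, 20)) < 0.6          # dense cluster 1
A = np.triu(A, 1)
A = A + A.T                                       # symmetrize, zero diagonal
U_bar, M = clustered_low_rank(A, labels, rank=4)
A_hat = U_bar @ M @ U_bar.T
err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
```

Note the memory benefit the abstract claims: only the per-cluster bases `U_bar` and the small core matrix `M` are stored, never a dense rank-`k` factorization of the full graph.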
