Summer Reading Group Data Science Lab

We are a group of members of the Data Science Lab at Iowa State University. We get together to read and understand very recent papers on theoretical and algorithmic aspects of nonlinear parameter estimation and deep learning. We meet twice a week. Please see our Reading List and Schedule page for more details.

Week 5 - Friday, 7th July

Understanding Trainable Sparse Coding via Matrix Factorization

In our 8th meet on Friday, Thanh presented the paper titled Understanding Trainable Sparse Coding via Matrix Factorization, along with a brief summary of this reference.

Notes for the meeting are available here.

Week 4 - Friday, 29th June

Compressed sensing using generative models

In our 7th meet on Friday, Praneeth presented the paper titled Compressed sensing using generative models.

Notes for the meeting are available here.
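The core idea of the paper is to replace the sparsity prior of classical compressed sensing with the range of a generative model G, recovering x = G(z) by minimizing ||A G(z) - y||^2 over the latent code z. A toy numpy sketch with a fixed one-layer ReLU "generator" (the generator, dimensions, step size, and restart strategy are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(1)
k, n, m = 5, 100, 30                           # latent dim, signal dim, # measurements

W = rng.standard_normal((n, k)) / np.sqrt(n)   # toy one-layer "generator" weights
G = lambda z: np.maximum(W @ z, 0.0)           # G(z) = ReLU(W z)

z_star = rng.standard_normal(k)
x_star = G(z_star)                             # signal in the range of G
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x_star                                 # m << n compressed measurements

def recover(z0, lr=0.05, n_iter=2000):
    # Gradient descent on f(z) = 0.5 * ||A G(z) - y||^2
    z = z0.copy()
    for _ in range(n_iter):
        r = A @ G(z) - y                            # residual in measurement space
        z -= lr * W.T @ ((W @ z > 0) * (A.T @ r))   # chain rule through the ReLU
    return z

# The objective is nonconvex, so try a few random restarts and keep the best fit.
cands = [recover(rng.standard_normal(k)) for _ in range(10)]
z_hat = min(cands, key=lambda z: np.linalg.norm(A @ G(z) - y))
```

The point of the paper is that far fewer measurements than the ambient dimension suffice when the signal lies (approximately) in the range of the generator.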

Week 4 - Tuesday, 27th June

Generative Adversarial Networks and Wasserstein GANs

In our 6th meet this Tuesday, Viraj completed the last part of the Generative Adversarial Networks paper in the first half. Notes are available here.

In the second half of the meet, Dr. Chinmay presented the Wasserstein GAN paper. The aim of this presentation was to understand the motivation for, and develop the background behind, the notion of the Wasserstein distance in the context of GANs.

Notes for the meeting and other additional resources will be posted here once available.
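As a small aid for building that background: in one dimension the Wasserstein-1 distance between two equal-size empirical samples has a closed form, obtained by sorting both samples and averaging the absolute differences. A quick sketch (sample sizes and distribution parameters are arbitrary choices for illustration):

```python
import numpy as np

def wasserstein_1d(a, b):
    # W1 between two equal-size empirical distributions: in 1-D the optimal
    # transport plan pairs the i-th smallest of a with the i-th smallest of b.
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 10000)
b = rng.normal(3.0, 1.0, 10000)   # same shape, shifted by 3

print(wasserstein_1d(a, b))       # ≈ 3: W1 between shifted copies equals the shift
```

Unlike the Jensen-Shannon divergence, this distance varies smoothly with the shift even when the two distributions barely overlap, which is the property the WGAN paper exploits to get useful training gradients.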

Week 3 - Friday, 23rd June

Generative Adversarial Networks

In our 5th meet on Friday, Viraj gave a brief idea about Generative Models and presented the paper titled Generative Adversarial Networks.

This tutorial was also used in conjunction.

Notes for the meeting are available here. Some illustrations for better understanding, along with some results, are available in the slides here. (Please download the presentation slides and open them in PowerPoint, as they contain a few GIFs.)

Week 2 - 13th & 16th June

Learning ReLUs via gradient descent

In our 3rd meet last Tuesday, Gauri volunteered to present the paper titled Learning ReLUs via Gradient Descent – theoretical results proving linear convergence of the projected gradient descent algorithm for fitting Rectified Linear Units (ReLUs) to data using an optimal number of samples.

Notes for Tuesday meeting are here.
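A minimal numpy sketch of the setting discussed: fit y = ReLU(⟨w*, x⟩) from Gaussian samples by running (projected) gradient descent on the squared loss. The averaging initialization and the ball-projection radius below are illustrative choices, not necessarily the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 10, 500
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = np.maximum(X @ w_star, 0.0)            # labels from a planted ReLU

# Initialize by averaging: for Gaussian x, E[2 * relu(<w*, x>) x] = w*.
w = 2.0 * X.T @ y / n
R = 2.0 * np.linalg.norm(w)                # radius for the projection step

for _ in range(500):
    p = X @ w
    # (Sub)gradient of (1/2n) * sum_i (relu(<w, x_i>) - y_i)^2
    grad = X.T @ ((np.maximum(p, 0.0) - y) * (p > 0)) / n
    w -= grad
    # Project back onto the ball of radius R (a stand-in for the paper's
    # projection onto a known constraint set containing w*).
    nw = np.linalg.norm(w)
    if nw > R:
        w *= R / nw
```

In this noiseless toy run the iterates converge to w* at a linear rate, matching the flavor of the paper's guarantee; the paper's contribution is proving this with an order-optimal number of samples.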

In the next meet on Friday, Gauri concluded the topic of Tuesday’s meet.

Notes for Friday meeting will be uploaded once available.