Topic: Reading Group
When I worked at Lab41, I started a popular series of blog posts reviewing key papers in machine learning. Since then I have added more reviews, including a summary of my PhD thesis. You can find them all below:
My PhD Thesis, In Short
I graduated from the University of Minnesota in June 2015. I wrote an esoteric thesis about Z boson decay, which I explain here.
Lab41 Reading Group: Swapout: Learning an Ensemble of Deep Architectures
Want to train a network but unsure about dropout vs. stochastic depth? Should you use a ResNet? Stop worrying and use Swapout; it does all that and more!
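To make the idea concrete, here is a minimal sketch of a swapout unit in PyTorch (my choice of framework; the keep probability and the deterministic test-time scaling are illustrative simplifications — the paper actually prefers averaging several stochastic forward passes at inference):

```python
import torch
import torch.nn as nn

class Swapout(nn.Module):
    """Sketch of a swapout unit: y = theta1 * x + theta2 * F(x), with
    theta1, theta2 independent per-unit Bernoulli masks. Fixing the
    masks recovers special cases: (1, 1) is a ResNet block, (0,
    Bernoulli) is dropout, and (1, one Bernoulli per layer) is
    stochastic depth."""
    def __init__(self, residual_branch: nn.Module, keep_prob: float = 0.8):
        super().__init__()
        self.f = residual_branch  # the block's transformation F(x)
        self.p = keep_prob

    def forward(self, x):
        fx = self.f(x)
        if self.training:
            theta1 = torch.bernoulli(torch.full_like(x, self.p))
            theta2 = torch.bernoulli(torch.full_like(fx, self.p))
            return theta1 * x + theta2 * fx
        # deterministic inference via expected values; the paper notes
        # this is only approximate and sampling works better
        return self.p * x + self.p * fx
```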
Lab41 Reading Group: Skip-Thought Vectors
Word embeddings are great and should be your first stop for word-based NLP. But what about sentences? Read on to learn about skip-thought vectors, a sentence embedding algorithm!
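For a rough picture of the training objective, here is a minimal PyTorch sketch (the vocabulary size, dimensions, and conditioning the decoders only through their initial hidden state are all my simplifications of the paper's setup):

```python
import torch
import torch.nn as nn

class SkipThoughts(nn.Module):
    """Sketch of the skip-thought objective: encode sentence s_i with a
    GRU, then train two decoders to predict the previous sentence s_prev
    and the next sentence s_next from that encoding."""
    def __init__(self, vocab_size=20000, embed_dim=300, hidden_dim=600):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.dec_prev = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.dec_next = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, s_i, s_prev, s_next):
        # the encoder's final hidden state is the sentence vector
        _, h = self.encoder(self.embed(s_i))
        # decoders are seeded with the sentence vector; with teacher
        # forcing their inputs would be the targets shifted by one token
        prev_states, _ = self.dec_prev(self.embed(s_prev), h)
        next_states, _ = self.dec_next(self.embed(s_next), h)
        return self.out(prev_states), self.out(next_states), h.squeeze(0)
```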
Lab41 Reading Group: Deep Residual Learning for Image Recognition
Inception, AlexNet, VGG... there are so many network architectures; which one should you use? The one everyone else is using: ResNet! Come find out how it works!
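The core trick fits in a few lines. Here is a minimal PyTorch sketch of one residual block (channel counts and layer choices are illustrative):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Sketch of the ResNet idea: instead of learning a mapping H(x)
    directly, learn the residual F(x) = H(x) - x and add the input back
    through a skip connection, so y = F(x) + x."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        fx = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(fx + x)  # identity shortcut
```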
Lab41 Reading Group: Deep Compression
Deep learning is the future, but how can I fit a battery-draining, half-gigabyte network on my phone? You compress it! Come find out how deep compression saves space and power!
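As a toy illustration of the first two stages, pruning and weight sharing, here is a NumPy sketch (the prune fraction, the 16-entry codebook, and the crude k-means loop are my illustrative choices; the paper's final Huffman-coding stage is omitted):

```python
import numpy as np

def prune_and_share(weights, prune_frac=0.9, n_clusters=16):
    """Sketch of two deep-compression stages: magnitude pruning (zero
    out small weights), then weight sharing (quantize the survivors to
    a small codebook via k-means, so each weight stores only an index)."""
    w = weights.copy()
    threshold = np.quantile(np.abs(w), prune_frac)
    w[np.abs(w) < threshold] = 0.0  # stage 1: prune small weights

    survivors = w[w != 0]
    # stage 2: k-means weight sharing, linearly initialized centroids
    centroids = np.linspace(survivors.min(), survivors.max(), n_clusters)
    for _ in range(20):
        labels = np.argmin(np.abs(survivors[:, None] - centroids[None, :]), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centroids[k] = survivors[labels == k].mean()
    # each surviving weight is replaced by its 4-bit codebook entry
    w[w != 0] = centroids[labels]
    return w
```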
Lab41 Reading Group: Deep Networks with Stochastic Depth
Dropout successfully regularizes networks by dropping nodes, but what if we went one step further? Find out how stochastic depth improves your network by dropping whole layers!
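The mechanism is simple enough to sketch. Here is a minimal PyTorch version of one block (I use a fixed survival probability rather than the paper's linearly decaying schedule, as a simplification):

```python
import torch
import torch.nn as nn

class StochasticDepthBlock(nn.Module):
    """Sketch of stochastic depth: during training each residual block
    is dropped entirely with some probability, leaving only the identity
    shortcut; at test time the residual branch is scaled by its survival
    probability."""
    def __init__(self, residual_branch: nn.Module, survival_prob: float = 0.8):
        super().__init__()
        self.f = residual_branch
        self.p = survival_prob

    def forward(self, x):
        if self.training:
            if torch.rand(1).item() < self.p:
                return x + self.f(x)   # block survives this pass
            return x                   # whole layer dropped: identity only
        return x + self.p * self.f(x)  # expected value at test time
```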
Lab41 Reading Group: Generative Adversarial Nets
What cost function would you use to determine if a picture looks real? How about one learned by another network! Find out more with my summary of Generative Adversarial Networks!
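To see the two-network game in code, here is a minimal PyTorch sketch on 1-D toy data (the architectures, optimizer settings, and toy distribution are all illustrative):

```python
import torch
import torch.nn as nn

# The discriminator D learns a "cost function" that scores realness,
# and the generator G is trained to maximize that score on its samples.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0  # "real" data: N(2, 0.5)
    fake = G(torch.randn(64, 8))

    # discriminator step: label real data 1, generated data 0
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # generator step: fool the discriminator into outputting 1
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```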