Schedule

Volunteer by editing below. Small contributions are highly encouraged, and don't take the topic of the week too seriously.

Future

2019

October 3

Remi M: Can CCLF predict in vitro tumor growth with ML?

Past

September 4th

Sam F: High-Fidelity Image Generation With Fewer Labels https://arxiv.org/pdf/1903.02271v1.pdf

August 22nd

Sam F: Venture beyond empirical risk minimization with mixup, a simple and effective data augmentation strategy: https://arxiv.org/abs/1710.09412
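
The core of mixup fits in a few lines; here is a minimal NumPy sketch (the batch-level pairing and alpha=0.2 are common choices, not mandated by the paper):

```python
import numpy as np

def mixup(x, y, alpha=0.2):
    """Train on convex combinations of random example pairs.
    x: batch of inputs, y: one-hot labels, alpha: Beta prior."""
    lam = np.random.beta(alpha, alpha)       # mixing coefficient
    idx = np.random.permutation(len(x))      # random pairing within the batch
    x_mix = lam * x + (1 - lam) * x[idx]
    y_mix = lam * y + (1 - lam) * y[idx]     # labels are mixed the same way
    return x_mix, y_mix
```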

August 8th

Sam F: pix2pix: Image-to-Image Translation with Conditional Adversarial Networks. https://arxiv.org/pdf/1611.07004.pdf

July 25th

Sam F: Snorkeling for training data: "Snorkel: Rapid Training Data Creation with Weak Supervision" https://arxiv.org/abs/1711.10160

June 27th

Sam F: Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) https://arxiv.org/pdf/1711.11279.pdf

May 30th

Sam F: "Deep-learning cardiac motion analysis for human survival prediction" a de-noising autoencoder which predicts survival from 4D cardiac MRIs https://www.nature.com/articles/s42256-019-0019-2

May 17th

Sam F: Attention is all you need. We trace the development of attention in recurrent language models with "Neural Machine Translation by Jointly Learning to Align and Translate" (https://arxiv.org/pdf/1409.0473.pdf), its standalone usage in "Attention Is All You Need" (https://arxiv.org/pdf/1706.03762.pdf), and recent applications augmenting convolutional nets in "Attention Augmented Convolutional Networks" (https://arxiv.org/pdf/1904.09925.pdf).
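
For reference, the scaled dot-product attention at the heart of these papers, as a minimal single-head, unbatched NumPy sketch:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, the core operation of
    "Attention Is All You Need". Q: (n, d_k), K: (m, d_k), V: (m, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # row-wise softmax
    return weights @ V                                    # weighted sum of values
```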

May 2nd

Joshua Batson: Noise2Self: Blind Denoising by Self-Supervision https://arxiv.org/abs/1901.11365
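
A very rough sketch of the self-supervision trick, assuming per-pixel independent noise (the masking scheme here is illustrative; the paper develops the general J-invariance framework):

```python
import numpy as np

def self_supervised_denoising_loss(denoiser, noisy, mask_frac=0.02):
    """Hide a random subset of pixels, let the net predict them from
    the remaining pixels only, and score it on the hidden ones. With
    independent noise, copying the input earns no reward, so the
    network is pushed toward predicting the clean signal."""
    mask = np.random.rand(*noisy.shape) < mask_frac
    corrupted = noisy.copy()
    corrupted[mask] = np.random.permutation(noisy[mask])  # scramble hidden pixels
    pred = denoiser(corrupted)                            # denoiser: array -> array
    return np.mean((pred[mask] - noisy[mask]) ** 2)
```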

April 18th

Alessandro Achille: Critical Learning Periods in Deep Neural Networks. https://arxiv.org/pdf/1711.08856.pdf

April 4th

Sam F: Cardiologist-Level Arrhythmia Detection with Convolutional Neural Networks https://arxiv.org/pdf/1707.01836.pdf The paper frames ECG processing as a sequence-to-sequence learning problem. We will also discuss and demo several other ways to structure this problem, including multi-task classification, waveform regression, and captioning.
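
As a point of reference for the demo, a sequence-to-sequence framing in Keras might look like the sketch below; the input length, strides, and class count are hypothetical, not the paper's:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Raw single-lead ECG samples in, one rhythm class per coarse time step out.
inputs = keras.Input(shape=(7680, 1))
x = layers.Conv1D(32, 16, strides=4, padding='same', activation='relu')(inputs)
x = layers.Conv1D(64, 16, strides=4, padding='same', activation='relu')(x)
x = layers.Conv1D(128, 16, strides=4, padding='same', activation='relu')(x)
outputs = layers.Conv1D(12, 1, activation='softmax')(x)  # 12 classes per output step
model = keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy')
```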

March 21st

Jon Bloom: Morse ensemble learning (Morsembling?): how the (trivial) topology of Euclidean space forces geometric relationships between the critical points of any "smooth" loss function on a deep neural network. Together with known probabilistic results, this gives a theoretical foundation for existing ensemble learning methods, which in turn suggests new recursive algorithms that trade off between the quality and quantity of minima. I'm excited to have a conversation about why ensemble (and consensus) methods may be particularly useful when applying ML to biology. References:

  • The Loss Surfaces of Multilayer Networks (2014)
  • Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs (NeurIPS 2018)
  • Essentially No Barriers in Neural Network Energy Landscape (ICML 2018)
  • Stochastic Gradient Descent Escapes Saddle Points Efficiently (2019)
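
For concreteness, the ensembling these results analyze can be as simple as the sketch below; train_fresh_model is a hypothetical helper that trains the same architecture from a new random initialization (i.e. lands in a distinct minimum):

```python
import numpy as np

def ensemble_predict(models, x):
    """Average the predictive distributions of independently trained
    networks; distinct minima tend to make complementary errors."""
    return np.mean([m.predict(x) for m in models], axis=0)

# models = [train_fresh_model(train_data) for _ in range(k)]  # hypothetical helper
```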

March 7th

Luca: Classifying and segmenting microscopy images with deep multiple instance learning (https://www.ncbi.nlm.nih.gov/pubmed/27307644)

February 7th

Sam F: Bias in word embeddings (https://www.pnas.org/content/115/16/E3635) and one approach to combat it: Multiaccuracy: Black-Box Post-Processing for Fairness in Classification (https://arxiv.org/abs/1805.12317)

January 23rd

Sam F: Deep Sets (https://arxiv.org/pdf/1703.06114.pdf)
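
The paper's construction rho(sum_i phi(x_i)) is easy to sketch in Keras (layer and element sizes here are arbitrary):

```python
from tensorflow import keras
from tensorflow.keras import layers

# A shared phi embeds each set element, sum-pooling makes the result
# permutation-invariant, and rho decodes the pooled vector.
inputs = keras.Input(shape=(None, 8))                 # variable-size sets of 8-dim elements
phi = layers.Dense(64, activation='relu')(inputs)     # applied per element
pooled = layers.Lambda(lambda t: keras.backend.sum(t, axis=1))(phi)
outputs = layers.Dense(1)(pooled)                     # rho
model = keras.Model(inputs, outputs)
```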

January 9th

Mehrtash B: Neural Ordinary Differential Equations (https://arxiv.org/pdf/1806.07366.pdf)
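
A toy fixed-step Euler sketch of the core idea (the paper itself uses adaptive solvers and the adjoint method for gradients):

```python
import numpy as np

def odeint_euler(f, h0, t0=0.0, t1=1.0, steps=100):
    """A neural ODE replaces a stack of residual layers h += f(h)
    with the continuous limit dh/dt = f(h, t); here f would be a
    small neural network and we integrate it forward in time."""
    h, dt = h0, (t1 - t0) / steps
    for i in range(steps):
        h = h + dt * f(h, t0 + i * dt)    # one Euler step
    return h
```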

2018

December 13th

Sam F: Re-usable Holdout (http://science.sciencemag.org/content/sci/349/6248/636.full.pdf)

November 15th

Sam F: Deformable Convolutions (https://arxiv.org/pdf/1703.06211v2.pdf)

November 1st

Sam F: Multi-view pooling (https://arxiv.org/pdf/1505.00880.pdf)

September 20th

Sam F: BQSR Models

September 6th

Daniel Kunin: Autoencoders and SVD
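
One concrete connection for this session: in the linear case, the optimal bottleneck autoencoder reproduces the truncated SVD. A small NumPy sketch:

```python
import numpy as np

# With a k-dim bottleneck and squared loss, the optimal linear
# autoencoder reconstructs exactly the rank-k SVD/PCA approximation
# (the learned weights span the top-k singular subspace).
np.random.seed(0)
X = np.random.normal(size=(500, 20))
X -= X.mean(axis=0)                        # center, as in PCA
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 5
X_rank_k = (U[:, :k] * S[:k]) @ Vt[:k]     # best rank-k reconstruction
# A linear autoencoder X -> X @ W_enc @ W_dec trained to minimize
# ||X - X @ W_enc @ W_dec||^2 converges to this same reconstruction.
```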

May 10th

Marton Kanasz-Nagy: Information bottleneck Part 2: https://openreview.net/forum?id=ry_WPG-A-

May 3rd

Ben Kaufman: Information bottleneck Part 1: https://arxiv.org/abs/1503.02406
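
For reference, the objective both information bottleneck sessions revolve around:

```latex
% Information bottleneck objective: compress X into a representation T
% while keeping T informative about Y; beta sets the trade-off.
\min_{p(t \mid x)} \; I(X; T) - \beta \, I(T; Y)
```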

April 26th

Sam F: GPUs on Google Cloud

April 19th

Sam F: Multi-Modal Learning

2017

Aug 31, Chapter 20:

Takuto: Auto-Encoding Variational Bayes.

Aug 24:

David B: Neural Autoregressive Distribution Estimator

Aug 17:

Sam F: DNA motifs, DeepBind, etc.

Aug 10, Chapter 18:

David B: Undirected models, contrastive divergence, and restricted Boltzmann machines

July 20, Chapter 14:

Sam F: Autoencoders in Keras. Preludes to Auto-Encoding Variational Bayes.
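
A minimal dense autoencoder in Keras, roughly the starting point for such a session (layer sizes are arbitrary):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Encoder squeezes 784-dim inputs through a 32-dim bottleneck;
# the decoder reconstructs them. The training target is the input itself.
inputs = keras.Input(shape=(784,))
code = layers.Dense(32, activation='relu')(inputs)
recon = layers.Dense(784, activation='sigmoid')(code)
autoencoder = keras.Model(inputs, recon)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
# autoencoder.fit(x_train, x_train, epochs=10, batch_size=128)
```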

July 13, Chapter 14:

David B: Self-supervised learning, a clever extension of the autoencoder concept:

  • Agrawal et al, "Learning to See by Moving"
  • Wang and Gupta, "Unsupervised Learning of Visual Representations using Videos"
  • Noroozi and Favaro, "Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles"

July 6, Chapter 13:

Sam F: Generalizations of translation, bi-directional RNNs, Heuristics for better GANs and CNN + RNN caption generation.

June 29, Chapter 13:

David B: overview of a reasonable chunk of Chapter 13

June 22, Chapter 12:

Sam F: Gene2Vec

June 15, Chapter 12:

David B: Joint learning of word embeddings in two languages

June 8, Chapter 12:

David B: Word embeddings: Bengio 2003, word2vec, and GloVe, plus the Devlin et al paper on translation.

June 1:

Sam F: visualizing layers.

May 25, Chapter 11:

We will brainstorm some problems relating to Mutect 2.

May 18, Chapter 11:

We will brainstorm Mimoun's project for identifying cancer cells via imaging at the Cancer Cell Line Factory.

May 11, Chapter 11:

Rudimentary plans for applying deep learning to BQSR.

May 4, Chapter 11:

Rudimentary plans for Steve's application of deep learning to SV breakpoints.

April 27, Chapter 10:

Ray Jones on this paper, with roots in this one.

April 20, Chapter 10: Recurrent Networks

David: Either a paper on using RNNs to predict protein binding affinity or one on miRNA binding affinity, maybe both papers.

April 13, Chapter 10: Recurrent Networks

Umut: Fiddle and the cyclic loss estimator

April 6, Chapter 10: Recurrent Networks

Sam F: GANs in Keras, software architecture for DL

March 30, Chapter 10: Recurrent Networks

David: Brainstorming deep learning in Mutect.

March 23, Chapter 10: Recurrent Networks

Sam F: RNNs and their vanishing/exploding gradients. Video from the very funny Jürgen Schmidhuber. LSTMs and GRUs in Keras.
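
For reference, LSTM and GRU layers in Keras are nearly drop-in replacements for each other; a minimal sketch:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Gated recurrent layers replace a vanilla RNN cell; the gating is
# what tames vanishing/exploding gradients over long sequences.
model = keras.Sequential([
    layers.LSTM(64, return_sequences=True, input_shape=(None, 16)),
    layers.GRU(32),                        # swap LSTM <-> GRU just by renaming
    layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
```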

March 16, Chapter 10: Recurrent Networks

David: Basic RNN architectures and contrasting RNNs with HMMs

March 9, Chapter 9: Convolutional Networks

Joe: Schreiber et al "Nucleotide sequence and DNaseI sensitivity are predictive of 3D chromatin architecture" (bioRxiv)

March 2nd, Chapter 9: Convolutional Networks

Sam F: More CNNs in Keras; a deeper look at Inception, VGG16, and ResNet

February 23, Chapter 9: Convolutional Networks

David: Saxe et al "Random Weights and Unsupervised Feature Learning" -- the surprising observation that CNNs with random kernels don't perform much worse than CNNs with trained kernels.

February 16, Chapter 9: Convolutional Networks

Sam F: Old school convolution, MNIST CNNs
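
A minimal MNIST CNN in Keras, for reference (hyperparameters are the usual defaults, not anything specific to the session):

```python
from tensorflow import keras
from tensorflow.keras import layers

# The classic small MNIST CNN: two conv/pool stages, then a dense head.
model = keras.Sequential([
    layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```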

February 9: Snow day

February 2, Chapter 8: Optimization

David: Romero et al, FitNets

January 26, Chapter 8: Optimization

Sam F: GPU Tasting Menu

Sam F: Intro to Keras Functional API

January 19, Chapter 8: Optimization

Larson: Discussed two recent studies related to stochastic gradient descent: 1) Li et al., 2014 on efficient minibatch stochastic optimization, and 2) Kingma and Ba, 2015 on the Adam algorithm. Jon Bloom discussed geometric approaches to identifying and navigating critical points in high-dimensional space.
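
For reference, one Adam step as described in Kingma and Ba, 2015, sketched in NumPy:

```python
import numpy as np

def adam_step(theta, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient
    and its square, with bias correction. t is the 1-based step count."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g**2
    m_hat = m / (1 - b1**t)                  # correct startup bias
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```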

January 12, Chapter 7: Regularization

David: Examples of data augmentation for speech recognition from: 1) Schlüter and Grill, "Exploring Data Augmentation for Improved Singing Voice Detection"; 2) Ko et al, "Audio Augmentation for Speech Recognition"; and 3) Salamon and Bello, "Deep Convolutional Neural Networks and Data Augmentation for Environmental Sound Classification."

Umut: Generative adversarial networks

January 5, Chapter 7: Regularization

David: quick overview of regularization and an extremely half-baked thought on why early stopping works.

Sam F: Keras notebook on chromatin state prediction with a 1D convolutional net
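
A sketch of what such a notebook's model might look like; the window size, filter counts, and number of chromatin states here are illustrative assumptions, not the notebook's actual settings:

```python
from tensorflow import keras
from tensorflow.keras import layers

# One-hot DNA windows in, per-window chromatin-state probabilities out.
inputs = keras.Input(shape=(1000, 4))                # 1 kb window, ACGT one-hot
x = layers.Conv1D(64, 25, activation='relu')(inputs) # motif-scale filters
x = layers.MaxPooling1D(4)(x)
x = layers.Conv1D(128, 11, activation='relu')(x)
x = layers.GlobalMaxPooling1D()(x)
outputs = layers.Dense(5, activation='softmax')(x)   # hypothetical 5 states
model = keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy')
```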

2016

December 22, Chapter 6: Deep Feedforward Networks

Mehrtash: universal approximation theorems

Sam F: getting started with Keras