Machine Learning Meetup
This weekly seminar series, hosted by the computer vision research team at FeatureX, is open to students and professionals interested in machine learning research. Each week the series centers on one current, influential machine learning paper. The presenter introduces the paper's background and reviews its findings. Attendees are expected to have read the paper in advance and to be ready to join a group discussion of the research and its implications.
Space is limited and RSVPs are required. Please email Emily Rogers at firstname.lastname@example.org if you plan to attend. If your plans change, please let us know so we can offer your spot to someone else.
Paper Title and Link: Neural Ordinary Differential Equations by Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, David Duvenaud
Abstract: We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.
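For attendees who want a feel for the core idea before reading, here is a minimal sketch of the abstract's central move: instead of stacking discrete layers, a small network parameterizes the derivative of the hidden state, and the output is obtained by integrating that derivative. This is an illustrative toy, not the paper's implementation: the tiny MLP, its sizes, and the fixed-step Euler integrator are all assumptions chosen for brevity, whereas the paper uses adaptive black-box solvers and the adjoint method for memory-efficient gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small MLP f(h, t) parameterizing the derivative dh/dt of the hidden state.
# (Sizes are arbitrary choices for this sketch.)
W1 = rng.normal(scale=0.1, size=(3, 16))  # input: hidden state (2 dims) + time (1)
b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 2))
b2 = np.zeros(2)

def f(h, t):
    """Neural network giving the time derivative of the hidden state."""
    x = np.concatenate([h, [t]])
    return np.tanh(x @ W1 + b1) @ W2 + b2

def odeint_euler(f, h0, t0, t1, steps=100):
    """Fixed-step Euler integration, standing in for the paper's
    adaptive black-box ODE solver."""
    h, t = h0.copy(), t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * f(h, t)  # follow the learned vector field
        t += dt
    return h

h0 = np.array([1.0, 0.0])            # input mapped to the initial hidden state
h1 = odeint_euler(f, h0, 0.0, 1.0)   # "continuous-depth" forward pass
print(h1.shape)                      # hidden state at t = 1, shape (2,)
```

Note how "depth" here is replaced by the integration interval: taking more solver steps refines the same trajectory rather than adding parameters, which is what lets the models in the paper trade numerical precision for speed.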