FeatureX Machine Learning Seminar

  • CIC Boston, 50 Milk Street, 5th Floor, Periscope, Boston, MA 02110, United States

The FeatureX weekly seminar series is open to students and professionals interested in gathering with like-minded machine learning researchers. Each week the series brings active participants together around one current and influential machine learning paper. The presenter introduces the paper's background and reviews its findings. Attendees are expected to have read the paper and to be ready to take part in a group discussion of the research and its implications.

Space is limited and RSVPs are mandatory for this event. Please email Emily Rogers at emily.rogers@featurex.ai if you plan to attend. If your plans change, please let us know so we can offer your spot to someone else.

Upcoming Seminar:

Paper: On the Convergence of Adam and Beyond by Sashank J. Reddi, Satyen Kale & Sanjiv Kumar (Google, New York). Published as a conference paper at ICLR 2018.

Abstract: Several recently proposed stochastic optimization methods that have been successfully used in training deep networks such as RMSPROP, ADAM, ADADELTA, NADAM are based on using gradient updates scaled by square roots of exponential moving averages of squared past gradients. In many applications, e.g. learning with large output spaces, it has been empirically observed that these algorithms fail to converge to an optimal solution (or a critical point in nonconvex settings). We show that one cause for such failures is the exponential moving average used in the algorithms. We provide an explicit example of a simple convex optimization setting where ADAM does not converge to the optimal solution, and describe the precise problems with the previous analysis of ADAM algorithm. Our analysis suggests that the convergence issues can be fixed by endowing such algorithms with “long-term memory” of past gradients, and we propose new variants of the ADAM algorithm which not only fix the convergence issues but often also lead to improved empirical performance.
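For attendees who want a concrete picture before reading, the sketch below contrasts a standard Adam-style update with a variant that keeps a running maximum of the second-moment estimate, which is one way to realize the “long-term memory” of past gradients the abstract describes (the paper's proposed AMSGrad-style fix). This is only an illustrative sketch: the function and variable names, hyperparameters, and toy usage are not taken from the paper or any particular library.

```python
import numpy as np

def adam_style_step(param, grad, m, v, vhat, t,
                    lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8,
                    long_term_memory=False):
    """One parameter update for an Adam-style optimizer.

    long_term_memory=False: standard Adam update (exponential moving
    averages of the gradient and squared gradient, bias-corrected).
    long_term_memory=True: the denominator uses the running maximum of
    the second-moment estimate, so large past gradients are never
    forgotten -- an AMSGrad-style "long-term memory" fix.
    """
    m = beta1 * m + (1 - beta1) * grad          # EMA of gradients (1st moment)
    v = beta2 * v + (1 - beta2) * grad ** 2     # EMA of squared gradients (2nd moment)
    m_hat = m / (1 - beta1 ** t)                # bias-corrected 1st moment
    if long_term_memory:
        vhat = np.maximum(vhat, v)              # keep the largest 2nd-moment estimate seen so far
        denom = np.sqrt(vhat) + eps
    else:
        denom = np.sqrt(v / (1 - beta2 ** t)) + eps
    param = param - lr * m_hat / denom
    return param, m, v, vhat

# Toy usage: minimize f(x) = x^2 starting from x = 5.0
x, m, v, vhat = np.array(5.0), 0.0, 0.0, 0.0
for t in range(1, 1001):
    grad = 2 * x
    x, m, v, vhat = adam_style_step(x, grad, m, v, vhat, t,
                                    lr=0.1, long_term_memory=True)
print(x)  # close to 0
```

The only difference between the two branches is whether the scaling denominator can shrink again after a burst of large gradients; the paper's convex counterexample hinges on exactly that forgetting behavior in the plain exponential moving average.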
