The fifth edition of the International Conference on Learning Representations (ICLR 2017) takes place in a few days in Toulon, France. ICLR has quickly grown into the conference to attend for anyone working in deep learning, and these days that is almost everybody.
Criteo AI Lab will be presenting two posters at ICLR. Minmin Chen wrote up a nice article on a simple but efficient representation for documents, built around a data-corruption model that acts as a regularizer: it favors low-frequency, high-information words while forcing the embeddings of non-informative words close to zero. Besides improving training and testing efficiency, the model also yields good results on a wide variety of tasks such as sentiment analysis, document classification and semantic relatedness.
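To make the corruption idea concrete, here is a minimal NumPy sketch, not the paper's implementation; the function and parameter names such as `corrupted_doc_embedding` and `drop_prob` are illustrative. Each word of a document is dropped with some probability, and the surviving word vectors are rescaled so that their average remains an unbiased estimate of the uncorrupted average.

```python
import numpy as np

def corrupted_doc_embedding(doc_word_ids, word_vectors, drop_prob=0.5, rng=None):
    """Average word vectors of a document under unbiased random corruption.

    Each word is dropped with probability `drop_prob` (the corruption rate is
    a hyperparameter); survivors are rescaled by 1 / (1 - drop_prob) so that,
    in expectation, the corrupted average equals the plain average of the
    document's word vectors.
    """
    rng = rng if rng is not None else np.random.default_rng()
    ids = np.asarray(doc_word_ids)
    keep = rng.random(len(ids)) >= drop_prob       # True = word survives
    if not keep.any():                             # every word was dropped
        return np.zeros(word_vectors.shape[1])
    kept_vectors = word_vectors[ids[keep]]
    return kept_vectors.sum(axis=0) / ((1.0 - drop_prob) * len(ids))
```

Randomly dropping words this way behaves like the regularizer described above: frequent but uninformative words end up with embeddings pushed toward zero, while informative words carry the representation.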
While at Criteo AI Lab, Nicolas Le Roux proposed updating the log-loss minimization bound during training by weighting each example's log-loss with the probability obtained from the parameters at a previous step. This makes the classifier more robust to outliers in the data.
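As a rough illustration of the reweighting idea, here is a toy logistic-regression sketch; it is not the paper's exact algorithm or bound, and names such as `train_reweighted_logloss` and `outer_steps` are made up for this example. After each outer round, every example's log-loss is weighted by the probability the previous model assigned to its observed label, so points the model already finds implausible contribute less to subsequent updates.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_reweighted_logloss(X, y, outer_steps=5, inner_steps=200, lr=0.1):
    """Toy binary logistic regression trained with a reweighted log-loss.

    `y` is assumed to take values in {0, 1}. After each outer round, each
    example's log-loss is weighted by the probability the previous model
    assigned to its observed label, so examples the model deems implausible
    (potential outliers) contribute less to the next round of updates.
    """
    n, d = X.shape
    theta = np.zeros(d)
    weights = np.ones(n)                  # start from the usual unweighted log-loss
    for _ in range(outer_steps):
        for _ in range(inner_steps):
            p = sigmoid(X @ theta)        # P(y = 1 | x) under the current theta
            grad = X.T @ (weights * (p - y)) / n
            theta -= lr * grad
        # weights for the next round: probability of the observed label
        p = sigmoid(X @ theta)
        weights = np.where(y == 1, p, 1.0 - p)
    return theta
```

Downweighting the points the previous model found implausible is what gives the robustness to outliers mentioned above.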
Besides our own submissions, we are super excited to hear from the authors of the best papers, particularly the one on the generalization abilities of deep learners. We will follow up with a summary of those papers after the conference.
We are also looking forward to chatting with some of you at the Criteo Cocktail event, which will be hosted at the National Museum of the Marine in Toulon. Intrigued? Drop by the Criteo AI Lab booth.
See you in Toulon!