I was fortunate enough to get a student ticket to MLconf 2017, which was exciting because I'd never been before and the speaker list looked pretty amazing. I kept my mouth shut and took tons of notes, some of which I thought I'd share:
The first speaker was Corinna Cortes, Head of Research at Google, New York. It turns out that Google is doing a whole bunch of stuff--hang on, there's a photographer with a surprisingly loud camera for 2017 who has to keep darting in front of everyone to take pictures of her. He'll do this for about the first five minutes of everyone's talk, during the part when they're really nervous and trying to calm down and focus.
Anyhow, so Google is doing enormously cool stuff, in part because they can't just do something, they have to do it at massive scale. Unsurprisingly their answer to lots of questions is either "tries" or "finite state transducers" for this reason. She went into a bunch of applications of their work, highlighting difficulties like producing output that doesn't swear at people, performance, context (for Translate), and of course the problem of standardized input that basically every speaker will mention at least once over the next eight hours.
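Since tries came up as one of Google's go-to answers at scale, here's a toy sketch of one (entirely my own illustration, not anything from the talk): a prefix tree where shared prefixes share storage, so prefix lookups don't have to scan every string.

```python
class Trie:
    """A minimal trie (prefix tree) for prefix lookups."""

    def __init__(self):
        self.children = {}   # maps a character to a child Trie node
        self.is_word = False

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_word = True

    def words_with_prefix(self, prefix):
        # Walk down to the node for the prefix, if it exists.
        node = self
        for ch in prefix:
            if ch not in node.children:
                return
            node = node.children[ch]
        # Then yield every complete word under that node.
        stack = [(node, prefix)]
        while stack:
            n, s = stack.pop()
            if n.is_word:
                yield s
            for ch, child in n.children.items():
                stack.append((child, s + ch))
```

Insert "car", "cart", and "cat" and a lookup for "ca" returns all three without ever touching unrelated entries, which is roughly why the structure scales.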
Next up was Soumith Chintala from Facebook, who went out of his way to mention several times how early it was and how difficult that was for him. I suspect he was also feeling relatively fine until he realized he had to follow Ms. Cortes, at which point he really felt the hour. She was a ridiculously tough act to follow, but he did his best, and I'll spoil the ending and tell you that Facebook has similar challenges and is using similar approaches. During this talk I also noted that the conference was giving away a prize for the best tweet about the event, reminding me that you can never tell whether a given group of people will find value in Twitter or not.
Yi Wang from Baidu was next; he had a lot of great energy and was awesome to listen to. He works on PaddlePaddle, a deep learning platform that was originally an internal project at Baidu for their clients but has since been released into the wild. It seems cool, but he didn't have a lot of time to go into it, so I'll just have to check it out.
The last speaker before the first break was Ross Goodwin, who is a mixture of lots of different things (artist, scientist, inventor) and gave a really great talk about some of his creative uses of technology and machine learning. He's also responsible for this, which is incredible and a must-see. I have in my notes: "so interesting I wrote no notes." I think it made the room a little nervous that someone that creative would also be speaking their language, and having enormous fun with tech that they seem to take pretty seriously, but I could have been just hoping that this was the case.
The next session featured a good talk about the accuracy gains from tuning hyperparameters, from Alexandra Johnson at SigOpt, and a talk about nearest neighbors and vector models from someone who seemed to actively dislike his topic, sarcastically saying "cool" as he flipped listlessly through his slides. (At this point I texted my wife to say "I think this conference center used to be a whorehouse," which was the first of many times I found myself trying to understand the venue choice for this conference.)
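The nearest-neighbors-over-vectors idea is simple enough to sketch in a few lines (my own toy version, not anything from the talk): represent items as vectors, then return whichever stored item has the highest cosine similarity to a query.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest(query, vectors):
    """Brute-force nearest neighbor: the key whose vector is most similar."""
    return max(vectors, key=lambda name: cosine(query, vectors[name]))
```

This is the brute-force baseline; the approximate-nearest-neighbor libraries that get talked about at these conferences exist precisely because scanning every vector like this stops being feasible at scale.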
When Ben Hamner from Kaggle came up to the stage, everyone in the room kind of straightened up and made little waking-up sounds: Kaggle had recently been acquired by Google, and having the guy from Kaggle in a room full of ML people is a bit like having the guy from Easy Mac in a room full of undergraduates. He didn't really get into the theory of machine learning, but he did mention that winning learners are nearly always ensembles, and that the current contest, about finding duplicate questions on Quora, is a good one, with which I agree. He mostly went over some tooling Kaggle is building to bring the kind of first-class IDE experience people have with Visual Studio or IntelliJ to machine learning work, and that seems like it will be pretty cool.
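Since "winning learners are nearly always ensembles" is the kind of line worth unpacking, here's the simplest flavor of ensembling, majority voting, as a toy sketch (my own illustration, not Kaggle's tooling): several models each predict a label, and the ensemble takes the most common answer per example.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model label lists into one ensembled label list.

    predictions: a list of lists, one inner list of labels per model,
    all covering the same examples in the same order.
    """
    ensembled = []
    for labels in zip(*predictions):  # one tuple of labels per example
        most_common_label, _count = Counter(labels).most_common(1)[0]
        ensembled.append(most_common_label)
    return ensembled
```

Real winning ensembles are usually fancier (weighted averaging, stacking), but the intuition is the same: uncorrelated errors tend to cancel out when you combine enough decent models.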
The last speaker before lunch was from Goldman Sachs, and he gave a short talk about using machine learning to identify insider trading and market manipulation, which they made progress on by looking at chunks of data spanning just seconds at a time to find patterns of abuse. The material was engaging, but he really didn't get enough time, and a lot of folks were ready for lunch. Which, in case you were wondering, consisted of a sandwich and a salad, both pretty good.