MLconf SF 2018 Speaker Resources


Ilke Demir, Postdoctoral Researcher, Facebook

Open datasets and call for papers:

DeepGlobe Benchmark

Generative Street Addresses

My related references

  • Demir I., Hughes F., Raj A., Dhruv K., Muddala S.M., Garg S., Doo B., Raskar R., 2018. Generative Street Addresses from Satellite Imagery. ISPRS International Journal of Geo-Information (IJGI).
  • Aliaga D., Demir I., Benes B., Wand M., 2016. Inverse Procedural Modeling of 3D Models for Virtual Worlds. ACM SIGGRAPH 2016 Courses (SIGGRAPH).
  • Demir I., Aliaga D., Benes B., 2016. Proceduralization for Editing 3D Architectural Models. International Conference on 3D Vision 2016 (3DV).
  • Demir I., Aliaga D., Benes B., 2015. Coupled Segmentation and Similarity Detection for Architectural Models. ACM Transactions on Graphics (TOG), presented at SIGGRAPH 2015.
  • Demir I., Aliaga D., Benes B., 2015. Procedural Editing of 3D Building Point Clouds. IEEE International Conference on Computer Vision 2015 (ICCV).
  • Demir I., Aliaga D., Benes B., 2014. Proceduralization of Buildings at City Scale. International Conference on 3D Vision 2014 (3DV).

Mentioned projects

Mentioned references

  • [1] Stefan Voigt, Fabio Giulio-Tonolo, Josh Lyons, Jan Kucera, Brenda Jones, Tobias Schneiderhan, Gabriel Platzeck, Kazuya Kaku, Manzul Kumar Hazarika, Lorant Czaran, et al. Global trends in satellite-based emergency mapping. Science, 353(6296):247–252, 2016.
  • [2] Timnit Gebru, Jonathan Krause, Yilun Wang, Duyun Chen, Jia Deng, Erez Lieberman Aiden, and Li Fei-Fei. Using deep learning and Google Street View to estimate the demographic makeup of neighborhoods across the United States. Proceedings of the National Academy of Sciences, 114(50):13108–13113, 2017.
  • [3] Carlos A. Vanegas, Ignacio Garcia-Dorado, Daniel G. Aliaga, Bedrich Benes, and Paul Waddell. 2012. Inverse design of urban procedural models. ACM Trans. Graph. 31, 6, Article 168 (November 2012), 11 pages.

Joan Xiao, Lead Machine Learning Scientist, Figure Eight

Prasanth Anbalagan, Senior Software Engineer (Q&E Analysis) on the Artificial Intelligence Center of Excellence Team, Red Hat


Dr. Yi Li, Machine Learning Research Scientist, Baidu Silicon Valley AI Lab


Dr. Leslie Smith, Senior Research Scientist, US Naval Research Laboratory

  • Smith, Leslie N. “Cyclical learning rates for training neural networks.” In Applications of Computer Vision (WACV), 2017 IEEE Winter Conference on, pp. 464-472. IEEE, 2017.
  • Smith, Leslie N., and Nicholay Topin. “Super-Convergence: Very Fast Training of Residual Networks Using Large Learning Rates.” arXiv preprint arXiv:1708.07120 (2017).
  • Smith, Leslie N. “A disciplined approach to neural network hyper-parameters: Part 1–learning rate, batch size, momentum, and weight decay.” arXiv preprint arXiv:1803.09820 (2018).
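The triangular policy from the first paper above (cyclical learning rates) can be sketched in a few lines. The function name and the `base_lr`, `max_lr`, and `stepsize` values below are illustrative choices, not the paper's recommended settings; the paper suggests finding the bounds with a learning-rate range test.

```python
def triangular_clr(iteration, base_lr=0.001, max_lr=0.006, stepsize=2000):
    """Triangular cyclical learning rate (Smith, WACV 2017).

    The rate ramps linearly from base_lr to max_lr over `stepsize`
    iterations, then back down, repeating every 2 * stepsize steps.
    All parameter values here are illustrative, not prescriptive.
    """
    cycle = iteration // (2 * stepsize)            # index of the current cycle
    x = abs(iteration / stepsize - 2 * cycle - 1)  # position in cycle, in [0, 1]
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

# Minimum rate at cycle boundaries, peak rate at mid-cycle:
# iterations 0 and 4000 give base_lr; iteration 2000 gives max_lr.
```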

My github page:

Large batch papers:

  • Goyal, Priya, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. “Accurate, large minibatch SGD: Training ImageNet in 1 hour.” arXiv preprint arXiv:1706.02677 (2017).
  • You, Yang, Igor Gitman, and Boris Ginsburg. “Scaling SGD batch size to 32K for ImageNet training.” arXiv preprint arXiv:1708.03888 (2017).
  • Akiba, Takuya, Shuji Suzuki, and Keisuke Fukuda. “Extremely large minibatch SGD: Training ResNet-50 on ImageNet in 15 minutes.” arXiv preprint arXiv:1711.04325 (2017).
  • Codreanu, Valeriu, Damian Podareanu, and Vikram Saletore. “Scale out for large minibatch SGD: Residual network training on ImageNet-1K with improved accuracy and reduced time to train.” arXiv preprint arXiv:1711.04291 (2017).
  • Smith, Samuel L., Pieter-Jan Kindermans, Chris Ying, and Quoc V. Le. “Don’t decay the learning rate, increase the batch size.” arXiv preprint arXiv:1711.00489 (2017).
  • Jia, Xianyan, Shutao Song, Wei He, Yangzihao Wang, Haidong Rong, Feihu Zhou, Liqiang Xie et al. “Highly Scalable Deep Learning Training System with Mixed-Precision: Training ImageNet in Four Minutes.” arXiv preprint arXiv:1807.11205 (2018).
  • Yao, Zhewei, Amir Gholami, Kurt Keutzer, and Michael Mahoney. “Large batch size training of neural networks with adversarial training and second-order information.” arXiv preprint arXiv:1810.01021 (2018).
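A recurring recipe in the papers above is the linear scaling rule with gradual warmup from Goyal et al.: when the minibatch size is multiplied by k, multiply the learning rate by k, and ramp up to that rate over the first few epochs. A minimal sketch, assuming a hypothetical `scaled_lr` helper with illustrative defaults (the base values and warmup length are not taken from any one paper):

```python
def scaled_lr(base_lr, base_batch, batch_size, step, warmup_steps=500):
    """Linear scaling rule with gradual warmup (after Goyal et al., 2017).

    Scale the reference learning rate in proportion to the batch size,
    then ramp linearly from zero to that target over `warmup_steps`.
    All numeric defaults here are illustrative assumptions.
    """
    target = base_lr * batch_size / base_batch  # k-fold batch -> k-fold LR
    if step < warmup_steps:
        # gradual warmup avoids instability from a large rate at step 0
        return target * (step + 1) / warmup_steps
    return target

# Example: a reference setup of lr=0.1 at batch 256, run at batch 8192,
# targets lr = 0.1 * 8192 / 256 = 3.2 once warmup completes.
```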