2018 Speaker Resources
Ilke Demir, Postdoctoral Researcher, Facebook
Open datasets, challenges, and calls for papers:
- DeepGlobe: https://deepglobe.org
- SpaceNet: https://spacenetchallenge.github.io/
- Earth observation challenge: http://eochallenge.bemyapp.com/
- Data fusion contest: http://grss-ieee.org/data-fusion-contest/
- Functional map of the world: https://www.iarpa.gov/challenges/fmow.html
- SUMO Challenge: https://sumochallenge.org/
- Geospatial Modeling and Visualization, Special Issue in Big Earth Data Journal: http://bit.ly/BigEarthData
- Challenges and opportunities for deep learning in remote sensing, Special session in Living Planet Symposium 2019: https://lps19.esa.int/
DeepGlobe Benchmark
- Papers: http://bit.ly/deepglobe_papers
- Website: http://deepglobe.org
- Dataset: http://bit.ly/deepglobe
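The DeepGlobe segmentation-style tracks are evaluated with pixel-wise intersection-over-union (IoU). A minimal NumPy sketch of the metric itself, not the official evaluation code:

```python
import numpy as np

def iou(pred, target, cls):
    """Intersection-over-union of one class between two label maps."""
    p, t = pred == cls, target == cls
    union = np.logical_or(p, t).sum()
    return np.logical_and(p, t).sum() / union if union else float("nan")

# Toy 2x2 label maps: class 1 overlaps in 2 pixels out of a union of 3.
pred = np.array([[1, 1], [0, 1]])
target = np.array([[1, 0], [0, 1]])
```

The per-class scores are typically averaged (mIoU) when a track has multiple classes.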
Generative Street Addresses
- Code: https://github.com/facebookresearch/street-addresses
- Paper: https://research.fb.com/publications/robocodes
My related references
- Demir I., Hughes F., Raj A., Dhruv K., Muddala S.M., Garg S., Doo B., Raskar R., 2018. Generative Street Addresses from Satellite Imagery. ISPRS International Journal of Geo-Information (IJGI).
- Aliaga D., Demir I., Benes B., Wand M., 2016. Inverse Procedural Modeling of 3D Models for Virtual Worlds. ACM SIGGRAPH 2016 Courses (SIGGRAPH).
- Demir I., Aliaga D., Benes B., 2016. Proceduralization for Editing 3D Architectural Models. International Conference on 3D Vision 2016 (3DV).
- Demir I., Aliaga D., Benes B., 2015. Coupled Segmentation and Similarity Detection for Architectural Models. ACM Transactions on Graphics (ToG), also SIGGRAPH 2015.
- Demir I., Aliaga D., Benes B., 2015. Procedural Editing of 3D Building Point Clouds. IEEE International Conference on Computer Vision 2015 (ICCV).
- Demir I., Aliaga D., Benes B., 2014. Proceduralization of Buildings at City Scale. International Conference on 3D Vision 2014 (3DV).
Mentioned projects
- StreetScore: http://streetscore.media.mit.edu/
- Sustainability: http://sustain.stanford.edu/predicting-poverty/
Mentioned references
- [1] Stefan Voigt, Fabio Giulio-Tonolo, Josh Lyons, Jan Kucera, Brenda Jones, Tobias Schneiderhan, Gabriel Platzeck, Kazuya Kaku, Manzul Kumar Hazarika, Lorant Czaran, et al. Global trends in satellite-based emergency mapping. Science, 353(6296):247–252, 2016.
- [2] Timnit Gebru, Jonathan Krause, Yilun Wang, Duyun Chen, Jia Deng, Erez Lieberman Aiden, and Li Fei-Fei. Using deep learning and Google Street View to estimate the demographic makeup of neighborhoods across the United States. Proceedings of the National Academy of Sciences, 114(50):13108–13113, 2017.
- [3] Carlos A. Vanegas, Ignacio Garcia-Dorado, Daniel G. Aliaga, Bedrich Benes, and Paul Waddell. 2012. Inverse design of urban procedural models. ACM Trans. Graph. 31, 6, Article 168 (November 2012), 11 pages.
Joan Xiao, Lead Machine Learning Scientist, Figure Eight
- ELMo: https://arxiv.org/abs/1802.05365
- Flair: https://github.com/zalandoresearch/flair
- Universal Language Model Fine-tuning for Text Classification: https://arxiv.org/abs/1801.06146
- Improving Language Understanding with Unsupervised Learning: https://blog.openai.com/language-unsupervised/
- NLP’s ImageNet moment has arrived: https://thegradient.pub/nlp-imagenet/
- BERT: https://github.com/google-research/bert
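The ULMFiT paper above fine-tunes a pretrained language model with "discriminative fine-tuning": each earlier layer gets a smaller learning rate than the layer above it, dividing by a factor of 2.6 per layer. A minimal sketch of that schedule in plain Python; the layer names here are hypothetical, and in a real framework each rate would become an optimizer parameter group:

```python
# Hypothetical layer names for a pretrained model, input side first.
layers = ["embed", "lstm1", "lstm2", "head"]

base_lr, factor = 1e-3, 2.6  # 2.6 is the per-layer divisor suggested by ULMFiT

# The top layer trains at base_lr; each layer below it at the
# layer above's rate divided by `factor`.
lrs = {name: base_lr / factor ** (len(layers) - 1 - i)
       for i, name in enumerate(layers)}
```

The intuition is that lower layers capture general language features and should change less during task fine-tuning than the task-specific head.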
Prasanth Anbalagan, Senior Software Engineer (Q&E Analysis) on the Artificial Intelligence Center of Excellence Team, Red Hat
- AI-Library: https://gitlab.com/opendatahub/ai-library
- Open Data Hub: https://opendatahub.io/
- https://www.youtube.com/watch?v=3TnXYQkbaYU
Dr. Yi Li, Machine Learning Research Scientist, Baidu Silicon Valley AI Lab
- Paper: Cancer Metastasis Detection With Neural Conditional Random Field
- Code: https://github.com/baidu-research/NCRF
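The NCRF paper couples a CNN's per-patch predictions with a conditional random field over the patch grid, so neighboring patches inform each other's tumor probability. The sketch below only illustrates that general idea with a few mean-field-style smoothing passes over fixed patch logits; the actual model in the repository is trained end-to-end:

```python
import numpy as np

def smooth_patch_probs(unary_logits, w=1.0, iters=5):
    """Mean-field-style smoothing of a grid of per-patch tumor logits.

    Each patch's probability is repeatedly re-estimated from its own
    CNN logit plus a pairwise term pulling it toward its 4 neighbors.
    """
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    q = sigmoid(unary_logits)
    # How many in-bounds 4-neighbors each patch has (2, 3, or 4).
    ones = np.pad(np.ones_like(q), 1)
    deg = ones[:-2, 1:-1] + ones[2:, 1:-1] + ones[1:-1, :-2] + ones[1:-1, 2:]
    for _ in range(iters):
        pad = np.pad(q, 1)  # zero-padded: borders see only real neighbors
        nbr = pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]
        # Sum over neighbors of their expected +/-1 label is 2*nbr - deg.
        q = sigmoid(unary_logits + w * (2 * nbr - deg))
    return q
```

For example, an uncertain patch surrounded by confidently tumor-positive patches gets pulled above 0.5, which is the kind of spatial consistency the CRF layer provides.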
Dr. Leslie Smith, Senior Research Scientist, US Naval Research Laboratory
- Smith, Leslie N. “Cyclical learning rates for training neural networks.” In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 464-472. IEEE, 2017.
- Smith, Leslie N., and Nicholay Topin. “Super-Convergence: Very Fast Training of Residual Networks Using Large Learning Rates.” arXiv preprint arXiv:1708.07120 (2017).
- Smith, Leslie N. “A disciplined approach to neural network hyper-parameters: Part 1–learning rate, batch size, momentum, and weight decay.” arXiv preprint arXiv:1803.09820 (2018).
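The first paper's triangular policy varies the learning rate linearly between a lower and an upper bound over a fixed cycle length. A minimal sketch:

```python
def triangular_lr(iteration, base_lr, max_lr, step_size):
    """Triangular cyclical learning rate (Smith, WACV 2017).

    Ramps linearly from base_lr to max_lr over step_size iterations,
    then back down, repeating every 2 * step_size iterations.
    """
    cycle = iteration // (2 * step_size)
    x = abs(iteration / step_size - 2 * cycle - 1)  # distance from cycle peak, in [0, 1]
    return base_lr + (max_lr - base_lr) * (1 - x)
```

The "1cycle" policy of the third paper is essentially a single such cycle, with max_lr chosen via an LR range test.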
My GitHub repositories:
- https://github.com/lnsmith54/super-convergence
- https://github.com/iPhysicist/super-convergence
- https://github.com/lnsmith54/hyperParam1
Large batch papers:
- Goyal, Priya, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. “Accurate, large minibatch SGD: Training ImageNet in 1 hour.” arXiv preprint arXiv:1706.02677 (2017).
- You, Yang, Igor Gitman, and Boris Ginsburg. “Scaling SGD batch size to 32K for ImageNet training.” arXiv preprint arXiv:1708.03888 (2017).
- Akiba, Takuya, Shuji Suzuki, and Keisuke Fukuda. “Extremely large minibatch SGD: Training ResNet-50 on ImageNet in 15 minutes.” arXiv preprint arXiv:1711.04325 (2017).
- Codreanu, Valeriu, Damian Podareanu, and Vikram Saletore. “Scale out for large minibatch SGD: Residual network training on ImageNet-1K with improved accuracy and reduced time to train.” arXiv preprint arXiv:1711.04291 (2017).
- Smith, Samuel L., Pieter-Jan Kindermans, Chris Ying, and Quoc V. Le. “Don’t decay the learning rate, increase the batch size.” arXiv preprint arXiv:1711.00489 (2017).
- Jia, Xianyan, Shutao Song, Wei He, Yangzihao Wang, Haidong Rong, Feihu Zhou, Liqiang Xie et al. “Highly Scalable Deep Learning Training System with Mixed-Precision: Training ImageNet in Four Minutes.” arXiv preprint arXiv:1807.11205 (2018).
- Yao, Zhewei, Amir Gholami, Kurt Keutzer, and Michael Mahoney. “Large batch size training of neural networks with adversarial training and second-order information.” arXiv preprint arXiv:1810.01021 (2018).
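A recurring recipe in these papers (most explicitly in Goyal et al.) is the linear scaling rule with gradual warmup: scale the base learning rate by batch_size / 256, and ramp up to that target linearly over the first few epochs. A sketch of that schedule, with the defaults taken from the paper's ImageNet settings:

```python
def large_batch_lr(epoch, batch_size, base_lr=0.1, ref_batch=256, warmup_epochs=5):
    """Linear scaling rule with gradual warmup (Goyal et al., 2017).

    Target LR = base_lr * batch_size / ref_batch; during the first
    warmup_epochs the LR ramps linearly from base_lr to the target.
    """
    target = base_lr * batch_size / ref_batch
    if epoch < warmup_epochs:
        return base_lr + (target - base_lr) * epoch / warmup_epochs
    return target
```

The warmup avoids the training instability that an immediately large learning rate causes early on; at batch 256 the schedule reduces to the unscaled baseline.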