2020 Agenda
9:00 AM | Livestream begins
9:30 AM | Keynote | Megan Kacholia, Kemal El Moujahid, Manasi Joshi
9:55 AM | Learning to Read with TensorFlow and Keras | Paige Bailey
Natural Language Processing (NLP) has hit an inflection point, and this talk shows how TensorFlow and Keras make it easy to preprocess, train, and hyperparameter-tune text models.
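As a hedged sketch of the preprocess-and-train flow this session covers (the toy corpus, vocabulary size, and model shape below are purely illustrative, not from the talk):

```python
import tensorflow as tf

# Tiny illustrative corpus; a real session would use a dataset such as IMDB reviews.
texts = tf.constant(["great movie", "terrible plot", "loved it", "not worth watching"])
labels = tf.constant([1, 0, 1, 0])

# Keras preprocessing layer: maps raw strings to padded integer token ids.
vectorize = tf.keras.layers.experimental.preprocessing.TextVectorization(
    max_tokens=1000, output_sequence_length=8)
vectorize.adapt(texts)

# A small text classifier that takes raw strings end to end.
model = tf.keras.Sequential([
    vectorize,
    tf.keras.layers.Embedding(input_dim=1000, output_dim=16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(texts, labels, epochs=3)
```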
10:15 AM | TensorFlow Hub: Making Model Discovery Easy | Sandeep Gupta
TF Hub is the main repository for ML models. This talk looks into all the new features and how they can make your model discovery journey even better.
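For reference, the typical TF Hub consumption pattern looks roughly like the sketch below; the handle points to one of the published text-embedding models on tfhub.dev, and the surrounding classifier is illustrative.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Pull a published text-embedding model from tfhub.dev and use it as a Keras layer.
embedding = hub.KerasLayer("https://tfhub.dev/google/nnlm-en-dim50/2",
                           input_shape=[], dtype=tf.string, trainable=False)

model = tf.keras.Sequential([
    embedding,                              # (batch,) strings -> (batch, 50) vectors
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
print(model(tf.constant(["TensorFlow Hub makes model reuse easy"])))
```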
10:25 AM | Collaborative ML with TensorBoard.dev | Gal Oshri
Sharing experiment results is an important part of the ML process. This talk shows how TensorBoard.dev can enable collaborative ML by making it easy to share experiment results in your paper, blog post, social media, and more.
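As a hedged sketch of the sharing workflow the talk describes: write TensorBoard logs as usual during training, then upload the log directory to TensorBoard.dev for a shareable link (log path, model, and data below are illustrative).

```python
import tensorflow as tf

# Log training metrics so TensorBoard can read them.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")
tb_callback = tf.keras.callbacks.TensorBoard(log_dir="logs/demo_run")
model.fit(tf.random.normal([64, 4]), tf.random.normal([64, 1]),
          epochs=5, callbacks=[tb_callback])

# The resulting logs can then be uploaded and shared via a public link, e.g.:
#   tensorboard dev upload --logdir logs/demo_run
```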
10:30 AM | Transitioning Kagglers to TPU with TF 2.x | Julia Elliott
Recently, Kaggle introduced TPU support on its competition platform. This talk covers how Kaggle competitors transitioned from GPU to TPU, first in Colab and then in Kaggle notebooks.
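A hedged sketch of what the GPU-to-TPU transition looks like in notebook code; resolver arguments and the model are illustrative, and exact API names moved around across TF 2.x releases.

```python
import tensorflow as tf

# Detect and initialize the TPU provided by the Colab or Kaggle runtime.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)

# Variables created in this scope are placed on the TPU replicas;
# the rest of the training code stays the same as on GPU.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(optimizer="adam",
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=["accuracy"])
```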
10:35 AM | Performance Profiling in TF 2 | Qiumin Xu
This talk presents the profiler that Google uses internally to investigate TF performance on platforms including GPU, TPU, and CPU.
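For orientation, a programmatic entry point to the TF 2 profiler is sketched below (log directory, step count, and workload are illustrative); the captured trace is then viewed in TensorBoard's Profile tab.

```python
import tensorflow as tf

# Capture a profile of a few steps of work and write it where TensorBoard can find it.
tf.profiler.experimental.start("logs/profile_demo")
for step in range(5):
    # Stand-in for a real training step.
    tf.random.normal([1024, 1024]) @ tf.random.normal([1024, 1024])
tf.profiler.experimental.stop()
```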
10:45 AM | Potential Q&A Block | All speakers thus far
Please use the live chat feature in the livestream; TensorFlow team members will be responding in the chat in real time. If we have additional time in the livestream, we will answer a few questions live.
10:55 AM | Break
11:20 AM | Research with TensorFlow | Alexandre Passos
In this talk we'll go over some interesting features of TF which are useful when doing research.
11:35 AM | Differentiable Convex Optimization Layers | Akshay Agrawal, Stanford University
Convex optimization problems are used to solve many problems in the real world. Until now, it has been difficult to use them in TensorFlow pipelines. This talk presents cvxpylayers, a package that makes it easy to embed convex optimization problems into TensorFlow, letting you tune them using gradient descent.
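The cvxpylayers package documents a TensorFlow entry point along these lines; the sketch below follows that pattern with a small parametrized least-squares problem (problem sizes are illustrative).

```python
import cvxpy as cp
import tensorflow as tf
from cvxpylayers.tensorflow import CvxpyLayer

# Define a convex problem whose parameters will be supplied as TF tensors.
n, m = 2, 3
x = cp.Variable(n)
A = cp.Parameter((m, n))
b = cp.Parameter(m)
problem = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)))

layer = CvxpyLayer(problem, parameters=[A, b], variables=[x])

A_tf = tf.Variable(tf.random.normal((m, n)))
b_tf = tf.Variable(tf.random.normal((m,)))
with tf.GradientTape() as tape:
    solution, = layer(A_tf, b_tf)      # solve the problem in the forward pass
    loss = tf.reduce_sum(solution)
# Gradients of the loss flow back through the optimization problem's solution.
grads = tape.gradient(loss, [A_tf, b_tf])
```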
11:40 AM | Scaling TensorFlow Data Processing with tf.data | Rohan Jain
As model training becomes more distributed, tf.data has evolved to be more distribution-aware and performant. This talk presents tf.data tools for scaling TensorFlow data processing, in particular the tf.data service, which lets your tf.data pipeline run on a cluster of machines, and tf.data.snapshot, which materializes results to disk for reuse across multiple invocations.
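A hedged sketch of both features follows; in the TF 2.x releases around this talk they lived under tf.data.experimental, and the snapshot path and dispatcher address below are placeholders.

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(1000).map(lambda x: x * 2)

# tf.data snapshot: materialize the preprocessed elements to disk so later
# runs can reuse them instead of recomputing the pipeline.
dataset = dataset.apply(tf.data.experimental.snapshot("/tmp/snapshot_dir"))

# tf.data service: offload the rest of the pipeline to a cluster of workers.
# "grpc://dispatcher:5000" stands in for a real dispatcher address.
dataset = dataset.apply(
    tf.data.experimental.service.distribute(
        processing_mode="parallel_epochs",
        service="grpc://dispatcher:5000"))

dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
```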
11:55 AM | Scaling TensorFlow 2 Models to Multi-Worker GPUs | Zongwei Zhou
This talk showcases performance improvements in TensorFlow 2.2 that accelerate and scale ML training workloads across multiple workers and GPUs. We walk through the optimizations using a BERT fine-tuning task from the TF Model Garden, written with a custom training loop.
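As a hedged outline of the custom-training-loop pattern the talk walks through, using a placeholder model and random data instead of BERT; each worker runs the same program, with the cluster described by its TF_CONFIG environment variable.

```python
import tensorflow as tf

strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()

GLOBAL_BATCH = 32
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    optimizer = tf.keras.optimizers.Adam()
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([256, 20]),
     tf.random.uniform([256], maxval=10, dtype=tf.int32))).batch(GLOBAL_BATCH)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def train_step(features, labels):
    def step_fn(features, labels):
        with tf.GradientTape() as tape:
            logits = model(features, training=True)
            # Average the per-example loss over the global batch.
            loss = tf.nn.compute_average_loss(loss_fn(labels, logits),
                                              global_batch_size=GLOBAL_BATCH)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss
    per_replica_loss = strategy.run(step_fn, args=(features, labels))
    return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_loss, axis=None)

for features, labels in dist_dataset:
    train_step(features, labels)
```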
12:10 PM | Making the Most of Colab | Timothy Novikoff
Learn tips and tricks from the Colab team. This talk describes how TensorFlow users make the most of Colab, and peeks behind the curtain to see how Colab works.
12:15 PM | TensorFlow and Machine Learning from the Trenches: The Innovation Experience Center at the Jet Propulsion Laboratory | Chris Mattmann, NASA
Chris Mattmann will explain how JPL’s Innovation Experience Center in the Office of the Chief Information Officer supports advanced analytics, AI, and machine learning using TensorFlow for Smarter Rovers, a Smarter Campus, and beyond!
12:25 PM | Potential Q&A Block | Speakers from the break onwards
Please use the live chat feature in the livestream; TensorFlow team members will be responding in the chat in real time. If we have additional time in the livestream, we will answer a few questions live.
12:35 PM | Break
1:40 PM | MLIR: Accelerating TF with Compilers | Jacques Pienaar
This talk describes MLIR, the machine learning compiler infrastructure for TensorFlow, and explains how it helps TensorFlow scale faster to meet the needs of rapidly evolving machine learning software and hardware.
1:50 PM | TFRT: A New TensorFlow Runtime | Mingsheng Hong
TFRT is a new runtime for TensorFlow. Leveraging MLIR, it aims to provide a unified, extensible infrastructure layer with best-in-class performance across a wide variety of domain-specific hardware. This approach makes efficient use of multithreaded host CPUs, supports fully asynchronous programming models, and is focused on low-level efficiency.
2:00 PM | TFX: Production ML with TensorFlow in 2020 | Tris Warkentin, Zhitao Li
Learn how Google's production ML platform, TFX, is changing in 2020.
2:25 PM | TensorFlow Enterprise: Productionizing TensorFlow with Google Cloud | Makoto Uchida
TensorFlow Enterprise makes your TensorFlow applications enterprise-ready, with a number of enhancements to TensorFlow on Google Cloud. It unlocks Cloud-scale data and models, while simplifying development of business-critical ML applications from prototype to production. Together, we solve the hardest part of enterprise ML in production.
2:35 PM | TensorFlow Lite: ML for Mobile and IoT Devices | Tim Davis, T.J. Alumbaugh
Learn how to deploy ML to mobile phones and embedded devices. Now deployed on billions of devices in production, TensorFlow Lite is the world's best cross-platform ML framework for mobile and microcontrollers. Tune in for our exciting new announcements.
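For reference, the basic deployment path is to convert a trained model to the TF Lite format and run it with the interpreter; a minimal sketch follows (the SavedModel path is a placeholder).

```python
import tensorflow as tf

# Convert a trained SavedModel into a TensorFlow Lite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# Sanity-check the converted model with the Python interpreter before shipping
# it to a mobile or microcontroller runtime.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
print(interpreter.get_input_details(), interpreter.get_output_details())
```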
2:55 PM | Jacquard: Embedding ML Seamlessly into Everyday Objects | Nicholas Gillian
Jacquard is an ML-powered ambient computing platform that takes ordinary, familiar objects and enhances them with new digital abilities and experiences, while remaining true to their original purpose. We'll describe how we have trained and deployed resource-constrained machine learning models that are embedded seamlessly into everyday garments and accessories, like your favorite jacket, backpack, or a pair of shoes that you love to wear.
3:05 PM | TensorFlow.js: Machine Learning for the Web and Beyond | Na Li
TensorFlow.js is a platform for training and deploying machine learning models in browsers, or anywhere JavaScript can run, such as mobile devices, the WeChat mini app platform, and Raspberry Pi. It provides several backends, including CPU, GPU, Node, and WASM. It also provides a collection of pretrained models, including the two newest additions: MobileBERT and FaceMesh.
3:15 PM | Potential Q&A Block | Speakers from the break onwards
Please use the live chat feature in the livestream; TensorFlow team members will be responding in the chat in real time. If we have additional time in the livestream, we will answer a few questions live.
3:25 PM | Break
3:45 PM | Getting Involved in the TF Community | Joana Carraqueira
Learn how you can be a part of the growing TensorFlow ecosystem and become a contributor through code, documentation, education, or community leadership.
3:55 PM | Responsible AI with TensorFlow: Fairness and Privacy | Catherina Xu, Miguel Guevara
This talk introduces a framework for thinking about ML, fairness, and privacy. It proposes a fairness-aware ML workflow, illustrates how TensorFlow tools such as Fairness Indicators can be used to detect and mitigate bias, and then turns to a privacy case study that walks participants through a couple of infrastructure pieces that can help train a model in a privacy-preserving manner.
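The talk description does not name the specific privacy tooling; one commonly used TensorFlow option is differentially private SGD from the tensorflow_privacy library, sketched below as a hedged illustration (import path and hyperparameter values are indicative only and vary across releases).

```python
import tensorflow as tf
from tensorflow_privacy.privacy.optimizers.dp_optimizer_keras import DPKerasSGDOptimizer

# Differentially private SGD: clip each example's gradient, then add noise.
# The hyperparameters below are placeholders, not recommendations.
optimizer = DPKerasSGDOptimizer(
    l2_norm_clip=1.0,       # per-example gradient clipping norm
    noise_multiplier=1.1,   # noise stddev relative to the clipping norm
    num_microbatches=32,    # must evenly divide the batch size
    learning_rate=0.1)

# The loss must stay per-example (no reduction) so gradients can be clipped individually.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(100,)),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
```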
4:20 PM | TensorFlow Quantum: A Software Platform for Hybrid Quantum-Classical Machine Learning | Masoud Mohseni
We introduce TensorFlow Quantum, an open-source library for rapid prototyping of novel hybrid quantum-classical ML algorithms. The library extends the scope of current ML under TensorFlow and provides the toolbox needed to bring the quantum computing and machine learning research communities together to control and model quantum data.
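A hedged, minimal example of what a hybrid quantum-classical model looks like in TensorFlow Quantum, using one qubit and one trainable rotation (purely illustrative, not from the talk):

```python
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

# A one-qubit parametrized circuit; "theta" becomes a trainable Keras weight.
qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol("theta")
model_circuit = cirq.Circuit(cirq.rx(theta)(qubit))

# The PQC layer runs the circuit and returns the Pauli-Z expectation value.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(), dtype=tf.string),
    tfq.layers.PQC(model_circuit, cirq.Z(qubit)),
])

# Quantum data enters the model as serialized circuits.
quantum_data = tfq.convert_to_tensor([cirq.Circuit()])
print(model(quantum_data))
```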
4:45 PM | Closing announcements