PyTorch 2.0: What you should expect | by Dimitris Poulopoulos | Nov, 2020



What is coming to PyTorch in the next three years, and why you should care.

Image by Pexels from Pixabay

PyTorch Developer Day 2020 went virtual this year, due to the coronavirus outbreak. Yet the quality of the annual event was, as expected, of a high standard. Among the many announcements, the team revealed its plan for the next three years, and in this story, we’re going to summarize the key points.

PyTorch started its humble journey in 2016 and quickly became the go-to tool for Deep Learning researchers. However, PyTorch is much more than a mere prototyping tool today. It has grown into a fully-fledged, production-ready framework that is expanding its fanbase in the business sector.

And that is exactly the goal of its creators and maintainers: become the de facto standard in both academia and industry. Researchers and Machine Learning engineers should be able to run PyTorch efficiently everywhere, from local Jupyter servers to cloud platforms, and from multi-node GPU clusters to smart devices on the edge. At the same time, PyTorch abides by some core principles:

  • It should remain flexible and facilitate rapid experimentation for researchers
  • It should be efficient, performant, reliable, and scalable for production
  • It should cater to users’ needs

These points are not always easy targets. For example, production systems tend to ossify around established norms in the pursuit of stability and reliability. Researchers, on the other hand, need to break customary cycles to advance the field.

That being said, let’s go through the announcements step-by-step.

Learning Rate is a newsletter for those who are curious about the world of AI and MLOps. You’ll hear from me every Friday with updates and thoughts on the latest AI news and articles. Subscribe here!

In the research area, PyTorch wants to remain the indisputable leader. More than 80% of researchers who submit their work to major Machine Learning conferences, such as NeurIPS or ICML, cite PyTorch as their tool of choice. To achieve that, PyTorch will focus on three main areas:

  • Better front-end APIs: Support for research reproducibility, NumPy compatibility, and complex tensors to express complex numbers
  • New modelling paradigms: Support for sparse data or tensors for graph neural networks
  • Comprehensive domain libraries: torchvision, torchtext and torchaudio address the challenges in Computer Vision, Natural Language Processing or Speech Recognition, but there are many more to tackle, for example in Recommender Systems or Reinforcement Learning
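To make the front-end points above concrete, here is a minimal sketch of what complex tensors and sparse tensors look like in recent PyTorch releases; the values are purely illustrative:

```python
import torch

# Complex tensors: first-class complex dtypes for signal processing and research code
z = torch.tensor([1 + 2j, 3 - 1j], dtype=torch.complex64)
magnitudes = z.abs()  # element-wise complex magnitude, returned as a float tensor

# Sparse COO tensors: a natural fit for graph-structured data such as adjacency matrices
indices = torch.tensor([[0, 1, 1],   # row indices of the non-zero entries
                        [2, 0, 2]])  # column indices of the non-zero entries
values = torch.tensor([3.0, 4.0, 5.0])
adjacency = torch.sparse_coo_tensor(indices, values, size=(2, 3))
dense = adjacency.to_dense()  # materialize only when needed
```

Storing only the non-zero entries is what makes sparse tensors attractive for graph neural networks, where adjacency matrices are overwhelmingly zeros.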

In 2017, PyTorch took its first steps towards becoming a Deep Learning framework suitable for production. Its integration with Caffe2 and TorchScript was successful up to a point, but there is much more to be done:

  • Acceleration of a wide range of models: Achieve 100x speed-up in CV, Speech or NLP
  • AI Compiler: Support for AI compilers, like the work done on PyTorch JIT
  • Diverse hardware portfolio: Support for both server-side accelerators and embedded hardware
  • Model scaling: Support for large scale training of models with trillions of parameters or huge embedding tables. This means taking parallelism to the next level, implementing elastic, fault-tolerant training, high bandwidth data loading and even exploring new memory architectures.
  • Large scale inference: Support for model compression or quantization and distributed inference
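As a taste of the large-scale-inference work, PyTorch already ships dynamic quantization, which shrinks a model's linear layers to int8 weights with a single call. The toy model below is a hypothetical stand-in for something much larger:

```python
import torch
import torch.nn as nn

# A hypothetical toy model standing in for a production-sized network
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# Dynamic quantization: weights are stored as int8, and
# activations are quantized on the fly at inference time
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

out = quantized(torch.randn(1, 16))  # same forward API as the float model
```

The quantized model is a drop-in replacement for inference, typically trading a small amount of accuracy for a much smaller memory footprint.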

PyTorch will continue to invest in a field in which TensorFlow seems to have the upper hand: on-device AI. TensorFlow Lite can assist you in deploying machine learning models on mobile and IoT devices. This is a critical area, as ubiquitous edge devices start to form the cloud of the future. Thus, PyTorch will focus on:

  • Unified runtime: Implement a unified runtime to cover a wide variety of hardware, including CPU micro-architectures, Android devices and specialized hardware like GPU and DSP
  • Model optimization: Set the balance between power efficiency, latency and accuracy
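The unified-runtime effort builds on TorchScript, which already lets you compile PyTorch code ahead of time so it can run without a Python interpreter. A minimal sketch:

```python
import torch

@torch.jit.script
def clipped_sum(x: torch.Tensor, lo: float, hi: float) -> torch.Tensor:
    # Clamp each element to [lo, hi], then reduce. The decorator compiles
    # the function to TorchScript IR, which can execute outside Python
    # (for example, in the mobile runtime)
    return x.clamp(lo, hi).sum()

result = clipped_sum(torch.tensor([-2.0, 0.5, 3.0]), -1.0, 1.0)

# Scripted functions can be serialized for deployment, e.g.:
# clipped_sum.save("clipped_sum.pt")
```

The same scripting mechanism applies to full `nn.Module` models, which is what makes a single compiled artifact portable across the diverse hardware targets listed above.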

The PyTorch team wants to build a cloud-agnostic, open-source and end-to-end Machine Learning production workflow. A framework that will help its users deploy their models easily, at a low cost, while maintaining a significant amount of customization and flexibility.

In this story, we saw the three main axes of this effort: cutting-edge research, production, and on-device AI. This is an exciting journey that gives all of us the opportunity to contribute code that will get us there faster. On top of that, we can also devote our time to the many other open-source libraries built on PyTorch, like fast.ai or PyTorch Lightning, which are democratizing AI.

My name is Dimitris Poulopoulos and I’m a machine learning engineer working for Arrikto. I have worked on designing and implementing AI and software solutions for major clients such as the European Commission, Eurostat, IMF, the European Central Bank, OECD, and IKEA.

If you are interested in reading more posts about Machine Learning, Deep Learning, Data Science and DataOps follow me on Medium, LinkedIn or @james2pl on Twitter.

Opinions expressed are solely my own and do not express the views or opinions of my employer.
