July 2022: the TensorFlow neural network code library is dead.
OK, that statement is a provocative exaggeration, but bear with me.
It’s impossible to get hard data about the usage of TensorFlow (and Keras) relative to the other major library, PyTorch. Even if it were possible, such usage data would be instantly out of date by the time it was collated and published.
But I work at a large tech company and I have a circle of about a dozen colleagues and collaborators who work with neural systems at companies including Google, Microsoft, Amazon, Facebook, and others. All of these colleagues tell me the same thing, which is essentially that no new projects are using TensorFlow and all their teams that used to use TF have switched to PyTorch (or in a few cases at Google, switched to JAX).
Of course this is a small self-selecting sample. But the information is strong enough for me to stake my reputation in the following sense: If I were in a startup company (that my livelihood depended on) or startup team within a large company (that my job depended on), I would strongly advocate for PyTorch and strongly argue that TensorFlow / Keras has no future.
There are hundreds of blog posts and comments on the Internet about PyTorch vs. TensorFlow / Keras. I use both libraries regularly — or all three libraries, depending on your point of view of what Keras is. Keras can be dismissed quickly: it operates at too high a level to give the flexibility needed for all but the simplest scenarios. The three key issues that tell me TF is dead are:
1.) The not-backward-compatible TF version 2.0 was a disaster and made the terrible TF documentation even worse.
2.) PyTorch is much easier to use than TF, in part because PyTorch is essentially a set of Python modules, whereas TF feels like custom code awkwardly integrated with Python.
3.) Google, the creator of TF, is now using JAX instead of TF for most new production and research systems.
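Point 2 above can be made concrete with a minimal sketch (my own illustrative example, not from any particular codebase): in PyTorch, a model is an ordinary Python class, layers are ordinary attributes, and the forward pass is an ordinary method.

```python
import torch
import torch.nn as nn

# A tiny two-layer network. Everything here is plain Python:
# the model is a class, layers are attributes, and the forward
# pass is just a method -- no graph-building ceremony required.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)   # 4 input features -> 8 hidden
        self.fc2 = nn.Linear(8, 2)   # 8 hidden -> 2 outputs

    def forward(self, x):
        z = torch.relu(self.fc1(x))
        return self.fc2(z)

net = TinyNet()
out = net(torch.randn(3, 4))  # batch of 3 inputs, 4 features each
print(out.shape)              # torch.Size([3, 2])
```

You debug this with an ordinary Python debugger, line by line, which is a large part of why new projects find PyTorch so approachable.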
So, in my mind, TensorFlow is clearly a dead end. I suspect Google will drop new development of TF within 36 months, at most. Because there is so much existing TF code, TF will likely limp along for several years and perhaps become the COBOL of machine learning.
One of my job responsibilities at the tech company I work for is to give training to software engineers and data scientists. Starting now, I will discontinue the TF/Keras classes I offer and focus strictly on PyTorch.
JAX is an unknown. I’ve experimented with JAX and have the feeling that it works at too low a level. There are several efforts to make JAX closer to the level of abstraction of PyTorch. The FLAX library is one example. When one JAX-wrapper library emerges from the pack, it could be a good alternative to PyTorch.
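What "too low a level" means in practice (again, my own toy sketch, assuming only base JAX with no wrapper library): you work with pure functions and carry parameters around explicitly, rather than bundling them into layer objects the way PyTorch does.

```python
import jax
import jax.numpy as jnp

# In raw JAX there is no built-in Module/layer object: the model is a
# pure function, and parameters are passed in explicitly as arguments.
def predict(params, x):
    w, b = params
    return jnp.dot(x, w) + b

def loss(params, x, y):
    return jnp.mean((predict(params, x) - y) ** 2)

params = (jnp.ones((4, 1)), jnp.zeros(1))  # weights and bias, by hand
x = jnp.ones((3, 4))
y = jnp.zeros((3, 1))

# jax.grad transforms the loss function into a new function that
# returns gradients with respect to its first argument (params).
grads = jax.grad(loss)(params, x, y)
print(grads[0].shape)  # (4, 1)
```

Wrapper libraries like FLAX exist precisely to layer module-style bookkeeping on top of this functional core.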

Three images from a stock photo search for “machine learning engineer”. Left: Most ML engineers, including me, hate it when they’re using PyTorch and binary digits fly out of the screen and hit them in the face. Center: All ML engineers should have a wrench ready to debug their neural network code. Right: Most of my colleagues don’t dress quite like this, but some do use ergonomic desks.
A data-driven approach would be to tally the number of transformer papers whose code is written in PyTorch versus TF/Keras. Do this for vision and NLP models. My intuition is that PyTorch is head and shoulders ahead, but I’m interested in the actual numbers.
I agree with you. In fact, the motivation for this blog post was some of my work with Transformers — all of the papers I came across used PyTorch and none (literally zero) used TensorFlow.