Are there any Python packages that still require numpy-1.x now, in April 2026?
Posted by dark_prophet@reddit | Python | 29 comments
I am trying to understand how important numpy-1.x is today.
Do you know of, work on, or have observed Python packages whose latest version fails with numpy-2.x and only works with numpy-1.x?
johnnytest__7@reddit
Robotics toolbox and its dependencies
hl_lost@reddit
honestly at this point most actively maintained packages have made the jump. the stragglers are usually smaller/abandoned projects that nobody's updating anymore.
the real pain i've run into is transitive dependencies: your main packages all support numpy 2.x but then some random subdependency three levels deep pins `numpy<2` and suddenly you're in dependency hell. had this happen with a data pipeline at work a few months ago and it was not a fun afternoon lol
if you want a concrete answer, i'd just do something like `pip install "numpy>=2" && pip check` on your project and see what screams. faster than trying to catalog everything
Acceptable_Crab164@reddit
Yeah, quite a handful of Em
Able-Preparation843@reddit
Interesting question. From what I've seen in the ecosystem, the numpy-1.x vs 2.x situation has gotten much better than it was during the initial NumPy 2.0 release.
Most major scientific packages (pandas, scikit-learn, scipy, matplotlib, PyTorch, TensorFlow) have all updated to support NumPy 2.x now. The numpy team did a great job with the compatibility layer and the deprecation warnings that were in place during the 1.x series helped a lot.
However, the packages that are most likely to still be stuck on numpy-1.x are:
- Older/niche ML libraries that haven't had active maintainers
- Some bioinformatics tools that have very specific C-extension dependencies
- Legacy packages in specialized domains (certain finance or signal processing libs)
- Any package that hasn't been updated since ~2023-2024
One practical tip: you can use `pip check` or `pipdeptree` to see dependency conflicts in your environment. Also, tools like `pipx` or `uv` make it easier to manage separate environments for packages that have conflicting numpy requirements.
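To make that tip concrete, here's a minimal offline sketch of the same audit idea: scan installed package metadata with `importlib.metadata` and flag anything that caps numpy below 2. The regex is a rough heuristic, not a real resolver, so treat hits as leads rather than verdicts.

```python
# Rough offline companion to `pip check`: flag installed distributions
# whose declared requirements cap numpy below 2.x.
import re
from importlib import metadata

def pins_numpy_below_2(requires):
    """True if any requirement string pins numpy to <2 or ~=1.x."""
    for req in requires or []:
        if re.match(r"numpy\b", req) and re.search(r"<\s*2\b|~=\s*1\.", req):
            return True
    return False

offenders = sorted(
    dist.metadata["Name"]
    for dist in metadata.distributions()
    if pins_numpy_below_2(dist.requires)
)
print(offenders)  # installed packages that pin numpy<2 (often an empty list)
```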
What's your use case? Are you dealing with a specific package that broke, or just doing a general audit?
misterfitzie@reddit
are you a bot? you seem like you're ai
Able-Preparation843@reddit
Been on Reddit too long, started writing like ChatGPT apparently 💀.........
dark_prophet@reddit (OP)
I am doing an audit.
And I am also looking for specific examples: is there a substantial number of real examples of packages that still require numpy-1.x?
mr_jim_lahey@reddit
Try searching on GitHub for setup.py and pyproject.toml files containing numpy-1
flying-sheep@reddit
That is probably of little help by itself:
- `numpy>=1.???` could mean “supports both” or “hasn't been updated”
- `numpy~=1.?` could mean “doesn't support 2” or “doesn't know that upper bounding by default is bad”
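For what it's worth, the first half of that ambiguity can be narrowed mechanically: the `packaging` library (the same machinery pip uses; `pip install packaging` if it isn't already around) can tell you whether a declared range even admits a 2.x release. A quick sketch:

```python
from packaging.specifiers import SpecifierSet

def admits_numpy_2(spec: str) -> bool:
    """True if the version specifier allows at least one 2.x release."""
    return SpecifierSet(spec).contains("2.0.0")

print(admits_numpy_2(">=1.22"))     # True: open-ended, could support both
print(admits_numpy_2("~=1.24"))     # False: compatible-release caps below 2
print(admits_numpy_2(">=1.22,<2"))  # False: explicit upper bound
```

Of course a `True` here only means the range permits 2.x, not that the package actually works with it, which is exactly the "supports both or just hasn't been updated" problem.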
mr_jim_lahey@reddit
You're welcome to provide an alternative suggestion that would yield fewer false positives.
Able-Preparation843@reddit
Fair point from flying-sheep about the false positives. But you're on the right track with GitHub search - it just needs to be more specific.
A better approach would be to search GitHub code for:
- `"numpy<2"` in setup.py/pyproject.toml - this explicitly caps the upper bound
- `"numpy>=1, <2"` - packages that actively block numpy 2.x
- Check the `requires-python` or `install_requires` fields specifically, not just raw text
Also, a practical way to find real examples:
- Use `pypi.org` and filter packages by last upload date (anything before mid-2024 likely hasn't tested with numpy 2.x)
- Run `pip install numpy==2.0.0` in a fresh venv, then try installing older packages and see which ones fail
- Check the `numpy` issue tracker and PRs - they track packages that broke during the migration
For an audit, the most reliable signal is probably checking if a package has published a wheel compatible with numpy 2.x ABI. The NumPy 2.0 migration guide lists known issues with C-extensions that used the internal API.
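One way to automate part of that check without installing anything: a package's declared dependencies are exposed by the PyPI JSON API (`https://pypi.org/pypi/<name>/json`, field `info.requires_dist`). A hedged sketch of the classifier half, assuming the `packaging` library is available and leaving the HTTP fetch to the reader:

```python
from packaging.requirements import Requirement

# Versions used to probe whether a specifier leaves any room for 2.x.
PROBES = ("2.0.0", "2.1.0", "2.26.4")

def blocks_numpy_2(requires_dist):
    """True if a declared numpy requirement excludes every probed 2.x release."""
    for raw in requires_dist or []:
        req = Requirement(raw)
        if req.name.lower() == "numpy" and not any(
            req.specifier.contains(v) for v in PROBES
        ):
            return True
    return False

# requires_dist as it appears in the PyPI JSON API:
sample = ["numpy (>=1.21,<2.0)", "scipy (>=1.8)"]
print(blocks_numpy_2(sample))  # True: numpy is capped below 2.0
```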
Would love to hear what you find if you try the `pip check` approach on your end.
mr_jim_lahey@reddit
Yeah, totally agree you'd need substantial additional filtering/cross-checking to get a reasonable handle on the statistics and bigger picture. As usual in software, there are plenty of ways to skin the cat. OP was asking for individual package examples, so searching GitHub like I suggested would probably be sufficient in that more limited scope.
RevRagnarok@reddit
My project at work because we have an older version of Boost that only works with numpy < 2.
ShuredingaNoNeko@reddit
At my job I maintain applications that only work with numpy 1.x, but they run on a very old Python. I think its importance shows mostly in the maintenance of old applications.
ClearDevDocs@reddit
NumPy 2.x is still pretty new, so yeah — a lot of the ecosystem is still catching up.
You’ll definitely run into packages that either:
- pin to `numpy<2`
- or break because of ABI changes / deprecated APIs
Common ones (depending on version):
- older pandas
- scipy (older releases)
- scikit-learn
- some ML/DS libs like xgboost, tensorflow, etc.
In practice:
- Most production projects are still on NumPy 1.x
- NumPy 2.x is usable, but you might hit compatibility issues
Rule of thumb:
- If you’re starting fresh → try 2.x
- If you’re working in an existing project → stick with 1.x unless everything supports 2.x
It’s basically a transition period right now.
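For maintainers in that transition period, the usual move is to declare a range that admits both majors once both are tested. A hypothetical `pyproject.toml` fragment (the name and exact bounds are illustrative only):

```toml
[project]
name = "example-pkg"
version = "0.1.0"
dependencies = [
    # tested against both majors; cap at the next untested major
    "numpy>=1.24,<3",
]
```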
billsil@reddit
Of course there are. More packages are dead than not
Also, PyPI is harder than it used to be. Not everyone is up on how to upload a package and deal with the 2FA requirements. I haven’t done a release in 2 years because of that.
cgoldberg@reddit
You don't need a "wealth of experience". It literally takes less than 30 seconds to set up 2FA and get a token for publishing.
billsil@reddit
Sure, and there’s the aspect of requiring twine, knowing where to put which piece of info, knowing where you saved your plain-text recovery keys and which ones are still valid, getting a supported authenticator app, linking that, and dealing with changing pip requirements.
PyPI used to be drag and drop.
Nobody can do all that in 30 seconds. I’ve done it and I struggle to do it.
cgoldberg@reddit
You only need recovery keys to recover an account if you lose access to 2FA...if you can't manage that, that's on you. Requiring 2FA and a valid token is absolutely necessary due to all the supply chain attacks on PyPI. Overall, it's really easy and well documented.
billsil@reddit
Again, what info do you enter where? It’s easier to start from scratch than to try to diagnose why opaque software just doesn’t work.
cgoldberg@reddit
If you are manually publishing, you generate a token on PyPI and enter it when prompted from twine, or put it in a config file. Very simple instructions are linked on PyPI's homepage...or you can use trusted publishing. It's not opaque or difficult, and the process is the same whether starting from scratch or publishing for the thousandth time.
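For reference, the manual-token path boils down to one config file; per the PyPI docs the username is literally `__token__` and the password is your API token (the value below is a placeholder):

```ini
; ~/.pypirc
[pypi]
username = __token__
password = pypi-<your-api-token>
```

After that, `twine upload dist/*` picks it up without prompting.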
flying-sheep@reddit
And as said, trusted publishing takes as long to set up as making one release manually, so it’s always worth it.
flying-sheep@reddit
It’s easier than ever thanks to trusted publishers.
I asked around, it takes newbies like 10 minutes to set up, 2 when you’ve done it before.
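For anyone curious what that setup looks like, here's a sketch of a GitHub Actions release job using trusted publishing with the official `pypa/gh-action-pypi-publish` action; it assumes you've already registered this repo/workflow as a trusted publisher on pypi.org:

```yaml
name: release
on:
  release:
    types: [published]

jobs:
  pypi:
    runs-on: ubuntu-latest
    permissions:
      id-token: write  # OIDC token for trusted publishing; no API token needed
    steps:
      - uses: actions/checkout@v4
      - run: pipx run build                           # build sdist + wheel into dist/
      - uses: pypa/gh-action-pypi-publish@release/v1  # uploads dist/ to PyPI
```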
Spiritual-Yam-1410@reddit
yeah there are still some pockets of the ecosystem lagging behind tbh.
most mainstream libs (pandas, scipy, scikit-learn, pytorch) are already compatible with numpy 2.x or have updates out. but smaller or less-maintained packages can still break, especially ones with c extensions or pinned dependencies.
Emergency-Buyer-7384@reddit
I can say I've never run into issues where I needed 1.x in the last couple of months.
NeatRuin7406@reddit
the numpy 2.x migration surface is way bigger than most people anticipated. the main straggler category isn't the obvious scientific packages -- those tracked the release closely -- it's enterprise data pipelines with pinned requirements.txt files that nobody touches until something breaks.
also, older cython extensions that were compiled against the numpy c api and expose internal struct layouts are the nastiest, because they fail silently or produce garbage instead of erroring loudly.
if you're maintaining a package and still haven't migrated, the numpy 2 migration guide documents compatibility shims that buy you some time, but the abi boundary is the hard part that shims don't fully cover.
brontide@reddit
norfair object tracking. I've got a fork with an abandoned PR that lets the tracker support numpy-2. If you want a long list, just check out the dependencies for frigate; they are still using numpy-1. Took a few hours to get it converted over and not everything was working.
eufemiapiccio77@reddit
More than not