When You Don't Control the Python Environment
What to do when you can't rely on a package manager being available in a Python environment
Python is great at letting users build whatever they want, but it is notoriously bad at one specific thing: letting a platform run that user code without relying on whatever happens to be installed on the machine.
In Posit Connect, which I have been working on for the past year, that problem is unavoidable. Connect orchestrates the environments where applications run, but it cannot assume anything about those environments. There might be no pip. No conda. No uv. The Python interpreter could be custom-built, minimal, or misconfigured. And even when everything exists, it is often user-owned, meaning Connect cannot “just fix it” by mutating it.
On top of that, Connect keeps a cache of environments to speed up deployment by skipping the cost of recreating them. This lets users iterate quickly on their content, but it also means Connect needs a way to know whether a cached environment is still compatible with the new content.
At that point you hit a bootstrap paradox. To decide whether an environment can run an app, you need to inspect what is installed. In Python, the obvious tools for that are usually… Python packaging tools. Tools that might not exist, might not be importable, or might only work when executed inside the very environment you are trying to audit.
This is one reason people often perceive Go or Rust deployments as simpler: the common outcome is a single executable you can drop onto a server, with most dependencies already inside. In Python, the default outcome is closer to “an application plus a story about the environment it expects”.
So the question arises: “Given a path to a Python environment, what is installed there, and can it run this application?”
And picopip has been the solution for me: a tiny, vendorable, zero-dependency environment scanner that lets Connect make decisions even when the target environment is unknown or broken.
The problem nobody wants: orchestrating Python environments you don’t own
You might be wondering why picopip was born when pip was already there. And your question would be perfectly valid. The issue is that pip brings some assumptions with it:
pip needs to be installed as a package
An environment could have been created without pip
It has to be spawned as a subprocess in the target environment; it doesn’t expose Python APIs (no, invoking pip.main is not an API…)
Those constraints conflict with the context in which an application supervisor runs: the supervisor doesn’t own the environments of the applications, so it can’t modify them nor take any known state for granted. The user might have installed their packages via conda, pip might not even exist, and the supervisor still needs to be able to check whether the application can run correctly in that environment.
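For reference, this is roughly what relying on pip looks like. It is a hedged sketch (the helper name is mine), and it only works when the target interpreter actually ships pip:

import json
import subprocess

def list_packages_via_pip(python_exe):
    """Shell out to the target interpreter and ask its pip what is installed.
    If that interpreter has no pip module, this fails before telling us anything."""
    result = subprocess.run(
        [python_exe, "-m", "pip", "list", "--format=json"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # e.g. "No module named pip": we can't even inspect the environment
        raise RuntimeError(result.stderr.strip())
    return [(pkg["name"], pkg["version"]) for pkg in json.loads(result.stdout)]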
So, as you might imagine, using pip is not a reliable option in that context.
Why Python deployment feels “hard” compared to Go/Rust
A common approach to the issue is to ship a self-contained executable, one that carries all the features it needs and the dependencies it relies on.
Languages like Go and Rust have earned a reputation for being easy to deploy because the resulting binary is self-contained and requires minimal dependencies to run. You copy a binary and that’s all.
While this is exaggerated, it’s true that it is generally easier to deploy projects built in those languages than projects built with Python. Python brings a whole story about the environment it has to work in every time you run an application: which dependencies must be installed, for which architecture, which system libraries must exist, and now also which threading model the interpreter is built for.
When I created consolidatewheels I was specifically looking for a way to distribute complex projects made of multiple components, each with their own dependencies and system libraries. While the project has been great at solving that problem, it didn’t make the underlying problem disappear; it just made it easier to deal with.
Projects like PyOxidizer were born specifically to try to solve this issue and make Python applications self-contained and relocatable, but they can’t always solve the problem and are not commonly used to distribute software.
Inspecting hostile environments
So, how can we inspect a Python environment to check whether it’s compatible with an application the supervisor is in charge of, and decide whether a new environment has to be created for the application to run in its current state?
To solve this problem, it’s good to think of the environment where our code is running as a “hostile environment”: an environment that is actively trying to do all it can to prevent our code from running correctly, or at all.
This brings some significant implications:
The supervisor has to run in its own environment, isolated from any state or issue that the supervised application environment might have
The supervisor needs to be self-contained and avoid the need for system libraries or dependencies.
This allows the supervisor to run in a context that might be broken. As long as the system is able to start a binary (I guess “has libc” is a safe definition), it will be able to run.
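Concretely, inspecting without executing anything in the target environment means reading package metadata straight from the filesystem. Here is a minimal sketch of the idea; it is a simplification that assumes a conventional site-packages layout with .dist-info directories, not picopip’s actual code:

from pathlib import Path

def scan_installed(site_packages):
    """Read-only scan: collect (name, version) pairs from *.dist-info/METADATA.
    Nothing from the target environment is imported or executed."""
    found = []
    for metadata in Path(site_packages).glob("*.dist-info/METADATA"):
        name = version = None
        for line in metadata.read_text(encoding="utf-8", errors="replace").splitlines():
            if line.startswith("Name:"):
                name = line.split(":", 1)[1].strip()
            elif line.startswith("Version:"):
                version = line.split(":", 1)[1].strip()
        if name and version:
            found.append((name, version))
    return sorted(found)

Because it only reads files, the worst case is a wrong or missing answer, never arbitrary code running inside a broken environment.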
But how can we achieve this kind of self-contained supervisor with Python, when there is no solid story for distributing portable or relocatable environments?
Vendoring as a distribution strategy
We need to vendor everything…
We need to have one single directory (or file) that contains everything the application needs to be able to run. You copy that directory to the target environment and that’s all. That is as far as your deploy process should have to go.
The deploy process needs to be completely stateless, idempotent and atomic.
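A hypothetical sketch of what that can look like: copy the bundle into a fresh release directory and switch a symlink at the end, so observers only ever see the old or the new deployment. All paths and names here are illustrative:

import os
import shutil
import tempfile

def deploy_atomically(bundle_src, releases_dir, current_link):
    """Copy the vendored bundle to a fresh release directory, then atomically
    repoint the 'current' symlink. Re-running it simply produces a new release."""
    os.makedirs(releases_dir, exist_ok=True)
    new_release = tempfile.mkdtemp(dir=releases_dir)
    shutil.copytree(bundle_src, new_release, dirs_exist_ok=True)
    tmp_link = new_release + ".link"
    os.symlink(new_release, tmp_link)
    os.replace(tmp_link, current_link)  # rename() is atomic on POSIX filesystems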
The problem is that Python libraries are neither designed nor meant for that use case. They are made of tons of modules and packages, plus the dependencies they bring in.
And that’s good; it’s the sane way to design software. But in practice it means giving up the vendoring use case.
Libraries need to be designed in a different way when their primary means of use is vendoring; they need to be (a sketch follows the list):
Easy to install by copying one single entity
Easy to update by having all their distribution metadata in the library itself
Easy to include in the project build process
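As a made-up illustration (this is not picopip’s actual layout), a library designed for vendoring can be as simple as this:

# vendorable_lib.py -- a hypothetical single-file library meant to be copied as-is
"""Everything the library offers lives in this one file."""

# Distribution metadata is embedded in the module itself, so an updater tool
# can discover which version is vendored and where newer releases come from.
__version__ = "1.2.0"                                   # illustrative
__release_index__ = "https://example.invalid/releases"  # illustrative placeholder

def do_useful_work():
    return "the library's actual functionality"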
The fact that duktape, for example, a JavaScript interpreter, was explicitly designed with those goals in mind is what allowed me to create DukPy, which has for years been the easiest way to get JavaScript support in a Python project on most systems and architectures.
You add duktape.c to the list of files statically linked when the binary is compiled, and that’s all: you get a self-contained binary with JavaScript support. Simple, reliable, predictable.
Enter picopip: zero-dep environment inspection
So, for the task of inspecting hostile environments I needed a solution that was as reliable as duktape has been for me for years while working on DukPy, and that’s why I had the idea of creating picopip.
Copy one file into your project (picopip.py) and you get the full capability of inspecting any target Python environment and comparing the versions of installed packages against what your use case expects.

To know what’s available in a target environment I can do:
>>> from picopip import get_packages_from_env
>>>
>>> pkgs = get_packages_from_env(venvdir)
>>> print(pkgs)
[('certifi', '2025.4.26'), ('charset-normalizer', '3.4.2'), ('idna', '3.10'),
 ('pip', '21.2.4'), ('requests', '2.32.3'), ('setuptools', '58.0.4'),
 ('urllib3', '2.4.0')]

To check whether those versions are compatible with the ones I expect, I can use picopip.parse_version, which returns a sortable tuple:
>>> from picopip import parse_version
>>>
>>> parse_version("1.13.5")
((1, 13, 5), 0)
>>> parse_version("1.13.5a1") < parse_version("1.13.5")
True
>>> parse_version("1.13.5.post2") > parse_version("1.13.5")
True

The interesting part is that the version tuple doesn’t require any special logic to compare versions: it uses an interesting hack that relies on an integer value to offset the version based on pre or post releases.
# To simplify representing pre/post/dev stages as integers for comparison,
# we assign each stage a base offset, and add the stage number to it.
# For example, "1.0.0rc2" becomes ( (1,0,0), -9998 ), while
# "1.0.0post3" becomes ( (1,0,0), 10003 ).
# This guarantees that releases are always sortable as simple numeric tuples.
OFFSET_STAGE_SPAN = 10_000 # 9999 pre/post/dev per release should be enough.
OFFSET_BASE = { # dev < a < b < rc < release < post
"dev": -4 * OFFSET_STAGE_SPAN,
"a": -3 * OFFSET_STAGE_SPAN,
"b": -2 * OFFSET_STAGE_SPAN,
"rc": -1 * OFFSET_STAGE_SPAN,
"release": 0,
"post": OFFSET_STAGE_SPAN,
}

As hardly any distribution will ever publish more than 9999 post releases (and few distributions have post releases at all), the model works fine.
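To make the trick concrete, here is a simplified parser built on the OFFSET_BASE table above. It is only an illustration of the idea and handles far fewer forms than the real parse_version:

import re

def parse_version_simplified(version):
    """Illustrative only: turn "1.0.0rc2" into ((1, 0, 0), -9998)."""
    match = re.match(r"^(\d+(?:\.\d+)*)\.?(dev|a|b|rc|post)?(\d+)?$", version)
    release = tuple(int(part) for part in match.group(1).split("."))
    stage = match.group(2) or "release"
    number = int(match.group(3) or 0)
    # Plain tuple comparison now orders dev < a < b < rc < release < post.
    return release, OFFSET_BASE[stage] + number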
picopip might not be fully featured or able to handle all the corner cases that pip can handle. It is not meant to be a full replacement for pip, but it’s good enough to help me answer questions like “Can I even invoke pip?” and “Does this environment need to be recreated from scratch?”.
To answer those questions you don’t need 100% accuracy; you need a signal. And if there is a hint that something might be wrong, it’s better to play it safe and rebuild the environment.
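In practice, that decision boils down to a few lines on top of the two functions shown earlier. A sketch of what it can look like (the function name and the requirements mapping are illustrative):

from picopip import get_packages_from_env, parse_version

def needs_rebuild(venvdir, requirements):
    """Return True if the cached environment should be recreated.
    `requirements` maps package name -> minimum acceptable version."""
    installed = dict(get_packages_from_env(venvdir))
    for name, minimum in requirements.items():
        version = installed.get(name)
        if version is None or parse_version(version) < parse_version(minimum):
            return True  # missing or too old: play it safe and rebuild
    return False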
Takeaways
If you build systems that run Python rather than just write Python, you eventually learn that the hard part is not installing dependencies, but reasoning about environments you don’t control. Tooling like pip, conda or uv works well when the environment is healthy and cooperative, but orchestration cannot assume that state as a given.
The key shift is treating the orchestrator as an auditor, not a fixer. First inspect, without side effects. Decide whether the environment is trustworthy. Only then choose whether to reuse it, replace it, or refuse to run. Once you adopt that mindset, small, vendorable, zero-dependency tools stop looking like hacks and start looking like necessary bootstrap components.
picopip exists because Python still lacks a clean story for read-only introspection under hostile conditions. It’s not a replacement for package managers, and it doesn’t try to be perfect. It’s a pragmatic building block that lets higher-level systems make correct decisions without inheriting assumptions about the target runtime.
If there’s one lesson here, it’s this: deployment failures often start before installation. If you can’t reliably answer “what is this environment, really?”, everything built on top of it is guesswork.
Future ideas
In the long term, I have promised myself to work a bit on the idea of building “a package manager for vendored libraries”.
You might be wondering: “you just said you can’t use a package manager…”
And that’s true: I can’t use one at deploy time, but nothing prevents me from using one at “build” time, when I prepare a bundle of my software for deployment.
As picopip has all its metadata in the picopip.py file itself (what version it is, where to download updates from, etc…), in theory it is possible to automate upgrades of picopip and have a tool that replaces the vendored copy with new versions whenever they are available.
PEP 723 shows that there is indeed a need in the community for small, portable, self-contained Python programs, and such a tool could go as far as integrating PyOxidizer to generate a self-contained single binary that can be deployed anywhere, without the need for a Python environment at all.
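For reference, PEP 723’s inline metadata block looks like this (the dependency list is just an example); runners that understand it, such as uv or pipx, can prepare an environment for the script on the fly:

# /// script
# requires-python = ">=3.9"
# dependencies = [
#     "requests",
# ]
# ///

import requests  # the runner installs this before executing the script

print(requests.__version__)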


