
I want to use tensorflow through a virtual environment. However, the Python script I want to run requires me to use a separate virtual environment that does not include tensorflow.

Is it possible to activate these simultaneously? If not, can I merge the two virtual environments somehow?

  • Check this out. You could also activate different virtual environments in different terminal sessions. Commented Jun 20, 2018 at 21:01
  • Maybe you can create a third virtualenv with the packages from the initial two. Activate the first virtual env and run pip freeze > req_venv1, and do the same for the second. Then initialize a new virtual env and run pip install -r req_venv1 and pip install -r req_venv2. Commented Jun 21, 2018 at 0:45

1 Answer


You could try adding the site-packages dir of the other virtualenv to your PYTHONPATH variable. Your mileage may vary, but I think it would work for the majority of packages.

export PYTHONPATH=<other-env>/lib/python3.6/site-packages:$PYTHONPATH 

(or the equivalent variable setting statement for your OS/Shell)
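If you prefer not to touch environment variables, the same effect can be achieved programmatically from inside the script. A minimal sketch, assuming a hypothetical path for the other venv's site-packages:

```python
import site

# Hypothetical path to the other virtualenv's site-packages; adjust to yours.
OTHER_SITE = "/path/to/other-env/lib/python3.6/site-packages"

# addsitedir appends the directory to sys.path and also processes any
# .pth files found there, just like a real site-packages directory would.
site.addsitedir(OTHER_SITE)

# After this, packages installed only in the other venv (e.g. tensorflow)
# become importable from the current interpreter.
```

Using site.addsitedir rather than a plain sys.path.append means editable installs and namespace packages registered via .pth files are also picked up.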

Update

Note that the approach in the original answer above will only just work if both venvs use the same Python version and there are no conflicting version requirements among the dependencies. So it might work for a while, but it is not reliable long term.

For example, say hypothetically that the packages in VENV1 rely on requests version 2.20, while the packages in VENV2 rely on the current 2.32 version. Which requests library gets loaded will depend on the order of the site-packages directories in the path. The worse part: things would work nicely until a package in VENV1 uses requests in a way that is incompatible with 2.32, and then things would break with a cryptic error message. (Again, this is just one example; requests itself, although likely a requirement of packages on both sides, is unlikely to have backwards incompatibilities.)
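One way to spot such conflicts up front is to compare the installed distributions in both site-packages directories before merging them. A rough sketch (the paths you pass in would be the two venvs' site-packages directories; the dist-info naming convention is standard, but edge cases in package naming are not handled here):

```python
import os

def installed_versions(site_packages):
    """Map distribution name -> version, read from *.dist-info dir names."""
    versions = {}
    for entry in os.listdir(site_packages):
        if entry.endswith(".dist-info"):
            # dist-info dirs are named like "requests-2.32.0.dist-info"
            name, _, version = entry[: -len(".dist-info")].rpartition("-")
            versions[name] = version
    return versions

def conflicting(site_a, site_b):
    """Return packages present in both venvs but at different versions."""
    a, b = installed_versions(site_a), installed_versions(site_b)
    return {name: (a[name], b[name])
            for name in a.keys() & b.keys() if a[name] != b[name]}
```

If conflicting(...) returns an empty dict, merging the two paths is much less likely to blow up later.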

All in all, however, it is a great way to avoid keeping several copies of the same huge Python library, like tensorflow: merging the site-packages, or using symbolic links so that more than one venv can "see" the tensorflow installed in a single disk location, is a big win, since the install is on the order of a few gigabytes.
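A minimal sketch of the symbolic-link variant (the directory layout is hypothetical, and on Windows creating symlinks may require elevated privileges or Developer Mode):

```python
import os

def share_package(src_site, dst_site, package):
    """Symlink a package dir installed in one venv's site-packages into
    another venv's site-packages, so both resolve the same on-disk copy."""
    src = os.path.join(src_site, package)
    dst = os.path.join(dst_site, package)
    if not os.path.exists(dst):
        os.symlink(src, dst, target_is_directory=True)
    return dst

# Hypothetical usage:
# share_package("/venvs/tf/lib/python3.12/site-packages",
#               "/venvs/app/lib/python3.12/site-packages",
#               "tensorflow")
```

Note this only links the package directory itself; for a full install you would also want to link the matching *.dist-info directory and any compiled dependencies the package ships with.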

Also note that since 2024, tools like Astral's uv can install dependencies into venvs by linking files from a global cache instead of copying them. That makes installing TensorFlow into a second venv on the same computer (same Python version, etc.) really fast, and it takes almost no extra disk space.

Another approach, if you run into conflicting Python versions or package dependency incompatibilities, would be to call everything that needs the other set of libraries via Python's subprocess module, or to use something like Celery and make RPCs (Remote Procedure Calls). That is harder to set up, but reliable once it's done. With Celery, for example, it is possible to have a number of workers running in a separate venv, even with a different Python version; the project just has to be careful about which import statements are done in the workers and which in the main process.
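A minimal sketch of the subprocess variant: run a snippet under the other venv's own interpreter and read back what it prints (the interpreter path in the usage comment is hypothetical):

```python
import subprocess

def run_in_venv(python_exe, code):
    """Execute a Python snippet under a specific interpreter (e.g. the other
    venv's bin/python) and return whatever it printed to stdout."""
    result = subprocess.run(
        [python_exe, "-c", code],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Hypothetical usage: let the tensorflow venv do the heavy lifting.
# run_in_venv("/path/to/tf-venv/bin/python",
#             "import tensorflow as tf; print(tf.__version__)")
```

Because each call pays full interpreter-startup and import cost, this suits coarse-grained tasks; for chatty back-and-forth, a long-lived worker (as with Celery) is the better fit.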


4 Comments

Thanks, this works! As a note, the path separator is : on Linux and ; on Windows.
I found a package for which this doesn't work: pywin32. My workaround was to switch the venvs: call venv A (containing pywin32) directly using its python.exe and add venv B via PYTHONPATH.
Yes - thanks for commenting here - I've just updated the answer with a lot more information. (Even if you don't need it now, it might be useful to others.)
My use case is that venv B contains just sphinx (a package for building documentation) and venv A contains a specific business app. This separation exists because we have 30+ Python apps on our server and we don't want to install sphinx into every venv. This solution works, but I feel like it isn't the intended way.
