These are the activated devices that I have:
[name: "/device:CPU:0"
 device_type: "CPU"
 memory_limit: 268435456
 locality { }
 incarnation: 5415837867258701517,

 name: "/device:GPU:0"
 device_type: "GPU"
 memory_limit: 3198956339
 locality { bus_id: 1 links { } }
 incarnation: 12462133041849407996
 physical_device_desc: "device: 0, name: GeForce GTX 960M, pci bus id: 0000:01:00.0, compute capability: 5.0"]

What I want to do is configure my program to use the GeForce GTX 960M, and also make this configuration permanent for all my programs (past and future), if that is possible.
You can use tf.device to declare that a section of the code must run on the GPU, or fail otherwise (unless you use allow_soft_placement; see "Using GPUs"). With multiple GPUs you can select which ones CUDA uses with CUDA_VISIBLE_DEVICES, but I don't think that's your problem. In short: with tf.device you choose which device (GPU or CPU) to use, and with CUDA_VISIBLE_DEVICES you can disable the GPU completely (by setting it to -1). You can also disable the GPU per session; see "How to run Tensorflow on CPU".

Comment: I put os.environ['CUDA_VISIBLE_DEVICES'] = '1' at the beginning of my program, but it made no difference!

Reply: Set os.environ['CUDA_VISIBLE_DEVICES'] = '-1' at the beginning, before importing TensorFlow for the first time.
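A minimal sketch of the reply's advice, assuming the single-GPU machine from the question (GPU index 0). The essential point is the ordering: the assignment must precede the first TensorFlow import anywhere in the process, so the import is shown commented out at the spot where it would go.

```python
import os

# CUDA_VISIBLE_DEVICES is read once, when the CUDA runtime is first
# initialised, so it must be set before TensorFlow is imported anywhere
# in the process; assigning it later has no effect on device discovery.
os.environ['CUDA_VISIBLE_DEVICES'] = '0'    # expose only GPU 0 (the GTX 960M)
# os.environ['CUDA_VISIBLE_DEVICES'] = '-1' # this value would hide all GPUs

# import tensorflow as tf  # the import must come AFTER the assignment above

print(os.environ['CUDA_VISIBLE_DEVICES'])   # → 0
```

To make the selection permanent for every program started from your shell, the same variable can be exported in your shell profile instead (for example, a line `export CUDA_VISIBLE_DEVICES=0` in `~/.bashrc` on Linux); this is a standard CUDA mechanism, not something specific to TensorFlow.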