From the experiments I have run, it seems that TensorFlow automatically uses all CPUs on one machine. Furthermore, it seems that TensorFlow refers to all CPUs as /cpu:0.

Am I right that the different GPUs of one machine are indexed and viewed as separate devices, but that all the CPUs on one machine are viewed as a single device?

Is there a way for a machine to have multiple CPUs from TensorFlow's perspective?
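For reference, this is a minimal check of the behavior described above, assuming the TF 2.x tf.config API:

```python
import tensorflow as tf

# TensorFlow exposes all local CPU cores as a single physical device:
print(tf.config.list_physical_devices("CPU"))
# typically: [PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')]

# Each GPU, by contrast, appears as its own device
# (e.g. /physical_device:GPU:0, /physical_device:GPU:1):
print(tf.config.list_physical_devices("GPU"))
```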
By default, all CPUs available to the process are aggregated under the cpu:0 device.
There's an answer by mrry here showing how to create logical devices such as /cpu:1, /cpu:2.
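In TF 2.x the same effect can be achieved with the tf.config API (mrry's original answer used the TF1 `ConfigProto(device_count={"CPU": ...})` session option); a sketch:

```python
import tensorflow as tf

# Split the single physical CPU device into two logical devices.
# This must run before TensorFlow initializes its devices.
cpu = tf.config.list_physical_devices("CPU")[0]
tf.config.set_logical_device_configuration(
    cpu,
    [tf.config.LogicalDeviceConfiguration(),
     tf.config.LogicalDeviceConfiguration()],
)

logical = tf.config.list_logical_devices("CPU")
print([d.name for d in logical])  # ['/device:CPU:0', '/device:CPU:1']

# Ops can now be placed on a specific logical CPU device:
with tf.device("/cpu:1"):
    x = tf.constant([1.0, 2.0]) * 2.0
```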
There doesn't seem to be working functionality to pin logical devices to specific physical cores, or to make use of NUMA nodes, in TensorFlow.
A possible work-around is to use distributed TensorFlow with multiple processes on one machine, and use taskset on Linux to pin specific processes to specific cores.
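The taskset side of that work-around might look like the following sketch (the `python3 -c` one-liners stand in for real distributed-TensorFlow worker processes):

```shell
# Run each "worker" pinned to a core set via taskset.
# In practice, give each worker a disjoint core range,
# e.g. "taskset -c 0-3 python worker.py" and "taskset -c 4-7 python worker.py".
taskset -c 0 python3 -c "print('worker 0 running on its pinned core')"
taskset -c 0 python3 -c "print('worker 1 running on its pinned core')"

# Inspect the CPU affinity of the current shell:
taskset -pc $$
```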