The relationship between op and tensor
- An op is a node in the graph, and the edges are tensors.
- An op takes tensors as input and outputs tensors to downstream ops.

Each tensor has an `op` attribute: the op whose computation produces this tensor. For example:
```python
In [74]: with tf.Session() as sess:
    ...:     zer = tf.zeros(shape=(32, 32))
    ...:     print(zer.op)
    ...:
name: "zeros_0"
op: "Const"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "value"
  value {
    tensor {
      dtype: DT_FLOAT
      tensor_shape {
        dim {
          size: 32
        }
        dim {
          size: 32
        }
      }
      float_val: 0.0
    }
  }
}
```
`zer` is a Tensor whose `op` attribute is `Const`, i.e. it is generated by a `Const` op.
- `tf.constant` / `tf.zeros` are lowercase because each is a function: it creates an op, and that op produces a tensor (all zeros, in the case of `tf.zeros`);
- `tf.Variable` itself is a class, which is why it is capitalized;
- Ops are bound to the graph, while concrete tensor values are only produced by a session (see the sketch below).
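A minimal sketch of that last point (TF1-style API; the graph name `g` is just illustrative):

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    zer = tf.zeros(shape=(2, 2))  # tf.zeros creates a Const op in g

print(zer.op.graph is g)          # True: the op lives in the graph
with tf.Session(graph=g) as sess:
    print(sess.run(zer))          # the concrete value is only produced by a session
```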
fixing the random seed
reference
- *Making deep learning models stable: a comprehensive guide to setting random seeds* (Chinese blog post)
  Explains, line by line, the meaning of the most basic seed-setting recipe in the solution below.
- *TensorFlow random number generation seed `tf.set_random_seed()`*
  An experiment demonstrating the difference between the graph-level and the op-level random seed, and that the former can override the latter (see the sketch after this reference list).
- stackoverflow: *How to get stable results with TensorFlow, setting random seed*
  The top answer: `tf.set_random_seed` only sets the default seed of the current graph, so every time you create a new graph you should reset the seed within its scope.
- stackoverflow: *reproducible-results-in-tensorflow-with-tf-set-random-seed*
  Random number generation in TF is not only affected by the seed value you set: an op's effective seed is also tied to the id of the most recently created operation in the current graph, so after creating a graph you must set the random seed first and only then create the random-number ops. (Given that, I don't really understand what role `tf.set_random_seed` plays here; it seems to hardly have any effect at all...)
- Note that random number generation on the GPU currently cannot be fixed across repeated runs (a known issue).
- reddit: this answer suggests there is a way (I haven't tried it yet): add the following code on top of the solution below [I recall some people saying this only works for TF2, not TF1...]:
```python
os.environ['TF_DETERMINISTIC_OPS'] = '1'
os.environ['TF_CUDNN_DETERMINISTIC'] = '1'
tf.config.threading.set_inter_op_parallelism_threads(1)
tf.config.threading.set_intra_op_parallelism_threads(1)
```
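As promised above, a minimal sketch (TF1 API; the seed values and names are illustrative, not taken from the references) of resetting the seed within each new graph's scope, and of graph-level vs op-level seeds:

```python
import tensorflow as tf

def build_and_run():
    g = tf.Graph()
    with g.as_default():
        tf.set_random_seed(1217)             # reset the graph-level seed inside each new graph's scope
        a = tf.random_uniform([1])           # no op-level seed: inherits the graph-level one
        b = tf.random_uniform([1], seed=42)  # explicit op-level seed
        with tf.Session(graph=g) as sess:
            return sess.run([a, b])

# identical values on both calls, because each graph is seeded within its own scope
print(build_and_run())
print(build_and_run())
```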
solution
- the basic recipe:
```python
import os
import random

import numpy as np
import tensorflow as tf

def seed_tensorflow(seed=1217):
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    np.random.seed(seed)
    tf.set_random_seed(seed)
    # optionally also: disable_gpu = True
    # the settings below seem to be the configs that work for TF2
    os.environ['TF_DETERMINISTIC_OPS'] = '1'
    os.environ['TF_CUDNN_DETERMINISTIC'] = '1'
    tf.config.threading.set_inter_op_parallelism_threads(1)
    tf.config.threading.set_intra_op_parallelism_threads(1)
```
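A hypothetical usage sketch: call the function once before building any part of the graph, since op-level seeds are derived from the ids of previously created ops:

```python
seed_tensorflow(1217)          # seed first...
x = tf.random_normal([2, 2])   # ...then create random ops, which now get reproducible seeds
with tf.Session() as sess:
    print(sess.run(x))         # identical output across fresh runs of the script
```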
tf assignments and dependencies
- The yellow edges are reference edges: the node's output value is used to mutate the variable the arrow points to, but the pointed-to node has no topological dependency on it in the DAG.
- The gray edges are dataflow edges: the node the arrow points to is computed within the DAG and has a topological dependency on its upstream nodes.
```python
import tensorflow as tf

sess = tf.InteractiveSession()

with tf.name_scope('test_1') as scope:
    W = tf.Variable(10, name='W')
    assign_op = W.assign(100, name='assign_W')

with tf.name_scope('test_2') as scope:
    my_var = tf.Variable(2, name="my_var")
    my_var_times_two = my_var.assign(2 * my_var, name='multiply')
    # sess.run(my_var_times_two)
    # print(f"my_var_times_two is {my_var_times_two}")

writer = tf.summary.FileWriter('graph', sess.graph)
# the graph is already written for TensorBoard once the writer is created;
# the two calls below just flush it to the file immediately
writer.flush()
writer.close()
# sess.close()
```
```python
W = tf.Variable(10)
assign_op = W.assign(100)
sess.run(assign_op)
print(W.eval())
# >>> 100
```
```python
my_var = tf.Variable(2, name="my_var")
my_var_times_two = my_var.assign(2 * my_var)
sess.run(my_var_times_two)
# print(f"my_var_times_two is {my_var_times_two}")
```
```
---------------------------------------------------------------------------
FailedPreconditionError                   Traceback (most recent call last)
~/.conda/envs/tf1.15/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _do_call(self, fn, *args)
   1364     try:
-> 1365       return fn(*args)
   1366     except errors.OpError as e:

~/.conda/envs/tf1.15/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
   1349       return self._call_tf_sessionrun(options, feed_dict, fetch_list,
-> 1350                                       target_list, run_metadata)
   1351

~/.conda/envs/tf1.15/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _call_tf_sessionrun(self, options, feed_dict, fetch_list, target_list, run_metadata)
   1442                                             fetch_list, target_list,
-> 1443                                             run_metadata)
   1444

FailedPreconditionError: Attempting to use uninitialized value my_var_3
	 [[{{node my_var_3/read}}]]

During handling of the above exception, another exception occurred:
```
The difference between the two snippets: `W.assign(100)` only writes to the variable, so it runs even though `W` was never initialized, while `my_var.assign(2 * my_var)` must first read the uninitialized `my_var`, hence the `FailedPreconditionError`. All in all, let's just obediently run `sess.run(tf.global_variables_initializer())` first.
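For completeness, a sketch of the fix, continuing the failing snippet above:

```python
sess.run(tf.global_variables_initializer())  # initialize my_var first
print(sess.run(my_var_times_two))            # now succeeds: 2 * 2 = 4
```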
Dependency requirements in some scenarios
- an op's control dependencies (`tf.control_dependencies`): I don't know where this gets used yet (a sketch of both items follows this list)
- using `initialized_value()` when you don't know whether the Variable you depend on has already been initialized
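A minimal sketch of both mechanisms, assuming the TF1 API used throughout these notes (names like `inc` and `W2` are illustrative):

```python
import tensorflow as tf

# tf.control_dependencies: force an op to run before another, even
# without a dataflow edge between them
x = tf.Variable(1, name='x')
inc = tf.assign_add(x, 1, name='inc')
with tf.control_dependencies([inc]):
    y = tf.identity(x, name='y')  # reading y forces `inc` to run first

# initialized_value(): safely initialize one Variable from another; it
# adds a control dependency on W's initializer, so W is guaranteed to
# be initialized before its value is read
W = tf.Variable(tf.random_normal([2, 2]), name='W')
W2 = tf.Variable(W.initialized_value() * 2, name='W2')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y))   # 2: x was incremented before the read
    print(sess.run(W2))
```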