
I am working with TensorFlow 2.0 and want to store the following Keras model as a frozen graph.

    import tensorflow as tf

    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(64, input_shape=[100]))
    model.add(tf.keras.layers.Dense(32, activation='relu'))
    model.add(tf.keras.layers.Dense(16, activation='relu'))
    model.add(tf.keras.layers.Dense(2, activation='softmax'))
    model.summary()
    model.save('./models/')

I can't find any good examples of how to do this in TensorFlow 2.0. I have found the freeze_graph.py file in the TensorFlow GitHub repository, but I find it hard to wrap my head around it.

I import the freeze_graph function from the file mentioned above using:

from tensorflow.python.tools.freeze_graph import freeze_graph 

But what exactly do I have to provide to the freeze_graph function itself? Here I marked the arguments I am not sure about with a question mark.

    freeze_graph(input_graph=?,
                 input_saver='',
                 input_binary=False,
                 input_checkpoint=?,
                 output_node_names=?,
                 restore_op_name='',
                 filename_tensor_name='',
                 output_graph='./frozen_graph.pb',
                 clear_devices=True,
                 initializer_nodes='')

Can someone provide a simple example that shows how I can store the model above as a frozen graph using the freeze_graph function?

1 Answer


freeze_graph is gone in TensorFlow 2.0. You can check the discussion here: Tensorflow 2.0 : frozen graph support.

Note, however, that the model.save call you already have in your code writes a SavedModel, which contains a .pb that is ready for inference.
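For example, here is a minimal sketch of that SavedModel route, using the model from your question. It assumes a TF 2.x version where model.save defaults to the SavedModel format when given a directory path:

    import tensorflow as tf

    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(64, input_shape=[100]))
    model.add(tf.keras.layers.Dense(32, activation='relu'))
    model.add(tf.keras.layers.Dense(16, activation='relu'))
    model.add(tf.keras.layers.Dense(2, activation='softmax'))

    # Writes a SavedModel directory: saved_model.pb plus a variables/ folder
    model.save('./models/')

    # Reload it later and run inference directly
    reloaded = tf.keras.models.load_model('./models/')
    print(reloaded.predict(tf.random.normal([1, 100])))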

As an alternative, if you specifically need a single frozen GraphDef, you can use convert_variables_to_constants_v2. Below is the sample code.

    import tensorflow as tf
    from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(64, input_shape=(1,)))
    model.add(tf.keras.layers.Dense(32, activation='relu'))
    model.add(tf.keras.layers.Dense(16, activation='relu'))
    model.add(tf.keras.layers.Dense(1, activation='softmax'))
    model.compile(optimizer='adam', loss='mse')
    model.summary()

    # Convert the Keras model to a ConcreteFunction
    full_model = tf.function(lambda x: model(x))
    full_model = full_model.get_concrete_function(
        tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype, name="yourInputName"))

    # Get a frozen ConcreteFunction (variables baked in as constants)
    frozen_func = convert_variables_to_constants_v2(full_model)
    frozen_func.graph.as_graph_def()

    layers = [op.name for op in frozen_func.graph.get_operations()]
    print("-" * 50)
    print("Frozen model layers: ")
    for layer in layers:
        print(layer)

    print("-" * 50)
    print("Frozen model inputs: ")
    print(frozen_func.inputs)
    print("Frozen model outputs: ")
    print(frozen_func.outputs)

    # Save the frozen graph from the frozen ConcreteFunction to disk
    tf.io.write_graph(graph_or_graph_def=frozen_func.graph,
                      logdir="./frozen_models",
                      name="frozen_graph.pb",
                      as_text=False)

    ### USAGE ###
    def wrap_frozen_graph(graph_def, inputs, outputs, print_graph=False):
        def _imports_graph_def():
            tf.compat.v1.import_graph_def(graph_def, name="")

        wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])
        import_graph = wrapped_import.graph

        print("-" * 50)
        print("Frozen model layers: ")
        layers = [op.name for op in import_graph.get_operations()]
        if print_graph:
            for layer in layers:
                print(layer)
        print("-" * 50)

        return wrapped_import.prune(
            tf.nest.map_structure(import_graph.as_graph_element, inputs),
            tf.nest.map_structure(import_graph.as_graph_element, outputs))

    ### Example Usage ###
    # Load the frozen graph using TensorFlow 1.x functions
    with tf.io.gfile.GFile("./frozen_models/frozen_graph.pb", "rb") as f:
        graph_def = tf.compat.v1.GraphDef()
        loaded = graph_def.ParseFromString(f.read())

    # Wrap the frozen graph into a ConcreteFunction
    frozen_func = wrap_frozen_graph(graph_def=graph_def,
                                    inputs=["yourInputName:0"],
                                    outputs=["Identity:0"],
                                    print_graph=True)

    print("-" * 50)
    print("Frozen model inputs: ")
    print(frozen_func.inputs)
    print("Frozen model outputs: ")
    print(frozen_func.outputs)

    # Get a prediction for a test input
    predictions = frozen_func(yourInputName=tf.constant([[3.]]))

    # Print the prediction
    print("-" * 50)
    print("Example prediction reference:")
    print(predictions[0].numpy())

Comments

Can I assign custom input and output node names such as input and output instead of x and Identity? Is the nesting of the _imports_graph_def() function inside the wrap_frozen_graph() function really necessary?
@random9 Yes, the names can be changed, and the nested _imports_graph_def() is indeed necessary.
Can you please show how the variable names can be changed?
Hi @random9, for the input, just set the name on this line: tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype, name='yourInputName') and change the value of inputs=["yourInputName:0"] accordingly (see the short sketch after these comments). For the output, unfortunately TensorFlow always appends an Identity node at the end of the model; I haven't found a solution for that yet.
I get the following error when I try your approach of renaming the input: TypeError: Expected argument names ['Input'] but got values for ['x']. Missing: ['Input']. Any idea why that is the case?
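A minimal sketch of the renaming step discussed in the comments above, reusing the model, convert_variables_to_constants_v2 and wrap_frozen_graph from the answer (the name "input_tensor" is arbitrary, not required by TensorFlow):

    # Name the input via the TensorSpec; that name becomes the placeholder
    # name in the frozen graph ("input_tensor" here is just an example).
    full_model = tf.function(lambda x: model(x))
    concrete_func = full_model.get_concrete_function(
        tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype, name="input_tensor"))
    frozen_func = convert_variables_to_constants_v2(concrete_func)

    print([t.name for t in frozen_func.inputs])   # e.g. ['input_tensor:0']
    print([t.name for t in frozen_func.outputs])  # e.g. ['Identity:0'] (appended automatically)

    tf.io.write_graph(frozen_func.graph, "./frozen_models", "frozen_graph.pb", as_text=False)

    # When loading the frozen graph later, refer to the same tensor names:
    # wrap_frozen_graph(graph_def, inputs=["input_tensor:0"], outputs=["Identity:0"])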
