
I am using the script below to convert my frozen_inference_graph into a TensorRT-optimized one:

    import tensorflow as tf
    from tensorflow.python.compiler.tensorrt import trt_convert as trt

    with tf.Session() as sess:
        # First deserialize your frozen graph:
        with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
            frozen_graph = tf.GraphDef()
            frozen_graph.ParseFromString(f.read())
        # Now you can create a TensorRT inference graph from your
        # frozen graph:
        converter = trt.TrtGraphConverter(
            input_graph_def=frozen_graph,
            nodes_blacklist=['outputs/Softmax'])  # output nodes
        trt_graph = converter.convert()
        # Import the TensorRT graph into a new graph and run:
        output_node = tf.import_graph_def(
            trt_graph,
            return_elements=['outputs/Softmax'])
        sess.run(output_node)

My question is: how can I save this optimized graph to disk so that I can use it later to run inference?

1 Answer


Yes, you can. Just add these two lines:

    saved_model_dir_trt = "./tensorrt_model.trt"
    converter.save(saved_model_dir_trt)
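If you also want to reload the optimized graph later and run inference from it, note that the trt_graph returned by convert() is an ordinary GraphDef, so an alternative is to serialize it yourself and import it in a fresh session. Below is a minimal sketch of that approach; the input tensor name 'image_tensor:0', the dummy input shape, and the file name 'trt_graph.pb' are assumptions for illustration, while 'outputs/Softmax' is the output node from the question. Substitute the names and shape from your own graph.

    import numpy as np
    import tensorflow as tf

    # Write the converted GraphDef (returned by converter.convert()) to disk:
    with tf.gfile.GFile('trt_graph.pb', 'wb') as f:
        f.write(trt_graph.SerializeToString())

    # Later, e.g. in another process, read it back and import it:
    with tf.gfile.GFile('trt_graph.pb', 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name='')
        # Tensor names here are assumptions -- use the real ones from your model:
        input_tensor = graph.get_tensor_by_name('image_tensor:0')
        output_tensor = graph.get_tensor_by_name('outputs/Softmax:0')

    with tf.Session(graph=graph) as sess:
        # Dummy batch with a made-up shape, just to show the feed/run pattern:
        batch = np.zeros((1, 300, 300, 3), dtype=np.uint8)
        predictions = sess.run(output_tensor, feed_dict={input_tensor: batch})
        print(predictions.shape)

Whichever way you save it, load it with the same TensorFlow and TensorRT versions you used for conversion, since the embedded TensorRT engines are version-sensitive.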
