
I have two different types of data (image volumes and coordinates). I would like to use a convolutional neural network on the image volume data and then append some additional information (i.e. the coordinates of the volume) to its output.

Independently, this should create a pretty solid predictor for my function. How can I implement this using Keras?

The only answers I have found online are either ambiguous or use deprecated methods, which I have managed to get working. But I would really like to implement this with the current API, so that I can more easily save the model for later use.

import keras
from keras.models import Sequential
from keras.layers import (Conv3D, MaxPooling3D, Dropout, Flatten,
                          Dense, Activation, Concatenate)

model = Sequential()
model.add(Conv3D(32, kernel_size=(3, 3, 3), activation='relu', input_shape=input_shape))
model.add(Conv3D(64, (3, 3, 3), activation='relu'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
print(model.output_shape)

# The additional data (the coordinates x, y, z)
extra = Sequential()
extra.add(Activation('sigmoid', input_shape=(3,)))
print(extra.output_shape)

merged = Concatenate([model, extra])

# The new model should encompass the outputs of the convolutional network
# and the coordinates that have been merged. But how?
new_model = Sequential()
new_model.add(Dense(128, activation='relu'))
new_model.add(Dropout(0.8))
new_model.add(Dense(32, activation='sigmoid'))
new_model.add(Dense(num_classes, activation='softmax'))

new_model.compile(loss=keras.losses.categorical_crossentropy,
                  optimizer=keras.optimizers.Adadelta(),
                  metrics=['accuracy'])

2 Answers


Sequential models are not suited for creating models with branches.

You can have the two independent models as Sequential models, as you did, but from the Concatenate on, you should start using the functional Model API.

The idea is to get the output tensors of the two models and feed them in other layers to get new output tensors.

So, considering you have model and extra:

mergedOutput = Concatenate()([model.output, extra.output]) 

This mergedOutput is a tensor. You can either create the last part of the model using this tensor, or create the last part independently and call it on this tensor. The second approach may be good if you want to train each model separately (which doesn't seem to be your case).

Now, creating the new model as a functional API model:

out = Dense(128, activation='relu')(mergedOutput)
out = Dropout(0.8)(out)
out = Dense(32, activation='sigmoid')(out)
out = Dense(num_classes, activation='softmax')(out)

new_model = Model(
    [model.input, extra.input],  # model with two input tensors
    out                          # and one output tensor
)
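From here, training works like any other Keras model; the only difference is that fit expects a list with one array per input, in the same order as the inputs. A minimal sketch, assuming volumes, coords and labels are NumPy arrays you already have (the names are just placeholders):

new_model.compile(loss=keras.losses.categorical_crossentropy,
                  optimizer=keras.optimizers.Adadelta(),
                  metrics=['accuracy'])

# one array per input, matching [model.input, extra.input]
new_model.fit([volumes, coords], labels,
              batch_size=32,
              epochs=10)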

An easier approach is to take all three models you have already created and use them to create a combined model:

model = Sequential()      # your first model
extra = Sequential()      # your second model
new_model = Sequential()  # all these three exactly as you did

# In this case, you just need to add an input shape to new_model,
# compatible with the concatenated output of the previous models.
new_model.add(FirstNewModelLayer(..., input_shape=(someValue,)))
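For concreteness, here is a sketch of what that could look like with your own layers. The input size is simply the flattened Conv3D output plus the 3 coordinates, which you can read from the models you already built (concatSize is just an illustrative name):

# size of the concatenated vector: flattened conv features + 3 coordinates
concatSize = model.output_shape[1] + extra.output_shape[1]

new_model = Sequential()
new_model.add(Dense(128, activation='relu', input_shape=(concatSize,)))
new_model.add(Dropout(0.8))
new_model.add(Dense(32, activation='sigmoid'))
new_model.add(Dense(num_classes, activation='softmax'))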

Join them like this:

mergedOutput = Concatenate()([model.output, extra.output])
finalOutput = new_model(mergedOutput)

fullModel = Model([model.input, extra.input], finalOutput)
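Training and prediction then work exactly as with the purely functional version: compile fullModel and pass the two inputs as a list (again, the array names are placeholders):

fullModel.compile(loss=keras.losses.categorical_crossentropy,
                  optimizer=keras.optimizers.Adadelta(),
                  metrics=['accuracy'])
fullModel.summary()  # one graph: two inputs, one softmax output

predictions = fullModel.predict([volumes, coords])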

4 Comments

Thank you very much for clarifying the example in the API. I appreciate it!
One more question: how can I save and load this kind of model? Is it sufficient to save just the fullModel?
Unfortunately I have never been able to save a full model in Keras; I don't know why. What I do is fullModel.save_weights(filename) and fullModel.load_weights(filename). This is enough to save and load a trained model, but it may cause difficulties when you want to resume training (the optimizer state is lost in the process and will have to adjust itself again during training).
@DanielMöller Could you have a look at this?

Use the functional API of Keras (https://keras.io/models/model/). You can simply apply layers to your merged layer. The functional API works like this: you have a tensor, you apply a layer to it, and you get a new tensor back, which you can feed into the next layer. Because pretty much everything in Keras is a tensor, this works quite nicely.

An example of this is:

activation = Dense(128, activation='relu')(merged) 
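To make the chaining concrete, here is a minimal self-contained sketch (the layer sizes and input shapes are arbitrary placeholders, not taken from your model):

from keras.layers import Input, Dense, Concatenate
from keras.models import Model

image_features = Input(shape=(64,))   # e.g. a flattened conv output
coordinates = Input(shape=(3,))       # the x, y, z coordinates

merged = Concatenate()([image_features, coordinates])
hidden = Dense(128, activation='relu')(merged)   # each call returns a new tensor
output = Dense(10, activation='softmax')(hidden)

model = Model([image_features, coordinates], output)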
