
I got this error when I tried to modify the learning rate parameter of the SGD optimizer in Keras. Did I miss something in my code, or is my Keras not installed properly?

Here is my code:

from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense, Flatten, GlobalAveragePooling2D, Activation
import keras
from keras.optimizers import SGD

model = Sequential()
model.add(Dense(64, kernel_initializer='uniform', input_shape=(10,)))
model.add(Activation('softmax'))
model.compile(loss='mean_squared_error', optimizer=SGD(lr=0.01), metrics=['accuracy'])

and here is the error message:

Traceback (most recent call last):
  File "C:\TensorFlow\Keras\ResNet-50\test_sgd.py", line 10, in <module>
    model.compile(loss='mean_squared_error', optimizer=SGD(lr=0.01), metrics=['accuracy'])
  File "C:\Users\nsugiant\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\keras\_impl\keras\models.py", line 787, in compile
    **kwargs)
  File "C:\Users\nsugiant\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\keras\_impl\keras\engine\training.py", line 632, in compile
    self.optimizer = optimizers.get(optimizer)
  File "C:\Users\nsugiant\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\keras\_impl\keras\optimizers.py", line 788, in get
    raise ValueError('Could not interpret optimizer identifier:', identifier)
ValueError: ('Could not interpret optimizer identifier:', <keras.optimizers.SGD object at 0x000002039B152FD0>)


The reason is that you are using the tensorflow.python.keras API for the model and layers but keras.optimizers for SGD. These are two different Keras implementations, TensorFlow's bundled Keras and standalone Keras, and they cannot work together. You have to change everything to one version; then it should work.
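For example, here is a minimal sketch of the question's snippet with every import taken from the TensorFlow-bundled Keras (the learning_rate spelling applies to recent tf.keras; older versions used lr):

# A sketch: model, layers, and optimizer all come from the same
# Keras implementation (the one bundled with TensorFlow).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.optimizers import SGD

model = Sequential()
model.add(Dense(64, kernel_initializer='uniform', input_shape=(10,)))
model.add(Activation('softmax'))
model.compile(loss='mean_squared_error',
              optimizer=SGD(learning_rate=0.01),
              metrics=['accuracy'])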


1 Comment

This doesn't work; you should give a working solution.

I am a bit late here, but your issue is that you have mixed the TensorFlow Keras and standalone Keras APIs in your code. The optimizer and the model should come from the same API. Use the Keras API for everything, as below:

from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM, BatchNormalization
from keras.callbacks import TensorBoard
from keras.callbacks import ModelCheckpoint
from keras.optimizers import adam

# Set model
model = Sequential()
model.add(LSTM(128, input_shape=(train_x.shape[1:]), return_sequences=True))
model.add(Dropout(0.2))
model.add(BatchNormalization())

# Set optimizer
opt = adam(lr=0.001, decay=1e-6)

# Compile model
model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer=opt,
    metrics=['accuracy']
)

I have used adam in this example. Substitute your preferred optimizer following the same pattern.

Hope this helps.

1 Comment

Alternatively, if you'd like to use tensorflow.keras instead of keras, try the example at the following link

This problem is mainly caused by mismatched versions: the tensorflow.keras version may not be the same as the standalone keras version, causing the error mentioned by @Priyanka.

For me, whenever this error arises, I pass the name of the optimizer as a string and let the backend figure it out. For example, instead of

tf.keras.optimizers.Adam 

or

keras.optimizers.Adam 

I do

model.compile(optimizer='adam', loss=keras.losses.binary_crossentropy, metrics=['accuracy'])

4 Comments

Yes, you can pass the optimizer's string name as the value of the optimizer argument, but using the tf.keras.optimizers.Adam class is more flexible when you want to adjust optimizer settings, for example the learning rate.
Just to add: in the current TF version (2.4.1), optimizers have to be instantiated, not passed as a class. So the exact code is "tf.keras.optimizers.Adam()".
Then how can I add lr with this syntax? I tried model.compile(optimizer='adam'(lr=0.0001), loss=keras.losses.binary_crossentropy, metrics=['accuracy']) but it did not work.
How do you specify the Adam parameters this way?
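Answering the parameter questions in the comments above: a string identifier cannot carry arguments, so for a custom learning rate you must instantiate the optimizer class. A minimal sketch, assuming model is an already-built Keras model:

import tensorflow as tf

# Option 1: string identifier -- default hyperparameters only.
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Option 2: a configured instance, needed for a non-default learning rate.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
              loss='binary_crossentropy',
              metrics=['accuracy'])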
Since TensorFlow 2.0, a new API is available directly via tensorflow:

from tensorflow.keras.optimizers import SGD

This works well. The solution works with tensorflow==2.2.0rc2 and Keras==2.2.4 (on Win10).

Please also note that this version uses learning_rate as the parameter, no longer lr.
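For example, a short sketch of the TF 2.x spelling (assuming model is already defined; the momentum value is illustrative):

from tensorflow.keras.optimizers import SGD

# learning_rate replaces the old lr argument in TF 2.x
opt = SGD(learning_rate=0.01, momentum=0.9)
model.compile(loss='mean_squared_error', optimizer=opt, metrics=['accuracy'])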


For some libraries (e.g. keras_radam) you'll need to set up an environment variable before the import:

import os
os.environ['TF_KERAS'] = '1'

import tensorflow
import your_library


In my case it was because I missed the parentheses. I am using tensorflow_addons, so my code was like

model.compile(optimizer=tfa.optimizers.LAMB, loss='binary_crossentropy', metrics=['binary_accuracy']) 

And it gives

ValueError: ('Could not interpret optimizer identifier:', <class 'tensorflow_addons.optimizers.lamb.LAMB'>)

Then I changed my code into:

model.compile(optimizer=tfa.optimizers.LAMB(), loss='binary_crossentropy', metrics=['binary_accuracy']) 

and it works.


Recently, in the latest update of the Keras API (2.5.0), importing the Adam optimizer shows the following error:

from keras.optimizers import Adam
ImportError: cannot import name 'Adam' from 'keras.optimizers'

Instead, use the following to import optimizers (i.e. Adam):

from keras.optimizers import adam_v2

optimizer = adam_v2.Adam(learning_rate=lr, decay=lr/epochs)
model.compile(loss='--', optimizer=optimizer, metrics=['--'])


Running the Keras documentation example https://keras.io/examples/cifar10_cnn/ after installing the latest keras and tensorflow versions

(at the time of this writing, tensorflow 2.0.0a0 and Keras 2.2.4),

I had to explicitly import the optimizer that the example uses; specifically, the line at the top of the example:

opt = tensorflow.keras.optimizers.rmsprop(lr=0.0001, decay=1e-6) 

was replaced by

from tensorflow.keras.optimizers import RMSprop

opt = RMSprop(lr=0.0001, decay=1e-6)

In recent versions the API "broke", and keras.stuff in many cases became tensorflow.keras.stuff.


Use one style in one kernel; try not to mix

from keras.optimizers import sth

with

from tensorflow.keras.optimizers import sth


I tried the following and it worked for me:

from keras import optimizers

sgd = optimizers.SGD(lr=0.01)

model.compile(loss='mean_squared_error', optimizer=sgd)


Use

from tensorflow.keras import optimizers

instead of

from keras import optimizers


Try changing your import lines to

from keras.models import Sequential
from keras.layers import Dense, ...

Your imports seem a little strange to me. Maybe you could elaborate more on that.


I had misplaced a parenthesis and got this error.

Initially it was

x = Conv2D(filters[0], (3, 3), use_bias=False, padding="same", kernel_regularizer=l2(reg), x))

The corrected version was

x = Conv2D(filters[0], (3, 3), use_bias=False, padding="same", kernel_regularizer=l2(reg))(x)


I tried everything in this thread, but nothing worked. However, I managed to fix it: for me, the issue was that passing the optimizer class, i.e. tensorflow.keras.optimizers.Adam, caused the error, while passing an instance, i.e. tensorflow.keras.optimizers.Adam(), worked. So my code looks like:

model.compile(
    loss=tensorflow.keras.losses.categorical_crossentropy,  # pass the loss function itself, not a call to it
    optimizer=tensorflow.keras.optimizers.Adam()
)

Looking at the TensorFlow GitHub, I am not the only one with this error for whom instantiating the optimizer rather than passing the class fixed it.


Just pass the optimizer's string name:

optimizer = 'sgd'  # or 'rmsprop', etc.
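For example, a minimal sketch (assuming model is an already-built Keras model); Keras resolves the string to the corresponding optimizer with default settings:

model.compile(optimizer='sgd', loss='mean_squared_error', metrics=['accuracy'])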


I got the same error message and resolved this issue, in my case, by replacing the assignment of the optimizer class:

optimizer=keras.optimizers.Adam

with an instance instead:

optimizer=keras.optimizers.Adam()


This also happens in keras_core (the new library that will soon become Keras 3.0). The solution is given in the comments inside the snippet below; TL;DR: change the optimizer from keras.optimizers.x.y.z to tf.keras.optimizers.x.y.z.

In the following code snippet:

import tensorflow as tf
import keras_cv
import tensorflow_datasets as tfds
import keras_core as keras

# Create a preprocessing pipeline with augmentations
BATCH_SIZE = 16
NUM_CLASSES = 3

augmenter = keras_cv.layers.Augmenter(
    [
        keras_cv.layers.RandomFlip(),
        keras_cv.layers.RandAugment(value_range=(0, 255)),
        keras_cv.layers.CutMix(),
    ],
)

def preprocess_data(images, labels, augment=False):
    labels = tf.one_hot(labels, NUM_CLASSES)
    inputs = {"images": images, "labels": labels}
    outputs = inputs
    if augment:
        outputs = augmenter(outputs)
    return outputs['images'], outputs['labels']

train_dataset, test_dataset = tfds.load(
    'rock_paper_scissors',
    as_supervised=True,
    split=['train', 'test'],
)
train_dataset = train_dataset.batch(BATCH_SIZE).map(
    lambda x, y: preprocess_data(x, y, augment=True),
    num_parallel_calls=tf.data.AUTOTUNE).prefetch(tf.data.AUTOTUNE)
test_dataset = test_dataset.batch(BATCH_SIZE).map(
    preprocess_data,
    num_parallel_calls=tf.data.AUTOTUNE).prefetch(tf.data.AUTOTUNE)

# Create a model using a pretrained backbone
backbone = keras_cv.models.EfficientNetV2Backbone.from_preset(
    "efficientnetv2_b0_imagenet"
)
model = keras_cv.models.ImageClassifier(
    backbone=backbone,
    num_classes=NUM_CLASSES,
    activation="softmax",
)
model.compile(
    loss='categorical_crossentropy',
    # Here lies the problem: convert it to tf.keras.optimizers.Adam as below,
    # instead of pure keras.optimizers.Adam()
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    metrics=['accuracy']
)

# Train your model
model.fit(
    train_dataset,
    validation_data=test_dataset,
    epochs=8,
    verbose=1
)


I also faced the same problem.

When did the problem occur?

When I set [value] as the bias_initializer of keras.layers.Dense(), I got the error:

Could not interpret optimizer identifier [...]

How did I solve the problem?

Calling the tf.keras.initializers.Constant([value]) function when setting the bias_initializer solved the problem, which makes sense, since a bare [value] is not interpretable by keras.layers.Dense(). This TensorFlow tutorial and this SO answer helped me.
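A minimal sketch of the fix (the layer width and the 0.1 bias value are illustrative):

import tensorflow as tf

# A bare number is not a valid initializer and triggers a
# "could not interpret ... identifier" error:
# bad = tf.keras.layers.Dense(64, bias_initializer=0.1)

# Wrapping the value in a Constant initializer works:
layer = tf.keras.layers.Dense(
    64,
    bias_initializer=tf.keras.initializers.Constant(0.1),
)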
