
I am building and training a neural network model on TensorFlow version 2.17.0-nightly and then converting it to a TFLite model.
But when I try to load the model, I get the following message:
Didn't find op for builtin opcode 'FULLY_CONNECTED' version '12'. An older version of this builtin might be supported. Are you using an old TFLite binary with a newer model?
Model Description:

self.model = tf.keras.models.Sequential([
            tf.keras.layers.LSTM(80, input_shape=input_shape, return_sequences=True),
            tf.keras.layers.LSTM(128, activation='tanh', return_sequences=False),
            tf.keras.layers.Dense(80, activation='relu'),
            tf.keras.layers.Dense(64, activation='relu'),
            tf.keras.layers.Dense(32, activation='relu'),
            tf.keras.layers.Dense(10, activation='sigmoid'),
            tf.keras.layers.Dense(4, activation='sigmoid')
        ])

Code for model conversion:

import tensorflow as tf

lstm_model = tf.keras.models.load_model('lstm_model_TEST.keras', custom_objects={'F1_score': F1_score})

# Convert the model.
run_model = tf.function(lambda x: lstm_model(x))

BATCH_SIZE = 1
STEPS = 10
INPUT_SIZE = 5

concrete_func = run_model.get_concrete_function(
    tf.TensorSpec([BATCH_SIZE, STEPS, INPUT_SIZE], lstm_model.inputs[0].dtype))

converter = tf.lite.TFLiteConverter.from_keras_model(lstm_model)

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS
]
converter.experimental_new_converter = True

tflite_model = converter.convert()

# Save the model.
with open('model_LSTM.tflite', 'wb') as f:
    f.write(tflite_model)

tf.__version__ = 2.17.0-dev20240514

Dependencies in Android Studio:

 implementation("org.tensorflow:tensorflow-lite-support:0.4.4")
 implementation("org.tensorflow:tensorflow-lite-metadata:0.4.4")
 implementation("org.tensorflow:tensorflow-lite:2.16.1")
 implementation("org.tensorflow:tensorflow-lite-gpu:2.16.1")

Code for loading the model in the app:

val model = ModelLstm.newInstance(this)

I also tried converting the model without tf-nightly, using TensorFlow version 2.16.1, but the conversion failed.

2 Answers


  1. Chosen as BEST ANSWER

    While working on the project I found this solution: I changed the following repository settings in Gradle:

    dependencyResolutionManagement {
        repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
        repositories {
            google()
            mavenCentral()
            maven {
                // Sonatype snapshots repository, needed to resolve the nightly
                // (SNAPSHOT) TensorFlow Lite artifacts.
                url = uri("http://oss.sonatype.org/content/repositories/snapshots")
                isAllowInsecureProtocol = true
            }
        }
    }
    

    This allowed me to use the nightly (SNAPSHOT) builds of the TFLite library, which solved the problem for me. Not the best solution, though.
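
    For reference, the nightly TFLite Android artifacts in that snapshots repository are published as SNAPSHOT versions, so the dependency block ends up looking roughly like the sketch below. The coordinates follow the TensorFlow Lite documentation, but double-check which nightly artifacts actually exist for the support and metadata libraries:

    dependencies {
        // Nightly TensorFlow Lite runtime and GPU delegate, resolved from the
        // snapshots repository added above.
        implementation("org.tensorflow:tensorflow-lite:0.0.0-nightly-SNAPSHOT")
        implementation("org.tensorflow:tensorflow-lite-gpu:0.0.0-nightly-SNAPSHOT")

        // Keep the support/metadata libraries on their released versions unless a
        // matching nightly snapshot is published for them as well.
        implementation("org.tensorflow:tensorflow-lite-support:0.4.4")
        implementation("org.tensorflow:tensorflow-lite-metadata:0.4.4")
    }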


  2. Recently I had the same problem, so I am posting the Colab code I used below in case it helps. For now I use TensorFlow 2.15.0, both on Google Colab and in Android Studio; with the older converter the exported model uses older builtin op versions, which the stable TFLite runtime on Android can still load. The commands below were needed to successfully downgrade TensorFlow to 2.15.0, at least in my environment.

    !pip install orbax-checkpoint==0.4.0
    !pip install tensorstore==0.1.40
    !pip install tf-keras==2.15.0
    !pip install tensorflow==2.15.0
    !pip install ml-dtypes==0.2.0
    

    To verify the TensorFlow version:

    import tensorflow as tf
    print(tf.__version__)
    

    To convert the model:

    # Convert the trained Keras model (`model`) to TF Lite format.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_float_model = converter.convert()
    
    # Show model size in KBs.
    float_model_size = len(tflite_float_model) / 1024
    print('Float model size = %dKBs.' % float_model_size)
    
    # Re-convert the model to TF Lite using quantization.
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_quantized_model = converter.convert()
    
    # Show model size in KBs.
    quantized_model_size = len(tflite_quantized_model) / 1024
    print('Quantized model size = %dKBs,' % quantized_model_size)
    print('which is about %d%% of the float model size.'
          % (quantized_model_size * 100 / float_model_size))
    

    To download the model:

    # Save the quantized model to a file in the Colab working directory.
    with open('hasyv2.tflite', 'wb') as f:
        f.write(tflite_quantized_model)

    # Download the digit classification model to the local machine.
    from google.colab import files
    files.download('hasyv2.tflite')

    print('`hasyv2.tflite` has been downloaded')
    
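    If you load the converted file manually instead of through a generated wrapper like ModelLstm, a minimal Kotlin sketch looks roughly like the code below. The asset name and tensor shapes are assumptions taken from the question's LSTM model (input [1, 10, 5], output [1, 4]), so adjust them for your own model:

    import android.content.Context
    import org.tensorflow.lite.Interpreter
    import java.io.FileInputStream
    import java.nio.MappedByteBuffer
    import java.nio.channels.FileChannel

    // Memory-map a .tflite file bundled in app/src/main/assets.
    // The file must be stored uncompressed (add "tflite" to noCompress if needed).
    fun loadInterpreter(context: Context, assetName: String = "model_LSTM.tflite"): Interpreter {
        val fd = context.assets.openFd(assetName)
        val buffer: MappedByteBuffer = FileInputStream(fd.fileDescriptor).channel.map(
            FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength
        )
        return Interpreter(buffer)
    }

    // Run a single inference. Input shape [1, 10, 5] and output shape [1, 4]
    // are taken from the model in the question; change them for other models.
    fun runOnce(interpreter: Interpreter, input: Array<Array<FloatArray>>): FloatArray {
        val output = Array(1) { FloatArray(4) }
        interpreter.run(input, output)
        return output[0]
    }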