
I am trying to run a TensorFlow Lite model in my App on a smartphone. First, I trained the model on numerical data using LSTM layers and built the model with TensorFlow.Keras. I used TensorFlow 2.x and saved the trained model on a server. After that, the App downloads the model to the smartphone's internal memory and loads it into the interpreter using a "MappedByteBuffer". Up to here everything works correctly.

The problem is that the interpreter cannot read and run the model.
I also added the required dependencies to build.gradle.

The conversion code to a TFLite model in Python:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM
from tensorflow.keras import regularizers

# Create the network
model = Sequential()
model.add(LSTM(..., name='First_layer'))
model.add(Dropout(rate=Drop_out))
model.add(LSTM(..., name='Second_layer'))
model.add(Dropout(rate=Drop_out))

# compile model
model.compile(loss=keras.losses.mae,
              optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
              metrics=["mae"])

# fit model
model.fit(...)

# save the model
tf.saved_model.save(model, 'saved_model')
print("Model type", model.dtype)  # Model type is float32 and size around 2 MB

# Convert saved model into TFLite
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')
tflite_model = converter.convert()

with open("Model.tflite", "wb") as f:
    f.write(tflite_model)

I also tried another conversion approach using the Keras model directly:

# converter = tf.lite.TFLiteConverter.from_keras_model(model)
# tflite_model = converter.convert()

After this step, "Model.tflite" is converted and downloaded to the smartphone's internal memory.

Android Studio code:

private Interpreter tflite;

try {
    tflite = new Interpreter(loadModelFile());
    Log.d("Load_model", "Created a TensorFlow Lite interpreter of AutoAuth.");
} catch (IOException e) {
    Log.e("Load_model", "IOException loading the tflite file", e);
}

private MappedByteBuffer loadModelFile() throws IOException {
    String model_path = model_directory + model_name + ".tflite";
    Log.d(TAG, model_path);
    File file = new File(model_path);
    if (file.exists()) {
        FileInputStream inputStream = new FileInputStream(file);
        FileChannel fileChannel = inputStream.getChannel();
        // Memory-map the downloaded model file so the interpreter can read it
        return fileChannel.map(FileChannel.MapMode.READ_ONLY, 0, file.length());
    } else {
        return null;
    }
}

The "loadModelFile()" function is working correctly because I checked it with another tflite model using MNIST dataset for image classification. The problem is only the interpreter.

These are the build.gradle contents:

android {
    aaptOptions {
        noCompress "tflite"
    }
    defaultConfig {
        ndk {
            abiFilters 'armeabi-v7a', 'arm64-v8a'
        }
    }
}

dependencies {
    implementation 'com.jakewharton:butterknife:8.8.1'
    implementation 'org.tensorflow:tensorflow-lite:0.1.2-nightly'
    annotationProcessor 'com.jakewharton:butterknife-compiler:8.8.1'
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    //noinspection GradleCompatible
    implementation 'com.android.support:appcompat-v7:28.0.0'
    implementation 'com.android.support.constraint:constraint-layout:2.0.4'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'com.android.support.test:runner:1.0.2'
    androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2'
}

Whenever I run the App from Android Studio, I get one of the following errors:

1- (screenshot of the first error)

or

2- (screenshot of the second error)

I have gone through many resources and threads about saving trained models, TFLite conversion, and interpreters.
I have been trying to solve this issue for five days with no luck. Can anyone offer a solution?

2 Answers


  1. Referring to one of the most recent TFLite Android app examples might help: the Model Personalization App. This demo app uses a transfer-learning model instead of an LSTM, but the overall workflow should be similar.

    As Farmaker mentioned in the comments, try using the SNAPSHOT build in the Gradle dependency:

    implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly-SNAPSHOT'
    

    To load the model properly, can you try:

    // Memory-maps a model file bundled in the app's assets folder
    protected MappedByteBuffer loadMappedFile(String filePath) throws IOException {
        AssetFileDescriptor fileDescriptor = assetManager.openFd(this.directoryName + "/" + filePath);

        FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
        FileChannel fileChannel = inputStream.getChannel();
        long startOffset = fileDescriptor.getStartOffset();
        long declaredLength = fileDescriptor.getDeclaredLength();
        return fileChannel.map(MapMode.READ_ONLY, startOffset, declaredLength);
    }
    

    This snippet can also be found in the GitHub example link I posted above.
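
    Once the buffer is mapped, it can be passed straight to the interpreter. A minimal sketch (the options and thread count are only illustrative):

    Interpreter.Options options = new Interpreter.Options();
    options.setNumThreads(2);  // illustrative option, not required
    Interpreter tflite = new Interpreter(loadMappedFile("Model.tflite"), options);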

  2. loadMappedFile has a ready-made implementation in the TensorFlow Lite support library's FileUtil:

    import org.tensorflow.lite.support.common.FileUtil;

    MappedByteBuffer tfliteModel = FileUtil.loadMappedFile(activity, getModelPath());
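
    Note that FileUtil.loadMappedFile reads the model from the app's assets folder and requires the org.tensorflow:tensorflow-lite-support dependency in build.gradle. A minimal sketch of wiring it into the interpreter, assuming the model is bundled as "Model.tflite" in assets:

    import org.tensorflow.lite.Interpreter;
    import org.tensorflow.lite.support.common.FileUtil;
    import java.nio.MappedByteBuffer;

    // Loads "Model.tflite" from src/main/assets and hands it to the interpreter
    MappedByteBuffer tfliteModel = FileUtil.loadMappedFile(activity, "Model.tflite");
    Interpreter interpreter = new Interpreter(tfliteModel);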
    