
Does anyone know how to achieve this? I have been trying with TensorFlow Lite, but I always get "Interpreter has not been initialized" and no further error messages. Is there another library that can do this? I'm using a model trained on pictures, in .tflite format.

Edit:
I'm using these libraries:

  intl: ^0.19.0
  tflite_flutter: ^0.10.4
  image_picker: ^0.8.4+4
  path_provider: ^2.0.11
  image: ^3.0.1

This is my code:

import 'package:flutter/material.dart';
import 'package:image_picker/image_picker.dart';
import 'dart:io';
import 'package:tflite_flutter/tflite_flutter.dart' as tfl;
import 'package:image/image.dart' as img;
import 'package:flutter/services.dart' show rootBundle;

class LocalRecognitionScreen extends StatefulWidget {
  @override
  _LocalRecognitionScreenState createState() => _LocalRecognitionScreenState();
}

class _LocalRecognitionScreenState extends State<LocalRecognitionScreen> {
  File? _image;
  final picker = ImagePicker();
  String _result = '';
  late tfl.Interpreter _interpreter;
  late List<String> _labels;

  @override
  void initState() {
    super.initState();
    _loadModel();
  }

  Future<void> _loadModel() async {
    try {
      _interpreter = await tfl.Interpreter.fromAsset('assets/70.tflite');
      _labels = await _loadLabels('assets/70.txt');
    } catch (e) {
      print('Error loading model: $e');
    }
  }

  Future<List<String>> _loadLabels(String path) async {
    final rawLabels = await rootBundle.loadString(path);
    // Split on newlines (the original split on the literal character 'n')
    // and drop empty lines so the label count matches the model output.
    return rawLabels
        .split('\n')
        .where((label) => label.trim().isNotEmpty)
        .toList();
  }

  Future<void> _getImage() async {
    // getImage is deprecated in image_picker 0.8.x; pickImage returns XFile?.
    final pickedFile = await picker.pickImage(source: ImageSource.camera);

    if (pickedFile != null) {
      setState(() {
        _image = File(pickedFile.path);
      });
    } else {
      ScaffoldMessenger.of(context).showSnackBar(
        SnackBar(content: Text('No se seleccionó ninguna imagen')),
      );
    }
  }

  Future<void> _recognizeImage() async {
    if (_image == null) return;

    final imageBytes = await _image!.readAsBytes();
    final image = img.decodeImage(imageBytes);

    if (image == null) {
      ScaffoldMessenger.of(context).showSnackBar(
        SnackBar(content: Text('Error al decodificar la imagen')),
      );
      return;
    }

    final input = _preprocessImage(image);
    final output =
        List.filled(1 * _labels.length, 0).reshape([1, _labels.length]);

    try {
      await Future.delayed(const Duration(seconds: 1));

      _interpreter.run(input, output);
      final resultIndex = output[0]
          .indexOf(output[0].reduce((curr, next) => curr > next ? curr : next));

      setState(() {
        if (output[0][resultIndex] > 0.5) {
          _result = _labels[resultIndex];
        } else {
          _result = 'No reconocido';
        }
      });
    } catch (e) {
      print('Error recognizing image: $e');
    }
  }

  List<List<List<List<double>>>> _preprocessImage(img.Image image) {
    final resizedImage = img.copyResize(image, width: 224, height: 224);
    final input = List.generate(
        1,
        (_) => List.generate(
            224, (_) => List.generate(224, (_) => List.filled(3, 0.0))));

    for (int y = 0; y < 224; y++) {
      for (int x = 0; x < 224; x++) {
        final pixel = resizedImage.getPixel(x, y);
        input[0][y][x][0] = img.getRed(pixel) / 255.0;
        input[0][y][x][1] = img.getGreen(pixel) / 255.0;
        input[0][y][x][2] = img.getBlue(pixel) / 255.0;
      }
    }

    return input;
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text('Reconocimiento Facial Local'),
      ),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            _image == null
                ? Text('No se seleccionó ninguna imagen.')
                : Image.file(_image!),
            SizedBox(height: 20),
            ElevatedButton(
              onPressed: _getImage,
              child: Text('Abrir Cámara'),
            ),
            ElevatedButton(
              onPressed: _recognizeImage,
              child: Text('Reconocer Imagen'),
            ),
            SizedBox(height: 20),
            Text(
              _result,
              style: TextStyle(
                fontSize: 24,
                color: _result == 'No reconocido' ? Colors.red : Colors.green,
              ),
            ),
          ],
        ),
      ),
    );
  }
}

I made some changes but I'm still having trouble. Now I get:

I/flutter (18195): Error loading model: Invalid argument(s): Unable to create interpreter.
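"Unable to create interpreter" can also mean the asset never made it into the bundle (missing `assets/` entry in pubspec.yaml) rather than a model problem. A minimal check, assuming the asset paths from the code above, is to load the raw bytes first:

```dart
import 'package:flutter/foundation.dart' show debugPrint;
import 'package:flutter/services.dart' show rootBundle;

// Sketch: confirm the model file is actually bundled and readable before
// blaming the interpreter. A zero-byte or missing asset throws here instead.
Future<void> checkModelAsset() async {
  try {
    final data = await rootBundle.load('assets/70.tflite');
    debugPrint('Model asset found: ${data.lengthInBytes} bytes');
  } catch (e) {
    debugPrint('Model asset missing or unreadable: $e');
  }
}
```

If the bytes load fine, the remaining suspect is the model file itself, which is what the accepted answer below turned out to be.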

2 Answers


  1. Chosen as BEST ANSWER

    I found the issue: tflite_flutter is behind the latest TensorFlow. I was training my model with the latest Python version of TensorFlow, but I had to roll back to version 2.12.0 and retrain, and that did the trick.
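The re-export described above can be sketched as follows. This is a minimal sketch, assuming a Keras model saved as `my_model.h5` (a hypothetical path; the input size of 224×224 RGB matches the Flutter preprocessing code above) and TensorFlow pinned with `pip install tensorflow==2.12.0`:

```python
# Re-export a Keras model to .tflite under the pinned TensorFlow version,
# so the resulting flatbuffer uses ops the tflite_flutter runtime supports.
import tensorflow as tf

# Guard against accidentally converting with a newer, incompatible version.
assert tf.__version__.startswith("2.12"), "pin TensorFlow to 2.12.x first"

model = tf.keras.models.load_model("my_model.h5")  # hypothetical saved model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("70.tflite", "wb") as f:
    f.write(tflite_model)
```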


  2. The given information is not enough to answer your question.

    1. Which package are you using?
      I assume you use either flutter_tflite or tflite_flutter. I suggest you use the latter, which is more up to date.

    2. You need to share some code.
      If you are using flutter_tflite (the first one), then this is a common problem. Adding a delay before running the interpreter seems to work:

      await Future.delayed(const Duration(seconds: 1));
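    A fixed delay only papers over the race between `initState` kicking off the async load and the first inference. A more robust sketch, using the method names from the question's own code, is to keep the load future and await it before running:

    ```dart
    // Store the load future in initState and await it in _recognizeImage,
    // so inference can never run before the interpreter exists.
    late final Future<void> _modelReady;

    @override
    void initState() {
      super.initState();
      _modelReady = _loadModel();
    }

    Future<void> _recognizeImage() async {
      await _modelReady; // replaces the fixed one-second delay
      // ... preprocessing and _interpreter.run(input, output) as before
    }
    ```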
