Below is an example of an AWS Rekognition script that runs object detection.
I was wondering if I could get some help modifying it.
import json
import boto3

def lambda_handler(event, context):
    client = boto3.client("rekognition")
    # Pass a reference to an object in an S3 bucket
    response = client.detect_labels(
        Image={"S3Object": {"Bucket": "bucket_name", "Name": "image_name"}},
        MaxLabels=3,
        MinConfidence=70,
    )
    print(response)
    return "Thanks"
Question 1: Here, it only reads a single image ("image_name") at a time. How can I run it on all images in the bucket?
Question 2: How can I save each response to a JSON or CSV file with the same file name? For example, if apple.jpg, orange.jpg, and grape.jpg were used, I want the responses saved as apple.json, orange.json, and grape.json.
Lastly, I heard that there is something like a 50-image limit per day. Is this true?
Thank you,
2 Answers
You have two choices: loop over every object in the bucket from a single Lambda (list the keys, then call DetectLabels for each one), or have S3 invoke the Lambda once per object through an event notification so each invocation handles a single image.
The first solution is probably easier to achieve, but the second one lets you totally decouple the Lambdas and reuse them for future tasks.
Regarding the last question: you can see every default or current quota for an account in the Service Quotas console.
The only "50 limit" I can see for Rekognition is 50 calls per second for the DetectCustomLabels API.
A very similar Python use case can be found in the official AWS Code Library.
SDK for Python (Boto3)
Shows you how to use the AWS SDK for Python (Boto3) to create a web application that lets you do the following:
Upload photos to an Amazon Simple Storage Service (Amazon S3) bucket.
Use Amazon Rekognition to analyze and label the photos. This covers how to send multiple objects to Amazon Rekognition.
Use Amazon Simple Email Service (Amazon SES) to send email reports of image analysis. (This puts the label data into a report; you could just as easily put the label data into JSON.)
Services used in this example
Amazon Rekognition
Amazon S3
Amazon SES
Analyze all photos in your S3 bucket and use Amazon SES to email a report.