We are almost done.
After redeploying with Serverless:
sls deploy
we just need to ensure the following:
i. Make your S3 bucket public
We are using an S3 bucket to handle the image file transfer in our Deep Learning app and need to make access public while using the app. Do this by unchecking the “Block all public access” checkbox for your S3 bucket in the AWS Console.
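If you prefer to script this step, the same change can be made with boto3, as in the minimal sketch below (the bucket name is the example used later in this tutorial; swap in your own):

import boto3

BUCKET = "assets.tudicuando.com"  # replace with your own bucket name

s3 = boto3.client("s3")

# Turn off all four "Block public access" settings on the bucket
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)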
ii. Set the correct S3 bucket access
As we are saving and retrieving images from our S3 bucket, we also need to authorise “put” and “get” actions on the bucket. To do this, replace the existing bucket policy with the JSON below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::assets.tudicuando.com/*"
    }
  ]
}
Replace the Amazon Resource Name (ARN) shown above with the equivalent ARN for your own S3 bucket, as displayed in the AWS Console. After editing and saving the policy, you should see a red “Publicly accessible” badge at the top of the S3 bucket page.
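The same policy can also be applied programmatically, e.g. with the boto3 sketch below (again, replace the placeholder bucket name with your own):

import json
import boto3

BUCKET = "assets.tudicuando.com"  # your bucket name here

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

# Attach the policy to the bucket
boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))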
iii. Copy the bucket name to your main Python script
The main.py script in the GitHub source code you cloned in Stage 4 contains the line below:
myBucket = '[YOUR-S3-BUCKET]'
Replace [YOUR-S3-BUCKET] with the actual name of your S3 bucket.
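For context, the app uses this name in its boto3 calls to save and fetch images, roughly along the lines of the sketch below (an illustration only, not the actual main.py code; the key and file names are hypothetical):

import boto3

myBucket = "assets.tudicuando.com"  # your actual bucket name

s3 = boto3.client("s3")

# Save an uploaded image to the bucket (hypothetical key/file names)
with open("sign.png", "rb") as f:
    s3.put_object(Bucket=myBucket, Key="uploads/sign.png", Body=f.read())

# Retrieve it again for inference
obj = s3.get_object(Bucket=myBucket, Key="uploads/sign.png")
image_bytes = obj["Body"].read()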
iv. Define the uploaded image types for our app in AWS API Gateway
Open API Gateway in the AWS Console and, under your API's Settings, scroll down to Binary Media Types and add:
*/*
This enables images of any type to be uploaded for model inference.
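This step can also be scripted with boto3, as sketched below (the REST API ID is a placeholder; look yours up in the API Gateway console). Note the JSON Pointer escaping, where the / in */* becomes ~1:

import boto3

REST_API_ID = "abc123defg"  # placeholder - find yours in the API Gateway console

# Add */* as a binary media type ("/" is escaped as "~1" in the patch path)
boto3.client("apigateway").update_rest_api(
    restApiId=REST_API_ID,
    patchOperations=[{"op": "add", "path": "/binaryMediaTypes/*~1*"}],
)

Alternatively, the Serverless Framework lets you declare binaryMediaTypes under provider.apiGateway in serverless.yml, so the setting survives future redeploys.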
Redeploy your app with
sls deploy
then open the (same) URL mentioned above to view and interact with your app by uploading a few traffic sign images from here:
https://github.com/bw-cetech/lambda-dl-local/tree/main/test-images
and running the inference process.
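If you would rather exercise the endpoint from code than the browser, a raw binary POST along the lines below should work (a sketch only: the endpoint URL, route and file name are placeholders; use the URL printed by sls deploy and one of the downloaded test images):

import requests

URL = "https://abc123defg.execute-api.eu-west-1.amazonaws.com/dev/predict"  # placeholder

# Send a local test image as a raw binary body
with open("stop.png", "rb") as f:
    resp = requests.post(URL, data=f.read(), headers={"Content-Type": "image/png"})

print(resp.status_code, resp.text)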
And that's it!
This completes the detailed six-stage process for deploying a deep learning app on AWS Lambda, but stay tuned in the coming weeks for a video series showing the “how-to” described above!
