This tutorial describes how to convert a Boto3 Python module to a Flask API to upload and delete files on AWS S3.

Finding a way to store, distribute and manage your data is a challenging task. Running applications, delivering content to users, hosting high-traffic websites, or backing up databases all require a lot of storage, and the need for more storage keeps on growing.

AWS S3 is one of the most popular services offered by Amazon Web Services. It provides object storage through a web service interface, and it runs on the same infrastructure that Amazon.com uses.

S3 stands for Simple Storage Service. It stores data as objects, providing low latency, high availability, and scalability, which makes it a good fit for static data.

Since S3 provides impressive availability and durability, it has become a standard for companies to store data such as video, audio, images, and CSV files. The folder structure looks like what you see on Windows or macOS, with folders and sub-folders, although under the hood S3 is a flat object store and "folders" are simply key prefixes.

S3 also has built-in redundancy with 99.999999999% (yeah, that's eleven 9s!) durability. This is because S3 replicates your buckets and all of their contents across at least three Availability Zones, that is, three separate physical locations. So, the likelihood that S3 will lose an uploaded file is extremely low.

Why Is S3 Useful?

  • Cheap and reliable way to store objects.
  • Low latency and high throughput access to your buckets’ contents.
  • You can easily use it to host static websites.
  • You can easily integrate events on your S3 bucket with SNS, SQS, and Lambda for some very powerful event-driven applications.
  • It has lifecycle rules to shift old data into long-term storage classes for cost reduction.

Why Not Call S3 Directly?

In this tutorial, we’re going to create S3 APIs using the Boto3 package in Python and Flask. There are many reasons a developer may opt to use Boto3 to work with S3 instead of calling the AWS REST API directly.

For example, when calling a vendor's REST API directly, you must deal with request signing and raw error codes, which introduces additional complexity. With a package like Boto3, error handling can be as simple as an if/else statement or a try/except block.

Another reason is to avoid writing longer code. Say you want to initiate an upload. With Boto3, you can simply call upload_file() on an S3 client, pass the file name, and it's done. But if you want to do the same thing with the raw AWS S3 REST API, you must deal with request signing, headers, and a body, and compose nearly 20 lines of code just to upload a file.

Also, when using an API or CDN that gives the user the functionality to upload static files, a developer may prefer to create an authentication system using their own REST API (which uses their organization’s username and password). Having the user use S3 through your application, with an endpoint like nordicapis.com/upload-to-s3, can help avoid exposing the AWS API keys.

What’s Boto3?

Boto3 is the official AWS SDK for Python. Among many other AWS services, it lets you directly upload, delete, and update objects in an S3 bucket.

Now, let’s move ahead with the tutorial.

Prerequisites

  • Knowledge of Python
  • Flask
  • AWS API Credentials
  • Firecamp to test the API
  • Virtualenv

Step 1: Getting AWS API Credentials

The first step is to get the Access Key ID and Secret Access Key. You need to open the IAM page (https://console.aws.amazon.com/iam/home#/home) and then click on Users.

Then click on Add Users. Supply a username and select Programmatic Access so that access keys can be generated.

Now on the next screen, you’ll see permissions. So, create a new user group and select AmazonS3FullAccess.

Once that's done, you can see the credentials on the last page. Store them somewhere safe, as we'll need them for Boto3; the secret access key can't be viewed again after you leave this page.

Step 2: Installing Dependencies

Now create a folder named aws-api and open your terminal. We’re assuming that you have already installed the virtualenv package. If you haven’t, follow this tutorial for the installation.

Now, let’s try to initialize a virtualenv instance by using the below command:

virtualenv .
source bin/activate

Once that’s done, let’s install the Boto3 package and Flask using the below command:

pip3 install boto3
pip3 install Flask

Now that we have installed the dependencies, let's focus on the code.

Step 3: API Endpoint to Upload a File

In this step, we’ll be creating an API endpoint to upload files on our S3 bucket. Before proceeding with the code, let’s first create a bucket on our AWS S3.

I created a bucket named vs-test with region us-east-2. Make sure to note down the bucket region as it is a required parameter.

Now create a file app.py, open it in your favorite IDE and paste the below code:

import os

import boto3
from botocore.exceptions import ClientError, NoCredentialsError
from flask import Flask, request, jsonify
from werkzeug.utils import secure_filename

app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = os.getcwd()

@app.route('/')
def hello_world():
    return 'This is my first API call!'

@app.route('/upload-file', methods=["POST"])
def uploadFile():
    required = ['secret_key', 'key_id', 'bucket_name', 'region']
    if not all(field in request.form for field in required) or 'upload_file' not in request.files:
        return jsonify({"error": "You're missing one of the following: secret_key, key_id, bucket_name, region, upload_file"}), 400

    secret_key = request.form['secret_key']
    key_id = request.form['key_id']
    bucket_name = request.form['bucket_name']
    region = request.form['region']
    file_upload = request.files['upload_file']

    # Save the upload to the Flask server first, with a sanitized file name
    filename = secure_filename(file_upload.filename)
    local_path = os.path.join(app.config['UPLOAD_FOLDER'], filename)
    file_upload.save(local_path)

    s3 = boto3.client('s3', aws_access_key_id=key_id,
                      aws_secret_access_key=secret_key, region_name=region)
    try:
        s3.upload_file(local_path, bucket_name, filename)
    except (ClientError, NoCredentialsError) as e:
        return jsonify({"error": str(e)}), 500

    return jsonify({"Status": "File uploaded successfully"})

Code Explanation

First, we import the dependencies: boto3, Flask, and the Werkzeug helpers. After that, we define an endpoint, upload-file, which accepts a multipart form submission.

We first check whether all the required fields are present in the request, then save the file to the Flask server. Once the file is saved locally, we authenticate with AWS S3 using the supplied credentials and upload it, passing the required parameters: bucket name, region, key ID, secret key, and file name.
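That field check can also be pulled out into a small helper for readability. Here is a sketch, where missing_fields() is a hypothetical name, not part of Flask or Boto3:

```python
def missing_fields(form, files):
    """Return the names of any required upload-file fields that are absent."""
    required_form = ['secret_key', 'key_id', 'bucket_name', 'region']
    missing = [field for field in required_form if field not in form]
    if 'upload_file' not in files:
        missing.append('upload_file')
    return missing
```

The endpoint can then report exactly which fields are missing in its error response instead of a fixed message.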

Now let’s try to test our API. I’ll use Firecamp to test the API. You can also pick a tool mentioned here but make sure it supports file upload functionality.

On the terminal, to run the Flask server use the below command:

flask run

Now, let’s make an API request. In Firecamp, select multipart under body and pass the below fields:

  • secret_key
  • key_id
  • bucket_name
  • region
  • upload_file

In the URL field, pass the below URL:

http://127.0.0.1:5000/upload-file

If you have passed the correct details, you should see the files on the AWS S3.
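If you'd rather test from code than from a GUI client, the same multipart request can be made with the requests library. This is a sketch: upload_via_api() is a hypothetical helper, and it assumes the Flask server from above is running locally:

```python
import requests  # pip3 install requests

def upload_via_api(path, secret_key, key_id, bucket_name, region,
                   url='http://127.0.0.1:5000/upload-file'):
    """POST a local file to the upload-file endpoint as a multipart form."""
    form = {'secret_key': secret_key, 'key_id': key_id,
            'bucket_name': bucket_name, 'region': region}
    with open(path, 'rb') as f:
        return requests.post(url, data=form, files={'upload_file': f})
```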

Another point to note: to update or replace a file, just upload a new file under the same key (file name), and AWS S3 will overwrite the existing object.

Step 4: API Endpoint to Delete a File

Now let’s try to delete a file on S3 by using Flask. So, add the below code in your app.py file:

@app.route('/delete-file', methods=["POST"])
def deleteFile():
    secret_key = request.form['secret_key']
    key_id = request.form['key_id']
    bucket_name = request.form['bucket_name']
    file_name = request.form['file_name']
    s3 = boto3.client('s3', aws_access_key_id=key_id, aws_secret_access_key=secret_key)
    s3.delete_object(Bucket=bucket_name, Key=file_name)
    return jsonify({"Success": "File Deleted"})

Code Explanation:

We have defined an endpoint delete-file which will accept the below form submissions:

  • secret_key
  • key_id
  • bucket_name
  • file_name

After that, we’re creating an S3 instance and then calling delete_object to delete the object from S3.
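To exercise this endpoint without a GUI client, you can post the same form fields with the requests library. Again a sketch: delete_via_api() is a hypothetical helper that assumes the Flask server is running locally:

```python
import requests  # pip3 install requests

def delete_via_api(file_name, secret_key, key_id, bucket_name,
                   url='http://127.0.0.1:5000/delete-file'):
    """POST the delete-file form to the Flask endpoint."""
    form = {'secret_key': secret_key, 'key_id': key_id,
            'bucket_name': bucket_name, 'file_name': file_name}
    return requests.post(url, data=form)
```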

Step 5: API Endpoint to Create an S3 Bucket

Once you’re done with all the above steps, we can try to create a new S3 bucket using our API. To do so, add the below code in your app.py:

@app.route('/create-bucket', methods=["POST"])
def createBucket():
    secret_key = request.form['secret_key']
    key_id = request.form['key_id']
    bucket_name = request.form['bucket_name']
    s3 = boto3.client('s3', aws_access_key_id=key_id, aws_secret_access_key=secret_key)
    try:
        # Buckets outside us-east-1 need an explicit LocationConstraint
        s3.create_bucket(Bucket=bucket_name,
                         CreateBucketConfiguration={'LocationConstraint': 'us-east-2'})
        return jsonify({"Success": "Bucket Created"})
    except Exception as e:
        return jsonify({"Error": str(e)})

Code Explanation:

We have defined an endpoint create-bucket, which we'll use to create a bucket on our S3. The function createBucket() takes the required fields: secret_key, key_id, and bucket_name.

After creating variables, we’re creating an S3 instance and using the create_bucket function to create the S3 bucket.

On the Firecamp app, you can try calling the API endpoint with the credentials, and it'll return a success response such as {"Success": "Bucket Created"}.

See, wasn't it easy to convert Boto3 code to a Flask API? Testing becomes much easier with Firecamp, as it natively supports file uploads.

Final Words

You can convert a complete Boto3 Python module or even any module to a Flask API and test it using Firecamp or any other API testing tool. This conversion will be very useful when you’re creating a web or mobile app where you don’t want your users to be redirected to a third-party website.