This is a sample script for uploading multiple files to S3 while keeping the original folder structure. Doing this manually can be a bit tedious, especially if there are many files to upload located in different folders. This code will do the hard work for you: just call the function upload_files('/path/to/my/folder'). The function's parameter must be the path of the folder containing the files on your local machine.
Install Boto3
You will need to install Boto3 first:
pip install boto3
Script
import boto3
import os

def upload_files(path):
    session = boto3.Session(
        aws_access_key_id='YOUR_AWS_ACCESS_KEY_ID',
        aws_secret_access_key='YOUR_AWS_SECRET_ACCESS_KEY_ID',
        region_name='YOUR_AWS_ACCOUNT_REGION'
    )
    s3 = session.resource('s3')
    bucket = s3.Bucket('YOUR_BUCKET_NAME')

    # Walk the local folder and upload every file, stripping the local root
    # from the key so the folder structure is recreated inside the bucket
    for subdir, dirs, files in os.walk(path):
        for file in files:
            full_path = os.path.join(subdir, file)
            with open(full_path, 'rb') as data:
                bucket.put_object(Key=full_path[len(path)+1:], Body=data)

if __name__ == "__main__":
    upload_files('/path/to/my/folder')
The script ignores the local root path when creating the objects on S3. For example, if we execute upload_files('/my_data')
with the following local structure:
/my_data/photos00/image1.jpg
/my_data/photos01/image1.jpg
The resulting structure on S3 will be:
/photos00/image1.jpg
/photos01/image1.jpg
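One thing to note: the key is built with full_path[len(path)+1:], which assumes the path you pass in has no trailing slash. A slightly more defensive variation (just a sketch; the function name, the bucket_name parameter, and picking up credentials from your environment are assumptions on my part) derives the key with os.path.relpath and uploads via bucket.upload_file:

import os
import boto3

def upload_files_relpath(path, bucket_name):
    # Same idea as the script above, but os.path.relpath builds the key
    # correctly whether or not 'path' ends with a trailing slash
    s3 = boto3.Session().resource('s3')  # assumes credentials are configured via env/config
    bucket = s3.Bucket(bucket_name)
    for subdir, dirs, files in os.walk(path):
        for file in files:
            full_path = os.path.join(subdir, file)
            key = os.path.relpath(full_path, path).replace(os.sep, '/')
            bucket.upload_file(full_path, key)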
Comments (1)
This code greatly helped me to upload files to S3. Thank you! However, I want to upload the files to a specific subfolder on S3, and I'm not quite sure how to do it.
For example: “datawarehouse” is my main bucket, and I can upload to it easily with the above code. But I want to upload to this path: datawarehouse/Import/networkreport.
Can you please help me do it within this code?
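One possible approach, shown here only as a rough sketch (the function name and parameters below are hypothetical): S3 has no real folders, keys are just strings, so uploading under datawarehouse/Import/networkreport simply means prepending that prefix to every key the script generates.

import os
import boto3

def upload_files_to_prefix(path, bucket_name, prefix):
    # Sketch: prepend an S3 "folder" prefix (e.g. 'Import/networkreport')
    # to each key so objects land under bucket_name/prefix/...
    s3 = boto3.Session().resource('s3')  # assumes credentials are configured
    bucket = s3.Bucket(bucket_name)
    for subdir, dirs, files in os.walk(path):
        for file in files:
            full_path = os.path.join(subdir, file)
            key = prefix.rstrip('/') + '/' + full_path[len(path)+1:]
            with open(full_path, 'rb') as data:
                bucket.put_object(Key=key, Body=data)

# e.g. upload_files_to_prefix('/path/to/my/folder', 'datawarehouse', 'Import/networkreport')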