I have some data in S3, and I want to create a Lambda function that runs predictions against my deployed AWS SageMaker endpoint and then writes the outputs back to S3. Is it necessary in this case to create an API Gateway as described in this link? And what do I have to put in the Lambda function? I expect it needs to cover: where to find the data, how to invoke the endpoint, and where to put the results.
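To make the question concrete, this is the rough handler shape I am picturing, assuming the function is triggered by an S3 event notification (the 'predictions/' output prefix and the text/csv content type are just my guesses):

import boto3

s3 = boto3.client('s3')
runtime = boto3.client('runtime.sagemaker')

def lambda_handler(event, context):
    # where to find the data: the S3 object that triggered the function
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    payload = s3.get_object(Bucket=bucket, Key=key)['Body'].read().decode('utf-8')

    # how to invoke the endpoint: send the CSV text as the request body
    response = runtime.invoke_endpoint(
        EndpointName='nilm2',
        ContentType='text/csv',
        Body=payload)
    output = response['Body'].read().decode('utf-8')

    # where to put the data: write the predictions back to S3
    s3.put_object(Bucket=bucket,
                  Key='predictions/' + key,
                  Body=output.encode('utf-8'))

Is that the right overall structure, or do I still need API Gateway in front of the Lambda?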
Here is what I have so far:

import boto3
import io
import json
import csv
import os

client = boto3.client('s3')        # low-level functional API
resource = boto3.resource('s3')    # high-level object-oriented API
my_bucket = resource.Bucket('demo-scikit-byo-iris')  # substitute your own S3 bucket name

obj = client.get_object(Bucket='demo-scikit-byo-iris', Key='foo.csv')
lines = obj['Body'].read().decode('utf-8').splitlines()
reader = csv.reader(lines)

# rebuild a single CSV string from the list of row strings
file = io.StringIO('\n'.join(lines))

runtime = boto3.client('runtime.sagemaker')
response = runtime.invoke_endpoint(
    EndpointName='nilm2',
    Body=file.getvalue(),
    ContentType='*/*',
    Accept='Accept')

output = response['Body'].read().decode('utf-8')

My data is a CSV file with 2 columns of floats and no headers. The problem is that lines is a list of strings, with each row as one element of the list: ['11.55,65.23', '55.68,69.56', ...]. The invocation works well, but the response is also a single string: output = '65.23\n,65.23\n,22.56\n,...'.
So how do I save this output to S3 as a CSV file?
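What I have in mind is something like this (an untested sketch: I am assuming the output string can simply be split on the commas, and that put_object with the CSV text as the body is the right way to write it; the output key name is arbitrary):

# split the returned string into one prediction per line
predictions = [value.strip() for value in output.split(',') if value.strip()]
csv_body = '\n'.join(predictions)

# write the predictions back to S3 as a CSV object
client.put_object(Bucket='demo-scikit-byo-iris',
                  Key='predictions/foo_predictions.csv',
                  Body=csv_body.encode('utf-8'))

Is that a reasonable way to do it, or is there a better approach?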
Thanks