
I know how to load an existing S3 bucket in SageMaker using Python. Something like this:

    import boto3
    from sagemaker import get_execution_role

    role = get_execution_role()
    region = boto3.Session().region_name

    bucket = 'existing S3 Bucket'                     # placeholder: name of the existing bucket
    data_key = 'Data file in the existing s3 bucket'  # placeholder: key of the data file
    data_location = 's3://{}/{}'.format(bucket, data_key)

How can one recreate this using R in SageMaker? All I see in the available documentation is how to create a new bucket; none of it mentions how to use an existing S3 bucket. Help would be appreciated.

Link to the documentation for R in SageMaker: https://aws.amazon.com/blogs/machine-learning/using-r-with-amazon-sagemaker/

  • You are not typing the link correctly: aws.amazon.com/blogs/machine-learning/… Commented Nov 29, 2019 at 23:30
  • I do not know the details, but you can conceptually load the reticulate library and hopefully execute the same Python code in R (see the sketch below). Commented Nov 29, 2019 at 23:32
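As the second comment suggests, a minimal sketch of the question's Python translated to R via reticulate (this assumes a SageMaker R kernel where the boto3 and sagemaker Python packages are installed; the bucket name and key are placeholders):

    library(reticulate)

    boto3 <- import("boto3")
    sagemaker <- import("sagemaker")

    # the same calls as the Python version, invoked through reticulate
    role <- sagemaker$get_execution_role()
    region <- boto3$Session()$region_name

    bucket <- "existing-s3-bucket"    # placeholder: name of the existing bucket
    data_key <- "path/to/data-file"   # placeholder: key of the data file
    data_location <- sprintf("s3://%s/%s", bucket, data_key)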

3 Answers


Thanks for using Amazon SageMaker!

You can use the SageMaker Session helper methods for listing and reading files from S3. Please check out this sample notebook if you need examples of using the SageMaker Session: using_r_with_amazon_sagemaker.ipynb.
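For example, a sketch of calling those helpers from R through reticulate (list_s3_files and read_s3_file are methods on the SageMaker Python SDK's Session class; the bucket name and prefix below are placeholders):

    library(reticulate)

    sagemaker <- import("sagemaker")
    session <- sagemaker$Session()

    # list the object keys under a prefix in the existing bucket
    keys <- session$list_s3_files(bucket = "existing-s3-bucket", key_prefix = "my-prefix")

    # read one object into a string and parse it as CSV
    csv_text <- session$read_s3_file(bucket = "existing-s3-bucket", key_prefix = keys[1])
    df <- read.csv(text = csv_text)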

Thanks,

Neelam




You can use the SageMaker Python SDK via reticulate, for example:

    library(reticulate)

    sagemaker <- import("sagemaker")

    uri <- "s3://my-bucket/my-prefix"

    # list the objects under the prefix in the existing bucket
    files <- sagemaker$s3$S3Downloader$list(uri)

    # read the first object's contents into a string and parse it as CSV
    csv <- sagemaker$s3$S3Downloader$read_file(files[1])
    df <- read.csv(text = csv)
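If you want the files on local disk rather than in memory, S3Downloader also has a download method; the local path below is a placeholder:

    # download everything under the prefix to a local directory
    sagemaker$s3$S3Downloader$download(uri, "/home/ec2-user/SageMaker/data")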



You can use packages from the cloudyr project:

    library(aws.s3)
    library(aws.ec2metadata)  # will mutate the environment to add credentials
    library(readr)

    # read a CSV directly from the existing bucket
    df <- s3read_using(FUN = read_csv, bucket = "my-bucket", object = "my-key.csv")
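If you first need to discover what the existing bucket contains, aws.s3 also provides get_bucket; the bucket name and prefix below are placeholders:

    # list objects in the existing bucket, optionally filtered by prefix
    objects <- get_bucket(bucket = "my-bucket", prefix = "my-prefix")
    keys <- vapply(objects, function(o) o$Key, character(1))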

