If your organization has access to an S3 bucket and your Platform database is Redshift, you can use SQL Scripts to move data between S3 and Redshift with the COPY and UNLOAD commands, which are optimized for transfers between the two services.
As a prerequisite, you must have the correct permissions and valid credentials for the S3 bucket, and you must create a Platform S3 Credential by following these instructions: Adding an AWS/S3 Credential.
COPY
To import data from S3 into Redshift, use the COPY command.
- Create a new SQL Script
- Add a parameter of type AWS Credential
  - In this example the parameter is named 'aws'
  - Select your AWS/S3 credential from the dropdown
- Use the COPY command and reference the values of your credential
COPY schema.table
FROM 's3://your-bucket/path/to/file.csv'
ACCESS_KEY_ID '{{aws.access_key_id}}'
SECRET_ACCESS_KEY '{{aws.secret_access_key}}'
-- add additional options depending on your use case
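For example, a COPY that loads a comma-separated file with a header row might look like the following sketch. The table name, bucket path, and region are placeholders for illustration; CSV, IGNOREHEADER, and REGION are standard Redshift COPY options you can adjust for your use case.

```sql
COPY analytics.daily_sales                       -- placeholder target table
FROM 's3://your-bucket/exports/daily_sales.csv'  -- placeholder source file
ACCESS_KEY_ID '{{aws.access_key_id}}'
SECRET_ACCESS_KEY '{{aws.secret_access_key}}'
CSV                                              -- parse the file as CSV
IGNOREHEADER 1                                   -- skip the header row
REGION 'us-east-1';                              -- bucket region, if it differs from the cluster's
```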
UNLOAD
To export data from Redshift to S3, use the UNLOAD command.
- Create a new SQL Script
- Add a parameter of type AWS Credential
  - In this example the parameter is named 'aws'
  - Select your AWS/S3 credential from the dropdown
- Use the UNLOAD command and reference the values of your credential
UNLOAD ('<your select statement here>')
TO 's3://your-bucket/path/to/prefix'
-- TO specifies an S3 key prefix, not a single file; UNLOAD may write multiple files
ACCESS_KEY_ID '{{aws.access_key_id}}'
SECRET_ACCESS_KEY '{{aws.secret_access_key}}'
-- add additional options depending on your use case
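As a fuller sketch, the following unloads a query result as a single CSV file with a header row. The query and prefix are placeholders; FORMAT AS CSV, HEADER, PARALLEL OFF, and ALLOWOVERWRITE are standard Redshift UNLOAD options you can adjust for your use case.

```sql
UNLOAD ('SELECT * FROM analytics.daily_sales')  -- placeholder query
TO 's3://your-bucket/unloads/daily_sales_'      -- key prefix; output file names are appended to it
ACCESS_KEY_ID '{{aws.access_key_id}}'
SECRET_ACCESS_KEY '{{aws.secret_access_key}}'
FORMAT AS CSV                                   -- write CSV instead of the default pipe-delimited text
HEADER                                          -- include a header row in each file
PARALLEL OFF                                    -- write one file (up to 6.2 GB) instead of one per slice
ALLOWOVERWRITE;                                 -- overwrite existing files at the prefix
```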