For low volumes of data (a few rows at a time, occasionally) it is OK to use:
insert into table ... update table ... delete from table ...
commands to maintain Redshift data. This is how Spark Streaming would likely work.
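For illustration, that trickle-style maintenance is just ordinary DML; the table and column names below are made up:

    -- hypothetical table and columns, purely for illustration
    insert into events (event_id, payload) values (42, 'hello');
    update events set payload = 'world' where event_id = 42;
    delete from events where event_id = 42;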
However, for larger volumes you must always:
1) write the data to S3, preferably chunked into files of roughly 1 MB to 1 GB, preferably gzipped;
2) run the Redshift COPY command to load that S3 data into a Redshift "staging" table;
3) run Redshift SQL to merge the staging data into your target tables (a sketch follows below).
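Here is a rough sketch of steps 2 and 3, assuming a staging table with the same layout as the target; the bucket, IAM role, table, and column names are all placeholders:

    -- 2) bulk-load the gzipped S3 files into the staging table
    copy events_staging
    from 's3://my-bucket/events/'
    iam_role 'arn:aws:iam::123456789012:role/my-redshift-load-role'
    gzip
    format as csv;

    -- 3) merge: replace any rows that already exist, then append the staged rows
    begin;
    delete from events
    using events_staging
    where events.event_id = events_staging.event_id;
    insert into events
    select * from events_staging;
    commit;
    -- clear staging for the next batch (truncate commits implicitly in Redshift,
    -- so it stays outside the transaction)
    truncate events_staging;

The delete-then-insert pair is a common way to upsert in Redshift: staged rows replace any matching rows already in the target, and new rows are simply appended.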
Using this COPY approach can be hundreds of times more efficient than individual inserts.
This means, of course, that you really have to run in batch mode.
You can run the batch update every few minutes to keep Redshift data latency low.