Redshift statement length exceeded while inserting data from a pandas DataFrame into a Redshift table
01:05 28 Jul 2021

I get this error

SyntaxError: Statement is too large. Statement Size: 19780406 bytes. Maximum Allowed: 16777216 bytes

I have even dropped some columns to make the insert fit, but it still fails, and I cannot drop any more. I get this error with the following code:

from sqlalchemy import create_engine, event

red_conn = create_engine("postgresql://...")  # connection string omitted

@event.listens_for(red_conn, "before_cursor_execute")
def receive_before_cursor_execute(conn, cursor, statement, params, context, executemany):
    # try to enable the driver's fast executemany mode for batched inserts
    if executemany:
        cursor.fast_executemany = True

df.to_sql('table1', red_conn, index=False, schema='schemaname',
          if_exists='append', method='multi', chunksize=5000)

Writing the DataFrame to a CSV, moving it to S3 and loading it with the COPY command leads to ANSI errors and a lot of type and data mismatches, so I would prefer to load the DataFrame into Redshift directly, batch-wise or otherwise. How can I insert the data into Redshift from a DataFrame without hitting the statement length limit? Thanks much in advance!
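If my arithmetic is right, and assuming the 19780406-byte statement corresponds to one 5000-row chunk, that works out to roughly 4 KB per row, so the chunk size would have to come down. What I mean by loading batch-wise is roughly the sketch below (reusing df and red_conn from above); the 1000-row sample and the 0.5 safety factor are arbitrary guesses on my part, and I have not verified this on my real data:

import math

MAX_STATEMENT_BYTES = 16 * 1024 * 1024  # the 16777216-byte limit from the error

# Estimate how many bytes one row contributes to the INSERT statement by
# rendering a small sample of rows as CSV text (a rough proxy at best).
sample = df.head(1000)
bytes_per_row = len(sample.to_csv(index=False).encode("utf-8")) / max(len(sample), 1)

# Keep each multi-row INSERT at about half the limit as a safety margin.
safe_chunksize = max(1, math.floor(0.5 * MAX_STATEMENT_BYTES / bytes_per_row))

df.to_sql('table1', red_conn, index=False, schema='schemaname',
          if_exists='append', method='multi', chunksize=safe_chunksize)

Is shrinking the chunks like this the right way to stay under the limit, or is there a better way to load a DataFrame into Redshift directly?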

python postgresql dataframe amazon-redshift amazon-redshift-spectrum