Hello, I've always heard that inserts using parameter markers perform better than inserts with literal values, but I recently had the opposite experience and I'd like to understand why.
I'm using DB2 11.1.3.3 on Linux.
I needed to insert 57000 rows into an empty table from a CSV file. I am not allowed to use the load utility in the target database.
I used anonymous blocks like the one exemplified below, one block per 5000 rows (I first tried putting all the rows in a single block, but that exceeded the CLP command limit):
BEGIN
declare v_insert_text varchar(32672);
declare v_insert_stmt statement;
set v_insert_text = 'insert into <schema>.<target_table> (<list_of_columns>) values (?, ?, ?)';
prepare v_insert_stmt from v_insert_text;
execute v_insert_stmt using value1, value2, value3;
...
execute v_insert_stmt using value1, value2, value3;
commit;
END@
... (the same block repeated for each batch of 5000 rows) ...
I put all the blocks into a single SQL file and executed the script with the DB2 CLP. It took 01:33:26 to complete. During execution, the monitor showed that DB2 was spending most of its time on statement preparation, as far as I could tell.
(Please find the monitor screenshot attached.)
Then I truncated the table and repeated the task using row-by-row inserts with literal values, again committing every 5000 rows. It took 00:03:11 to complete.
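For reference, the literal version was just a plain sequence of single-row INSERT statements with no PREPARE/EXECUTE, roughly like the sketch below (schema, table, column list, and values are placeholders, since the real table definition isn't shown here):

```sql
-- Literal-based version: one INSERT per row, values embedded as literals,
-- with a COMMIT after each batch of 5000 rows.
insert into <schema>.<target_table> (<list_of_columns>) values ('a1', 'b1', 'c1');
insert into <schema>.<target_table> (<list_of_columns>) values ('a2', 'b2', 'c2');
-- ... one statement per row, up to 5000 ...
commit;
```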
Can anyone help me understand what happened?
Best Regards,
Ivanna Vieira