We recently moved a few applications to AWS and immediately started noticing a significant drop in performance. It affected applications that were developed locally on-premise but accessed an Oracle database in the cloud. Performance degraded many times over: one application that used to take 5 minutes to process 35 million records was now taking 3 hours.
I expected a bit of a performance hit, but not a 36x slowdown. First, I checked all our recent commits to see if any recent change was causing the issue, but didn't notice anything significant. Then I checked the SQL execution plan, fine-tuned the SQL, and created indexes on the key columns. This didn't make any difference at all. I also tried tracing the network traffic, but saw nothing important there either.
Adding to the confusion, when I ran the same queries as the application with SQL clients like SQL*Plus or Toad, the selects finished in seconds. That ruled out a problem with the SQL itself. To narrow it down, I created a very simple microservice that ran the same queries without any data processing, and it still took hours. This pointed to the fact that, even though the SQL itself ran fast, something was going wrong once it was executed from an application.
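To illustrate the kind of stripped-down reproduction I mean, here is a minimal sketch. The table name big_table is a placeholder, and the DataSource is assumed to point at the cloud Oracle instance; the original service's queries were different, but the shape is the same: stream every row and discard it, so any slowness can only come from the JDBC layer.

```scala
import java.sql.ResultSet
import javax.sql.DataSource
import org.springframework.jdbc.core.{JdbcTemplate, RowCallbackHandler}

object FetchBenchmark {
  // Stream the full result set without doing any work per row, so the
  // measured time reflects only driver and network behavior.
  def timeFullScan(dataSource: DataSource): Unit = {
    val jdbcTemplate = new JdbcTemplate(dataSource)
    var rows = 0L
    val start = System.nanoTime()
    jdbcTemplate.query("SELECT * FROM big_table", new RowCallbackHandler {
      override def processRow(rs: ResultSet): Unit = rows += 1 // no processing
    })
    val elapsedSec = (System.nanoTime() - start) / 1e9
    println(s"Fetched $rows rows in $elapsedSec seconds")
  }
}
```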
After fiddling with different configurations, I finally saw an immediate improvement: as soon as I added setFetchSize to Spring's JdbcTemplate and tuned it, performance returned to pre-migration levels, from 3 hours back to 5 minutes. Here is the code in Scala; depending on your configuration, the right fetch size may be different.
```scala
@Autowired
var jdbcTemplate: JdbcTemplate = _

jdbcTemplate.setFetchSize(1000)
```
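The reason this one setting matters so much: the fetch size is the number of rows the JDBC driver pulls from the server per network round trip, and the Oracle driver defaults to just 10 rows. On a low-latency on-premise network those round trips are invisible; with the added latency of a cloud database, 35 million rows at 10 rows per trip means 3.5 million round trips. Raising it to 1000 cuts that by two orders of magnitude. Here is a sketch of how the wiring might look as a Spring configuration class; the class and bean names are my own, not from the original setup:

```scala
import javax.sql.DataSource
import org.springframework.context.annotation.{Bean, Configuration}
import org.springframework.jdbc.core.JdbcTemplate

@Configuration
class JdbcConfig {

  // Build the template once with the larger fetch size, so every query
  // issued through it pulls 1000 rows per round trip instead of the
  // Oracle driver's default of 10.
  @Bean
  def jdbcTemplate(dataSource: DataSource): JdbcTemplate = {
    val template = new JdbcTemplate(dataSource)
    template.setFetchSize(1000)
    template
  }
}
```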
If you are using Spring’s NamedParameterJdbcTemplate, configure the underlying JdbcTemplate as above and pass it to NamedParameterJdbcTemplate as a constructor argument.
```scala
var namedParameterJdbcTemplate: NamedParameterJdbcTemplate = _

namedParameterJdbcTemplate = new NamedParameterJdbcTemplate(jdbcTemplate)
```
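As a quick sanity check that the setting carries through, here is a hedged usage sketch; the query, table, and column names are placeholders:

```scala
import java.sql.ResultSet
import org.springframework.jdbc.core.RowCallbackHandler
import org.springframework.jdbc.core.namedparam.{MapSqlParameterSource, NamedParameterJdbcTemplate}

// NamedParameterJdbcTemplate delegates to the wrapped JdbcTemplate, so a
// large result set streamed here is fetched 1000 rows per round trip.
def scanRecent(template: NamedParameterJdbcTemplate, days: Int): Long = {
  val params = new MapSqlParameterSource("days", Int.box(days))
  var rows = 0L
  template.query(
    "SELECT * FROM big_table WHERE created_at > SYSDATE - :days",
    params,
    new RowCallbackHandler {
      override def processRow(rs: ResultSet): Unit = rows += 1
    }
  )
  rows
}
```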