From: Jeremy J C. <jj...@sy...> - 2014-11-12 18:38:51
I have a production server with a 65G journal file (AWS EBS SSD, encrypted) and a bigdata process of size 10G out of 16G total on a c3.2xlarge (we use the large heap for in-memory sort). I need to upgrade bigdata to pick up the latest critical bug fixes while minimizing effective downtime. I would rather take 15 minutes of downtime and come back fully ready than take 2 minutes of downtime followed by 5 minutes of sluggishness.

When we migrated to bigdata, we noticed that performance immediately afterwards was sluggish (verging on unacceptably so), but that things improved fairly quickly with use. I am wondering whether to run a query that reads every triple (say, count the total number of triples) to 'warm things up' after the restart.

The argument in favor of a warm-up is that it sounds like the right thing to do. The argument against is that if the key caching actually happens at the OS level, then the warm-up could make things worse: the current OS caches will already reflect the access patterns of real queries, and I would replace them with caches shaped by my artificial query. In other words, stopping, upgrading and restarting bigdata may have very little impact on immediate performance, because the important caches are not in the bigdata process at all.

Please advise.

Jeremy
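For reference, the full-scan warm-up query described above could be written roughly as follows. This is only a sketch against a generic SPARQL endpoint; the exact query is not given in the message, and some engines (bigdata included) may answer a bare triple count from index metadata rather than an actual scan, so it may need adjusting to force the indices to be read.

    # Sketch of a warm-up query that touches every triple so the index
    # pages are pulled back into cache after a restart.
    # Note: a plain COUNT may be satisfied from fast range counts without
    # scanning; if so, replace it with a query that materializes bindings.
    SELECT (COUNT(*) AS ?tripleCount)
    WHERE {
      ?s ?p ?o .
    }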