Python memory leak after server upgrade, or: not all server sizes have the same network performance
My Python application is basically a loop: the first step fetches some information over HTTP concurrently (multithreaded), and the second step runs some calculations on that data and writes the results to disk.
Keeping network latency low is important for this process. Most of the HTTP resources are hosted in Silicon Valley.
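For context, the loop looks roughly like this. It is a minimal sketch: the URL list and the `fetch` stub are illustrative stand-ins (the real app does actual HTTP requests, e.g. with `urllib.request`), but the fetch-then-compute-then-write shape per cycle is the same.

```python
from concurrent.futures import ThreadPoolExecutor
import json

# Illustrative endpoints -- the real ones live in Silicon Valley.
URLS = ["https://example.com/a", "https://example.com/b"]

def fetch(url):
    # In the real app this is an HTTP request, e.g.
    # urllib.request.urlopen(url).read(); stubbed here so the
    # sketch is self-contained and runnable offline.
    return {"url": url, "value": len(url)}

def cycle(urls, out_path):
    # Step 1: fetch all resources concurrently.
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(fetch, urls))
    # Step 2: compute on the data and append the result to disk.
    total = sum(r["value"] for r in results)
    with open(out_path, "a") as f:
        f.write(json.dumps({"total": total}) + "\n")
    return total
```

The faster each `cycle` call completes, the more lines get appended per day, which is why latency and disk usage are coupled for me.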
In my many trials, I have observed that the $24.00 high-frequency compute (HFC) servers in Silicon Valley have higher latency than the $18.00 ones. I tested this with exactly the same configuration and application (the one described above), and the cycle time is 10–20% longer on the $24.00 servers.
First question: Why do different-sized servers at the same location have different HTTP performance?
To get the lower latency, I go with an $18.00 HFC server. But, as you might imagine, the lower the latency, the more cycles the app completes, so it generates more data and needs more disk space. The disk on the $18.00 plan is not big enough for the application's needs. So I resize the $18.00 HFC server in place to the $24.00 plan: that way I keep the same IP, and hopefully keep the low latency that fresh $24.00 HFC installations do not give me.
After I do that, my Python app starts to leak memory. I reinstall the server from scratch with exactly the same configuration and app, but the leak persists.
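To show what I mean by "leak", here is the kind of check I can run between cycles with the standard-library `tracemalloc` module. The `leaky` list and `do_cycle` below are stand-ins for wherever the real growth happens; the point is that the snapshot diff names the allocation sites that keep growing.

```python
import tracemalloc

tracemalloc.start()

leaky = []
def do_cycle():
    # Stand-in for one loop iteration; a real leak often hides in a
    # cache or list that keeps growing across cycles like this one.
    leaky.append(bytearray(1024 * 1024))

snap1 = tracemalloc.take_snapshot()
for _ in range(5):
    do_cycle()
snap2 = tracemalloc.take_snapshot()

# The largest positive diffs point at the lines that accumulated memory.
for stat in snap2.compare_to(snap1, "lineno")[:3]:
    print(stat)
```

On my setup the resident memory of the process grows cycle after cycle exactly like the `leaky` line does here, even though nothing in the code changed between the two servers.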
Second question: Why might a resized server cause a memory leak?
Any help will be very much appreciated.