On August 21st we attended the London Cloud Foundry User Group (LCFUG) meetup at Pivotal Labs (there were flags). After a talk by Duncan Winn, Director CF EMEA at Pivotal, on Cloud Foundry internals and interaction flows, I shared some of anynines' experiences running Cloud Foundry. Focusing on 'how (not) to shoot yourself in the foot with your over-commitment settings', I drew on over 12 months of operational experience. You can find my slides on SlideShare, or just read on.

The anynines stack is built on rented hardware in a datacenter, with (initially) VMware and then Cloud Foundry on top of that. We later migrated from the rented VMware setup to a self-hosted OpenStack (because of reasons). And it simply worked, straight from the start — proof that Cloud Foundry protects your investment in software development by being infrastructure agnostic.
… yet not entirely without failure
We have seen some security issues in the 12 months since we started using Cloud Foundry, but Pivotal informs its partners about issues early on, and these notifications usually come with fixes as well. One OpenStack-related issue, for instance, was the slow file system writes we experienced with Ext4; with a workaround using Ext3, this was not a major problem. Neither was the slow network traffic to and from warden containers on OpenStack. We swiftly traced this to the hardcoded DEA MTU setting and created a pull request to make it configurable.
On over-committing your DEA
A Cloud Foundry related gotcha we found when evacuating a DEA: stopping the DEA triggers its drain script, which in turn can run into a Bosh timeout race condition. When you remove a DEA, its apps are evacuated before the DEA is stopped, and the Bosh deployment fails when the evacuation takes longer than the Bosh timeout. Rather than having to rerun your Bosh deployment, set your Bosh timeout accordingly.
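In a Bosh deployment manifest, the relevant knobs are the watch times in the `update` block, which tell Bosh how long to wait for a job to settle before declaring a timeout. A sketch (the values in milliseconds are illustrative — size the upper bound to your longest expected DEA evacuation):

```yaml
# Deployment manifest excerpt -- values are illustrative.
update:
  canaries: 1
  max_in_flight: 1
  # min-max window (ms) Bosh waits for a job to report healthy;
  # raise the upper bound so long DEA evacuations don't hit the timeout.
  canary_watch_time: 30000-600000
  update_watch_time: 30000-600000
```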
The DEA 'over-committing' the RAM of its VM caused some recent issues. Cloud Foundry's default over-commitment factor is 4, so a DEA with 10 GB of RAM will accept droplets totalling 40 GB. RAM usage peaks then cause random errors, and failures during staging cause random applications to crash without leaving meaningful log information.
Finally we decided to reduce the over-commitment factor to 2. The naive strategy would be to open the Bosh release for Cloud Foundry, manually reduce the over-commitment factor and run a Bosh deploy. BUT this has a heavy impact on your running applications. An 8 GB VM with an over-commitment factor of 4 advertises 32 GB of (virtual) RAM; reducing the factor to 2 drops that to 16 GB. When evacuating a host advertising 32 GB of (virtual) RAM, another 32 GB host would be preferred — and causing an evacuation wave is unwanted.
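In cf-release the factor lives in the DEA job properties. A sketch of the change — the exact property path may differ between cf-release versions, so treat the names below as illustrative:

```yaml
# cf-release DEA properties excerpt -- property path may vary
# between cf-release versions; values are illustrative.
properties:
  dea_next:
    memory_mb: 8192              # physical RAM of the DEA VM
    memory_overcommit_factor: 2  # advertise 8192 MB * 2 = 16 GB instead of 32 GB
```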
Instead you will need to create a second resource pool for the new DEAs, and deploy that second pool before you stop the old DEAs. The drawback is that it temporarily needs more resources, but it makes for a much smoother transition.
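A sketch of such a transition manifest, assuming an OpenStack Bosh deployment — pool names, sizes, and stemcell/instance details below are illustrative, not taken from our actual manifest:

```yaml
# Deployment manifest excerpt -- names and sizes are illustrative.
resource_pools:
- name: dea-pool-old          # existing DEAs, over-commitment factor 4
  network: cf
  size: 4
  stemcell:
    name: bosh-openstack-kvm-ubuntu
    version: latest
  cloud_properties:
    instance_type: m1.large
- name: dea-pool-new          # new DEAs, over-commitment factor 2
  network: cf
  size: 4
  stemcell:
    name: bosh-openstack-kvm-ubuntu
    version: latest
  cloud_properties:
    instance_type: m1.large
```

Point the new DEA jobs at `dea-pool-new`, deploy, let apps spread onto the new DEAs, and only then scale the old pool down to zero.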