Pooled Bandwidth
I would really like to be able to run a reverse proxy with load balancing, and behind that have multiple instances for horizontal scaling. If the bandwidth were pooled, the proxy could draw on the allowance of all the instances, even though all the outside connections go through a single instance. This would be great in conjunction with a VLAN.
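The setup described above — one public-facing proxy fanning requests out to private backend instances — can be sketched as simple round-robin selection plus quota pooling. This is only an illustration of the idea; the addresses and quota figures are hypothetical placeholders, not real Vultr resources or APIs:

```python
from itertools import cycle

# Hypothetical backend instances reachable over the private network (VLAN);
# the addresses are placeholders for illustration only.
BACKENDS = ["10.0.0.2:8080", "10.0.0.3:8080", "10.0.0.4:8080"]

_rr = cycle(BACKENDS)

def next_backend() -> str:
    """Pick the backend for the next incoming request, round-robin."""
    return next(_rr)

def pooled_quota_gb(per_instance_gb: int, instances: int) -> int:
    """With pooled bandwidth, the proxy instance could spend the combined
    quota of every instance, even though all public traffic exits
    through its single IP."""
    return per_instance_gb * instances

# Example: three instances with 2000 GB each would pool to 6000 GB
# usable at the proxy.
```

Without pooling, the proxy instance alone would hit its own transfer cap long before the backends used theirs, which is the whole motivation for this request.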
This discussion has been closed.
Comments
Would transfer my entire linode setup over here in a heartbeat if this was implemented.
Especially useful in the Sydney and Tokyo datacentres.
+100,000,000 (shows how much I would really love this!)
Yeah, seems like a good idea. Useful, and fair: customers would still only be using the bandwidth they paid for, just in a way that works better for them, without any downside to VULTR.
What better way to encourage lots of backend servers that don't require public IPs (if that functionality is ever added).
It would be even more useful for fault tolerance if it were possible to request placement on different physical KVM hosts, though I can see some of the problems with that functionality...
I guess the development of the load-balancing functionality will possibly include fault domains as a feature available to everyone (like MS Azure).
Regarding price, the cheap solutions (CloudStack, Profitbricks, etc.) implement LB as a simple instance, a virtual appliance (e.g. a virtual router) — not exactly what I would call a robust/dependable/scalable solution, but run in HA it works and it is cheap. :-)
Apart from straightforward server emulation, I'm pretty new to all this 'cloud' stuff, so I don't really know what's out there. (I'm from the old skool: boxes, wires, Halon cylinders, and cat5 spaghetti! I could even mention BNC, 10base5, and even DRS6000 RS-232 'donkey wallopers', but I won't, as that would make me sound old!)
This feature was postponed in favor of a few other updates (such as one click apps, location & plan expansion, DNS panel, internal upgrades, etc) but we still plan to implement and open a beta for this, likely this summer. Thank you for your patience.