  • Suggestions for making tcServer more cloud-friendly

    We're knee-deep in an enterprise-wide deployment of tcServer onto VMware ESXi/Ubuntu Linux machines in a virtual private cloud environment. I thought I might share some of my thoughts here for posterity, discussion, and to let Googlebots get their binary fingers on them. These are simply opinions based on my own experience. Take them or leave them as you will...

    First, a little about our architecture: Sun hardware (several 12 GB RAM, 8-way and 4-way 64-bit boxes, some SAN-connected, others not), VMware ESXi 4.0 hypervisor on top of that, four Ethernet cards bonded/failed-over on all servers, then Ubuntu Linux 9.10 server nodes as VMs. A private BIND9 server runs name service and DHCP for the client nodes, with Kerberos auth on a master node. Everything is segregated on Cisco managed switches with a VLAN to keep multicast/broadcast traffic local to the cloud.

    Phew...

    (In no particular order):

    * Our cloud is designed to be horizontally scalable, so our nodes are small: 2-3 GB of RAM and 8-10 GB of drive space apiece. Hyperic AMS eats up such a large share of the resources on these nodes that I turned it off. I'm writing my own management tools based on Python and RabbitMQ; they work faster and consume so much less that I more than doubled my capacity simply by shutting down Hyperic and all the agents (there's a rough sketch of that kind of agent after this list). If you don't want to use AMS, though, you're pretty much screwed for tools to automate management tasks. I've had to write my own from scratch, again, using Python.

    * Our cloud nodes are ignorant of their environment until they boot up. They are designed to be provisioned from a common VMware template file and can be brought up or down as load demands change. Scripts set various system variables and download configurations from a centralized server on boot. Since there's no easy way to make a tcServer instance aware that it is sharing a common configuration with many other servers, I had to write a Python script that monitors a central location for deployment artifacts (WAR files, definition files, etc...) and listens to MQ events for configuration changes. This script downloads a new server.xml and setenv.sh if need be and restarts the server (see the config-watcher sketch after this list). I wrote these tools in about a day, day-and-a-half. Doing something equivalent in Java/JMX would take me...well, I'd still be working on it, let's just say. I really wish tcServer had some kind of awareness that it can get its configuration from somewhere other than an XML file on disk. I know that's how Tomcat does it, and for a lot of people (probably most) that's sufficient. But consider that I have several dozen tcServer instances. My config changes have to go out to a lot of servers in a short amount of time and incur basically no downtime (rolling restarts with session failover). I started with shared NFS folders, but that introduces a single point of failure, so I'm moving away from it and making sure each node can operate independently of the others and of the master node. A cloud node should be self-sufficient enough to keep serving pages no matter what other servers are up or down. There are obviously no tools for that (or none that work for me, at any rate), so we roll these utilities ourselves.

    * JMX is active; my cloud architecture is passive. I can't integrate with the JMX management tools much, first because I'm not using AMS, and second because I'm trying to do as much as possible in a passive, event-driven way. Things happen as the result of events in a message queue, not as the result of an active method call. This isn't a critique of JMX; that's just the way it works, and it's nice to be able to fire up jconsole and poke around inside the server. It's easy-peasy-lemon-squeezy to use JMX to manage a single server, sure. But 20? A single operation takes 20 active calls to 20 different servers that may or may not be up. I'd rather post a message and have those servers respond whenever they come up, or right away if they're already running (the publisher sketch after this list shows the idea). I'll probably end up using some JMX calls that happen as the result of a message; I'm not sure yet. Just wanted to mention that the paradigm is a little different.

    * tcServer doesn't handle clustering the way I wanted it to. I wanted a webserver running on each node (for load-balancing/failover) that proxied the entire inventory of tcServer instances with NO sticky sessions. Four Linux boxes running 3 tcServers each would net me 4 webservers and 12 tcServers. I can't tell whether the blocker is in Tomcat or in Spring Security, but I just can't do this. I don't want sticky sessions; that's the biggie. I want the load spread evenly, and I want every running instance of tcServer to know about a user no matter where they logged in. I can do a JDBCStore on the sessions with the backup timeout set to '0', which solves part of the problem, but I can't put DNS round-robin load balancing in front of that. My nodes come up and down based on demand, so the DNS may point to 4 or 6 or 2 or 12 servers; there's no telling. I'm going to work around this, and I know the problem is not tcServer but Tomcat and the assumptions made about the environment Tomcat will run in. But that's kinda the point: a private cloud is a whole other beast. My individual Linux boxes should, as much as I can make them, appear to be children of a single, unified whole. I don't think each tcServer in a cloud cluster should consider itself an individual; like a core in a multi-processor CPU, it should just be an available part of a larger, parallelized unit.
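
    To make the first bullet a bit more concrete, here is roughly the shape of the agent I mean: each node runs a tiny consumer bound to a fanout exchange and acts on whatever command gets broadcast. This is a stripped-down sketch, not our actual code; the exchange name, message format, broker host, and restart command are all placeholders.

```python
#!/usr/bin/env python
# Minimal per-node management agent (sketch). One of these runs on every cloud
# node and reacts to commands broadcast over RabbitMQ instead of being polled.
# Exchange name, message format, broker host, and restart command are placeholders.
import json
import socket
import subprocess

import pika  # RabbitMQ client library

EXCHANGE = "tc.mgmt"        # hypothetical fanout exchange for management events
NODE = socket.gethostname()

def handle(ch, method, properties, body):
    msg = json.loads(body)
    if msg.get("cmd") == "ping":
        print("%s is alive" % NODE)
    elif msg.get("cmd") == "restart":
        # Placeholder path -- point this at whatever controls your instance.
        subprocess.call(["/opt/tcserver/instance1/bin/tcruntime-ctl.sh", "restart"])

conn = pika.BlockingConnection(pika.ConnectionParameters(host="mq.example.local"))
chan = conn.channel()
chan.exchange_declare(exchange=EXCHANGE, exchange_type="fanout")
queue = chan.queue_declare(queue="", exclusive=True).method.queue
chan.queue_bind(exchange=EXCHANGE, queue=queue)
chan.basic_consume(queue=queue, on_message_callback=handle, auto_ack=True)
chan.start_consuming()
```

    The fanout exchange is the point: I don't have to keep an inventory of agents, because anything bound to the exchange hears the broadcast.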
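
    The config watcher from the second bullet is similarly unglamorous. Something along these lines covers the core of it; the config URL, local paths, and restart command are placeholders, and the real script also listens on the MQ and handles WARs and definition files, not just these two files.

```python
#!/usr/bin/env python
# Config watcher (sketch): compare this node's server.xml/setenv.sh against the
# copies on a central config server and bounce tc Server when they change.
# The base URL, local paths, and restart command are placeholders.
import hashlib
import subprocess
import urllib.request

CONFIG_BASE = "http://config.example.local/tcserver/instance1"  # hypothetical
FILES = {
    "server.xml": "/opt/tcserver/instance1/conf/server.xml",
    "setenv.sh": "/opt/tcserver/instance1/bin/setenv.sh",
}

def digest(data):
    return hashlib.sha1(data).hexdigest()

def sync():
    """Download any file whose central copy differs; return True if anything changed."""
    changed = False
    for name, local_path in FILES.items():
        remote = urllib.request.urlopen("%s/%s" % (CONFIG_BASE, name)).read()
        try:
            with open(local_path, "rb") as f:
                local = f.read()
        except IOError:
            local = b""
        if digest(remote) != digest(local):
            with open(local_path, "wb") as f:
                f.write(remote)
            changed = True
    return changed

if __name__ == "__main__":
    if sync():
        # Placeholder restart command; rolling restarts are coordinated at a higher level.
        subprocess.call(["/opt/tcserver/instance1/bin/tcruntime-ctl.sh", "restart"])
```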
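
    And to show what I mean by "passive" in the third bullet: instead of making 20 live JMX calls, the controller drops one persistent message per node onto a durable queue. Nodes that are up act on it right away; nodes that are down pick it up when they boot. A rough sketch of the publishing side (the node list, queue naming, and message body are made up for the example):

```python
#!/usr/bin/env python
# Publisher side of the "passive" model (sketch). One persistent message per node
# on a durable queue replaces 20 live JMX calls to 20 servers that may or may not
# be up. Node list, queue naming, and message body are placeholders.
import json

import pika

NODES = ["web01", "web02", "web03"]                  # hypothetical inventory
COMMAND = {"cmd": "redeploy", "artifact": "shop.war"}

conn = pika.BlockingConnection(pika.ConnectionParameters(host="mq.example.local"))
chan = conn.channel()
for node in NODES:
    queue = "tc.cmd.%s" % node                       # one durable queue per node
    chan.queue_declare(queue=queue, durable=True)
    chan.basic_publish(
        exchange="",
        routing_key=queue,
        body=json.dumps(COMMAND),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )
conn.close()
```

    The consumer on each node acknowledges the message only after acting on it, so a node that was down during the broadcast still does the work whenever it comes back.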

    I'll edit this with more later... Now, it's quittin' time!