IT infrastructure has advanced faster than software application development, and this gap makes deploying production-ready applications much more difficult than it needs to be. Infrastructure has become virtualized and automated: a system administrator can quickly create and provision any number of virtual servers. If demand increases, more virtual servers can be created to handle the load; if demand decreases, the resources can be reclaimed for other uses. Modern IT environments consist of tens, hundreds, or even thousands of virtual servers.
Software, however, is stuck in the era of the single server. Application development and deployment have changed little since the 1980s. Applications are written for a single machine and then deployed using an installer (e.g., InstallShield) or package manager (e.g., RPM) on a single machine. For client software this isn’t as big a problem (though it is still a problem for companies), but for server applications it’s hopelessly obsolete.
No production server application is ever deployed on just a single server. If it’s important enough for production, you need at least two instances for the sake of redundancy. The application has to be installed multiple times, and some kind of separate failover and load-balancing system has to be put in place. Even though deployment to multiple servers is the rule, it is still treated as a special case by application development tools and takes extra time to configure and test.
It’s not hard to imagine how this situation could be improved. If server application development were production oriented, it would work more like this:
- Write the software using language constructs and libraries that are multi-server compatible out of the box.
- Configure the load-balancing and failover strategy for the application.
- Identify the list of virtual servers it will be deployed on, anywhere from two to thousands, and then hit “Deploy”.
- If the demand or infrastructure changes then edit the application deployment configuration file from steps 2 and 3. The application then updates itself accordingly.
- Updates work the same way: the application and its dependencies are updated and simply redeployed. Updating should also be configurable so that the service is not disrupted while the update rolls out.
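The workflow above could be driven by a single declarative deployment descriptor. Here is a minimal sketch in Python; every name in it (`DeploymentConfig`, `deploy`, the server names) is hypothetical, invented purely to illustrate the idea:

```python
from dataclasses import dataclass, field

# Hypothetical deployment descriptor: everything the packager needs
# to run the application across many servers (steps two and three above).
@dataclass
class DeploymentConfig:
    app: str
    load_balancing: str                      # e.g. "round-robin"
    failover: str                            # e.g. "active-active"
    servers: list = field(default_factory=list)

def deploy(config: DeploymentConfig) -> dict:
    """Simulate hitting "Deploy": one install per listed virtual server."""
    return {vm: f"{config.app} running" for vm in config.servers}

config = DeploymentConfig(
    app="calc-service",
    load_balancing="round-robin",
    failover="active-active",
    servers=["vm01", "vm02", "vm03"],
)
instances = deploy(config)

# Scaling out (step four) is just editing the descriptor and redeploying.
config.servers.append("vm04")
instances = deploy(config)
```

The point of the sketch is that the developer never touches the server list; scaling, like the initial rollout, is a pure configuration change.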
Here is an example. Let’s say the company needs a web service that performs a complex and time-consuming set of financial calculations. In my scheme the developer need only code the logic of the calculation and thoroughly test it. Then the system administrator enters the deployment settings (type of load balancing, endpoint URL, which VMs it will go on, etc.) and deploys it using the virtualization-capable package manager.
The company now has a scalable, production-ready web service with a single URL. If one of the VMs goes down, no problem: the service is deployed on several and copes with the failure out of the box. If demand for the web service is high, also no problem: the administrator simply edits the deployment file, adds VMs to the list, and re-runs the package manager.
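The failover behavior described here can be sketched as a toy round-robin load balancer fronting the service’s single URL. Everything in this snippet (the `LoadBalancer` class, the VM names, the request string) is a hypothetical illustration, not a real implementation:

```python
# Toy round-robin load balancer for the single public URL of the
# financial-calculation web service. A dead VM is simply skipped,
# which is the built-in failover the text describes.
class LoadBalancer:
    def __init__(self, backends):
        self.backends = list(backends)   # VMs running the service
        self.down = set()                # VMs currently unreachable
        self._next = 0

    def handle(self, request: str) -> str:
        for _ in range(len(self.backends)):
            vm = self.backends[self._next]
            self._next = (self._next + 1) % len(self.backends)
            if vm not in self.down:
                return f"{vm} computed {request}"
        raise RuntimeError("no healthy backends")

lb = LoadBalancer(["vm01", "vm02"])
lb.down.add("vm01")                      # one VM goes down: no problem
result = lb.handle("npv(cashflows)")     # vm02 picks up the request

lb.backends.append("vm03")               # demand grows: add a VM to the list
```

In the scheme proposed above, this routing and health-checking logic would ship with the package manager rather than being bolted on by hand.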
My point is that application development tools and packagers should assume a multi-server environment as the default and provide all the tools to support it out of the box. All the proxy, load-balancing, and failover capabilities should be built in, not require additional software and tedious configuration. Applications should also probably be sandboxed and bring all their dependencies with them. They should not require the VM to have other packages installed in order to work, as they do now; they should be totally self-contained.
Application-server-based systems such as J2EE go some way toward this goal, as do languages such as Erlang, but the overall state of server applications and package managers is woefully inadequate for the modern IT environment.