I think of Solaris as one of the few truly great technologies of the past 30 years. That isn't a large list, since the best technologies rarely survive. I've wanted to learn more about it, but because it ran on SPARC hardware I never had a chance to install it on my own server. Things have changed: it now runs well on a variety of x86 platforms, making it easy to test, and Oracle has a permissive license for this kind of install.
After writing about zones and security I started to wonder whether I might be mistaken about Solaris. My comments were really about Linux containers, but are Solaris zones actually less secure? To find out, I decided to install Solaris 11 and test it for myself. Before doing so I did a quick Internet check and found that CIS has a hardening guide for it, with a DISA STIG pending as well. Those are good signs, so I migrated my Xen VMs off of one server and installed Solaris 11.1 on it. (XenServer live migration is a great feature, by the way. Yet another reason I like Xen.)
It installed fine except for a glitch with my drives. I wanted to configure them in JBOD mode and use ZFS-based RAID (RAID-Z). Denied! My HP Smart Array controller would not allow me to do something as dangerous as software RAID, so it's mandatory hardware RAID for me. Once I learned that, Solaris installed without a problem. It was a good install experience. I've read that installation can easily be automated using the Automated Installer tool, but I haven't tested that yet.
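For reference, had the controller allowed JBOD, creating a RAID-Z pool would have been a one-liner. A sketch, where the pool name and the four device names are hypothetical (list your actual disks with `format`):

```shell
# Create a single-parity RAID-Z pool named "tank" from four whole disks.
# Device names here are made up for illustration.
zpool create tank raidz c0t1d0 c0t2d0 c0t3d0 c0t4d0

# Verify the pool layout and health.
zpool status tank
```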
After a few days of testing, I have to say that I am deeply impressed. Solaris has a polished, professional feel. Its documentation is amazing: 1.5 gigabytes when downloaded! For my initial testing I configured the network and some zones. For the network, I dedicated one linked pair of NICs to zone traffic. Creating the link aggregation was easy. I then created virtual NICs, each on its own VLAN, on top of this aggregation. The Oracle instructions were clear and it worked. Zones were likewise easy to set up by following the documentation.
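The network steps above can be sketched with `dladm` and `zonecfg`. The physical NIC names, VLAN IDs, and zone name below are my own assumptions for illustration, not the actual values from my setup:

```shell
# Aggregate two physical NICs (assumed names net0, net1) into one link.
dladm create-aggr -l net0 -l net1 aggr0

# Create a VNIC per zone on top of the aggregation, each on its own VLAN.
dladm create-vnic -l aggr0 -v 10 vnic10
dladm create-vnic -l aggr0 -v 20 vnic20

# Assign one of the VNICs to a zone (zone name "webzone" is hypothetical).
zonecfg -z webzone 'set ip-type=exclusive; add net; set physical=vnic10; end'
```

With exclusive-IP zones, each zone then manages its own IP stack on its dedicated VNIC, which keeps zone traffic cleanly separated on the tagged VLANs.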
I was able to get Ansible to run a few simple plays against my Solaris server, from a CentOS workstation. My guess is that most of Ansible will work OK, but with some glitches.
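The kind of simple check I ran can be sketched as ad-hoc Ansible commands from the control machine; the inventory hostname `solaris1` is hypothetical:

```shell
# Verify connectivity and Python availability on the managed Solaris host.
ansible solaris1 -m ping

# Gather facts, filtered to the OS identification variables, to confirm
# Ansible recognizes the platform correctly.
ansible solaris1 -m setup -a 'filter=ansible_distribution*'
```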
Solaris has so many killer features that it will take me a long time to learn them all: zones, ZFS, service management, the new packaging system, the Automated Installer, boot environments, etc. I've used Linux for an embarrassingly long time, but seeing these features makes me reconsider my OS choice.
In the context of Configuration Management (CM), rollback is an important capability, and none of the existing infrastructure CM tools support it. If you deploy one of your configurations to a production server (for example, a new DNS configuration) and, even though you tested it thoroughly, it causes a problem, how do you quickly and safely roll back that change? Easy to do with source code; not so easy with production servers. I think Solaris may have the solution to that problem through its boot environment feature. Boot environments allow you to snapshot the current state of your system, make a change, and easily revert if the change fails. That is a powerful CM feature. I will write more about this CM topic in another post; I mention it here because I think Solaris may enable a more robust CM process than was possible with CentOS.
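That rollback workflow can be sketched with `beadm`; the boot environment name below is my own choice:

```shell
# Snapshot the running system as a new boot environment before a change.
beadm create pre-dns-change

# ...apply the risky change (e.g. the new DNS configuration).
# If it misbehaves, reactivate the saved environment and reboot into it.
beadm activate pre-dns-change
init 6

# List environments: flags show which is active now and active on reboot.
beadm list
```

Because boot environments are ZFS clones, the snapshot is nearly instant and cheap in disk space, which is what makes this practical as a routine pre-change step.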
I will test Solaris more in the coming weeks and write about it here.