Monday, July 26, 2010

Ten Reasons to Ignore Ten Reasons to Dump Windows and Use Linux

There is a tradition amongst hard-core Linux aficionados of extolling Linux's various virtues as a list of 10 or more reasons to replace (as in "dump") an existing Windows installation with Linux. There's no need to point them all out; a simple Google search ("10 reasons to use Linux") will provide you with such lists stretching back through the years, as well as hours of entertainment.

One very recent list caught my eye over the weekend, published by PCWorld. Their article, "Ten Reasons to Dump Windows and Use Linux", hit a nerve with me. Normally I just ignore such journalistic pap, since the majority of it comes courtesy of one ill-informed blog or another (such as this one). But in this instance a "real" publication put some time and journalistic "credibility" into this list, which places it in front of a wider audience than the typical blog post (again, such as this one).

So let's consider all ten points, starting with the first in this list:
1. Commercial Support

In the past, businesses used the lack of commercial support as the main reason for staying with Windows. Red Hat, Novell and Canonical, the "big three" commercial Linux providers, have put this fear to rest. Each of these companies offers 24x7x365 support for your mission-critical applications and business services.
Of the "big three" listed, there's actually a "big two" and a third tag-along. The big two are Red Hat followed by Novell; Ubuntu is a much smaller player and a relative newcomer to the market compared to either. Red Hat, founded in 1993, is the longest-running Linux distributor of the three. Red Hat is one of the very few pure-play Linux distributors that makes money, quarter after quarter and year after year, and it is also the largest pure-play Linux distributor.

Novell is actually the oldest company, having started in the PC networking world with Novell Netware. Novell attempted to diversify into the Unix market, first via Univel in 1991 as a joint venture with AT&T's USL, then when it purchased USL (Unix System Laboratories) outright from AT&T in 1993. Novell's foray into real Unix was limited at best; Novell continued to expand and innovate within the Netware product line until 2003, when it acquired Ximian and SuSE; the former eventually produced and released Mono, the port of C# to Linux, and the latter gave it a fully developed Linux distribution.

In the last seven years Novell has released a series of products based on these initial acquisitions, most notably Mono, SLED (Suse Linux Enterprise Desktop) and SLES (Suse Linux Enterprise Server). In 2006 Novell entered into a joint patent agreement with Microsoft, which ostensibly was meant to improve interoperability between Suse Linux Enterprise specifically and Microsoft's own Windows workstation and server products. This agreement generated considerable negative press in the Linux geek community, which, depending on your point of view is (or isn't) important. The geek fires over the agreement have pretty much died down (except in certain odd quarters).

Ubuntu is the youngest of the three; the distribution itself was initially released in October 2004, long after Red Hat was founded and long after the initial release of SuSE (version 1 shipped in 1994). Ubuntu is based on one of the oldest Linux distributions, Debian, itself first released in 1993. The problem with Ubuntu is that, while it can trace its roots to Debian, it injects considerable material of its own, and it bases its releases on Debian's more experimental release branch. Ubuntu has also generated considerable controversy over its six-month release cycle, primarily over the breakage of various applications and features between releases.

If there is a fundamental difference between Ubuntu and its rivals Red Hat and Novell, it's that Red Hat and Novell have a true enterprise Linux offering with expected enterprise support, while Ubuntu's long-term support (referred to as LTS) releases are a simple promotion of their regular six-month releases.

For point 1, if you're considering Linux, either as a full replacement or as a supplement integrated into an existing Windows installation, then your best bet is to consider Red Hat or Novell. For my money, the only real choice is Red Hat Enterprise Linux (or RHEL), simply because Red Hat has been consistently profitable and growing; Novell has struggled to grow its Linux business while its Netware business has slowly faded, and Canonical, the company behind Ubuntu, has yet to have a single profitable quarter.

Now the second point:
2. .NET Support

Businesses that have standardized on Microsoft technology, specifically their .NET web technology, can rely on Linux for support of those same .NET applications. Novell owns and supports the Mono project that maintains .NET compatibility. One of the Mono project’s goals is to provide businesses the ability to make a choice and to resist vendor lock-in. Additionally, the Mono project offers Visual Studio plugins so that .NET developers can easily transfer Windows-based .NET applications without changing their familiar development tools. Why would Novell and others put forth the effort to create a .NET environment for Linux? For real .NET application stability, Linux is a better choice than Windows.
As was pointed out here, .NET support on Linux should be considered most carefully. Microsoft has been consistently advancing and updating the .NET framework since its release in February 2002. Mono itself was first officially released in 2004. The current release of the .NET framework is version 4, while as of April 2010 Mono's official release is 2.6.4. What does this mean with regards to supporting Microsoft's .NET?

According to the Wikipedia Mono page:
This version provides the core API of the .NET Framework as well as support for Visual Basic.NET and C# versions 2.0, 3.0 and 4.0. LINQ to objects and XML is part of the distribution, but not LINQ to SQL. C# 3.0 is now the default mode of operation for the C# compiler. Windows Forms 2.0 is also now supported. Support for C# 4.0 is feature complete (as of December 2009) but not yet released in a stable version.

Implementation of .NET Framework 3.0 (i.e. WPF) is under development under an experimental Mono subproject called "Olive", but the availability of a Mono framework supporting .NET 3.0 is not yet planned.

The Mono project has also created a VB.NET compiler as well as a runtime designed for running VB.NET applications. It is currently being developed by Rolf Bjarne Kvinge.
It should also be noted that Mono includes an open-source implementation of Silverlight, called Moonlight. Moonlight is claimed to implement all Silverlight 2.0 and some Silverlight 3.0 APIs.

Regardless of what features and APIs are available with Mono, one of the biggest flaws of the current Mono implementation is within its garbage collector. As was pointed out here and on the Wikipedia Mono page:
However, the current conservative garbage collector (the "Boehm-Demers-Weiser Conservative Garbage Collector") has significant limitations compared to commercial garbage collected runtimes like the Java Virtual Machine or the .NET framework's runtime. The conservative collector can exhibit memory leaks that make it unsuitable for long-running server applications. As of July 2009, development of a modern garbage collector called "Simple Generational GC" (SGen-GC) is under way, but a date for incorporation into a production release has yet to be set. (emphasis mine)
Mono might be suitable for client-side experimentation, but its suitability as a total replacement for .NET on Windows server is questionable at best. Any business of any size would do well to think long and hard before choosing to move line-of-business applications written in C# to Mono on Linux (or to Mono on any platform, including Windows and Mac OS X).
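Before betting any workload on Mono, it's worth verifying at deploy time what runtime, if any, is actually present on the target host. A minimal shell sketch, assuming a Linux box where Mono may or may not be installed:

```shell
# Report whether a Mono runtime is available on this host.
# If mono is on the PATH, print its version banner; otherwise say so.
mono_status() {
  if command -v mono >/dev/null 2>&1; then
    mono --version | head -n1
  else
    echo "Mono runtime not installed"
  fi
}

mono_status
```

On a stock server this is a quick first sanity check before any C# code is copied over, and the version banner tells you immediately how far behind Microsoft's current .NET release the installed Mono actually is.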

The third point, Unix uptimes, is something of a straw man argument. Servers have to be carefully provisioned and configured to maintain the highest levels of availability (and thus uptime). All current modern operating systems, including Windows, can be so tailored as to provide equivalent levels of availability on individual platforms. Furthermore, techniques such as clustering and virtualization, which all modern operating systems support, further enhance availability and uptime. Arguing uptime superiority has thus become moot.
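For what it's worth, measuring uptime on any modern Linux system is trivial; a sketch that converts the seconds-since-boot figure in /proc/uptime (a Linux-specific file) into whole days:

```shell
# Convert a seconds-since-boot value (the first field of /proc/uptime,
# e.g. "12345.67") into whole days.
uptime_days() {
  local secs=$1
  echo $(( ${secs%.*} / 86400 ))
}

read -r secs _ < /proc/uptime
echo "System uptime: $(uptime_days "$secs") days"
```

A big number here says more about how carefully the box was provisioned, and how rarely it needed kernel patches, than about the operating system's pedigree.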

Point 4, concerning security, is another straw man argument. Concentrating specifically on the server-side use of Windows, Windows Server 2003 and later have made considerable strides in overall security. When properly installed and configured, contemporary versions of Windows Server are as secure as any version of Linux or Unix, and somewhat better than Mac OS X. The bane of good security on any server is the applications you install, and in this instance both Windows and Linux have very bad examples.

One of the biggest uses for servers is as a web platform. It's unfortunate that too many beginning users install web server packages on both Windows and Linux without properly configuring and hardening the installation, inviting breaches of security on both platforms. When properly installed and configured, Windows security is as good as Linux security, and both are quite good. Whether both are as strong as Unix is debatable, but for many applications both are more than good enough.

The fifth point, about transferable skills, is laughable and insulting. First is the attitude that Windows admins only know the GUI. There are any number of contemporary books devoted to nothing but the Windows command line, full of non-trivial and extensive examples for managing Windows servers. The single most powerful implementation of the Windows command line is PowerShell. Introduced in 2006, it is fully integrated with the .NET framework and provides easy automation of local and remote system management. The capabilities of PowerShell will match any combination of Linux shell and scripting language. The idea that the seasoned Windows admin is ignorant of the Windows CLI and PowerShell exposes the ignorance of the author in making such a claim, especially in 2010.
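The comparison cuts both ways: the kind of one-liner triage a Linux admin does in the shell, a Windows admin does just as easily in PowerShell. A sketch of the Linux side (the log file and pattern here are purely illustrative):

```shell
# Count the lines in a log file that match a given pattern --
# the bread and butter of command-line administration on any OS.
count_matches() {
  grep -c "$1" "$2"
}

# Illustrative usage against a throwaway log:
printf 'error: disk full\nok\nerror: link down\n' > /tmp/demo.log
count_matches error /tmp/demo.log   # prints 2
```

The PowerShell equivalent is a one-liner with `Select-String` and a count; neither admin culture has a monopoly on this kind of work.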

Point 6 gives me considerable trouble:
6. Commodity hardware

Business owners will like the fact that their “out-of-date” systems will still run Linux and run it well. Fortunately for Linux adopters, there’s no hardware upgrade madness that follows every new version of the software that’s released. Linux runs on x86 32-bit and 64-bit architectures. If your system runs Windows, it will run Linux.
There is no guarantee that "out-of-date" hardware will run Linux in the manner that business owners expect their systems to run. There is a reason why systems are retired and made "out-of-date"; newer systems are produced by the vendors, and business needs change and grow over time. No system lasts forever, and when it has reached end-of-life, that system is removed and decommissioned from regular business service. Furthermore, contemporary commercial Linux distributions are as demanding of system resources as Windows, and perhaps even more so. If you find you must use "out-of-date" hardware for some business task, then your choices are limited to older releases of community-supported distributions, with all that that implies.
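Before assuming that "out-of-date" hardware will carry a modern distribution, at least check it against the distributor's stated minimums. A sketch that reports installed RAM from /proc/meminfo (Linux-specific; any threshold you compare against is up to the distribution you're evaluating):

```shell
# Convert a MemTotal figure in kB (as reported by /proc/meminfo) to MB.
mem_mb() {
  echo $(( $1 / 1024 ))
}

kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "Installed RAM: $(mem_mb "$kb") MB"
```

Running this on the candidate machine takes seconds, and it's a far better basis for a deployment decision than the blanket claim that "if it runs Windows, it will run Linux."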

Point 7 also gives me considerable trouble:
7. Linux is free

You may have heard that Linux is free. It is. Linux is free of charge and it is free in the sense that it is also free of patents and other restrictions that make it unwieldy for creative business owners who wish to edit and enhance the source code. This ability to innovate with Linux has helped create companies like Google, who have taken that ability and converted it into big business. Linux is free, as in freedom.
Linux is not free, especially the commercial versions from Red Hat and Novell. If you're a business that wants peace of mind with your back-room systems, then you will purchase support from Red Hat or Novell so that you can concentrate on your core business instead of supporting back-room systems.

The point that Google was able to innovate on top of Linux by making significant changes, especially at the kernel level, is a valid argument for Linux and against Microsoft in that particular context. Microsoft's overly competitive behavior in the marketplace must be carefully considered when looking for a software platform on which to build an idea. While the source to Linux (kernel, libraries, and applications) is essentially open and reasonably free to change, Windows source code can only be obtained from Microsoft, with considerable portions under NDA. Again, this sounds terrible on the surface, but Microsoft's attitude towards its source is clear and up front, and is no different than that of other major vendors, such as Oracle. While freedom is bandied about rather glibly in this point, freedom includes the ability to partner with Microsoft, and to make the decisions necessary (or not) to be successful at it. And to go back to Google, keep in mind that Google has an extensive engineering staff devoted to the development and support of Linux, something that the regular small to medium sized business will not have.

Points 8 and 9 can be lumped together, because they underscore the same basic capability, and that's support. The devil is in the details. If you want 24x7 support where there's someone you can depend on at the end of a phone or an email, then you're going to buy such support from the major OS vendors. You won't get free comprehensive support. If you don't want to deal with Microsoft directly, there are sufficient partners who can provide the level of tailored help you need. The same holds true for Red Hat and Novell; both have built extensive partner networks every bit as strong as Microsoft's. Once again, reliable support is a wash as long as you stick with a reputable Linux vendor, just as you would expect by sticking with Windows.

Finally, point 10 on the list just makes me want to grab the author and shake them until their head rattles:
10. Regular Updates

Are you tired of waiting for a Windows service pack every 18 months? Are you also tired of the difficulty in upgrading your Windows systems every few years because there’s no clear upgrade path? Ubuntu Linux offers new, improved versions every six months and long-term support (LTS) versions every two years. Every Linux distribution offers regular updates of its packages and sources several times per year and security fixes as needed. You can leave any upgrade angst in your officially licensed copy of Windows because it's easy to upgrade and update Linux. And, the best part? No reboot required. (emphasis mine)
The reason I want to shake the author until his head rattles is that the six-month upgrade cycle requires a full upgrade of the existing operating system, with many instances of breakage along the way in features and applications between releases. This is a "feature" of the enthusiast distributions, of which Ubuntu is the most notable. Real businesses that demand high levels of availability at the lowest TCO possible will pick a commercial variant that does not upgrade every six months. As an example, Red Hat upgrades are more finely grained: we're using RHEL 5.5 in our lab, a point release of RHEL 5, which was originally released in March of 2007. The biggest feature of RHEL (as well as Novell's SLES) is the adherence to a specific kernel release, in this case 2.6.18 for RHEL 5. This does not mean that the RHEL 5 kernel has remained static and stagnant since its release; security fixes and important feature upgrades are back-ported from the current kernel tree to Red Hat's own 2.6.18 kernel tree and released in a timely fashion. This also includes application-level fixes and releases. Sensationalism aside, Microsoft is every bit as diligent about releasing fixes as Red Hat and Novell, and all three are indistinguishable; they have to be, as they're competing for the same type of customers.
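The kernel pinning is easy to see on a running RHEL box: the release string keeps the 2.6.18 base while the build suffix climbs with each errata batch. A sketch that splits the two apart (the example string is from an RHEL 5 update kernel; exact suffixes vary by release):

```shell
# Split a kernel release string into its upstream base version,
# e.g. "2.6.18-194.el5" -> "2.6.18". The suffix after the dash is
# the distributor's build number, which climbs as fixes are back-ported.
base_kernel() {
  echo "${1%%-*}"
}

echo "Running kernel: $(uname -r), base version: $(base_kernel "$(uname -r)")"
```

Two RHEL 5 boxes patched a year apart will report the same base version with different build suffixes, which is exactly the stability guarantee the six-month distributions can't make.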

The final point of point 10 concerns reboots. Regardless of the OS, you should have a process in place for how updates are applied to the systems in your business. Best practice usually dictates that updates are applied first to a test system (or, these days, a test virtual machine), then tested with important applications to make sure that nothing has changed unexpectedly or broken. This includes restarts, to make sure that the system will start up as expected with the changes applied. Once you're reasonably satisfied with the testing, the real systems are upgraded in an orderly fashion, keeping a weather eye on any unintended consequences that may crop up, in case those changes need to be rolled back. In all but the simplest of scenarios, restarting or not restarting a system is the least of your worries, and you will restart them if the updates are extensive enough and involve the kernel.
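The process above boils down to a simple gate: updates only reach production if the staging checks pass. A hypothetical sketch (the function and the "ok" checks are illustrative stand-ins for real application tests, not from any actual tool):

```shell
# Sketch of a staged-update gate: run a batch of smoke checks against a
# staging host, and only promote the update if every check passes.
smoke_test() {
  for check in "$@"; do
    [ "$check" = "ok" ] || return 1
  done
  return 0
}

if smoke_test ok ok ok; then
  echo "staging clean: promote update to production"
else
  echo "staging failed: roll update back"
fi
```

The shape of the gate is the same whether the updates arrive from Windows Update, Red Hat Network, or anywhere else; the reboot question is just one check among many.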

The upshot of all this? You should only choose to use Linux if there is a very good business case for incorporating Linux into your business and you fully understand the caveats involved with switching. Only you can make that ultimate, and hopefully informed, decision. Naively switching without fully understanding the consequences is a recipe for disaster, and a good way to tarnish the reputations of yourself, your company, and Linux.