Amadeus Consulting Discusses Preventing Bloat in Enterprise Systems


  • Author Steve Loper
  • Published May 10, 2011
  • Word count 1,548

Software Bloat

Most people are familiar with Moore's Law, which states that the number of transistors that can be placed on an integrated circuit doubles roughly every two years (often popularized as every 18 months). Many people misinterpret this to mean that computers will double in speed on the same schedule, though it is true that processors are rapidly getting faster.
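To make the arithmetic concrete, here is a small illustrative sketch in Python (the function name and the 18-month doubling figure are used purely for illustration):

    # Back-of-the-envelope Moore's Law arithmetic: steady doubling.
    def growth_factor(years, doubling_months=18.0):
        """Growth factor after `years` of doubling every `doubling_months` months."""
        return 2 ** (years * 12 / doubling_months)

    print(round(growth_factor(10)))  # ~102x more components after a decade

Ten years of 18-month doublings implies roughly a hundredfold increase in component count, which is exactly why it is tempting to misread the law as a promise about speed.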

Despite this rapid growth in processing power, most software tasks take about as long today as they did a decade ago. This observation is captured by a second law, Wirth's Law (sometimes called Page's Law), which states that "software is getting slower more rapidly than hardware becomes faster."

There are many reasons why new computers seem to run only marginally faster than their obsolete counterparts, but a major one is that software has become bloated with extra features and processes that demand a much larger install footprint and far more processing resources. This is commonly called "software bloat."

Software bloat can be prevented, or at least contained, through several practices:

• Controlling "feature creep," the persistent addition of new but unnecessary features

• Treating hardware limits as a reason to optimize code rather than simply buying more hardware

• Managing user expectations and planning for the long term

• Auditing system code regularly to cut costs and remove unwanted redundancies

Software bloat is not always a bad thing. In most cases, it simply means that later versions of a product offer far more features and capability than earlier versions. Unfortunately, those additions come at the cost of processing power and storage space.

The problem comes when new features are continually added but unused features are rarely or never removed, and the software is never properly optimized. Unused code persists, is maintained long past its expiration date, and drags down the software's performance. This steady expansion of features is often called feature creep.

Feature Creep

Feature creep is one of the main causes of software bloat: new features are continually added, but few are ever removed, steadily increasing the software's space and processing requirements.

Of course, new features and new capabilities are a good thing, but one problem with this kind of bloat is that users are bombarded by massive updates, and software seems to occupy the same proportion of storage space that it did many years ago, even as disks have grown enormously. Take QuickTime, Apple's video player: there seems to be a 50MB update every few weeks (sometimes bundled with an iTunes update).

What is in these updates? Bug fixes, new features, and a lot of unknowns. Yet after all of this, it still does the only thing I want it to do: play video. Only now it takes up far more space and resources than it really needs to do so.

Also related to feature creep is "bloatware," software that is unnecessarily bundled with another product. Bundling may be an effective way to distribute software, but it is annoying for consumers who want one product without burdening their system with extras.

In enterprise business systems, it is also important to proactively manage features and document each addition. Good records make it easy to remove unneeded or unused features and keep systems running at top speed, and they are also essential when recovering from a disaster such as a system crash or failure.
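One lightweight way to manage this documentation is a machine-readable feature registry. The Python sketch below is only an illustration of the idea; the field names are hypothetical rather than any standard schema:

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class FeatureRecord:
        # Hypothetical fields for tracking each feature added to a system.
        name: str
        owner: str
        added: date
        rationale: str
        last_verified_in_use: Optional[date] = None  # None => audit candidate

    registry = [
        FeatureRecord("legacy-csv-export", "reporting team",
                      date(2008, 3, 1), "one-off request from finance"),
    ]

    # Surface features nobody has confirmed using, for the next audit pass.
    for feature in registry:
        if feature.last_verified_in_use is None:
            print("review for removal:", feature.name)

Even a registry this simple gives an audit a place to start and gives disaster recovery a record of what the system is supposed to contain.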

Feature creep can also disrupt the development and implementation of new systems: deadlines slip to accommodate ever more features, which raises costs and delays the return on investment.

The Relationship between Software and Hardware

In the 1970s and early 1980s, software was constrained by severe limits on disk space, memory, and processing speed. As digital storage grew, software quickly expanded to fill it, beginning a never-ending race between available space and the required install footprint: as one expands, so does the other.

Many years ago, a friend of mine worked at Iomega helping design some of its first consumer hard drives. During the early phase of development of a new drive, the engineers created a design that could be manufactured with either a 10-megabyte or 20-megabyte capacity, which was very large for the time. So large, in fact, that the engineering team struggled to imagine how an average person could ever need 20MB of storage.

By the time the engineers were ready to go to manufacturing a few months later, it was clear that 20MB would not merely be needed; it was already almost inadequate. In that short time, software had evolved so quickly that it was pushing the hardware's limits.

Today, software effectively operates in an environment with very few resource limits. Hardware capacity advances quickly enough that developers have little reason to worry about the size or resource demands of their software, as long as it stays within functional and compatibility limits. In other words, there is little pressure to keep software small unless an artificial barrier is imposed or hardware growth suddenly stops.

There are a few areas where a somewhat artificial hardware barrier exists. For example, the Xbox 360 and PS3 were released in 2005 and 2006 respectively, and they continue to run on essentially the same hardware specifications (though later models added storage for downloaded content).

Under this imposed hardware restriction, it is interesting to watch how software adapts. Instead of relying on "unlimited" resources, developers optimize their code to be as efficient as possible. Comparing a console's early games with those released at the end of its life, the difference is stark: later games are generally far more impressive despite running on the same hardware.

We also see this to some extent in mobile app development, where app size is constrained (though there are work-arounds). The constraint has curbed bloat somewhat and encouraged some distinctive app design strategies.

The same can be true of enterprise systems, where upgrading hardware is expensive or, in some cases, impossible. It is also often cheaper, especially in the long run, to optimize a software system than to keep upgrading hardware to accommodate its growth.

Software bloat also contributes to the rising cost of hardware, as older equipment quickly becomes unsuitable for modern use. This is partly why there is almost no secondary market for hard drives, memory, processors, or other hardware.

User Expectation Bloat

A popular story, though largely apocryphal, holds that early NASA astronauts ran into trouble because ink pens would not work in zero gravity, and that after years of research and millions of dollars, scientists developed a pen that would. So how did astronauts write in space before this state-of-the-art pen existed? Simple: they used a pencil.

User expectations of technology grow as we come to understand its capabilities and possibilities. In this case, the pencil is a very simple piece of technology: we have limited expectations of it, and we know what it does, what it generally can do, and what its limitations are.

As consumers demand new features and capabilities, software becomes bigger and slower, or at best stays at about the same speed despite hardware advances. This happens often in consumer and business software, but also in enterprise systems, where new features are added but almost never removed, even once they are no longer needed.

This is the point where expectation bloat and software bloat can be very damaging.

The Impact of Bloat in Enterprise Systems

One of the most important steps in maintaining and controlling a software system is to document all of its "moving parts." This should be a proactive part of your disaster recovery plan and a standard practice for ensuring that your software runs at peak performance.

Without good documentation, you do not really have a disaster recovery plan. Even if you think you do, you will never be able to tell when you have actually recovered, because you have no record of what was there to begin with. Good documentation and regular culling of unused code keep systems running at top performance and save you from mounting an archaeological expedition to recover lost data whenever the system crashes.

Oftentimes, "temporary" and unused code is left in the system. This may be because somebody forgot to remove it, because nobody realizes it is unused, or because there is no documentation and developers fear that removing it will break something else. Such code is usually identified only through a thorough audit of the system's code.
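As a rough illustration of what such an audit can automate, the following sketch flags Python functions that are defined but never referenced by name. It assumes a Python codebase (rather than any particular enterprise stack) and is a heuristic only; dynamic dispatch, getattr, and callers outside the scanned tree will all produce false positives:

    import ast
    import sys
    from pathlib import Path

    def find_unreferenced_functions(root):
        """Names of functions defined under `root` but never referenced
        by name anywhere in the scanned files. Heuristic audit leads only."""
        defined, used = set(), set()
        for path in Path(root).rglob("*.py"):
            tree = ast.parse(path.read_text(encoding="utf-8"), str(path))
            for node in ast.walk(tree):
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    defined.add(node.name)
                elif isinstance(node, ast.Name):
                    used.add(node.id)
                elif isinstance(node, ast.Attribute):
                    used.add(node.attr)
        return defined - used

    if __name__ == "__main__":
        for name in sorted(find_unreferenced_functions(sys.argv[1])):
            print("possibly unused:", name)

Treat the output as leads for human review and documentation, not as a safe-to-delete list.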

Besides slowing down vital systems, software bloat carries ongoing costs that are often overlooked. In bad cases, it slows vital systems and ties up a server or other hardware. In worse cases, that wasted server is continually maintained, patched, and upgraded, consuming thousands of dollars and hundreds of hours of staff time over its lifetime.

About Steve Loper

Steve Loper has been the Senior Quality Engineer at Amadeus Consulting since 1995. He has been recognized by Microsoft as a "Most Valuable Professional" and is regarded as one of the top .NET application and SQL Server database architects in the country. He currently oversees client projects to ensure that a strong technical approach is in place to address even the most complex issues.

Article source: https://articlebiz.com