EXPERT ADVICE

10 MS Exchange Practices Most Companies Should Shun

Deploying, managing and maintaining the high availability of the Microsoft Exchange 2010 email platform in enterprise environments is no small feat. Given the increased complexity of this new Microsoft platform, numerous key decisions must be made up front to keep Exchange running smoothly.

Here are the top 10 worst practices that you should avoid if you want to maintain the performance and uptime of your Exchange email system.

1. Run Without Backups

Running without backups is a hot topic these days, and it’s an approach that is gaining popularity in some IT circles. The logic behind this practice is that data sets are getting so large that they cannot possibly be backed up. Instead, proponents argue that data should be replicated in real time to other servers or locations using replication technology.

While running without backups can be a smart choice in some isolated cases, it is generally not a good practice. Specifically, if an organization requires the ability to restore data to a point in time, then using replication instead of backups is not a good option.

2. Deploy 50 GB Mailboxes

With Google and Microsoft increasing the size of their business-class cloud offerings, some customers are asking for 50 GB mailboxes to "future-proof" their Exchange deployments. The problem with this approach is that large data sets often must be stored in multiple places. For example, a 50 GB mailbox will have to be stored in the primary mailbox database, in the backup location and on the user's hard drive (which is most likely backed up as well).

As a result, a 50 GB mailbox actually consumes 150 GB of storage. Also, when mobile users upgrade their laptops and re-sync their email over remote connections from home or a hotel, the process can take days. A better alternative is a single 5 GB mailbox combined with some sort of attachment-handling software to keep mailbox sizes lean.
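To make that math concrete, here is a minimal back-of-envelope sketch; the copy count, link speed and conversion factors are illustrative assumptions, not measurements from any particular deployment.

```python
# Back-of-envelope cost of a 50 GB mailbox (all figures are illustrative assumptions).

mailbox_gb = 50
storage_copies = 3            # primary database + backup copy + local OST cache

effective_storage_gb = mailbox_gb * storage_copies
print(f"Effective storage per user: {effective_storage_gb} GB")

# Time to re-sync the full mailbox to a new laptop over a remote link.
link_mbps = 10                # assumed home/hotel uplink speed
mailbox_bits = mailbox_gb * 8 * 1000**3
resync_hours = mailbox_bits / (link_mbps * 1_000_000) / 3600
print(f"Full re-sync at {link_mbps} Mbps: roughly {resync_hours:.0f} hours")
# Add protocol overhead, throttling and slower hotel Wi-Fi, and "days" is realistic.
```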

3. Deploy JBOD Storage Without RAID

Exchange 2010 now provides more robust replication capabilities through database availability groups (DAGs), which can include up to 16 mailbox servers and maintain up to 16 replicated copies of each database across them. In some cases, it can make sense to forgo using RAID (Redundant Array of Inexpensive Disks).

However, contrary to what Microsoft would like us to believe, this does not make sense for everyone. Hard drives still fail, and running in a JBOD configuration with Exchange requires maintaining two copies of the primary database. The logic here is that even if a drive fails it can be replaced without losing any data because two copies of the database still exist.

Yet, as most IT administrators know, “when it rains it pours”. Playing these odds is a dangerous game. So adding some simple redundancy like RAID 5 and a hot spare will eliminate long nights or weekends spent rebuilding or restoring Exchange.
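To put rough numbers on those odds, the sketch below estimates how often the surviving copy's drive fails during a reseed window; the failure rate and reseed time are assumptions chosen purely for illustration.

```python
import math

# Illustrative assumptions -- substitute figures for your own hardware.
annual_failure_rate = 0.05    # ~5% of drives fail per year
reseed_hours = 24             # time to replace a failed drive and reseed the copy
drives_at_risk = 1            # drives holding the only remaining database copy

# Treat drive failures as a Poisson process with a constant hourly rate.
failures_per_hour = annual_failure_rate / (365 * 24)
p_loss = 1 - math.exp(-failures_per_hour * drives_at_risk * reseed_hours)

print(f"Chance of losing the last copy during one reseed: {p_loss:.4%}")
# Small per incident -- but multiplied across many drives, many databases and
# several years of operation, the odds stop looking comfortable, which is why
# a RAID 5 set plus a hot spare is cheap insurance.
```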

4. Use Third-Party High-Availability Solutions

Virtualization is the latest third-party approach being touted for achieving “High Availability.” Instead of using Exchange 2010’s built-in high-availability capabilities, this model is based on using underlying virtualization technology to attain high-availability features.

This is akin to buying a brand-new car, taking it home and then replacing the factory engine with a home-built one. Enterprise products that include built-in high-availability features, especially database-centric products like Exchange, should be used as intended to achieve the expected results.

5. Put Your Email in the Cloud

There are several important ramifications that should be considered before moving email service to the public cloud.

Networking — will the existing network support the additional load of cloud services? Compared with on-premises email running over a 100 Mbps or 1 Gbps LAN, email delivered across the WAN from a public cloud is likely to perform significantly worse (a rough sizing sketch follows this list).

Features — will public cloud email support all of your organization's requirements? On-premises versions of Exchange give IT organizations granular control over features and how they are implemented. Cloud email services can limit IT's control over the end-user experience and, potentially, business productivity.
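For the networking question above, a minimal sizing sketch along these lines can help; the user count, message volume, message size and peak factor are all assumptions to be replaced with your own figures.

```python
# Rough WAN sizing for email moved to a public cloud (illustrative assumptions only).

users = 1000
messages_per_user_per_day = 100
avg_message_kb = 75            # average message size including attachment overhead
business_day_hours = 8
peak_factor = 3                # traffic clusters around morning and after-lunch peaks

daily_bytes = users * messages_per_user_per_day * avg_message_kb * 1024
average_bps = daily_bytes * 8 / (business_day_hours * 3600)
peak_mbps = average_bps * peak_factor / 1_000_000

print(f"Estimated peak WAN load for email alone: ~{peak_mbps:.1f} Mbps")
# On-premises, this traffic stays on the LAN; in the cloud it competes with
# everything else crossing the Internet links.
```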

6. Stretch Your Data Centers for Disaster Recovery

Most organizations don't need true disaster recovery capabilities for email. A bank, for example, will require disaster recovery for email; a widget manufacturer, on the other hand, can likely tolerate a 24-hour outage.

Therefore, creating a complete replica (or "stretched" data center, as some call it) is rarely necessary. For many organizations, a well-written disaster recovery plan supported by offsite storage of backed-up data is a sufficient, much lower-cost option that is less prone to error during the recovery process.

7. Build Exchange for Five Nines of Availability

As in number six above, investments in email availability technology should map to business requirements.

Here are the numbers: 99.95 percent availability allows 4.38 hours of service interruption per year, 99.99 percent allows 52.6 minutes, and 99.999 percent translates to just 5.26 minutes of downtime per year.

However, getting from 99.95 to 99.999 percent is extremely expensive — each additional nine represents an exponential increase in hardware and staffing costs, and much of the difference between four nines and five nines comes down to operational discipline. Unless an organization is in financial services, healthcare or another real-time business, the costs of achieving five nines are difficult, if not impossible, to justify.
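The downtime figures above follow directly from the number of hours in a year; a minimal calculation:

```python
# Allowed downtime per year for a given availability target.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

for pct in (99.95, 99.99, 99.999):
    downtime_hours = (1 - pct / 100) * HOURS_PER_YEAR
    if downtime_hours >= 1:
        print(f"{pct}% availability -> {downtime_hours:.2f} hours of downtime per year")
    else:
        print(f"{pct}% availability -> {downtime_hours * 60:.2f} minutes of downtime per year")
```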

8. Use Third-Party Archiving for Large Mailboxes

With the release of Exchange 2010 and support for large mailboxes, there is no longer a need to use third-party archiving products to maintain a large mailbox (5-10 GB). Data can be stored in Exchange and accessed with Outlook 2007 (some configuration adjustments required) and Outlook 2010 without issue.

Prior to Exchange 2010, most organizations that wanted to implement unlimited mailbox sizes were forced to use some sort of “stubbing” technology — a clumsy feature that forces end users to connect to external systems to read their email.

9. Use Fibre Channel

There’s a common misconception that Fibre Channel storage is required for IO (input/output)-intensive applications like Exchange email. While this may have been the case in the past, it is quickly becoming the exception instead of the rule.

Major advancements in iSCSI over the past two years have made it a very viable transport for all but the most IO-intensive applications, such as enterprise ERP/CRM database workloads. Using iSCSI in conjunction with 10 Gigabit Ethernet delivers a high-performance storage network at a fraction of the cost of more complex Fibre Channel technology.

10. Deploy Non-Supported Solutions

Microsoft publishes very specific Exchange support policies that, if followed, will deliver the expected results. When designing Exchange or other IT systems, paying close attention to support policies and not cutting corners will save money and headaches down the road.

In IT, as in life, just because we can do something doesn't mean it is in our best interest. Evaluated against the technology and business requirements of a typical organization, these 10 Exchange practices should, in most cases, be avoided.


Lee Dumas is director of architecture for Azaleos, a provider of managed services for Exchange (and other Microsoft UC servers), and a Microsoft Certified Architect.
