Nice brief article by David Ferris on how some of the changes from Exchange 2007 to Exchange 2010 will inevitably impact customer storage architectures for this mission-critical application. Some things to consider with Exchange 2010:
- The Exchange high availability and failover model is centered on 'Database Availability Groups' (DAGs), which use 'over-the-top' replication between Exchange mailbox servers
- With DAG, each mailbox server has captive storage, unlike the shared storage model of 2007
- Single-instance attachment storage is no longer part of Exchange, meaning redundant copies of attachments are each stored in full, driving up capacity
- PST centralization, archiving, and a host of other features also drive up capacity consumption
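The capacity impact of losing single-instance storage is easy to see with back-of-the-envelope arithmetic; the attachment size and recipient count below are illustrative assumptions, not figures from the article:

```python
# Rough arithmetic for the loss of single-instance storage.
# Illustrative numbers: one 5 MB attachment mailed to 200 recipients.
attachment_mb = 5
recipients = 200

with_sis = attachment_mb                  # stored once, referenced by every mailbox
without_sis = attachment_mb * recipients  # one full copy lands in each mailbox

print(f"{with_sis} MB with single-instance storage")
print(f"{without_sis} MB without it")  # 1000 MB for the same attachment
```

A 200x difference on a single attachment is why the loss of this feature, multiplied across an organization's mail flow, drives such dramatic capacity growth.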
The net result: customers will inevitably need more storage - a LOT more storage. StorSimple provides a compelling solution to these issues by blending the best of on-premises storage with the on-demand model of cloud storage, along with technology advances including Weighted Storage Layout (WSL) and primary storage deduplication.
Today is an exciting day for a group of people who have been tucked away for almost a year building what we consider to be technology with the potential to positively impact certain application environments: dramatically simplifying storage management, improving performance consistency, and enabling the benefits of cloud storage. After nearly a year of operating in stealth mode, StorSimple is launching our first non-stealth company website. While we’re not disclosing product specifics until our upcoming product launch, we are disclosing who we are, what we do, what problems we’re targeting, and what benefits we can provide. We have, however, shared details about our solution and our technology with our early adopters, a number of whom have reached out to the analyst, reseller, and technology partner community with great interest; that was likely the catalyst for StorSimple being named a cool vendor by a prestigious analyst firm.
For those of you who might not be familiar with us, StorSimple is a company based in Santa Clara, CA, founded by Ursheet Parikh and Guru Pangal. Our investors are Index and Redpoint, and our goal is to help customers simplify their data storage environments, reduce complexity, minimize cost, and improve performance consistency for high-growth applications. The DNA of our core team is in storage, data center networking, virtualization, and application delivery. What we’ve built, and are currently beta testing, is what we like to call an application-optimized hybrid storage solution: an on-premises storage system deployed in your data center, built for a certain class of applications, that can securely take advantage of cloud storage. One of our analyst friends described it best as “a hybrid storage provider that blends the best of on-premises storage with security, WAN optimization, and an on-demand cloud storage model.” Our solution works with both public and private clouds, and we have a number of technology and cloud storage partnerships that we will be announcing soon.
We’re taking a pragmatic approach and focusing our core technology (at this stage) on three key applications, all of which are experiencing tremendous growth and are a very good match for our solution: Microsoft Exchange, Microsoft SharePoint, and Microsoft Windows File Servers. Our solution – in a nutshell – provides the experience of primary storage with the economic and operational benefits of cloud storage. That is, you get the performance and features of primary storage (volume management, block volume access, snapshots) along with the economic benefits (cost) and operational benefits (on-demand, pay-as-you-grow) of cloud storage.
We’ve built compelling technology directly and organically into our product. Our solution includes integrated on-premises storage consisting of SSD and SATA with automated data tiering (through a patented algorithm we’ve named Weighted Storage Layout, or WSL) as well as primary storage deduplication. We’ve found that for a certain class of applications – those with a high degree of locality (that is, hotspots) and a high degree of compressibility (typically unstructured data) – our solution is extremely effective. We’re not out to claim we can address every application with our solution, but we are definitely out to build the best storage system in the world for these applications.
Why are WSL and primary storage deduplication compelling? Traditional storage provides minimal discrimination between blocks of data. Sure, most systems have read and write caches, and some higher-end systems have introduced (or are introducing) intelligent caching using a large pool of SSD. But most (not all) require you to purchase a massive pool of capacity up front, and in many cases this pool is comprised of fast, expensive storage that sits unused for long periods until your growth requirements cause your applications to actually take advantage of it. The whole time, that idle capacity is consuming power, space, and cooling, and yes, you still have to manage it.
WSL and primary storage deduplication make a stunning combination. Together, you get transparent, efficient capacity utilization through deduplication, while WSL transparently adjusts the physical location of sub-blocks of data across tiers so that the most frequently accessed, most frequently referenced, and most relevant data is stored on the fastest tier. The ability to track data’s relevance over time helps us manage where data lives – all without changing the way your servers see storage. This means you can achieve the performance of SSD without requiring a massive pool of it, as long as the tiering algorithm is effective at identifying which data needs to be there. The balance of the data can reside on lower-cost SATA.
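As a rough illustration of how these two ideas combine, here is a simplified Python sketch. To be clear, this is not our actual WSL implementation (which is patented and not public); the decay-weighted scoring and all names here are illustrative assumptions:

```python
import hashlib
import heapq

class TieringSketch:
    """Illustrative hot/cold tiering with block-level deduplication.
    The scoring scheme is a simplified stand-in, not the real WSL
    algorithm."""

    def __init__(self, ssd_blocks: int, decay: float = 0.5):
        self.ssd_blocks = ssd_blocks  # capacity of the fast tier, in blocks
        self.decay = decay            # how quickly old accesses fade
        self.scores = {}              # fingerprint -> weighted access score
        self.store = {}               # fingerprint -> payload (stored once)

    def write(self, data: bytes) -> str:
        fp = hashlib.sha256(data).hexdigest()
        self.store.setdefault(fp, data)  # identical blocks stored only once
        self.scores[fp] = self.scores.get(fp, 0.0) + 1.0
        return fp

    def read(self, fp: str) -> bytes:
        self.scores[fp] = self.scores.get(fp, 0.0) + 1.0
        return self.store[fp]

    def age(self):
        """Periodically decay scores so stale data cools off over time."""
        for fp in self.scores:
            self.scores[fp] *= self.decay

    def placement(self):
        """Map each unique block to 'ssd' or 'sata' by current score."""
        hot = set(heapq.nlargest(self.ssd_blocks, self.scores,
                                 key=self.scores.get))
        return {fp: ("ssd" if fp in hot else "sata") for fp in self.store}

# Two writes of the same block consume one slot; the frequently read
# block earns the SSD tier while cold data settles onto SATA.
t = TieringSketch(ssd_blocks=1)
a = t.write(b"mail attachment")
b = t.write(b"mail attachment")   # deduplicated: same fingerprint as a
c = t.write(b"cold archive data")
for _ in range(5):
    t.read(a)                     # hot data accumulates score
t.age()
placement = t.placement()         # a -> 'ssd', c -> 'sata'
```

The key point the sketch captures: placement is driven by observed access weight rather than by an up-front purchase decision, and deduplication means the score of a shared block reflects every reference to it.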
We learned two lessons early on that helped shape our company direction: customers genuinely have concerns over the security of data stored in the cloud, and they are generally reluctant to use cloud storage for primary storage purposes. We’ve designed the system to let you manage the encryption of your data without sharing your keys with your cloud storage provider, and to let you decide whether – and how – you want to take advantage of public or private cloud storage. Customers can use us like a traditional on-premises storage array (no cloud) and enjoy the benefits of WSL and primary storage deduplication, or they can use the cloud either for data protection or for primary storage. For those who want to take advantage of the cloud, we have another patented feature we call ‘Cloud Clones’, which lets you take a point-in-time, consistent snapshot of the volumes related to an application and store it (encrypted) in the cloud as an always-online, truly independent point-in-time copy. To ensure performance when accessing the cloud, we’ve built asymmetric WAN optimization into our solution (no device or instance is needed in the cloud provider’s network), and WSL is optimized to address the performance issues of having a tier connected over a WAN.
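To illustrate the Cloud Clone idea in simplified form (again, a sketch rather than our actual implementation; the `encrypt` callable below is a stand-in for real customer-controlled encryption whose keys never leave your premises):

```python
import hashlib
import json

def make_cloud_clone(volume_blocks, cloud, encrypt):
    """Store a point-in-time copy of a volume as a set of
    content-addressed objects plus a manifest that references them.

    `cloud` is any dict-like object store; `encrypt` is a callable
    the customer controls -- data leaves the premises encrypted,
    and only blocks not already in the cloud are uploaded."""
    manifest = []
    for block in volume_blocks:
        fp = hashlib.sha256(block).hexdigest()
        if fp not in cloud:            # dedup: upload new blocks only
            cloud[fp] = encrypt(block)
        manifest.append(fp)
    # The manifest IS the snapshot: an always-online, independent copy.
    snap_id = "snap-" + hashlib.sha256(
        ",".join(manifest).encode()).hexdigest()[:12]
    cloud[snap_id] = json.dumps(manifest).encode()
    return snap_id

cloud = {}
enc = lambda b: b[::-1]  # placeholder only; real deployments use strong encryption
first = make_cloud_clone([b"blk1", b"blk2"], cloud, enc)
# A later clone of the grown volume uploads only the new block plus a manifest.
second = make_cloud_clone([b"blk1", b"blk2", b"blk3"], cloud, enc)
```

Because each clone references blocks by content fingerprint, successive snapshots share unchanged data, which is what keeps the WAN traffic and the cloud capacity bill proportional to change rather than to volume size.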
The net result for the applications we’re focusing on is a primary storage experience with lower cost and simpler management. Over the next few weeks I’ll be writing a bit more about the specific applications we focus on, what their issues are, and how our technology solves those issues and makes life “simple” for our customers. Since we have “simple” in our name, our product had better be simple to deploy, simple to manage, and make life simpler for our customers. None of us are interested in changing our name from “StorSimple” to “StorDifficult” or “StorComplicated”.
While there are likely a number of questions that I didn’t answer directly (I didn’t want to write another book!), I do invite you to take a look at our website – http://www.storsimple.com – and if you’re interested in learning more or potentially participating in our beta, drop me a line! I’d love to hear from you.
Very good article from Jeff Boles @ Taneja discussing at a high level how cloud storage services are helping to address the total cost of ownership of data protection. When you factor in the many elements of cost, complexity, and failure surface in traditional data protection, it's no surprise that the cloud can be so compelling.