Performance: The Key to Data Efficiency

Data efficiency encompasses a variety of different technologies that enable the most effective use of space on a storage device

Data efficiency - the combination of technologies including data deduplication, compression, zero elimination and thin provisioning - transformed the backup storage appliance market in well under a decade. Why has it taken so long for the same changes to occur in the primary storage appliance market? The answer can be found by looking back at the early evolution of the backup appliance market, and understanding why EMC's Data Domain continues to hold a commanding lead in that market today.

Data Efficiency Technologies
The term "data efficiency" encompasses a variety of different technologies that enable the most effective use of space on a storage device by both reducing wasted space and eliminating redundant information. These technologies include thin provisioning, which is now commonplace in primary storage, as well as less extensively deployed features such as compression and deduplication.

Compression is the use of an algorithm to identify data redundancies within a small distance, for example, finding repeated words within a 64 KB window. Compression algorithms often take other steps to increase the entropy (or information density) of a set of data such as more compactly representing parts of bytes that change rarely, like the high bits of a piece of ASCII text. These sorts of algorithms always operate "locally", within a data object like a file, or more frequently on only a small portion of that data object at a time. As such, compression is well suited to provide savings on textual content, databases (particularly NoSQL databases), and mail or other content servers. Compression algorithms typically achieve a savings of 2x to 4x on such data types.
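To make the "local" nature of compression concrete, here is a minimal sketch in Python using the standard zlib library. The 64 KB window and the synthetic log-like sample are illustrative assumptions only, not figures or behavior from any particular product:

```python
import zlib

WINDOW = 64 * 1024  # illustrative "local" window: compress 64 KB at a time


def compress_locally(data: bytes, window: int = WINDOW) -> int:
    """Return the total compressed size when each window is compressed independently.

    Summing per-window sizes is only for measuring savings here; a real system
    would also record per-window offsets so the data could be read back.
    """
    compressed_size = 0
    for offset in range(0, len(data), window):
        chunk = data[offset:offset + window]
        compressed_size += len(zlib.compress(chunk, 6))
    return compressed_size


if __name__ == "__main__":
    # Synthetic, highly repetitive text; real files compress less dramatically,
    # typically in the 2x to 4x range mentioned above.
    sample = b"timestamp=2015-01-01 level=INFO msg='request served'\n" * 20000
    size = compress_locally(sample)
    print(f"raw={len(sample)} bytes, compressed={size} bytes, "
          f"ratio={len(sample) / size:.1f}x")
```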

Deduplication, on the other hand, identifies redundancies across a much larger set of data, for example, finding larger 4 KB repeats across an entire storage system. This requires both more memory and much more sophisticated data structures and algorithms, so deduplication is a relative newcomer to the efficiency game compared to compression. Because deduplication has a much greater scope, it has the opportunity to deliver much greater savings - as much as 25x on some data types. Deduplication is particularly effective on virtual machine images as used for server virtualization and VDI, as well as development file shares. It also shows very high space savings in database environments as used for DevOps, where multiple similar copies may exist for development, test and deployment purposes.
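To make the contrast with compression concrete, here is a minimal sketch of block-level deduplication across an entire data set. The fixed 4 KB block size and SHA-256 fingerprints are common illustrative choices, not a description of any specific vendor's implementation:

```python
import hashlib

BLOCK_SIZE = 4 * 1024  # fixed 4 KB blocks, for illustration only


def dedupe_ratio(volume: bytes, block_size: int = BLOCK_SIZE) -> float:
    """Ratio of logical blocks to unique blocks if each unique block is stored once."""
    fingerprints = set()
    total_blocks = 0
    for offset in range(0, len(volume), block_size):
        block = volume[offset:offset + block_size]
        fingerprints.add(hashlib.sha256(block).digest())  # content fingerprint
        total_blocks += 1
    return total_blocks / len(fingerprints) if fingerprints else 1.0


if __name__ == "__main__":
    # One "golden image" of 100 distinct blocks, cloned 20 times (as in VDI).
    golden_image = b"".join(bytes([i]) * BLOCK_SIZE for i in range(100))
    fleet = golden_image * 20
    print(f"deduplication ratio: {dedupe_ratio(fleet):.0f}x")  # ~20x
```

Twenty identical clones collapse to a single stored copy of the image, which is exactly the effect that makes VDI fleets and DevOps database copies such strong deduplication candidates.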

The Evolution of Data Efficiency
In less than ten years, data deduplication and compression shifted billions of dollars of customer investment from tape-based backup solutions to purpose-built disk-based backup appliances. The simple but incomplete reason for this is that these technologies made disk cheaper to use for backup. While this particular aspect enabled the switch to disk, it wasn't the driver for the change.

The reason customers switched from tape to disk was speed: backing up to disk, and especially restoring from it, is much, much faster. Enterprise environments were struggling to meet their backup windows, recovery point objectives, and (especially) recovery time objectives with tape-based backup systems. Customers were already using disk-based backup in critical environments, and they were slowly expanding its use as the gradual decline in disk prices allowed.

Deduplication enabled the media transition for backup by dramatically changing the price structure of disk-based versus tape-based backup. Disk-based backup is still more expensive, but deduplication narrowed the gap enough that customers could afford the faster, better solution.

It's also worth noting that Data Domain, the market leader early on, still commands a majority share of the market. This can be partially explained by history, reputation and the EMC sales machine, but other early market entrants including Quantum, Sepaton and IBM have struggled to gain share, so this doesn't fully explain Data Domain's prolonged dominance.

The rest of the explanation is that deduplication technology is extremely difficult to build well, and Data Domain's product is a solid solution for disk-based backup. In particular, it is extremely fast for sequential write workloads like backup, and so it doesn't compromise the performance of streaming data to disk. Remember, customers aren't buying these systems for "cheap disk-based backup;" they're buying them for "affordable, fast backup and restore." Performance is the most important feature. Many of the competitors still deliver the former - cost savings - without delivering what actually matters: performance.

Lessons for Primary Data Efficiency
What does the history of deduplication in the backup storage market teach us about the future of data efficiency in the primary storage market? First, we should note that data efficiency is catalyzing the same media transition in primary storage as it did in backup, on the same timeframe - this time from disk to flash, instead of tape to disk.

As was the case in backup, cheaper products aren't the major driver for customers in primary storage. Primary storage solutions still need to perform as well as (or better than) systems without data efficiency, under the same workloads. Storage consumers want more performance, not less, and technologies like deduplication enable them to get that performance from flash at a price they can afford. A flash-based system with deduplication doesn't have to be cheaper than the disk-based system it replaces, but it does have to be better overall!

This also explains the slow adoption of efficiency technologies by primary storage vendors. Building compression and deduplication for fully random access storage is an extremely difficult and complex thing to do right. Doing this while maintaining performance - a strict requirement, as we learn from the history of backup - requires years of engineering effort. Most of the solutions currently shipping with data efficiency are relatively disappointing and many other vendors have simply failed at their efforts, leaving only a handful of successful products on the market today.

It's not that vendors don't want to deliver data efficiency in their primary storage; it's that they have underestimated the difficulty of the task and simply haven't been able to deliver it yet.

Hits and Misses (and Mostly Misses)
If we take a look at primary storage systems shipping with some form of data efficiency today, we see that the offerings are largely lackluster. The reason that offerings with efficiency features haven't taken the market by storm is that they deliver the same thing as less successful disk backup products - cheaper storage, not better storage. Almost universally, they deliver space savings at a steep cost in performance, a tradeoff no customer wants to make. If customers simply wanted to spend less, they would buy bulk SATA disk rather than fast SAS spindles or flash.

Take NetApp, for example. One of the very first to market with deduplication, NetApp proved that customers wanted efficiency - but those customers were also quickly turned off by the limitations of the ONTAP implementation. Take a look at NetApp's Deduplication Deployment and Implementation Guide (TR-3505). Some choice quotes include, "if 1TB of new data has been added [...], this deduplication operation takes about 10 to 12 hours to complete," and "With eight deduplication processes running, there may be as much as a 15% to 50% performance penalty on other applications running on the system." Their "50% Virtualization Guarantee* Program" has 15 pages of terms and exceptions behind that little asterisk. It's no surprise that most NetApp users choose not to turn on deduplication.

VNX is another case in point. The "EMC VNX Deduplication and Compression" white paper is similarly frightening. Compression is offered, but it's available only as a capacity tier: "compression is not suggested to be used on active datasets." Deduplication is available as a post-process operation, but "for applications requiring consistent and predictable performance [...] Block Deduplication should not be used."

Finally, I'd like to address Pure Storage, which has set the standard for offering "cheap flash" without delivering the full performance of the medium. Pure represents the most successful of the all-flash array offerings on the market today and has deeply integrated data efficiency features, but its arrays struggle to sustain 150,000 IOPS. They deliver a solid win on price over flash arrays without data optimization, but that level of performance is not going to tip the balance for primary storage the way Data Domain did for backup.

To be fair to the products above, there are plenty of other vendors that have tried to build their own deduplication and simply failed to deliver something that meets their exacting business standards. IBM, EMC VMAX, Violin Memory and others have surely attempted their own efficiency features, and have even promised delivery over the years, but none have shipped to date.

Finally, there are some leaders in the primary efficiency game so far! Hitachi is delivering "Deduplication without Compromise" on their HNAS and HUS platforms, providing deduplication (based on Permabit's Albireo™ technology) that doesn't impact the fantastic performance of the platform. This solution delivers savings and performance for file storage, although the block side of HUS still lacks efficiency features.

EMC XtremIO is another winner in the all-flash array sector of the primary storage market. XtremIO has been able to deliver outstanding performance with fully inline data deduplication. The platform isn't yet highly scalable or capacity-dense, but it does deliver the savings and performance necessary to drive change in the market.

Requirements for Change
The history of the backup appliance market makes the requirement for change in the primary storage market clear. Data efficiency simply cannot compromise performance, which is the reason why a customer is buying a particular storage platform in the first place. We're seeing the seeds of this change in products like HUS and XtremIO, but it's not yet clear who will be the Data Domain of the primary array storage deduplication market. The game is still young.

The good news is that data efficiency can do more than just reduce cost; it can also increase performance - making a better product overall, as we saw in the backup market. Inline deduplication can eliminate writes before they ever reach disk or flash, and deduplication can inherently sequentialize writes in a way that vastly improves random write performance in critical environments like OLTP databases. These are some of the ingredients of a tipping point in the primary storage market.

Data efficiency in primary storage must deliver uncompromising performance in order to be successful. At a technical level, this means that any implementation must deliver predictable inline performance, a deduplication window that spans the entire capacity of the existing storage platform, and performance scalability to meet the application environment. The current winning solutions provide some of these features today, but it remains to be seen which product will capture them all first.

Inline Efficiency
Inline deduplication and compression - eliminating duplicates as they are written, rather than with a separate process that examines data hours (or days) later - are an absolute requirement for performance in the primary storage market, just as we've previously seen in the backup market. By operating in an inline manner, efficiency operations provide immediate savings, deliver greater and more predictable performance, and allow for greatly accelerated data protection.

With inline deduplication and compression, the customer sees immediate savings because duplicate data never consumes additional space. This is critical in high data change rate scenarios, such as VDI and database environments, because non-inline implementations can run out of space and prevent normal operation. In a post-process implementation, or one using garbage collection, duplicate copies of data can pile up on the media waiting for the optimization process to catch up. If a database, VM, or desktop is cloned many times in succession, the storage rapidly fills and becomes unusable. Inline operation prevents this bottleneck, one called out explicitly in the NetApp documentation cited above, where at most about 2 TB of new data can be deduplicated per day. In a post-process implementation, a heavily utilized system may never catch up with newly written data!
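As a rough sketch of why inline operation avoids this pile-up, consider a toy write path that consults a fingerprint index before data ever reaches the media. The class name, the in-memory dictionary index, and the 4 KB block are hypothetical simplifications; production systems use far more sophisticated, memory-efficient structures:

```python
import hashlib


class InlineDedupeStore:
    """Toy inline-deduplicating block store: duplicate writes never reach the media."""

    def __init__(self):
        self.index = {}   # fingerprint -> physical block number
        self.media = []   # stands in for disk or flash blocks
        self.refs = {}    # logical address -> physical block number

    def write(self, logical_addr: int, block: bytes) -> bool:
        """Write one logical block; return True only if a physical write happened."""
        fingerprint = hashlib.sha256(block).digest()
        phys = self.index.get(fingerprint)
        if phys is None:                    # new content: store and index it
            phys = len(self.media)
            self.media.append(block)
            self.index[fingerprint] = phys
            self.refs[logical_addr] = phys
            return True
        self.refs[logical_addr] = phys      # duplicate: record a reference only
        return False


if __name__ == "__main__":
    store = InlineDedupeStore()
    shared_block = b"A" * 4096              # e.g., a block common to cloned desktops
    physical_writes = sum(store.write(addr, shared_block) for addr in range(1000))
    print(f"1000 logical writes -> {physical_writes} physical write(s), "
          f"{len(store.media)} block(s) on media")
```

Because duplicates are dropped at write time, no backlog ever builds up for a later optimization pass to work through.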

Inline operation also provides the predictable, consistent performance required by many primary storage applications. Deduplication and compression occur at the time of data write and are balanced with the available system resources by design. This means that performance will not fluctuate wildly as it does with post-process operation, where a 50% (or greater) impact on I/O performance can be seen because optimization occurs long after the data is written. Additionally, optimization at the time of data write means that the effective size of DRAM or flash caches can be greatly increased, so more workloads fit in these caching layers and accelerate application performance.

A less obvious advantage of inline efficiency is the ability for a primary storage system to deliver faster data protection. Because data is reduced immediately, it can be replicated immediately in its reduced form for disaster recovery. This greatly shrinks recovery point objectives (RPOs) as well as bandwidth costs. In comparison, a post-process operation requires either waiting for deduplication to catch up with new data (which could take days to weeks), or replicating data in its full form (which could also take days to weeks of additional time).
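A back-of-the-envelope calculation shows the scale of the replication benefit. The daily change rate, reduction ratio, and WAN bandwidth below are assumed figures purely for illustration:

```python
def replication_hours(change_tb: float, reduction: float, wan_gbps: float) -> float:
    """Hours needed to replicate one day's new data at a given reduction ratio."""
    bits_to_send = change_tb * 1e12 * 8 / reduction   # decimal TB -> bits, reduced
    return bits_to_send / (wan_gbps * 1e9) / 3600


if __name__ == "__main__":
    daily_change_tb, link_gbps = 10.0, 1.0   # assumed: 10 TB/day over a 1 Gb/s WAN
    print(f"full form:    {replication_hours(daily_change_tb, 1.0, link_gbps):.1f} hours")
    print(f"5x reduction: {replication_hours(daily_change_tb, 5.0, link_gbps):.1f} hours")
```

Under these assumed numbers, replicating the full-form data barely fits inside a day, while the reduced stream finishes in a few hours and leaves headroom on the link.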

Capacity and Scalability
Capacity and scalability should seem to be obvious requirements for a data efficiency solution, but they're not evident in the products on the market today. As we've seen, a storage system incorporating deduplication and compression must be a better product, not just a cheaper product. This means it must support the same storage capacity and performance scalability as the primary storage platforms that customers are deploying today.

Deduplication is a relative newcomer to the data efficiency portfolio, largely because the system resources it requires, in terms of CPU and memory, are much greater than those of older technologies like compression. The amount of CPU and DRAM in modern platforms means that even relatively simple deduplication algorithms can now be implemented without substantial hardware cost, but such implementations are still quite limited in the amount of storage they can address and the data rate they can accommodate.
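To see why the resource demands are so much greater, consider the DRAM that a full-capacity deduplication window implies for a naive fingerprint index. The 4 KB block size and 32 bytes per index entry are rough assumptions for illustration; real implementations work hard to reduce exactly this cost:

```python
def naive_index_gib(window_tb: float, block_kb: int = 4, bytes_per_entry: int = 32) -> float:
    """DRAM (in GiB) for a one-entry-per-block fingerprint index over the window."""
    blocks = window_tb * 1e12 / (block_kb * 1024)
    return blocks * bytes_per_entry / 2**30


if __name__ == "__main__":
    for window in (2, 100, 1000):            # TB of addressable deduplication window
        print(f"{window:>5} TB window -> ~{naive_index_gib(window):,.0f} GiB of index")
```

Even at 100 TB, the naive index runs to hundreds of GiB of DRAM, which is why a deduplication window spanning the full array is an engineering problem in its own right.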

For example, even the largest systems from all-flash array vendors like Pure and XtremIO support well under 100 TB of storage capacity, far smaller than the primary storage arrays being broadly deployed today. NetApp, while it supports large arrays, only identifies duplicates within a very small window of history - perhaps 2 TB or smaller. To deliver effective savings, duplicates must be identified across the entire storage array, and the storage array must support the capacities that are being delivered and used in the real world. Smaller systems may be able to peel off individual applications like VDI, but they'll be lost in the noise of the primary storage data efficiency tipping point to come.

Shifting the Primary Storage Market to Greater Efficiency
A lower cost product is not sufficient to substantially change customers' buying habits, as we saw from the example of the backup market. Rather, a superior product is required to drive rapid, revolutionary change. Just as the backup appliance market is unrecognizable from a decade ago, the primary storage market is on the cusp of a similar transformation. A small number of storage platforms are now delivering limited data efficiency capabilities with some of the features required for success: space savings, high performance, inline deduplication and compression, and capacity and throughput scalability. No clear winner has yet emerged. As the remaining vendors implement data efficiency, we will see who will play the role of Data Domain in the primary storage efficiency transformation.

More Stories By Jered Floyd

Jered Floyd, Chief Technology Officer and Founder of Permabit Technology Corporation, is responsible for exploring strategic future directions for Permabit’s products and providing thought leadership to guide the company’s data optimization initiatives. He previously put in place Permabit’s effective software development methodologies and was responsible for developing the core protocol and the initial server and system architectures of Permabit’s products.

Prior to Permabit, Floyd was a Research Scientist on the Microbial Engineering project at the MIT Artificial Intelligence Laboratory, working to bridge the gap between biological and computational systems. Earlier at Turbine, he developed a robust integration language for managing active objects in a massively distributed online virtual environment. Floyd holds Bachelor’s and Master’s degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology.
