Data Backup Digest

Red Hat's New Storage Server 3 Features Full Hadoop Support

Red Hat's Storage Server has always focused tightly on high-efficiency data storage and processing delivered as software-defined storage, and its newest update, Red Hat Storage Server 3, adds built-in support for the Hadoop Distributed File System. The new release positions Storage Server for big data management on a fully open source platform.
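In practical terms, Hadoop support means applications reach a store through Hadoop's generic FileSystem API, the same abstraction MapReduce jobs use to talk to HDFS. The sketch below is a minimal illustration of that API in Java; the "hdfs://namenode:8020" address and the file path are hypothetical placeholders, not Red Hat endpoints.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class FileSystemApiSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // fs.defaultFS selects the backing store; this URI is a placeholder.
            conf.set("fs.defaultFS", "hdfs://namenode:8020");
            FileSystem fs = FileSystem.get(conf);

            // Write a small file, then check that it exists, all through the
            // store-agnostic FileSystem abstraction.
            Path path = new Path("/tmp/example.txt");
            try (FSDataOutputStream out = fs.create(path)) {
                out.writeUTF("hello from the FileSystem API");
            }
            System.out.println("exists: " + fs.exists(path));
        }
    }

Because applications program against this abstraction rather than HDFS itself, a store that implements it can slot in underneath existing Hadoop jobs with little or no code change.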

Ranga Rangachari, vice president and general manager of storage and big data at Red Hat, spoke highly of the most recent update. He was quoted as saying: "With this latest release, Red Hat is leading the charge on open, software-defined storage to help build agile enterprises that can rapidly gain competitive advantage by leveraging the tangible value hidden inside unstructured data."

With the newest update, Red Hat Storage Server 3 can handle 60 drives per server, a sharp jump from the previous version's ceiling of 36. Each cluster can now span as many as 128 servers, double the 64 the previous version supported, and a single Storage Server 3 cluster can provide as much as 19 petabytes of storage.
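To put those figures in perspective: 128 servers with 60 drives apiece comes to 7,680 drives in a fully built-out cluster, and 19 petabytes spread across 7,680 drives works out to roughly 2.5 terabytes per drive, which suggests the capacity figure assumes drive sizes typical of current enterprise hardware.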

Simon Robinson, vice president at 451 Research, a firm that has analyzed Red Hat extensively, remarked: "Red Hat's software-defined storage portfolio offers an open source alternative to proprietary technology stacks to address mounting challenges around the growth of enterprise data."

The Hadoop Distributed File System, also known as HDFS, was designed specifically to tackle big data storage, processing and analytics. Originally introduced in 2005 by Doug Cutting and Mike Cafarella, HDFS was first built for the Nutch search engine, a Java-based, open source web search engine that is still in use today. While Hadoop itself is written primarily in Java, parts of its internals are written in C.

Since then, however, the Hadoop Distributed File System has skyrocketed in popularity, particularly amid the big data boom. Today its biggest and most prolific users include Amazon, Facebook and Yahoo, among many others.

John Kreisa, a vice president at Hortonworks, has played an integral role in the development, refinement and integration of Hadoop. His company's data framework, known as the Hortonworks Data Platform, or HDP for short, offers a Red Hat Storage plug-in so that Hadoop workloads can run against Red Hat's storage.

Kreisa was recently quoted as saying: "With the Red Hat Storage plug-in for the Hortonworks Data Platform, the industry’s only 100% open source Hadoop distribution, we are empowering joint customers with seamless access to more data for deeper analysis on a cost-effective scale-out storage solution."
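Mechanically, plug-ins like this hook into Hadoop's pluggable file system layer: Hadoop maps each URI scheme to an implementing class through a configuration property of the form fs.<scheme>.impl. The sketch below shows that general wiring pattern; the "glusterfs" scheme, the com.example class name and the host are hypothetical stand-ins, not the actual values used by the Hortonworks or Red Hat software.

    import org.apache.hadoop.conf.Configuration;

    public class PluginWiringSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Map a (hypothetical) URI scheme to the FileSystem subclass a
            // plug-in would ship. With the plug-in jar on the classpath,
            // FileSystem.get(conf) would then route all "glusterfs://" paths
            // through that class.
            conf.set("fs.glusterfs.impl", "com.example.hadoop.GlusterFileSystem");
            conf.set("fs.defaultFS", "glusterfs://storage-node:9000/");
            System.out.println(conf.get("fs.defaultFS"));
        }
    }

This is the same mechanism Hadoop uses for its other non-HDFS backends, which is why swapping in a different store is usually a matter of configuration rather than application changes.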

The Hadoop Distributed File System also has applications in cloud computing. In fact, Hadoop is currently used, to varying extents, by the industry's most popular cloud service providers, including Microsoft, Google and Amazon. HDFS has also supported academic researchers through the Academic Cloud Computing Initiative, a program launched in 2008 by IBM, Google and the National Science Foundation.
