If you have followed my previous articles, by now you should have a general understanding of the primary components of the Hadoop ecosystem and the basics of distributed computation. But in order to implement performant processing, we first need to lay a strong data foundation for it. Hadoop offers a number of solutions for this purpose, and HBase is one of the best products in the ecosystem for organizing large amounts of data in a single place. In this article I want to cover some techniques for working with HBase: how to import data, how to read it through the native API, and how to simplify its consumption through another Hadoop product called Hive.
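To give a flavor of what "reading through the native API" means before the detailed walkthrough, here is a minimal sketch of a put and a get using the standard HBase Java client. The table name "users", the "info" column family, and the row key are placeholders chosen for illustration, not taken from the article; the code assumes an hbase-site.xml on the classpath pointing at a running cluster.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseQuickstart {
    public static void main(String[] args) throws Exception {
        // Picks up connection settings (ZooKeeper quorum etc.) from hbase-site.xml
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             // "users" and the "info" column family are hypothetical names for this sketch
             Table table = connection.getTable(TableName.valueOf("users"))) {

            // Write one cell: row key "user1", column info:name
            Put put = new Put(Bytes.toBytes("user1"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Alice"));
            table.put(put);

            // Read the same row back and print the stored value
            Get get = new Get(Bytes.toBytes("user1"));
            Result result = table.get(get);
            byte[] name = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));
            System.out.println("name = " + Bytes.toString(name));
        }
    }
}
```

The article goes on to show the same operations from the HBase shell and how Hive can sit on top of the table for SQL-style access.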