Alex Holmes

Hadoop in Practice, Second Edition
Includes 104 techniques

MANNING

Praise for the First Edition of Hadoop in Practice

"A new book from Manning, Hadoop in Practice, is definitely the most modern book on the topic. Important subjects, like what commercial variants such as MapR offer, and the many different releases and APIs get uniquely good coverage in this book."
—Ted Dunning, Chief Application Architect, MapR Technologies

"Comprehensive coverage of advanced Hadoop usage, including high-quality code samples."
—Chris Nauroth, Senior Staff Software Engineer, The Walt Disney Company

"A very pragmatic and broad overview of Hadoop and the Hadoop tools ecosystem, with a wide set of interesting topics that tickle the creative brain."
—Mark Kemna, Chief Technology Officer, Brilig

"A practical introduction to the Hadoop ecosystem."
—Philipp K. Janert, Principal Value, LLC

"This book is the horizontal roof that each of the pillars of individual Hadoop technology books hold. It expertly ties together all the Hadoop ecosystem technologies."
—Ayon Sinha, Big Data Architect, Britely

"I would take this book on my path to the future."
—Alexey Gayduk, Senior Software Engineer, Grid Dynamics

"A high-quality and well-written book that is packed with useful examples. The breadth and detail of the material is by far superior to any other Hadoop reference guide. It is perfect for anyone who likes to learn new tools/technologies while following pragmatic, real-world examples."
—Amazon reviewer

Hadoop in Practice, Second Edition
ALEX HOLMES

MANNING
Shelter Island

For online information and ordering of this and other Manning books, please visit www.manning.com. The publisher offers discounts on this book when ordered in quantity.

For more information, please contact
Special Sales Department
Manning Publications Co.
20 Baldwin Road
PO Box 761
Shelter Island, NY 11964
Email: orders@manning.com

©2015 by Manning Publications Co. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by means electronic, mechanical, photocopying, or otherwise, without prior written permission of the publisher.

Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in the book, and Manning Publications was aware of a trademark claim, the designations have been printed in initial caps or all caps.

Recognizing the importance of preserving what has been written, it is Manning’s policy to have the books we publish printed on acid-free paper, and we exert our best efforts to that end.

Recognizing also our responsibility to conserve the resources of our planet, Manning books are printed on paper that is at least 15 percent recycled and processed without the use of elemental chlorine.

Development editor: Cynthia Kane
Copyeditor: Andy Carroll
Proofreader: Melody Dolab
Typesetter: Gordan Salinovic
Cover designer: Marija Tudor

Manning Publications Co.
20 Baldwin Road
Shelter Island, NY 11964

ISBN 9781617292224

Printed in the United States of America

1 2 3 4 5 6 7 8 9 10 – EBM – 19 18 17 16 15 14

brief contents

PART 1 BACKGROUND AND FUNDAMENTALS ... 1
    1 Hadoop in a heartbeat 3
    2 Introduction to YARN 22

PART 2 DATA LOGISTICS ... 59
    3 Data serialization—working with text and beyond 61
    4 Organizing and optimizing data in HDFS 139
    5 Moving data into and out of Hadoop 174

PART 3 BIG DATA PATTERNS ... 253
    6 Applying MapReduce patterns to big data 255
    7 Utilizing data structures and algorithms at scale 302
    8 Tuning, debugging, and testing 337

PART 4 BEYOND MAPREDUCE ... 385
    9 SQL on Hadoop 387
    10 Writing a YARN application 425


contents

preface xv
acknowledgments xvii
about this book xviii
about the cover illustration xxiii

PART 1 BACKGROUND AND FUNDAMENTALS ... 1

1 Hadoop in a heartbeat 3
1.1 What is Hadoop? 4
    Core Hadoop components 5
    The Hadoop ecosystem 10
    Hardware requirements 11
    Hadoop distributions 12
    Who's using Hadoop? 14
    Hadoop limitations 15
1.2 Getting your hands dirty with MapReduce 17
1.3 Summary 21

2 Introduction to YARN 22
2.1 YARN overview 23
    Why YARN? 24
    YARN concepts and components 26
    YARN configuration 29
    TECHNIQUE 1 Determining the configuration of your cluster 29
    Interacting with YARN 31
    TECHNIQUE 2 Running a command on your YARN cluster 31
    TECHNIQUE 3 Accessing container logs 32
    TECHNIQUE 4 Aggregating container log files 36
    YARN challenges 39
2.2 YARN and MapReduce 40
    Dissecting a YARN MapReduce application 40
    Configuration 42
    Backward compatibility 46
    TECHNIQUE 5 Writing code that works on Hadoop versions 1 and 2 47
    Running a job 48
    TECHNIQUE 6 Using the command line to run a job 49
    Monitoring running jobs and viewing archived jobs 49
    Uber jobs 50
    TECHNIQUE 7 Running small MapReduce jobs 50
2.3 YARN applications 52
    NoSQL 53
    Interactive SQL 54
    Graph processing 54
    Real-time data processing 55
    Bulk synchronous parallel 55
    MPI 56
    In-memory 56
    DAG execution 56
2.4 Summary 57

PART 2 DATA LOGISTICS ... 59

3 Data serialization—working with text and beyond 61
3.1 Understanding inputs and outputs in MapReduce 62
    Data input 63
    Data output 66
3.2 Processing common serialization formats 68
    XML 69
    TECHNIQUE 8 MapReduce and XML 69
    JSON 72
    TECHNIQUE 9 MapReduce and JSON 73
3.3 Big data serialization formats 76
    Comparing SequenceFile, Protocol Buffers, Thrift, and Avro 76
    SequenceFile 78
    TECHNIQUE 10 Working with SequenceFiles 80
    TECHNIQUE 11 Using SequenceFiles to encode Protocol Buffers 87
    Protocol Buffers 91
    Thrift 92
    Avro 93
    TECHNIQUE 12 Avro's schema and code generation 93
    TECHNIQUE 13 Selecting the appropriate way to use Avro in MapReduce 98
    TECHNIQUE 14 Mixing Avro and non-Avro data in MapReduce 99
    TECHNIQUE 15 Using Avro records in MapReduce 102
    TECHNIQUE 16 Using Avro key/value pairs in MapReduce 104
    TECHNIQUE 17 Controlling how sorting works in MapReduce 108
    TECHNIQUE 18 Avro and Hive 108
    TECHNIQUE 19 Avro and Pig 111
3.4 Columnar storage 113
    Understanding object models and storage formats 115
    Parquet and the Hadoop ecosystem 116
    Parquet block and page sizes 117
    TECHNIQUE 20 Reading Parquet files via the command line 117
    TECHNIQUE 21 Reading and writing Avro data in Parquet with Java 119
    TECHNIQUE 22 Parquet and MapReduce 120
    TECHNIQUE 23 Parquet and Hive/Impala 125
    TECHNIQUE 24 Pushdown predicates and projection with Parquet 126
    Parquet limitations 128
3.5 Custom file formats 129
    Input and output formats 129
    TECHNIQUE 25 Writing input and output formats for CSV 129
    The importance of output committing 137
3.6 Chapter summary 138

4 Organizing and optimizing data in HDFS 139
4.1 Data organization 140
    Directory and file layout 140
    Data tiers 141
    Partitioning 142
    TECHNIQUE 26 Using MultipleOutputs to partition your data 142
    TECHNIQUE 27 Using a custom MapReduce partitioner 145
    Compacting 148
    TECHNIQUE 28 Using filecrush to compact data 149
    TECHNIQUE 29 Using Avro to store multiple small binary files 151
    Atomic data movement 157
4.2 Efficient storage with compression 158
    TECHNIQUE 30 Picking the right compression codec for your data 159
    TECHNIQUE 31 Compression with HDFS, MapReduce, Pig, and Hive 163
    TECHNIQUE 32 Splittable LZOP with MapReduce, Hive, and Pig 168
4.3 Chapter summary 173

5 Moving data into and out of Hadoop 174
5.1 Key elements of data movement 175
5.2 Moving data into Hadoop 177
    Roll your own ingest 177
    TECHNIQUE 33 Using the CLI to load files 178
    TECHNIQUE 34 Using REST to load files 180
    TECHNIQUE 35 Accessing HDFS from behind a firewall 183
    TECHNIQUE 36 Mounting Hadoop with NFS 186
    TECHNIQUE 37 Using DistCp to copy data within and between clusters 188
    TECHNIQUE 38 Using Java to load files 194
    Continuous movement of log and binary files into HDFS 196
    TECHNIQUE 39 Pushing system log messages into HDFS with Flume 197
    TECHNIQUE 40 An automated mechanism to copy files into HDFS 204
    TECHNIQUE 41 Scheduling regular ingress activities with Oozie 209
    Databases 214
    TECHNIQUE 42 Using Sqoop to import data from MySQL 215
    HBase 227
    TECHNIQUE 43 HBase ingress into HDFS 227
    TECHNIQUE 44 MapReduce with HBase as a data source 230
    Importing data from Kafka 232
5.3 Moving data into Hadoop 234
    TECHNIQUE 45 Using Camus to copy Avro data from Kafka into HDFS 234
5.4 Moving data out of Hadoop 241
    Roll your own egress 241
    TECHNIQUE 46 Using the CLI to extract files 241
    TECHNIQUE 47 Using REST to extract files 242
    TECHNIQUE 48 Reading from HDFS when behind a firewall 243
    TECHNIQUE 49 Mounting Hadoop with NFS 243
    TECHNIQUE 50 Using DistCp to copy data out of Hadoop 244
    TECHNIQUE 51 Using Java to extract files 245
    Automated file egress 246
    TECHNIQUE 52 An automated mechanism to export files from HDFS 246
    Databases 247
    TECHNIQUE 53 Using Sqoop to export data to MySQL 247
    NoSQL 251
5.5 Chapter summary 252

PART 3 BIG DATA PATTERNS ... 253

6 Applying MapReduce patterns to big data 255
6.1 Joining 256
    TECHNIQUE 54 Picking the best join strategy for your data 257
    TECHNIQUE 55 Filters, projections, and pushdowns 259
    Map-side joins 260
    TECHNIQUE 56 Joining data where one dataset can fit into memory 261
    TECHNIQUE 57 Performing a semi-join on large datasets 264
    TECHNIQUE 58 Joining on presorted and prepartitioned data 269
    Reduce-side joins 271
    TECHNIQUE 59 A basic repartition join 271
    TECHNIQUE 60 Optimizing the repartition join 275
    TECHNIQUE 61 Using Bloom filters to cut down on shuffled data 279
    Data skew in reduce-side joins 283
    TECHNIQUE 62 Joining large datasets with high join-key cardinality 284
    TECHNIQUE 63 Handling skews generated by the hash partitioner 286
6.2 Sorting 287
    Secondary sort 288
    TECHNIQUE 64 Implementing a secondary sort 289
    Total order sorting 294
    TECHNIQUE 65 Sorting keys across multiple reducers 294
6.3 Sampling 297
    TECHNIQUE 66 Writing a reservoir-sampling InputFormat 297
6.4 Chapter summary 301

7 Utilizing data structures and algorithms at scale 302
7.1 Modeling data and solving problems with graphs 303
    Modeling graphs 304
    Shortest-path algorithm 304
    TECHNIQUE 67 Find the shortest distance between two users 305
    Friends-of-friends algorithm 313
    TECHNIQUE 68 Calculating FoFs 313
    Using Giraph to calculate PageRank over a web graph 319
7.2 Modeling data and solving problems with graphs 321
    TECHNIQUE 69 Calculate PageRank over a web graph 322
7.3 Bloom filters 326
    TECHNIQUE 70 Parallelized Bloom filter creation in MapReduce 328
7.4 HyperLogLog 333
    A brief introduction to HyperLogLog 333
    TECHNIQUE 71 Using HyperLogLog to calculate unique counts 335
7.5 Chapter summary 336

8 Tuning, debugging, and testing 337
8.1 Measure, measure, measure 338
8.2 Tuning MapReduce 339
    Common inefficiencies in MapReduce jobs 339
    TECHNIQUE 72 Viewing job statistics 340
    Map optimizations 343
    TECHNIQUE 73 Data locality 343
    TECHNIQUE 74 Dealing with a large number of input splits 344
    TECHNIQUE 75 Generating input splits in the cluster with YARN 346
    Shuffle optimizations 347
    TECHNIQUE 76 Using the combiner 347
    TECHNIQUE 77 Blazingly fast sorting with binary comparators 349
    TECHNIQUE 78 Tuning the shuffle internals 353
    Reducer optimizations 356
    TECHNIQUE 79 Too few or too many reducers 356
    General tuning tips 357
    TECHNIQUE 80 Using stack dumps to discover unoptimized user code 358
    TECHNIQUE 81 Profiling your map and reduce tasks 360
8.3 Debugging 362
    Accessing container log output 362
    TECHNIQUE 82 Examining task logs 362
    Accessing container start scripts 363
    TECHNIQUE 83 Figuring out the container startup command 363
    Debugging OutOfMemory errors 365
    TECHNIQUE 84 Force container JVMs to generate a heap dump 365
    MapReduce coding guidelines for effective debugging 365
    TECHNIQUE 85 Augmenting MapReduce code for better debugging 365
8.4 Testing MapReduce jobs 368
    Essential ingredients for effective unit testing 368
    MRUnit 370
    TECHNIQUE 86 Using MRUnit to unit-test MapReduce 371
    LocalJobRunner 378
    TECHNIQUE 87 Heavyweight job testing with the LocalJobRunner 378
    MiniMRYarnCluster 381
    TECHNIQUE 88 Using MiniMRYarnCluster to test your jobs 381
    Integration and QA testing 382
8.5 Chapter summary 383

PART 4 BEYOND MAPREDUCE ... 385

9 SQL on Hadoop 387
9.1 Hive 388
    Hive basics 388
    Reading and writing data 391
    TECHNIQUE 89 Working with text files 391
    TECHNIQUE 90 Exporting data to local disk 395
    User-defined functions in Hive 396
    TECHNIQUE 91 Writing UDFs 396
    Hive performance 399
    TECHNIQUE 92 Partitioning 399
    TECHNIQUE 93 Tuning Hive joins 404
9.2 Impala 409
    Impala vs. Hive 410
    Impala basics 410
    TECHNIQUE 94 Working with text 410
    TECHNIQUE 95 Working with Parquet 412
    TECHNIQUE 96 Refreshing metadata 413
    User-defined functions in Impala 414
    TECHNIQUE 97 Executing Hive UDFs in Impala 415
9.3 Spark SQL 416
    Spark 101 417
    Spark on Hadoop 419
    SQL with Spark 419
    TECHNIQUE 98 Calculating stock averages with Spark SQL 420
    TECHNIQUE 99 Language-integrated queries 422
    TECHNIQUE 100 Hive and Spark SQL 423
9.4 Chapter summary 423

10 Writing a YARN application 425
10.1 Fundamentals of building a YARN application 426
    Actors 426
    The mechanics of a YARN application 427
10.2 Building a YARN application to collect cluster statistics 429
    TECHNIQUE 101 A bare-bones YARN client 429
    TECHNIQUE 102 A bare-bones ApplicationMaster 434
    TECHNIQUE 103 Running the application and accessing logs 438
    TECHNIQUE 104 Debugging using an unmanaged application master 440
10.3 Additional YARN application capabilities 443
    RPC between components 443
    Service discovery 444
    Checkpointing application progress 444
    Avoiding split-brain 444
    Long-running applications 444
    Security 445
10.4 YARN programming abstractions 445
    Twill 446
    Spring 448
    REEF 450
    Picking a YARN API abstraction 450
10.5 Summary 450

appendix Installing Hadoop and friends 451
index 475

bonus chapters available for download from www.manning.com/holmes2
chapter 11 Integrating R and Hadoop for statistics and more
chapter 12 Predictive analytics with Mahout


preface

I first encountered Hadoop in the fall of 2008 when I was working on an internet crawl-and-analysis project at Verisign. We were making discoveries similar to those that Doug Cutting and others at Nutch had made several years earlier about how to efficiently store and manage terabytes of crawled and analyzed data. At the time, we were getting by with our homegrown distributed system, but the influx of a new data stream and requirements to join that stream with our crawl data couldn't be supported by our existing system in the required timeline.

After some research, we came across the Hadoop project, which seemed to be a perfect fit for our needs—it supported storing large volumes of data and provided a compute mechanism to combine them. Within a few months, we built and deployed a MapReduce application encompassing a number of MapReduce jobs, woven together with our own MapReduce workflow management system, onto a small cluster of 18 nodes. It was a revelation to observe our MapReduce jobs crunching through our data in minutes. Of course, what we weren't expecting was the amount of time that we would spend debugging and performance-tuning our MapReduce jobs. Not to mention the new roles we took on as production administrators—the biggest surprise in this role was the number of disk failures we encountered during those first few months supporting production.

As our experience and comfort level with Hadoop grew, we continued to build more of our functionality using Hadoop to help with our scaling challenges. We also started to evangelize the use of Hadoop within our organization and helped kick-start other projects that were also facing big data challenges.

The greatest challenge we faced when working with Hadoop, and specifically MapReduce, was relearning how to solve problems with it. MapReduce is its own flavor of parallel programming, and it's quite different from the in-JVM programming that we were accustomed to. The first big hurdle was training our brains to think MapReduce, a topic which the book Hadoop in Action by Chuck Lam (Manning Publications, 2010) covers well.

After one is used to thinking in MapReduce, the next challenge is typically related to the logistics of working with Hadoop, such as how to move data in and out of HDFS and effective and efficient ways to work with data in Hadoop. These areas of Hadoop haven't received much coverage, and that's what attracted me to the potential of this book—the chance to go beyond fundamental word-count Hadoop usage and cover some of the trickier and dirtier aspects of Hadoop.

As I'm sure many authors have experienced, I went into this project confidently believing that writing this book was just a matter of transferring my experiences onto paper. Boy, did I get a reality check, but not altogether an unpleasant one, because writing introduced me to new approaches and tools that ultimately helped better my own Hadoop abilities. I hope that you get as much out of reading this book as I did writing it.

acknowledgments

First and foremost, I want to thank Michael Noll, who pushed me to write this book. He provided invaluable insights into how to structure the content of the book, reviewed my early chapter drafts, and helped mold the book. I can't express how much his support and encouragement have helped me throughout the process.

I'm also indebted to Cynthia Kane, my development editor at Manning, who coached me through writing this book and provided invaluable feedback on my work. Among the many notable "aha!" moments I had when working with Cynthia, the biggest one was when she steered me into using visual aids to help explain some of the complex concepts in this book.

All of the Manning staff were a pleasure to work with, and a special shout out goes to Troy Mott, Nick Chase, Tara Walsh, Bob Herbstman, Michael Stephens, Marjan Bace, Maureen Spencer, and Kevin Sullivan.

I also want to say a big thank you to all the reviewers of this book: Adam Kawa, Andrea Tarocchi, Anna Lahoud, Arthur Zubarev, Edward Ribeiro, Fillipe Massuda, Gerd Koenig, Jeet Marwah, Leon Portman, Mohamed Diouf, Muthuswamy Manigandan, Rodrigo Abreu, and Serega Sheypack. Jonathan Siedman, the primary technical reviewer, did a great job of reviewing the entire book.

Many thanks to Josh Wills, the creator of Crunch, who kindly looked over the chapter that covered that topic. And more thanks go to Josh Patterson, who reviewed my Mahout chapter.

Finally, a special thanks to my wife, Michal, who had to put up with a cranky husband working crazy hours. She was a source of encouragement throughout the entire process.


about this book

Doug Cutting, the creator of Hadoop, likes to call Hadoop the kernel for big data, and I would tend to agree. With its distributed storage and compute capabilities, Hadoop is fundamentally an enabling technology for working with huge datasets.

Hadoop provides a bridge between structured (RDBMS) and unstructured (log files, XML, text) data and allows these datasets to be easily joined together. This has evolved from traditional use cases, such as combining OLTP and log files, to more sophisticated uses, such as using Hadoop for data warehousing (exemplified by Facebook) and the field of data science, which studies and makes new discoveries about data.

This book collects a number of intermediary and advanced Hadoop examples and presents them in a problem/solution format. Each technique addresses a specific task you'll face, like using Flume to move log files into Hadoop or using Mahout for predictive analysis. Each problem is explored step by step, and as you work through them, you'll find yourself growing more comfortable with Hadoop and at home in the world of big data.

This hands-on book targets users who have some practical experience with Hadoop and understand the basic concepts of MapReduce and HDFS. Manning's Hadoop in Action by Chuck Lam contains the necessary prerequisites to understand and apply the techniques covered in this book.

Many techniques in this book are Java-based, which means readers are expected to possess an intermediate-level knowledge of Java. An excellent text for all levels of Java users is Effective Java, Second Edition by Joshua Bloch (Addison-Wesley, 2008).


Roadmap

This book has 10 chapters divided into four parts.

Part 1 contains two chapters that form the introduction to this book. They review Hadoop basics and look at how to get Hadoop up and running on a single host. YARN, which is new in Hadoop version 2, is also examined, and some operational tips are provided for performing basic functions in YARN.

Part 2, "Data logistics," consists of three chapters that cover the techniques and tools required to deal with data fundamentals, how to work with various data formats, how to organize and optimize your data, and getting data into and out of Hadoop.

Picking the right format for your data and determining how to organize data in HDFS are the first items you'll need to address when working with Hadoop, and they're covered in chapters 3 and 4 respectively. Getting data into Hadoop is one of the bigger hurdles commonly encountered when working with Hadoop, and chapter 5 is dedicated to looking at a variety of tools that work with common enterprise data sources.

Part 3 is called "Big data patterns," and it looks at techniques to help you work effectively with large volumes of data. Chapter 6 covers how to represent data such as graphs for use with MapReduce, and it looks at several algorithms that operate on graph data. Chapter 7 looks at more advanced data structures and algorithms such as graph processing and using HyperLogLog for working with large datasets. Chapter 8 looks at how to tune, debug, and test MapReduce performance issues, and it also covers a number of techniques to help make your jobs run faster.

Part 4 is titled "Beyond MapReduce," and it examines a number of technologies that make it easier to work with Hadoop. Chapter 9 covers the most prevalent and promising SQL technologies for data processing on Hadoop, and Hive, Impala, and Spark SQL are examined. The final chapter looks at how to write your own YARN application, and it provides some insights into some of the more advanced features you can use in your applications.

The appendix covers instructions for the source code that accompanies this book, as well as installation instructions for Hadoop and all the other related technologies covered in the book.

Finally, there are two bonus chapters available from the publisher’s website at www.manning.com/HadoopinPracticeSecondEdition: chapter 11 “Integrating R and Hadoop for statistics and more” and chapter 12 “Predictive analytics with Mahout.”

What’s new in the second edition?

This second edition covers Hadoop 2, which at the time of writing is the current production-ready version of Hadoop. The first edition of the book covered Hadoop 0.22 (Hadoop 1 wasn't yet out), and Hadoop 2 has turned the world upside-down and opened up the Hadoop platform to processing paradigms beyond MapReduce. YARN, the new scheduler and application manager in Hadoop 2, is complex and new to the community, which prompted me to dedicate a new chapter 2 to covering YARN basics and to discussing how MapReduce now functions as a YARN application.

Parquet has also recently emerged as a new way to store data in HDFS—its columnar format can yield both space and time efficiencies in your data pipelines, and it's quickly becoming the ubiquitous way to store data. Chapter 4 includes extensive coverage of Parquet, which includes how Parquet supports sophisticated object models such as Avro and how various Hadoop tools can use Parquet.

How data is being ingested into Hadoop has also evolved since the first edition, and Kafka has emerged as the new data pipeline, which serves as the transport tier between your data producers and data consumers, where a consumer would be a system such as Camus that can pull data from Kafka into HDFS. Chapter 5, which covers moving data into and out of Hadoop, now includes coverage of Kafka and Camus.

There are many new technologies that YARN can now support side by side in the same cluster, and some of the more exciting and promising technologies are covered in the new part 4, titled "Beyond MapReduce," where I cover some compelling new SQL technologies such as Impala and Spark SQL. The last chapter, also new for this edition, looks at how you can write your own YARN application, and it's packed with information about important features to support your YARN application.

Getting help

You’ll no doubt have many questions when working with Hadoop. Luckily, between the wikis and a vibrant user community, your needs should be well covered:

The main wiki is located at http://wiki.apache.org/hadoop/, and it contains useful presentations, setup instructions, and troubleshooting instructions.

The Hadoop Common, HDFS, and MapReduce mailing lists can all be found at http://hadoop.apache.org/mailing_lists.html.

"Search Hadoop" is a useful website that indexes all of Hadoop and its ecosystem projects, and it provides full-text search capabilities: http://search-hadoop.com/.

You’ll find many useful blogs you should subscribe to in order to keep on top of current events in Hadoop. This preface includes a selection of my favorites:

o Cloudera and Hortonworks are both prolific writers of practical applications on Hadoop—reading their blogs is always educational: http://www.cloudera.com/blog/ and http://hortonworks.com/blog/.

o Michael Noll is one of the first bloggers to provide detailed setup instructions for Hadoop, and he continues to write about real-life challenges: www.michael-noll.com/.

o There's a plethora of active Hadoop Twitter users that you may want to follow, including Arun Murthy (@acmurthy), Tom White (@tom_e_white), Eric Sammer (@esammer), Doug Cutting (@cutting), and Todd Lipcon (@tlipcon). The Hadoop project tweets on @hadoop.

Code conventions and downloads

All source code in listings or in text is presented in a fixed-width font like this to separate it from ordinary text. Code annotations accompany many of the listings, highlighting important concepts.

All of the text and examples in this book work with Hadoop 2.x, and most of the MapReduce code is written using the newer org.apache.hadoop.mapreduce MapReduce APIs. The few examples that use the older org.apache.hadoop.mapred package are usually the result of working with a third-party library or a utility that only works with the old API.

All of the code used in this book is available on GitHub at https://github.com/alexholmes/hiped2 and also from the publisher's website at www.manning.com/HadoopinPracticeSecondEdition. The first section in the appendix shows you how to download, install, and get up and running with the code.

Third-party libraries

I use a number of third-party libraries for convenience purposes. They're included in the Maven-built JAR, so there's no extra work required to work with these libraries.

Datasets

Throughout this book, you'll work with three datasets to provide some variety in the examples. All the datasets are small to make them easy to work with. Copies of the exact data used are available in the GitHub repository in the https://github.com/alexholmes/hiped2/tree/master/test-data directory. I also sometimes use data that's specific to a chapter, and it's available within chapter-specific subdirectories under the same GitHub location.

NASDAQ financial stocks

I downloaded the NASDAQ daily exchange data from InfoChimps (www.infochimps.com). I filtered this huge dataset down to just five stocks and their start-of-year values from 2000 through 2009. The data used for this book is available on GitHub at https://github.com/alexholmes/hiped2/blob/master/test-data/stocks.txt.

The data is in CSV form, and the fields are in the following order:

Symbol,Date,Open,High,Low,Close,Volume,Adj Close

Apache log data

I created a sample log file in Apache Common Log Format[1] with some fake Class E IP addresses and some dummy resources and response codes. The file is available on GitHub at https://github.com/alexholmes/hiped2/blob/master/test-data/apachelog.txt.

[1] See http://httpd.apache.org/docs/1.3/logs.html#common.

Names

Names were retrieved from the U.S. government census at www.census.gov/genealogy/www/data/1990surnames/dist.all.last, and this data is available at https://github.com/alexholmes/hiped2/blob/master/test-data/names.txt.

Author Online

Purchase of Hadoop in Practice, Second Edition includes free access to a private web forum run by Manning Publications where you can make comments about the book, ask technical questions, and receive help from the authors and from other users. To access the forum and subscribe to it, point your web browser to www.manning.com/HadoopinPracticeSecondEdition. This page provides information on how to get on the forum once you are registered, what kind of help is available, and the rules of conduct on the forum. It also provides links to the source code for the examples in the book, errata, and other downloads.

Manning's commitment to our readers is to provide a venue where a meaningful dialog between individual readers and between readers and the author can take place. It is not a commitment to any specific amount of participation on the part of the author, whose contribution to the Author Online forum remains voluntary (and unpaid). We suggest you try asking the author challenging questions lest his interest stray!

The Author Online forum and the archives of previous discussions will be accessible from the publisher's website as long as the book is in print.


about the cover illustration

The figure on the cover of Hadoop in Practice, Second Edition is captioned "Momak from Kistanja, Dalmatia." The illustration is taken from a reproduction of an album of traditional Croatian costumes from the mid-nineteenth century by Nikola Arsenovic, published by the Ethnographic Museum in Split, Croatia, in 2003. The illustrations were obtained from a helpful librarian at the Ethnographic Museum in Split, itself situated in the Roman core of the medieval center of the town: the ruins of Emperor Diocletian's retirement palace from around AD 304. The book includes finely colored illustrations of figures from different regions of Croatia, accompanied by descriptions of the costumes and of everyday life.

Kistanja is a small town located in Bukovica, a geographical region in Croatia. It is situated in northern Dalmatia, an area rich in Roman and Venetian history. The word "momak" in Croatian means a bachelor, beau, or suitor—a single young man who is of courting age—and the young man on the cover, looking dapper in a crisp, white linen shirt and a colorful, embroidered vest, is clearly dressed in his finest clothes, which would be worn to church and for festive occasions—or to go calling on a young lady.

Dress codes and lifestyles have changed over the last 200 years, and the diversity by region, so rich at the time, has faded away. It is now hard to tell apart the inhabitants of different continents, let alone of different hamlets or towns separated by only a few miles. Perhaps we have traded cultural diversity for a more varied personal life—certainly for a more varied and fast-paced technological life.

Manning celebrates the inventiveness and initiative of the computer business with book covers based on the rich diversity of regional life of two centuries ago, brought back to life by illustrations from old books and collections like this one.


Part 1 Background and fundamentals

Part 1 of this book consists of chapters 1 and 2, which cover the important Hadoop fundamentals.

Chapter 1 covers Hadoop’s components and its ecosystem and provides instructions for installing a pseudo-distributed Hadoop setup on a single host, along with a system that will enable you to run all of the examples in the book.

Chapter 1 also covers the basics of Hadoop configuration, and walks you through how to write and run a MapReduce job on your new setup.

Chapter 2 introduces YARN, which is a new and exciting development in Hadoop version 2, transitioning Hadoop from being a MapReduce-only system to one that can support many execution engines. Given that YARN is new to the community, the goal of this chapter is to look at some basics such as its components, how configuration works, and also how MapReduce works as a YARN application. Chapter 2 also provides an overview of some applications that YARN has enabled to execute on Hadoop, such as Spark and Storm.

Hadoop in a heartbeat

We live in the age of big data, where the data volumes we need to work with on a day-to-day basis have outgrown the storage and processing capabilities of a single host. Big data brings with it two fundamental challenges: how to store and work with voluminous data sizes, and more important, how to understand data and turn it into a competitive advantage.

Hadoop fills a gap in the market by effectively storing and providing computational capabilities for substantial amounts of data. It's a distributed system made up of a distributed filesystem, and it offers a way to parallelize and execute programs on a cluster of machines (see figure 1.1). You've most likely come across Hadoop because it's been adopted by technology giants like Yahoo!, Facebook, and Twitter to address their big data needs, and it's making inroads across all industrial sectors.

This chapter covers
Examining how the core Hadoop system works
Understanding the Hadoop ecosystem
Running a MapReduce job

Because you've come to this book to get some practical experience with Hadoop and Java,[1] I'll start with a brief overview and then show you how to install Hadoop and run a MapReduce job. By the end of this chapter, you'll have had a basic refresher on the nuts and bolts of Hadoop, which will allow you to move on to the more challenging aspects of working with it.

[1] To benefit from this book, you should have some practical experience with Hadoop and understand the basic concepts of MapReduce and HDFS (covered in Manning's Hadoop in Action by Chuck Lam, 2010). Further, you should have an intermediate-level knowledge of Java—Effective Java, 2nd Edition by Joshua Bloch (Addison-Wesley, 2008) is an excellent resource on this topic.

Let’s get started with a detailed overview.

1.1 What is Hadoop?

Hadoop is a platform that provides both distributed storage and computational capabilities. Hadoop was first conceived to fix a scalability issue that existed in Nutch,[2] an open source crawler and search engine. At the time, Google had published papers that described its novel distributed filesystem, the Google File System (GFS), and MapReduce, a computational framework for parallel processing. The successful implementation of these papers' concepts in Nutch resulted in it being split into two separate projects, the second of which became Hadoop, a first-class Apache project.

In this section we’ll look at Hadoop from an architectural perspective, examine how industry uses it, and consider some of its weaknesses. Once we’ve covered this background, we’ll look at how to install Hadoop and run a MapReduce job.

Hadoop proper, as shown in figure 1.2, is a distributed master-slave architecture[3] that consists of the following primary components:

Hadoop Distributed File System (HDFS) for data storage.

Yet Another Resource Negotiator (YARN), introduced in Hadoop 2, a general-purpose scheduler and resource manager. Any YARN application can run on a Hadoop cluster.

MapReduce, a batch-based computational engine. In Hadoop 2, MapReduce is implemented as a YARN application.

[2] The Nutch project, and by extension Hadoop, was led by Doug Cutting and Mike Cafarella.
[3] A model of communication where one process, called the master, has control over one or more other processes, called slaves.

Figure 1.1 The Hadoop environment is a distributed system that runs on commodity hardware: storage is provided via a distributed filesystem called HDFS, and the computation tier is a general-purpose scheduler and a distributed processing framework called MapReduce.

Traits intrinsic to Hadoop are data partitioning and parallel computation of large datasets. Its storage and computational capabilities scale with the addition of hosts to a Hadoop cluster; clusters with hundreds of hosts can easily reach data volumes in the petabytes.

In the first step in this section, we'll examine the HDFS, YARN, and MapReduce architectures.

1.1.1 Core Hadoop components

To understand Hadoop's architecture we'll start by looking at the basics of HDFS.

HDFS

HDFS is the storage component of Hadoop. It's a distributed filesystem that's modeled after the Google File System (GFS) paper.[4] HDFS is optimized for high throughput and works best when reading and writing large files (gigabytes and larger). To support this throughput, HDFS uses unusually large (for a filesystem) block sizes and data locality optimizations to reduce network input/output (I/O).

Scalability and availability are also key traits of HDFS, achieved in part due to data replication and fault tolerance. HDFS replicates files a configured number of times, is tolerant of both software and hardware failure, and automatically re-replicates data blocks on nodes that have failed.

[4] See "The Google File System," http://research.google.com/archive/gfs.html.

Figure 1.2 High-level Hadoop 2 master-slave architecture. The HDFS master is responsible for partitioning the storage across the slave nodes and keeping track of where data is located; the YARN master performs the actual scheduling of work for YARN applications; and the MapReduce master is responsible for organizing where computational work should be scheduled on the slave nodes. Each slave node runs HDFS, YARN, and MapReduce slave processes.

(31)

Figure 1.3 shows a logical representation of the components in HDFS: the NameNode and the DataNode. It also shows an application that's using the Hadoop filesystem library to access HDFS.

Hadoop 2 introduced two significant new features for HDFS—Federation and High Availability (HA):

Federation allows HDFS metadata to be shared across multiple NameNode hosts, which aids with HDFS scalability and also provides data isolation, allowing different applications or teams to run their own NameNodes without fear of impacting other NameNodes on the same cluster.

High Availability in HDFS removes the single point of failure that existed in Hadoop 1, wherein a NameNode disaster would result in a cluster outage. HDFS HA also offers the ability for failover (the process by which a standby NameNode takes over work from a failed primary NameNode) to be automated.

Figure 1.3 An HDFS client communicating with the master NameNode and slave DataNodes. The NameNode keeps in memory the metadata about the filesystem, such as which DataNodes manage the blocks for each file. Files are made up of blocks, and each file can be replicated multiple times, meaning there are many identical copies of each block for the file (by default, 3). HDFS clients talk to the NameNode for metadata-related activities and to DataNodes for reading and writing files; DataNodes communicate with each other for pipelining file reads and writes.
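To make the client side of figure 1.3 concrete, here is a minimal sketch (my own illustration, not one of the book's listings) that uses Hadoop's Java filesystem library to write and then read a file in HDFS. The path and file contents are hypothetical, and the sketch assumes the cluster address is picked up from the standard core-site.xml configuration on the classpath.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// A bare-bones HDFS client: metadata operations go to the NameNode,
// and file bytes are streamed to and from the DataNodes.
public class HdfsClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();   // reads core-site.xml/hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);       // an HDFS client when fs.defaultFS points at HDFS

    Path file = new Path("/tmp/file1.txt");     // hypothetical path, mirroring figure 1.3

    // Write a file: the client asks the NameNode where to place blocks,
    // then streams the data to the chosen DataNodes.
    try (FSDataOutputStream out = fs.create(file, true)) {
      out.write("hello hdfs\n".getBytes(StandardCharsets.UTF_8));
    }

    // Read it back: block locations come from the NameNode, bytes from the DataNodes.
    try (BufferedReader reader =
             new BufferedReader(new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
      System.out.println(reader.readLine());
    }
  }
}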

Now that you have a bit of HDFS knowledge, it's time to look at YARN, Hadoop's scheduler.

YARN

YARN is Hadoop’s distributed resource scheduler. YARN is new to Hadoop version 2 and was created to address challenges with the Hadoop 1 architecture:

Deployments larger than 4,000 nodes encountered scalability issues, and adding additional nodes didn't yield the expected linear scalability improvements.

Only MapReduce workloads were supported, which meant it wasn't suited to run execution models such as machine learning algorithms that often require iterative computations.

For Hadoop 2 these problems were solved by extracting the scheduling function from MapReduce and reworking it into a generic application scheduler, called YARN. With this change, Hadoop clusters are no longer limited to running MapReduce workloads; YARN enables a new set of workloads to be natively supported on Hadoop, and it allows alternative processing models, such as graph processing and stream processing, to coexist with MapReduce. Chapters 2 and 10 cover YARN and how to write YARN applications.

YARN's architecture is simple because its primary role is to schedule and manage resources in a Hadoop cluster. Figure 1.4 shows a logical representation of the core components in YARN: the ResourceManager and the NodeManager. Also shown are the components specific to YARN applications, namely, the YARN application client, the ApplicationMaster, and the container.

To fully realize the dream of a generalized distributed platform, Hadoop 2 introduced another change—the ability to allocate containers in various configurations.

Figure 1.4 The logical YARN architecture showing typical communication between the core YARN components and YARN application components. A YARN client is responsible for creating the YARN application. The ResourceManager is the YARN master process and is responsible for scheduling and managing resources, called "containers." The ApplicationMaster is created by the ResourceManager and is responsible for requesting containers to perform application-specific work. The NodeManager is the slave YARN process that runs on each node; it is responsible for launching and managing containers. Containers are YARN application-specific processes that perform some function pertinent to the application.

Hadoop 1 had the notion of "slots," which were a fixed number of map and reduce processes that were allowed to run on a single node. This was wasteful in terms of cluster utilization and resulted in underutilized resources during MapReduce operations, and it also imposed memory limits for map and reduce tasks. With YARN, each container requested by an ApplicationMaster can have disparate memory and CPU traits, and this gives YARN applications full control over the resources they need to fulfill their work.
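As a rough illustration of that flexibility (my own sketch, not one of the book's techniques), the following fragment shows how an ApplicationMaster written against the Hadoop 2 AMRMClient API might ask the ResourceManager for a container with a specific memory and CPU profile. The resource sizes and the empty registration values are arbitrary placeholder assumptions.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Sketch of an ApplicationMaster asking for a container with its own
// memory/CPU profile rather than a fixed Hadoop 1-style "slot."
public class ContainerRequestSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new YarnConfiguration();

    AMRMClient<ContainerRequest> amRmClient = AMRMClient.createAMRMClient();
    amRmClient.init(conf);
    amRmClient.start();

    // Register this ApplicationMaster with the ResourceManager
    // (host, RPC port, and tracking URL are left empty in this sketch).
    amRmClient.registerApplicationMaster("", 0, "");

    // Ask for one container with 1,536 MB of memory and 2 virtual cores
    // (arbitrary example values).
    Resource capability = Resource.newInstance(1536, 2);
    Priority priority = Priority.newInstance(0);
    amRmClient.addContainerRequest(new ContainerRequest(capability, null, null, priority));

    // A real ApplicationMaster would now call allocate() in a heartbeat loop
    // and launch its work in the containers it is granted (see chapter 10).
  }
}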

You’ll work with YARN in more detail in chapters 2 and 10, where you’ll learn how YARN works and how to write a YARN application. Next up is an examination of MapReduce, Hadoop’s computation engine.

MAPREDUCE

MapReduce is a batch-based, distributed computing framework modeled after Google's paper on MapReduce.[5] It allows you to parallelize work over a large amount of raw data, such as combining web logs with relational data from an OLTP database to model how users interact with your website. This type of work, which could take days or longer using conventional serial programming techniques, can be reduced to minutes using MapReduce on a Hadoop cluster.

The MapReduce model simplifies parallel processing by abstracting away the complexities involved in working with distributed systems, such as computational parallelization, work distribution, and dealing with unreliable hardware and software. With this abstraction, MapReduce allows the programmer to focus on addressing business needs rather than getting tangled up in distributed system complications.

MapReduce decomposes work submitted by a client into small parallelized map and reduce tasks, as shown in figure 1.5.

[5] See "MapReduce: Simplified Data Processing on Large Clusters," http://research.google.com/archive/mapreduce.html.

Figure 1.5 A client submitting a job to MapReduce, breaking the work into small map and reduce tasks. The MapReduce master decomposes the job into map and reduce tasks and schedules them for remote execution on the slave nodes, which read the input data and produce the output data.

The map and reduce constructs used in MapReduce are borrowed from those found in the Lisp functional programming language, and they use a shared-nothing model[6] to remove any parallel execution interdependencies that could add unwanted synchronization points or state sharing.

The role of the programmer is to define map and reduce functions where the map function outputs key/value tuples, which are processed by reduce functions to produce the final output. Figure 1.6 shows a pseudocode definition of a map function with regard to its input and output.

The power of MapReduce occurs between the map output and the reduce input in the shuffle and sort phases, as shown in figure 1.7.

[6] A shared-nothing architecture is a distributed computing concept that represents the notion that each node is independent and self-sufficient.

Figure 1.6 A logical view of the map function: map(key1, value1) → list(key2, value2). The map function takes as input a key/value pair, which represents a logical record from the input data source. In the case of a file, this could be a line, or if the input source is a table in a database, it could be a row. The map function produces zero or more output key/value pairs for one input pair. For example, if the map function is a filtering map function, it may only produce output if a certain condition is met. Or it could be performing a demultiplexing operation, where a single key/value pair yields multiple key/value output pairs.
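To ground the pseudocode in real code, here is a minimal word-count-style map function using the org.apache.hadoop.mapreduce API. This is my own illustrative sketch rather than a listing from the book: it treats the input value as a line of text and emits a (word, 1) pair for each token.

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// map(key1, value1) -> list(key2, value2): the input key is the byte offset of
// the line, the input value is the line itself, and the output is zero or more
// (word, 1) pairs.
public class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    StringTokenizer tokenizer = new StringTokenizer(value.toString());
    while (tokenizer.hasMoreTokens()) {
      word.set(tokenizer.nextToken());
      context.write(word, ONE);   // emit one key/value pair per token
    }
  }
}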

Figure 1.7 MapReduce's shuffle and sort phases. The shuffle and sort phases are responsible for two primary activities: determining the reducer that should receive each map output key/value pair (called partitioning), and ensuring that all the input keys for a given reducer are sorted. Map outputs for the same key (such as "hamster") go to the same reducer and are then combined to form a single input record for the reducer (for example, hamster,list(doc1,doc2)), and each reducer has all of its input keys sorted.


Figure 1.8 shows a pseudocode definition of a reduce function.
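Continuing the illustrative word-count sketch from above (again, my own example rather than the book's listing), the matching reduce function receives each unique word together with all the counts emitted by the mappers for that word and sums them into a single output record.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// reduce(key2, list(value2)) -> list(key3, value3): called once per unique map
// output key, with all of the values emitted for that key across the mappers.
public class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable value : values) {
      sum += value.get();
    }
    context.write(key, new IntWritable(sum));   // one output record per word
  }
}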

With the advent of YARN in Hadoop 2, MapReduce has been rewritten as a YARN application and is now referred to as MapReduce 2 (or MRv2). From a developer's perspective, MapReduce in Hadoop 2 works in much the same way it did in Hadoop 1, and code written for Hadoop 1 will execute without code changes on version 2.[7]

There are changes to the physical architecture and internal plumbing in MRv2 that are examined in more detail in chapter 2.

With some Hadoop basics under your belt, it’s time to take a look at the Hadoop ecosystem and the projects that are covered in this book.

1.1.2 The Hadoop ecosystem

The Hadoop ecosystem is diverse and grows by the day. It’s impossible to keep track of all of the various projects that interact with Hadoop in some form. In this book the focus is on the tools that are currently receiving the greatest adoption by users, as shown in figure 1.9.

MapReduce and YARN are not for the faint of heart, which means the goal for many of these Hadoop-related projects is to increase the accessibility of Hadoop to programmers and nonprogrammers. I'll cover many of the technologies listed in figure 1.9 in this book and describe them in detail within their respective chapters. In addition, the appendix includes descriptions and installation instructions for technologies that are covered in this book.

Coverage of the Hadoop ecosystem in this book

The Hadoop ecosystem grows by the day, and there are often multiple tools with overlapping features and benefits. The goal of this book is to provide practical techniques that cover the core Hadoop technologies, as well as select ecosystem technologies that are ubiquitous and essential to Hadoop.

Let’s look at the hardware requirements for your cluster.

[7] Some code may require recompilation against Hadoop 2 binaries to work with MRv2; see chapter 2 for more details.

Figure 1.8 A logical view of the reduce function: reduce(key2, list(value2)) → list(key3, value3). The reduce function is called once per unique map output key, and all of the map output values that were emitted across the mappers for that key are provided in a list. Like the map function, the reduce function can output zero-to-many key/value pairs; reducer output can write to flat files in HDFS, insert/update rows in a NoSQL database, or write to any data sink, depending on the requirements of the job.


1.1.3 Hardware requirements

The term commodity hardware is often used to describe Hadoop hardware requirements. It's true that Hadoop can run on any old servers you can dig up, but you'll still want your cluster to perform well, and you don't want to swamp your operations department with diagnosing and fixing hardware issues. Therefore, commodity refers to mid-level rack servers with dual sockets, as much error-correcting RAM as is affordable, and SATA drives optimized for RAID storage. Using RAID on the DataNode filesystems used to store HDFS content is strongly discouraged because HDFS already has replication and error-checking built in; on the NameNode, RAID is strongly recommended for additional security.[8]

From a network topology perspective with regard to switches and firewalls, all of the master and slave nodes must be able to open connections to each other. For small clusters, all the hosts would run 1 GB network cards connected to a single, good-quality switch. For larger clusters, look at 10 GB top-of-rack switches that have at least multiple 1 GB uplinks to dual-central switches. Client nodes also need to be able to talk to all of the master and slave nodes, but if necessary, that access can be from behind a firewall that permits connection establishment only from the client side.

[8] HDFS uses disks to durably store metadata about the filesystem.

Figure 1.9 Hadoop and related technologies that are covered in this book, layered on core Hadoop (HDFS and YARN + MapReduce). The categories shown are high-level languages, predictive analytics, alternative processing, SQL-on-Hadoop, and miscellaneous tools; the technologies shown include Pig, Crunch, Cascading, Scalding, Cascalog, R, RHadoop, Rhipe, Hive, Impala, Spark, Storm, Summingbird, Weave, and ElephantDB.


After reviewing Hadoop from a software and hardware perspective, you’ve likely developed a good idea of who might benefit from using it. Once you start working with Hadoop, you’ll need to pick a distribution to use, which is the next topic.

1.1.4 Hadoop distributions

Hadoop is an Apache open source project, and regular releases of the software are available for download directly from the Apache project's website (http://hadoop.apache.org/releases.html#Download). You can either download and install Hadoop from the website or use a quickstart virtual machine from a commercial distribution, which is usually a great starting point if you're new to Hadoop and want to quickly get it up and running.

After you've whetted your appetite with Hadoop and have committed to using it in production, the next question that you'll need to answer is which distribution to use.

You can continue to use the vanilla Hadoop distribution, but you'll have to build the in-house expertise to manage your clusters. This is not a trivial task and is usually only successful in organizations that are comfortable with having dedicated Hadoop DevOps engineers running and managing their clusters.

Alternatively, you can turn to a commercial distribution of Hadoop, which will give you the added benefits of enterprise administration software, a support team to consult when planning your clusters or to help you out when things go bump in the night, and the possibility of a rapid fix for software issues that you encounter. Of course, none of this comes for free (or for cheap!), but if you're running mission-critical services on Hadoop and don't have a dedicated team to support your infrastructure and services, then going with a commercial Hadoop distribution is prudent.

Picking the distribution that’s right for you

It's highly recommended that you engage with the major vendors to gain an understanding of which distribution suits your needs from a feature, support, and cost perspective. Remember that each vendor will highlight their advantages and at the same time expose the disadvantages of their competitors, so talking to two or more vendors will give you a more realistic sense of what the distributions offer. Make sure you download and test the distributions and validate that they integrate and work within your existing software and hardware stacks.

There are a number of distributions to choose from, and in this section I’ll briefly summarize each distribution and highlight some of its advantages.

APACHE

Apache is the organization that maintains the core Hadoop code and distribution, and because all the code is open source, you can crack open your favorite IDE and browse the source code to understand how things work under the hood. Historically the challenge with the Apache distributions has been that support is limited to the goodwill of the open source community, and there's no guarantee that your issue will be investigated and fixed. Having said that, the Hadoop community is a very supportive one, and responses to problems are usually rapid, even if the actual fixes will likely take longer than you may be able to afford.

The Apache Hadoop distribution has become more compelling now that administration has been simplified with the advent of Apache Ambari, which provides a GUI to help with provisioning and managing your cluster. As useful as Ambari is, though, it's worth comparing it against offerings from the commercial vendors, as the commercial tooling is typically more sophisticated.

CLOUDERA

Cloudera is the most tenured Hadoop distribution, and it employs a large number of Hadoop (and Hadoop ecosystem) committers. Doug Cutting, who along with Mike Cafarella originally created Hadoop, is the chief architect at Cloudera. In aggregate, this means that bug fixes and feature requests have a better chance of being addressed in Cloudera compared to Hadoop distributions with fewer committers.

Beyond maintaining and supporting Hadoop, Cloudera has been innovating in the Hadoop space by developing projects that address areas where Hadoop has been weak. A prime example of this is Impala, which offers a SQL-on-Hadoop system, similar to Hive but focusing on a near-real-time user experience, as opposed to Hive, which has traditionally been a high-latency system. There are numerous other projects that Cloudera has been working on: highlights include Flume, a log collection and distribution system; Sqoop, for moving relational data in and out of Hadoop; and Cloudera Search, which offers near-real-time search indexing.

HORTONWORKS

Hortonworks is also made up of a large number of Hadoop committers, and it offers the same advantages as Cloudera in terms of the ability to quickly address problems and feature requests in core Hadoop and its ecosystem projects.

From an innovation perspective, Hortonworks has taken a slightly different approach than Cloudera. An example is Hive: Cloudera's approach was to develop a whole new SQL-on-Hadoop system, but Hortonworks has instead looked at innovating inside of Hive to remove its high-latency shackles and add new capabilities such as support for ACID. Hortonworks is also the main driver behind the next-generation YARN platform, which is a key strategic piece keeping Hadoop relevant. Similarly, Hortonworks has used Apache Ambari for its administration tooling rather than developing an in-house proprietary administration tool, which is the path taken by the other distributions. Hortonworks' focus on developing and expanding the Apache ecosystem tooling has a direct benefit to the community, as it makes its tools available to all users without the need for support contracts.

MAPR

MapR has fewer Hadoop committers on its team than the other distributions discussed here, so its ability to fix and shape Hadoop's future is potentially more bounded than its peers.

From an innovation perspective, MapR has taken a decidedly different approach to Hadoop support compared to its peers. From the start it decided that HDFS wasn't an enterprise-ready filesystem, and instead developed its own proprietary filesystem, which offers compelling features such as POSIX compliance (offering random-write support and atomic operations), High Availability, NFS mounting, data mirroring, and snapshots.

Some of these features have been introduced into Hadoop 2, but MapR has offered them from the start, and, as a result, one can expect that these features are robust.

As part of the evaluation criteria, it should be noted that parts of the MapR stack, such as its filesystem and its HBase offering, are closed source and proprietary. This affects the ability of your engineers to browse, fix, and contribute patches back to the community. In contrast, most of Cloudera's and Hortonworks' stacks are open source, especially Hortonworks', which is unique in that the entire stack, including the management platform, is open source.

MapR's notable highlights include being made available in Amazon's cloud as an alternative to Amazon's own Elastic MapReduce and being integrated with Google's Compute Cloud.

I've just scratched the surface of the advantages that the various Hadoop distributions offer; your next steps will likely be to contact the vendors and start playing with the distributions yourself.

Next, let’s take a look at companies currently using Hadoop, and in what capacity they’re using it.

1.1.5 Who’s using Hadoop?

Hadoop has a high level of penetration in high-tech companies, and it's starting to make inroads in a broad range of sectors, including the enterprise (Booz Allen Hamilton, J.P. Morgan), government (NSA), and health care.

Facebook uses Hadoop, Hive, and HBase for data warehousing and real-time application serving.[9] Facebook's data warehousing clusters are petabytes in size with thousands of nodes, and they use separate HBase-driven, real-time clusters for messaging and real-time analytics.

Yahoo! uses Hadoop for data analytics, machine learning, search ranking, email antispam, ad optimization, ETL,[10] and more. Combined, it has over 40,000 servers running Hadoop with 170 PB of storage. Yahoo! is also running the first large-scale YARN deployments with clusters of up to 4,000 nodes.[11]

Twitter is a major big data innovator, and it has made notable contributions to Hadoop with projects such as Scalding, a Scala API for Cascading; Summingbird, a

[9] See Dhruba Borthakur, "Looking at the code behind our three uses of Apache Hadoop" on Facebook at http://mng.bz/4cMc. Facebook has also developed its own SQL-on-Hadoop tool called Presto and is migrating away from Hive (see Martin Traverso, "Presto: Interacting with petabytes of data at Facebook," http://mng.bz/p0Xz).

[10] Extract, transform, and load (ETL) is the process by which data is extracted from outside sources, transformed to fit the project's needs, and loaded into the target data sink. ETL is a common process in data warehousing.

[11] There are more details on YARN and its use at Yahoo! in "Apache Hadoop YARN: Yet Another Resource Negotiator" by Vinod Kumar Vavilapalli et al., www.cs.cmu.edu/~garth/15719/papers/yarn.pdf.
