Pig Tutorial 1 – Multi-Query Execution, Store, Dump, Dependencies, and Replicated, Skewed, and Merge Joins

A Pig Latin statement is an operator that takes a relation as input and produces another relation as output. This definition applies to all Pig Latin operators except LOAD and STORE, which read data from and write data to the file system. Pig Latin statements can span multiple lines and must end with a semicolon.
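
For instance, a single statement can be split across several lines for readability; Pig reads it up to the terminating semicolon. A minimal sketch (the file and field names are made up for illustration):

A = LOAD 'student_data'
    AS (name:chararray,
        age:int,
        gpa:float);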

You can execute Pig Latin statements:

1. Using the Grunt shell or the command line.

2. In MapReduce mode or local mode. MapReduce mode is the default, so you do not need to specify anything for it; to run in local mode, use the command pig -x local.

3. Either interactively or in batch.

Structure of Pig

1. Pig validates the syntax and semantics of all statements.
2. Pig executes the statements only when it encounters a DUMP or STORE.
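
As a quick illustration (the file and field names are hypothetical), entering the LOAD and FILTER statements only builds the plan; the job is launched when the DUMP statement is reached:

grunt> A = LOAD 'logs' AS (level:chararray, msg:chararray);
grunt> B = FILTER A BY level == 'ERROR';
grunt> DUMP B;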

Retrieving Pig Latin Results

1. Use the DUMP operator to display results to a screen.
2. Use the STORE operator to write results to a file on the file system.

Debugging Pig Latin

1. Use the DESCRIBE operator to review the schema of a relation.
2. Use the EXPLAIN operator to view the logical, physical, or MapReduce execution plans used to compute a relation.
3. Use the ILLUSTRATE operator to view the step-by-step execution of a series of statements.
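
For example, given some relation B built earlier in a session (the alias is arbitrary), the three operators are used like this:

grunt> DESCRIBE B;
grunt> EXPLAIN B;
grunt> ILLUSTRATE B;

DESCRIBE prints the schema of B, EXPLAIN prints the plans Pig would use to compute it, and ILLUSTRATE runs the pipeline on a small sample of the data so you can see what each statement does.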

Using Comments in Scripts

1. For multi-line comments use /* … */
2. For single-line comments use --
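
For example (the file and field names here are just placeholders):

/* load the data and keep a subset of it */
A = LOAD 'data' AS (f1:int, f2:int);
B = FILTER A BY f1 > 10;  -- keep only rows where f1 is greater than 10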

Case Sensitivity

The names (aliases) of relations and fields are case sensitive. The names of Pig Latin functions are case sensitive. The names of parameters and all other Pig Latin keywords are case insensitive.

In the example below, note the following:

grunt> A = LOAD 'data' USING PigStorage() AS (f1:int, f2:int, f3:int);
grunt> B = GROUP A BY f1;
grunt> C = FOREACH B GENERATE COUNT ($0);
grunt> DUMP C;

1. The names (aliases) of relations A, B, and C are case sensitive.

2. The names (aliases) of fields f1, f2, and f3 are case sensitive.

3. Function names PigStorage and COUNT are case sensitive.

4. Keywords LOAD, USING, AS, GROUP, BY, FOREACH, GENERATE, and DUMP are case insensitive. They can also be written as load, using, as, group, by, etc.

5. In the FOREACH statement, the field in relation B is referred to by positional notation ($0).

Multi-Query Execution

With multi-query execution Pig processes an entire script or a batch of statements at once.

Multi-query execution is turned on by default. To turn it off and revert to Pig’s “execute-on-dump/store” behavior, use the “-M” or “-no_multiquery” options.


$ pig -M myscript.pig
or
$ pig -no_multiquery myscript.pig

How it Works

Multi-query execution changes how batch execution works: the entire script is first parsed to determine whether intermediate tasks can be combined to reduce the overall amount of work that needs to be done; execution starts only after parsing is complete.

Store vs. Dump

With multi-query execution, you want to use STORE to save your results. You do not want to use DUMP, as it disables multi-query execution and is likely to slow down execution. In the script below, because the DUMP command is interactive, multi-query execution is disabled and two separate jobs are created to execute the script. The first job executes A > B > DUMP, while the second job executes A > B > C > STORE.

A = LOAD 'input' AS (x, y, z);
B = FILTER A BY x > 5;
DUMP B;
C = FOREACH B GENERATE y, z;
STORE C INTO 'output';

STORE Example

In this script, multi-query optimization will kick in allowing the entire script to be executed as a single job. Two outputs are produced: output1 and output2.


A = LOAD 'input' AS (x, y, z);
B = FILTER A BY x > 5;
STORE B INTO 'output1';
C = FOREACH B GENERATE y, z;
STORE C INTO 'output2';

Error Handling

With multi-query execution Pig processes an entire script or a batch of statements at once. By default Pig tries to run all the jobs that result from that, regardless of whether some jobs fail during execution. To check which jobs have succeeded or failed use one of these options.

First, Pig logs all successful and failed store commands. Store commands are identified by output path. At the end of execution a summary line indicates success, partial failure or failure of all store commands.

Second, Pig returns a different return code upon completion for these scenarios:

Return code 0: All jobs succeeded
Return code 1: Used for retrievable errors
Return code 2: All jobs have failed
Return code 3: Some jobs have failed
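
For example, in a shell you can inspect the exit status after running Pig (this is ordinary shell usage, not a Pig-specific feature):

$ pig myscript.pig
$ echo $?

A value of 0 means all store commands succeeded, 2 means all of them failed, and 3 means some of them failed.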

In some cases it might be desirable to fail the entire script upon detecting the first failed job. This can be achieved with the “-F” or “-stop_on_failure” command line flag. If used, Pig will stop execution when the first failed job is detected and discontinue further processing. This also means that file commands that come after a failed store in the script will not be executed (this can be used to create “done” files).

This is how the flag is used:

$ pig -F myscript.pig
or
$ pig -stop_on_failure myscript.pig
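
For example, a script run with -stop_on_failure can create a marker file only if its store succeeds. The paths below are placeholders, and the fs command is assumed to be available in your Pig version:

A = LOAD 'input' AS (x, y, z);
STORE A INTO 'out1';
-- the fs command below is skipped if the STORE above fails
fs -touchz out1_done
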
Implicit Dependencies

If a script has dependencies on the execution order outside of what Pig knows about, execution may fail.

Example

In this script, MYUDF might try to read from out1, a file that A was just stored into. However, Pig does not know that MYUDF depends on the out1 file and might submit the jobs producing the out2 and out1 files at the same time.


STORE A INTO 'out1';
B = LOAD 'data2';
C = FOREACH B GENERATE MYUDF($0,'out1');
STORE C INTO 'out2';

To make the script work (to ensure that the right execution order is enforced), add the exec statement. The exec statement will trigger the execution of the statements that produce the out1 file.


STORE A INTO 'out1';
EXEC;
B = LOAD 'data2';
C = FOREACH B GENERATE MYUDF($0,'out1');
STORE C INTO 'out2';

Replicated Joins

Fragment replicate join is a special type of join that works well if one or more relations are small enough to fit into main memory. In such cases, Pig can perform a very efficient join because all of the Hadoop work is done on the map side. In this type of join the large relation is followed by one or more small relations. The small relations must be small enough to fit into main memory; if they don't, the process fails and an error is generated.

Usage

Perform a replicated join with the USING clause (see inner joins and outer joins). In this example, a large relation is joined with two smaller relations. Note that the large relation comes first, followed by the smaller relations; all small relations together must fit into main memory, otherwise an error is generated.


big = LOAD 'big_data' AS (b1,b2,b3);
tiny = LOAD 'tiny_data' AS (t1,t2,t3);
mini = LOAD 'mini_data' AS (m1,m2,m3);
C = JOIN big BY b1, tiny BY t1, mini BY m1 USING 'replicated';

Conditions

Fragment replicate joins are experimental; we don't have a strong sense of how small the small relation must be to fit into memory. In our tests with a simple query that involves just a JOIN, a relation of up to 100 MB can be used if the process overall gets 1 GB of memory.

Skewed Joins

Parallel joins are vulnerable to the presence of skew in the underlying data. If the underlying data is sufficiently skewed, load imbalances will swamp any of the parallelism gains. In order to counteract this problem, skewed join computes a histogram of the key space and uses this data to allocate reducers for a given key. Skewed join does not place a restriction on the size of the input keys. It accomplishes this by splitting the left input on the join predicate and streaming the right input. The left input is sampled to create the histogram.

Skewed join can be used when the underlying data is sufficiently skewed and you need a finer control over the allocation of reducers to counteract the skew. It should also be used when the data associated with a given key is too large to fit in memory.

Usage

Perform a skewed join with the USING clause (see inner joins and outer joins).

big = LOAD 'big_data' AS (b1,b2,b3);
massive = LOAD 'massive_data' AS (m1,m2,m3);
C = JOIN big BY b1, massive BY m1 USING 'skewed';

Conditions

Skewed join will only work under these conditions:

1. Skewed join works with two-table inner joins. Currently Pig does not support more than two tables in a skewed join; specifying a three-way (or more) join will fail validation. For such joins, you must break them up into two-way joins yourself.

2. The pig.skewedjoin.reduce.memusage Java parameter specifies the fraction of heap available for the reducer to perform the join. A low fraction forces Pig to use more reducers but increases copying cost. We have seen good performance when we set this value in the range 0.1 – 0.4. However, note that this is hardly an accurate range; the right value depends on the amount of heap available for the operation, the number of columns in the input, and the skew. An appropriate value is best obtained by experimenting until performance is good. The default value is 0.5.
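
For example, the fraction can be passed as a Java property on the command line (the value 0.3 here is only illustrative, not a recommendation):

$ pig -Dpig.skewedjoin.reduce.memusage=0.3 myscript.pig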

Merge Joins

Often user data is stored such that both inputs are already sorted on the join key. In this case, it is possible to join the data in the map phase of a MapReduce job. This provides a significant performance improvement compared to passing all of the data through unneeded sort and shuffle phases.

Pig has implemented a merge join algorithm, or sort-merge join, although in this case the sort is already assumed to have been done. Pig implements the merge join algorithm by selecting the left input of the join to be the input file for the map phase, and the right input of the join to be the side file. It then samples records from the right input to build an index that contains, for each sampled record, the key(s), the file name, and the offset into the file at which the record begins. This sampling is done in an initial map-only job. A second MapReduce job is then started with the left input as its input. Each map uses the index to seek to the appropriate record in the right input and begin doing the join.

Usage

Perform a merge join with the USING clause.


C = JOIN A BY a1, B BY b1 USING 'merge';

Conditions

Merge join will only work under these conditions:

1. Both inputs are sorted in ascending order of join keys.

2. If an input consists of many files, there should be a total ordering across the files in the ascending order of file name. So for example if one of the inputs to the join is a directory called input1 with files a and b under it, the data should be sorted in ascending order of join key when read starting at a and ending in b. Likewise if an input directory has part files part-00000, part-00001, part-00002 and part-00003, the data should be sorted if the files are read in the sequence part-00000, part-00001, part-00002 and part-00003.

3. The merge join only has two inputs

4. The loadfunc for the right input of the join should implement the OrderedLoadFunc interface (PigStorage does implement the OrderedLoadFunc interface).

5. Only inner join is supported.

6. Between the load of the sorted input and the merge join statement there can only be filter statements and foreach statements, where the foreach statements must meet the following conditions:
a. There should be no UDFs in the foreach statement.
b. The foreach statement should not change the position of the join keys.
c. There should be no transformation on the join keys that would change the sort order.
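
For example, if the inputs are not already sorted, one way to set up a merge join is to sort and store them first, then load the sorted copies and join them. The file and field names below are placeholders:

-- sort both inputs and store the sorted copies
A = LOAD 'data1' AS (a1, a2);
SA = ORDER A BY a1;
STORE SA INTO 'sorted1';
B = LOAD 'data2' AS (b1, b2);
SB = ORDER B BY b1;
STORE SB INTO 'sorted2';
EXEC;
-- load the sorted copies back and join them on the map side
A2 = LOAD 'sorted1' AS (a1, a2);
B2 = LOAD 'sorted2' AS (b1, b2);
C = JOIN A2 BY a1, B2 BY b1 USING 'merge';
STORE C INTO 'joined';

The EXEC statement forces the sorted copies to be written before they are loaded back (see Implicit Dependencies above).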

For optimal performance, each part file of the left (sorted) input of the join should have a size of at least one HDFS block (for example, if the HDFS block size is 128 MB, each part file should be at least 128 MB). If the total input size (including all part files) is greater than the block size, then the part files should be uniform in size (without large skews in sizes). The main idea is to eliminate skew in the amount of input the final map job performing the merge join will process.

In local mode, merge join will revert to regular join.