Apache DataFu™

DataFu 1.0

William Vaughan

DataFu is an open-source collection of user-defined functions for working with large-scale data in Hadoop and Pig.

About two years ago, we recognized a need for a stable, well-tested library of Pig UDFs that could assist in common data mining and statistics tasks. Over the years, we had developed several routines that were used across LinkedIn and were thrown together into an internal package we affectionately called “littlepiggy.” The unfortunate part, and this is true of many such efforts, is that the UDFs were ill-documented, ill-organized, and easily broken when someone made a change. Along came PigUnit, which made it possible to test UDFs, so we spent the time to clean up these routines by adding documentation and rigorous unit tests. We figured this “datafoo” package would help the community at large, and that became the initial release of DataFu.

Since then, the project has continued to evolve. We have accepted contributions from a number of sources, improved the style and quality of testing, and adapted to the changing features and versions of Pig. During this time DataFu has been used extensively at LinkedIn for many of our data-driven products, like "People You May Know" and "Skills and Endorsements." The library is used at numerous companies, and it has also been included in Cloudera's Hadoop distribution (CDH) as well as the Apache BigTop project. DataFu has matured, and we are proud to announce the 1.0 release.

This release of DataFu has a number of new features that can make writing Pig easier, cleaner, and more efficient. In this post, we are going to highlight some of these new features by walking through a large number of examples. Think of this as a HowTo Pig + DataFu guide.

Counting events

Let's consider a hypothetical recommendation system. In this system, a user will be recommended an item (an impression). The user can then accept that recommendation, explicitly reject that recommendation, or just simply ignore it. A common task in such a system would be to count how many times users have seen and acted on items. How could we construct a pig script to implement this task?

Setup

Before we start, it's best to define what exactly we want to do, our inputs and our outputs. The task is to generate, for each user, a list of all items that user has seen with a count of how many impressions were seen, how many were accepted, and how many were rejected.

In summary, our desired output schema is:

features: {user_id:int, items:{(item_id:int, impression_count:int, accept_count:int, reject_count:int)}}

For input, we can load a record for each event:

impressions = LOAD '$impressions' AS (user_id:int, item_id:int, timestamp:long);
accepts = LOAD '$accepts' AS (user_id:int, item_id:int, timestamp:long);
rejects = LOAD '$rejects' AS (user_id:int, item_id:int, timestamp:long);

A naive approach

The straightforward approach to this problem generates each of the counts that we want, joins all of these counts together, and then groups them by user to produce the desired output:

impressions_counted = FOREACH (GROUP impressions BY (user_id, item_id)) GENERATE
  FLATTEN(group) as (user_id, item_id), COUNT_STAR(impressions) as count;
accepts_counted = FOREACH (GROUP accepts BY (user_id, item_id)) GENERATE
  FLATTEN(group) as (user_id, item_id), COUNT_STAR(accepts) as count;
rejects_counted = FOREACH (GROUP rejects BY (user_id, item_id)) GENERATE
  FLATTEN(group) as (user_id, item_id), COUNT_STAR(rejects) as count;

joined_accepts = JOIN impressions_counted BY (user_id, item_id) LEFT OUTER, accepts_counted BY (user_id, item_id);
joined_accepts = FOREACH joined_accepts GENERATE
  impressions_counted::user_id as user_id,
  impressions_counted::item_id as item_id,
  impressions_counted::count as impression_count,
  ((accepts_counted::count is null)?0:accepts_counted::count) as accept_count;

joined_accepts_rejects = JOIN joined_accepts BY (user_id, item_id) LEFT OUTER, rejects_counted BY (user_id, item_id);
joined_accepts_rejects = FOREACH joined_accepts_rejects GENERATE
  joined_accepts::user_id as user_id,
  joined_accepts::item_id as item_id,
  joined_accepts::impression_count as impression_count,
  joined_accepts::accept_count as accept_count,
  ((rejects_counted::count is null)?0:rejects_counted::count) as reject_count;

features = FOREACH (GROUP joined_accepts_rejects BY user_id) GENERATE
  group as user_id, joined_accepts_rejects.(item_id, impression_count, accept_count, reject_count) as items;

Unfortunately, this approach is not very efficient. It generates six mapreduce jobs during execution and streams a lot of the same data through these jobs.

A better approach

Recognizing that we can combine the outer joins and group operations into a single cogroup allows us to reduce the number of mapreduce jobs.

features_grouped = COGROUP impressions BY (user_id, item_id), accepts BY (user_id, item_id), rejects BY (user_id, item_id);
features_counted = FOREACH features_grouped GENERATE
  FLATTEN(group) as (user_id, item_id),
  COUNT_STAR(impressions) as impression_count,
  COUNT_STAR(accepts) as accept_count,
  COUNT_STAR(rejects) as reject_count;

features = FOREACH (GROUP features_counted BY user_id) GENERATE
  group as user_id,
  features_counted.(item_id, impression_count, accept_count, reject_count) as items;

However, we still have to perform an extra group operation to bring everything together by user_id for a total of two mapreduce jobs.

The best approach: DataFu

The two grouping operations in the last example operate on the same set of data. It would be great if we could just get rid of one of them somehow.

One thing that we have noticed is that even very big data will frequently get reasonably small once you segment it sufficiently. In this case, we have to segment down to the user level for our output. That's small enough to fit in memory. So, with a little bit of DataFu, we can group up all of the data for that user, and process it in one pass:

DEFINE CountEach datafu.pig.bags.CountEach('flatten');
DEFINE BagLeftOuterJoin datafu.pig.bags.BagLeftOuterJoin();
DEFINE Coalesce datafu.pig.util.Coalesce();

features_grouped = COGROUP impressions BY user_id, accepts BY user_id, rejects BY user_id;

features_counted = FOREACH features_grouped GENERATE
  group as user_id,
  CountEach(impressions.item_id) as impressions,
  CountEach(accepts.item_id) as accepts,
  CountEach(rejects.item_id) as rejects;

features_joined = FOREACH features_counted GENERATE
  user_id,
  BagLeftOuterJoin(
    impressions, 'item_id',
    accepts, 'item_id',
    rejects, 'item_id'
  ) as items;

features = FOREACH features_joined {
  projected = FOREACH items GENERATE
    impressions::item_id as item_id,
    impressions::count as impression_count,
    Coalesce(accepts::count, 0) as accept_count,
    Coalesce(rejects::count, 0) as reject_count;
  GENERATE user_id, projected as items;
}

So, let's step through this example and see how it works and what our data looks like along the way.

Group the features

First we group all of the data together by the user, getting a few bags with all of the respective event data in the bag.

features_grouped = COGROUP impressions BY user_id, accepts BY user_id, rejects BY user_id;

--features_grouped: {group: int,impressions: {(user_id: int,item_id: int,timestamp: long)},accepts: {(user_id: int,item_id: int,timestamp: long)},rejects: {(user_id: int,item_id: int,timestamp: long)}}

CountEach

Next we count the occurrences of each item in the impression, accept, and reject bags.

DEFINE CountEach datafu.pig.bags.CountEach('flatten');

features_counted = FOREACH features_grouped GENERATE
    group as user_id,
    CountEach(impressions.item_id) as impressions,
    CountEach(accepts.item_id) as accepts,
    CountEach(rejects.item_id) as rejects;

--features_counted: {user_id: int,impressions: {(item_id: int,count: int)},accepts: {(item_id: int,count: int)},rejects: {(item_id: int,count: int)}}

CountEach is a new UDF in DataFu that iterates through a bag counting the number of occurrences of each distinct tuple. In this case, we want to count occurrences of items, so we project the inner tuples of the bag to contain just the item_id. Since we specified the optional 'flatten' argument in the constructor, the output of the UDF will be a bag of each distinct input tuple (item_id) with a count field appended.
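
To make the semantics concrete, here is a rough sketch of the kind of counting CountEach('flatten') performs over a bag held in memory. This is illustrative only, not the DataFu implementation; the class name and static helper are made up for this post.

import java.util.LinkedHashMap;
import java.util.Map;

import org.apache.pig.data.BagFactory;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;

// Illustrative sketch: count each distinct tuple in a bag and emit
// (original fields..., count), roughly what CountEach('flatten') produces.
public class CountEachSketch {
  public static DataBag countEach(DataBag input) {
    // Preserve first-seen order while counting occurrences of each tuple.
    Map<Tuple, Integer> counts = new LinkedHashMap<Tuple, Integer>();
    for (Tuple t : input) {
      Integer c = counts.get(t);
      counts.put(t, c == null ? 1 : c + 1);
    }
    DataBag output = BagFactory.getInstance().newDefaultBag();
    for (Map.Entry<Tuple, Integer> e : counts.entrySet()) {
      // 'flatten' mode: append the count to the fields of the distinct tuple.
      Tuple out = TupleFactory.getInstance().newTuple();
      for (Object field : e.getKey().getAll()) {
        out.append(field);
      }
      out.append(e.getValue());
      output.add(out);
    }
    return output;
  }
}

In our script the input bag holds just item_id tuples, so the result is a bag of (item_id, count) pairs, matching the schema shown above.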

BagLeftOuterJoin

Now, we want to combine all of the separate counts for each type of event together into one tuple per item.

DEFINE BagLeftOuterJoin datafu.pig.bags.BagLeftOuterJoin();

features_joined = FOREACH features_counted GENERATE
    user_id,
    BagLeftOuterJoin(
        impressions, 'item_id',
        accepts, 'item_id',
        rejects, 'item_id'
    ) as items;

--features_joined: {user_id: int,items: {(impressions::item_id: int,impressions::count: int,accepts::item_id: int,accepts::count: int,rejects::item_id: int,rejects::count: int)}}

This is a join operation, but unfortunately the only join operation that Pig allows on bags (in a nested FOREACH) is CROSS. DataFu provides the BagLeftOuterJoin UDF to make up for this limitation. This UDF performs an in-memory hash join of the bags using the specified field as the join key. The output of this UDF mimics what you would expect from this bit of not (yet) valid Pig:

features_joined = FOREACH features_counted {
  items = JOIN impressions BY item_id LEFT OUTER, accepts BY item_id, rejects BY item_id;
  GENERATE
    user_id, items;
}

Because BagLeftOuterJoin is a UDF and works in memory, a separate map-reduce job is not launched. This fact will save us some time as we'll see later on in the analysis.
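
For intuition, here is a rough sketch of an in-memory hash left outer join of two bags, in the spirit of what BagLeftOuterJoin does for us. It is not the DataFu implementation: the names are invented, it joins only two bags, and it assumes at most one tuple per key on the right side (which holds for the per-item counts in this example).

import java.util.HashMap;
import java.util.Map;

import org.apache.pig.backend.executionengine.ExecException;
import org.apache.pig.data.BagFactory;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;

// Illustrative sketch of a left outer join done entirely in memory:
// hash the right bag by its key, then probe it once per left tuple.
public class BagHashJoinSketch {
  public static DataBag leftOuterJoin(DataBag left, DataBag right,
                                      int keyIndex, int rightFieldCount)
      throws ExecException {
    // Build a hash table over the right-hand bag, keyed by the join field.
    Map<Object, Tuple> rightByKey = new HashMap<Object, Tuple>();
    for (Tuple r : right) {
      rightByKey.put(r.get(keyIndex), r);
    }
    DataBag output = BagFactory.getInstance().newDefaultBag();
    for (Tuple l : left) {
      Tuple joined = TupleFactory.getInstance().newTuple();
      for (Object field : l.getAll()) {
        joined.append(field);
      }
      Tuple match = rightByKey.get(l.get(keyIndex));
      if (match != null) {
        for (Object field : match.getAll()) {
          joined.append(field);
        }
      } else {
        // Left outer semantics: keep the left tuple, pad the right with nulls.
        for (int i = 0; i < rightFieldCount; i++) {
          joined.append(null);
        }
      }
      output.add(joined);
    }
    return output;
  }
}

The whole join happens inside the UDF call, so the only cost is the memory needed to hold one user's bags, which is exactly the bet we made when we grouped by user_id.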

Coalesce

Finally, we have our data in about the right shape. We just need to clean up the schema and put some default values in place.

DEFINE Coalesce datafu.pig.util.Coalesce();

features = FOREACH features_joined {
    projected = FOREACH items GENERATE
        impressions::item_id as item_id,
        impressions::count as impression_count,
        Coalesce(accepts::count, 0) as accept_count,
        Coalesce(rejects::count, 0) as reject_count;
  GENERATE user_id, projected as items;
}

--features: {user_id: int,items: {(item_id: int,impression_count: int,accept_count: int,reject_count: int)}}

The various counts were joined together using an outer join in the previous step because a user has not necessarily performed an accept or reject action on each item that he or she has seen. If they have not acted, those fields will be null. Coalesce returns its first non-null parameter, which allows us to cleanly replace that null with a zero, avoiding the need for a bincond operator and maintaining the correct schema. Done!
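
The idea behind Coalesce is small enough to sketch in a few lines. This is not the DataFu source, just the core null-handling logic; the real UDF (datafu.pig.util.Coalesce) also handles Pig schemas so that the output type is correct.

// Illustrative sketch of the Coalesce logic: return the first non-null
// argument, or null if every argument is null.
public class CoalesceSketch {
  public static Object coalesce(Object... values) {
    for (Object value : values) {
      if (value != null) {
        return value;
      }
    }
    return null;
  }

  public static void main(String[] args) {
    // coalesce(null, 0) -> 0, which is how the missing accept/reject
    // counts above become zeros instead of nulls.
    System.out.println(coalesce(null, 0));
  }
}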

Analysis

Ok great, we now have three ways to write the same script. We know that the naive way will trigger six mapreduce jobs, the better way two, and the DataFu way one, but does that really equate to a difference in performance?

Since we happened to have a dataset with a few billion records in it lying around, we decided to compare the three. We looked at two different metrics for evaluation. One is the best case wall clock time. This metric is basically the sum of the slowest map and reduce task for each job (using pig default parallelism estimates). The other is total compute time which is the sum of all map and reduce task durations.

Version   Wall clock time %   Total compute time %
naive     100.0%              100.0%
better    30.2%               62.6%
datafu    10.5%               43.0%

As we can see, the DataFu version provides a noticeable improvement in both metrics. Glad to know that work wasn't all for naught.

Creating a custom purpose UDF

Many UDFs, such as those presented in the previous section, are general purpose. DataFu serves to collect these UDFs and make sure they are tested and easily available. If you are writing such a UDF, then we will happily accept contributions. However, frequently when you sit down to write a UDF, it is because you need to insert some sort of custom business logic or calculation into your pig script. These types of UDFs can easily become complex, involving a large number of parameters or nested structures.

Positional notation is bad

Even once the code is written, you are not done. You have to maintain it.

One of the difficult parts of this maintenance is that, as the Pig script that uses the UDF changes, a developer has to be sure not to change the parameters passed to the UDF. Worse, because a standard UDF references fields by position, it's very easy to introduce a subtle change with an unintended side effect that does not trigger any error at runtime, for example when two fields of the same type swap positions.

Aliases can be better

Using aliases instead of positions makes it easier to maintain a consistent mapping between the UDF and the Pig script. If an alias is removed, the UDF will fail with an error. If an alias changes position in a tuple, the UDF does not need to care. The alias also has some semantic meaning to the developer, which can aid in the maintenance process.
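
To see what can go wrong with positions, consider a hypothetical UDF that reads its two inputs positionally; the field names in the comments are made up for this example. If the script is later changed so that the two double fields swap places, nothing complains, the job runs, and the answer is silently wrong.

import java.io.IOException;

import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

// Hypothetical UDF that relies on field positions. A schema change in the
// Pig script that swaps two fields of the same type is invisible here.
public class PositionalRatio extends EvalFunc<Double> {
  @Override
  public Double exec(Tuple input) throws IOException {
    Double numerator = (Double) input.get(0);   // assumed to be 'accepts'
    Double denominator = (Double) input.get(1); // assumed to be 'impressions'
    return numerator / denominator;             // silently wrong if swapped
  }
}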

AliasableEvalFunc

Unfortunately, there is a problem with using aliases. As of Pig 0.11.1 they are not available when the UDF is executing on the back-end; they are only available on the front-end. The solution is to capture a mapping of alias to position on the front-end, store that mapping in the UDF context, retrieve it on the back-end, and use it to look up each position by alias. You also need to handle a few issues with complex schemas (nested tuples and bags), keep track of UDF instances, and so on. To make this process simpler, DataFu provides AliasableEvalFunc, an extension to the standard EvalFunc with all of this behavior included.
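
Under the hood, the mechanism Pig provides for this kind of front-end-to-back-end handoff is UDFContext. The following is a stripped-down sketch of the pattern, not the actual AliasableEvalFunc code; among other things, the real class handles nested schemas, prefixes for bags and tuples, and multiple instances of the same UDF.

import java.io.IOException;
import java.util.Properties;

import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;
import org.apache.pig.impl.logicalLayer.schema.Schema;
import org.apache.pig.impl.util.UDFContext;

// Sketch of the alias-to-position handoff: record each alias's position on
// the front-end, where the schema is known, then resolve aliases to
// positions on the back-end, where only the stored properties are available.
public abstract class AliasMappingSketch<T> extends EvalFunc<T> {
  private Properties props() {
    // Properties stored here are shipped to the back-end with the job.
    return UDFContext.getUDFContext().getUDFProperties(this.getClass());
  }

  @Override
  public Schema outputSchema(Schema input) {
    // Front-end: capture the mapping of alias to position.
    int position = 0;
    for (Schema.FieldSchema field : input.getFields()) {
      if (field.alias != null) {
        props().setProperty("position." + field.alias, Integer.toString(position));
      }
      position++;
    }
    return getOutputSchema(input);
  }

  // Back-end: look a field up by alias instead of by position.
  protected Object getField(Tuple input, String alias) throws IOException {
    String position = props().getProperty("position." + alias);
    if (position == null) {
      throw new IOException("No field found for alias: " + alias);
    }
    return input.get(Integer.parseInt(position));
  }

  // Subclasses define the output schema, as with AliasableEvalFunc.
  public abstract Schema getOutputSchema(Schema input);
}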

Mortgage payment example

Using AliasableEvalFunc is pretty simple; the primary differences are that you override getOutputSchema instead of outputSchema and that you access fields through the alias-to-position map using a number of convenience methods. Consider the following example:

import java.io.IOException;

import org.apache.pig.data.BagFactory;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.DataType;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;
import org.apache.pig.impl.logicalLayer.FrontendException;
import org.apache.pig.impl.logicalLayer.schema.Schema;

import datafu.pig.util.AliasableEvalFunc;

public class MortgagePayment extends AliasableEvalFunc<DataBag> {
  @Override
  public Schema getOutputSchema(Schema input) {
    try {
      Schema tupleSchema = new Schema();
      tupleSchema.add(new Schema.FieldSchema("monthly_payment", DataType.DOUBLE));
      Schema bagSchema;

      bagSchema = new Schema(new Schema.FieldSchema(this.getClass().getName().toLowerCase(), tupleSchema, DataType.BAG));
      return bagSchema;
    } catch (FrontendException e) {
      throw new RuntimeException(e);
    }
  }

  @Override
  public DataBag exec(Tuple input) throws IOException  {
    DataBag output = BagFactory.getInstance().newDefaultBag();

    // get a value from the input tuple by alias
    Double principal = getDouble(input, "principal");
    Integer numPayments = getInteger(input, "num_payments");
    DataBag interestRates = getBag(input, "interest_rates");

    for (Tuple interestTuple : interestRates) {
      // get a value from the inner bag tuple by alias
      Double interest = getDouble(interestTuple, getPrefixedAliasName("interest_rates", "interest_rate"));
      double monthlyPayment = computeMonthlyPayment(principal, numPayments, interest);
      output.add(TupleFactory.getInstance().newTuple(monthlyPayment));
    }

    return output;
  }

  private double computeMonthlyPayment(Double principal, Integer numPayments, Double interest) {
    return principal * (interest * Math.pow(interest+1, numPayments)) / (Math.pow(interest+1, numPayments) - 1.0);
  }
}

In this UDF we retrieve a couple of fields of different types from the input tuple by alias. One of these fields is a bag, and we also want to get values from the tuples in that bag. To avoid namespace collisions among the different levels of nested tuples, AliasableEvalFunc prepends the name of the enclosing bag or tuple. Thus, we use getPrefixedAliasName to find the field interest_rate inside the bag named interest_rates. That's all there is to using aliases in a UDF. As an added benefit, being able to dump schema information on errors helps in developing and debugging the UDF (see datafu.pig.util.DataFuException).

LinearRegression example

Having access to the schema opens up UDF development possibilities. Let's look back at the recommendation system example from the first part. The script in that part generated a bunch of features about the items that users saw and acted on. That's a good start to a recommendation workflow, but the end goal is to select which items to recommend. A common way to do this is to assign a score to each item based on some sort of machine learning algorithm. A simple algorithm for this task is linear regression. Ok, let's say we've trained our first linear regression model and are ready to plug it into our workflow to produce our scores.

We could develop a custom UDF for this model that computes the score. It is just a weighted sum of the features. So, using AliasableEvalFunc we could retrieve each field that we need, multiply by the correct coefficient, and then sum these together. But, then every time we change the model, we are going to have to change the UDF to update the fields and coefficients. We know that our first model is not going to be very good and want to make it easy to plug in new models.

The model for a linear regression is pretty simple; it's just a mapping of fields to coefficient values. The only things that will change between models are which fields we are interested in and what the coefficient for those fields will be. So, let's just pass in a string representation of the model and then let the UDF do the work of figuring out how to apply it.

DEFINE LinearRegression datafu.test.blog.LinearRegression('intercept:1,impression_count:-0.1,accept_count:2.0,reject_count:-1.0');

features = LOAD 'test/pig/datafu/test/blog/features.dat' AS (user_id:int, items:bag{(item_id:int,impression_count:int,accept_count:int,reject_count:int)});

recommendations = FOREACH features {
  scored_items = FOREACH items GENERATE item_id, LinearRegression(*) as score;
  GENERATE user_id, scored_items as items;
}

Nice, that's clean, and we could even pass that model string in as a parameter so we don't have to change the pig script to change the model either -- very reusable.

Now, the hard work, writing the UDF:

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.pig.data.DataType;
import org.apache.pig.data.Tuple;
import org.apache.pig.impl.logicalLayer.schema.Schema;

import datafu.pig.util.AliasableEvalFunc;

public class LinearRegression extends AliasableEvalFunc<Double>
{
  Map<String, Double> parameters;

  public LinearRegression(String parameterString) {
    parameters = new HashMap<String, Double>();
    for (String token : parameterString.split(",")) {
      String[] keyValue = token.split(":");
      parameters.put(keyValue[0].trim(), Double.parseDouble(keyValue[1].trim()));
    }
  }

  @Override
  public Schema getOutputSchema(Schema input) {
    return new Schema(new Schema.FieldSchema("score", DataType.DOUBLE));
  }

  @Override
  public Double exec(Tuple input) throws IOException {
    double score = 0.0;
    for (String parameter : parameters.keySet()) {
      double coefficient = parameters.get(parameter);
      if (parameter.equals("intercept")) {
        score += coefficient;
      } else {
        score += coefficient * getDouble(input, parameter);
      }
    }
    return score;
  }
}

Ok, maybe not that hard... The UDF parses out the mapping of field to coefficient in the constructor and then looks up the specified fields by name in the exec function. So, what happens when we change the model? If we decide to drop a field from our model, it just gets ignored, even if it is still in the input tuple. If we add a new feature that's already available in the data, it will just work. If we try to use a model with a new feature and forget to update the Pig script, it will throw an error and tell us which feature does not exist (as part of the behavior of getDouble()).

Combining this example with the feature counting example presented earlier, we have the basis for a recommendation system that was easy to write, will execute quickly, and will be simple to maintain.

Sampling the data

Working with big data can be a bit overwhelming and time consuming. Sometimes you want to avoid some of this hassle and just look at a portion of this data. Pig has built-in support for random sampling with the SAMPLE operator. But sometimes a random percentage of the records is not quite what you need. Fortunately, DataFu has a few sampling UDFs that will help in some situations, and as always, we would be happy to accept any contributions of additional sampling UDFs, if you happen to have some lying around.

These things are always easier to understand with a bit of code, so let's go back to our recommendation system context and look at a few more examples.

Example 1. Generate training data

We mentioned previously that we were going to use a machine learning algorithm, linear regression, to generate scores for our items. We waved our hands and it just happened earlier, but generally this task involves some work. One of the first steps is to generate the training data set for the learning algorithm. In order to make this training efficient, we only want to use a sample of all of our raw data.

Setup

Given impressions, accepts, rejects, and some pre-computed features about a user and items, we'd like to generate a training set, which will have all of this information for each (user_id, item_id) pair, for some sample of users.

So, from this input:

impressions = LOAD '$impressions' AS (user_id:int, item_id:int, timestamp:long);
accepts = LOAD '$accepts' AS (user_id:int, item_id:int, timestamp:long);
rejects = LOAD '$rejects' AS (user_id:int, item_id:int, timestamp:long);
features = LOAD '$features' AS (user_id:int, item_id:int, feature_1:int, feature_2:int);

We want to produce this type of output:

{user_id, item_id, is_impressed, is_accepted, is_rejected, feature_1, feature_2}

One key point on sampling here: we want the sampling to be done by user_id. This means that if we choose one user_id to be included in the sample, all the data for that user_id should be included in the sample. This requirement is needed so that the sampled data preserves the original characteristics of the raw data.

Naive approach

The straightforward solution for this task is to group the tracking data for each (user_id, item_id) pair, then group it by user_id, sample this grouped data, and then flatten it all out again:

grouped = COGROUP impressions BY (user_id, item_id), accepts BY (user_id, item_id), rejects BY (user_id, item_id), features BY (user_id, item_id);
full_result = FOREACH grouped GENERATE
  FLATTEN(group) AS (user_id, item_id),
  ((impressions::timestamp is null)?0:1) AS is_impressed,
  ((accepts::timestamp is null)?0:1) AS is_accepted,
  ((rejects::timestamp is null)?0:1) AS is_rejected,
  Coalesce(features::feature_1, 0) AS feature_1,
  Coalesce(features::feature_2, 0) AS feature_2;

grouped_full_result = GROUP full_result BY user_id;
sampled = SAMPLE grouped_full_result 0.01;
result = FOREACH sampled GENERATE
  group AS user_id,
  FLATTEN(full_result);

This job includes two group operations, which translates to two map-reduce jobs. Also, the group operation is being done on the full data even though we will sample it down later. Can we do any better than this?

A sample of DataFu -- SampleByKey

Yep.

DEFINE SampleByKey datafu.pig.sampling.SampleByKey('whatever_the_salt_you_want_to_use','0.01');

impressions = FILTER impressions BY SampleByKey(user_id);
accepts = FILTER accepts BY SampleByKey(user_id);
rejects = FILTER rejects BY SampleByKey(user_id);
features = FILTER features BY SampleByKey(user_id);

grouped = COGROUP impressions BY (user_id, item_id), accepts BY (user_id, item_id), rejects BY (user_id, item_id), features BY (user_id, item_id);
result = FOREACH grouped GENERATE
  FLATTEN(group) AS (user_id, item_id),
  ((impressions::timestamp is null)?0:1) AS is_impressed,
  ((accepts::timestamp is null)?0:1) AS is_accepted,
  ((rejects::timestamp is null)?0:1) AS is_rejected,
  Coalesce(features::feature_1, 0) AS feature_1,
  Coalesce(features::feature_2, 0) AS feature_2;

We can use the SampleByKey FilterFunc to do this with only one group operation. And since the group is operating on the already sampled (and significantly smaller) data, this job will be far more efficient.

SampleByKey lets you designate which fields you want to use as keys for the sampling, and it guarantees that for each selected key, all other records with that key will also be selected, which is exactly what we want. Another characteristic of SampleByKey is that it is deterministic, as long as the same salt is given at initialization. Thanks to this characteristic, we were able to sample each relation separately before joining them in the example above.
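
The determinism comes from hashing rather than from a random number generator. Here is a rough sketch of the idea as a FilterFunc; it is not the actual SampleByKey code, and the hash used here is just Java's String hash, chosen for brevity.

import java.io.IOException;

import org.apache.pig.FilterFunc;
import org.apache.pig.data.Tuple;

// Illustrative keyed sampler: a record passes the filter when a hash of
// (salt + key fields) falls below the acceptance threshold. The decision
// depends only on the salt and the key, so every record with the same key
// gets the same answer, in every relation filtered with the same salt.
public class SampleByKeySketch extends FilterFunc {
  private final String salt;
  private final double probability;

  public SampleByKeySketch(String salt, String probability) {
    this.salt = salt;
    this.probability = Double.parseDouble(probability);
  }

  @Override
  public Boolean exec(Tuple input) throws IOException {
    // Treat all fields passed to the UDF as the sampling key.
    StringBuilder key = new StringBuilder(salt);
    for (Object field : input.getAll()) {
      key.append('|').append(field);
    }
    // Map the hash into [0, 1] and compare it to the sampling probability.
    int hash = key.toString().hashCode();
    double u = (hash & 0x7fffffff) / (double) Integer.MAX_VALUE;
    return u < probability;
  }
}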

Example 2. Recommending your output

Ok, we've now created some training data that we used to create a model which will produce a score for each recommendation. So now we've got to pick which items to show the user. But we've got a bit of a problem: we only have limited real estate on the screen to present our recommendations, so how do we select which ones to show? We've got a score from our model, so we could just always pick the top scoring items. But then we might be showing the same recommendations all the time, and we want to shake things up a bit so things aren't so static (OK, yes, I admit this is a contrived example; you wouldn't do it this way in real life). So let's take a sample of the output.

Setup

With this input:

recommendations = LOAD '$recommendations' AS (user_id:int, recs:{(item_id:int, score:double)});

We want to produce the exact same output, but with fewer items per user -- let's say no more than 10.

Naive approach

We can randomize the selection using Pig's built-in SAMPLE operator.

results = FOREACH recommendations {
  sampled = SAMPLE recs 1;
  limited = LIMIT sampled 10;
  GENERATE user_id, limited AS recs;
}

The problem with this approach is that the results are sampled from the population uniformly at random. The score you created with your learning algorithm has no effect on the final results.

The DataFu you most likely need -- WeightedSample

We should use that score we generated to help bias our sample.

DEFINE WeightedSample datafu.pig.sampling.WeightedSample();
results = FOREACH recommendations GENERATE user_id, WeightedSample(recs, 1, 10) as recs;
-- from recs, using the field at index 1 (the second column, score) as the weight, select up to 10 items

Fortunately, WeightedSample can do exactly that. It randomly selects tuples from the candidates, but each candidate's score is used to weight the probability that it will be selected. So, the tuples with higher weight will have a higher chance of being included in the sample - perfect.
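
For intuition, here is a small sketch of one way to do weight-biased sampling without replacement, which is the flavor of selection described above: repeatedly pick one remaining candidate with probability proportional to its weight, remove it, and stop after the requested number of picks. It is not the DataFu implementation (the real UDF works on a bag of tuples); the names and the list-based representation are just for illustration, and it assumes positive weights.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustrative weighted sampling without replacement over (id, weight) pairs.
// Heavier items are more likely to be picked, but lighter items still can be.
public class WeightedSampleSketch {
  public static List<Integer> sample(List<Integer> ids, List<Double> weights,
                                     int limit, Random random) {
    List<Integer> remainingIds = new ArrayList<Integer>(ids);
    List<Double> remainingWeights = new ArrayList<Double>(weights);
    List<Integer> picked = new ArrayList<Integer>();
    while (picked.size() < limit && !remainingIds.isEmpty()) {
      double total = 0.0;
      for (double w : remainingWeights) {
        total += w;
      }
      // Draw a point in [0, total) and walk the cumulative weights to find
      // which candidate it landed on.
      double draw = random.nextDouble() * total;
      int chosen = 0;
      double cumulative = remainingWeights.get(0);
      while (cumulative <= draw && chosen < remainingIds.size() - 1) {
        chosen++;
        cumulative += remainingWeights.get(chosen);
      }
      picked.add(remainingIds.remove(chosen));
      remainingWeights.remove(chosen);
    }
    return picked;
  }
}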

Additional Examples

If you've made it this far into the post, you deserve an encore. So here are two more examples of how DataFu can make writing Pig a bit simpler for you:

Filtering with In

One case where conditional logic can be painful is filtering based on a set of values. Suppose you want to filter tuples based on a field equalling one of a list of values. In Pig this can be achieved by joining a list of conditional checks with OR:

data = LOAD 'input' using PigStorage(',') AS (what:chararray, adj:chararray);

dump data;
-- (roses,red)
-- (violets,blue)
-- (sugar,sweet)

data2 = FILTER data BY adj == 'red' OR adj == 'blue';

dump data2;
-- (roses,red)
-- (violets,blue)

However, as the number of items to check for grows, this becomes very verbose. The In filter function solves this and makes the resulting code very concise (a rough sketch of how such a filter works follows the example below):

DEFINE In datafu.pig.util.In();

data = LOAD 'input' using PigStorage(',') AS (what:chararray, adj:chararray);

dump data;
-- (roses,red)
-- (violets,blue)
-- (sugar,sweet)

data2 = FILTER data BY In(adj, 'red','blue');

dump data2;
-- (roses,red)
-- (violets,blue)
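
As promised above, here is a rough sketch of how such a membership filter can be written; it is not the DataFu source, just the core idea behind datafu.pig.util.In.

import java.io.IOException;

import org.apache.pig.FilterFunc;
import org.apache.pig.data.Tuple;

// Illustrative membership filter: returns true when the first field of the
// input tuple equals any of the remaining fields, e.g. In(adj, 'red', 'blue').
public class InSketch extends FilterFunc {
  @Override
  public Boolean exec(Tuple input) throws IOException {
    Object value = input.get(0);
    if (value == null) {
      return false;
    }
    for (int i = 1; i < input.size(); i++) {
      if (value.equals(input.get(i))) {
        return true;
      }
    }
    return false;
  }
}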

Left Outer Join of three or more relations with EmptyBagToNullFields

Pig's JOIN operator supports performing left outer joins on two relations only. If you want to perform a join on more than two relations you have two options. One is to perform a sequence of joins.

input1 = LOAD 'input1' using PigStorage(',') AS (val1:INT,val2:INT);
input2 = LOAD 'input2' using PigStorage(',') AS (val1:INT,val2:INT);
input3 = LOAD 'input3' using PigStorage(',') AS (val1:INT,val2:INT);

data1 = JOIN input1 BY val1 LEFT, input2 BY val1;
data1 = FILTER data1 BY input1::val1 IS NOT NULL;

data2 = JOIN data1 BY input1::val1 LEFT, input3 BY val1;
data2 = FILTER data2 BY input1::val1 IS NOT NULL;

However, this can be inefficient, as it requires multiple MapReduce jobs. For many situations a better option is to use a single COGROUP, which requires only a single MapReduce job. However, the code gets pretty ugly.

input1 = LOAD 'input1' using PigStorage(',') AS (val1:INT,val2:INT);
input2 = LOAD 'input2' using PigStorage(',') AS (val1:INT,val2:INT);
input3 = LOAD 'input3' using PigStorage(',') AS (val1:INT,val2:INT);

data1 = COGROUP input1 BY val1, input2 BY val1, input3 BY val1;
data2 = FOREACH data1 GENERATE
  FLATTEN(input1), -- left join on this
  FLATTEN((IsEmpty(input2) ? TOBAG(TOTUPLE((int)null,(int)null)) : input2))
      as (input2::val1,input2::val2),
  FLATTEN((IsEmpty(input3) ? TOBAG(TOTUPLE((int)null,(int)null)) : input3))
      as (input3::val1,input3::val2);

This code uses the insight that the input1 bag will be empty when there is no match, and flattening it will remove the entire record. If the input2 or input3 bags are empty, we don't want flattening them to remove the record, so we replace them with a bag holding a single tuple with null elements. When these are flattened, we get a single tuple with null fields. But we want our output to have the correct schema, so we have to specify it manually. Once we do all of this, the approach successfully replicates the left join behavior. It's more efficient, but it's really ugly to type and read.

To clean up this code we have created EmptyBagToNullFields, which replicates the same logic as in the example above, but in a much more concise and readable fashion.

DEFINE EmptyBagToNullFields datafu.pig.bags.EmptyBagToNullFields();

input1 = LOAD 'input1' using PigStorage(',') AS (val1:INT,val2:INT);
input2 = LOAD 'input2' using PigStorage(',') AS (val1:INT,val2:INT);
input3 = LOAD 'input3' using PigStorage(',') AS (val1:INT,val2:INT);

data1 = COGROUP input1 BY val1, input2 BY val1, input3 BY val1;
data2 = FOREACH data1 GENERATE
  FLATTEN(input1),
  FLATTEN(EmptyBagToNullFields(input2)),
  FLATTEN(EmptyBagToNullFields(input3));

Notice that we do not need to specify the schema as with the previous COGROUP example. The reason is that EmptyBagToNullFields produces the same schema as the input bag. So much cleaner.
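
For the curious, the behavior described above can be sketched in a few lines. This is a simplified illustration, not the DataFu source; in particular, the real UDF gets the field count (and the schema it passes through) from the input bag's schema, whereas this sketch takes it as a parameter.

import org.apache.pig.data.BagFactory;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;

// Illustrative sketch: if the bag is empty, return a bag holding a single
// tuple of nulls with the given arity, so that FLATTEN keeps the record
// instead of dropping it; otherwise return the bag unchanged.
public class EmptyBagToNullFieldsSketch {
  public static DataBag emptyBagToNullFields(DataBag bag, int fieldCount) {
    if (bag != null && bag.size() > 0) {
      return bag;
    }
    // newTuple(int) creates a tuple whose fields are all initialized to null.
    Tuple nulls = TupleFactory.getInstance().newTuple(fieldCount);
    DataBag result = BagFactory.getInstance().newDefaultBag();
    result.add(nulls);
    return result;
  }
}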

Final Example

Ok, a second encore, but no more. If you are doing a lot of these, you can turn this into a macro:

DEFINE left_outer_join(relation1, key1, relation2, key2, relation3, key3) returns joined {
  cogrouped = COGROUP $relation1 BY $key1, $relation2 BY $key2, $relation3 BY $key3;
  $joined = FOREACH cogrouped GENERATE
    FLATTEN($relation1),
    FLATTEN(EmptyBagToNullFields($relation2)),
    FLATTEN(EmptyBagToNullFields($relation3));
}

Then all you need to do is call your macro:

features = left_outer_join(input1, val1, input2, val1, input3, val1);

Wrap-up

So, that's a lot to digest, but it's just a highlight into a few interesting pieces of DataFu. Check out the DataFu 1.0 release as there's even more in store.

We hope that it proves valuable to you and as always welcome any contributions. Please let us know how you're using the library — we would love to hear from you.

Update (10/15/2015): The links in this blog post have been updated to point to the correct locations within the Apache DataFu website.