collapse and dplyr

Fast (Weighted) Aggregations, Transformations and Panel Computations in a Piped Workflow

Sebastian Krantz

2020-05-22

collapse is a C/C++ based package for data manipulation in R. Its aims are

  1. to facilitate complex data transformation and exploration tasks and

  2. to help make R code fast, flexible, parsimonious and programmer friendly.

This vignette focuses on the integration of collapse and the popular dplyr package by Hadley Wickham. In particular it will demonstrate how using collapse’s fast functions and some fast alternatives for dplyr verbs can substantially facilitate and speed up basic data manipulation, grouped and weighted aggregations and transformations, and panel-data computations (i.e. between- and within-transformations, panel-lags, differences and growth rates) in a dplyr (piped) workflow.



1. Fast Aggregations

A key feature of collapse is its broad set of Fast Statistical Functions (fsum, fprod, fmean, fmedian, fmode, fvar, fsd, fmin, fmax, ffirst, flast, fNobs, fNdistinct), which are able to substantially speed up column-wise, grouped and weighted computations on vectors, matrices or data.frames. The functions are S3 generic, with a default (vector), matrix and data.frame method, as well as a grouped_df method for grouped tibbles used by dplyr. The grouped tibble method has the following arguments:

FUN.grouped_df(x, [w = NULL,] TRA = NULL, [na.rm = TRUE,]
               use.g.names = FALSE, keep.group_vars = TRUE, [keep.w = TRUE,] ...)

where w is a weight variable (available only to fsum, fprod, fmean, fmode, fvar and fsd), and TRA can be used to transform x using the computed statistics and one of 10 available transformations ("replace_fill", "replace", "-", "-+", "/", "%", "+", "*", "%%", "-%%"). These transformations perform grouped replacing or sweeping out of the statistics computed by the function (discussed in section 2). na.rm efficiently removes missing values and is TRUE by default. use.g.names generates new row-names from the unique combinations of groups (default: disabled), whereas keep.group_vars (default: enabled) keeps the grouping columns, as is custom in the native data %>% group_by(...) %>% summarize(...) workflow in dplyr. Finally, keep.w regulates whether a weighting variable used is also aggregated and saved in a column. For fsum, fmean, fvar, fsd and fmode this computes the sum of the weights in each group, whereas fprod returns the product of the weights.
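For illustration, here is a minimal sketch of how some of these arguments can be combined, using the GGDC10S data introduced in the next subsection (output omitted):

# Weighted group means; with keep.w = TRUE the sum of the weight column SUM is aggregated alongside
GGDC10S %>% group_by(Variable, Country) %>% select_at(6:16) %>% fmean(w = SUM)
# Sweeping out (subtracting) group means instead of aggregating, via the TRA argument
GGDC10S %>% group_by(Variable, Country) %>% select_at(6:16) %>% fmean(TRA = "-")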

With that in mind, let’s consider some straightforward applications.

1.1 Simple Aggregations

Consider the Groningen Growth and Development Center 10-Sector Database included in collapse and introduced in the main vignette:

library(collapse)
head(GGDC10S)
#   Country Regioncode             Region Variable Year      AGR      MIN       MAN        PU
# 1     BWA        SSA Sub-saharan Africa       VA 1960       NA       NA        NA        NA
# 2     BWA        SSA Sub-saharan Africa       VA 1961       NA       NA        NA        NA
# 3     BWA        SSA Sub-saharan Africa       VA 1962       NA       NA        NA        NA
# 4     BWA        SSA Sub-saharan Africa       VA 1963       NA       NA        NA        NA
# 5     BWA        SSA Sub-saharan Africa       VA 1964 16.30154 3.494075 0.7365696 0.1043936
# 6     BWA        SSA Sub-saharan Africa       VA 1965 15.72700 2.495768 1.0181992 0.1350976
#         CON      WRT      TRA     FIRE      GOV      OTH      SUM
# 1        NA       NA       NA       NA       NA       NA       NA
# 2        NA       NA       NA       NA       NA       NA       NA
# 3        NA       NA       NA       NA       NA       NA       NA
# 4        NA       NA       NA       NA       NA       NA       NA
# 5 0.6600454 6.243732 1.658928 1.119194 4.822485 2.341328 37.48229
# 6 1.3462312 7.064825 1.939007 1.246789 5.695848 2.678338 39.34710

# Summarize the Data: 
# descr(GGDC10S, cols = is.categorical)
# aperm(qsu(GGDC10S, ~Variable, cols = is.numeric))

Simple column-wise computations using the fast functions and pipe operators are performed as follows:

library(dplyr)

GGDC10S %>% fNobs                       # Number of Observations
#    Country Regioncode     Region   Variable       Year        AGR        MIN        MAN         PU 
#       5027       5027       5027       5027       5027       4364       4355       4355       4354 
#        CON        WRT        TRA       FIRE        GOV        OTH        SUM 
#       4355       4355       4355       4355       3482       4248       4364
GGDC10S %>% fNdistinct                  # Number of distinct values
#    Country Regioncode     Region   Variable       Year        AGR        MIN        MAN         PU 
#         43          6          6          2         67       4353       4224       4353       4237 
#        CON        WRT        TRA       FIRE        GOV        OTH        SUM 
#       4339       4344       4334       4349       3470       4238       4364
GGDC10S %>% select_at(6:16) %>% fmedian # Median
#        AGR        MIN        MAN         PU        CON        WRT        TRA       FIRE        GOV 
#  4394.5194   173.2234  3718.0981   167.9500  1473.4470  3773.6430  1174.8000   960.1251  3928.5127 
#        OTH        SUM 
#  1433.1722 23186.1936
GGDC10S %>% select_at(6:16) %>% fmean   # Mean
#        AGR        MIN        MAN         PU        CON        WRT        TRA       FIRE        GOV 
#  2526696.5  1867908.9  5538491.4   335679.5  1801597.6  3392909.5  1473269.7  1657114.8  1712300.3 
#        OTH        SUM 
#  1684527.3 21566436.8
GGDC10S %>% fmode                       # Mode
#            Country         Regioncode             Region           Variable               Year 
#              "USA"              "ASI"             "Asia"              "EMP"             "2010" 
#                AGR                MIN                MAN                 PU                CON 
# "171.315882316326"                "0" "4645.12507642586"                "0" "1.34623115930777" 
#                WRT                TRA               FIRE                GOV                OTH 
# "21.8380052682527" "8.97743416914571" "40.0701608636442"                "0" "3626.84423577048" 
#                SUM 
# "37.4822945751317"
GGDC10S %>% fmode(drop = FALSE)         # Keep data structure intact
#   Country Regioncode Region Variable Year      AGR MIN      MAN PU      CON      WRT      TRA
# 1     USA        ASI   Asia      EMP 2010 171.3159   0 4645.125  0 1.346231 21.83801 8.977434
#       FIRE GOV      OTH      SUM
# 1 40.07016   0 3626.844 37.48229

Moving on to grouped statistics, we can compute the average value added and employment by sector and country using:

GGDC10S %>% 
  group_by(Variable, Country) %>%
  select_at(6:16) %>% fmean
# # A tibble: 85 x 13
#    Variable Country     AGR     MIN     MAN     PU    CON    WRT    TRA   FIRE     GOV    OTH    SUM
#    <chr>    <chr>     <dbl>   <dbl>   <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>   <dbl>  <dbl>  <dbl>
#  1 EMP      ARG       1420.   52.1   1932.  1.02e2 7.42e2 1.98e3 6.49e2  628.   2043.  9.92e2 1.05e4
#  2 EMP      BOL        964.   56.0    235.  5.35e0 1.23e2 2.82e2 1.15e2   44.6    NA   3.96e2 2.22e3
#  3 EMP      BRA      17191.  206.    6991.  3.65e2 3.52e3 8.51e3 2.05e3 4414.   5307.  5.71e3 5.43e4
#  4 EMP      BWA        188.   10.5     18.1 3.09e0 2.53e1 3.63e1 8.36e0   15.3    61.1 2.76e1 3.94e2
#  5 EMP      CHL        702.  101.     625.  2.94e1 2.96e2 6.95e2 2.58e2  272.     NA   1.00e3 3.98e3
#  6 EMP      CHN     287744. 7050.   67144.  1.61e3 2.09e4 2.89e4 1.39e4 4929.  22669.  3.10e4 4.86e5
#  7 EMP      COL       3091.  145.    1175.  3.39e1 5.24e2 2.07e3 4.70e2  649.     NA   1.73e3 9.89e3
#  8 EMP      CRI        231.    1.70   136.  1.43e1 5.76e1 1.57e2 4.24e1   54.9   128.  6.51e1 8.87e2
#  9 EMP      DEW       2490.  407.    8473.  2.26e2 2.09e3 4.44e3 1.48e3 1689.   3945.  9.99e2 2.62e4
# 10 EMP      DNK        236.    8.03   507.  1.38e1 1.71e2 4.55e2 1.61e2  181.    549.  1.11e2 2.39e3
# # ... with 75 more rows

Similarly, we can aggregate using any of the other functions listed above.

It is important not to use dplyr’s summarize together with these functions, since that would eliminate their speed gain entirely. These functions are fast because they are executed only once and carry out the grouped computations in C++, whereas summarize applies the function to each group in the grouped tibble. (summarize also works with the fast functions, but is then slower than using primitive base functions, since the fast functions are S3 generic.)
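To see the difference, one could run a quick benchmark along these lines, analogous to the one shown further below (using the microbenchmark package; timings are machine-dependent, so no output is reproduced here):

microbenchmark(fast      = GGDC10S %>% group_by(Variable, Country) %>% select_at(6:16) %>% fmean,
               summarize = GGDC10S %>% group_by(Variable, Country) %>% select_at(6:16) %>% summarise_all(fmean))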


Excursus: What is Happening Behind the Scenes?

To drive this point home it is perhaps good to shed some light on what is happening behind the scenes of dplyr and collapse. Fundamentally both packages follow different computing paradigms:

dplyr is an efficient implementation of the Split-Apply-Combine computing paradigm. Data is split into groups, these data-chunks are then passed to a function carrying out the computation, and finally recombined to produce the aggregated data.frame. This modus operandi is evident in the grouping mechanism of dplyr. When a data.frame is passed through group_by, a ‘groups’ attribute is attached:

GGDC10S %>% group_by(Variable, Country) %>% attr("groups")
# # A tibble: 85 x 3
#    Variable Country .rows     
#    <chr>    <chr>   <list>    
#  1 EMP      ARG     <int [62]>
#  2 EMP      BOL     <int [61]>
#  3 EMP      BRA     <int [62]>
#  4 EMP      BWA     <int [52]>
#  5 EMP      CHL     <int [63]>
#  6 EMP      CHN     <int [62]>
#  7 EMP      COL     <int [61]>
#  8 EMP      CRI     <int [62]>
#  9 EMP      DEW     <int [61]>
# 10 EMP      DNK     <int [64]>
# # ... with 75 more rows

This object is a data.frame giving the unique groups and, in the third (last) column, vectors containing the indices of the rows belonging to each group. A command like summarize uses this information to split the data.frame into groups, which are then passed sequentially to the function used and later recombined. These steps are also implemented in C++, which makes dplyr quite efficient.
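As a rough base R sketch of this paradigm (not dplyr’s actual C++ implementation), a grouped median of the SUM column could be computed from the ‘groups’ attribute like this:

g <- GGDC10S %>% group_by(Variable, Country) %>% attr("groups")
# Split: row indices per group; Apply: compute the median; Combine: into a vector with one value per group
sapply(g$.rows, function(i) median(GGDC10S$SUM[i], na.rm = TRUE)) %>% head(3)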

collapse, in contrast, is based around one-pass grouped computations at the C++ level using its own grouped statistical functions. In other words, the data is not split and recombined at all; instead the entire computation is performed in a single C++ loop running through the data and completing the computations for all groups simultaneously. This modus operandi is also evident in collapse grouping objects. The method GRP.grouped_df takes a dplyr grouping object from a grouped tibble and efficiently converts it to a collapse grouping object:

GGDC10S %>% group_by(Variable, Country) %>% GRP %>% str
# List of 8
#  $ N.groups   : int 85
#  $ group.id   : int [1:5027] 46 46 46 46 46 46 46 46 46 46 ...
#  $ group.sizes: int [1:85] 62 61 62 52 63 62 61 62 61 64 ...
#  $ groups     :List of 2
#   ..$ Variable: chr [1:85] "EMP" "EMP" "EMP" "EMP" ...
#   .. ..- attr(*, "label")= chr "Variable"
#   .. ..- attr(*, "format.stata")= chr "%9s"
#   ..$ Country : chr [1:85] "ARG" "BOL" "BRA" "BWA" ...
#   .. ..- attr(*, "label")= chr "Country"
#   .. ..- attr(*, "format.stata")= chr "%9s"
#  $ group.vars : chr [1:2] "Variable" "Country"
#  $ ordered    : logi [1:2] TRUE TRUE
#  $ order      : NULL
#  $ call       : language GRP.grouped_df(X = .)
#  - attr(*, "class")= chr "GRP"

This object is a list where the first three elements give the number of groups, the group-id to which each row belongs, and a vector of group-sizes. A function like fsum uses this information to (for each column) create a result vector of size ‘N.groups’ and then run through the column, using the ‘group.id’ vector to add the i’th data point to the ‘group.id[i]’th element of the result vector. When the loop is finished, the grouped computation is also finished.
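A bare-bones base R sketch of this one-pass logic for a single column could look like the function below; the actual implementation is a compiled C++ loop that additionally handles missing values, weights and the various other statistics:

g <- GGDC10S %>% group_by(Variable, Country) %>% GRP    # collapse grouping object, as shown above
grouped_sum <- function(x, g) {
  out <- numeric(g$N.groups)                # result vector of size 'N.groups'
  for (i in seq_along(x))                   # single pass through the column
    out[g$group.id[i]] <- out[g$group.id[i]] + x[i]
  out
}
grouped_sum(GGDC10S$SUM, g)                 # one result per group (NAs propagate, as NA handling is omitted here)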

It is thus clear that collapse is faster than dplyr, since its method of computing involves fewer steps.


1.2 More Speed using collapse Verbs

collapse fast functions do not reach their maximal performance on a grouped tibble created with group_by because of the additional cost of converting the grouping object incurred by GRP.grouped_df. This cost is already minimized through the use of C++, but we can do even better by replacing group_by with collapse::fgroup_by. fgroup_by works like group_by but does the grouping with collapse::GRP (up to 10x faster than group_by) and simply attaches a collapse grouping object to the grouped_df. Thus the speed gain is twofold: faster grouping and no conversion cost when calling collapse functions.

Another improvement comes from replacing the dplyr verb select with collapse::fselect, and, for selection using column names, indices or functions, using collapse::get_vars instead of select_at or select_if. In addition to get_vars, collapse also introduces the predicates num_vars, cat_vars, char_vars, fact_vars, logi_vars and Date_vars to efficiently select columns by type.
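For example (a small sketch of these selectors; output omitted):

GGDC10S %>% num_vars %>% head(2)                              # select all numeric columns
GGDC10S %>% char_vars %>% head(2)                             # select all character columns
GGDC10S %>% get_vars(c("Variable", "Country")) %>% head(2)    # select by name (indices and functions work as well)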

GGDC10S %>% fgroup_by(Variable, Country) %>% get_vars(6:16) %>% fmedian
# # A tibble: 85 x 13
#    Variable Country     AGR     MIN     MAN     PU    CON    WRT    TRA   FIRE     GOV    OTH    SUM
#    <chr>    <chr>     <dbl>   <dbl>   <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>   <dbl>  <dbl>  <dbl>
#  1 EMP      ARG       1325.   47.4   1988.  1.05e2 7.82e2 1.85e3 5.80e2  464.   1739.   866.  9.74e3
#  2 EMP      BOL        943.   53.5    167.  4.46e0 6.60e1 1.32e2 9.70e1   15.3    NA    384.  1.84e3
#  3 EMP      BRA      17481.  225.    7208.  3.76e2 4.05e3 6.45e3 1.58e3 4355.   4450.  4479.  5.19e4
#  4 EMP      BWA        175.   12.2     13.1 3.71e0 1.90e1 2.11e1 6.75e0   10.4    53.8   31.2 3.61e2
#  5 EMP      CHL        690.   93.9    607.  2.58e1 2.30e2 4.84e2 2.05e2  106.     NA    900.  3.31e3
#  6 EMP      CHN     293915  8150.   61761.  1.14e3 1.06e4 1.70e4 9.56e3 4328.  19468.  9954.  4.45e5
#  7 EMP      COL       3006.   84.0   1033.  3.71e1 4.19e2 1.55e3 3.91e2  655.     NA   1430.  8.63e3
#  8 EMP      CRI        216.    1.49   114.  7.92e0 5.50e1 8.98e1 2.55e1   19.6   122.    60.6 7.19e2
#  9 EMP      DEW       2178   320.    8459.  2.47e2 2.10e3 4.45e3 1.53e3 1656    3700    900   2.65e4
# 10 EMP      DNK        187.    3.75   508.  1.36e1 1.65e2 4.61e2 1.61e2  169.    642.   104.  2.42e3
# # ... with 75 more rows

microbenchmark(collapse = GGDC10S %>% fgroup_by(Variable, Country) %>% get_vars(6:16) %>% fmedian,
               hybrid = GGDC10S %>% group_by(Variable, Country) %>% select_at(6:16) %>% fmedian,
               dplyr = GGDC10S %>% group_by(Variable, Country) %>% select_at(6:16) %>% summarise_all(median, na.rm = TRUE))
# Unit: microseconds
#      expr       min        lq     mean    median        uq        max neval
#  collapse   946.936  1058.945  1202.56  1135.253  1232.534   2435.617   100
#    hybrid 12710.456 13270.942 15071.00 14847.980 15393.962  22668.460   100
#     dplyr 52049.368 54979.203 61955.01 60657.921 64398.363 101796.546   100

Benchmarks on the different components of this code and with larger data are provided under ‘Benchmarks’. Note that a grouped tibble created with fgroup_by can no longer be used for grouped computations with dplyr verbs like mutate or summarize. To avoid errors with these functions and print.grouped_df, [.grouped_df etc., the classes assigned after fgroup_by are reshuffled, so that the data.frame is treated by the dplyr ecosystem like a normal tibble:

class(group_by(GGDC10S, Variable, Country))
# [1] "grouped_df" "tbl_df"     "tbl"        "data.frame"

class(fgroup_by(GGDC10S, Variable, Country))
# [1] "tbl_df"     "tbl"        "grouped_df" "data.frame"

Also note that fselect and get_vars are not full drop-in replacements for select because they do not have a grouped_df method:

GGDC10S %>% group_by(Variable, Country) %>% select_at(6:16) %>% head(3)
# # A tibble: 3 x 13
# # Groups:   Variable, Country [1]
#   Variable Country   AGR   MIN   MAN    PU   CON   WRT   TRA  FIRE   GOV   OTH   SUM
#   <chr>    <chr>   <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
# 1 VA       BWA        NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
# 2 VA       BWA        NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
# 3 VA       BWA        NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
GGDC10S %>% group_by(Variable, Country) %>% get_vars(6:16) %>% head(3)
# # A tibble: 3 x 11
#     AGR   MIN   MAN    PU   CON   WRT   TRA  FIRE   GOV   OTH   SUM
#   <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
# 1    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
# 2    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
# 3    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA

Since by default keep.group_vars = TRUE in the Fast Statistical Functions, the end result is nevertheless the same:

GGDC10S %>% group_by(Variable, Country) %>% select_at(6:16) %>% fmean %>% head(3)
# # A tibble: 3 x 13
#   Variable Country    AGR   MIN   MAN     PU   CON   WRT   TRA   FIRE   GOV   OTH    SUM
#   <chr>    <chr>    <dbl> <dbl> <dbl>  <dbl> <dbl> <dbl> <dbl>  <dbl> <dbl> <dbl>  <dbl>
# 1 EMP      ARG      1420.  52.1 1932. 102.    742. 1982.  649.  628.  2043.  992. 10542.
# 2 EMP      BOL       964.  56.0  235.   5.35  123.  282.  115.   44.6   NA   396.  2221.
# 3 EMP      BRA     17191. 206.  6991. 365.   3525. 8509. 2054. 4414.  5307. 5710. 54273.
GGDC10S %>% group_by(Variable, Country) %>% get_vars(6:16) %>% fmean %>% head(3)
# # A tibble: 3 x 13
#   Variable Country    AGR   MIN   MAN     PU   CON   WRT   TRA   FIRE   GOV   OTH    SUM
#   <chr>    <chr>    <dbl> <dbl> <dbl>  <dbl> <dbl> <dbl> <dbl>  <dbl> <dbl> <dbl>  <dbl>
# 1 EMP      ARG      1420.  52.1 1932. 102.    742. 1982.  649.  628.  2043.  992. 10542.
# 2 EMP      BOL       964.  56.0  235.   5.35  123.  282.  115.   44.6   NA   396.  2221.
# 3 EMP      BRA     17191. 206.  6991. 365.   3525. 8509. 2054. 4414.  5307. 5710. 54273.

Another useful verb introduced by collapse is fgroup_vars, which can be used to efficiently obtain the grouping columns or grouping variables from a grouped tibble:

# fgroup_vars fully supports grouped tibbles created with group_by or fgroup_by: 
GGDC10S %>% group_by(Variable, Country) %>% fgroup_vars %>% head(3)
# # A tibble: 3 x 2
#   Variable Country
#   <chr>    <chr>  
# 1 VA       BWA    
# 2 VA       BWA    
# 3 VA       BWA
GGDC10S %>% fgroup_by(Variable, Country) %>% fgroup_vars %>% head(3)
# # A tibble: 3 x 2
#   Variable Country
#   <chr>    <chr>  
# 1 VA       BWA    
# 2 VA       BWA    
# 3 VA       BWA

# The other possibilities:
GGDC10S %>% group_by(Variable, Country) %>% fgroup_vars("unique") %>% head(3)
# # A tibble: 3 x 2
#   Variable Country
#   <chr>    <chr>  
# 1 EMP      ARG    
# 2 EMP      BOL    
# 3 EMP      BRA
GGDC10S %>% group_by(Variable, Country) %>% fgroup_vars("names")
# [1] "Variable" "Country"
GGDC10S %>% group_by(Variable, Country) %>% fgroup_vars("indices")
# [1] 4 1
GGDC10S %>% group_by(Variable, Country) %>% fgroup_vars("named_indices")
# Variable  Country 
#        4        1
GGDC10S %>% group_by(Variable, Country) %>% fgroup_vars("logical")
#  [1]  TRUE FALSE FALSE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
GGDC10S %>% group_by(Variable, Country) %>% fgroup_vars("named_logical")
#    Country Regioncode     Region   Variable       Year        AGR        MIN        MAN         PU 
#       TRUE      FALSE      FALSE       TRUE      FALSE      FALSE      FALSE      FALSE      FALSE 
#        CON        WRT        TRA       FIRE        GOV        OTH        SUM 
#      FALSE      FALSE      FALSE      FALSE      FALSE      FALSE      FALSE

A final collapse verb to mention here is fsubset, a faster alternative to dplyr::filter which also provides an option to flexibly select columns after the subset argument:

# Two equivalent calls, the first is substantially faster
GGDC10S %>% fsubset(Variable == "VA" & Year > 1990, Country, Year, AGR:GOV) %>% head(3)
#   Country Year      AGR      MIN      MAN       PU      CON      WRT      TRA     FIRE      GOV
# 1     BWA 1991 303.1157 2646.950 472.6488 160.6079 580.0876 806.7509 232.7884 432.6965 1073.263
# 2     BWA 1992 333.4364 2690.939 537.4274 178.4532 678.7320 725.2577 285.1403 517.2141 1234.012
# 3     BWA 1993 404.5488 2624.928 567.3420 219.2183 634.2797 771.8253 349.7458 673.2540 1487.193

GGDC10S %>% filter(Variable == "VA" & Year > 1990) %>% select(Country, Year, AGR:GOV) %>% head(3)
#   Country Year      AGR      MIN      MAN       PU      CON      WRT      TRA     FIRE      GOV
# 1     BWA 1991 303.1157 2646.950 472.6488 160.6079 580.0876 806.7509 232.7884 432.6965 1073.263
# 2     BWA 1992 333.4364 2690.939 537.4274 178.4532 678.7320 725.2577 285.1403 517.2141 1234.012
# 3     BWA 1993 404.5488 2624.928 567.3420 219.2183 634.2797 771.8253 349.7458 673.2540 1487.193

1.3 Multi-Function Aggregations

One can also aggregate with multiple functions at the same time. For such operations it is often necessary to use curly braces { to prevent first argument injection so that %>% cbind(FUN1(.), FUN2(.)) does not evaluate as %>% cbind(., FUN1(.), FUN2(.)):

GGDC10S %>%
  fgroup_by(Variable, Country) %>%
  get_vars(6:16) %>% {
    cbind(fmedian(.),
          add_stub(fmean(., keep.group_vars = FALSE), "mean_"))
    } %>% head(3)
#   Variable Country        AGR       MIN       MAN         PU        CON      WRT        TRA
# 1      EMP     ARG  1324.5255  47.35255 1987.5912 104.738825  782.40283 1854.612  579.93982
# 2      EMP     BOL   943.1612  53.53538  167.1502   4.457895   65.97904  132.225   96.96828
# 3      EMP     BRA 17480.9810 225.43693 7207.7915 375.851832 4054.66103 6454.523 1580.81120
#         FIRE      GOV       OTH       SUM   mean_AGR  mean_MIN  mean_MAN    mean_PU  mean_CON
# 1  464.39920 1738.836  866.1119  9743.223  1419.8013  52.08903 1931.7602 101.720936  742.4044
# 2   15.34259       NA  384.0678  1842.055   964.2103  56.03295  235.0332   5.346433  122.7827
# 3 4354.86210 4449.942 4478.6927 51881.110 17191.3529 206.02389 6991.3710 364.573404 3524.7384
#    mean_WRT  mean_TRA  mean_FIRE mean_GOV  mean_OTH  mean_SUM
# 1 1982.1775  648.5119  627.79291 2043.471  992.4475 10542.177
# 2  281.5164  115.4728   44.56442       NA  395.5650  2220.524
# 3 8509.4612 2054.3731 4413.54448 5307.280 5710.2665 54272.985

The function add_stub used above is a collapse function adding a prefix (default) or suffix to variable names. The collapse function add_vars provides a more efficient alternative to cbind.data.frame. The idea here is ‘adding’ variables to the data.frame in the first argument, i.e. the attributes of the first argument are preserved, so the expression below still gives a tibble instead of a data.frame:

GGDC10S %>%
  fgroup_by(Variable, Country) %>% {
   add_vars(ffirst(get_vars(., "Reg", regex = TRUE)),        # Regular expression matching column names
            add_stub(fmean(num_vars(.), keep.group_vars = FALSE), "mean_"), # num_vars selects all numeric variables
            add_stub(fmedian(fselect(., PU:TRA), keep.group_vars = FALSE), "median_"), 
            add_stub(fmin(fselect(., PU:CON), keep.group_vars = FALSE), "min_"))      
  }
# # A tibble: 85 x 22
#    Variable Country Regioncode Region mean_Year mean_AGR mean_MIN mean_MAN mean_PU mean_CON mean_WRT
#  * <chr>    <chr>   <chr>      <chr>      <dbl>    <dbl>    <dbl>    <dbl>   <dbl>    <dbl>    <dbl>
#  1 EMP      ARG     LAM        Latin~     1980.    1420.    52.1    1932.   102.      742.    1982. 
#  2 EMP      BOL     LAM        Latin~     1980      964.    56.0     235.     5.35    123.     282. 
#  3 EMP      BRA     LAM        Latin~     1980.   17191.   206.     6991.   365.     3525.    8509. 
#  4 EMP      BWA     SSA        Sub-s~     1986.     188.    10.5      18.1    3.09     25.3     36.3
#  5 EMP      CHL     LAM        Latin~     1981      702.   101.      625.    29.4     296.     695. 
#  6 EMP      CHN     ASI        Asia       1980.  287744.  7050.    67144.  1606.    20852.   28908. 
#  7 EMP      COL     LAM        Latin~     1980     3091.   145.     1175.    33.9     524.    2071. 
#  8 EMP      CRI     LAM        Latin~     1980.     231.     1.70    136.    14.3      57.6    157. 
#  9 EMP      DEW     EUR        Europe     1980     2490.   407.     8473.   226.     2093.    4442. 
# 10 EMP      DNK     EUR        Europe     1980.     236.     8.03    507.    13.8     171.     455. 
# # ... with 75 more rows, and 11 more variables: mean_TRA <dbl>, mean_FIRE <dbl>, mean_GOV <dbl>,
# #   mean_OTH <dbl>, mean_SUM <dbl>, median_PU <dbl>, median_CON <dbl>, median_WRT <dbl>,
# #   median_TRA <dbl>, min_PU <dbl>, min_CON <dbl>

Another nice feature of add_vars is that it can also very efficiently reorder columns, i.e. bind columns in a different order than that in which they are passed. This can be done by simply specifying the positions the added columns should have in the final data.frame; add_vars then shifts the columns of the first argument to the right to fill in the gaps.

GGDC10S %>%
  fsubset(Variable == "VA", Country, AGR, SUM) %>% 
  fgroup_by(Country) %>% {
   add_vars(fgroup_vars(.,"unique"),
            add_stub(fmean(., keep.group_vars = FALSE), "mean_"),
            add_stub(fsd(., keep.group_vars = FALSE), "sd_"), 
            pos = c(2,4,3,5))
  } %>% head(3)
#   Country  mean_AGR    sd_AGR   mean_SUM    sd_SUM
# 1     ARG 14951.292 33061.413  152533.84 301316.25
# 2     BOL  3299.718  4456.331   22619.18  33172.98
# 3     BRA 76870.146 59441.696 1200562.67 976963.14

A much more compact solution to multi-function and multi-type aggregation with dplyr is offered by the function collapg:

# This aggregates numeric columns using the mean (fmean) and categorical columns with the mode (fmode)
GGDC10S %>% fgroup_by(Variable, Country) %>% collapg
# # A tibble: 85 x 16
#    Variable Country Regioncode Region  Year    AGR    MIN    MAN     PU    CON    WRT    TRA   FIRE
#    <chr>    <chr>   <chr>      <chr>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>
#  1 EMP      ARG     LAM        Latin~ 1980. 1.42e3 5.21e1 1.93e3 1.02e2 7.42e2 1.98e3 6.49e2  628. 
#  2 EMP      BOL     LAM        Latin~ 1980  9.64e2 5.60e1 2.35e2 5.35e0 1.23e2 2.82e2 1.15e2   44.6
#  3 EMP      BRA     LAM        Latin~ 1980. 1.72e4 2.06e2 6.99e3 3.65e2 3.52e3 8.51e3 2.05e3 4414. 
#  4 EMP      BWA     SSA        Sub-s~ 1986. 1.88e2 1.05e1 1.81e1 3.09e0 2.53e1 3.63e1 8.36e0   15.3
#  5 EMP      CHL     LAM        Latin~ 1981  7.02e2 1.01e2 6.25e2 2.94e1 2.96e2 6.95e2 2.58e2  272. 
#  6 EMP      CHN     ASI        Asia   1980. 2.88e5 7.05e3 6.71e4 1.61e3 2.09e4 2.89e4 1.39e4 4929. 
#  7 EMP      COL     LAM        Latin~ 1980  3.09e3 1.45e2 1.18e3 3.39e1 5.24e2 2.07e3 4.70e2  649. 
#  8 EMP      CRI     LAM        Latin~ 1980. 2.31e2 1.70e0 1.36e2 1.43e1 5.76e1 1.57e2 4.24e1   54.9
#  9 EMP      DEW     EUR        Europe 1980  2.49e3 4.07e2 8.47e3 2.26e2 2.09e3 4.44e3 1.48e3 1689. 
# 10 EMP      DNK     EUR        Europe 1980. 2.36e2 8.03e0 5.07e2 1.38e1 1.71e2 4.55e2 1.61e2  181. 
# # ... with 75 more rows, and 3 more variables: GOV <dbl>, OTH <dbl>, SUM <dbl>

By default it aggregates numeric columns using fmean and categorical columns using fmode, and preserves the order of all columns. Changing these defaults is very easy:

# This aggregates numeric columns using the median and categorical columns using the last value
GGDC10S %>% fgroup_by(Variable, Country) %>% collapg(fmedian, flast)
# # A tibble: 85 x 16
#    Variable Country Regioncode Region  Year    AGR    MIN    MAN     PU    CON    WRT    TRA   FIRE
#    <chr>    <chr>   <chr>      <chr>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>
#  1 EMP      ARG     LAM        Latin~ 1980. 1.32e3 4.74e1 1.99e3 1.05e2 7.82e2 1.85e3 5.80e2  464. 
#  2 EMP      BOL     LAM        Latin~ 1980  9.43e2 5.35e1 1.67e2 4.46e0 6.60e1 1.32e2 9.70e1   15.3
#  3 EMP      BRA     LAM        Latin~ 1980. 1.75e4 2.25e2 7.21e3 3.76e2 4.05e3 6.45e3 1.58e3 4355. 
#  4 EMP      BWA     SSA        Sub-s~ 1986. 1.75e2 1.22e1 1.31e1 3.71e0 1.90e1 2.11e1 6.75e0   10.4
#  5 EMP      CHL     LAM        Latin~ 1981  6.90e2 9.39e1 6.07e2 2.58e1 2.30e2 4.84e2 2.05e2  106. 
#  6 EMP      CHN     ASI        Asia   1980. 2.94e5 8.15e3 6.18e4 1.14e3 1.06e4 1.70e4 9.56e3 4328. 
#  7 EMP      COL     LAM        Latin~ 1980  3.01e3 8.40e1 1.03e3 3.71e1 4.19e2 1.55e3 3.91e2  655. 
#  8 EMP      CRI     LAM        Latin~ 1980. 2.16e2 1.49e0 1.14e2 7.92e0 5.50e1 8.98e1 2.55e1   19.6
#  9 EMP      DEW     EUR        Europe 1980  2.18e3 3.20e2 8.46e3 2.47e2 2.10e3 4.45e3 1.53e3 1656  
# 10 EMP      DNK     EUR        Europe 1980. 1.87e2 3.75e0 5.08e2 1.36e1 1.65e2 4.61e2 1.61e2  169. 
# # ... with 75 more rows, and 3 more variables: GOV <dbl>, OTH <dbl>, SUM <dbl>

One can apply multiple functions to both numeric and/or categorical data:

GGDC10S %>% fgroup_by(Variable, Country) %>%
  collapg(list(fmean, fmedian), list(first, fmode, flast)) %>% head(3)
# # A tibble: 3 x 32
#   Variable Country first.Regioncode fmode.Regioncode flast.Regioncode first.Region fmode.Region
#   <chr>    <chr>   <chr>            <chr>            <chr>            <chr>        <chr>       
# 1 EMP      ARG     LAM              LAM              LAM              Latin Ameri~ Latin Ameri~
# 2 EMP      BOL     LAM              LAM              LAM              Latin Ameri~ Latin Ameri~
# 3 EMP      BRA     LAM              LAM              LAM              Latin Ameri~ Latin Ameri~
# # ... with 25 more variables: flast.Region <chr>, fmean.Year <dbl>, fmedian.Year <dbl>,
# #   fmean.AGR <dbl>, fmedian.AGR <dbl>, fmean.MIN <dbl>, fmedian.MIN <dbl>, fmean.MAN <dbl>,
# #   fmedian.MAN <dbl>, fmean.PU <dbl>, fmedian.PU <dbl>, fmean.CON <dbl>, fmedian.CON <dbl>,
# #   fmean.WRT <dbl>, fmedian.WRT <dbl>, fmean.TRA <dbl>, fmedian.TRA <dbl>, fmean.FIRE <dbl>,
# #   fmedian.FIRE <dbl>, fmean.GOV <dbl>, fmedian.GOV <dbl>, fmean.OTH <dbl>, fmedian.OTH <dbl>,
# #   fmean.SUM <dbl>, fmedian.SUM <dbl>

Applying multiple functions to only numeric (or only categorical) data also allows the results to be returned in a long format:

GGDC10S %>% fgroup_by(Variable, Country) %>%
  collapg(list(fmean, fmedian), cols = is.numeric, return = "long")
# # A tibble: 170 x 15
#    Function Variable Country  Year    AGR    MIN    MAN     PU    CON    WRT    TRA   FIRE     GOV
#    <chr>    <chr>    <chr>   <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>   <dbl>
#  1 fmean    EMP      ARG     1980. 1.42e3 5.21e1 1.93e3 1.02e2 7.42e2 1.98e3 6.49e2  628.   2043. 
#  2 fmean    EMP      BOL     1980  9.64e2 5.60e1 2.35e2 5.35e0 1.23e2 2.82e2 1.15e2   44.6    NA  
#  3 fmean    EMP      BRA     1980. 1.72e4 2.06e2 6.99e3 3.65e2 3.52e3 8.51e3 2.05e3 4414.   5307. 
#  4 fmean    EMP      BWA     1986. 1.88e2 1.05e1 1.81e1 3.09e0 2.53e1 3.63e1 8.36e0   15.3    61.1
#  5 fmean    EMP      CHL     1981  7.02e2 1.01e2 6.25e2 2.94e1 2.96e2 6.95e2 2.58e2  272.     NA  
#  6 fmean    EMP      CHN     1980. 2.88e5 7.05e3 6.71e4 1.61e3 2.09e4 2.89e4 1.39e4 4929.  22669. 
#  7 fmean    EMP      COL     1980  3.09e3 1.45e2 1.18e3 3.39e1 5.24e2 2.07e3 4.70e2  649.     NA  
#  8 fmean    EMP      CRI     1980. 2.31e2 1.70e0 1.36e2 1.43e1 5.76e1 1.57e2 4.24e1   54.9   128. 
#  9 fmean    EMP      DEW     1980  2.49e3 4.07e2 8.47e3 2.26e2 2.09e3 4.44e3 1.48e3 1689.   3945. 
# 10 fmean    EMP      DNK     1980. 2.36e2 8.03e0 5.07e2 1.38e1 1.71e2 4.55e2 1.61e2  181.    549. 
# # ... with 160 more rows, and 2 more variables: OTH <dbl>, SUM <dbl>

Finally, collapg also makes it very easy to apply aggregator functions to certain columns only:

GGDC10S %>% fgroup_by(Variable, Country) %>%
  collapg(custom = list(fmean = 6:8, fmedian = 10:12))
# # A tibble: 85 x 8
#    Variable Country fmean.AGR fmean.MIN fmean.MAN fmedian.CON fmedian.WRT fmedian.TRA
#    <chr>    <chr>       <dbl>     <dbl>     <dbl>       <dbl>       <dbl>       <dbl>
#  1 EMP      ARG         1420.     52.1     1932.        782.       1855.       580.  
#  2 EMP      BOL          964.     56.0      235.         66.0       132.        97.0 
#  3 EMP      BRA        17191.    206.      6991.       4055.       6455.      1581.  
#  4 EMP      BWA          188.     10.5       18.1        19.0        21.1        6.75
#  5 EMP      CHL          702.    101.       625.        230.        484.       205.  
#  6 EMP      CHN       287744.   7050.     67144.      10578.      17034.      9564.  
#  7 EMP      COL         3091.    145.      1175.        419.       1553.       391.  
#  8 EMP      CRI          231.      1.70     136.         55.0        89.8       25.5 
#  9 EMP      DEW         2490.    407.      8473.       2095.       4454.      1525.  
# 10 EMP      DNK          236.      8.03     507.        165.        461.       161.  
# # ... with 75 more rows

To understand more about collapg, look it up in the documentation (?collapg).

1.4 Weighted Aggregations

Weighted aggregations are currently possible with the functions fsum, fprod, fmean, fmode, fvar and fsd. The implementation is such that by default (option keep.w = TRUE) these functions also aggregate the weights, so that further weighted computations can be performed on the aggregated data. fsum, fmean, fsd, fvar and fmode compute a grouped sum of the weight column and place it next to the group-identifiers; fprod computes the product of the weights.

# This computes a frequency-weighted grouped standard-deviation, taking the total EMP / VA as weight
GGDC10S %>%
  fgroup_by(Variable, Country) %>%
  fselect(AGR:SUM) %>% fsd(SUM)
# # A tibble: 85 x 13
#    Variable Country  sum.SUM     AGR    MIN    MAN     PU    CON    WRT    TRA   FIRE     GOV    OTH
#    <chr>    <chr>      <dbl>   <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>   <dbl>  <dbl>
#  1 EMP      ARG       6.54e5   225.  2.22e1 1.76e2 2.05e1 2.85e2 8.56e2 1.95e2  493.   1123.  5.06e2
#  2 EMP      BOL       1.35e5    99.7 1.71e1 1.68e2 4.87e0 1.23e2 3.24e2 9.81e1   69.8    NA   2.58e2
#  3 EMP      BRA       3.36e6  1587.  7.38e1 2.95e3 9.38e1 1.86e3 6.28e3 1.31e3 3003.   3621.  4.26e3
#  4 EMP      BWA       1.85e4    32.2 3.72e0 1.48e1 1.59e0 1.80e1 3.87e1 6.02e0   13.5    39.8 8.94e0
#  5 EMP      CHL       2.51e5    71.0 3.99e1 1.29e2 1.24e1 1.88e2 5.51e2 1.34e2  313.     NA   4.26e2
#  6 EMP      CHN       2.91e7 56281.  3.09e3 4.04e4 1.27e3 1.92e4 2.45e4 9.26e3 2853.  11541.  3.74e4
#  7 EMP      COL       6.03e5   637.  1.48e2 5.94e2 1.52e1 3.97e2 1.89e3 3.62e2  435.     NA   1.01e3
#  8 EMP      CRI       5.50e4    40.4 1.04e0 7.93e1 1.37e1 3.44e1 1.68e2 4.53e1   79.8    80.7 4.34e1
#  9 EMP      DEW       1.10e6  1175.  1.83e2 7.42e2 5.32e1 1.94e2 6.06e2 2.12e2  699.   1225.  3.55e2
# 10 EMP      DNK       1.53e5   139.  7.45e0 7.73e1 1.92e0 2.56e1 5.33e1 1.57e1   91.6   248.  1.95e1
# # ... with 75 more rows

# This computes a weighted grouped mode, taking the total EMP / VA as weight
GGDC10S %>%
  fgroup_by(Variable, Country) %>%
  fselect(AGR:SUM) %>% fmode(SUM)
# # A tibble: 85 x 13
#    Variable Country  sum.SUM     AGR     MIN    MAN     PU    CON    WRT    TRA   FIRE    GOV    OTH
#    <chr>    <chr>      <dbl>   <dbl>   <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>
#  1 EMP      ARG       6.54e5  1.16e3  127.   2.16e3 1.52e2 1.41e3  3768. 1.06e3 1.75e3  4336. 2.00e3
#  2 EMP      BOL       1.35e5  8.19e2   37.6  6.04e2 1.08e1 4.33e2   893. 3.33e2 3.21e2    NA  1.06e3
#  3 EMP      BRA       3.36e6  1.65e4  313.   1.18e4 3.88e2 8.15e3 21860. 5.17e3 1.20e4 12149. 1.42e4
#  4 EMP      BWA       1.85e4  1.71e2   13.1  4.33e1 3.93e0 1.81e1   129. 2.10e1 4.67e1   113. 2.62e1
#  5 EMP      CHL       2.51e5  6.30e2  249.   7.42e2 6.07e1 6.71e2  1989. 4.81e2 8.54e2    NA  1.88e3
#  6 EMP      CHN       2.91e7  2.66e5 9247.   1.43e5 3.53e3 6.99e4 84165. 3.12e4 1.08e4 43240. 1.03e5
#  7 EMP      COL       6.03e5  3.93e3  513.   2.37e3 5.89e1 1.41e3  6069. 1.36e3 1.82e3    NA  3.57e3
#  8 EMP      CRI       5.50e4  2.83e2    2.42 2.49e2 4.38e1 1.20e2   489. 1.44e2 2.25e2   328. 1.75e2
#  9 EMP      DEW       1.10e6  1.03e3  260    8.73e3 2.91e2 2.06e3  4398  1.63e3 3.26e3  6129  1.79e3
# 10 EMP      DNK       1.53e5  7.85e1    3.12 3.99e2 1.14e1 1.95e2   579. 1.87e2 3.82e2   835. 1.50e2
# # ... with 75 more rows

The weighted variance / standard deviation is currently only implemented with frequency weights. Reliability weights may be implemented in a future update of collapse, if this is a strongly requested feature.

Weighted aggregations may also be performed with collapg.

# This aggregates numeric columns using the weighted mean and categorical columns using the weighted mode
GGDC10S %>% group_by(Variable, Country) %>% collapg(w = SUM)
# # A tibble: 85 x 16
#    Variable Country    SUM Regioncode Region  Year    AGR    MIN    MAN     PU    CON    WRT    TRA
#    <chr>    <chr>    <dbl> <chr>      <chr>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>
#  1 EMP      ARG     6.54e5 LAM        Latin~ 1985. 1.36e3 5.65e1 1.93e3 1.05e2 8.11e2 2.22e3 6.95e2
#  2 EMP      BOL     1.35e5 LAM        Latin~ 1987. 9.77e2 5.79e1 2.96e2 7.07e0 1.67e2 4.00e2 1.52e2
#  3 EMP      BRA     3.36e6 LAM        Latin~ 1989. 1.77e4 2.38e2 8.47e3 3.89e2 4.44e3 1.14e4 2.62e3
#  4 EMP      BWA     1.85e4 SSA        Sub-s~ 1993. 2.00e2 1.21e1 2.43e1 3.70e0 3.14e1 5.08e1 1.08e1
#  5 EMP      CHL     2.51e5 LAM        Latin~ 1988. 6.93e2 1.07e2 6.68e2 3.35e1 3.67e2 8.95e2 3.09e2
#  6 EMP      CHN     2.91e7 ASI        Asia   1988. 3.09e5 8.23e3 8.34e4 2.09e3 2.80e4 3.80e4 1.75e4
#  7 EMP      COL     6.03e5 LAM        Latin~ 1989. 3.44e3 2.04e2 1.49e3 4.20e1 7.18e2 3.02e3 6.39e2
#  8 EMP      CRI     5.50e4 LAM        Latin~ 1991. 2.54e2 2.10e0 1.87e2 2.19e1 7.84e1 2.47e2 6.50e1
#  9 EMP      DEW     1.10e6 EUR        Europe 1971. 2.40e3 3.95e2 8.51e3 2.29e2 2.10e3 4.49e3 1.50e3
# 10 EMP      DNK     1.53e5 EUR        Europe 1981. 2.23e2 7.41e0 5.03e2 1.39e1 1.72e2 4.60e2 1.62e2
# # ... with 75 more rows, and 3 more variables: FIRE <dbl>, GOV <dbl>, OTH <dbl>

2. Fast Transformations

collapse also provides some fast transformations that significantly extend the scope of, and speed up, manipulations that can be performed with dplyr::mutate.

2.1 Fast Transform and Compute Variables

The function ftransform can be used to manipulate columns in the same ways as mutate:

GGDC10S %>% fsubset(Variable == "VA", Country, Year, AGR, SUM) %>%
  ftransform(AGR_perc = AGR / SUM * 100,  # Computing % of VA in Agriculture
             AGR_mean = fmean(AGR),       # Average Agricultural VA
             AGR = NULL, SUM = NULL) %>%  # Deleting columns AGR and SUM
             head
#   Country Year AGR_perc AGR_mean
# 1     BWA 1960       NA  5137561
# 2     BWA 1961       NA  5137561
# 3     BWA 1962       NA  5137561
# 4     BWA 1963       NA  5137561
# 5     BWA 1964 43.49132  5137561
# 6     BWA 1965 39.96990  5137561

If only the computed columns need to be returned, fcompute provides an efficient alternative:

GGDC10S %>% fsubset(Variable == "VA", Country, Year, AGR, SUM) %>%
  fcompute(AGR_perc = AGR / SUM * 100,
           AGR_mean = fmean(AGR)) %>% head
#   AGR_perc AGR_mean
# 1       NA  5137561
# 2       NA  5137561
# 3       NA  5137561
# 4       NA  5137561
# 5 43.49132  5137561
# 6 39.96990  5137561

ftransform and fcompute are an order of magnitude faster than mutate, but they do not support grouped computations. For common grouped operations like replacing and sweeping out statistics, however, collapse provides very efficient alternatives…
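As a rough check of the speed claim just made, one could run a benchmark analogous to the earlier ones (timings are machine-dependent, so no output is reproduced here):

microbenchmark(ftransform = GGDC10S %>% ftransform(AGR_perc = AGR / SUM * 100),
               mutate     = GGDC10S %>% mutate(AGR_perc = AGR / SUM * 100))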

2.2 Replacing and Sweeping out Statistics

All statistical (scalar-valued) functions in the collapse package (fsum, fprod, fmean, fmedian, fmode, fvar, fsd, fmin, fmax, ffirst, flast, fNobs, fNdistinct) have a TRA argument which can be used to efficiently transform data by either (column-wise) replacing data values with computed statistics or sweeping the statistics out of the data. Operations can be specified using either an integer or a quoted operator / string. The 10 operations supported by TRA are "replace_fill", "replace", "-", "-+", "/", "%", "+", "*", "%%" and "-%%", as listed in section 1 above.

Simple transformations are again straightforward to specify:

# This subtracts the median value from all data points i.e. centers on the median
GGDC10S %>% num_vars %>% fmedian(TRA = "-") %>% head
#   Year       AGR       MIN       MAN        PU       CON       WRT       TRA      FIRE       GOV
# 1  -22        NA        NA        NA        NA        NA        NA        NA        NA        NA
# 2  -21        NA        NA        NA        NA        NA        NA        NA        NA        NA
# 3  -20        NA        NA        NA        NA        NA        NA        NA        NA        NA
# 4  -19        NA        NA        NA        NA        NA        NA        NA        NA        NA
# 5  -18 -4378.218 -169.7294 -3717.362 -167.8456 -1472.787 -3767.399 -1173.141 -959.0059 -3923.690
# 6  -17 -4378.792 -170.7277 -3717.080 -167.8149 -1472.101 -3766.578 -1172.861 -958.8783 -3922.817
#         OTH       SUM
# 1        NA        NA
# 2        NA        NA
# 3        NA        NA
# 4        NA        NA
# 5 -1430.831 -23148.71
# 6 -1430.494 -23146.85

# This replaces all data points with the mode
GGDC10S %>% char_vars %>% fmode(TRA = "replace") %>% head
#   Country Regioncode Region Variable
# 1     USA        ASI   Asia      EMP
# 2     USA        ASI   Asia      EMP
# 3     USA        ASI   Asia      EMP
# 4     USA        ASI   Asia      EMP
# 5     USA        ASI   Asia      EMP
# 6     USA        ASI   Asia      EMP

We can also easily demean, scale, or compute percentages by groups:

# Demeaning sectoral data by Variable and Country (within transformation)
GGDC10S %>%
  fselect(Variable, Country, AGR:SUM) %>% 
   fgroup_by(Variable, Country) %>% fmean(TRA = "-") %>% head(3)
# # A tibble: 3 x 13
#   Variable Country   AGR   MIN   MAN    PU   CON   WRT   TRA  FIRE   GOV   OTH   SUM
#   <chr>    <chr>   <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
# 1 VA       BWA        NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
# 2 VA       BWA        NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
# 3 VA       BWA        NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA

# Scaling sectoral data by Variable and Country
GGDC10S %>%
  fselect(Variable, Country, AGR:SUM) %>% 
   fgroup_by(Variable, Country) %>% fsd(TRA = "/") %>% head(3)
# # A tibble: 3 x 13
#   Variable Country   AGR   MIN   MAN    PU   CON   WRT   TRA  FIRE   GOV   OTH   SUM
#   <chr>    <chr>   <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
# 1 VA       BWA        NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
# 2 VA       BWA        NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
# 3 VA       BWA        NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA

# Normalizing Data by expressing them in percentages of the median value within each country and sector (i.e. the median is 100%)
GGDC10S %>%
  fselect(Variable, Country, AGR:SUM) %>%  
   fgroup_by(Variable, Country) %>% fmedian(TRA = "%") %>% head(3)
# # A tibble: 3 x 13
#   Variable Country   AGR   MIN   MAN    PU   CON   WRT   TRA  FIRE   GOV   OTH   SUM
#   <chr>    <chr>   <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
# 1 VA       BWA        NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
# 2 VA       BWA        NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
# 3 VA       BWA        NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA

Weighted demeaning and scaling can be computed using:

# Weighted demeaning (within transformation), weighted by SUM
GGDC10S %>%
  fselect(Variable, Country, AGR:SUM) %>% 
   fgroup_by(Variable, Country) %>% fmean(SUM, "-") %>% head(3)
# # A tibble: 3 x 13
#   Variable Country   SUM   AGR   MIN   MAN    PU   CON   WRT   TRA  FIRE   GOV   OTH
#   <chr>    <chr>   <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
# 1 VA       BWA        NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
# 2 VA       BWA        NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
# 3 VA       BWA        NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA

# Weighted scaling, weighted by SUM
GGDC10S %>%
  fselect(Variable, Country, AGR:SUM) %>% 
   fgroup_by(Variable, Country) %>% fsd(SUM, "/") %>% head(3)
# # A tibble: 3 x 13
#   Variable Country   SUM   AGR   MIN   MAN    PU   CON   WRT   TRA  FIRE   GOV   OTH
#   <chr>    <chr>   <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
# 1 VA       BWA        NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
# 2 VA       BWA        NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
# 3 VA       BWA        NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA

Alternatively we could also replace data points with their groupwise weighted mean or standard deviation:

# This conducts a weighted between transformation (replacing with weighted mean)
GGDC10S %>%
  fselect(Variable, Country, AGR:SUM) %>% 
   fgroup_by(Variable, Country) %>% fmean(SUM, "replace")
# # A tibble: 5,027 x 13
#    Variable Country   SUM   AGR    MIN   MAN    PU   CON   WRT   TRA  FIRE   GOV   OTH
#  * <chr>    <chr>   <dbl> <dbl>  <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#  1 VA       BWA      NA     NA     NA    NA    NA    NA    NA    NA    NA    NA    NA 
#  2 VA       BWA      NA     NA     NA    NA    NA    NA    NA    NA    NA    NA    NA 
#  3 VA       BWA      NA     NA     NA    NA    NA    NA    NA    NA    NA    NA    NA 
#  4 VA       BWA      NA     NA     NA    NA    NA    NA    NA    NA    NA    NA    NA 
#  5 VA       BWA      37.5 1317. 13321. 2965.  529. 2747. 6547. 2158. 4432. 7556. 2615.
#  6 VA       BWA      39.3 1317. 13321. 2965.  529. 2747. 6547. 2158. 4432. 7556. 2615.
#  7 VA       BWA      43.1 1317. 13321. 2965.  529. 2747. 6547. 2158. 4432. 7556. 2615.
#  8 VA       BWA      41.4 1317. 13321. 2965.  529. 2747. 6547. 2158. 4432. 7556. 2615.
#  9 VA       BWA      41.1 1317. 13321. 2965.  529. 2747. 6547. 2158. 4432. 7556. 2615.
# 10 VA       BWA      51.2 1317. 13321. 2965.  529. 2747. 6547. 2158. 4432. 7556. 2615.
# # ... with 5,017 more rows

# This also replaces missing values in each group
GGDC10S %>%
  fselect(Variable, Country, AGR:SUM) %>% 
   fgroup_by(Variable, Country) %>% fmean(SUM, "replace_fill")
# # A tibble: 5,027 x 13
#    Variable Country   SUM   AGR    MIN   MAN    PU   CON   WRT   TRA  FIRE   GOV   OTH
#  * <chr>    <chr>   <dbl> <dbl>  <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#  1 VA       BWA      NA   1317. 13321. 2965.  529. 2747. 6547. 2158. 4432. 7556. 2615.
#  2 VA       BWA      NA   1317. 13321. 2965.  529. 2747. 6547. 2158. 4432. 7556. 2615.
#  3 VA       BWA      NA   1317. 13321. 2965.  529. 2747. 6547. 2158. 4432. 7556. 2615.
#  4 VA       BWA      NA   1317. 13321. 2965.  529. 2747. 6547. 2158. 4432. 7556. 2615.
#  5 VA       BWA      37.5 1317. 13321. 2965.  529. 2747. 6547. 2158. 4432. 7556. 2615.
#  6 VA       BWA      39.3 1317. 13321. 2965.  529. 2747. 6547. 2158. 4432. 7556. 2615.
#  7 VA       BWA      43.1 1317. 13321. 2965.  529. 2747. 6547. 2158. 4432. 7556. 2615.
#  8 VA       BWA      41.4 1317. 13321. 2965.  529. 2747. 6547. 2158. 4432. 7556. 2615.
#  9 VA       BWA      41.1 1317. 13321. 2965.  529. 2747. 6547. 2158. 4432. 7556. 2615.
# 10 VA       BWA      51.2 1317. 13321. 2965.  529. 2747. 6547. 2158. 4432. 7556. 2615.
# # ... with 5,017 more rows

Sequential operations are also easily performed:

# This scales and then subtracts the median
GGDC10S %>%
  fselect(Variable, Country, AGR:SUM) %>% 
   fgroup_by(Variable, Country) %>% fsd(TRA = "/") %>% fmedian(TRA = "-")
# # A tibble: 5,027 x 13
#    Variable Country    AGR    MIN    MAN     PU    CON     WRT     TRA    FIRE    GOV     OTH    SUM
#  * <chr>    <chr>    <dbl>  <dbl>  <dbl>  <dbl>  <dbl>   <dbl>   <dbl>   <dbl>  <dbl>   <dbl>  <dbl>
#  1 VA       BWA     NA     NA     NA     NA     NA     NA      NA      NA      NA     NA      NA    
#  2 VA       BWA     NA     NA     NA     NA     NA     NA      NA      NA      NA     NA      NA    
#  3 VA       BWA     NA     NA     NA     NA     NA     NA      NA      NA      NA     NA      NA    
#  4 VA       BWA     NA     NA     NA     NA     NA     NA      NA      NA      NA     NA      NA    
#  5 VA       BWA     -0.182 -0.235 -0.183 -0.245 -0.118 -0.0820 -0.0724 -0.0661 -0.108 -0.0848 -0.146
#  6 VA       BWA     -0.183 -0.235 -0.183 -0.245 -0.117 -0.0817 -0.0722 -0.0660 -0.108 -0.0846 -0.146
#  7 VA       BWA     -0.180 -0.235 -0.183 -0.245 -0.117 -0.0813 -0.0720 -0.0659 -0.107 -0.0843 -0.145
#  8 VA       BWA     -0.177 -0.235 -0.183 -0.245 -0.117 -0.0826 -0.0724 -0.0659 -0.107 -0.0841 -0.146
#  9 VA       BWA     -0.174 -0.235 -0.183 -0.245 -0.117 -0.0823 -0.0717 -0.0661 -0.108 -0.0848 -0.146
# 10 VA       BWA     -0.173 -0.234 -0.182 -0.243 -0.115 -0.0821 -0.0715 -0.0660 -0.108 -0.0846 -0.145
# # ... with 5,017 more rows

Of course it is also possible to combine multiple functions as in the aggregation section, or to add variables to existing data, as shown below:

# This adds a groupwise observation count next to each column
add_vars(GGDC10S, seq(7,27,2)) <- GGDC10S %>%
    fgroup_by(Variable, Country) %>% fselect(AGR:SUM) %>%
    fNobs("replace_fill") %>% add_stub("N_")

head(GGDC10S)
#   Country Regioncode             Region Variable Year      AGR N_AGR      MIN N_MIN       MAN N_MAN
# 1     BWA        SSA Sub-saharan Africa       VA 1960       NA    47       NA    47        NA    47
# 2     BWA        SSA Sub-saharan Africa       VA 1961       NA    47       NA    47        NA    47
# 3     BWA        SSA Sub-saharan Africa       VA 1962       NA    47       NA    47        NA    47
# 4     BWA        SSA Sub-saharan Africa       VA 1963       NA    47       NA    47        NA    47
# 5     BWA        SSA Sub-saharan Africa       VA 1964 16.30154    47 3.494075    47 0.7365696    47
# 6     BWA        SSA Sub-saharan Africa       VA 1965 15.72700    47 2.495768    47 1.0181992    47
#          PU N_PU       CON N_CON      WRT N_WRT      TRA N_TRA     FIRE N_FIRE      GOV N_GOV
# 1        NA   47        NA    47       NA    47       NA    47       NA     47       NA    47
# 2        NA   47        NA    47       NA    47       NA    47       NA     47       NA    47
# 3        NA   47        NA    47       NA    47       NA    47       NA     47       NA    47
# 4        NA   47        NA    47       NA    47       NA    47       NA     47       NA    47
# 5 0.1043936   47 0.6600454    47 6.243732    47 1.658928    47 1.119194     47 4.822485    47
# 6 0.1350976   47 1.3462312    47 7.064825    47 1.939007    47 1.246789     47 5.695848    47
#        OTH N_OTH      SUM N_SUM
# 1       NA    47       NA    47
# 2       NA    47       NA    47
# 3       NA    47       NA    47
# 4       NA    47       NA    47
# 5 2.341328    47 37.48229    47
# 6 2.678338    47 39.34710    47
rm(GGDC10S)

Certainly there are lots of other examples one could construct using the 10 operations and 13 functions listed above; the examples provided just outline the suggested programming basics.

2.3 More Control using the TRA Function

Behind the scenes of the TRA = ... argument, the fast functions first compute the grouped statistics on all columns of the data, and these statistics are then directly fed into a C++ function that uses them to replace data points or sweep them out of the data in one of the 10 ways described above. This function can, however, also be called directly by the name TRA (shorthand for ‘transforming’ data by replacing or sweeping out statistics). Fundamentally, TRA is a generalization of base::sweep for column-wise grouped operations. Direct calls to TRA enable more control over inputs and outputs.

The two operations below are equivalent, although the first is slightly more efficient as it only requires one method dispatch and one check of the inputs:

# This divides by the product
GGDC10S %>%
  fgroup_by(Variable, Country) %>%
    get_vars(6:16) %>% fprod(TRA = "/") 
# # A tibble: 5,027 x 11
#           AGR        MIN        MAN        PU        CON        WRT       TRA      FIRE        GOV
#  *      <dbl>      <dbl>      <dbl>     <dbl>      <dbl>      <dbl>     <dbl>     <dbl>      <dbl>
#  1 NA         NA         NA         NA        NA         NA         NA        NA        NA        
#  2 NA         NA         NA         NA        NA         NA         NA        NA        NA        
#  3 NA         NA         NA         NA        NA         NA         NA        NA        NA        
#  4 NA         NA         NA         NA        NA         NA         NA        NA        NA        
#  5  1.29e-105  2.81e-127  1.40e-101  4.44e-74  4.19e-102  3.97e-113  6.91e-92  1.01e-97  2.51e-117
#  6  1.24e-105  2.00e-127  1.94e-101  5.75e-74  8.55e-102  4.49e-113  8.08e-92  1.13e-97  2.96e-117
#  7  1.39e-105  1.58e-127  1.53e-101  8.62e-74  8.55e-102  5.26e-113  8.98e-92  1.23e-97  3.31e-117
#  8  1.51e-105  1.85e-127  1.78e-101  8.62e-74  5.70e-102  2.74e-113  7.18e-92  1.39e-97  3.66e-117
#  9  1.66e-105  1.48e-127  1.43e-101  8.62e-74  7.74e-102  3.29e-113  1.02e-91  9.33e-98  2.61e-117
# 10  1.72e-105  4.21e-127  4.07e-101  2.46e-73  2.21e-101  3.66e-113  1.13e-91  1.11e-97  2.91e-117
# # ... with 5,017 more rows, and 2 more variables: OTH <dbl>, SUM <dbl>

# Same thing
GGDC10S %>%
  fgroup_by(Variable, Country) %>%
    get_vars(6:16) %>% TRA(fprod(., keep.group_vars = FALSE), "/") # [same as TRA(.,fprod(., keep.group_vars = FALSE),"/")]
# # A tibble: 5,027 x 11
#           AGR        MIN        MAN        PU        CON        WRT       TRA      FIRE        GOV
#  *      <dbl>      <dbl>      <dbl>     <dbl>      <dbl>      <dbl>     <dbl>     <dbl>      <dbl>
#  1 NA         NA         NA         NA        NA         NA         NA        NA        NA        
#  2 NA         NA         NA         NA        NA         NA         NA        NA        NA        
#  3 NA         NA         NA         NA        NA         NA         NA        NA        NA        
#  4 NA         NA         NA         NA        NA         NA         NA        NA        NA        
#  5  1.29e-105  2.81e-127  1.40e-101  4.44e-74  4.19e-102  3.97e-113  6.91e-92  1.01e-97  2.51e-117
#  6  1.24e-105  2.00e-127  1.94e-101  5.75e-74  8.55e-102  4.49e-113  8.08e-92  1.13e-97  2.96e-117
#  7  1.39e-105  1.58e-127  1.53e-101  8.62e-74  8.55e-102  5.26e-113  8.98e-92  1.23e-97  3.31e-117
#  8  1.51e-105  1.85e-127  1.78e-101  8.62e-74  5.70e-102  2.74e-113  7.18e-92  1.39e-97  3.66e-117
#  9  1.66e-105  1.48e-127  1.43e-101  8.62e-74  7.74e-102  3.29e-113  1.02e-91  9.33e-98  2.61e-117
# 10  1.72e-105  4.21e-127  4.07e-101  2.46e-73  2.21e-101  3.66e-113  1.13e-91  1.11e-97  2.91e-117
# # ... with 5,017 more rows, and 2 more variables: OTH <dbl>, SUM <dbl>

TRA.grouped_df was designed such that it matches the columns of the statistics (aggregated columns) to those of the original data, and only transforms matching columns while returning the whole data.frame. Thus it is easily possible to only apply a transformation to the first two sectors:

# This only demeans Agriculture (AGR) and Mining (MIN)
GGDC10S %>%
  fgroup_by(Variable, Country) %>%
    get_vars(6:16) %>% TRA(fmean(fselect(., AGR, MIN), keep.group_vars = FALSE), "-")
# # A tibble: 5,027 x 11
#      AGR    MIN    MAN     PU    CON   WRT   TRA  FIRE   GOV   OTH   SUM
#  * <dbl>  <dbl>  <dbl>  <dbl>  <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#  1   NA     NA  NA     NA     NA     NA    NA    NA    NA    NA     NA  
#  2   NA     NA  NA     NA     NA     NA    NA    NA    NA    NA     NA  
#  3   NA     NA  NA     NA     NA     NA    NA    NA    NA    NA     NA  
#  4   NA     NA  NA     NA     NA     NA    NA    NA    NA    NA     NA  
#  5 -446. -4505.  0.737  0.104  0.660  6.24  1.66  1.12  4.82  2.34  37.5
#  6 -446. -4506.  1.02   0.135  1.35   7.06  1.94  1.25  5.70  2.68  39.3
#  7 -444. -4507.  0.804  0.203  1.35   8.27  2.15  1.36  6.37  2.99  43.1
#  8 -443. -4506.  0.938  0.203  0.897  4.31  1.72  1.54  7.04  3.31  41.4
#  9 -441. -4507.  0.750  0.203  1.22   5.17  2.44  1.03  5.03  2.36  41.1
# 10 -440. -4503.  2.14   0.578  3.47   5.75  2.72  1.23  5.59  2.63  51.2
# # ... with 5,017 more rows

Another potential use of TRA is to do computations in two or more steps, for example if both aggregated and transformed data are needed, or if computations are more complex and involve other manipulations in-between the aggregating and sweeping part:

# Get grouped tibble
gGGDC <- GGDC10S %>% fgroup_by(Variable, Country)

# Get aggregated data
gsumGGDC <- gGGDC %>% fselect(AGR:SUM) %>% fsum
head(gsumGGDC)
# # A tibble: 6 x 13
#   Variable Country     AGR     MIN     MAN     PU     CON    WRT    TRA   FIRE     GOV    OTH    SUM
#   <chr>    <chr>     <dbl>   <dbl>   <dbl>  <dbl>   <dbl>  <dbl>  <dbl>  <dbl>   <dbl>  <dbl>  <dbl>
# 1 EMP      ARG      8.80e4   3230.  1.20e5  6307.  4.60e4 1.23e5 4.02e4 3.89e4  1.27e5 6.15e4 6.54e5
# 2 EMP      BOL      5.88e4   3418.  1.43e4   326.  7.49e3 1.72e4 7.04e3 2.72e3 NA      2.41e4 1.35e5
# 3 EMP      BRA      1.07e6  12773.  4.33e5 22604.  2.19e5 5.28e5 1.27e5 2.74e5  3.29e5 3.54e5 3.36e6
# 4 EMP      BWA      8.84e3    493.  8.49e2   145.  1.19e3 1.71e3 3.93e2 7.21e2  2.87e3 1.30e3 1.85e4
# 5 EMP      CHL      4.42e4   6389.  3.94e4  1850.  1.86e4 4.38e4 1.63e4 1.72e4 NA      6.32e4 2.51e5
# 6 EMP      CHN      1.73e7 422972.  4.03e6 96364.  1.25e6 1.73e6 8.36e5 2.96e5  1.36e6 1.86e6 2.91e7

# Get transformed (scaled) data
head(TRA(gGGDC, gsumGGDC, "/"))
# # A tibble: 6 x 16
#   Country Regioncode Region Variable  Year      AGR      MIN      MAN       PU      CON      WRT
#   <chr>   <chr>      <chr>  <chr>    <dbl>    <dbl>    <dbl>    <dbl>    <dbl>    <dbl>    <dbl>
# 1 BWA     SSA        Sub-s~ VA        1960 NA       NA       NA       NA       NA       NA      
# 2 BWA     SSA        Sub-s~ VA        1961 NA       NA       NA       NA       NA       NA      
# 3 BWA     SSA        Sub-s~ VA        1962 NA       NA       NA       NA       NA       NA      
# 4 BWA     SSA        Sub-s~ VA        1963 NA       NA       NA       NA       NA       NA      
# 5 BWA     SSA        Sub-s~ VA        1964  7.50e-4  1.65e-5  1.66e-5  1.03e-5  1.57e-5  6.82e-5
# 6 BWA     SSA        Sub-s~ VA        1965  7.24e-4  1.18e-5  2.30e-5  1.33e-5  3.20e-5  7.72e-5
# # ... with 5 more variables: TRA <dbl>, FIRE <dbl>, GOV <dbl>, OTH <dbl>, SUM <dbl>

As discussed above, whether using the TRA argument to the Fast Statistical Functions or the TRA function directly, these data transformations are essentially a two-step process: statistics are first computed and then used to transform the original data. This process is already very efficient since all functions are written in C++, and programmatically separating the computation of statistics from the data transformation allows for unlimited combinations and drastically simplifies the code base of this package.
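
The same two-step logic can be seen in isolation on a plain numeric vector using the default methods (a small sketch, not part of the original examples; airquality is a base R dataset):

x <- airquality$Ozone                             # numeric vector with missing values
mu <- fmean(x)                                    # step 1: compute the statistic
all.equal(TRA(x, mu, "-"), fmean(x, TRA = "-"))   # step 2: sweep it out; both forms should agree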

Nonetheless there are of course more memory-efficient and faster ways to program such data transformations, principally by performing them column-by-column with a single C++ function. To ensure that collapse lives up to the highest standards of performance for common uses, it also provides slightly more efficient functions for the very common tasks of centering and averaging data by groups (widely known as ‘within’- and ‘between’-group transformations), and of scaling and centering data by groups (also known as ‘standardizing’ data).

2.4 Faster Centering, Averaging and Standardizing

The functions fbetween and fwithin are slightly more memory-efficient implementations of fmean invoked with different TRA options:

GGDC10S %>% # Same as ... %>% fmean(TRA = "replace")
  fgroup_by(Variable, Country) %>% get_vars(6:16) %>% fbetween %>% head(2)
# # A tibble: 2 x 11
#     AGR   MIN   MAN    PU   CON   WRT   TRA  FIRE   GOV   OTH   SUM
#   <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
# 1    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
# 2    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA

GGDC10S %>% # Same as ... %>% fmean(TRA = "replace_fill")
  fgroup_by(Variable, Country) %>% get_vars(6:16) %>% fbetween(fill = TRUE) %>% head(2)
# # A tibble: 2 x 11
#     AGR   MIN   MAN    PU   CON   WRT   TRA  FIRE   GOV   OTH    SUM
#   <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>  <dbl>
# 1  462. 4509.  942.  216.  895. 1948.  635. 1359. 2373.  773. 14112.
# 2  462. 4509.  942.  216.  895. 1948.  635. 1359. 2373.  773. 14112.

GGDC10S %>% # Same as ... %>% fmean(TRA = "-")
  fgroup_by(Variable, Country) %>% get_vars(6:16) %>% fwithin %>% head(2)
# # A tibble: 2 x 11
#     AGR   MIN   MAN    PU   CON   WRT   TRA  FIRE   GOV   OTH   SUM
#   <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
# 1    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
# 2    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA

Apart from higher speed, fwithin has a mean argument to assign an arbitrary mean to the centered data, the default being mean = 0. A very common choice for such an added mean is the overall mean of the data, which can be added back by invoking mean = "overall.mean":

GGDC10S %>% 
  fgroup_by(Variable, Country) %>% 
    fselect(Country, Variable, AGR:SUM) %>% fwithin(mean = "overall.mean")
# # A tibble: 5,027 x 13
#    Country Variable     AGR     MIN     MAN      PU     CON     WRT     TRA    FIRE     GOV     OTH
#  * <chr>   <chr>      <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>
#  1 BWA     VA       NA      NA      NA          NA  NA      NA      NA      NA      NA      NA     
#  2 BWA     VA       NA      NA      NA          NA  NA      NA      NA      NA      NA      NA     
#  3 BWA     VA       NA      NA      NA          NA  NA      NA      NA      NA      NA      NA     
#  4 BWA     VA       NA      NA      NA          NA  NA      NA      NA      NA      NA      NA     
#  5 BWA     VA        2.53e6  1.86e6  5.54e6 335463.  1.80e6  3.39e6  1.47e6  1.66e6  1.71e6  1.68e6
#  6 BWA     VA        2.53e6  1.86e6  5.54e6 335463.  1.80e6  3.39e6  1.47e6  1.66e6  1.71e6  1.68e6
#  7 BWA     VA        2.53e6  1.86e6  5.54e6 335463.  1.80e6  3.39e6  1.47e6  1.66e6  1.71e6  1.68e6
#  8 BWA     VA        2.53e6  1.86e6  5.54e6 335463.  1.80e6  3.39e6  1.47e6  1.66e6  1.71e6  1.68e6
#  9 BWA     VA        2.53e6  1.86e6  5.54e6 335463.  1.80e6  3.39e6  1.47e6  1.66e6  1.71e6  1.68e6
# 10 BWA     VA        2.53e6  1.86e6  5.54e6 335464.  1.80e6  3.39e6  1.47e6  1.66e6  1.71e6  1.68e6
# # ... with 5,017 more rows, and 1 more variable: SUM <dbl>

This can also be done using weights. The code below uses the SUM column as weights: within each group it subtracts the weighted mean of each variable and then adds back the overall weighted column mean. The SUM column itself is kept unchanged and placed in front.
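
In formula terms, this amounts to the following (a sketch, with \(w_{it}\) denoting the SUM weight of observation \(t\) in group \(i\)):

\[ \tilde{x}_{it} = x_{it} - \frac{\sum_t w_{it}\, x_{it}}{\sum_t w_{it}} + \frac{\sum_{i,t} w_{it}\, x_{it}}{\sum_{i,t} w_{it}} \]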

GGDC10S %>% 
  fgroup_by(Variable, Country) %>% 
    fselect(Country, Variable, AGR:SUM) %>% fwithin(SUM, mean = "overall.mean")
# # A tibble: 5,027 x 13
#    Country Variable   SUM     AGR     MIN     MAN      PU     CON     WRT     TRA    FIRE     GOV
#  * <chr>   <chr>    <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>
#  1 BWA     VA        NA   NA      NA      NA      NA      NA      NA      NA      NA      NA     
#  2 BWA     VA        NA   NA      NA      NA      NA      NA      NA      NA      NA      NA     
#  3 BWA     VA        NA   NA      NA      NA      NA      NA      NA      NA      NA      NA     
#  4 BWA     VA        NA   NA      NA      NA      NA      NA      NA      NA      NA      NA     
#  5 BWA     VA        37.5  4.29e8  3.70e8  7.38e8  2.73e7  2.83e8  4.33e8  1.97e8  1.55e8  2.10e8
#  6 BWA     VA        39.3  4.29e8  3.70e8  7.38e8  2.73e7  2.83e8  4.33e8  1.97e8  1.55e8  2.10e8
#  7 BWA     VA        43.1  4.29e8  3.70e8  7.38e8  2.73e7  2.83e8  4.33e8  1.97e8  1.55e8  2.10e8
#  8 BWA     VA        41.4  4.29e8  3.70e8  7.38e8  2.73e7  2.83e8  4.33e8  1.97e8  1.55e8  2.10e8
#  9 BWA     VA        41.1  4.29e8  3.70e8  7.38e8  2.73e7  2.83e8  4.33e8  1.97e8  1.55e8  2.10e8
# 10 BWA     VA        51.2  4.29e8  3.70e8  7.38e8  2.73e7  2.83e8  4.33e8  1.97e8  1.55e8  2.10e8
# # ... with 5,017 more rows, and 1 more variable: OTH <dbl>

Apart from fbetween and fwithin, the function fscale efficiently scales and centers (i.e. standardizes) data, avoiding sequential calls such as ... %>% fsd(TRA = "/") %>% fmean(TRA = "-") shown in an earlier example.

# This efficiently scales and centers (i.e. standardizes) the data
GGDC10S %>%
  fgroup_by(Variable, Country) %>%
    fselect(Country, Variable, AGR:SUM) %>% fscale
# # A tibble: 5,027 x 13
#    Country Variable    AGR    MIN    MAN     PU    CON    WRT    TRA   FIRE    GOV    OTH    SUM
#  * <chr>   <chr>     <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>
#  1 BWA     VA       NA     NA     NA     NA     NA     NA     NA     NA     NA     NA     NA    
#  2 BWA     VA       NA     NA     NA     NA     NA     NA     NA     NA     NA     NA     NA    
#  3 BWA     VA       NA     NA     NA     NA     NA     NA     NA     NA     NA     NA     NA    
#  4 BWA     VA       NA     NA     NA     NA     NA     NA     NA     NA     NA     NA     NA    
#  5 BWA     VA       -0.738 -0.717 -0.668 -0.805 -0.692 -0.603 -0.589 -0.635 -0.656 -0.596 -0.676
#  6 BWA     VA       -0.739 -0.717 -0.668 -0.805 -0.692 -0.603 -0.589 -0.635 -0.656 -0.596 -0.676
#  7 BWA     VA       -0.736 -0.717 -0.668 -0.805 -0.692 -0.603 -0.589 -0.635 -0.656 -0.595 -0.676
#  8 BWA     VA       -0.734 -0.717 -0.668 -0.805 -0.692 -0.604 -0.589 -0.635 -0.655 -0.595 -0.676
#  9 BWA     VA       -0.730 -0.717 -0.668 -0.805 -0.692 -0.604 -0.588 -0.635 -0.656 -0.596 -0.676
# 10 BWA     VA       -0.729 -0.716 -0.667 -0.803 -0.690 -0.603 -0.588 -0.635 -0.656 -0.596 -0.675
# # ... with 5,017 more rows
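
As a quick sanity check (a sketch, not part of the original examples), the sequential call mentioned above should give numerically identical results to fscale:

seq_std <- GGDC10S %>%
  fgroup_by(Variable, Country) %>%
    get_vars(6:16) %>% fsd(TRA = "/") %>% fmean(TRA = "-")
one_step <- GGDC10S %>%
  fgroup_by(Variable, Country) %>%
    get_vars(6:16) %>% fscale
all.equal(seq_std, one_step, check.attributes = FALSE) # should be TRUE (up to numerical tolerance)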

fscale also has additional mean and sd arguments allowing the user to (group-) scale data to an arbitrary mean and standard deviation. Setting mean = FALSE scales the data while preserving the group means, and is thus different from fsd(..., TRA = "/"), which simply divides all values by the group standard deviation:

# Saving grouped tibble
gGGDC <- GGDC10S %>%
  fgroup_by(Variable, Country) %>%
    fselect(Country, Variable, AGR:SUM)

# Original means
head(fmean(gGGDC)) 
# # A tibble: 6 x 13
#   Variable Country     AGR    MIN     MAN      PU     CON    WRT    TRA   FIRE     GOV    OTH    SUM
#   <chr>    <chr>     <dbl>  <dbl>   <dbl>   <dbl>   <dbl>  <dbl>  <dbl>  <dbl>   <dbl>  <dbl>  <dbl>
# 1 EMP      ARG       1420.   52.1  1932.   102.     742.  1.98e3 6.49e2  628.   2043.  9.92e2 1.05e4
# 2 EMP      BOL        964.   56.0   235.     5.35   123.  2.82e2 1.15e2   44.6    NA   3.96e2 2.22e3
# 3 EMP      BRA      17191.  206.   6991.   365.    3525.  8.51e3 2.05e3 4414.   5307.  5.71e3 5.43e4
# 4 EMP      BWA        188.   10.5    18.1    3.09    25.3 3.63e1 8.36e0   15.3    61.1 2.76e1 3.94e2
# 5 EMP      CHL        702.  101.    625.    29.4    296.  6.95e2 2.58e2  272.     NA   1.00e3 3.98e3
# 6 EMP      CHN     287744. 7050.  67144.  1606.   20852.  2.89e4 1.39e4 4929.  22669.  3.10e4 4.86e5

# Mean Preserving Scaling
head(fmean(fscale(gGGDC, mean = FALSE)))
# # A tibble: 6 x 13
#   Variable Country     AGR    MIN     MAN      PU     CON    WRT    TRA   FIRE     GOV    OTH    SUM
#   <chr>    <chr>     <dbl>  <dbl>   <dbl>   <dbl>   <dbl>  <dbl>  <dbl>  <dbl>   <dbl>  <dbl>  <dbl>
# 1 EMP      ARG       1420.   52.1  1932.   102.     742.  1.98e3 6.49e2  628.   2043.  9.92e2 1.05e4
# 2 EMP      BOL        964.   56.0   235.     5.35   123.  2.82e2 1.15e2   44.6    NA   3.96e2 2.22e3
# 3 EMP      BRA      17191.  206.   6991.   365.    3525.  8.51e3 2.05e3 4414.   5307.  5.71e3 5.43e4
# 4 EMP      BWA        188.   10.5    18.1    3.09    25.3 3.63e1 8.36e0   15.3    61.1 2.76e1 3.94e2
# 5 EMP      CHL        702.  101.    625.    29.4    296.  6.95e2 2.58e2  272.     NA   1.00e3 3.98e3
# 6 EMP      CHN     287744. 7050.  67144.  1606.   20852.  2.89e4 1.39e4 4929.  22669.  3.10e4 4.86e5
head(fsd(fscale(gGGDC, mean = FALSE)))
# # A tibble: 6 x 13
#   Variable Country   AGR   MIN   MAN    PU   CON   WRT   TRA  FIRE   GOV   OTH   SUM
#   <chr>    <chr>   <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
# 1 EMP      ARG      1.    1.    1.00  1.00  1.00  1.00  1.00  1.00  1.00  1.00  1.  
# 2 EMP      BOL      1.    1.00  1.    1.00  1.00  1.    1.    1.   NA     1.    1.  
# 3 EMP      BRA      1.    1.    1.    1.00  1.    1.00  1.00  1.00  1.    1.00  1.00
# 4 EMP      BWA      1.00  1.00  1.    1.    1.    1.00  1.    1.00  1.    1.00  1.00
# 5 EMP      CHL      1.    1.    1.00  1.    1.    1.    1.00  1.   NA     1.    1.00
# 6 EMP      CHN      1.    1.    1.    1.00  1.00  1.    1.    1.    1.00  1.00  1.

One can also set mean = "overall.mean", which group-centers columns on the overall mean as illustrated with fwithin. Another interesting option is setting sd = "within.sd". This group-scales data such that every group has a standard deviation equal to the within-standard deviation of the data:

# Just using VA data for this example
gGGDC <- GGDC10S %>%
  fsubset(Variable == "VA", Country, AGR:SUM) %>% 
      fgroup_by(Country)

# This calculates the within-standard deviation for all columns
fsd(num_vars(ungroup(fwithin(gGGDC))))
#       AGR       MIN       MAN        PU       CON       WRT       TRA      FIRE       GOV       OTH 
#  45046972  40122220  75608708   3062688  30811572  44125207  20676901  16030868  20358973  18780869 
#       SUM 
# 306429102

# This scales all groups to take on the within-standard deviation while preserving group means
fsd(fscale(gGGDC, mean = FALSE, sd = "within.sd"))
# # A tibble: 43 x 12
#    Country      AGR      MIN      MAN     PU     CON     WRT     TRA    FIRE     GOV     OTH     SUM
#    <chr>      <dbl>    <dbl>    <dbl>  <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>
#  1 ARG       4.50e7   4.01e7   7.56e7 3.06e6  3.08e7  4.41e7  2.07e7  1.60e7  2.04e7  1.88e7  3.06e8
#  2 BOL       4.50e7   4.01e7   7.56e7 3.06e6  3.08e7  4.41e7  2.07e7  1.60e7 NA       1.88e7  3.06e8
#  3 BRA       4.50e7   4.01e7   7.56e7 3.06e6  3.08e7  4.41e7  2.07e7  1.60e7  2.04e7  1.88e7  3.06e8
#  4 BWA       4.50e7   4.01e7   7.56e7 3.06e6  3.08e7  4.41e7  2.07e7  1.60e7  2.04e7  1.88e7  3.06e8
#  5 CHL       4.50e7   4.01e7   7.56e7 3.06e6  3.08e7  4.41e7  2.07e7  1.60e7 NA       1.88e7  3.06e8
#  6 CHN       4.50e7   4.01e7   7.56e7 3.06e6  3.08e7  4.41e7  2.07e7  1.60e7  2.04e7  1.88e7  3.06e8
#  7 COL       4.50e7   4.01e7   7.56e7 3.06e6  3.08e7  4.41e7  2.07e7  1.60e7 NA       1.88e7  3.06e8
#  8 CRI       4.50e7   4.01e7   7.56e7 3.06e6  3.08e7  4.41e7  2.07e7  1.60e7  2.04e7  1.88e7  3.06e8
#  9 DEW       4.50e7   4.01e7   7.56e7 3.06e6  3.08e7  4.41e7  2.07e7  1.60e7  2.04e7  1.88e7  3.06e8
# 10 DNK       4.50e7   4.01e7   7.56e7 3.06e6  3.08e7  4.41e7  2.07e7  1.60e7  2.04e7  1.88e7  3.06e8
# # ... with 33 more rows

A grouped scaling operation with both mean = "overall.mean" and sd = "within.sd" thus efficiently achieves a complete harmonization of all groups in the first two moments while broadly preserving the level and scale of the data.
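
A brief sketch of this combined operation (not shown in the original examples): after the call below, every group should have a mean equal to the overall column mean and a standard deviation equal to the within-standard deviation computed above.

harmonized <- fscale(gGGDC, mean = "overall.mean", sd = "within.sd")
head(fmean(harmonized), 3)  # group means now equal the overall column means
head(fsd(harmonized), 3)    # group standard deviations now equal the within-standard deviations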

2.5 Lags / Leads, Differences and Growth Rates

This section introduces 3 further powerful collapse functions: flag, fdiff and fgrowth. The first function, flag, efficiently computes sequences of fully identified lags and leads on time-series and panel-data. The following code computes 1 fully identified panel-lag and 1 fully identified panel-lead of each variable in the data:

GGDC10S %>%
  fselect(-Region, -Regioncode) %>% 
    fgroup_by(Variable, Country) %>% flag(-1:1, Year)
# # A tibble: 5,027 x 36
#    Country Variable  Year F1.AGR   AGR L1.AGR F1.MIN   MIN L1.MIN F1.MAN    MAN L1.MAN  F1.PU     PU
#  * <chr>   <chr>    <dbl>  <dbl> <dbl>  <dbl>  <dbl> <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>
#  1 BWA     VA        1960   NA    NA     NA    NA    NA     NA    NA     NA     NA     NA     NA    
#  2 BWA     VA        1961   NA    NA     NA    NA    NA     NA    NA     NA     NA     NA     NA    
#  3 BWA     VA        1962   NA    NA     NA    NA    NA     NA    NA     NA     NA     NA     NA    
#  4 BWA     VA        1963   16.3  NA     NA     3.49 NA     NA     0.737 NA     NA      0.104 NA    
#  5 BWA     VA        1964   15.7  16.3   NA     2.50  3.49  NA     1.02   0.737 NA      0.135  0.104
#  6 BWA     VA        1965   17.7  15.7   16.3   1.97  2.50   3.49  0.804  1.02   0.737  0.203  0.135
#  7 BWA     VA        1966   19.1  17.7   15.7   2.30  1.97   2.50  0.938  0.804  1.02   0.203  0.203
#  8 BWA     VA        1967   21.1  19.1   17.7   1.84  2.30   1.97  0.750  0.938  0.804  0.203  0.203
#  9 BWA     VA        1968   21.9  21.1   19.1   5.24  1.84   2.30  2.14   0.750  0.938  0.578  0.203
# 10 BWA     VA        1969   23.1  21.9   21.1  10.2   5.24   1.84  4.15   2.14   0.750  1.12   0.578
# # ... with 5,017 more rows, and 22 more variables: L1.PU <dbl>, F1.CON <dbl>, CON <dbl>,
# #   L1.CON <dbl>, F1.WRT <dbl>, WRT <dbl>, L1.WRT <dbl>, F1.TRA <dbl>, TRA <dbl>, L1.TRA <dbl>,
# #   F1.FIRE <dbl>, FIRE <dbl>, L1.FIRE <dbl>, F1.GOV <dbl>, GOV <dbl>, L1.GOV <dbl>, F1.OTH <dbl>,
# #   OTH <dbl>, L1.OTH <dbl>, F1.SUM <dbl>, SUM <dbl>, L1.SUM <dbl>

If the time-variable passed does not exactly identify the data (e.g. because of gaps or repeated values within groups), all 3 functions will issue appropriate error messages. flag, fdiff and fgrowth support unbalanced panels with different start and end periods and durations of coverage for each individual, but not irregular panels. A workaround for such panels exists with the function seqid, which generates a new panel-id identifying consecutive time-sequences at the sub-individual level (see ?seqid).
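
For instance (a sketch, wrapped in try() so it can be run without stopping execution): grouping GGDC10S only by Country leaves both a VA and an EMP observation for the same year in each group, so Year no longer identifies the data within groups and flag signals an error.

try(GGDC10S %>% fgroup_by(Country) %>% get_vars(5:16) %>% flag(1, Year))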

It is also possible to omit the time-variable if one is certain that the data is sorted:

GGDC10S %>%
  fselect(Variable, Country,AGR:SUM) %>% 
    fgroup_by(Variable, Country) %>% flag
# # A tibble: 5,027 x 13
#    Variable Country L1.AGR L1.MIN L1.MAN  L1.PU L1.CON L1.WRT L1.TRA L1.FIRE L1.GOV L1.OTH L1.SUM
#  * <chr>    <chr>    <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>  <dbl>   <dbl>  <dbl>  <dbl>  <dbl>
#  1 VA       BWA       NA    NA    NA     NA     NA      NA     NA      NA     NA     NA      NA  
#  2 VA       BWA       NA    NA    NA     NA     NA      NA     NA      NA     NA     NA      NA  
#  3 VA       BWA       NA    NA    NA     NA     NA      NA     NA      NA     NA     NA      NA  
#  4 VA       BWA       NA    NA    NA     NA     NA      NA     NA      NA     NA     NA      NA  
#  5 VA       BWA       NA    NA    NA     NA     NA      NA     NA      NA     NA     NA      NA  
#  6 VA       BWA       16.3   3.49  0.737  0.104  0.660   6.24   1.66    1.12   4.82   2.34   37.5
#  7 VA       BWA       15.7   2.50  1.02   0.135  1.35    7.06   1.94    1.25   5.70   2.68   39.3
#  8 VA       BWA       17.7   1.97  0.804  0.203  1.35    8.27   2.15    1.36   6.37   2.99   43.1
#  9 VA       BWA       19.1   2.30  0.938  0.203  0.897   4.31   1.72    1.54   7.04   3.31   41.4
# 10 VA       BWA       21.1   1.84  0.750  0.203  1.22    5.17   2.44    1.03   5.03   2.36   41.1
# # ... with 5,017 more rows

fdiff computes sequences of lagged / leaded and iterated differences as well as quasi-differences and log-differences on time-series and panel-data. The code below computes the 1- and 10-year first and second differences of each variable in the data:

GGDC10S %>%
  fselect(-Region, -Regioncode) %>% 
    fgroup_by(Variable, Country) %>% fdiff(c(1, 10), 1:2, Year)
# # A tibble: 5,027 x 47
#    Country Variable  Year D1.AGR D2.AGR L10D1.AGR L10D2.AGR D1.MIN D2.MIN L10D1.MIN L10D2.MIN D1.MAN
#  * <chr>   <chr>    <dbl>  <dbl>  <dbl>     <dbl>     <dbl>  <dbl>  <dbl>     <dbl>     <dbl>  <dbl>
#  1 BWA     VA        1960 NA     NA            NA        NA NA     NA            NA        NA NA    
#  2 BWA     VA        1961 NA     NA            NA        NA NA     NA            NA        NA NA    
#  3 BWA     VA        1962 NA     NA            NA        NA NA     NA            NA        NA NA    
#  4 BWA     VA        1963 NA     NA            NA        NA NA     NA            NA        NA NA    
#  5 BWA     VA        1964 NA     NA            NA        NA NA     NA            NA        NA NA    
#  6 BWA     VA        1965 -0.575 NA            NA        NA -0.998 NA            NA        NA  0.282
#  7 BWA     VA        1966  1.95   2.53         NA        NA -0.525  0.473        NA        NA -0.214
#  8 BWA     VA        1967  1.47  -0.488        NA        NA  0.328  0.854        NA        NA  0.134
#  9 BWA     VA        1968  1.95   0.488        NA        NA -0.460 -0.788        NA        NA -0.188
# 10 BWA     VA        1969  0.763 -1.19         NA        NA  3.41   3.87         NA        NA  1.39 
# # ... with 5,017 more rows, and 35 more variables: D2.MAN <dbl>, L10D1.MAN <dbl>, L10D2.MAN <dbl>,
# #   D1.PU <dbl>, D2.PU <dbl>, L10D1.PU <dbl>, L10D2.PU <dbl>, D1.CON <dbl>, D2.CON <dbl>,
# #   L10D1.CON <dbl>, L10D2.CON <dbl>, D1.WRT <dbl>, D2.WRT <dbl>, L10D1.WRT <dbl>, L10D2.WRT <dbl>,
# #   D1.TRA <dbl>, D2.TRA <dbl>, L10D1.TRA <dbl>, L10D2.TRA <dbl>, D1.FIRE <dbl>, D2.FIRE <dbl>,
# #   L10D1.FIRE <dbl>, L10D2.FIRE <dbl>, D1.GOV <dbl>, D2.GOV <dbl>, L10D1.GOV <dbl>,
# #   L10D2.GOV <dbl>, D1.OTH <dbl>, D2.OTH <dbl>, L10D1.OTH <dbl>, L10D2.OTH <dbl>, D1.SUM <dbl>,
# #   D2.SUM <dbl>, L10D1.SUM <dbl>, L10D2.SUM <dbl>

Log-differences of the form \(\log(x_t) - \log(x_{t-s})\) are also easily computed, although one caveat of log-differencing in C++ is that log(NA) - log(NA) gives NaN rather than NA.

GGDC10S %>%
  fselect(-Region, -Regioncode) %>% 
    fgroup_by(Variable, Country) %>% fdiff(c(1, 10), 1, Year, logdiff = TRUE)
# # A tibble: 5,027 x 25
#    Country Variable  Year Dlog1.AGR L10Dlog1.AGR Dlog1.MIN L10Dlog1.MIN Dlog1.MAN L10Dlog1.MAN
#  * <chr>   <chr>    <dbl>     <dbl>        <dbl>     <dbl>        <dbl>     <dbl>        <dbl>
#  1 BWA     VA        1960   NA                NA    NA               NA    NA               NA
#  2 BWA     VA        1961  NaN                NA   NaN               NA   NaN               NA
#  3 BWA     VA        1962  NaN                NA   NaN               NA   NaN               NA
#  4 BWA     VA        1963  NaN                NA   NaN               NA   NaN               NA
#  5 BWA     VA        1964  NaN                NA   NaN               NA   NaN               NA
#  6 BWA     VA        1965   -0.0359           NA    -0.336           NA     0.324           NA
#  7 BWA     VA        1966    0.117            NA    -0.236           NA    -0.236           NA
#  8 BWA     VA        1967    0.0796           NA     0.154           NA     0.154           NA
#  9 BWA     VA        1968    0.0972           NA    -0.223           NA    -0.223           NA
# 10 BWA     VA        1969    0.0355           NA     1.05            NA     1.05            NA
# # ... with 5,017 more rows, and 16 more variables: Dlog1.PU <dbl>, L10Dlog1.PU <dbl>,
# #   Dlog1.CON <dbl>, L10Dlog1.CON <dbl>, Dlog1.WRT <dbl>, L10Dlog1.WRT <dbl>, Dlog1.TRA <dbl>,
# #   L10Dlog1.TRA <dbl>, Dlog1.FIRE <dbl>, L10Dlog1.FIRE <dbl>, Dlog1.GOV <dbl>, L10Dlog1.GOV <dbl>,
# #   Dlog1.OTH <dbl>, L10Dlog1.OTH <dbl>, Dlog1.SUM <dbl>, L10Dlog1.SUM <dbl>

Finally, it is also possible to compute quasi-differences and quasi-log-differences of the form \(x_t - \rho x_{t-s}\) or \(\log(x_t) - \rho \log(x_{t-s})\):

GGDC10S %>%
  fselect(-Region, -Regioncode) %>% 
    fgroup_by(Variable, Country) %>% fdiff(t = Year, rho = 0.95)
# # A tibble: 5,027 x 14
#    Country Variable  Year QD1.AGR QD1.MIN QD1.MAN  QD1.PU QD1.CON QD1.WRT QD1.TRA QD1.FIRE QD1.GOV
#  * <chr>   <chr>    <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>    <dbl>   <dbl>
#  1 BWA     VA        1960  NA      NA      NA     NA      NA       NA      NA       NA      NA    
#  2 BWA     VA        1961  NA      NA      NA     NA      NA       NA      NA       NA      NA    
#  3 BWA     VA        1962  NA      NA      NA     NA      NA       NA      NA       NA      NA    
#  4 BWA     VA        1963  NA      NA      NA     NA      NA       NA      NA       NA      NA    
#  5 BWA     VA        1964  NA      NA      NA     NA      NA       NA      NA       NA      NA    
#  6 BWA     VA        1965   0.241  -0.824   0.318  0.0359  0.719    1.13    0.363    0.184   1.11 
#  7 BWA     VA        1966   2.74   -0.401  -0.163  0.0743  0.0673   1.56    0.312    0.174   0.955
#  8 BWA     VA        1967   2.35    0.427   0.174  0.0101 -0.381   -3.55   -0.323    0.246   0.988
#  9 BWA     VA        1968   2.91   -0.345  -0.141  0.0101  0.365    1.08    0.804   -0.427  -1.66 
# 10 BWA     VA        1969   1.82    3.50    1.43   0.385   2.32     0.841   0.397    0.252   0.818
# # ... with 5,017 more rows, and 2 more variables: QD1.OTH <dbl>, QD1.SUM <dbl>

The quasi-differencing feature was added to fdiff to facilitate the preparation of time-series and panel data for least-squares estimations suffering from serial correlation following Cochrane & Orcutt (1949).
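
A minimal sketch of such a preparation (not part of the original vignette; rho_hat is a placeholder value that would in practice be estimated, e.g. from the residuals of an initial regression):

rho_hat <- 0.5  # hypothetical autocorrelation estimate
GGDC10S %>%
  fselect(Variable, Country, Year, AGR, SUM) %>%
    fgroup_by(Variable, Country) %>% fdiff(t = Year, rho = rho_hat)
# The quasi-differenced columns (QD1.AGR, QD1.SUM) could then enter a least-squares
# regression purged of first-order serial correlation.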

Finally, fgrowth computes growth rates in the same way. By default, exact growth rates are computed in percentage terms using \((x_t - x_{t-s}) / x_{t-s} \times 100\) (the default argument is scale = 100). The user can also request growth rates obtained by log-differencing, computed as \(\log(x_t / x_{t-s}) \times 100\).

# Exact growth rates, computed as: (x - lag(x)) / lag(x) * 100
GGDC10S %>%
  fselect(-Region, -Regioncode) %>% 
    fgroup_by(Variable, Country) %>% fgrowth(c(1, 10), 1, Year)
# # A tibble: 5,027 x 25
#    Country Variable  Year G1.AGR L10G1.AGR G1.MIN L10G1.MIN G1.MAN L10G1.MAN G1.PU L10G1.PU G1.CON
#  * <chr>   <chr>    <dbl>  <dbl>     <dbl>  <dbl>     <dbl>  <dbl>     <dbl> <dbl>    <dbl>  <dbl>
#  1 BWA     VA        1960  NA           NA   NA          NA   NA          NA  NA         NA   NA  
#  2 BWA     VA        1961  NA           NA   NA          NA   NA          NA  NA         NA   NA  
#  3 BWA     VA        1962  NA           NA   NA          NA   NA          NA  NA         NA   NA  
#  4 BWA     VA        1963  NA           NA   NA          NA   NA          NA  NA         NA   NA  
#  5 BWA     VA        1964  NA           NA   NA          NA   NA          NA  NA         NA   NA  
#  6 BWA     VA        1965  -3.52        NA  -28.6        NA   38.2        NA  29.4       NA  104. 
#  7 BWA     VA        1966  12.4         NA  -21.1        NA  -21.1        NA  50.0       NA    0  
#  8 BWA     VA        1967   8.29        NA   16.7        NA   16.7        NA   0         NA  -33.3
#  9 BWA     VA        1968  10.2         NA  -20          NA  -20          NA   0         NA   35.7
# 10 BWA     VA        1969   3.61        NA  185.         NA  185.         NA 185.        NA  185. 
# # ... with 5,017 more rows, and 13 more variables: L10G1.CON <dbl>, G1.WRT <dbl>, L10G1.WRT <dbl>,
# #   G1.TRA <dbl>, L10G1.TRA <dbl>, G1.FIRE <dbl>, L10G1.FIRE <dbl>, G1.GOV <dbl>, L10G1.GOV <dbl>,
# #   G1.OTH <dbl>, L10G1.OTH <dbl>, G1.SUM <dbl>, L10G1.SUM <dbl>

# Log-difference growth rates, computed as: log(x / lag(x)) * 100
GGDC10S %>%
  fselect(-Region, -Regioncode) %>% 
    fgroup_by(Variable, Country) %>% fgrowth(c(1, 10), 1, Year, logdiff = TRUE)
# # A tibble: 5,027 x 25
#    Country Variable  Year Dlog1.AGR L10Dlog1.AGR Dlog1.MIN L10Dlog1.MIN Dlog1.MAN L10Dlog1.MAN
#  * <chr>   <chr>    <dbl>     <dbl>        <dbl>     <dbl>        <dbl>     <dbl>        <dbl>
#  1 BWA     VA        1960     NA              NA      NA             NA      NA             NA
#  2 BWA     VA        1961    NaN              NA     NaN             NA     NaN             NA
#  3 BWA     VA        1962    NaN              NA     NaN             NA     NaN             NA
#  4 BWA     VA        1963    NaN              NA     NaN             NA     NaN             NA
#  5 BWA     VA        1964    NaN              NA     NaN             NA     NaN             NA
#  6 BWA     VA        1965     -3.59           NA     -33.6           NA      32.4           NA
#  7 BWA     VA        1966     11.7            NA     -23.6           NA     -23.6           NA
#  8 BWA     VA        1967      7.96           NA      15.4           NA      15.4           NA
#  9 BWA     VA        1968      9.72           NA     -22.3           NA     -22.3           NA
# 10 BWA     VA        1969      3.55           NA     105.            NA     105.            NA
# # ... with 5,017 more rows, and 16 more variables: Dlog1.PU <dbl>, L10Dlog1.PU <dbl>,
# #   Dlog1.CON <dbl>, L10Dlog1.CON <dbl>, Dlog1.WRT <dbl>, L10Dlog1.WRT <dbl>, Dlog1.TRA <dbl>,
# #   L10Dlog1.TRA <dbl>, Dlog1.FIRE <dbl>, L10Dlog1.FIRE <dbl>, Dlog1.GOV <dbl>, L10Dlog1.GOV <dbl>,
# #   Dlog1.OTH <dbl>, L10Dlog1.OTH <dbl>, Dlog1.SUM <dbl>, L10Dlog1.SUM <dbl>

fdiff and fgrowth can also compute leaded (forward) differences and growth rates (e.g. ... %>% fgrowth(-c(1, 10), 1:2, Year) would compute 1- and 10-year leaded first and second growth rates). Again it is possible to perform sequential operations:

# This computes the 1 and 10-year growth rates, for the current period and lagged by one period
GGDC10S %>%
  fselect(-Region, -Regioncode) %>% 
    fgroup_by(Variable, Country) %>% fgrowth(c(1, 10), 1, Year) %>% flag(0:1, Year)
# # A tibble: 5,027 x 47
#    Country Variable  Year G1.AGR L1.G1.AGR L10G1.AGR L1.L10G1.AGR G1.MIN L1.G1.MIN L10G1.MIN
#  * <chr>   <chr>    <dbl>  <dbl>     <dbl>     <dbl>        <dbl>  <dbl>     <dbl>     <dbl>
#  1 BWA     VA        1960  NA        NA           NA           NA   NA        NA          NA
#  2 BWA     VA        1961  NA        NA           NA           NA   NA        NA          NA
#  3 BWA     VA        1962  NA        NA           NA           NA   NA        NA          NA
#  4 BWA     VA        1963  NA        NA           NA           NA   NA        NA          NA
#  5 BWA     VA        1964  NA        NA           NA           NA   NA        NA          NA
#  6 BWA     VA        1965  -3.52     NA           NA           NA  -28.6      NA          NA
#  7 BWA     VA        1966  12.4      -3.52        NA           NA  -21.1     -28.6        NA
#  8 BWA     VA        1967   8.29     12.4         NA           NA   16.7     -21.1        NA
#  9 BWA     VA        1968  10.2       8.29        NA           NA  -20        16.7        NA
# 10 BWA     VA        1969   3.61     10.2         NA           NA  185.      -20          NA
# # ... with 5,017 more rows, and 37 more variables: L1.L10G1.MIN <dbl>, G1.MAN <dbl>,
# #   L1.G1.MAN <dbl>, L10G1.MAN <dbl>, L1.L10G1.MAN <dbl>, G1.PU <dbl>, L1.G1.PU <dbl>,
# #   L10G1.PU <dbl>, L1.L10G1.PU <dbl>, G1.CON <dbl>, L1.G1.CON <dbl>, L10G1.CON <dbl>,
# #   L1.L10G1.CON <dbl>, G1.WRT <dbl>, L1.G1.WRT <dbl>, L10G1.WRT <dbl>, L1.L10G1.WRT <dbl>,
# #   G1.TRA <dbl>, L1.G1.TRA <dbl>, L10G1.TRA <dbl>, L1.L10G1.TRA <dbl>, G1.FIRE <dbl>,
# #   L1.G1.FIRE <dbl>, L10G1.FIRE <dbl>, L1.L10G1.FIRE <dbl>, G1.GOV <dbl>, L1.G1.GOV <dbl>,
# #   L10G1.GOV <dbl>, L1.L10G1.GOV <dbl>, G1.OTH <dbl>, L1.G1.OTH <dbl>, L10G1.OTH <dbl>,
# #   L1.L10G1.OTH <dbl>, G1.SUM <dbl>, L1.G1.SUM <dbl>, L10G1.SUM <dbl>, L1.L10G1.SUM <dbl>

3. Benchmarks

This section seeks to demonstrate that the functionality introduced in the preceding 2 sections indeed produces code that evaluates substantially faster than native dplyr.

To do this properly, the different components of a typical piped call (selecting / subsetting, grouping, and performing some computation) are benchmarked separately on 2 different data sizes.

All benchmarks are run on a Windows 8.1 laptop with a 2x 2.2 GHz Intel i5 processor, 8GB DDR3 RAM and a Samsung 850 EVO SSD hard drive.
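
All timings are obtained with the microbenchmark package (attached here in case it was not already loaded earlier in the vignette):

library(microbenchmark)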

3.1 Data

Benchmarks are run on the original GGDC10S data used throughout this vignette and a larger dataset with approx. 1 million observations, obtained by replicating and row-binding GGDC10S 200 times while maintaining unique groups.

# This shows the groups in GGDC10S
GRP(GGDC10S, ~ Variable + Country)
# collapse grouping object of length 5027 with 85 ordered groups
# 
# Call: GRP.default(X = GGDC10S, by = ~Variable + Country), unordered
# 
# Distribution of group sizes: 
#    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
#    4.00   53.00   62.00   59.14   63.00   65.00 
# 
# Groups with sizes: 
# EMP.ARG EMP.BOL EMP.BRA EMP.BWA EMP.CHL EMP.CHN 
#      62      61      62      52      63      62 
#   ---
# VA.TWN VA.TZA VA.USA VA.VEN VA.ZAF VA.ZMB 
#     63     52     65     63     52     52

# This replicates the data 200 times 
data <- replicate(200, GGDC10S, simplify = FALSE) 
# This function adds a number i to the country and variable columns of each dataset
uniquify <- function(x, i) `get_vars<-`(x, c(1,4), value = lapply(unclass(x)[c(1,4)], paste0, i))
# Making datasets unique and row-binding them
data <- unlist2d(Map(uniquify, data, as.list(1:200)), idcols = FALSE)
dim(data)
# [1] 1005400      16

# This shows the groups in the replicated data
GRP(data, ~ Variable + Country)
# collapse grouping object of length 1005400 with 17000 ordered groups
# 
# Call: GRP.default(X = data, by = ~Variable + Country), unordered
# 
# Distribution of group sizes: 
#    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
#    4.00   53.00   62.00   59.14   63.00   65.00 
# 
# Groups with sizes: 
# EMP1.ARG1 EMP1.BOL1 EMP1.BRA1 EMP1.BWA1 EMP1.CHL1 EMP1.CHN1 
#        62        61        62        52        63        62 
#   ---
# VA99.TWN99 VA99.TZA99 VA99.USA99 VA99.VEN99 VA99.ZAF99 VA99.ZMB99 
#         63         52         65         63         52         52

gc()
#            used  (Mb) gc trigger  (Mb) max used  (Mb)
# Ncells  1849039  98.8    3536118 188.9  3536118 188.9
# Vcells 19744242 150.7   28138280 214.7 22920896 174.9

3.2 Selecting, Subsetting and Grouping

## Selecting columns
# Small
microbenchmark(dplyr = select(GGDC10S, Country, Variable, AGR:SUM),
               collapse = fselect(GGDC10S, Country, Variable, AGR:SUM))
# Unit: microseconds
#      expr      min       lq       mean   median       uq      max neval
#     dplyr 3484.298 3527.360 3656.58006 3605.230 3637.806 6765.999   100
#  collapse   12.495   17.404   28.01564   20.528   39.270   48.642   100

# Large
microbenchmark(dplyr = select(data, Country, Variable, AGR:SUM),
               collapse = fselect(data, Country, Variable, AGR:SUM))
# Unit: microseconds
#      expr      min        lq       mean   median       uq      max neval
#     dplyr 3495.007 3513.0800 3600.46001 3527.807 3569.307 6587.946   100
#  collapse   12.495   14.2805   25.25789   17.627   36.593   44.625   100

## Subsetting columns 
# Small
microbenchmark(dplyr = filter(GGDC10S, Variable == "VA"),
               collapse = fsubset(GGDC10S, Variable == "VA"))
# Unit: microseconds
#      expr     min       lq      mean    median       uq      max neval
#     dplyr 813.063 955.4155 1301.3549 1131.4595 1343.873 3291.519   100
#  collapse 153.063 177.1605  284.1392  200.5885  332.454 1236.997   100

# Large
microbenchmark(dplyr = filter(data, Variable == "VA"),
               collapse = fsubset(data, Variable == "VA"))
# Unit: milliseconds
#      expr      min        lq      mean    median        uq       max neval
#     dplyr 13.88230 14.187534 18.073788 15.496823 17.197024 162.43751   100
#  collapse  7.68616  7.793482  9.053992  7.964395  9.041635  24.29057   100

## Grouping 
# Small
microbenchmark(dplyr = group_by(GGDC10S, Country, Variable),
               collapse = fgroup_by(GGDC10S, Country, Variable))
# Unit: microseconds
#      expr      min       lq      mean   median       uq      max neval
#     dplyr 1154.441 1189.026 1217.1971 1203.082 1224.056 1978.213   100
#  collapse  356.106  370.385  388.9939  391.805  399.838  438.661   100

# Large
microbenchmark(dplyr = group_by(data, Country, Variable),
               collapse = fgroup_by(data, Country, Variable), times = 10)
# Unit: milliseconds
#      expr       min        lq      mean   median        uq       max neval
#     dplyr 146.10933 146.57209 151.04049 148.4976 152.87845 164.92355    10
#  collapse  66.93483  67.04595  67.36586  67.1767  67.74343  68.24948    10

## Computing a new column 
# Small
microbenchmark(dplyr = mutate(GGDC10S, NEW = AGR+1),
               collapse = ftransform(GGDC10S, NEW = AGR+1))
# Unit: microseconds
#      expr     min       lq      mean   median      uq      max neval
#     dplyr 535.943 542.4135 570.98235 545.3145 550.223 2844.826   100
#  collapse  22.312  27.2210  33.84809  38.1540  39.270   54.443   100

# Large
microbenchmark(dplyr = mutate(data, NEW = AGR+1),
               collapse = ftransform(data, NEW = AGR+1))
# Unit: milliseconds
#      expr      min       lq     mean   median       uq      max neval
#     dplyr 4.308070 4.394195 5.573354 4.427663 4.652572 18.20420   100
#  collapse 3.540525 3.641822 4.643973 3.675515 3.701620 17.23005   100

## All combined with pipes 
# Small
microbenchmark(dplyr = filter(GGDC10S, Variable == "VA") %>% 
                       select(Country, AGR:SUM) %>% 
                       mutate(NEW = AGR+1) %>%
                       group_by(Country),
               collapse = fsubset(GGDC10S, Variable == "VA", Country, AGR:SUM) %>% 
                       ftransform(NEW = AGR+1) %>%
                       fgroup_by(Country))
# Unit: microseconds
#      expr      min       lq      mean    median       uq       max neval
#     dplyr 5472.775 5594.154 6018.0081 5727.8045 6248.352 10536.340   100
#  collapse  445.801  521.440  596.2444  567.1805  631.440  1087.951   100

# Large
microbenchmark(dplyr = filter(data, Variable == "VA") %>% 
                       select(Country, AGR:SUM) %>% 
                       mutate(NEW = AGR+1) %>%
                       group_by(Country),
               collapse = fsubset(data, Variable == "VA", Country, AGR:SUM) %>% 
                       ftransform(NEW = AGR+1) %>%
                       fgroup_by(Country), times = 10)
# Unit: milliseconds
#      expr       min       lq      mean    median        uq       max neval
#     dplyr 18.162257 18.37869 19.919935 18.585300 18.837875 28.691902    10
#  collapse  7.980683  8.02263  8.273377  8.088898  8.140886  9.225713    10

gc()
#            used  (Mb) gc trigger  (Mb) max used  (Mb)
# Ncells  1849541  98.8    3536118 188.9  3536118 188.9
# Vcells 20834241 159.0   33845936 258.3 33843751 258.3

3.3 Aggregation

## Grouping the data
cgGGDC10S <- fgroup_by(GGDC10S, Variable, Country) %>% fselect(-Region, -Regioncode)
gGGDC10S <- group_by(GGDC10S, Variable, Country) %>% fselect(-Region, -Regioncode)
cgdata <- fgroup_by(data, Variable, Country) %>% fselect(-Region, -Regioncode)
gdata <- group_by(data, Variable, Country) %>% fselect(-Region, -Regioncode)
rm(data, GGDC10S) 
gc()
#            used  (Mb) gc trigger  (Mb) max used  (Mb)
# Ncells  1866672  99.7    3536118 188.9  3536118 188.9
# Vcells 19935919 152.1   33845936 258.3 33843751 258.3

## Conversion of the grouping object: this extra time would be incurred in all hybrid calls,
## i.e. when calling collapse functions on data grouped with dplyr::group_by
# Small
microbenchmark(GRP(gGGDC10S))
# Unit: microseconds
#           expr    min     lq     mean median     uq    max neval
#  GRP(gGGDC10S) 29.452 30.345 31.50531 30.791 31.238 90.588   100

# Large
microbenchmark(GRP(gdata))
# Unit: milliseconds
#        expr      min       lq     mean   median       uq      max neval
#  GRP(gdata) 4.159916 4.241355 4.817822 4.301376 4.394196 22.47256   100


## Sum 
# Small
microbenchmark(dplyr = summarise_all(gGGDC10S, sum, na.rm = TRUE),
               collapse = fsum(cgGGDC10S))
# Unit: microseconds
#      expr      min       lq      mean   median       uq      max neval
#     dplyr 1382.027 1391.845 1405.1610 1398.093 1407.240 1612.738   100
#  collapse  237.850  244.098  261.4074  250.568  275.335  327.992   100

# Large
microbenchmark(dplyr = summarise_all(gdata, sum, na.rm = TRUE),
               collapse = fsum(cgdata), times = 10)
# Unit: milliseconds
#      expr      min       lq     mean   median       uq      max neval
#     dplyr 90.80548 93.32588 93.27738 93.72728 93.78061 95.12828    10
#  collapse 39.78873 40.41883 40.52941 40.69037 40.75530 40.93201    10

## Mean
# Small
microbenchmark(dplyr = summarise_all(gGGDC10S, mean.default, na.rm = TRUE),
               collapse = fmean(cgGGDC10S))
# Unit: microseconds
#      expr      min       lq      mean   median        uq       max neval
#     dplyr 5952.937 6059.367 7022.5951 6133.220 6790.5420 28032.795   100
#  collapse  253.022  268.641  306.6657  323.083  330.2235   373.062   100

# Large
microbenchmark(dplyr = summarise_all(gdata, mean.default, na.rm = TRUE),
               collapse = fmean(cgdata), times = 10)
# Unit: milliseconds
#      expr       min         lq       mean     median         uq        max neval
#     dplyr 1069.6810 1072.22822 1088.73342 1088.79157 1102.81087 1108.67054    10
#  collapse   42.6822   42.72861   43.00456   42.91982   43.40556   43.46625    10

## Median
# Small
microbenchmark(dplyr = summarise_all(gGGDC10S, median, na.rm = TRUE),
               collapse = fmedian(cgGGDC10S))
# Unit: microseconds
#      expr       min         lq       mean    median         uq       max neval
#     dplyr 43426.982 44598.8265 48298.2816 45616.493 50668.9025 74759.774   100
#  collapse   494.442   517.6465   565.8058   568.296   583.0215  1006.287   100

# Large
microbenchmark(dplyr = summarise_all(gdata, median, na.rm = TRUE),
               collapse = fmedian(cgdata), times = 2)
# Unit: milliseconds
#      expr        min         lq       mean     median        uq       max neval
#     dplyr 9057.25573 9057.25573 9133.36719 9133.36719 9209.4786 9209.4786     2
#  collapse   87.92272   87.92272   94.27527   94.27527  100.6278  100.6278     2

## Standard Deviation
# Small
microbenchmark(dplyr = summarise_all(gGGDC10S, sd, na.rm = TRUE),
               collapse = fsd(cgGGDC10S))
# Unit: microseconds
#      expr       min        lq       mean    median        uq       max neval
#     dplyr 18201.972 18510.552 19660.2991 18953.675 19456.596 33023.177   100
#  collapse   426.166   456.065   495.3702   506.937   525.456   592.616   100

# Large
microbenchmark(dplyr = summarise_all(gdata, sd, na.rm = TRUE),
               collapse = fsd(cgdata), times = 2)
# Unit: milliseconds
#      expr        min         lq       mean     median         uq        max neval
#     dplyr 3593.06014 3593.06014 3694.16992 3694.16992 3795.27969 3795.27969     2
#  collapse   76.76565   76.76565   76.81229   76.81229   76.85892   76.85892     2

## Maximum
# Small
microbenchmark(dplyr = summarise_all(gGGDC10S, max, na.rm = TRUE),
               collapse = fmax(cgGGDC10S))
# Unit: microseconds
#      expr      min       lq     mean   median       uq      max neval
#     dplyr 1217.362 1230.526 1247.408 1236.105 1244.584 1591.317   100
#  collapse  176.714  187.201  204.194  204.159  209.067  590.832   100

# Large
microbenchmark(dplyr = summarise_all(gdata, max, na.rm = TRUE),
               collapse = fmax(cgdata), times = 10)
# Unit: milliseconds
#      expr      min       lq     mean   median       uq      max neval
#     dplyr 58.64490 58.89167 60.50664 59.32030 61.59236 66.00485    10
#  collapse 23.70107 23.75061 24.04567 24.09556 24.13795 24.83767    10

## First Value
# Small
microbenchmark(dplyr = summarise_all(gGGDC10S, first),
               collapse = ffirst(cgGGDC10S, na.rm = FALSE))
# Unit: microseconds
#      expr     min       lq      mean   median      uq      max neval
#     dplyr 664.462 672.7175 720.66694 681.4195 758.398 1147.301   100
#  collapse  58.012  66.9375  80.34256  83.4485  93.712  175.822   100

# Large
microbenchmark(dplyr = summarise_all(gdata, first),
               collapse = ffirst(cgdata, na.rm = FALSE), times = 10)
# Unit: milliseconds
#      expr      min        lq      mean    median        uq       max neval
#     dplyr 14.33479 14.461529 15.117245 14.853112 15.841771 15.980555    10
#  collapse  4.36028  4.372776  4.425254  4.414276  4.429002  4.606609    10

## Number of Distinct Values
# Small
microbenchmark(dplyr = summarise_all(gGGDC10S, n_distinct, na.rm = TRUE),
               collapse = fNdistinct(cgGGDC10S))
# Unit: milliseconds
#      expr       min        lq      mean    median        uq       max neval
#     dplyr 13.666317 14.010150 14.939442 14.300657 15.740474 26.882817   100
#  collapse  1.322676  1.371094  1.439517  1.421074  1.458112  1.878254   100

# Large
microbenchmark(dplyr = summarise_all(gdata, n_distinct, na.rm = TRUE),
               collapse = fNdistinct(cgdata), times = 5)
# Unit: milliseconds
#      expr       min        lq      mean    median        uq       max neval
#     dplyr 2494.7666 2526.5961 2553.5629 2530.2290 2534.2796 2681.9432     5
#  collapse  299.7027  301.5952  312.0312  308.6196  318.2482  331.9904     5

gc()
#            used  (Mb) gc trigger  (Mb) max used  (Mb)
# Ncells  1868723  99.9    3536118 188.9  3536118 188.9
# Vcells 19940588 152.2   33845936 258.3 33845936 258.3

Below are some additional benchmarks for weighted aggregations and aggregations using the statistical mode, which cannot easily or efficiently be performed with dplyr.

## Weighted Mean
# Small
microbenchmark(fmean(cgGGDC10S, SUM)) 
# Unit: microseconds
#                   expr     min      lq     mean   median       uq    max neval
#  fmean(cgGGDC10S, SUM) 278.458 280.243 288.1821 281.5825 295.6395 393.59   100

# Large 
microbenchmark(fmean(cgdata, SUM), times = 10) 
# Unit: milliseconds
#                expr      min      lq     mean  median       uq      max neval
#  fmean(cgdata, SUM) 47.61189 47.7712 49.68274 48.5557 51.26219 53.66389    10

## Weighted Standard-Deviation
# Small
microbenchmark(fsd(cgGGDC10S, SUM)) 
# Unit: microseconds
#                 expr     min      lq     mean median      uq     max neval
#  fsd(cgGGDC10S, SUM) 427.951 430.852 439.2681 432.86 448.032 546.653   100

# Large 
microbenchmark(fsd(cgdata, SUM), times = 10) 
# Unit: milliseconds
#              expr      min       lq     mean   median       uq      max neval
#  fsd(cgdata, SUM) 77.00306 77.16683 77.29374 77.26768 77.43949 77.62289    10

## Statistical Mode
# Small
microbenchmark(fmode(cgGGDC10S)) 
# Unit: milliseconds
#              expr      min       lq     mean   median       uq     max neval
#  fmode(cgGGDC10S) 1.549817 1.572352 1.601791 1.608275 1.619877 1.79079   100

# Large 
microbenchmark(fmode(cgdata), times = 10) 
# Unit: milliseconds
#           expr      min      lq     mean   median       uq      max neval
#  fmode(cgdata) 378.4514 382.075 395.8943 396.3004 404.9652 423.0217    10

## Weighted Statistical Mode
# Small
microbenchmark(fmode(cgGGDC10S, SUM)) 
# Unit: milliseconds
#                   expr     min      lq     mean   median      uq      max neval
#  fmode(cgGGDC10S, SUM) 1.83943 1.85505 1.883979 1.864644 1.90079 2.303081   100

# Large 
microbenchmark(fmode(cgdata, SUM), times = 10) 
# Unit: milliseconds
#                expr      min       lq    mean   median       uq      max neval
#  fmode(cgdata, SUM) 446.6157 456.0266 481.108 476.2327 514.0155 521.9574    10

gc()
#            used  (Mb) gc trigger  (Mb) max used  (Mb)
# Ncells  1868044  99.8    3536118 188.9  3536118 188.9
# Vcells 19936972 152.2   33845936 258.3 33845936 258.3

3.4 Transformation


## Replacing with group sum
# Small
microbenchmark(dplyr = mutate_all(gGGDC10S, sum, na.rm = TRUE),
               collapse = fsum(cgGGDC10S, TRA = "replace_fill"))
# Unit: microseconds
#      expr      min        lq      mean    median       uq      max neval
#     dplyr 2659.186 2743.5275 2901.6684 2801.7625 2892.128 9098.532   100
#  collapse  303.002  321.5215  346.2702  352.0895  359.452  437.769   100

# Large
microbenchmark(dplyr = mutate_all(gdata, sum, na.rm = TRUE),
               collapse = fsum(cgdata, TRA = "replace_fill"), times = 10)
# Unit: milliseconds
#      expr       min        lq     mean    median       uq      max neval
#     dplyr 261.12327 264.31661 302.4534 271.25352 306.2210 437.6698    10
#  collapse  79.69393  91.43737 106.7055  95.94937 105.2094 216.6034    10

## Dividing by group sum
# Small
microbenchmark(dplyr = mutate_all(gGGDC10S, function(x) x/sum(x, na.rm = TRUE)),
               collapse = fsum(cgGGDC10S, TRA = "/"))
# Unit: microseconds
#      expr      min       lq      mean   median       uq       max neval
#     dplyr 5665.107 5767.075 6204.4412 5847.845 6411.232 18948.990   100
#  collapse  549.776  569.635  633.4568  619.168  682.089   808.154   100

# Large
microbenchmark(dplyr = mutate_all(gdata, function(x) x/sum(x, na.rm = TRUE)),
               collapse = fsum(cgdata, TRA = "/"), times = 10)
# Unit: milliseconds
#      expr      min       lq     mean   median        uq       max neval
#     dplyr 916.6995 919.6844 964.5582 928.6275 1049.7615 1077.3806    10
#  collapse 131.4492 138.6784 157.8198 148.3504  156.2249  269.6145    10

## Centering
# Small
microbenchmark(dplyr = mutate_all(gGGDC10S, function(x) x-mean.default(x, na.rm = TRUE)),
               collapse = fwithin(cgGGDC10S))
# Unit: microseconds
#      expr      min        lq      mean   median        uq       max neval
#     dplyr 8448.350 8606.7680 9700.1981 8693.117 9603.2375 34581.471   100
#  collapse  306.572  327.3225  361.3444  368.154  375.2945   430.629   100

# Large
microbenchmark(dplyr = mutate_all(gdata, function(x) x-mean.default(x, na.rm = TRUE)),
               collapse = fwithin(cgdata), times = 10)
# Unit: milliseconds
#      expr        min        lq      mean    median        uq       max neval
#     dplyr 1573.20527 1603.7130 1644.1533 1617.4777 1689.4883 1747.0962    10
#  collapse   90.55067  100.3967  129.2609  108.1047  111.0589  235.5136    10

## Centering and Scaling (Standardizing)
# Small
microbenchmark(dplyr = mutate_all(gGGDC10S, function(x) (x-mean.default(x, na.rm = TRUE))/sd(x, na.rm = TRUE)),
               collapse = fscale(cgGGDC10S))
# Unit: microseconds
#      expr       min         lq       mean    median        uq       max neval
#     dplyr 25317.828 25973.3660 28179.4951 26870.099 28438.212 39703.942   100
#  collapse   494.888   516.0845   548.4156   556.247   566.065   645.273   100

# Large
microbenchmark(dplyr = mutate_all(gdata, function(x) (x-mean.default(x, na.rm = TRUE))/sd(x, na.rm = TRUE)),
               collapse = fscale(cgdata), times = 2)
# Unit: milliseconds
#      expr       min        lq      mean    median        uq       max neval
#     dplyr 5410.9159 5410.9159 5438.1283 5438.1283 5465.3407 5465.3407     2
#  collapse  129.8485  129.8485  132.8861  132.8861  135.9237  135.9237     2

## Lag
# Small
microbenchmark(dplyr_unordered = mutate_all(gGGDC10S, dplyr::lag),
               collapse_unordered = flag(cgGGDC10S),
               dplyr_ordered = mutate_all(gGGDC10S, dplyr::lag, order_by = "Year"),
               collapse_ordered = flag(cgGGDC10S, t = Year))
# Unit: microseconds
#                expr       min         lq       mean     median         uq       max neval
#     dplyr_unordered  2016.145  2113.6495  2211.2616  2172.7775  2211.6010  2851.965   100
#  collapse_unordered   340.040   376.1865   441.2136   439.5535   485.9630   634.117   100
#       dplyr_ordered 49583.853 50956.5085 53893.6488 52644.6610 55580.9670 75382.289   100
#    collapse_ordered   317.282   342.7180   378.7569   380.8725   403.8535   530.588   100

# Large
microbenchmark(dplyr_unordered = mutate_all(gdata, dplyr::lag),
               collapse_unordered = flag(cgdata),
               dplyr_ordered = mutate_all(gdata, dplyr::lag, order_by = "Year"),
               collapse_ordered = flag(cgdata, t = Year), times = 2)
# Unit: milliseconds
#                expr         min          lq       mean     median          uq         max neval
#     dplyr_unordered   184.04658   184.04658   196.4893   196.4893   208.93199   208.93199     2
#  collapse_unordered    52.20243    52.20243   132.0862   132.0862   211.97004   211.97004     2
#       dplyr_ordered 10660.72013 10660.72013 10674.7593 10674.7593 10688.79844 10688.79844     2
#    collapse_ordered    91.04779    91.04779    91.4300    91.4300    91.81221    91.81221     2

## First-Difference (unordered)
# Small
microbenchmark(dplyr_unordered = mutate_all(gGGDC10S, function(x) x - dplyr::lag(x)),
               collapse_unordered = fdiff(cgGGDC10S))
# Unit: microseconds
#                expr       min         lq       mean    median         uq       max neval
#     dplyr_unordered 31439.000 32377.4575 35122.4928 33240.723 37201.3885 50139.430   100
#  collapse_unordered   364.584   392.6975   464.4141   465.882   516.3085   626.531   100

# Large
microbenchmark(dplyr_unordered = mutate_all(gdata, function(x) x - dplyr::lag(x)),
               collapse_unordered = fdiff(cgdata), times = 2)
# Unit: milliseconds
#                expr        min         lq      mean    median         uq        max neval
#     dplyr_unordered 6726.73257 6726.73257 6966.3916 6966.3916 7206.05058 7206.05058     2
#  collapse_unordered   57.28028   57.28028   60.0276   60.0276   62.77492   62.77492     2

gc()
#            used  (Mb) gc trigger  (Mb) max used  (Mb)
# Ncells  1871343 100.0    3536120 188.9  3536120 188.9
# Vcells 20987844 160.2   48914147 373.2 48914147 373.2

Again, below are some benchmarks for transformations not easily or efficiently performed with dplyr, such as centering on the overall mean, mean-preserving scaling, weighted scaling and centering, sequences of lags / leads, (iterated) panel-differences and growth rates.

# Centering on overall mean
microbenchmark(fwithin(cgdata, mean = "overall.mean"), times = 10)
# Unit: milliseconds
#                                    expr     min       lq    mean   median       uq      max neval
#  fwithin(cgdata, mean = "overall.mean") 86.8379 90.44447 100.644 99.38146 110.4631 117.7775    10

# Weighted Centering
microbenchmark(fwithin(cgdata, SUM), times = 10)
# Unit: milliseconds
#                  expr      min       lq     mean  median       uq      max neval
#  fwithin(cgdata, SUM) 85.66873 88.54167 113.0686 103.233 111.3918 239.7391    10
microbenchmark(fwithin(cgdata, SUM, mean = "overall.mean"), times = 10)
# Unit: milliseconds
#                                         expr      min       lq     mean  median       uq      max
#  fwithin(cgdata, SUM, mean = "overall.mean") 87.15027 93.07108 115.4685 105.556 110.4747 237.0188
#  neval
#     10

# Weighted Scaling and Standardizing
microbenchmark(fsd(cgdata, SUM, TRA = "/"), times = 10)
# Unit: milliseconds
#                         expr      min       lq     mean   median       uq      max neval
#  fsd(cgdata, SUM, TRA = "/") 155.6354 158.3106 181.9655 173.3973 179.5904 299.5264    10
microbenchmark(fscale(cgdata, SUM), times = 10)
# Unit: milliseconds
#                 expr      min       lq     mean  median       uq      max neval
#  fscale(cgdata, SUM) 118.0694 120.6001 144.3765 134.097 139.2568 262.9136    10

# Sequence of lags and leads
microbenchmark(flag(cgdata, -1:1), times = 10)
# Unit: milliseconds
#                expr     min       lq    mean   median       uq      max neval
#  flag(cgdata, -1:1) 126.653 149.5249 218.783 254.2807 254.5447 258.2999    10

# Iterated difference
microbenchmark(fdiff(cgdata, 1, 2), times = 10)
# Unit: milliseconds
#                 expr      min       lq     mean   median       uq      max neval
#  fdiff(cgdata, 1, 2) 86.16406 90.16244 114.0001 105.2255 112.9898 238.6128    10

# Growth Rate
microbenchmark(fgrowth(cgdata, 1), times = 10)
# Unit: milliseconds
#                expr      min      lq     mean   median       uq      max neval
#  fgrowth(cgdata, 1) 93.15185 98.5019 126.6505 110.4267 122.9045 282.0549    10

References

Timmer, M. P., de Vries, G. J., & de Vries, K. (2015). “Patterns of Structural Change in Developing Countries.” In J. Weiss & M. Tribe (Eds.), Routledge Handbook of Industry and Development (pp. 65-83). Routledge.

Cochrane, D. & Orcutt, G. H. (1949). “Application of Least Squares Regression to Relationships Containing Auto-Correlated Error Terms”. Journal of the American Statistical Association. 44 (245): 32–61.

Prais, S. J. & Winsten, C. B. (1954). “Trend Estimators and Serial Correlation”. Cowles Commission Discussion Paper No. 383. Chicago.


  1. Row-wise operations are not supported by TRA.