Reads a dataset downloaded from the IPUMS extract system in chunks: each chunk is read, your code is applied to it, and reading then continues. This allows you to process data that is too large to fit in your computer's RAM all at once.

read_ipums_micro_chunked(
ddi,
callback,
chunk_size = 10000,
vars = NULL,
data_file = NULL,
verbose = TRUE,
var_attrs = c("val_labels", "var_label", "var_desc"),
lower_vars = FALSE
)


## Arguments

ddi: Either a filepath to a DDI xml file downloaded from the website, or an ipums_ddi object parsed by read_ipums_ddi.

callback: An ipums_callback object, or a function that will be converted to an IpumsSideEffectCallback object.

chunk_size: An integer indicating how many observations to read in per chunk (defaults to 10,000). Setting this higher uses more RAM, but will usually be faster.

vars: Names of variables to load. Accepts a character vector of names, or dplyr_select_style conventions. For hierarchical data, the rectype id variable will be added even if it is not specified.

data_file: Specify a directory to look for the data file. If left empty, it will look in the same directory as the DDI file.

verbose: Logical, indicating whether to print progress information to the console.

var_attrs: Variable attributes to add from the DDI; defaults to adding all (val_labels, var_label and var_desc). See set_ipums_var_attributes for more details.

lower_vars: Only if reading a DDI from a file, a logical indicating whether to convert variable names to lowercase (default is FALSE, in line with IPUMS conventions). This argument is ignored if ddi is an ipums_ddi object rather than a file path; see read_ipums_ddi for converting variable names to lowercase when reading in the DDI. Also note that if reading in chunks from a .csv or .csv.gz file, the callback function will be called *before* variable names are converted to lowercase, and thus should reference uppercase variable names.
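When a plain function is passed as the callback, it is wrapped in an IpumsSideEffectCallback, which runs the function on each chunk for its side effects and returns NULL. A minimal sketch, using the CPS example file shipped with ipumsr (the `verbose = FALSE` setting and chunk size here are illustrative choices, not defaults):

```r
library(ipumsr)

# A plain function of (x, pos) is converted to an IpumsSideEffectCallback.
# x is the current chunk as a data frame; pos is the row position at which
# the chunk starts in the full data file.
read_ipums_micro_chunked(
  ipums_example("cps_00006.xml"),
  function(x, pos) {
    message("Chunk starting at row ", pos, ": ", nrow(x), " rows")
  },
  chunk_size = 5000,
  verbose = FALSE
)
```

Because the side-effect callback discards each chunk after the function runs, memory use stays bounded by chunk_size regardless of the total file size.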

## Value

Depends on the callback object: for example, an IpumsDataFrameCallback returns a data frame combining the results from each chunk, while an IpumsSideEffectCallback returns NULL (see Examples).

## See also

Other ipums_read: read_ipums_micro_yield(), read_ipums_micro(), read_ipums_sf(), read_nhgis(), read_terra_area(), read_terra_micro(), read_terra_raster()

## Examples

# Select Minnesotan cases from CPS example (Note you can also accomplish
# this and avoid having to even download a huge file using the "Select Cases"
# functionality of the IPUMS extract system)
read_ipums_micro_chunked(
  ipums_example("cps_00006.xml"),
  IpumsDataFrameCallback$new(function(x, pos) {
    x[x$STATEFIP == 27, ]
  }),
  chunk_size = 1000 # Generally you want this larger, but this example is a small file
)
#> Use of data from IPUMS-CPS is subject to conditions including that users should
#> cite the data appropriately. Use command ipums_conditions() for more details.
# Tabulate INCTOT average by state without storing full dataset in memory
library(dplyr)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#>     filter, lag
#> The following objects are masked from 'package:base':
#>
#>     intersect, setdiff, setequal, union

inc_by_state <- read_ipums_micro_chunked(
  ipums_example("cps_00006.xml"),
  IpumsDataFrameCallback$new(function(x, pos) {
    x %>%
      mutate(
        INCTOT = lbl_na_if(
          INCTOT, ~ .lbl %in% c("Missing.", "N.I.U. (Not in Universe).")
        )
      ) %>%
      filter(!is.na(INCTOT)) %>%
      group_by(STATEFIP = as_factor(STATEFIP)) %>%
      summarize(INCTOT_SUM = sum(INCTOT), n = n())
  }),
  chunk_size = 1000 # Generally you want this larger, but this example is a small file
) %>%
  group_by(STATEFIP) %>%
  summarize(avg_inc = sum(INCTOT_SUM) / sum(n))
#> Use of data from IPUMS-CPS is subject to conditions including that users should
#> cite the data appropriately. Use command ipums_conditions() for more details.
#> summarise() ungrouping output (override with .groups argument)
#> summarise() ungrouping output (override with .groups argument)
#> summarise() ungrouping output (override with .groups argument)
#> summarise() ungrouping output (override with .groups argument)
#> summarise() ungrouping output (override with .groups argument)
#> summarise() ungrouping output (override with .groups argument)
#> summarise() ungrouping output (override with .groups argument)
#> summarise() ungrouping output (override with .groups argument)
#> summarise() ungrouping output (override with .groups argument)

# x will be a list when using read_ipums_micro_list_chunked()
read_ipums_micro_list_chunked(
  ipums_example("cps_00010.xml"),
  IpumsSideEffectCallback$new(function(x, pos) {
    print(paste0(nrow(x$PERSON), " persons and ", nrow(x$HOUSEHOLD),
                 " households in this chunk."))
  }),
  chunk_size = 1000 # Generally you want this larger, but this example is a small file
)
#> Use of data from IPUMS-CPS is subject to conditions including that users should
#> cite the data appropriately. Use command ipums_conditions() for more details.
#> [1] "699 persons and 301 households in this chunk."
#> [1] "701 persons and 299 households in this chunk."
#> [1] "693 persons and 307 households in this chunk."
#> [1] "685 persons and 315 households in this chunk."
#> [1] "696 persons and 304 households in this chunk."
#> [1] "691 persons and 309 households in this chunk."
#> [1] "695 persons and 305 households in this chunk."
#> [1] "691 persons and 309 households in this chunk."
#> [1] "694 persons and 306 households in this chunk."
#> [1] "692 persons and 308 households in this chunk."
#> [1] "692 persons and 308 households in this chunk."
#> [1] "39 persons and 14 households in this chunk."
#> NULL
# Using the biglm package, you can even run a regression without storing
# the full dataset in memory
library(dplyr)
if (require(biglm)) {
  lm_results <- read_ipums_micro_chunked(
    ipums_example("cps_00015.xml"),
    IpumsBiglmCallback$new(
      INCTOT ~ AGE + HEALTH, # Simple regression (may not be very useful)
      function(x, pos) {
        x %>%
          mutate(
            INCTOT = lbl_na_if(
              INCTOT, ~ .lbl %in% c("Missing.", "N.I.U. (Not in Universe).")
            ),
            HEALTH = as_factor(HEALTH)
          )
      }
    ),
    chunk_size = 1000 # Generally you want this larger, but this example is a small file
  )
  summary(lm_results)
}
#> Use of data from IPUMS-CPS is subject to conditions including that users should
#> cite the data appropriately. Use command ipums_conditions() for more details.
#> Large data regression model: biglm(INCTOT ~ AGE + HEALTH, data, ...)