7 Defining the inquiry
An inquiry is a question you ask of a model. If we stipulate a reference model, \(m\), then our inquiry is a summary of \(m\). Suppose in some reference model that \(X\) possibly affects \(Y\). Using the framework provided in Pearl and Mackenzie (2018), one inquiry might be descriptive, or associational: what is the average level of \(Y\) when \(X=1\)? A second might be about the effects of interventions: what is the average treatment effect of \(X\) on \(Y\)? A third is about counterfactuals: for what share of units would \(Y\) have been different if \(X\) were different? If a theory involves more variables, many more questions open up, for instance regarding how the effect of one variable passes through, or is modified by, another.
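To make the three types concrete, here is a minimal sketch in R, with hypothetical potential outcomes and a made-up effect structure, that computes one inquiry of each kind from a single simulated model draw:

```r
# Three inquiry types computed from one simulated draw of a hypothetical model
set.seed(7)
N <- 10000
U <- rnorm(N)
Y_X_0 <- as.numeric(U > 0.5)   # potential outcome when X = 0
Y_X_1 <- as.numeric(U > 0)     # potential outcome when X = 1 (effect is monotonic)
X <- rbinom(N, 1, 0.5)
Y <- ifelse(X == 1, Y_X_1, Y_X_0)

descriptive    <- mean(Y[X == 1])        # average level of Y when X = 1
interventional <- mean(Y_X_1 - Y_X_0)    # average treatment effect of X on Y
counterfactual <- mean(Y_X_1 != Y_X_0)   # share whose Y would differ if X did
```

All three are summaries of the same model draw; only the first could be computed from observed data alone, since the other two require both potential outcomes.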
When designing research, you should have your inquiries front of mind. Amazingly, many research projects do not: sometimes researchers start implementing data and answer strategies without any particular goal in mind, then end up discovering a question to answer in the process of generating estimates. To be clear, we are all for the kind of model-building research that helps us understand models well enough to even state a worthwhile inquiry. We also think that it’s possible to learn unexpected things in the course of doing research. But it’s essentially not possible to proactively design research to answer a question well unless we have an inquiry to target.
Formally, an inquiry is a summary function I that operates on an instance of a model, \(m \in M\). When we summarize the model with the inquiry, we obtain an “answer under the model.” We formalize this as \(I(m) = a_m\). You can think of the difference between I and \(a_m\) as the difference between a question and its answer: I is the question we ask about the model and \(a_m\) is the answer.
In this book when we talk about inquiries, we will usually be referring to single-number summaries of models. Some common inquiries are descriptive, such as the means, conditional means, correlations, partial correlations, quantiles, and truth statements about variables in the model. Others are causal, such as the average difference in one variable when a second variable is set to two different values. We can think of a single-number inquiry as the atom of a research question.
While most inquiries are “atomic” in this way, some are more complex than a single-number summary. For example, the best linear predictor of \(Y\) given \(X\) is a two-number summary: it is the pair of numbers (the slope and intercept) that minimizes the total squared distance between the line and each value of \(Y\). No need to stop at two-number summaries, though. We could imagine the best quadratic predictor of \(Y\) given \(X\) (a three-number summary), and so on. We could have an inquiry that is the full conditional expectation function of \(Y\) given \(X\), no matter how wiggly, nonlinear, and nuanced the shape of that function. It could in principle be a 1,000-number summary of the model, or much more.
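As a sketch, the two-number summary can be computed directly from a simulated model draw; the true intercept and slope (2 and 3) are made up for illustration:

```r
# Best linear predictor as a two-number summary of a simulated model draw
set.seed(1)
N <- 1000
X <- rnorm(N)
Y <- 2 + 3 * X + rnorm(N)            # hypothetical model: intercept 2, slope 3
slope <- cov(X, Y) / var(X)          # least-squares slope
intercept <- mean(Y) - slope * mean(X)
c(intercept = intercept, slope = slope)
```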
The inquiry could also be constituted by a series of interrelated questions about the model. For instance, a researcher might articulate a handful of important questions about the model that all have to come out a certain way, or else the model itself should be rejected. These complex inquiries are made up of a series of atomic inquiries. We’re interested in the sub-inquiries only insofar as they help us understand the real inquiry – is this model of the world a good one or not?
7.1 Elements
Every inquiry operates on the events generated by the model. We can think of the events as the “dataset” that describes the units, treatment conditions, and outcome variables over which inquiries can be defined. This definition is closely connected to the common UTOS (units, treatments, outcomes, and settings) framework (Shadish, Cook, and Campbell 2002). The units are the set of units within the model that the inquiry refers to, either all of them or a subset. The treatment conditions are the set of conditions chosen for study. A descriptive inquiry is a summary of a single condition (reality), whereas a causal inquiry is a summary of multiple conditions. The outcomes are the set of nodes in the model that the inquiry concerns. Finally, the inquiry operates on the model events via a summary function. For example, the “population average” inquiry summarizes the outcome for all units in the population with the mean function. We discuss each element of inquiries in turn.
7.1.1 Units
The units of an inquiry are defined by the set of people, places, or things that we are interested in studying. A study’s units might be the counties in Alabama, the students enrolled in the Los Angeles Unified School District this March, countries in the world with mean income under $100 per day, or the houses in the Westlands neighborhood of Nairobi, Kenya. In both descriptive and causal inquiries, the units may be all of the units or a subset of them, defined for example as those selected by a sampling procedure or those with a specific characteristic. In causal inquiries, the units may be those who are treated, those who are not treated, or those who comply with treatments, again either in the entire population or a subset.
The reason we need to define the units of an inquiry is that inquiry values may differ across units. If the units included in the sample live in easier-to-reach areas, and people who live in easier-to-reach areas are wealthier than others, the sample average will differ from the population average — and from the average among those in hard-to-reach places.
The choice of which set of units to focus on is a theoretical one. To whom does the theoretical expectation apply? As a general matter, seeking insights that apply across many individuals is the goal of many social scientists. We are not typically interested in the effect of a treatment or the average outcome in a random sample of 100 units because we care about those units in particular, but because we wish to understand the treatment effect or outcomes in a broader population. Our theories often have so-called scope conditions, which define the types of units for which our theory is operative. A mechanism might operate only for coethnics of a country’s president, small-to-medium towns, blue collar workers, or the mothers of daughters. The units of an inquiry should be defined by these theoretical expectations, not by what inquiries our data and answer strategies can target easily.
This distinction often arises in debates over instrumental variables designs, which target local average treatment effects (LATEs), meaning average treatment effects among a subset of units. The effect these designs estimate is the average treatment effect among the units who are “compliers”: the subset of units who take treatment if assigned and don’t take treatment if not assigned. The effect among compliers may or may not resemble the effect among the whole sample or the population from which the sample was drawn. The debate between Deaton (2010) and Imbens (2010) centers precisely on which inquiry is the appropriate one, the LATE among compliers or the ATE in the whole sample. In many settings, the LATE may be the only inquiry we can reliably estimate, so the question becomes: is the LATE an inquiry with theoretical relevance?
If the inquiry is defined with respect to the units sampled by the data strategy, then we do not have to engage in generalization inference – we learn directly about the sample from the sample. But if the inquiry is defined at the population level, then we need to generalize from the sample to the population. Whether an inquiry requires generalization inference thus depends on the data strategy. If the data strategy samples the very units that define the inquiry, we do not need to generalize beyond the study. If the data strategy explicitly samples from a well-defined population, we can generalize from sample to population using canonical sampling theory. But if we want to generalize to an inquiry defined over some other set of units (for example, Brazilian citizens ten years in the future), we need to engage in generalization inference (see Egami and Hartman 2021).
7.1.2 Outcomes
Every inquiry is also defined by what outcomes are considered for each of the units. The choice of outcome is again a theoretical one: what outcomes are to be described, or with which outcomes do we want to measure the effects of treatment? An inquiry might be about a single outcome or multiple outcomes. The average belief that climate change is real would be a single-outcome inquiry, and the difference between that belief and support for government rebates for purchasing electric vehicles a multiple-outcome inquiry.
In some cases, an inquiry will be about a latent outcome that we cannot directly measure, such as preferences, attitudes, or emotions. We can construct data strategies that elicit observable measures of these latent outcomes by asking or observing individuals, but we cannot measure the outcomes themselves directly. Even though these constructs may be difficult or impossible to measure well, it is preferable to define the inquiry in terms of the true latent outcome of interest so we can later evaluate how well we do.
7.1.3 Treatment conditions
The final element of an inquiry is the set of treatment conditions under consideration and, when there is more than one, the comparison among them.
Descriptive inquiries are defined with respect to a single treatment condition. That condition is usually the “unmanipulated” condition, in which the researcher exposes units to no additional causal agents. Here the goal is to learn about summaries of the distributions of outcomes as we observe them. Table 7.1 (top panel) enumerates some common descriptive estimands. These estimands have in common that you do not need any counterfactual quantities in order to define them. The covariance (similarly, the correlation) between \(X\) and \(Y\) counts as a descriptive estimand, as does the line of best fit for \(Y\) given \(X\).
In Table 7.1, we enumerate several common types of descriptive inquiries, listing the units, treatment conditions, and outcomes that define them. We also provide R code snippets for each.
Inquiry | Units | Treatment conditions | Outcomes | Code |
---|---|---|---|---|
Average value of variable Y in a finite population | Units in the population | Unmanipulated | Y | mean(Y) |
Average value of variable Y in a sample | Sampled units | Unmanipulated | Y | mean(Y[S == 1]) |
Conditional average value of Y given X = 1 | Units for whom X = 1 | Unmanipulated | Y | mean(Y[X == 1]) |
The variance of Y | Units in the population | Unmanipulated | Y | pop.var(Y) |
The covariance of X and Y | Units in the population | Unmanipulated | X, Y | pop.cov(X, Y) |
The best linear predictor of Y given X | Units in the population | Unmanipulated | X, Y | cov(Y, X) / var(X) |
Conditional expectation function of Y given X | Units in the population | Unmanipulated | X, Y | cef(Y, X) |
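The pop.var, pop.cov, and cef entries in the table are placeholders rather than base R functions. A minimal sketch of the first two, using the population (denominator-\(N\)) convention rather than base R’s \(N-1\):

```r
# Population (denominator-N) variance and covariance; base R's var() and
# cov() divide by N - 1 instead, which matters only in small populations.
pop.var <- function(x) mean((x - mean(x))^2)
pop.cov <- function(x, y) mean((x - mean(x)) * (y - mean(y)))

Y <- c(1, 2, 3, 4)
X <- c(2, 4, 6, 8)
pop.var(Y)      # 1.25
pop.cov(X, Y)   # 2.5
```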
Causal inquiries involve a comparison of at least two possible treatment conditions. For example, an inquiry might be the causal effect of \(X\) on \(Y\) for a single unit. In order to infer that causal effect, we would need to know the value of \(Y\) in two worlds: one world in which \(X\) is set to 1 and one in which \(X\) is set to 0. Table 7.2 (middle panel) enumerates some common causal estimands. These estimands vary in the population they refer to. For instance, some are questions about samples (SATEs) and others about populations (PATEs). Inquiries can also be defined for units of a particular covariate class (CATEs). Finally, they may be summaries of more than one potential outcome. For instance, the interaction effect is defined here at the individual level as the effect of one treatment on the effect of another treatment.
Inquiry | Units | Treatment conditions | Outcomes | Code |
---|---|---|---|---|
Average treatment effect in a finite population (PATE) | Units in the population | D = 0, D = 1 | Y | mean(Y_D_1 - Y_D_0) |
Conditional average treatment effect (CATE) for X = 1 | Units for whom X = 1 | D = 0, D = 1 | Y | mean(Y_D_1[X == 1] - Y_D_0[X == 1]) |
Complier average causal effect (CACE) | Complier units | D = 0, D = 1 | Y | mean(Y_D_1[D_Z_1 > D_Z_0] - Y_D_0[D_Z_1 > D_Z_0]) |
Causal interactions of \(D_1\) and \(D_2\) | Units in the population | D1 = 1, D1 = 0, D2 = 1, D2 = 0 | Y | mean((Y_D1_1_D2_1 - Y_D1_0_D2_1) - (Y_D1_1_D2_0 - Y_D1_0_D2_0)) |
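A sketch of how the PATE and a CATE are computed from full potential outcomes, under a hypothetical model in which the effect is 0.5 plus another 0.5 for units with X = 1:

```r
# PATE and CATE computed directly from simulated potential outcomes
set.seed(2)
N <- 10000
X <- rbinom(N, 1, 0.5)
U <- rnorm(N)
Y_D_0 <- U
Y_D_1 <- U + 0.5 + 0.5 * X    # made-up effect: 0.5, plus 0.5 more when X = 1
PATE <- mean(Y_D_1 - Y_D_0)
CATE_X1 <- mean(Y_D_1[X == 1] - Y_D_0[X == 1])   # 1 by construction
```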
Generations of students have been told to excise words that connote causality from their empirical writing. “Affects” becomes “is associated with” and “impacts” becomes “moves with.” Being careful about causal language is of course very important (it’s really true that correlation does not imply causation!). But this change in language is not usually accompanied by a change in inquiry. Many times we are faced with drawing causal inferences from less than ideal data – but the deficiencies of the data strategy should not lead us too far away from our inferential targets. If the inquiry is a causal inquiry, then the move from “causes” to “is correlated with” might be a good description of the actual data analysis, but it doesn’t move us closer to providing an answer to the inquiry.
7.1.4 Summary functions
With the units, treatments, and outcomes specified, the last element of the inquiry is the summary function that is applied to them. For a great many inquiries, this function is the mean function: the ATE, the CATE, the LATE, the SATE, the population mean – these are all averages. These and other inquiries are decomposable in the sense that the average effect for a large group is the average of the average effects of smaller groups.
However, not all inquiries are of this form. For example, the slope of the line of best fit is defined as the covariance of X and Y divided by the variance of X. This inquiry is a complex summary of all the units in the model.
The inquiry targeted by the regression discontinuity design is also non-decomposable. In the RDD model (see Section 15.5), we imagine units with potential outcomes \(Y_i(1)\) and \(Y_i(0)\). Each unit \(i\) also has a value on a “running variable,” \(X_i\), and units receive treatment if and only if \(X_i>0\). In this case the “effect at the point of discontinuity” might be written:
\[E_{i|X_i = 0}[Y_i(1) - Y_i(0)]\]
Curiously, however, there may be no units for whom \(X_i\) equals exactly zero (a candidate winning exactly 50% of the vote happens, but rarely), so we cannot easily think of the inquiry as a summary of individual potential outcomes. Instead, we construct a conditional expectation function for each potential outcome with respect to \(X\) and evaluate the difference between the two functions at \(X=0\). Though not an average of individual effects, this difference is nevertheless a summary of the potential outcomes.
7.2 Examples of inquiries
The largest division in the typology of inquiries is between descriptive and causal inquiries. It is for this reason that Part III, the design library, is organized into descriptive and causal chapters, separated by whether the data strategy involves assignment. In this section, we describe other important ways inquiries vary and how to think about declaring them.
7.2.1 Data-dependent inquiries
The inquiries we have introduced thus far depend on variables in the model, but not on features of the data and answer strategies. However, common inquiries do depend on realizations of the research design.
The first type depends on realizations of the data \(d\): inquiries about units within a sample depend on which units enter the sample; inquiries about treated units depend on which units are treated. For example, the average treatment effect on the treated (ATT) is a data-dependent inquiry in the sense that it is the average effect of treatment among the particular set of units that happened to be randomly assigned to treatment. The value of that particular ATT doesn’t change depending on the data strategy, of course, but which ATT we end up estimating depends on the realization of the data strategy. Table 7.3 describes three data-dependent inquiries.
Inquiry | Units | Treatment conditions | Outcomes | Code |
---|---|---|---|---|
Average treatment effect in a sample (SATE) | Sampled units | D = 0, D = 1 | Y | mean(Y_D_1[S == 1] - Y_D_0[S == 1]) |
Average treatment effect on the treated (ATT) | Treated units | D = 0, D = 1 | Y | mean(Y_D_1[D == 1] - Y_D_0[D == 1]) |
Average treatment effect on the untreated (ATU) | Untreated units | D = 0, D = 1 | Y | mean(Y_D_1[D == 0] - Y_D_0[D == 0]) |
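A sketch of the ATT’s data dependence: the inquiry is defined over whichever units the realized assignment happens to treat (the unit-level effect distribution here is made up):

```r
# The ATT is defined over whichever units the assignment happens to treat
set.seed(3)
N <- 100
Y_D_0 <- rnorm(N)
tau <- rnorm(N, mean = 1, sd = 0.25)    # made-up heterogeneous unit effects
Y_D_1 <- Y_D_0 + tau
D <- sample(rep(c(0, 1), N / 2))        # complete random assignment
ATT <- mean(Y_D_1[D == 1] - Y_D_0[D == 1])   # equals mean(tau[D == 1])
```

Re-running the assignment step yields a (slightly) different ATT each time, because a different set of units defines the inquiry.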
7.2.2 Causal attribution inquiries
A causal attribution inquiry is a different kind of data-dependent inquiry. A causal effect inquiry focuses on the change in an outcome that would be induced by a change in the causal variable, irrespective of the values the outcome takes in the realized data. By contrast, causal attribution inquiries focus on probabilities that condition on realized outcomes, such as the “probability of the absence of the outcome in the hypothetical absence of the treatment (\(Y_i(0) = 0\)) given the actual presence of both (\(D_i = Y_i = 1\))” (Yamamoto 2012, 240–41). Goertz and Mahoney (2012) refer to causal attribution inquiries as cause-of-effects questions because they start with an outcome (an effect) and seek to validate a hypothesis about its cause.
The dependence of these inquiries on actual outcomes makes them harder (though not impossible!) to answer with the tools of quantitative science, though they are often of central interest to scientific and policy agendas and have occupied a large number of qualitative studies. Questions like “‘Was economic crisis necessary for democratization in the Southern Cone of Latin America?’ or ‘Were high levels of foreign investment in combination with soft authoritarianism and export-oriented policies sufficient for the economic miracles in South Korea and Taiwan?’” are examples of such inquiries (Goertz and Mahoney 2012). Though they bear a resemblance and are related to causal effects inquiries that focus on observed subsets (such as the average treatment effect on the treated, or ATT), it is important not to confuse the two kinds of inquiries.
While it is increasingly common to explicitly formalize causal effect inquiries, it is less common to formalize causal attribution inquiries. Doing so, however, can be important to provide the specificity required to diagnose a design on a computer. Pearl (1999) provides formal definitions for these inquiries using the language of causal necessity and sufficiency, depicted in the table below. To put these inquiries in the context of the democratic peace hypothesis, for example, in a given country dyad-year, \(Y_i = 1\) and \(D_i = 1\) could represent “Peace” and “Both democracies” and \(Y_i = 0\) and \(D_i = 0\) could represent “War” and “Not both democracies.” Then \(\Pr(Y_i(D_i = 0) = 0 \mid D_i = Y_i = 1)\) asks, among peaceful, fully democratic dyads, what is the proportion that would have had wars were they not both democracies—that is, in what proportion of dyad-years was democracy a necessary cause of peace? Similarly, \(\Pr(Y_i(D_i = 1)=1 \mid D_i = Y_i = 0)\) asks, among dyads that had a war and at least one non-democracy in a given year, what is the proportion that would have experienced peace if both countries were democracies—in other words, in what proportion of cases would democracy have been sufficient to cause peace? Yamamoto (2012) extends this account to causal attribution inquiries defined over important subsets, such as compliers.
Like all designs, those with causal attribution inquiries can be declared, simulated, and diagnosed on a computer. Something to consider, however, is that the model may produce datasets in which the event of interest does not occur, so inquiries defined over units for whom it occurred are undefined. One way to avoid this is to construct a model such that the event occurs for at least one unit with probability one.
Inquiry | Units | Treatment conditions | Outcomes | Code |
---|---|---|---|---|
Probability D necessary for \(Y\) | Units for whom D = 1 and Y = 1 | D = 0 | Y | mean(Y_D_0[D == 1 & Y == 1] == 0) |
Probability D sufficient for \(Y\) | Units for whom D = 0 and Y = 0 | D = 1 | Y | mean(Y_D_1[D == 0 & Y == 0] == 1) |
Complier probability D necessary for \(Y\) | Units for whom D = 1 and Y = 1 who are compliers | D = 0, Z = 1, Z = 0 | Y | mean(Y_D_0[D == 1 & Y == 1 & D_Z_1 == 1 & D_Z_0 == 0] == 0) |
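A sketch of the “probability D was necessary for Y” inquiry, computed from hypothetical binary potential outcomes in which treatment never destroys the outcome:

```r
# Probability that D was necessary for Y, from hypothetical potential outcomes
set.seed(4)
N <- 1000
Y_D_0 <- rbinom(N, 1, 0.2)
Y_D_1 <- pmax(Y_D_0, rbinom(N, 1, 0.5))  # treatment never destroys the outcome
D <- rbinom(N, 1, 0.5)
Y <- ifelse(D == 1, Y_D_1, Y_D_0)
# Among treated units with Y = 1: share for whom Y would be 0 absent treatment
prob_necessary <- mean(Y_D_0[D == 1 & Y == 1] == 0)
```

Note that the inquiry conditions on the realized \(D\) and \(Y\), which is what makes it data-dependent.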
7.2.3 Complex counterfactual inquiries
Thus far, the causal inquiries we have considered have involved comparisons of the counterfactual values an outcome could take, depending on the value of one or more treatment variables. These inquiries are mind-bending in that we have to imagine two counterfactual states at the same time. Complex counterfactual inquiries require more mind-bending still.
An example is the “controlled direct effect.” Suppose our model contains a treatment \(Z\), a mediator \(M\), and an outcome \(Y\). The controlled direct effect of the treatment, holding the mediator at 1, is defined as: \[\mathrm{CDE} = Y(Z=1, M=1) - Y(Z=0, M=1)\] So far so good. But suppose now we stipulate that, at least for some units, \(M = 1\) only when \(Z = 1\), and \(M = 0\) when \(Z = 0\). In order to imagine the CDE for these units, we have to hold in our minds a complex counterfactual: what is the level of \(Y\) when \(Z\) equals 0, but \(M\) is at the value it would take if \(Z\) equaled 1?
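A sketch of the CDE under an assumed linear structural equation (the coefficients are made up); because the mediator is held at the same value in both terms, the unit-level noise cancels:

```r
# Controlled direct effect under an assumed linear structural equation
set.seed(5)
N <- 1000
U <- rnorm(N)
Y <- function(Z, M) 0.2 * Z + 0.4 * M + U   # made-up coefficients
CDE_M1 <- mean(Y(Z = 1, M = 1) - Y(Z = 0, M = 1))  # holding M fixed at 1
```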
7.2.4 Inquiries with continuous causal variables
We have mainly considered causal inquiries that compare across discrete treatment conditions: treatment versus control, or one of many arms in a multi-arm trial.
But sometimes we can imagine a continuous treatment space. For example, we could think of the effects of any level of salary from 5 dollars an hour to 500 dollars an hour on workplace satisfaction. We could “discretize” this continuous treatment into bins, in which case we are back to defining inquiries as we do for multi-arm trials with discrete treatment conditions. Alternatively, we could describe the estimand as the average of the slopes from many lines of best fit: for each subject, we find the best-fit line of the outcome with respect to the treatment, and our inquiry is the average of the resulting slopes. This inquiry is decomposable in the sense that it is the average of many slopes, but it differs from the other causal inquiries in that it is not a direct contrast between a pair of conditions; it is a description of differences across a continuum of conditions.
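A sketch of the average-of-slopes inquiry, assuming each unit’s (hypothetical) dose response is linear in the treatment level:

```r
# Average of unit-level best-fit slopes over a continuous treatment space
set.seed(6)
N <- 200
doses <- seq(5, 500, length.out = 20)          # hypothetical salary levels
slopes <- rnorm(N, mean = 0.01, sd = 0.005)    # each unit's true dose response
unit_slope <- function(i) {
  y <- rnorm(1) + slopes[i] * doses            # unit i's outcome at each dose
  cov(doses, y) / var(doses)                   # unit i's best-fit slope
}
average_slope <- mean(sapply(1:N, unit_slope)) # the inquiry: mean of slopes
```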
7.3 How to choose among inquiries
It’s hard to know where to start when choosing an inquiry. We want to pick one that is interesting in its own right or that would facilitate a real-world decision. We want to pick research questions whose answers we can learn someday, possibly with a lot of effort. Infeasible research questions should be abandoned as soon as possible, but of course, that’s hard to do. The trouble is that it’s hard to know which research questions are feasible before you start looking into them, and it’s hard to quit research projects once you learn they are infeasible. Among feasible research questions, we want to select the ones likely to yield the most informative answers, in the sense of moving our priors the most.
Sometimes, people advise students to follow a “theory-first” route to picking a research question. Read the literature, find an unsolved puzzle, then start choosing among the methodological approaches that might answer the problem. Others eschew the theory-first approach: “How on earth are you going to happen to land upon an unsolved – and yet somehow solvable – puzzle just by reading!?” These advice-givers emphasize a method-first route. Master the technical data-gathering and analysis procedures first, then set off to find opportunities to apply them. The theory-first people then say: “how would you know an interesting theoretical question if it smacked you in the face!?”
Iteration between the two is typically necessary. In order to select inquiries, empirical researchers have to be concerned with the entire research design. We have to learn a lot about how to select data and answer strategies in ways that map onto inquiries about models. So empiricists have to learn both about models and inquiries (theory) and about data strategies and answer strategies (empirics).
The first criterion is the subjective importance of a question. The question may matter to a scientist considering the value of building a theoretical understanding of the world; to a policymaker deciding how to collect and allocate resources in a government; to a private firm making decisions about how to invest its own resources to maximize profit; or to another individual or organization. The scientific enterprise is designed around the idea that importance is in the eye of the beholder, not some objective quantity, for two reasons. First, the scientific or practical importance of a discovery may not be understood until decades later, when other pieces of the causal model are put together or the world faces new problems. Second, “importance” differs for different segments of society, and scientists must be able to study questions not judged important by groups in power in order to discover new ways to solve problems faced by the left-out groups.
A second important criterion flows from Principle 3.4: Select answerable inquiries. How could an inquiry not be answerable? The main way is if we can’t find a feasible data or answer strategy. When for ethical, legal, logistic, or financial constraints, we simply can’t conduct the study, the inquiry is not answerable.
There are subtler ways in which an inquiry might not be answerable. For example, it might be undefined. Inquiries are undefined when I returns \(I(m) = a_m = \mathrm{NA}\). Sometimes audit studies consider the effect of treatment on whether subjects reply to an email and on the tone of the reply. However, when no reply is sent, the reply has no tone. As a result, we can’t learn about the average effect of treatment on tone; we can only learn about the effect in a subgroup: those units who always reply, regardless of condition. This new inquiry is defined, but hard to estimate (see Coppock 2019).
An inquiry is also not answerable if it is not “identified.” A question is at least partly answerable if there are at least two different sets of data you might observe that would lead you to make two different inferences. In the best case, you have lots of data and each possible data pattern is consistent with only one possible answer; you might then say that your model, or inquiry, is identified. Failing that, different data patterns might at least let you rule out some answers even though you can’t be sure of the right one. In this case we have “partial identification.” Some inquiries might not even be partially identifiable. For instance, if we have a model that says an outcome \(Y\) is defined by the equation \(Y=(a+b)X\), no amount of data can tell us the exact values of \(a\) and \(b\). Indeed, without limits on the values of \(a\) and \(b\) (such as \(a\geq0\)), no amount of data can even narrow down their ranges: for any value of \(a\) we can choose a \(b\) that keeps the sum \(a+b\) constant. In this setting, even though there is an answer to our inquiry (\(a\)) in theory, it is not one we can ever learn in practice. Many other types of inquiries, such as mediation inquiries, are not identifiable. In some circumstances we can provide a partial answer, such as a range of values within which the parameter lives. At a minimum, we urge you to pose inquiries that are at least partially answerable with possible data.
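The identification failure in the \(Y=(a+b)X\) example can be seen in a two-line simulation: parameter pairs with the same sum generate identical data.

```r
# Y = (a + b) X: the pairs (a, b) = (1, 2) and (0, 3) are indistinguishable
X <- rnorm(100)
Y1 <- (1 + 2) * X   # data generated under a = 1, b = 2
Y2 <- (0 + 3) * X   # data generated under a = 0, b = 3
identical_data <- isTRUE(all.equal(Y1, Y2))   # TRUE
```

No answer strategy applied to these data can distinguish the two parameterizations, so the inquiry “what is \(a\)?” is not even partially identified without further restrictions.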
7.4 Declaring inquiries in code
An inquiry is a summary function of events generated by a model. When we declare inquiries in code, we declare this summary function. Here, we declare a causal inquiry, the mean
of the differences in two potential outcomes described in the model:
M <- declare_model(N = 100, U = rnorm(N), potential_outcomes(Y ~ Z + U))
I <- declare_inquiry(ATE = mean(Y_Z_1 - Y_Z_0))
Descriptive inquiries can be declared in a similar way: they are just functions of outcomes rather than potential outcomes.
7.4.1 Inquiries among subsets of units
We often want to learn about an inquiry defined among a subgroup of units. For example, if we are interested in the conditional average treatment effect (CATE) among units with X = 1, we can use the subset argument.
M <- declare_model(
N = 100,
U = rnorm(N),
X = rbinom(N, 1, prob = 0.5),
potential_outcomes(Y ~ 0.3 * Z + 0.2*X + 0.1*Z*X + U))
I <- declare_inquiry(CATE = mean(Y_Z_1 - Y_Z_0), subset = X == 1)
Equivalently, we could use R’s []
syntax for subsetting:
I <- declare_inquiry(CATE = mean(Y_Z_1[X == 1] - Y_Z_0[X == 1]))
7.4.2 Inquiries with continuous potential outcomes
“Non-decomposable” inquiries are not as simple as an average over the units in the model. A common example arises with continuous potential outcomes. The regression discontinuity design described in Section 15.5 has an inquiry that is defined by two continuous functions of the running variable. The control function is a polynomial representing the potential outcome under control, and the treatment function is a different polynomial representing treated potential outcomes. The inquiry is the difference between the two functions evaluated at the cutoff point on the running variable. We declare it as follows:
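A sketch of one such declaration, assuming hypothetical polynomial potential outcome functions and a cutoff at 0.5:

```r
library(DeclareDesign)

cutoff <- 0.5
# Hypothetical potential outcome functions of the running variable X
control   <- function(X) 0.50 * X + 0.25 * X^2
treatment <- function(X) 0.15 + 0.50 * X + 0.50 * X^2

# The inquiry is the difference between the two functions at the cutoff
I <- declare_inquiry(LATE = treatment(cutoff) - control(cutoff))
```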
7.4.3 Multiple inquiries
In some designs, we are interested in the value of an inquiry for many units or for many types of units.
We can enumerate them one-by-one, to describe the average treatment effect, two conditional average treatment effects, and the difference between them.
I <- declare_inquiry(
  ATE = mean(Y_Z_1 - Y_Z_0),
  CATE_X0 = mean(Y_Z_1[X == 0] - Y_Z_0[X == 0]),
  CATE_X1 = mean(Y_Z_1[X == 1] - Y_Z_0[X == 1]),
  Difference_in_CATEs = CATE_X1 - CATE_X0)
In the multilevel regression and poststratification (MRP) design in Section 14.3, we want to know the average of a survey question in each state. Below, we declare an analogous inquiry at the county level. We rely on group_by and summarize from dplyr to write a function MRP_inquiry that uses a pipeline to group the data into counties and take the average. Our design then targets an inquiry for each county.
library(dplyr)

M <-
declare_model(
counties = add_level(N = 5, county_quality_mean = rnorm(N)),
schools = add_level(N = 5, school_quality = rnorm(N, mean = county_quality_mean))
)
MRP_inquiry <-
function(data) {
data %>%
group_by(counties) %>%
summarize(mean_school_quality = mean(school_quality),
.groups = "drop")
}
I <- declare_inquiry(handler = MRP_inquiry)
We discuss how to link inquiries to answer strategies, including the case of multiple inquiries, further in Section 9.4.
Further reading
- Goertz and Mahoney (2012) on differences across inquiries in qualitative and quantitative research.
- Dawid (2000) on cause-of-effects questions.
- Yamamoto (2012) on causal attribution.
- Zhang and Rubin (2003) on “truncation-by-death.”