The objective of this notebook is to illustrate how to perform treatment effect estimation with missing attributes on a small toy example. By replacing the toy example with the dataset of your choice (following the format instructions below) you can estimate average treatment effects (ATE) on your own data.

For more information and details on the theoretical background of the methodology, we refer to (Mayer et al. 2020).

**You can replace the chunks `rename_data` and `confounders` with any data set you wish to analyze. For this, you have to specify:**

`X.na`
: confounders. A data.frame of size `#observations x #covariates`. With or without missing values.

`W`
: treatment assignment. A binary vector coded either with `{0,1}` or with `{FALSE,TRUE}` (representing `{control,treatment}`). Without missing values.

`Y`
: observed outcome. A numerical or binary vector (if binary, then coded with `{0,1}`). Without missing values.

At the end of the chunk `rename_data`, set `synthetic=FALSE` if you choose your own data set.

If `X.na` contains confounders but also other covariates that are not to be considered as confounders in the analysis, specify the names of the confounders in the chunk `confounders` and the role of the remaining covariates (either treatment regressors or outcome regressors).
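For instance, plugging in your own data set could look like the following sketch (the file name `my_study.csv` and the column names `W` and `Y` are purely illustrative):

```r
# Hypothetical example of supplying your own data set
my_data <- read.csv("my_study.csv")
X.na <- my_data[, setdiff(colnames(my_data), c("W", "Y"))] # may contain NAs
W <- my_data$W # binary treatment assignment, no NAs
Y <- my_data$Y # observed outcome, no NAs
# Sanity checks on the required format
stopifnot(all(W %in% c(0, 1)), !anyNA(W), !anyNA(Y))
synthetic <- FALSE
```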

```
library(cobalt) # Covariate balance assessment
library(ggplot2)
library(dplyr)
library(MASS) # Multivariate normal sampling (mvrnorm)
library(pracma) # Numerical utilities (mod)
library(assertthat)
library(caret) # Machine learning utilities
library(ranger) # Random Forests prediction
library(FactoMineR) # Factorial data analysis
library(grf) # Doubly robust treatment effect estimation
library(norm) # EM algorithm for multivariate normal data with missing values
library(missMDA) # PCA/MCA with missing values + iterative PC imputation
library(mice) # Multiple imputation
library(VIM) # Missing values exploration and visualization
library(devtools) # source_url
# Load the ATE estimation functions `ipw` and `dr` from the GitHub repository
source_url("https://raw.githubusercontent.com/imkemayer/causal-inference-missing/master/Helper/helper_causalInference.R")
# Load function to generate missing values using different mechanisms from R-miss-tastic
source_url("https://rmisstastic.netlify.app/how-to/generate/amputation.R")
# Set random generator seed for reproducible results
set.seed(1234)
```

We will generate a simple toy data set with normally distributed confounders and MCAR missing values. For generating missing values, we use code provided by Mayer et al. (2019).

```
seed <- 4321
n <- 5000 # number of observations
p <- 5 # number of confounders
tau <- 1 # true value of the ATE
perc.NA <- 0.3 # percentage of missing values generated in the confounders
```

```
# Here, the relationships between confounders are non-linear, the treatment is
# generated according to a logistic model while satisfying the unconfoundedness
# despite missingness assumption (see Mayer et al. 2020).
# Finally, the outcome is generated according to a non-linear model with constant
# treatment effect.
rho <- 0.3
Sigma <- diag(p) + rho*upper.tri(diag(p)) + rho*lower.tri(diag(p))
X <- mvrnorm(n=n, mu=rep(1, p), Sigma=Sigma)
res.na <- produce_NA(X, mechanism="MCAR", perc.missing=perc.NA, seed=seed)
X.na <- res.na$data.incomp
# Non-linear transformations of X
X.tmp <- X
for (j in 1:p){
  X.tmp[,j] <- (mod(j,5)==1)*((X.tmp[,j] < quantile(X.tmp[,j],0.7)) + (X.tmp[,j] > quantile(X.tmp[,j],0.2))) +
               (mod(j,5)==2)*(1/(0.001+exp(X.tmp[,j]*X.tmp[,1]))) +
               (mod(j,5)==3)*(-(X.tmp[,j])*(X.tmp[,2]>0)) +
               (mod(j,5)==4)*(-2.5*sqrt(abs(X.tmp[,j]))) +
               (mod(j,5)==0)*(X.tmp[,3]*X.tmp[,j])
}
# Propensity scores
X.tmp.ps <- X.tmp
X.tmp.ps[res.na$idx_newNA] <- 0
expit <- function(x){ return(1/(1+exp(-x))) }
alpha <- array(c(-0.6, 0.6), dim=p)
prop_scores <- apply(data.frame(X.tmp), MARGIN=1,
                     FUN=function(z) expit(z%*%alpha))
prop_scores <- 0.01 + (pmin(0.98, prop_scores) - min(prop_scores))/(max(prop_scores)-min(prop_scores))
W <- sapply(prop_scores, FUN=function(p) rbinom(n=1, size=1, prob=p))
# Outcome
epsilons <- rnorm(n, sd=0.2)
Y <- rep(0,n)
beta <- runif(p, -1, 1)
Y[which(W==1)] <- sign(X.tmp[which(W==1),]%*%beta)*log(abs(X.tmp[which(W==1),]%*%beta)) + tau + epsilons[which(W==1)]
Y[which(!(W==1))] <- sign(X.tmp[which(!(W==1)),]%*%beta)*log(abs(X.tmp[which(!(W==1)),]%*%beta)) + epsilons[which(!(W==1))]
toy_data <- list(X=X, X.na=X.na, W=W, Y=Y)
```

**If you want to use your own data set, change the following chunk, following the instructions given in the introduction.**

```
synthetic <- TRUE
X.na <- data.frame(toy_data$X.na)
W <- toy_data$W
Y <- toy_data$Y
covariate_names <- colnames(X.na)
n <- dim(X.na)[1]
p <- dim(X.na)[2]
df.na <- data.frame(cbind(X.na, W=W, Y=Y))
colnames(df.na) <- c(covariate_names, "W", "Y")
```

In certain cases, not all covariates are necessarily confounders. Therefore, we specify the set of confounders (here we consider all covariates to be confounders). Additionally, in some cases there are also variables that are only predictive of the treatment assignment. Their names can be specified in `only_treatment_pred_names`.

```
confounder_names <- covariate_names
only_treatment_pred_names <- c()
only_outcome_pred_names <- setdiff(covariate_names,
                                   c(confounder_names, only_treatment_pred_names))
```

First we compute the unadjusted ATE, ignoring the confounders.

`ate_raw <- mean(Y[which(as.logical(W))]) - mean(Y[which(!as.logical(W))])`

This gives us \(ATE_{naive}=\) 1.1095881.

Let us also assess how the outcome is distributed with respect to the treatment group.

From these two variables, \(W\) and \(Y\), we can compute the following descriptive statistics:

- Average outcome: \(\widehat {\mathbb{E}}[Y] =\) 0.53
- \(\widehat {Pr}(Treatment=1)=\) 0.3906
- \(\widehat {\mathbb{E}}[(Y \,\mid\, Treatment=1)]=\) 1.2107948
- \(\widehat {\mathbb{E}}[(Y \,\mid\, Treatment=0)]=\) 0.1012067

Before handling missing values, it is important to explore the data and to take a look at the missing values which might exhibit certain patterns. See the appendix for an example of how to graphically examine the pattern of missing values.
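For instance, a quick first look can be obtained with the `VIM` and `mice` packages loaded above; a minimal sketch on the toy data's `X.na`:

```r
# Proportion of missing values per variable and joint missingness patterns
aggr(X.na, numbers = TRUE, sortVars = TRUE) # VIM: barplot + pattern plot
md.pattern(X.na, rotate.names = TRUE) # mice: tabular missingness patterns
```

With MCAR missing values as generated here, all variables should show roughly the same proportion of missing entries (`perc.NA`) and no systematic pattern.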

And for a comprehensive overview on missing values handling in statistical analysis with R, we refer to the 2-day course *Dealing With Missing Values in R* by Julie Josse at ETH Zürich (2020).

To estimate the ATE with missing values, we will use different strategies. First, we will impute the data to obtain a completed data set and estimate the ATE using classical estimators (IPW and the doubly robust AIPW). This approach assumes Missing At Random values and the classical unconfoundedness assumption.

We will impute the data with two different methods, iterative PCA and mice.

Note that we use both the outcome variable and the treatment assignment in the imputation model as recommended by (Seaman and White 2014).

We now perform a **single** imputation with a (regularized) iterative PCA model, which allows the imputation to take into account both similarities between individuals and relationships between variables. For more details, see (Lê, Josse, and Husson 2008).

First we find the optimal number of dimensions for the imputation method by cross-validation.

**Note that if your data `X.na` contains only qualitative variables (i.e., categorical data), replace `estim_ncpFAMD` with `estim_ncpMCA`.**

```
ncomp <- estim_ncpFAMD(data.frame(X.na, W = as.factor(W), Y = Y),
                       ncp.min = 1,
                       ncp.max = p,
                       method = "Regularized",
                       method.cv = "Kfold",
                       nbsim = 10,
                       verbose = TRUE)
plot(1:length(ncomp$criterion), ncomp$criterion, xlab = "nb dim", ylab = "MSEP")
```

On this toy example, we do not run the above chunk in order to speed up the calculations. We will arbitrarily set `ncp=3`.

**As for the `ncomp_all` chunk: if your data contains only qualitative variables, replace `imputeFAMD` with `imputeMCA`.**

```
ncp <- 3
df.tmp <- data.frame(X.na, W = as.factor(W), Y = Y)
colnames(df.tmp) <- colnames(df.na)
df.imp.pc <- data.frame(imputeFAMD(df.tmp, ncp=ncp, seed=seed)$completeObs)
levels(df.imp.pc$W) <- c(0,1)
df.imp.pc$W <- as.numeric(as.character(df.imp.pc$W))
# Remark:
# many causal inference functions use the treatment as numeric and not as factor
# even though it is a binary treatment.
```

We also perform a **multiple** imputation analysis, using the `mice` package. We impute the data `m=5` times.

```
m <- 5
df.imp.mice <- mice(df.na, m=m, seed=seed, printFlag=F)
```

We illustrate here how to estimate the ATE. To estimate other estimands such as the ATT and the ATC, you can change the variable `target` in the chunk `choose_target` below to:

- `target="treated"` for the ATT
- `target="control"` for the ATC
- `target="overlap"` for the ATE with overlap weights

`target <- "all"`

We estimate the ATE with two different estimators (IPW and the doubly robust AIPW) and two possibilities to estimate the nuisance parameters (i.e., the propensity scores and the conditional response surfaces). A classical choice for nuisance parameter estimation in causal inference applications is the use of generalized linear models (in practice, mostly logistic and linear models), which we use via the `glm` function. More recently, more complex and flexible models have become available, such as generalized random forests (Athey, Tibshirani, and Wager 2019).

The (unnormalized) IPW estimator has the following expression:

\[\hat{\tau}_{IPW} = \frac{1}{n}\sum_{i=1}^n\left(\frac{W_iY_i}{\hat{e}(X_i)} - \frac{(1-W_i)Y_i}{1-\hat{e}(X_i)}\right),\] and the doubly robust AIPW estimator can be written as follows: \[\hat{\tau}_{DR} = \frac{1}{n}\sum_{i=1}^n \hat{\mu}_1(X_i) - \hat{\mu}_0(X_i) + W_i\frac{Y_i-\hat{\mu}_1(X_i)}{\hat{e}(X_i)} - (1-W_i)\frac{Y_i-\hat{\mu}_0(X_i)}{1-\hat{e}(X_i)}\]
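Both formulas can be computed from scratch once the nuisance estimates are available. The following sketch fits the nuisances with plain `glm`/`lm` on complete data; the function name `aipw_sketch` is illustrative and not part of the helper scripts loaded above, which provide the more sophisticated `ipw` and `dr` used below.

```r
# Minimal from-scratch IPW and AIPW on complete data (illustrative sketch)
aipw_sketch <- function(X, W, Y) {
  df <- data.frame(X, W = W, Y = Y)
  # Propensity scores e(X) via logistic regression
  e.hat <- predict(glm(W ~ . - Y, data = df, family = binomial), type = "response")
  # Response surfaces mu_1(X), mu_0(X) via linear regressions on each arm
  mu1.hat <- predict(lm(Y ~ . - W, data = df[df$W == 1, ]), newdata = df)
  mu0.hat <- predict(lm(Y ~ . - W, data = df[df$W == 0, ]), newdata = df)
  # Plug the nuisance estimates into the two formulas above
  tau.ipw <- mean(W * Y / e.hat - (1 - W) * Y / (1 - e.hat))
  tau.dr  <- mean(mu1.hat - mu0.hat +
                  W * (Y - mu1.hat) / e.hat -
                  (1 - W) * (Y - mu0.hat) / (1 - e.hat))
  c(IPW = tau.ipw, AIPW = tau.dr)
}
```

For example, `aipw_sketch(df.imp.pc[, covariate_names], df.imp.pc$W, df.imp.pc$Y)` would give rough parametric estimates on the imputed data.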

When \(X\) is incomplete (i.e., we effectively work with \(X^* = X\odot R + \{NA\}\odot (1-R)\) where \(R\) is the binary response pattern of \(X\) indicating by \(1\) the observed values), and we assume the unconfoundedness despite missingness assumption, then the nuisance parameters \(e, \mu_0, \mu_1\) and their estimations in the above expressions are replaced by their generalized counterparts, e.g., \(e^*(x^*) = \mathbb{P}[W_i=1|X_i^*=x^*]\). For more details, see [Mayer et al. (2020)](http://dx.doi.org/10.1214/20-AOAS1356).

```
treatment_effect_estimates <- function(data, target="all",
                                       confounder_names=NULL,
                                       only_treatment_pred_names=NULL,
                                       only_outcome_pred_names=NULL,
                                       seed=NULL) {
  if (is.null(confounder_names)) {
    confounder_names <- setdiff(colnames(data), c("W","Y"))
  }
  results <- data.frame(matrix(ncol = 4, nrow = 0), row.names = NULL)
  # If there are no missing values in the data
  if (sum(is.na(data))==0){
    # Compute IPW (normalized or not) with logistic regression to estimate the weights
    tmp <- ipw(X=data[, confounder_names],
               outcome=data$Y, treat=as.logical(data$W),
               ps.method="glm", target=target,
               seed=seed)
    results <- rbind(results,
                     cbind("glm", "IPW_normalized_weights", tmp[[1]], tmp[[3]]))
    results <- rbind(results,
                     cbind("glm", "IPW_unnormalized_weights", tmp[[2]], tmp[[4]]))
    # Compute the doubly robust estimator using logistic regression to estimate
    # the propensity score and generalized random forest for the outcome model
    tmp <- dr(X=data[, confounder_names],
              X.for.ps = data[, c(only_treatment_pred_names, confounder_names)],
              X.for.outcome = data[, c(only_outcome_pred_names, confounder_names)],
              outcome = data$Y, treat = as.logical(data$W),
              ps.method = "glm.grf", out.method = "glm.grf",
              target = target,
              seed=seed)
    results <- rbind(results,
                     cbind("glm", "DR", tmp[[1]], tmp[[2]]))
  }
  # Compute IPW (normalized or unnormalized) with generalized random forest
  # (with Missing Incorporated in Attributes if data is incomplete)
  # to estimate the weights
  tmp <- ipw(X=data[, confounder_names],
             outcome=data$Y, treat=as.logical(data$W),
             ps.method="grf", target=target,
             seed=seed)
  results <- rbind(results,
                   cbind("grf", "IPW_normalized_weights", tmp[[1]], tmp[[3]]))
  results <- rbind(results,
                   cbind("grf", "IPW_unnormalized_weights", tmp[[2]], tmp[[4]]))
  # Compute the doubly robust estimator with generalized random forest
  # (with Missing Incorporated in Attributes if data is incomplete)
  # to estimate the propensity score and the outcome model
  tmp <- dr(X=data[, confounder_names],
            X.for.ps = data[, c(only_treatment_pred_names, confounder_names)],
            X.for.outcome = data[, c(only_outcome_pred_names, confounder_names)],
            outcome = data$Y, treat = as.logical(data$W),
            ps.method = "grf.ate", out.method = "grf.ate",
            target = target,
            seed=seed)
  results <- rbind(results,
                   cbind("grf", "DR", tmp[[1]], tmp[[2]]))
  colnames(results) <- c("Nuisance.estimation",
                         "Estimate",
                         "Value",
                         "StandardError")
  results$Value <- as.numeric(results$Value)
  results$StandardError <- as.numeric(results$StandardError)
  return(results)
}
```

Using the data set imputed by the principal component method, we compute the IPW estimator (normalized and unnormalized versions), with propensity scores estimated via logistic regression or via generalized random forest, and the doubly robust AIPW estimator, either with logistic regression (for the propensity scores) and logistic/linear regression (depending on the outcome type, for the outcome model), or with two generalized random forests.

`results_pc_imp <- treatment_effect_estimates(df.imp.pc, target = target, seed = seed)`

```
## Warning in average_treatment_effect(tau.forest, target.sample = target, :
## Estimated treatment propensities go as low as 0.009 which means that treatment
## effects for some controls may not be well identified. In this case, using
## `target.sample=treated` may be helpful.
```

Using the data sets imputed by `mice`, we compute the IPW and AIPW estimators on each completed data set as above. The different estimates are then aggregated using Rubin’s rules (Rubin 2004).
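In their simplest form, Rubin's rules pool \(m\) point estimates by averaging them and combining within- and between-imputation variance. A minimal sketch (`pool_rubin` is an illustrative helper, not part of the loaded packages):

```r
# Rubin's rules for pooling m point estimates and their standard errors
pool_rubin <- function(estimates, std.errors) {
  m <- length(estimates)
  q.bar <- mean(estimates)             # pooled point estimate
  u.bar <- mean(std.errors^2)          # within-imputation variance
  b     <- var(estimates)              # between-imputation variance
  t.var <- u.bar + (1 + 1/m) * b       # total variance
  c(estimate = q.bar, std.error = sqrt(t.var))
}
```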

```
res_tmp <- complete(df.imp.mice, "all") %>%
  lapply(treatment_effect_estimates, target=target, seed=seed)
res_val <- as.data.frame(do.call(rbind, lapply(res_tmp, "[", , "Value")))
res_se <- as.data.frame(do.call(rbind, lapply(res_tmp, "[", , "StandardError")))
results_mice_imp <- cbind(res_tmp[[1]][,1:2],
                          apply(res_val, 2, mean),
                          sapply(1:dim(res_tmp[[1]])[1],
                                 function(j) sqrt(mean(res_se[,j]^2*n) + (1+1/m)*sum((res_val[,j]-mean(res_val[,j]))^2)/(m-1))/sqrt(n)))
colnames(results_mice_imp) <- colnames(res_tmp[[1]])
```

Using the “raw” data without pre-processing of the missing values, i.e., using the data with `NA`s in the covariates, we compute the IPW estimator with propensity scores estimated via generalized random forests with Missing Incorporated in Attributes (MIA) and the DR estimator with two generalized random forests with MIA.

`results_mia <- treatment_effect_estimates(df.na, target=target, seed=seed)`

NA.handling | Nuisance.estimation | Estimate | Value | StandardError
---|---|---|---|---
pc.imp | glm | IPW_normalized_weights | 1.27 | 0.09
pc.imp | glm | IPW_unnormalized_weights | 1.15 | 0.08
pc.imp | glm | DR | 1.13 | 0.06
pc.imp | grf | IPW_normalized_weights | 1.03 | 0.05
pc.imp | grf | IPW_unnormalized_weights | 1.10 | 0.05
pc.imp | grf | DR | 1.02 | 0.04
mice | glm | IPW_normalized_weights | 1.07 | 0.06
mice | glm | IPW_unnormalized_weights | 1.02 | 0.06
mice | glm | DR | 1.00 | 0.05
mice | grf | IPW_normalized_weights | 0.95 | 0.04
mice | grf | IPW_unnormalized_weights | 1.01 | 0.05
mice | grf | DR | 0.95 | 0.04
mia | grf | IPW_normalized_weights | 1.00 | 0.04
mia | grf | IPW_unnormalized_weights | 1.04 | 0.05
mia | grf | DR | 0.98 | 0.03

We will plot all the previously calculated results.
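One possible way to do this with `ggplot2` is a dot-and-error-bar plot; the sketch below assumes the three result tables `results_pc_imp`, `results_mice_imp`, and `results_mia` computed above, labeled as in the summary table:

```r
# Combine all results and label the missing-value handling strategy
results_all <- rbind(cbind(NA.handling = "pc.imp", results_pc_imp),
                     cbind(NA.handling = "mice", results_mice_imp),
                     cbind(NA.handling = "mia", results_mia))
# Point estimates with approximate 95% confidence intervals
ggplot(results_all,
       aes(x = Estimate, y = Value,
           color = NA.handling, shape = Nuisance.estimation)) +
  geom_point(position = position_dodge(width = 0.5)) +
  geom_errorbar(aes(ymin = Value - 1.96*StandardError,
                    ymax = Value + 1.96*StandardError),
                width = 0.2, position = position_dodge(width = 0.5)) +
  geom_hline(yintercept = tau, linetype = "dashed") + # true ATE of the toy example
  coord_flip() +
  labs(y = "ATE estimate", x = NULL)
```

The dashed line marks the true ATE (`tau = 1`), which is only available because the data is synthetic.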