---
title: MLE Adjustment for High-Dimensional Logistic Regression
author: Koji Makiyama (@hoxo-m)
output: github_document
---
<!-- README.md is generated from README.Rmd. Please edit that file -->
```{r setup, include=FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "man/figures/README-",
  message = FALSE,
  eval = TRUE
)
```
## Overview
The maximum likelihood estimator (MLE) of the logistic regression model is not unbiased, so the estimates calculated by `glm()` contain bias.
Since the MLE is consistent and asymptotically normal, this bias can safely be disregarded when the sample size is large relative to the number of parameters.
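For instance, in the following minimal sketch (the variables `n0`, `X0`, `beta0`, and `y0` are introduced purely for illustration), the MLE with only three input variables and 2000 observations lands close to the true coefficients:
```{r lowDimCheck}
# Low-dimensional case: three input variables, n = 2000 observations
set.seed(314)
n0 <- 2000
X0 <- matrix(rnorm(n0 * 3), nrow = n0, ncol = 3)
beta0 <- c(1, -1, 0)
y0 <- rbinom(n0, 1, plogis(X0 %*% beta0))

# The estimates are approximately (1, -1, 0)
coef(glm(y0 ~ X0, family = binomial))[-1]
```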
However, in the analysis of high-dimensional data, the sample size `n` is sometimes not much larger than the number of input variables `p`, and the bias becomes too large to ignore.
For example, consider a scenario with `p = 300` input variables and a sample size of `n = 2000`.
Suppose further that the true parameters `beta` consist of
- `beta = 10` for the first 1/3,
- `beta = -10` for the next 1/3, and
- `beta = 0` for the remaining 1/3.
In such a case, the MLE returned by `glm()` contains a non-negligible bias.
```{r biasedMLE}
p <- 300  # number of input variables
n <- 2000 # sample size

# Design matrix with i.i.d. N(0, 1/n) entries
set.seed(314)
x <- rnorm(n * p, mean = 0, sd = sqrt(1/n))
X <- matrix(x, nrow = n, ncol = p)

# True coefficients: 10 for the first third, -10 for the next, 0 for the rest
beta <- matrix(rep(c(10, -10, 0), each = p/3), nrow = p, ncol = 1)

# Simulate binary responses from the logistic model
prob <- plogis(X %*% beta)
y <- rbinom(n, 1, prob)

# Fit by maximum likelihood; x = TRUE stores the model matrix in the result
fit <- glm(y ~ X, family = binomial, x = TRUE)
library(ggplot2)
theme_set(theme_bw())

df <- data.frame(index = seq_len(p), mle = coef(fit)[-1])
ggplot(df, aes(index, mle)) +
  geom_point(color = "blue") +
  annotate("segment", x = c(0, 100, 200), xend = c(100, 200, 300),
           y = c(10, -10, 0), yend = c(10, -10, 0), linewidth = 1.5) +
  scale_x_continuous(breaks = c(0, 100, 200, 300)) +
  ylim(-30, 30) + xlab("Index of parameters") + ylab("MLE") +
  ggtitle("True (black line) and MLE (blue point)")
```
You can see that the blue points (the MLE) deviate substantially from the black lines (the true values).
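As a quick numeric check (an illustration, not part of the package), we can average the MLE within each block of identical true coefficients. The theory of Sur and Candès (2018) predicts that the nonzero blocks overshoot their true values by a roughly common inflation factor:
```{r inflationCheck}
# Mean of the MLE within each block of identical true coefficients;
# the nonzero blocks overshoot their true values of -10 and 10
block <- rep(c(10, -10, 0), each = p/3)
tapply(coef(fit)[-1], block, mean)
```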
The purpose of this package is to alleviate the bias by adjusting the MLE.
To achieve this, we implemented two methods:
- "ProbeFrontier," as proposed by Sur and Candès (2018), and
- "SLOE," as proposed by Yadlowsky et al. (2021).
The `adjustMLE` function in our package applies one of these methods to a fitted model to mitigate the bias.
```{r adjustedMLE, results='hide'}
library(adjustMLE)
fit_adj <- adjustMLE(fit)
df <- data.frame(index = seq_len(p), mle = coef(fit_adj)[-1])
ggplot(df, aes(index, mle)) +
  geom_point(color = "blue") +
  annotate("segment", x = c(0, 100, 200), xend = c(100, 200, 300),
           y = c(10, -10, 0), yend = c(10, -10, 0), linewidth = 1.5) +
  scale_x_continuous(breaks = c(0, 100, 200, 300)) +
  ylim(-30, 30) + xlab("Index of parameters") + ylab("Adjusted MLE") +
  ggtitle("True (black line) and adjusted MLE (blue point)")
```
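The adjusted estimates now lie much closer to the true values. Repeating the block-mean check from above on the adjusted coefficients:
```{r adjustedCheck}
# Block means of the adjusted MLE should be near the true values -10, 0, 10
tapply(coef(fit_adj)[-1], block, mean)
```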
### Estimated Parameters for Adjustment Methods
```{r parameter}
fit_adj$parameters
```
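These are the quantities that the chosen adjustment method estimates from the data (for example, the inflation factor of the MLE in the theory of Sur and Candès (2018)); see the references below for their definitions.

If you want to select the adjustment method explicitly, a call might look like the following sketch. Note that the `method` argument name and its values are assumptions inferred from the two methods listed above, not an interface confirmed by this README; check `?adjustMLE` for the actual arguments.
```r
# Hypothetical call: the argument name `method` is an assumption
fit_adj <- adjustMLE(fit, method = "SLOE")
```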
## Installation
You can install the package from GitHub.
```r
install.packages("remotes") # if the "remotes" package is not installed yet
remotes::install_github("hoxo-m/adjustMLE")
```
## Related Work
- glmhd (R package on GitHub)
  - https://github.com/zq00/glmhd
- SLOE (Python code)
  - https://github.com/google-research/sloe-logistic
## References
- Sur, P., & Candès, E.J. (2018). A modern maximum-likelihood theory for high-dimensional logistic regression. Proceedings of the National Academy of Sciences of the United States of America, 116, 14516 - 14525.
- Yadlowsky, S., Yun, T., McLean, C.Y., & D'Amour, A. (2021). SLOE: A Faster Method for Statistical Inference in High-Dimensional Logistic Regression. Neural Information Processing Systems.