
Bayesian estimation versus t-test #3

Open
vishu-tyagi opened this issue Mar 30, 2023 · 0 comments

vishu-tyagi commented Mar 30, 2023

Hi devs

I'm going through the Frontiers in Neuroinformatics article, "HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python", particularly the 'Simulations' section. I'm working on something similar, but I don't see a large difference between the t-test and Bayesian estimation in how well they detect an effect.

I went through the repository, and it looks like you've used two-sided t-tests instead of one-sided ones, and that's why the t-test fails to detect a difference while the Bayesian estimation does. The current scipy.stats docs list two-sided as the default `alternative` for `ttest_rel` and `ttest_1samp`.
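
For example, here's a minimal sketch with made-up data (not the repository's code), assuming scipy >= 1.6, where `ttest_rel` accepts the `alternative` keyword:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cond_a = rng.normal(loc=0.50, scale=0.10, size=20)           # e.g. drift rates, condition A
cond_b = cond_a + rng.normal(loc=0.03, scale=0.05, size=20)  # small positive shift in condition B

# Default is two-sided:
t_two, p_two_sided = stats.ttest_rel(cond_b, cond_a)
# One-sided test in the hypothesized direction (B > A):
t_one, p_one_sided = stats.ttest_rel(cond_b, cond_a, alternative="greater")

# When the observed effect is in the hypothesized direction,
# the one-sided p-value is half the two-sided one:
print(p_two_sided, p_one_sided)   # p_one_sided == p_two_sided / 2
```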

Also, the code compares the two-sided p-value against a threshold of 0.025. Since `p_value_2_sided < 0.025` iff `p_value_2_sided / 2 < 0.025 / 2` iff `p_value_1_sided < 0.0125` (assuming the effect is in the hypothesized direction), the equivalent cutoff for the Bayesian posterior probability would be `1 - 0.0125 = 0.9875`.
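
To make the equivalence concrete (hypothetical numbers, not taken from the repository):

```python
alpha = 0.025                  # threshold applied to the two-sided p-value
p_two_sided = 0.022            # example two-sided p-value
p_one_sided = p_two_sided / 2  # valid when the effect is in the tested direction

# The two decision rules agree:
assert (p_two_sided < alpha) == (p_one_sided < alpha / 2)   # alpha / 2 == 0.0125

# The analogous cutoff for a Bayesian posterior probability of a
# directional effect would then be:
print(1 - alpha / 2)   # 0.9875
```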

Could you please confirm this? Thank you!
