Hi devs
I'm going through the Frontiers in Neuroinformatics article, HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python, particularly the 'Simulations' section. I'm working on something similar, but I don't see a big difference between the t-test and the Bayesian estimation in their ability to detect a difference.
I went through the repository, and it looks like the analysis uses two-sided t-tests instead of one-sided ones, which would explain why the t-test fails to detect a difference that the Bayesian estimation does. The current `scipy.stats` docs list `'two-sided'` as the default `alternative` for `ttest_rel` and `ttest_1samp`.
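For concreteness, here's a minimal sketch of the difference (with made-up paired data, not the repository's actual simulation; the `alternative` keyword needs scipy >= 1.6):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical paired samples; `b` is shifted slightly above `a`.
a = rng.normal(loc=0.0, scale=1.0, size=20)
b = a + rng.normal(loc=0.3, scale=1.0, size=20)

# Default: two-sided (alternative='two-sided').
t_stat, p_two_sided = stats.ttest_rel(b, a)

# One-sided test in the hypothesized direction.
_, p_one_sided = stats.ttest_rel(b, a, alternative='greater')

# When the observed t-statistic is positive, p_one_sided == p_two_sided / 2.
print(p_two_sided, p_one_sided)
```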
Also, the repository compares against a threshold of 0.025. Since the one-sided p-value is half the two-sided one (when the effect is in the tested direction), `p_two_sided < 0.025` iff `p_two_sided / 2 < 0.0125` iff `p_one_sided < 0.0125`, so the equivalent one-sided threshold is 0.0125, and the matching Bayesian posterior-probability criterion is 1 - 0.0125 = 0.9875.
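A quick worked version of that threshold conversion (the p-value here is hypothetical):

```python
# Hypothetical two-sided p-value from ttest_rel.
p_two_sided = 0.02
# Equals the one-sided p-value when the effect is in the tested direction.
p_one_sided = p_two_sided / 2          # 0.01

alpha_two_sided = 0.025                # threshold used in the repository
alpha_one_sided = alpha_two_sided / 2  # 0.0125, the equivalent one-sided cut-off

# The matching Bayesian criterion on the posterior probability:
bayes_threshold = 1 - alpha_one_sided  # 0.9875

# Both comparisons make the same decision:
print(p_two_sided < alpha_two_sided)   # True
print(p_one_sided < alpha_one_sided)   # True
```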
Could you please confirm this? Thank you!