Rejoinder: More Limitations of Bayesian Leave-One-Out Cross-Validation

In a recent article for Computational Brain & Behavior, we discussed several limitations of Bayesian leave-one-out cross-validation (LOO) for model selection. Our contribution attracted three thought-provoking commentaries by (1) Vehtari, Simpson, Yao, and Gelman, (2) Navarro, and (3) Shiffrin and Chandramouli. We just submitted a rejoinder in which we address each of the commentaries and identify several additional limitations of…

read more

“Don’t Interfere with my Art”: On the Disputed Role of Preregistration in Exploratory Model Building

Recently the 59th annual meeting of the Psychonomic Society in New Orleans played host to an interesting series of talks on how statistical methods should interact with the practice of science. Some speakers discussed exploratory model building, suggesting that this activity may not benefit much, if at all, from preregistration. On the Twitterverse, reports of these talks provoked an…

read more

Transparency and The Need for Short Sentences

Recently I came across an article by Morton Ann Gernsbacher, entitled “Writing empirical articles: Transparency, reproducibility, clarity, and memorability” (preprint). The author covers a lot of ground and makes a series of good points. Also, as one would hope and expect, the article itself is a joy to read. Here is a fragment from the section “Recommendations for Clarity” —…

read more

“Bayesian Inference Without Tears” at CIRM

Today I am presenting a lecture for the “Masterclass in Bayesian Statistics” that takes place from October 22 to 26, 2018, at CIRM (Centre International de Rencontres Mathématiques) in Marseille, France. The slides of my talk, “Bayesian Inference Without Tears”, are here. Unfortunately the slides cannot convey the JASP demo work, but the presentations are taped, so I hope to…

read more

A Bayesian Perspective on the Proposed FDA Guidelines for Adaptive Clinical Trials

The frequentist Food and Drug Administration (FDA) has circulated a draft version of new guidelines for adaptive designs, with the explicit purpose of soliciting comments. The draft is titled “Adaptive designs for clinical trials of drugs and biologics: Guidance for industry” and you can find it here. As summarized on the FDA webpage, this draft document…

read more

Bayesian Advantages for the Pragmatic Researcher: Slides from a Talk in Frankfurt

This Monday in Frankfurt I presented a keynote lecture for the 51st Kongress der Deutschen Gesellschaft fuer Psychologie. I resisted the temptation to impress upon the audience the notion that they were all Statistical Sinners for not yet having renounced the p-value. Instead I outlined five concrete Bayesian data-analysis projects that my lab had conducted in recent years. So no…

read more

Redefine Statistical Significance XVII: William Rozeboom Destroys the “Justify Your Own Alpha” Argument…Back in 1960

Background: the recent paper “Redefine Statistical Significance” suggested that it is prudent to treat p-values just below .05 with a grain of salt, as such p-values provide only weak evidence against the null. The counterarguments to this proposal were varied, but in most cases the central claim (that p-just-below-.05 findings are evidentially weak) was not disputed; instead, one group of…

read more

Redefine Statistical Significance Part XVI: The Commentary by JP de Ruiter

Across virtually all of the empirical disciplines, the single most dominant procedure for drawing conclusions from data is “compare-your-p-value-to-.05-and-declare-victory-if-it-is-lower”. Remarkably, this common strategy appears to create about as much enthusiasm as forcefully stepping in a fresh pile of dog poo. For instance, in a recent critique of the “compare-your-p-value-to-.05-and-declare-victory-if-it-is-lower” procedure, 72 researchers argued that p-just-below-.05 results are evidentially weak, and…

read more