Extremely confident. Mixed models are just linear models with the possibility of random coefficients across clustering variables. P-values are just fine when the model is well-defined (see for instance
https://link.springer.com/article/10.37 ... 21-01546-0)
I wouldn't say that there is growing consensus about p-values being wrong in mixed models. There is, and there has always been, consensus on the fact that if one does not include random coefficients that potentially vary across clusters, the inferential tests for the corresponding fixed effects are anti-conservative (they come out significant "too often"). You find this in the same literature you mentioned, but many other sources agree on this point; see for instance
https://www.sciencedirect.com/science/a ... 6X12001180 )
However, this issue does not imply that one should always include random coefficients, so the fact that "including random slopes does not always work" is not a problem in itself. The inflation of type I error occurs when a coefficient varies across cluster levels but is not allowed to vary, i.e. it is not set as a random effect in the model. If, on the other hand, the coefficient shows no variability in the data, removing it from the random effects does not inflate the type I error; on the contrary, it may increase power without introducing bias. See
https://www.sciencedirect.com/science/a ... 6X17300013
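To make the anti-conservativeness concrete, here is a small simulation sketch (Python with numpy; the function name, parameter values, and design are my own illustrative choices, not taken from the papers above). The true average slope is zero, but it varies across clusters; a pooled slope test that ignores the clustering rejects far more often than the nominal 5%, while with no slope variability the same test stays near 5%:

```python
import numpy as np

def rejection_rate(tau, n_sims=2000, K=10, n=30, sigma=1.0, seed=0):
    """Simulate clustered data whose true fixed slope is 0 but whose
    cluster-level slopes have SD `tau`; fit a pooled OLS slope that
    ignores the clusters and count how often its naive t-test rejects
    at alpha = .05. (Illustrative sketch, not a full mixed-model fit.)"""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        u = rng.normal(0.0, tau, size=K)              # random slope per cluster
        x = rng.normal(size=(K, n))
        y = u[:, None] * x + rng.normal(0.0, sigma, size=(K, n))
        xf, yf = x.ravel(), y.ravel()
        xc = xf - xf.mean()
        b = (xc @ yf) / (xc @ xc)                     # pooled OLS slope
        resid = yf - yf.mean() - b * xc
        se = np.sqrt((resid @ resid) / (xf.size - 2) / (xc @ xc))
        if abs(b / se) > 1.96:                        # ~alpha = .05, large df
            rejections += 1
    return rejections / n_sims

print(rejection_rate(tau=0.5))  # slope truly varies: rate well above .05
print(rejection_rate(tau=0.0))  # no slope variability: rate near .05
```

The point is exactly the one above: the p-values are not "wrong", the model that produced them is, because a coefficient that varies across clusters was forced to be fixed.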
Thus, the issue with mixed models is not whether the p-values are wrong (they are not), but whether one is setting up the model correctly. Let's say that users should know what they are doing. However, this is basically true for any statistical technique.