Now, this seems very logical at first. No point in using data from sources you can't know for certain are reliable. But where does this process lead us? I see several potential problems:
1) We've already seen on the Forge and post-Forge fora what happens when you need to use one paradigm's terminology and focus (such as "design") in order to be understood. Stagnation. Not for everyone, of course, but progress will be much slower when you have to pigeonhole every explanation into one linguistic model.
2) The great majority of academic rpg researchers are ludologists, mostly working with digital forms of role-playing. Following the rule above, any psychologist (or other outside specialist) providing deeper insight into mental states would have to dress up her findings in ludological terms to get anyone besides the few other rpg psychologists to use the data, never mind how accurate or useful that data is. We can't expect researchers to broaden their horizons beyond their primary fields, now, can we?
3) If people are not willing to change the way they explain things, I foresee a division in the field. If we're lucky, it will eventually happen in a good context (departments of Game Studies having chairs of "psychology of role-playing", "ludology", "character design", etc.), but if we're not, it will happen a lot sooner and take the form of a methodological schism. Even a quick look at the reference lists of Lifelike shows that we're already on our way there.
So, what to do? How about relying on refereeing processes? Never mind that you do not understand it all: if a couple of peer reviewers with a PhD in psychology say that the paper is OK, you can be reasonably sure that it's properly done, at least to some extent. And without the aforementioned specialist professors, that's the best we can get for now. The words "According to..." (as opposed to "shown", etc.) exist in academic contexts precisely to point out that someone has said something we do not necessarily agree with, or even fully understand. How about using them?