In the latest issue of IJMR, we are publishing three papers on the theme of measurement formats. The first is a comprehensive literature review, by Callegaro et al., that, in addition to summarising 'best practice' in the search for 'truth' in data collection, also identifies gaps in current published knowledge in this field.
In particular, the authors discuss in detail the impact of using forced-choice versus check-all formats. One major gap identified in the paper is that most research to date in this field has been undertaken in English-speaking countries, with limited cultural range.
Our second paper on this theme, by Revilla, starts to address this gap through research undertaken in Spanish-speaking countries, comparing forced-choice and check-all methods in the search for 'truth'. The final paper, by Rossiter and Dolnicar, explores the theme from a brand-image measurement perspective, arguing the case for applying Level-free Forced-Choice Binary measures when undertaking research in that field.
I have to say that this was not a deliberate editorial choice; it happened that we had three papers covering these different perspectives on the theme ready for publishing, but it seemed to make sense to publish them together rather than scattered across three separate issues.
But I'm not simply referring to this content to promote the latest issue. As you might expect, a lot of papers cross my desk each year that never make it to, or beyond, peer review. There are obviously many reasons why papers get rejected, but one issue that frequently arises, especially in academic submissions, is that little thought seems to be given to measurement methods, especially the use of scales.
Time and time again it's a simple statement that a Likert scale was used as the main measurement method, as if merely invoking 'Likert' implied some scientific thinking by the author; there is no apparent consideration of the literature on measurement, or of whether a 5-point, 7-point, or other scale is the most appropriate method to use in the search for 'truth'.
And I think that this search for 'truth' is not really being considered at all in many cases; measurement is treated simply as a route to collecting some, hopefully discriminatory, data on the topic that can then be funnelled into whatever model the author is using to analyse the output, in order to arrive at some new vision of the world.
But, as I and others have noted before, the description of the research design often accounts for 5% or less of a total submission. The emphasis falls, firstly, on the literature review of the field, to identify a gap for new research from which hypotheses (or research questions) can be developed for testing, and secondly, on the analysis of the data, looking for relationships between variables to test the hypotheses. The crucial part in between, arguing the case for a particular research method, sampling method, questionnaire design, and data collection mode, is described in as little space as possible so the author can cut to the chase and the analysis stage.
So I hope that these three papers, along with others we've published in this field over the years, are read and referenced in submissions as evidence to support the measurement process used in the research design. I shall be looking out for such citations in future submissions, in the same way that I expect authors to have cited other influential papers we've published on other key topics, especially our literature reviews.
Anyway, I hope you find these papers both interesting and useful. They also pose challenges to researchers about further expanding knowledge in this important field of study, challenges that perhaps you will be inspired to investigate.