
6.3. Limitations of the study

In this section, the limitations of the study are assessed and discussed. The limitations primarily concern the reliability and validity of both sub-studies, as well as how the sub-studies together shed light on the research topic. Reliability concerns the consistency of a measure, while validity concerns the accuracy of a measure (Creswell & Creswell 2018). Reliability and validity are assessed separately for each sub-study.

Reliability and validity in sub-study I

In sub-study I, the patterns of scholarly publishing in the humanities in relation to other fields of science were examined by analyzing publication data. The OA status of publications in different publication types was also examined. The limitations mainly concern the reliability and consistency of the publication data and the consequences for the results. When using data that other actors have collected for other purposes, the reliability and validity of the obtained data are largely beyond the researcher's control (Creswell & Creswell 2018). Taking this into consideration is highly relevant when assessing sub-study I, as the publication data was collected as part of the national publication data collection conducted by the MEC.


Several factors concerning the publication data may affect the quality and correctness of the results but are independent of the researcher. To guarantee that the same definitions of OA were deployed both when retrieving data and when presenting results, the handbook in effect in 2018 (Tiedonkeruun käsikirja 2018) was used as a point of reference. More importantly, however, the publication data available through Juuli may itself contain errors. Despite the thorough validation processes conducted by the university libraries and the MEC, the definitions of OA and other publication characteristics provided by the MEC may not have been consistently applied by the university libraries and individual validators, as observed by Ilva (2017, 2019). For example, delayed OA is not considered proper OA according to the definitions of the MEC (Tiedonkeruun käsikirja 2018), but publications with delayed OA may still have been recorded as OA publications in the validation process (Ilva 2019). Some validated publications containing mistakes in the publication data were also observed while conducting this study.

Since the total numbers of reported publications can be considered small, errors in the publication data inevitably affect the results as presented in this thesis. As a further consequence of the small quantities, these errors may decisively affect results presented as percentages: for example, in a field with only 40 reported publications, a single misclassified publication shifts a share by 2.5 percentage points. No manual corrections were made to the data retrieved for the study, since this would compromise the consistency of the data and the opportunities for comparison and replication. Overall, the changeability of the data available in Virta should be taken into account when planning comparisons of different publication data or a replication of the study. The publication data in Virta is constantly being updated, which makes it difficult to verify the correctness of search strategies and the exact numbers of publications after some time has passed (Ilva 2017, 2019). Therefore, the exact dates of publication data retrieval were documented during the research process. Indeed, the publication data changed during the research process: for instance, in the final rounds of data retrieval and checking of previously retrieved data, the figures for parallel publishing had improved in almost all fields of science.

Despite these limitations, the publication data in Virta represent the official statistics of publications produced at institutions of higher education in Finland. In that sense, the use and interpretation of the publication data, despite the probability of errors, are valid for assessing the OA status of publications and the patterns of scholarly publishing at institutions of higher education.

Since the fields of science used to describe the discipline of a publication do not necessarily correspond to the affiliation of the researcher, it is not appropriate to draw direct conclusions at the faculty level. For the same reason, comparisons between corresponding faculties at different universities are hardly justified. It can, however, be assumed that most publications reported in the humanities at ÅAU were produced at FHPT.

With regard to validity, it should be noted that analyses of publication data from research institutions need to be understood in their organizational and disciplinary contexts in order to be meaningfully interpreted. In this study, an analysis of publication data from all disciplines was presented in order to contextualize the patterns of scholarly publishing in the humanities in particular. Since the analysis of publishing patterns is limited to a single year, the results provide only a limited view of the current state of scholarly publishing activities. The results cannot be used as a tool for predicting future publishing activities, nor can they be directly compared to international publishing patterns.

Reliability and validity in sub-study II

The limitations of sub-study II are discussed with regard to the design of the survey (measurement consistency), data collection and sampling (response bias), data processing, and the analysis of the survey data. The reliability and validity of the results are assessed.

The limitation concerning measurement reliability relates to whether the survey consistently measures what it is intended to measure (Creswell & Creswell 2018). To avoid this kind of bias, a pilot survey was conducted to ensure the consistency of the survey items before wider distribution. It should be acknowledged that the topic of OA can be considered difficult, and far from all researchers are knowledgeable about OA issues. Although most respondents reported a high level of awareness and knowledge of OA issues, it cannot be excluded that respondents interpreted the survey questions in other ways than the researcher intended. Such confusion of terminology and other misunderstandings would inevitably also affect the validity of the results. However, the results of items that investigate the same aspects of OA publishing (e.g., concerning green OA) support each other and are in line with previous research.

With regard to the design of the survey, some improvements could be made if a follow-up study is conducted. Some questions in the survey received a large proportion of responses for the “neither disagree nor agree” alternative. Respondents' tendency to choose this option may indeed indicate that opinion, but it may also imply that respondents did not know what to answer. In other words, it could be useful to also include an “I do not know” option for some question items. Altogether, this tendency shows that those questions need further examination.

Similarly, the questions that consisted of forced ranking scales could have benefitted from an N/A alternative. Some respondents took the opportunity to explain their views in the open-ended questions at the end of the survey, which can be considered to enhance the validity of the responses.

As observed in the research literature on survey studies (Creswell & Creswell 2018), there is a central difference between self-reported awareness and knowledge and real-life awareness and knowledge. For instance, respondents may underestimate or overestimate their awareness and knowledge of OA issues and act differently in real-life situations compared to how they reported in the survey. There is also a probability that respondents answer in the way they (consciously or unconsciously) think the researcher wants. This bias is almost inevitable in survey studies, and addressing it would require applying other research strategies.

The nonresponse rate affects the reliability of the survey results. In total, 59 respondents can be considered a decent number of participants, but they represent only a minority of the total number of researchers at FHPT. The representativeness of the respondents who decided to participate is a central aspect to consider when assessing the results. For example, it can be assumed that researchers who are already familiar with OA and positive about it are more likely to participate in the survey in the first place than those who are not familiar with OA at all. A potential explanation for the large share of doctoral students among the participants is that OA issues are increasingly emphasized for early career researchers. The cover letter mentioned that those without experience of OA could also participate in the survey, in order to encourage researchers who had no experience of OA publishing. Because of these response biases, the results do not represent the voice of all researchers at FHPT.

Assessment of the entire study

The limitations with regard to whether the results can be generalized (external validity) or directly compared to previous studies are characteristic of case studies (Creswell & Creswell 2018). Despite the limitations and problem areas regarding the reliability and validity of the results, the overall results of the present study are in line with previous research.

Another central limitation relates to the mainly quantitative approach. This approach provided a more extensive and versatile insight into the current situation in the humanities at ÅAU than qualitative methods would have. However, it is not possible to know how respondents reasoned when responding to the survey. At the same time, attitudes and motivations are constantly subject to change, and similar responses in a repeated survey cannot be guaranteed.

The scope of this case study can be considered broad, even though it is impossible to study all aspects of OA publishing, even within a defined organizational or institutional context. The empirical data retrieved for both sub-studies were rich and versatile, and opportunities for further analyses remain. The present multi-method study has been fruitful for mapping critical developments in the transition towards OA publishing in the humanities from the researchers' perspective.