At the beginning of this chapter, I discussed the origin and trajectory of the dual concept of centre/periphery. The more recent concepts in particular carry a spatial, a geographic, distinction, semantically related to the ‘Global North/South’. As I argued above, this has a number of downsides. Most importantly, it stabilises the accumulation of referentially produced communicative peripheries in the ‘Global South’, and of centres in the ‘Global North’; this is a way of (re)producing privilege and is therefore socially unjust. I argue for replacing the spatial reference with a reference to the inner differentiation of social systems. For the research system, centres flexibly evolve around research results that are referred to exceptionally often in other research results interrelated via topic, theory or method. What is peripheral to one communicative cluster can be central to another. Peripheral communication is a realm in which epistemological risks can be taken, and where the centre is produced and stabilised as such.

The conceptual spatial distinction of centre/periphery is semantically mirrored in the distinction of ‘international/local journals’. While definitions are often implicit and varied, the former is generally identified with publishing for career advancement and high global visibility, the latter with relevance to the local community or with questionable quality. What those definitions fail to consider is the capacity of journals operating independently from major publishers to design author guidelines that can deviate from established ‘Global North’ standards. Here, leeway is given for productive irritation in scholarly communication, beyond technical innovation. In order not to lose this capacity, research management urgently needs to review its paradigms; those gateways for evolutive impulses are already disappearing in large numbers. Libraries and researchers themselves can effectively support those ‘local’ structures, globally.

However, the problematic constellation concerning the development towards open-access publishing described in the previous chapter reappears here once more: supporting those local structures comes at a cost that most institutions in the ‘Global South’ can hardly afford. At the same time, the publishing infrastructure in the ‘Global North’ could diversify more, including major commercial publishers as well as stand-alone journals hosted by non-profit service providers, and small university presses.¹⁰² In the ‘Global South’, however, a possible scenario is rather that, on the one hand, even more research results are remodelled in order to pass the generalisation barrier of ‘international’ publication venues. In this process, the research’s local visibility and relevance can be lost. On the other hand, locally relevant results will be tucked away from global visibility in commissioned reports, often lacking peer review, persistent accessibility and preservation. Both sides of this scenario are already strongly developed, and everything else will be hard to maintain under current conditions, as Chapter 4 will set out to illustrate for the case of Southeast African SSH.

102 The movement to reclaim the publishing infrastructures by the scholarly community is exemplified by the Fair Open Access Alliance, https://www.fairopenaccess.org, and the Radical Open Access Collective, http://radicaloa.disruptivemedia.org.uk, both visited on 29 June 2020.

Scientometrics often supports the reproduction of a spatial centre/periphery distinction describing the research system, by identifying centres of research communication (countries or institutions) quantitatively, based on non-representative data. Scientometrics about the SSH does this less often, since the use of mainstream bibliometric databases is only one method of many, and studies rarely have global scope. However, the field cares little about the peripheries it still produces with each ranking or network map. Exceptions started to surface only recently.¹⁰³
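To make that mechanism concrete, the following minimal sketch (my illustration with invented records, not a method from this thesis) shows how a ranking of ‘centres’ is typically derived from a citation network. Whatever the chosen database does not index never enters the graph, and therefore cannot rank as anything but peripheral:

import networkx as nx

# Hypothetical bibliographic records; papers the database never
# indexed are simply absent, so peripherality is built in.
records = [
    {"id": "paper_A", "cites": ["paper_C"]},
    {"id": "paper_B", "cites": ["paper_C", "paper_A"]},
    {"id": "paper_C", "cites": []},
]

graph = nx.DiGraph()
for record in records:
    graph.add_node(record["id"])
    for cited in record["cites"]:
        graph.add_edge(record["id"], cited)

# Any centrality measure can stand in for a 'centre' indicator;
# in-degree centrality is roughly the share of possible incoming citations.
ranking = sorted(nx.in_degree_centrality(graph).items(),
                 key=lambda item: item[1], reverse=True)
for paper, score in ranking:
    print(f"{paper}: {score:.2f}")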

My intention is not so much to point at the limitations of single citation indexes, but rather to highlight the way they are used. The problem is not that journals are not included, whether because they do not meet the inclusion criteria or because they are overlooked. The problem is rather the striving for standardisation of the communication which observes the research system. Each inconsiderate use of so-called international bibliographic databases for bibliometrics reproduces the ‘standards’ that were set by a very small group of people from the same cultural background. There is nothing wrong, however, with this group setting up a strict inclusion policy and practice for their database. I am not interested in incriminating any database creators for excluding particular journals, whether those meet the inclusion criteria or not. The point is: as soon as the database comes to affect who receives tenure and grants, how much public money is spent on research, and what reputation an entire country’s workforce receives, the database carries meaning beyond being a collection of bibliographic records based on contingent inclusion criteria. Those who are responsible for the database are then also responsible for communicating to every user, in an unmissable manner, that this database is nothing more than a collection of bibliographic records based on contingent inclusion criteria.

103 For example, with the 21st International Conference on Science and Technology Indicators in València, 14-16 September 2016, under the theme ‘Peripheries, Frontiers and Beyond’.

In the case of WOS, an intra-organisational policy was indirectly transferred to other contexts without careful assessment of whether it was actually fit for the intended purpose. It is therefore very important to remember that this study is not about unequal participation in publishing, but about unequal participation in indexing. Mixing up levels leads to wrong conclusions. As I stated earlier (pp. 48 sqq.), third- and fourth-order observations, such as bibliographic databases and studies about those databases, feed back into the scholarly communication system and are part of it as environmental references; there is no better example of that than WOS. Mixing up the levels, believing that a fourth-order observation truly is a direct observation of research excellence, has given rise to a series of systematic errors. First ISI, and then the subsequent owners of the database, profited from the growth effect based on this error, while a small group of insightful researchers (and isolated policy makers) are working hard to control damage that seems irredeemable at the moment.

WOS’ competitors might have more balanced inclusion criteria or excellence-ranking indicators. The very fact that there is competition in providing ‘global rankings’ of research excellence implies that the indicators are contingent. However, the idea of a ‘global ranking’ set up by a single (composite) indicator is widely accepted. The competition between indicators leads to the development of more complex algorithms, such as, for instance, Eigenfactor, or ‘altmetric’ approaches. This tends to obscure the initial error even further, because it can lead to the satisfied perception of a fixed error (or, for that matter, of levelled citation dynamics between disciplines; impact can happen without citation), untroubled by the remaining problems. The more complex an algorithm or a list of criteria, the harder it is to review, and therefore the less likely it is to be reviewed. It could very well be that the Impact Factor is so persistently in use because it is a simple formula created by a reputable institute. Furthermore, alternative indicators, such as different ‘altmetrics’, have so far failed to convince the community of scientometricians and policy makers. Increasingly, the audit industry is competing with composite indicators which are even more of a black box than the journal inclusion process for the Impact Factor.
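That simplicity is easy to demonstrate. The standard two-year Impact Factor of a journal for year y (the textbook definition, not a formula quoted from this thesis) is a single ratio:

\[
\mathrm{IF}_y = \frac{c_y(y-1) + c_y(y-2)}{n_{y-1} + n_{y-2}}
\]

where c_y(x) denotes the citations received in year y by items the journal published in year x, and n_x the number of citable items the journal published in year x. One division over two years of data; compare this with the opaque weighting schemes of composite indicators.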

There is no simple solution to what I would call an underlying error of flattening levels of observation in scholarly communication. There seems to be no easy way out of reproducing this error, since audit culture heavily relies on it. Today, the obsession with benchmarks, growth, and competition (in short, quantified communication) has already struck the large majority of research institutions; indeed, all kinds of organisation-based labour and education. Quantified communication makes it extremely hard to simply point out the underlying error and be heard, although doing so would be quite helpful: companies whose business model is to provide benchmarks and rankings would work against their own interest by suggesting a collective misconception of their product. No action can be expected from their side. Policy makers and research managers could make a huge difference, but large parts of their occupation are provided by audit culture, so a counter-culture needs to be in reach before anything can be expected from this group. Due to their large number, the researchers themselves might be the most promising stakeholders here. However, they are divided into those who profit from the error and those who are fighting to stay funded, to stay in the system.

Derived from the discussion of a communicative distinction between centre and periphery in Section 3.3, and from what was concluded in Chapter 2, the strategy that this thesis proposes is to burden the research system with a higher degree of complexity, in order to get rid of the colonial difference which the system reproduces through accumulated spatial peripheries. The colonial difference is a shortcut serving the reduction of complexity in the research system, and alternative means to that end are needed.¹⁰⁴ I agree with Keim (2016) that instead of a programme that is supposed to mend the system, first of all, ‘an adventure’ is needed; the ‘Global North’ measures of reducing complexity in the system, including all its criteria, need to be suspended and additive approaches avoided, in order to allow for more complexity to emerge, and to trigger the emergence of structures better suited to a global system, and to social justice.

104 They might develop simultaneously or afterwards, though not in this thesis (but see Schmidt 2016c).

‘Decolonial scientometrics’ sounds like a contradiction in terms, because measuring is one of the most important methods of colonisation (see e.g. Dirks 2001). However, as mentioned before, decolonial thinking can be seen as a crossover strategy of irritating dominant ways of reasoning by confronting them with previously marginalised ideas. What I would like to suggest in this chapter is to confront mainstream scientometrics with a culturally humble way of gathering and interpreting metrical clues which help to describe a certain section of the research system. Mainstream scientometrics actually limits its scope to whatever database it is using, while very often still claiming to quantitatively describe research output as such. By doing that, it is not sensitive to the limits of observability it sets by choosing the data source.

By contrast, decolonial scientometrics would limit its scope to the section of the system it can feasibly describe by gathering a nearly complete database, sampling with the utmost care. Decolonial scientometrics could, for instance, focus on a certain discipline, but without accepting that data describing this discipline as it is practised in certain regions of the world may not be included in the chosen data source. Simply acknowledging this limitation, as is usually done, while still claiming generalisability of the results for the discipline, cannot be tolerated by a decolonial scientometric approach. Decolonial scientometrics is a methodology that questions the use of regionally or linguistically biased databases for analyses that aim for generalised global results.
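As a minimal illustration of what such self-limitation could look like in practice (my sketch; the journal names and the threshold are invented assumptions, not part of this thesis’ method), a study could check the chosen database against a hand-curated, near-complete journal list before making any generalised claim:

# A curated list, compiled with utmost care and aiming at completeness
# for one discipline in one region (all names are placeholders).
curated_journals = {
    "Journal of Regional Studies A",
    "Regional SSH Review B",
    "Annals C",
    "Bulletin D",
}

# What the chosen citation database actually indexes (placeholder).
indexed_journals = {"Journal of Regional Studies A"}

coverage = len(curated_journals & indexed_journals) / len(curated_journals)
print(f"Coverage of the curated list: {coverage:.0%}")

# The 90% threshold is an arbitrary illustration; the decolonial point
# is that below some explicit bound, generalised claims must be refused.
if coverage < 0.9:
    print("Restrict all claims to the indexed subset; do not generalise.")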

In this chapter, I will present a bibliometric approach that is sensitive to the conditions of academic publishing in the selected field, and concentrates on capturing what the mainstream bibliometric methods based on WOS data miss out on, at least by sample. As mentioned earlier, decolonial scientometrics only evolves when metric studies are embedded in a larger hermeneutic context. Without this context, these bibliometric studies are rather meaningless.

The sampling method for Southeast African journals, including some preliminary results laid out in this chapter, was presented at the 21st International Conference on Science and Technology Indicators; see Schmidt 2016b.

Therefore, Section 4.2 introduces the research environment of Southeast Africa. I will relate the number of WOS publications in the SSH, authored by scholars affiliated with Southeast African institutions, to the numbers of researchers based in the region. Then I will venture into one of the few comparisons in this thesis, which I deem necessary to arrive at a more appropriate conception of the Southeast African SSH environment: I will select countries situated on different continents that host approximately the same number of SSH researchers as Southeast Africa, and compare government research expenditure and ‘international’ citation database indexing.
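A hedged sketch of that matching step (all figures below are invented placeholders, not data from this thesis): comparison countries are those whose SSH researcher headcount lies closest to the Southeast African total, after which expenditure and indexing can be set side by side:

import pandas as pd

# Hypothetical Southeast African SSH researcher total.
target_researchers = 5_000

# Invented candidate countries on different continents.
candidates = pd.DataFrame({
    "country": ["Country X", "Country Y", "Country Z"],
    "ssh_researchers": [5_200, 4_800, 60_000],
    "gov_research_expenditure_musd": [40, 95, 2_000],
    "journals_in_international_indexes": [12, 35, 900],
})

# Keep the countries closest in headcount to the target.
candidates["headcount_gap"] = (candidates["ssh_researchers"]
                               - target_researchers).abs()
print(candidates.nsmallest(2, "headcount_gap"))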

Section 4.3 will then give an overview of journal publishing, major book publishers, and databases from, as well as publications from, (Southeast) Africa. In order to see more clearly how visible those sources are to European SSH researchers, Section 4.4 firstly examines the literature on their preferred search tools, and secondly assesses the presence of data on Southeast African publications in those tools.

Besides methodological reflections, from Section 4.5.1 onwards this chapter gives a rough overview of Southeast African-published journals dedicated to basic SSH research, which have been ceasing to exist at least since the financial crisis of 2007-2008 (no claim of causality). This chapter adds explanations of this ‘journal dying’ to what has been said before about the situation of ‘local journals’, now focusing on the more specific case.

Feasibility prohibits a comprehensive publication count for Southeast African scholars or publishers. I therefore start the exploration with a sample of ‘local journals’ which appear to be, or rather have been, well-established in Southeast Africa. Based on a sample of papers in these journals, I determine the affiliations of their authors, limiting further investigations to Southeast African authors. Which publication venues do they employ apart from ‘local’ journals? How discoverable is their work to me, to what extent did GS register citations of it, and where do those citations lead, geographically? Finally, a different sampling strategy, based on affiliation with a single university, the University of Mauritius, is applied to find out if the results are somewhat generalisable for the field and region.
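The following runnable skeleton outlines the logic of this sampling procedure; it uses toy data, and every journal, author, venue, and country name is a hypothetical placeholder for steps the thesis performs largely by hand:

from dataclasses import dataclass, field

# Illustrative country set, not the thesis' definition of the region.
SOUTHEAST_AFRICA = {"Mauritius", "Malawi", "Mozambique", "Madagascar"}

@dataclass
class Author:
    name: str
    country: str
    other_venues: list = field(default_factory=list)
    gs_citations_from: list = field(default_factory=list)  # citing countries

@dataclass
class Paper:
    journal: str
    authors: list

# Step 1: sample papers from well-established 'local' journals (toy data).
papers = [
    Paper("Local Journal A", [
        Author("Author 1", "Mauritius",
               other_venues=["commissioned report", "'international' journal"],
               gs_citations_from=["UK", "South Africa"]),
        Author("Author 2", "Germany"),
    ]),
]

# Step 2: keep only Southeast African authors for further investigation.
sea_authors = [a for p in papers for a in p.authors
               if a.country in SOUTHEAST_AFRICA]

# Step 3: inspect their other venues and where GS-registered citations lead.
for author in sea_authors:
    print(author.name, "->", author.other_venues,
          "| cited from:", author.gs_citations_from)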

During these investigations, I do not differentiate between disciplinary research fields within the SSH, for two reasons. Firstly, the primary sample of articles picked directly from Southeast African journals, while being quite balanced in terms of disciplines, is too small to create meaningful sub-samples. This is also true for the secondary sample consisting of publication lists by Southeast African authors. A further reason is my interest in establishing more broadly what type of publication venues SSH researchers based in Southeast Africa turn to. A more detailed view would rather overload my broader argument, of which this scientometric study is just one piece. This broader argument is directed towards advancing global social justice in scholarly communication.

For the same reasons, I do not pay much attention to co-authorships. I did, however, investigate the affiliations of all co-authors from the sample of papers published in the selected Southeast African journals.