Julian Hamann | Event Report

What you see is what it's worth?

Report on the International Workshop “The Role of Visibility in Academic Evaluation. (E)Valuation Studies in Science and Higher Education” at Humboldt University Berlin on 15–16 November 2018

Evaluation has always been a crucial element of scientific knowledge production. Most notably, the institution of peer review has a long-standing tradition. However, with public funding of university research decreasing in many domains and the distribution of resources becoming more competitive in general, the assessment of valuable academics, academic work, and academic institutions has become ever more important in recent decades. Evaluation in and of academia has become a topic that is drawing more and more attention.[1] This increased attention is due to both a new prevalence of evaluation and a heightened sociological interest in the phenomenon. Academia appears to be pervaded by all kinds of evaluation: digital platforms like ResearchGate, policy instruments like the German Excellence Strategy or the British Research Excellence Framework (REF),[2] classrooms in which professors grade students only to be subsequently rated by them on ratemyprofessors.com, and journals that publish not only peer-reviewed articles but also book reviews and obituaries. While the roots, core processes, and effects of evaluation have been studied from multiple angles (including issues of commodification and quantification,[3] classification and categorization,[4] as well as disciplining and subjectivation[5]), the role of visibility in academic evaluation has received considerably less attention.

The organizers of the workshop invited a number of international scholars to discuss visibility as a core aspect of evaluation. The workshop was jointly organized by the Department of Science Studies and the research group “Reflexive Metrics – Retroaction and practices of quantified orders of worth in science” at Humboldt University Berlin and by the research cluster “Evaluation Practices in Science and Higher Education” at the German Centre for Higher Education Research and Science Studies (DZHW). It seemed appropriate that the workshop venue was highly visible and symbolic: the sacred halls of the Humboldt University’s Senatssaal, where participants could feel observed (or even judged?) by the Humboldt brothers, whose life-sized portraits hung on the wall.

What is to be gained from discussing the nexus of visibility and evaluation? ANNE K. KRÜGER (Berlin) emphasized in her introductory remarks that visibility plays a double role in evaluation processes. First, visibility is a criterion for evaluation processes themselves. Questions of transparency, confidentiality, or anonymity are crucial for the conception of some evaluation processes, and also for these processes to be perceived as legitimate. Krüger also highlighted a second important role of visibility, namely, visibility as an outcome of evaluation. Evaluation produces visibility by directing attention towards the objects, ideas, organizations, persons, or practices that are evaluated. One might add that visibility is not only an outcome of, but often enough also an influence on, evaluation: entities are only evaluated in the first place if they are visible to the evaluators. The most visible academics are often deemed more prestigious and, in fact, tend to be better paid.[6]

The introductory remarks established a rather general and broad foundation for a discussion of conceptual perspectives on visibility and its empirical role in evaluation processes. They framed the workshop as an opportunity to exchange ideas and facilitate an explorative conversation between experts who study visibility and/or valuation and evaluation processes within, but also outside, academia. Accordingly, the subsequent contributions covered a wide range of theoretical and conceptual notions as well as empirical cases of visibility.

DAVID PONTILLE and DIDIER TORNY (Paris) emphasized the centrality of names in scholarly communication as they discussed the politics of (un-)naming. They argued that names on publications signal not only authorship but also degrees of contribution. Vis-à-vis the visible names on publications, which downright expose authors and contributors, stands the invisibility and anonymity of reviewers. Pontille and Torny sparked an interesting discussion: if naming in scholarly communication serves as an orientation for the attribution of credit, but also of accountability, how can invisible reviewers gain the credit they deserve, but also be held accountable? And for what should reviewers actually be held accountable in the first place? While research on authorship has produced a rich body of literature in science studies,[7] Pontille and Torny showed that it is worthwhile to reconsider this topic in the sociology of valuation and evaluation.

The second contribution, by ANDREA BRIGHENTI (Trento), took up the relation between the visible and the invisible that had been introduced by Pontille and Torny. Brighenti can be considered an expert on questions of visibility as a general category for the social sciences.[8] In his talk, he undertook several exploratory moves to reflect on a number of conceptual pairings, among them the transitions between new and old measures, the ways in which measure and value are entangled, and processes of (in-)visibilization. While Brighenti’s conceptual reflections remained rather general in his talk, the subsequent discussion connected them more directly to the case of academia. In particular, the workshop participants debated whether we are currently witnessing transitions between different sorts of measures in academia, taking place at different speeds, with established measures dissolving while new ones emerge.

The following two talks attempted to gain new insights by introducing rather unusual perspectives on visibility. STEFANIE BÜCHNER (Hannover) proposed the perspective of organizational sociology. Her contribution proceeded from the question of whether organizations make a difference to the study of visibility. Drawing on Niklas Luhmann’s systems theory, Büchner showed how organizations influence visibility in different ways: organizations strengthen constellations of visibility, weaken or break them, or redefine and recode visibility. In all three cases, Büchner argued, organizations calibrate visibility and transform it into their own relevancies – a process which can be considered evaluative. Although the study of visibility is hardly a new undertaking in organizational sociology and organization studies more generally,[9] a Luhmannian organizational sociology could indeed offer new sensitizing concepts for studying how visibility is processed and relevance is assessed within organizations.

MARTIN REINHART and CORNELIA SCHENDZIELORZ (Berlin) suggested a more radical change of perspective. Comparing modes of governance in democracy and science, they discussed how the balance between transparency and opacity in both fields is related to the legitimate exertion of power. From this original angle, the speakers were able to describe peer review as a dispositif of transparency that comprises manuscripts and applications, program officers and reviewers, as well as many activities like selecting, deliberating, and deciding. As a dispositif, Reinhart and Schendzielorz argued, peer review legitimizes academic self-governance by providing both publicity and legibility. Although the speakers stated in the discussion that the concept of the dispositif had hitherto merely served them as a conceptual crutch, it seems worthwhile to pursue this promising analytical take on peer review further.

The last talk of the day took a more general and indeed critical perspective: UWE VORMBUSCH (Hagen) discussed how current capitalism yields lifeforms that are based not only on living with numbers but on actually identifying oneself with them. One can agree with Vormbusch that this sounds all too familiar to academics, who are not only constantly evaluated and evaluating, but who run the risk of becoming invisible altogether if they are not evaluated. Asking whether this makes academics forerunners of capitalist regimes of visibility, Vormbusch distinguished three such regimes: (1) superimposed total visibility, (2) scopic visibility, which is found within markets where everything is analyzed and calculated, and (3) explorative visibility, which concerns new practices of self-quantification that put value on human competences. The discussion revealed that a clear distinction between the three ideal-typical regimes is not easy to make. Nonetheless, one important contribution of this talk was to link evaluation and visibility in academia to broader issues in and of capitalist societies.

This angle proved to be a recurrent motif of the workshop’s second day. Employing a perspective that could be labelled discursive capitalism, JOHANNES ANGERMULLER (Warwick) described the discursive dynamics that explain how scholars become visible and can claim an existence in academia.[10] In these dynamics, bundles of categories (“full professor”, “applied linguist”) are collected and ascribed to individual scholars, in practices that involve many people over a long period of time. Ultimately, these discursive dynamics amount to what Angermuller has termed “hyperinequalities” in visibility. The fact that only very few researchers are highly visible in scholarly discourse, while the vast majority remains virtually invisible, can be understood as an oligopolization of the symbolic resource of citations. Angermuller showed that this highly unequal distribution holds for Germany, France, and the UK alike, which is remarkable given the very different higher education systems of these three countries.

TILMAN REITZ (Jena) took up Angermuller’s empirical insights with a more conceptual attempt to relate the spheres of academic capitalism to visibility. Reitz offered a well-structured distinction between different meanings of visibility in academia, concentrating on those forms that have a competitive element: (1) fame and reputation (for example, in media rankings), (2) surveillance and control (for example, the already mentioned REF), and (3) advertising and impression management (for example, on university websites). These variants of competitive visibility not only have a scientific core (which is where Angermuller’s citations would come in), but also include a monetary sphere due to their relation to funding, a public sphere connected to media rankings and student recruitment, as well as a non-scientific layer in the form of public relations and academic management. Reitz concluded that the functions of these varieties of competitive visibility are not only the signaling of quality, but also the allocation of resources via a justification of public spending, and not least the control of academic work and the self-control of the academic profession (which referred back to Vormbusch’s argument).

With JÖRG POTTHAST (Siegen), the focus turned from conceptual distinctions back to academic everyday life. He asked how conventions and practices of everyday testing for scientific quality relate to peer review, and in what way visibility is specific as an element of such testing. Potthast pursued these questions by discussing cases in which the testing flips: the tester becomes the tested, the reviewer the assessed. Suggesting that reviewees cover the uncertainties of the peer review process by refraining from displaying joy and pride (about positive reviews) as well as anger and frustration (about negative reviews), Potthast argued that face-work in Goffman’s sense is a functional element of peer review. The discussion revealed that face-work is perceived in different ways in academia: some scholars take face-work seriously and market themselves, others perceive it as a necessary professional practice that has nothing to do with their actual work, and yet others are downright repelled by it.

The workshop’s last two talks took up a topic that had popped up several times throughout the two days without being addressed systematically: the role metrics play for evaluation and visibility. SARAH DE RIJCKE (Leiden) focused on altmetrics, which represent algorithmic assessments of scholarly impact on social media, online news media, or online reference managers. She argued that platforms like ResearchGate, and altmetrics in general, perform a gamification of academia by applying gaming features like points and even specific aesthetics. De Rijcke was keen to point out that gamification should not only be seen through the lens of neoliberalism, through which it contributes to surveillance and self-improvement. Complementing this lens is another perspective that regards altmetrics-driven gamification as a way of making meaning of academic everyday life and, indeed, as a matter of play.[11] This attempt to complement the gloomy narrative of the neoliberal marketplace with counter-narratives of playfulness sparked a vivid discussion that related back to several other talks. For example, the making of academic personae on ResearchGate and other platforms recalled Vormbusch’s three visibility regimes (total, scopic, explorative), and it highlighted a gift economy embedded in Reitz’s academic capitalism. Most workshop participants, including the speaker herself, agreed that the gamification of academia results in a peculiar type of game, and that actual games are way more fun.

Complementing the previous talk on altmetrics, STEPHAN GAUCH (Berlin) reflected on the ways in which research is made visible. Drawing on Marshall McLuhan, Gauch contrasted bibliometrics and altmetrics. He showed how the former operate within a small media ecosystem (usually the article) and are oriented either towards productivity, which means counting publications, or towards quality, which means counting citations. Gauch argued that altmetrics represent a shift away from this traditional logic of bibliometrics: they do not operate within a small media ecosystem of articles and journals, but in an open universe, and they do not produce metrics on productivity and quality. While altmetrics are also based on counting, they rather produce information on visibility and attention. Although the discussion highlighted that the dichotomy between bibliometrics and altmetrics may be exaggerated, the “incumbent altmetrics”, as the speaker put it, could provide an opportunity for the sociology of valuation and evaluation to study the struggles between different systems of meaning and relevance.

The closing discussion revealed what readers of this report might have suspected already: visibility allows for very different conceptual takes and empirical angles. In some cases, the concept of visibility just “vanished into thin air”, as one discussant put it. Another discussant claimed that, if visibility was to be more than a buzzword, it would have to be defined more rigorously. One could not help but wonder whether the concept of visibility would have had more analytical leverage had it been defined more rigorously ahead of the workshop. Yet maybe rigor was not the point after all. The organizers framed their workshop as an “experiment”, and indeed, this conceptual openness allowed substantive, and perhaps unplanned, common themes to emerge organically across the contributions. The closing discussion got to the heart of some of these commonalities: many talks throughout the workshop drew on analytical concepts that are closely connected to issues of power. Among these concepts were, for example, regimes, dispositifs, competition, hyperinequalities, surveillance, and subjectivation. Several discussants emphasized that questions of power, domination, and critique are central to the study of visibility and evaluation. What are the power dynamics behind (in-)visibilities? Is visibility a new mode of domination? How are regimes of visibility linked to social inequalities? One may hope that these overarching questions do not mark the end of the conversation on visibility and evaluation but, rather, the starting point for the next workshop. If that were the case, the future could hold not only more thorough theoretical foundations for the sociology of valuation and evaluation,[12] but also a genuine political impetus.

Full workshop program (PDF)

  1. See the overviews in: Sarah de Rijcke et al., Evaluation practices and effects of indicator use – a literature review, in: Research Evaluation 25 (2016), 2, S. 161–169; Julian Hamann / Stefan Beljean, Academic evaluation in higher education, in: Pedro Teixeira / Jung Cheol Shin (eds.), Encyclopedia of International Higher Education Systems and Institutions, Dordrecht 2017.
  2. The Research Excellence Framework is the United Kingdom's system for assessing the research performance of higher education institutions.
  3. Theodore M. Porter, Trust in Numbers: The Pursuit of Objectivity in Science and Public Life, Princeton 1995; Wendy N. Espeland / Mitchell L. Stevens, Commensuration as a Social Process, in: Annual Review of Sociology 24 (1998), S. 313–343.
  4. Wolff-Michael Roth, Making Classifications (at) Work: Ordering Practices in Science, in: Social Studies of Science 35 (2005), 4, S. 581–621; Linda Wedlin, The role of rankings in codifying a business school template: classifications, diffusion and mediated isomorphism in organizational fields, in: European Management Review 4 (2007), 1, S. 24–39.
  5. Michael Sauder / Wendy N. Espeland, The Discipline of Rankings: Tight Coupling and Organizational Change, in: American Sociological Review 74 (2009), 1, S. 63–82; Johannes Angermuller, Academic careers and the valuation of academics. A discursive perspective on status categories and academic salaries in France as compared to the U.S., Germany and Great Britain, in: Higher Education 73 (2017), 6, S. 963–980.
  6. Erin Leahey, Not by Productivity Alone: How Visibility and Specialization Contribute to Academic Earnings, in: American Sociological Review 72 (2007), 4, S. 533–561.
  7. Mario Biagioli / Peter Galison (eds.), Scientific Authorship. Credit and Intellectual Property in Science, London / New York 2003; David Pontille, La signature scientifique, Paris 2004; Vincent Larivière et al., Contributorship and division of labor in knowledge production, in: Social Studies of Science 46 (2016), 3, S. 417–435; Carla Mara Hilário et al., Authorship in science: A critical analysis from a Foucauldian perspective, in: Research Evaluation 27 (2018), 2, S. 63–72.
  8. Andrea Brighenti, Visibility. A Category for the Social Sciences, in: Current Sociology 55 (2007), 3, S. 323–342; Andrea Brighenti, Visibility in social theory and research, Basingstoke / New York 2010; Andrea Brighenti, The Social Life of Measures. Conceptualizing Measure-Value Environments, in: Theory, Culture & Society 35 (2017), 1, S. 23–44.
  9. For example Mikkel Flyverbom / Juliane Reinecke, The Spectacle and Organization Studies, in: Organization Studies 38 (2017), 11, S. 1625–1643; Leopold Ringel, Unpacking the Transparency-Secrecy Nexus: Frontstage and backstage behaviour in a political party, in: Organization Studies (2018).
  10. A discursive capitalism perspective has been developed elsewhere: Johannes Angermuller, Accumulating discursive capital, valuating subject positions. From Marx to Foucault, in: Critical Discourse Studies 15 (2018), 4, S. 414–425.
  11. An earlier version of this argument can be found in Björn Hammarfelt / Sarah de Rijcke / Alex D. Rushforth, Quantified academic selves: The gamification of science through social networking services, in: Information Research 21 (2016), 2.
  12. Frank Meier / Thorsten Peetz / Désirée Waibel, Bewertungskonstellationen. Theoretische Überlegungen zur Soziologie der Bewertung, in: Berliner Journal für Soziologie 26 (2016), 3/4, S. 307–328; Anne K. Krüger / Martin Reinhart, Theories of Valuation – Building Blocks for Conceptualizing Valuation between Practice and Structure, in: Historical Social Research 42 (2017), 1, S. 263–285.

This article was edited by Stephanie Kappacher.

Categories: Science, University, Power

Julian Hamann

Julian Hamann is a postdoc at the Leibniz Center for Science and Society at Leibniz Universität Hannover. He works in the sociology of science and higher education research as well as the sociology of knowledge and culture. His current work focuses on evaluation and social boundaries, subjectivity and performativity, academic knowledge and careers, and power and social inequalities.

PDF

The PDF of this article is available in the Social Science Open Access Repository (SSOAR) of GESIS – Leibniz Institute for the Social Sciences.
