This title was taken from a piece in The Guardian, commenting on a Commentary in the journal Nature (http://www.theguardian.com/science/occams-corner/2014/jan/09/1). It is worth reading, and is reproduced below:
Does it matter that there aren't more women in science?
A bibliometric analysis in Nature purports to confirm that women scientists are discriminated against. But the full picture might be much more interesting
Despite years of effort, almost all scientific fields are still dominated by men. There are exceptions: you are more likely to find equal ratios in cellular and molecular biology, and a predominance of women in fields including education, language, nursing and midwifery, for example. Even so, at more senior levels, thanks to the so-called 'leaky pipeline', even disciplines with parity at the undergraduate and postgraduate levels end up retaining relatively few women.
It is generally assumed this is a Bad Thing. It is tacitly, if not explicitly, assumed that science, even society as a whole, is missing out on the intellectual contributions of the women who do not get onto, or continue to climb, the scientific career ladder. As if other careers are neither intellectually satisfying nor useful to society. Naturally, if women are being actively discriminated against, being disproportionately denied opportunity, then that surely is a Bad Thing (regardless of what an "ideal" or "desirable" sex ratio would look like).
None of this is new. You only have to look at graphs such as this one from Eurostat to see that in Europe there are twice as many men doing research as women.
So it was no surprise to read that a Commentary piece in Nature before Christmas (Larivière et al., Nature 2013;504:211–213, doi:10.1038/504211a), reporting on research output as measured by bibliometrics, came to the same conclusion. If there are twice as many men in science (globally, across all disciplines) as there are women, one might expect twice as many male authorships – as indeed appears to be the case.
The authors of the Commentary find that women "are similarly underrepresented when it comes to first authorships. For every article with a female first author, there are nearly two (1.93) articles first-authored by men." As we've seen, there are twice as many men doing science, so this is roughly what we might expect – although I'll return to this point. The importance of first authorship varies by field: in many fields the first author is the person (often someone quite junior) who did most of the work; in others (medicine, for example) it is the senior or 'lead' author – the one who conceived and led the project; and in some it is simply the person whose name is closest to the start of the alphabet.
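The back-of-the-envelope arithmetic here is simple enough to write down. The sketch below uses only the two figures quoted above – the 2:1 headcount ratio and the 1.93 first-authorship ratio – and is an illustration of the comparison, not anything taken from the Commentary itself:

```python
# A minimal sketch of the arithmetic in the paragraphs above.
# The two input figures are the ones quoted in the text; nothing else is assumed.

men_per_woman_in_workforce = 2.0    # Eurostat: roughly twice as many male researchers
observed_first_author_ratio = 1.93  # Larivière et al.: male/female first authorships

# If first authorships simply tracked headcount, the expected male/female
# ratio would equal the workforce ratio.
expected_first_author_ratio = men_per_woman_in_workforce

print(f"expected male/female first-author ratio: {expected_first_author_ratio:.2f}")
print(f"observed male/female first-author ratio: {observed_first_author_ratio:.2f}")
print(f"observed / expected: {observed_first_author_ratio / expected_first_author_ratio:.3f}")
```

On these two numbers alone, the observed ratio is actually slightly *below* what headcount would predict – which is the thread picked up again below.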
The authors go on to describe a citation analysis, asking whether papers with women in "key positions" (usually first and/or last author) are cited disproportionately compared with papers with men in those positions. There are some differences between the sexes, although they are relatively small and there's no indication of uncertainty, nor that any statistical analysis was performed. Error bars and p values are things that happen to other people, evidently.
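For what it's worth, putting error bars on a citation ratio is not hard. Here is a minimal sketch of one way to do it, by bootstrap resampling, using entirely hypothetical per-paper citation counts (the Commentary's raw data are not published, so the numbers below are invented for illustration):

```python
import random

# Bootstrap a 95% interval for the ratio of mean citations between two
# groups of papers. All citation counts here are hypothetical.

random.seed(42)
citations_female_first = [random.randint(0, 40) for _ in range(500)]  # hypothetical
citations_male_first = [random.randint(0, 40) for _ in range(900)]    # hypothetical

def mean(xs):
    return sum(xs) / len(xs)

def bootstrap_ratio(a, b, n_resamples=2000):
    """Resample both groups with replacement; return a 95% interval for mean(a)/mean(b)."""
    ratios = []
    for _ in range(n_resamples):
        resample_a = [random.choice(a) for _ in a]
        resample_b = [random.choice(b) for _ in b]
        ratios.append(mean(resample_a) / mean(resample_b))
    ratios.sort()
    return ratios[int(0.025 * n_resamples)], ratios[int(0.975 * n_resamples)]

low, high = bootstrap_ratio(citations_male_first, citations_female_first)
print(f"point estimate: {mean(citations_male_first) / mean(citations_female_first):.3f}")
print(f"95% bootstrap interval: [{low:.3f}, {high:.3f}]")
```

If the interval comfortably straddles 1, a "difference between the sexes" read off a bar chart may be nothing of the kind.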
So far, as I say, no surprises.
But let's take a closer look at this analysis.
If it is true that there are twice as many men in science, overall, as women, and that they – as we expect – are first authors twice as often, then what can we infer as a result of the leaky pipeline? If women disproportionately leave science just as they are reaching the position of writing first- or senior-author papers, then we should be surprised that the proportion of male first-author papers is as low as it is. We might also suppose that women in many fields are disproportionately represented in technical roles; still in the puke-green part of the Eurostat graph, but never likely to be in the first (or last) author position (unless their name is Aagard, perhaps), in which case the pool of women eligible to be first authors is even smaller.
Which implies that, within science globally, women are more productive than men.
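To see why, it helps to make the pool argument explicit with numbers. In the toy calculation below, the headcount and authorship ratios come from the text; the eligibility fractions are made up purely to show the mechanics:

```python
# A toy illustration of the pool argument. Only the 2:1 headcount ratio and
# the 1.93 authorship ratio come from the text; everything else is invented.

male_researchers = 2_000_000
female_researchers = 1_000_000   # the 2:1 Eurostat-style headcount ratio

male_first_authorships = 1.93    # per one female first authorship (Larivière et al.)
female_first_authorships = 1.0

# Hypothetical: suppose the leaky pipeline and technical roles mean only 70%
# of the women, but 90% of the men, are ever in a position to be first author.
eligible_men = 0.90 * male_researchers
eligible_women = 0.70 * female_researchers

per_capita_men = male_first_authorships / eligible_men
per_capita_women = female_first_authorships / eligible_women

print(f"first authorships per eligible man:   {per_capita_men:.3e}")
print(f"first authorships per eligible woman: {per_capita_women:.3e}")
print(f"women/men per-capita ratio: {per_capita_women / per_capita_men:.2f}")
```

Under these made-up eligibility fractions the per-capita ratio comes out well above one – that is, more first-author papers per eligible woman than per eligible man – which is the direction of the inference above. The real answer, of course, depends on eligibility numbers nobody has published.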
And what about citations? (Ignore for a moment that citations are a pretty lousy way of measuring research output or impact.)
We know that women tend to be overrepresented in certain fields, and missing from others (physical and computer sciences, for example). What if those other fields have a stronger citing tradition? That is, what could we infer from these data if the fields where women are overrepresented are those very fields where papers just do not get cited all that frequently anyway?
For example, nursing journals have very low impact factors – papers published in them don't get cited very often. (They certainly get read; they just don't get cited. And of course, what nurses do, and discover, has a much bigger effect on far more people than isolating the Sec61 translocation complex, to pick a random example.) A 2011 paper in Nursing Outlook finds that of 70-odd nursing journals the median impact factor is less than one (Table 1, Polit & Northam, Nurs Outlook 2011;59:18–28, doi:10.1016/j.outlook.2010.11.001), that only 27 have an impact factor greater than 1, and that none have an impact factor of 2 or higher. Compare this with just about any other field, where most scientists only submit papers to journals with impact factors below 3 in extreme desperation (or to make a point).
I don't have the numbers, but it would not surprise me at all to find that adjusting for such anomalies across the board would even out Larivière et al.'s global citation analysis, and maybe even reverse the ratios. That is, in fields where women are underrepresented we could well find that papers with women in the first or last author position are cited more frequently.
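In the absence of the raw data, here is a hedged sketch of what such an adjustment could look like: express each paper's citations relative to its own field's average before comparing the sexes. The fields, paper counts and citation numbers below are invented; only the mechanics matter:

```python
# A sketch of field-normalized citation comparison. All data are invented;
# the point is the normalization step, not the numbers.

papers = [
    # (field, first_author_sex, raw_citations)
    ("nursing", "F", 2), ("nursing", "F", 1), ("nursing", "M", 1),
    ("physics", "M", 30), ("physics", "M", 25), ("physics", "F", 40),
]

# Mean citations per field (the field's "citing tradition").
field_counts = {}
for field, _, cites in papers:
    field_counts.setdefault(field, []).append(cites)
field_mean = {f: sum(c) / len(c) for f, c in field_counts.items()}

# Express each paper's citations relative to its field mean, then compare.
normalized = {"F": [], "M": []}
for field, sex, cites in papers:
    normalized[sex].append(cites / field_mean[field])

for sex in ("F", "M"):
    avg = sum(normalized[sex]) / len(normalized[sex])
    print(f"{sex}: mean field-normalized citations = {avg:.2f}")
```

In this toy example the raw counts favour men (physics papers simply accumulate more citations) while the field-normalized means favour women – exactly the kind of reversal the paragraph above speculates about. Whether anything like it happens in the real data is unknowable without the numbers.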
Unfortunately the authors did not think to provide a per-discipline breakdown, but we might conclude, not unreasonably, that despite being underrepresented overall, women scientists (a) are more productive, and (b) have a greater scientific and/or societal impact than men.
The implications are left as an exercise for the reader.
Note added after publication: it's been brought to my attention that the authors of the Commentary did normalize disciplinary citations within each country. Unfortunately, without raw data or summary statistics it's difficult to know the significance of the graphs as shown.
Richard P Grant used to be a research scientist but himself 'leaked' a few years ago, and has never looked back.