Mandyam Srinivasan (Fig. 1) is well known for his pioneering work that unravelled the visual mechanisms used by flying insects (particularly honeybees) to measure the distance to nearby objects. These mechanisms are essential for avoiding obstacles, flying safely through narrow passages, regulating flight speed, and executing smooth landings. They are also essential for measuring the distance travelled during a journey, a requirement for navigating between the nest and a distant foraging site. Central to this work was his discovery that either the amount, or the rate, of optic flow (image motion) detected by the visual system is key to these measurements of distance (reviewed in Srinivasan 2011). The elegant behavioural experiments that led to these fundamental new principles of visual flight control resulted in a large body of work published in the most prestigious journals in the field. Not surprisingly, Srinivasan was honoured for these contributions, receiving a number of notable awards (including the Order of Australia) and being elected a Fellow of the Royal Society and a Fellow of the Australian and German Academies of Science.
Fig. 1 MV Srinivasan conducting experiments on bees in 2006. Credit: Nine/Andrew Taylor
The controversy

In the middle of 2024, a manuscript entitled "The miscalibration of the honeybee odometer" was published on a preprint server (Luebbert and Pachter 2024). The "odometer" in the title refers to the mechanism honeybees use to measure the distance from the hive to a food source, which Srinivasan and colleagues discovered was based on the amount of visual optic flow honeybees encountered on their journey (and which was published in Science – Srinivasan et al. 2000). Exactly how bees measure this distance had remained unresolved ever since Karl von Frisch, over 70 years earlier, first discovered that a forager bee communicates food-source distance to other bees via the duration of the waggle phase of its waggle dance inside the hive (von Frisch 1946). Over the decades that followed, von Frisch and colleagues hypothesised that this distance estimation might be based on energy expenditure (Heran 1956; von Frisch 1965); however, the experimental evidence in support of this hypothesis remained weak. Thus Srinivasan's paper was a significant advance in the field, for the first time quantitatively determining how much image motion (i.e. optic flow) is coded by each millisecond of the waggle phase (which turns out to be 17.7° of image motion per millisecond, a measure that is independent of the richness of the optic flow experienced by the bee, which strongly depends on the bee's habitat).
The visual principles that Srinivasan discovered in honeybees have been confirmed many times in the laboratories of other researchers. For example, using another experimental approach, Harald Esch and John Burns (1995) had already deduced that bees must be using visual odometry to estimate flight distance, thus supporting Srinivasan's more extensive findings five years later (Srinivasan et al. 2000). In a collaboration with Srinivasan that was published the following year (Esch et al. 2001), Esch further reinforced the 2000 results (Srinivasan et al. 2000). Srinivasan's findings have also been largely confirmed in later work on honeybees (e.g. Chittka and Tautz 2003; De Marco and Menzel 2005; Menzel and Galicia 2024). And his earlier studies (Srinivasan et al. 1996, 1997), which show that honeybees overestimate the distance they fly in narrow tunnels and underestimate it in wider tunnels (results that paved the way for the work in Srinivasan et al. 2000), have also been replicated, both in honeybees (Shafir and Barron 2010) and in stingless bees (Hrncir et al. 2003). Srinivasan's work showing that optic flow is used for controlling locomotion and avoiding obstacles has likewise been confirmed in other species of flying insects (e.g. Fry et al. 2009; Linander et al. 2017; Lecoeur et al. 2019), as well as in birds (e.g. Bhagavatula et al. 2011) and even humans (e.g. Redlick et al. 2001; Duchon and Warren 2002). In addition, these principles have been applied, with considerable success, in guidance circuitry for controlling robot navigation and the flight of autonomous flying vehicles (e.g. Santos-Victor et al. 1993; Duchon and Warren 2002; Barrows et al. 2003; Chahl et al. 2004; Ruffier and Franceschini 2004).
The Luebbert and Pachter preprint manuscript alleges that data duplication and manipulation, and/or calculation errors, occur not only in Srinivasan et al. (2000) but also in a number of Srinivasan's other published papers between 1996 and 2010 (footnote 1). The two authors of the preprint manuscript, Laura Luebbert and Lior Pachter, are bioinformaticians from the California Institute of Technology (Caltech) in the United States, and neither has research experience in Srinivasan's fields of insect navigation and honeybee sensory biology.
In their preprint, Luebbert and Pachter (2024) identified a number of irregularities in Srinivasan's published papers, and some of these are certainly bona fide. For instance, they point out that in two papers (Srinivasan et al. 1996, 1997) the same data appears in more than one figure to describe the results of different experiments (with different n-values and experimental conditions). They also note that the tunnel widths and lengths used in some of the odometry experiments were inconsistent within and between papers. These errors are lamentable and annoying, and reveal an embarrassing level of sloppiness. When alerted to these problems, Srinivasan willingly provided a clarification of these errors in "expressions of concern" that are now permanently attached to these papers. Srinivasan also acknowledges these errors and takes sole responsibility for them. But far from being deliberate malevolent manipulations designed to make the results somehow seem more spectacular than they are, these errors do quite the opposite. They are embarrassing mistakes of the type all of us are capable of if we don't remain alert. These errors, however, have not altered the conclusions of Srinivasan's studies – the evidence they present for visual odometry relies on a number of experiments that generated clearly different search distributions, as predicted. Moreover, as mentioned above, Hrncir et al. (2003) reproduced the specific effects of tunnel width on the search distributions reported by Srinivasan, and Shafir and Barron (2010) confirmed that honeybees over- or underestimate distance flown depending on tunnel width.
More seriously, however, Luebbert and Pachter (2024) accuse Srinivasan of serious calculation errors in his 2000 paper (Srinivasan et al. 2000), which they claim bring into question not only the results of this paper, but also the results of all of Srinivasan's work over a 14-year period. Firstly, Luebbert and Pachter argue that the analyses made in the 2000 paper to determine an absolute calibration of the honeybee's odometer (17.7°/ms, see above) are flawed; their own re-analysis of Srinivasan's data yields a substantially lower value (13.9°/ms). Secondly, Luebbert and Pachter argue that the R² values obtained from regressions in this paper (and in several other papers by Srinivasan and colleagues) are impossibly high (R² ≥ 0.98) – they assert that their own analysis of the 2000 data indicates a lower R² value of 0.918.
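To make concrete what is being disputed, the following minimal sketch (in Python, using entirely hypothetical numbers, and not a reconstruction of either party's actual analysis) shows how a calibration slope in degrees of image motion per millisecond of waggle, and the accompanying R² value, are obtained from an ordinary least-squares regression:

```python
import numpy as np

# Hypothetical (waggle duration, total image motion) pairs. These values
# are illustrative only and are NOT data from Srinivasan et al. (2000).
duration_ms = np.array([100.0, 200.0, 350.0, 500.0, 750.0])             # waggle phase, ms
image_motion_deg = np.array([1800.0, 3500.0, 6300.0, 8900.0, 13200.0])  # optic flow, deg

# Ordinary least-squares fit: image_motion = slope * duration + intercept.
slope, intercept = np.polyfit(duration_ms, image_motion_deg, 1)

# Coefficient of determination (R^2) of the fitted line.
predicted = slope * duration_ms + intercept
ss_res = np.sum((image_motion_deg - predicted) ** 2)
ss_tot = np.sum((image_motion_deg - image_motion_deg.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"calibration slope: {slope:.1f} deg of image motion per ms of waggle")
print(f"R^2: {r_squared:.3f}")
```

Even with identical data, seemingly innocuous choices – forcing the fit through the origin, exchanging the dependent and independent variables, or regressing on group means rather than individual points – can produce materially different slopes and R² values, and disagreements of this kind often hinge on exactly such choices.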
Srinivasan and his colleagues have made detailed responses to Luebbert and Pachter's claims (Srinivasan et al. 2024; Stuart 2024; Stuart et al. 2024), and Luebbert and Pachter have replied to a number of these (footnote 2), although for the most part they simply repeat their original allegations. However, their most serious claims – those that Luebbert and Pachter were convinced indicated scientific misconduct – were shown to be flawed, and to a large extent the result of their own calculation errors and their ignorance of Srinivasan's field. For instance, Luebbert and Pachter's lower calibration value for the honeybee odometer clearly arose from a conceptual error in calculating the slope of the linear relationship between the duration of the waggle phase of the waggle dance and the distance flown by the bee when returning to its nest. Moreover, Geoffrey Stuart, an expert in biostatistics who was not involved in Srinivasan's original studies, has pointed out that there are errors in the assumptions and statistical computations made by Luebbert and Pachter in their allegations of impossibly high R² values (Stuart 2024; Stuart et al. 2024).
The paper in which the absolute calibration of the visual odometer was first reported – the honeybee odometer paper (Srinivasan et al. 2000) at the centre of the controversy – has now been re-evaluated by the Executive Editor of Science (Valda Vinson), who wrote "Science editorial has evaluated the concerns and stands by the published paper, at this time." (footnote 3). Had this entire episode unfolded as it should have – via a calm and reasoned correspondence between peers – this lamentable injustice would never have occurred.
The unfolding of a scientific injustice

Following publication of their preprint, Luebbert and Pachter also published their allegations on their various social media channels (including X, where collectively they have over 50,000 followers, presumably mostly scientists). Literally minutes later, Srinivasan's Wikipedia page (footnote 4) was altered with a damning new section entitled "Accusations of scientific misconduct". While the author of this Wikipedia insertion is unknown, its timing is interesting. These allegations were further amplified via in-depth interviews with Luebbert and Pachter conducted by Science (Quaglia 2024) and, in Australia, by the widely read The Saturday Paper (Ryan 2024). In these interviews (and on their social media channels), Luebbert and Pachter accuse Srinivasan of serious scientific misconduct, even implying that his research grants (and by extension his scientific honours) were obtained dishonestly and "probably sucked away resources from others" (Quaglia 2024). On X, Pachter describes Srinivasan's work as "junk", and some of Luebbert and Pachter's followers go further, referring to Srinivasan's data as "fake" and a "fraud" (footnote 5). Such language, to put it mildly, is unfair and unjustified. I suspect, however, that only a small fraction of those who voiced these opinions had actually read either Luebbert and Pachter's original preprint manuscript or the detailed replies of Srinivasan and his colleagues that followed (Srinivasan et al. 2024; Stuart 2024; Stuart et al. 2024).
Some of the most important standards of scientific enquiry are honesty and truthfulness in reporting data, reproducibility of experimental outcomes and caution in interpretation. Individual scientists (or scientific studies) have sometimes not reached these standards, and the reasons for this range from careless errors in calculations or in the preparation of figures (errors that all of us regrettably can and do make, as was likely the case with Srinivasan) to deliberate and premeditated attempts to fabricate and falsify data – in other words, to commit criminal fraud (typically to increase personal prestige or to obtain financial reward). It is thus extremely important for scientists to remain sceptical – a chief tenet of our education – and not to blindly accept what is stated in a published scientific study. Thus, when Luebbert and Pachter discovered what they considered to be irregularities in two of Srinivasan's papers, they were correct to be concerned. However, irrespective of the seriousness or otherwise of these irregularities, how they dealt with these concerns is regrettable.
Had I found such irregularities, I would have first contacted the author of the paper to discuss the issues – not because I have to, but as a matter of decency and courtesy (see footnote 5). Depending on the response, I might then have privately contacted experts in the area to ask for their informed opinion. I am humble enough to understand that I don't know everything – far from it. I might have misunderstood something, or lacked knowledge of a procedure or analysis method, leading me to misinterpret the results. Luebbert and Pachter did not do this. Instead, seemingly convinced that these errors were part of a wider problem with Srinivasan's work, they produced manipulated overlays of graphs taken from his papers to imply premeditated data manipulation and duplication. They uploaded their preprint manuscript to arXiv, advertised their allegations to their vast numbers of followers on social media (to great applause) and allowed themselves to be interviewed in the press.
As a result, Srinivasan – with no knowledge of the accusations, of the preprint, or of the social media storm that was erupting – had no time to defend himself against the claims or to explain which (if any) of them held water (for instance, by being given the chance to write a point-by-point response to Luebbert and Pachter's claims before being hung out to dry on social media and in the press). Thus, as is the nature of social media in our time, the accusations of misconduct against Srinivasan went viral. In the space of a few short weeks, Srinivasan, one of Australia's most decorated scientists, was suddenly a fraudster. Fait accompli.
The injustice of this is breathtaking. A manuscript deposited by its authors on a preprint server has not been peer reviewed by anyone. The claims of any such manuscript – including that of Luebbert and Pachter – are therefore highly preliminary until peer review has ensured they are sound enough to be published. Given the nature of Luebbert and Pachter's manuscript, peer review of the accusations by experts in the field would have been especially important, particularly as the authors have no history of work in this field. For a fair and balanced scientific discourse, the publisher of such a manuscript would moreover be obliged to invite the accused to make a written response to the claims of their accuser(s). None of this happened. Yet Luebbert and Pachter's claims were broadcast as if they were proven, solid facts. They were not.
A sad outcome and a lesson for the future

Srinivasan and his colleagues have clearly made mistakes and can be accused of sloppiness, and we should be grateful to Luebbert and Pachter for pointing these out. But based on the evidence, most of which came far too late, these mistakes fall vastly short of the full-blown premeditated misconduct of which Srinivasan and his colleagues were initially accused and for which, because of the media storm, they still stand convicted and condemned. Sadly, though, I doubt this evidence will reverse the unjust damage that Pachter and Luebbert have inflicted. In a world where a viral and sensational story gains far more traction than a balanced discussion between colleagues (which in this case never happened), the chance for Srinivasan to clear his name in the public domain will be limited. And that is a tragedy.
I think there are three very important take-home messages to draw from all this. The first is that it is extremely important to check, double-check and triple-check every graph and every data point for accidental errors prior to publication. It is incredibly damaging when mistakes slip into the literature, as they can potentially hamper or misdirect scientific progress. For a group leader supervising younger and less experienced staff, this is particularly important. Group leaders often don't acquire and analyse the primary data themselves – especially those at more senior levels who are distracted by other duties – but instead implicitly trust the abilities (and integrity) of younger researchers (as I have always done). Thus, the importance of carefully reviewing the methods, analyses and data-driven figures of each manuscript prior to submission can't be overstated. There is also, of course, an onus on reviewers to be vigilant when reviewing manuscripts, to spot errors before they become permanent fixtures in the literature.
The second take-home message is that we should encourage more journals to provide a forum for respectful disagreement between peers. Some journals already offer this possibility (e.g. Nature), but far from all do. In their defence, Luebbert and Pachter did try, but failed, to air their concerns via established journals. Such a failure, however, should in no way be interpreted as a green light to use social media instead.
The third take-home message is possibly the most important: never resort to a viral internet campaign to expose or bring down a fellow scientist, particularly before you have engaged in a careful, considered and respectful exchange with the person(s) in question and have gathered all the facts. It might be tempting – being applauded by tens of thousands of followers for apparently outing a scientific fraudster must have been gratifying for Luebbert and Pachter – but that doesn't make it right. Certainly, in Srinivasan's case it led to a major injustice. While these allegations have done serious damage in the short term, the self-correcting nature of science will hopefully set the record straight in due course.
In a world where the tone of public discourse among certain political leaders and media personalities continues to decline, it is my hope that scientists can maintain decency and respect in their interactions with their peers, even when they disagree with them. There is nothing wrong with disagreement – it comes from maintaining a healthy scepticism, an important part of the scientific process. The fact that we may disagree is not the issue here. It is how we disagree that matters. I hope we can at least learn that particular lesson from this sorry saga.