The Roman philosopher Lucius Annaeus Seneca (4 BCE-65 CE) was perhaps the first to note the universal trend that growth is slow but ruin is rapid. I call this tendency the "Seneca Effect."

Monday, May 1, 2023

When Science Fails: Surrogate endpoints and wrong conclusions

 


Galileo Galilei and Anthony Fauci are linked to each other by a chain of events that started at the beginning of modern science, during the 17th century. But the Science that Fauci claimed to represent is very different from that of Galileo. While Galileo studied simple linear systems, modern science attempts to study complex, multi-parameter systems, where the rigid Galilean method just cannot work. The problem is that, while it is obvious that we can measure only what we can measure, that's not necessarily what we want, or need, to measure. Tests based on "surrogate endpoints" may well be the best we can do in medicine and other fields, but we should understand that the results are not, and cannot be, a source of absolute scientific truth.


The Scientific Method

Galileo Galilei is correctly remembered as the father of modern science because he invented what we call today the "scientific method," sometimes still called the "Galilean method." It is supposed to be the basis of modern science, the feature that entitles it to be called "Science" with a capital "S," as we were told over and over during the Covid pandemic. But what really is this scientific method that's supposed to lead us to the truth?

Galileo's paradigmatic idea was an experiment about the speed of falling objects. It is said that he took two solid metal balls of different weights and dropped them from the top of the Pisa Tower. He then noted that they arrived at the ground at about the same time. That allowed him to lampoon an ancient authority such as Aristotle for having said that heavier objects fall faster than lighter ones (*). There followed an avalanche of insults to Aristotle that continues to this day. Even Bertrand Russell fell into the trap of poking fun at Aristotle, whom he accused of having said that women have fewer teeth than men. Too bad that he never said anything like that.

It may well be that Galileo was not the first to perform this experiment, and it is not even clear that he actually performed it, but that's a detail. The point is that the result was evident, clear-cut, and irrefutable. Later, Newton started from this result to arrive at the idea that the same force that acted on an apple falling from a tree in his garden was also acting on the Moon and the planets. From then on, science was supposed to be largely based on laboratory experiments or, in any case, experiments performed under tightly controlled conditions. It was a major change of paradigm: the basis of the scientific method as we understand it today.

The Pisa Tower experiment succeeded in separating the two parameters that affect a falling body: the force of gravity and the air drag. That was relatively easy, but what about systems that have many parameters affecting each other? Here, let me start with the case of health care, which is supposed to be a scientific field, but where the problem of separating the parameters is nearly impossible to overcome.
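Galileo's result is easy to reproduce in silico. The sketch below (Python; the drop height, ball sizes, and drag numbers are illustrative values I picked, not historical data) integrates the fall of two iron balls of very different weights with quadratic air drag. They land a few hundredths of a second apart -- far too close for the naked eye to distinguish.

```python
import math

def fall_time(mass_kg, radius_m, height_m=55.0, dt=1e-4):
    """Integrate a falling sphere with quadratic air drag (Euler method).

    Assumed, illustrative values: drag coefficient 0.47 (smooth sphere),
    air density 1.2 kg/m^3, drop height 55 m (roughly the Pisa tower).
    """
    g, rho, cd = 9.81, 1.2, 0.47
    area = math.pi * radius_m ** 2
    v, y, t = 0.0, height_m, 0.0
    while y > 0:
        drag = 0.5 * rho * cd * area * v * v  # quadratic drag force
        a = g - drag / mass_kg                # net downward acceleration
        v += a * dt
        y -= v * dt
        t += dt
    return t

t_light = fall_time(1.0, 0.031)   # ~1 kg iron ball, radius ~3.1 cm
t_heavy = fall_time(10.0, 0.067)  # ~10 kg iron ball, radius ~6.7 cm
print(round(t_light, 2), round(t_heavy, 2))
```

With these numbers, both balls take about 3.4 seconds, and the lighter one lags by roughly a hundredth of a second: the drag term exists but is dwarfed by gravity, which is exactly why the Pisa experiment gave such a clean, clear-cut answer.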


The surrogate endpoint in medicine

How can you apply the scientific method in medicine? Dropping a sick person and a healthy one from the top of the Pisa Tower won't help you much. The problem is the large number of parameters that affect the nebulous entity called "health" and the fact that they all strongly interact with each other.

So, imagine you were sick, and then you feel much better. Why exactly? Was it because you took some pills? Or would you have recovered anyway? And can you say that you wouldn't have recovered faster had you not taken the pill? A lot of quackery in medicine arises from these basic uncertainties: how do you determine the specific cause of a certain effect? In other words, is a certain medical treatment really curing people, or is it just their imagination that makes them think so?

Medical researchers have worked hard at developing reliable methods for drug testing, and you probably know that the "gold standard" in medicine is the "Randomized Controlled Trial" (RCT). The idea of RCTs is that you test a drug or a treatment by keeping all the parameters constant except one: taking or not taking the drug. The design guards against the "placebo" effect (the patient gets better because she believes she is receiving a working drug, even though she is not) and the "nocebo" effect (the patient gets worse because he believes he is receiving a harmful drug, even though he is not).

An RCT involves a complex procedure that starts with separating the patients into two similar groups, making sure that none of them knows to which group she belongs (the test is "blinded"). Then, the members of one of the two groups are given the drug, say, in the form of a pill. The others are given a sugar pill (the "placebo"). After a certain time, it is possible to examine whether the treated group did better than the control group. Statistical methods are then used to determine whether the observed differences are significant. If they are, and if you did everything well, you know whether the treatment is effective, does nothing, or maybe causes harm.
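The phrase "statistically significant" can be made concrete with a toy calculation. Below is a minimal sketch (Python, with invented numbers; the function implements a standard two-proportion z-test with a normal approximation -- a textbook simplification, not the full machinery real trial statisticians use):

```python
import math

def two_proportion_pvalue(cured_a, n_a, cured_b, n_b):
    """Two-sided z-test for the difference between two proportions.

    A textbook normal approximation: real RCTs use pre-registered
    endpoints and far more careful statistics than this.
    """
    p_a, p_b = cured_a / n_a, cured_b / n_b
    p_pool = (cured_a + cured_b) / (n_a + n_b)          # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return math.erfc(abs(z) / math.sqrt(2))              # two-sided p-value

# Hypothetical trial: 60/100 improved on the drug vs. 45/100 on placebo
p = two_proportion_pvalue(60, 100, 45, 100)
print(round(p, 3))  # about 0.034, below the conventional 0.05 threshold
```

Note how fragile the verdict is: shift a handful of patients from one column to the other and the p-value crosses the 0.05 line, turning "effective" into "inconclusive."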

For limited purposes, the RCT approach works, but it has enormous problems. A correctly performed RCT is expensive and complex, and its results are often uncertain and, sometimes, turn out to be plain wrong. Do you remember the case of thalidomide? It was tested, found to work as a tranquilizer, and approved for general use in Europe in the late 1950s. It was later discovered to have teratogenic effects on fetuses, and some 10,000 babies in Europe were born with missing or malformed limbs before the drug was removed from the market. Tests on animals would have shown the problem, but they were not performed or were not performed correctly.

Of course, the rules were considerably tightened after the thalidomide disaster and, nowadays, testing on animals is required before a new drug is tested on humans. But let's note, in passing, that in the case of the mRNA Covid vaccines, tests on animals were performed in parallel with (not before) testing on humans. This procedure exposed volunteers to risks that would not normally be considered acceptable in drug testing. Fortunately, it does not appear that mRNA vaccines have teratogenic effects.

Even assuming that the tests are complete and performed according to the rules, there is another gigantic problem with RCTs: what do you measure during the test? Ideally, drugs are aimed at improving people's health, but how do you quantify "health"? There are definitions of health in terms of indices such as QALY (quality-adjusted life years) or QoL (quality of life). But both are difficult to measure and, if you want long-term data, you have to wait a long time. So, in practice, "surrogate endpoints" are used in drug testing.

A surrogate endpoint is a measurable parameter taken to approximate the true endpoint -- the patient's health. A typical surrogate endpoint is, for instance, blood pressure as an indicator of cardiovascular health. The problem is that a surrogate endpoint is not necessarily related to a person's health, and you always face the possibility of negative effects. In the case of drugs used to treat hypertension, negative effects exist and are well known, but it is normally believed that the positive effects of the drug on the patient's health outweigh the negative ones. That's not always the case. A recent example is how, in 2008, the drug bevacizumab was approved in the US by the FDA for the treatment of breast cancer on the basis of surrogate endpoint testing. The approval was withdrawn in 2011, when it was discovered that the drug was toxic and did not lead to improvements in cancer progression (you can read the whole story in "Malignant" by Vinayak Prasad).

Consider now another basic problem. Not only are the parameters affecting people's health numerous, but they also strongly interact with each other, as is typical of complex systems. The problem may take the form called "polydrug use," and it especially affects old people, who accumulate drugs on their nightstands just like old cars accumulate dents on their bodies. An RCT that evaluates one drug is already expensive and lengthy; evaluating all the possible combinations of several drugs is a nightmare. If you have two drugs, A and B, you have to go through at least three tests: A alone, B alone, and the combination A+B. If you have three drugs, you have seven tests to do (A, B, C, A+B, B+C, A+C, and A+B+C). In general, n drugs require 2^n - 1 tests: the number grows exponentially. In practice, nobody knows the effects of these multiple drug uses, and, likely, nobody ever will. But a common observation is that when the elderly reduce the number of medicines they take, their health immediately improves (this effect has not been validated by RCTs, but that does not mean it is not real; I noted it for my mother-in-law, who died at 101).
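The combinatorial explosion is easy to verify. A minimal sketch (Python; the drug names are placeholders) that enumerates every non-empty combination of n drugs yields 2^n - 1 treatment arms:

```python
from itertools import combinations

def treatment_arms(drugs):
    """Every non-empty combination of drugs that a complete testing
    program would have to cover: 2**n - 1 arms for n drugs
    (not counting the placebo control)."""
    arms = []
    for k in range(1, len(drugs) + 1):
        arms += ["+".join(c) for c in combinations(drugs, k)]
    return arms

print(treatment_arms(["A", "B"]))       # 3 arms: A, B, A+B
print(treatment_arms(["A", "B", "C"]))  # 7 arms, up to A+B+C
print(len(treatment_arms(list("ABCDEFGHIJ"))))  # 10 drugs: 1023 arms
```

Ten drugs on a nightstand already imply over a thousand distinct trial arms, each needing its own recruited, randomized, blinded cohort -- which is why nobody runs them.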


The case of Face Masks 

Some medical interventions have specific problems that make RCTs especially difficult. An example is that of face masks to prevent the spread of an airborne pathogen. Evidently, there is no way to perform a blind test with face masks, but the real problem is what to use as a surrogate endpoint. At the beginning of the Covid pandemic, several studies used cameras to detect liquid droplets emitted by people breathing or sneezing with or without face masks. That was a typical "Galilean," laboratory approach, but what does it demonstrate? Assuming that you can determine if and how much a mask reduces the emission of droplets, is this relevant in terms of stopping the transmission of an airborne pathogen? As a surrogate endpoint, droplets are at best poor, at worst misleading.

A much better endpoint is the PCR (polymerase chain reaction) test, which can directly detect an infection. But even here, there are many problems. As an example, consider an often-touted study performed in Pakistan that claimed to have demonstrated the effectiveness of face masks. Let's assume that the results of the study are statistically significant (really?) and that nobody tampered with the data (and we can never be sure of that in such a heavily politicized matter). Then, the best you can say is that if you live in a village in Pakistan, if there is a Covid wave ongoing, if the PCR tests are reliable, if the people who wore masks behaved exactly like those who didn't, and if random noise didn't affect the study too much, then by wearing a mask you can delay being infected for some time, and maybe even avoid infection altogether. Does the same result apply to you if you live in New York? Maybe. Is it valid for different conditions of viral diffusion and epidemic intensity? Almost certainly not. Does it ensure that you won't suffer adverse effects from wearing face masks? Duh! Would it make you healthier in the long run? We have no idea.

The Pakistan study is just one example of a series of studies on face masks that were found to be ill-conceived, poorly performed, inconclusive, or useless in a recent rigorous Cochrane review. The final result is that no one has been able to detect a significant effect of face masks on the diffusion of an airborne disease, although we cannot say that the effect is actually zero.

The confusion about face masks reached stellar levels during the COVID-19 pandemic. In 2020, Anthony Fauci, then director of the NIAID, first advised against wearing masks; then he reversed his position and publicly declared that face masks are effective, and even that two masks are better than one. Additionally, he declared that the effectiveness of masks is "science" and, therefore, cannot be doubted. But, nowadays, Fauci has reversed his position again, at least in terms of mask effectiveness at the population level. He still maintains that they can be useful for an individual "who religiously wears a mask." Now, imagine an RCT dedicated to demonstrating the different results of "religiously" and "non-religiously" wearing a mask. So much for science as a pillar of certainty.


Surrogate endpoints everywhere

Medicine is a field that may be defined as "science" since it is based (or should be based) on data and measurements. But you see how difficult it is to apply the scientific method to it. Other fields of science suffer from similar problems. Climate science, ecosystem science, biological evolution, economics, management, and policy are all cases in which you cannot reproduce the main features of the system in a laboratory and which, at the same time, involve a large number of parameters interacting with each other in a non-linear manner. You could say, for instance, that the purpose of politics is to improve people's well-being. But how could that be measured? In general, it is believed that the Gross Domestic Product (GDP) is a measure of the well-being of the economy and, hence, of all citizens. Then, it is concluded that economic growth is always good and that it should be stimulated by all possible interventions. But is it true? GDP growth is another kind of surrogate endpoint, used simply because we know how to measure it, while people's well-being is something that we don't know how to measure.

Is a non-Galilean science possible? We have to start considering this possibility without discarding the need for good data and good measurements. For complex systems, we have to move away from the rigid Galilean method and use dynamic models. We are moving in that direction, but we still have a lot to learn about how to use models and, incidentally, the Covid-19 pandemic showed us how models can be misused and lead to various kinds of disasters. But we need to move on, and I'll discuss this matter in detail in an upcoming post.


______________________________________________

(*) From Aristotle's "Physics" (Book VIII, Chapter 10), where he discusses the relationship between the weight of an object and its speed of fall:

"Heavier things fall more quickly than lighter ones if they are not hindered, and this is natural, since they have a greater tendency towards the place that is natural to them. For the whole expanse that surrounds the earth is full of air, and all heavy things are borne up by the air because they are surrounded and pressed upon by it. But the air is not able to support a weight equal to itself, and therefore the heavier bodies, as having a greater proportion of weight, press more strongly upon and sink more quickly through the air than do the lighter bodies."





  




Monday, April 17, 2023

One step away from the Library of Babel: How Science is Becoming Random Noise


It is said that if you have a monkey pounding at the keys of a typewriter, by mere chance, eventually it will produce all the works of Shakespeare. The Library of Babel, a story by Jorge Luis Borges, is another version of the same idea: a nearly infinite repository of books formed by all the possible combinations of characters. Most of these books are just random combinations of characters that make no sense but, somewhere in the library, there is a book unraveling the mysteries of the universe, the secrets of creation, and providing the true catalog of the library itself. Unfortunately, this book is impossible to find and even if you could find it you would not be able to separate it from the infinite number of books that claim to be it but are not. 
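The arithmetic behind the typing monkey shows why the library is almost all noise. As a rough estimate (ignoring overlap corrections, which change the answer only slightly), a monkey striking one of 27 keys at random needs on the order of 27^L keystrokes to produce a given L-character phrase:

```python
# Rough expected number of random keystrokes before a given phrase appears,
# assuming a 27-key typewriter (26 letters plus space) and independent,
# uniformly random keystrokes. Overlap corrections are ignored.
def expected_keystrokes(phrase, keys=27):
    return keys ** len(phrase)

n = expected_keystrokes("to be or not to be")  # an 18-character phrase
print(f"{n:.2e}")  # about 5.8e+25 keystrokes -- hopeless for any real monkey
```

At ten keystrokes per second, that single half-line of Hamlet would take on the order of 10^17 years, vastly longer than the age of the universe; the full works of Shakespeare are beyond astronomical.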

The Library of Babel (or a large number of typing monkeys) may be a fitting description of the sad state of "science" as it is nowadays. An immense machine that mostly produces nonsense and, perhaps, some gems of knowledge, unfortunately nearly impossible to find. 

Below, I am translating a post that appeared in Italian on the "Laterum" ("bricks") blog, signed by “Birbo Luddynski.” Before getting to the text, a few comments of mine. Obviously, "science" is a huge enterprise formed by maybe 10 million scientists. There exist a large number of different fields, different cultures, and different languages, and these differences surely affect the way science works. So, you have to take the statements by Mr. "Luddynski" with a certain caution. The way he describes science is approximately valid for the Western world, and Western scientific rules are spreading to the rest of the world, just as McDonald's fast food joints do. Today, if it is not Western Science, it is not science -- but is Western Science really science?

Mr. Luddynski is mostly correct in his description, but he is missing some facets of the story that are even more damning than the ones he covers. For instance, in his critique of science publishing, he does not mention that the scientists working as editors are paid by the publishers. So, they have an (often undeclared) conflict of interest in supporting a huge organization that siphons public money into the pockets of private companies.

On the other hand, Luddynski is too pessimistic about the capability of individual scientists to do something good despite all the odds. In science, a general rule holds: things done illegally are done most efficiently. So, scientists must obey the rules if they want to be successful but, once they attain a certain degree of success, they can bend the rules a little -- sometimes a lot -- and try to rock the boat by doing something truly innovative. It is mainly in this way that science still manages to progress and produce knowledge. Some fields, such as astronomy, artificial intelligence, ecosystem science, and biophysical economics, are alive and well, only marginally affected by corruption.

Of course, the bureaucrats that govern these things are working hard at eliminating all the possible spaces where creative escapades are possible. Even assuming that they won't be completely successful, there remains the problem that an organization that works only when it ignores its own rules is hugely inefficient. For the time being, the public and the decision-makers haven't yet realized what kind of beast they are feeding, but certain things are percolating outside the ivory tower and are becoming known. Mr. Luddynski's paper is a symptom of the gradual dissemination of this knowledge. Eventually, the public will stop being mesmerized by the word "science" and may want something done to make sure that their tax money is spent on something useful rather than on a prestige competition among rock-star scientists.  

Here is the text by Luddynski. I translated it into English as best I could, maintaining its ironic and scathing (and slightly scatological) style.


Science is a Mountain of Shit

by Birbo Luddynski - Feb 8, 2023

(translated by Ugo Bardi)

Foreword

Science is not the scientific method. "Science" - capitalized in quotation marks - is now an institution, stateless and transnational, that has fraudulently appropriated the scientific method, made it its exclusive monopoly, and uses it to extort money from society - on its own or on behalf of third parties - after having self-proclaimed itself as the new Church of the Certification of Truth [1].

An electrician trying to locate a fault or a cook aiming to improve a recipe are applying the scientific method without even knowing what it is, and it works! It has worked for thousands of years, since long before anyone codified its algorithm, and it will continue to do so, despite modern inquisitors.

This writer is not a scholar, and he does not cite sources, out of ideological conviction, but he learned at an early age how to spell "epistemology." He then wallowed for years in the sewage of the aforementioned crime syndicate until, without ever having become habituated to the mephitic stench, he found an honorable way out. Popper, Kuhn, and Feyerabend were all thinkers who fully understood the perversion of certain institutional mechanisms but who could not even have imagined that the rot would run so unchecked and tyrannical through society, to the point where a scientist could achieve the power to prevent you from leaving your house, or owning a car, or to force you to eat worms. Except for TK: he had it all figured out, but that is another matter.

This paper will not discuss the role that science plays in contemporary society, its gradual transformation into a cult, or the gradual erosion, in its name, of spaces of freedom and civic participation. It will not discuss its relationship with the media and the power they have to pass off the rants of a mediocre, bumptious professor as established truths. Thus, there will be no talk about the highest offices of the state declaring war on anti-science, declaring victories in plebiscites that were never called, where people are taken to the polls by blackmail and thrashing.

There will be no mention of how "real science" -- that is, science that respects the scientific method -- is being raped by the international virologists and the scientific and technical committees of the world, who make incoherent decisions, literally at the drop of a hat, and demand that everyone comply without question.

Nor will we discuss the tendency to engage in sterile Internet debates about the article that proves us right in the Lancet or the Brianza Medical Journal or the Manure Magazine. Nor the obvious tendency to cherry-pick by the deboonker of the moment. "Eh, but Professor Giannetti of Fortestano is a known waffler, lol." The office cultists of the ipse dixit are now meticulous enforcers of a strict hierarchy of sources and opinions, based on qualitative prestige rankings as improbable as they are supposedly rigorous or, worse, on quantitative bibliometric indices as improbable as they are arbitrary.

Here you will be told why science is not what it says it is and precisely why it has assumed such a role in society.

Everything you will find written here is widely known and documented in books, longforms, articles in popular weeklies, and even articles in peer-reviewed journals (LOL). Do a Google search for such terms as  "publish or perish," "reproducibility crisis," "p-hacking," "publication bias," and dozens of other related terms. I don't provide any sources because, as I said, it is contrary to my ideology. The added value of what I am going to write is an immediate, certainly partisan and uncensored, description of the obscene mechanisms that pervade the entire scientific world, from Ph.D. student recruitment to publications.

There is also no "real science" to save, as opposed to "lasciviousness." The thesis of this paper is that science is structurally corrupt and that the conflicts of interest that grip it are so deep and pervasive that only a radical reconstruction -- after its demolition -- of the university institution can save the credibility of a small circle of scholars: those who are trying painstakingly, honestly, and humbly to add pieces to the knowledge of Creation.

The scientist and his career

"Scientist" today means a researcher employed in various capacities at universities, public or private research centers, or think tanks. A scientist's career is entirely different from that of any other worker and is reminiscent of the cursus honorum of the Roman senatorial class. Cursus honorum my ass. A scientist's CV is different from that of any other worker. Incidentally, the hardest thing for a scientist is to compile a CV intelligible to the real world, should he or she want to return to earning an honest living among ordinary mortals. Try doing a Europass after six years of post-doc. LOL.

The scientist is a model student throughout his or her school career first and college career later. Model student means disciplined, with grades in the top percentiles of standardized tests, and with good interpersonal skills. He manages to get noticed during the last years of university (the so-called "undergraduate" years which, strangely, in Italy are called the magistrale or specialistica or whatever the fuck it is called nowadays), until he finds a recommendation (in the bad sense of the term) from a professor to enroll in a Ph.D. program, or doctorate, as it is called outside the Anglosphere. The Ph.D. is an elite university program, three to six years long, depending on the country and discipline, where one is trained to be a "scientist."

In the first few years of the Ph.D., the would-be scientist takes courses at the highest technical level, taught by the best professors in his department. These are complemented by an initial phase of initiation into research, in the form of work as a "research assistant," or RA. This phase, which marks the transition from the "established knowledge" of textbooks to the "frontier of scientific research," takes place under the explicit guidance of a professor-mentor, who will most likely follow the doctoral student through the remainder of the doctoral course, and almost always throughout his or her career. The education of crap. Not all doctoral students will be paired with a "winning" mentor, of course: only those who are more determined, more ambitious, and show vibrancy in their courses.

During this RA phase, the student is weaned onto the real practices of scientific research. It is a crucial phase; it is the time when the student goes from the Romantic-Promethean-Faustian ideal of discovering the Truth to nights spent fiddling with data that make no sense, amid errors in the code and a general bewilderment about what it all means. Why am I doing all this? Why do I have to find just THAT result? Doesn't this go against all the rationalist schemata I was sharing on Facebook two years ago when I was pissing off the flat-earthers on duty? What do you mean, if the estimate does not support the hypothesis, then I have to change the specification of the model? It is at this point that the young scientist usually asks the mentor some timid questions with a vague "epistemological" flavor, which are usually evaded with positivist browbeating about science proceeding by trial and error, or with a speech that sounds something like this: "You're going to get it wrong. There are two ways of not understanding: not understanding because you are trying to understand, or not understanding by pissing off others. Take your pick. In October, your contract runs out."

The purest, at this point, fall into depression, from which they will emerge by finding an honest job, with or without a piece of paper. The careerists, psychopaths, and naive fachidioten, on the other hand, will pick up speed and run along the tracks like bullet trains, unstoppable in their blazing careers devoted to prestige.

After the first phase of RA, during which the novice contributes to the mentor's publications (with or without being mentioned among the authors), he or she moves on to the next, more mature phase of building a personal research pipeline: collaborating as a co-author with the mentor and the mentor's other co-authors, attending conferences, and weaving a relational network with other universities. The culmination of this phase is the attainment of the Ph.D. degree, which, at this point, is nothing more than a formality, since the doctoral student's achievements speak for themselves. The "dissertation," or "defense," in fact, will be nothing more than a seminar where the student presents his or her major work, which is already publishable.

But now the student is already launched on a meteoric path: depending on discipline and luck, he will have already signed a contract as an "assistant professor" in a "tenure track," or as a "post-doc." His future career will thus be determined solely by his ability to choose the right horses, that is, the research projects to bet on. Indeed, he has entered the hellish world of "publish or perish," in which his probability of being confirmed "for life" depends solely on the number of articles he manages to publish in "good" peer-reviewed journals. In many hard sciences, the limbo of post-docs, during which the scientist moves from contract to contract like a wandering monk, unable to experience stable relationships, can last as long as ten years. The "tenure track," or the period after which they can decide whether or not to confirm you, lasts more or less six years. A researcher who manages to become an employee in a permanent position at a university before the age of 35-38 can consider himself lucky. And this is by no means a peculiarity of the Italian system.

After the coveted "permanent position" as a professor, can the researcher perhaps relax and get down to work only on sensible, quality projects that require time and patience and may not even lead anywhere? Can he work on projects that may even challenge the famous consensus? Does he have time to study so that he can regain some insight? Although the institution of "tenure," i.e., the permanent faculty position, was designed to allow just that, the answer is no. Some people will even do it, aware that they will end up marginalized and ignored by their peers, in the department and outside. Not fired, but relegated to the last room in the back by the toilet, seeing not a penny of research funding, never invited to conferences, and saddled with the lamest Ph.D. students to do research. If such a professor is liked, he is treated as an eccentric uncle and good-naturedly teased; if disliked, he is first hated and then ignored.

But in general, if you've survived ten years of publish or perish -- and not everyone can -- it's because you like doing it. So you will keep doing it, partly because the rewards are coveted. Salary increases, committee chairs, journal editor positions, research funds, consultancies, invitations to conferences, million-dollar grants as principal investigator, interviews in newspapers and on television. Literally, money and prestige. The race for publications does not end; in fact, at this point, it becomes more and more ruthless. And the scientist, now a de facto manager of research, with dozens of Ph.D. students and postdocs available as workforce, will increasingly lose touch with the subject of research to become interested in academic politics, money, and public relations. It will be the doctoral student now who will "get the code wrong" for him until he finds the desired result. He won't even have to ask explicitly, in most cases. Nobody will notice anyway, because nobody gives a damn. But what about peer review?

The peer-review

Thousands of pages have been written about this institution, its merits, its flaws, and how to improve it. It has been ridiculed and trolled to death, but the conclusion recited by the standard bearers of science is always the same: it is the best system we have, and we should keep it. Which is partly true, but its supposed aura of infallibility is the main cause of the sorry state the academy is in. Let's see how it works.

Our researcher Anon has his nice PDF written in LaTeX, titled "The Cardinal Problem in Manzoni: is it Betrothed or Behoofed?" (boomer quote), and submits it to the Journal of Liquid Bullshit via its web platform. The JoLB chief editor takes the PDF, has a quick look, and decides which associate editor to send it to, depending on the topic. The chosen editor takes another quick look and decides whether to 1) send a short letter telling Anon that his work is definitely nice and interesting but, unfortunately, not a good fit for the Journal of Liquid Bullshit, recommending similar outlets such as the Journal of Bovine Diarrhea or the Liquid Bovine Stool Review, or 2) send it to the referees. But let's pause for a moment.

What is the Journal of Liquid Bullshit?

The Journal of Liquid Bullshit is a "field journal" within the general discipline of "bullshit" (Medicine, Physics, Statistics, Political Science, Economics, Psychology, etc.), dealing with the more specialized field of Liquid Bullshit (Virology, Astrophysics, Machine Learning, Regime Change, Labor Economics, etc.). It is not a glorious journal like Science or Nature, and it is not a prestigious journal in the discipline (Lancet, JASA, AER, etc). But it is a good journal in which important names in the field of liquid crap nonetheless publish regularly. In contrast to the more prestigious ones, which are generally linked to specific bodies or university publishers, most of these journals are published (online, that is) by for-profit publishers, who make a living by selling subscription packages to universities at great expense.

Who is the chief editor? The editor is a prominent personality in the Liquid Bullshit field. Definitely a Full Professor, at least 55 years old, with dozens of prominent publications, highly cited, and a keynote speaker at several annual conferences of the American Association of Liquid Bovine Stools. During conferences, he constantly has a huddle of people around him.


Who are the Associate editors?

They are associate or full professors with numerous publications, albeit relatively young. Well connected, well launched. The associate editor's job is to manage the manuscript at all stages, from the choice of referees to decisions about revisions. The AE is the person with all the power over the paper: not only, of course, the final decision, but also the choice of referees.


Who are the referees?

Referees are academic researchers formally contacted by the AE to evaluate the paper and write a short report. Separately, they also send the AE a private recommendation on the fate of the paper: acceptance, revision, or rejection. The work of the referees is unpaid; it is time taken away from research or, very often, from free time. The referee receives only the paper; he or she has no way to evaluate the data or code, unless (very rarely) the authors make them available on their personal web pages before publication. In any case, the referee has little reason to dig deep: the evaluation required is only methodological and qualitative, not an evaluation of merit. The "merit" evaluation would be up to the "scientific community," which, by replicating the authors' work under the same or other experimental conditions, would judge its validity. Even a child would see the glaring conflict of interest here: the referee can actively sabotage the paper, and the author can quietly sabotage the referees when it is their turn to judge.

Back when papers were typed and shipped by mail, the system used was "double-blind" review. The title page was removed from the manuscript: the referee did not know whom he was judging, and the author did not know who the referee was. Since the Internet has existed, however, authors have found it more and more convenient to publicize their work in the "working paper" format. There are many reasons for doing so, which I won't go into here, but the practice is now so widespread that a referee needs only a quick Google search to find the working paper and, with it, the authors' names. By now, more and more journals have given up pretending to believe in double-blind reviewing and send the referees the PDF complete with the title page. Thus authors are no longer anonymous to referees, which matters, because referees have strong conflicts of interest. The referee may get hold of the paper of a friend or colleague, or of a stranger whose results confirm -- or contradict -- his or her own research. A referee may also "reveal" himself to the author years later, say over a drink at an alcoholic conference event, obviously in the case of a favorable report: "You know, I was a referee of your paper at Journal X." It can come in handy if you are on familiar terms. Let's not forget that referees are people who are themselves trying to get their papers published, in other journals or even the same ones. And not all referees are equal. A "good" referee is a rare commodity for an editor: one who responds on time and writes predictable reports. I have known established professors who boasted of receiving dozens of papers a month to referee "because they know I reject them all right away."

So how does this peer review process work?

We should not imagine editors as people removed from the real world who simply have their hands extremely full. An editor has enormous power, and one does not become an editor without showing that one covets this power, as well as having earned it, according to the perverse logic of the cartel. The editor's enormous power lies in influencing the fate of a paper at every stage of revision. He can choose referees favorable or unfavorable to a paper: scientific feuds and their participants are well known, and are especially well known to editors. He can also side with a minority referee and ask for a revision (or a rejection).

Every top researcher knows which editors are friendly and which potential referees can be the most dangerous enemies. Articles are often calibrated accordingly, trying to ingratiate the editor or potential referees by winking at their research agendas. Indeed, it is in this context that citation horse-trading develops: if I am afraid of referee Titius, I will cite as many of his works as possible. Another strategy (valid for smaller papers and niche works) is to ignore him completely; otherwise, a neutral editor might choose him as a referee simply by scrolling through the bibliography. Many referees also go out of their way at the review stage to pump up their citation counts, essentially pushing authors to cite their work even when it is irrelevant. But this happens in smaller journals. [2]

The most important thing to understand about peer review is that it is a process whose participants are consciously and structurally conflicted. The funny thing is that this same system is also used to allocate public research funds, as we shall see.

The fact that the occasional "inconvenient" article manages to break through the peer review wall should not deceive us: the process, though tightly steered, remains partly random. A referee may unexpectedly recommend acceptance, and the editor can do little about it. In any case, little harm is done: an awkward article now and then will never undermine the consensus that suits the ultimate funding apparatus, and it even gives the outside observer the illusion that there is frank debate.


But can one really rig research results and get away with it?

Generally yes, partly because you can cheat at various levels, leaving behind various screens of plausible deniability, i.e., gimmicks that allow you to say that you were wrong, that you didn't know, that you didn't do it on purpose, that it was the Ph.D. student's fault, that the dog ate your raw data. Of course, in the rare case that you get caught, you don't look good, and the paper will generally end up "retracted." A proper mark of infamy on a researcher's career; but if he can hide the evidence of his bad faith, he will still retain his professorship, and in a few years everyone will have forgotten. After all, these are things that everyone does: he who is without sin, the first stone, etc.

Every now and then, and more and more frequently, some important paper gets retracted when it turns out that the data of some well-published work was completely made up and that the results were too good to be true. This happens when some young up-and-coming psychopath exaggerates and draws too much attention to himself. These extreme cases of outright fraud are certainly more numerous than those discovered but, as mentioned, you don't need to invent the data to come up with a publishable result: just pay a monkey to try all possible models on all possible subsets. An a posteriori justification for why you excluded subset B can always be found. You can even omit it if you want to because, if you haven't stepped on anyone's toes, nobody is going to start picking apart your work. Even in the experimental sciences it can happen that no one manages to replicate the experiments of very important papers, and it takes fifteen years to discover that maybe they made it all up. Go google "Alzheimer scandal," LOL! There is no real incentive in the academy to uncover bogus papers, other than literally bragging about it on Twitter.
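The "try all possible models on all possible subsets" trick above is just the multiple-testing problem in disguise. A minimal simulation (my own illustrative sketch, not from the post; all numbers invented) shows why: run twenty subgroup comparisons on pure noise, and a nominally "significant" finding appears most of the time.

```python
import math
import random

def one_study(n_subgroups=20, n_per_arm=50, z_crit=1.96):
    """Simulate one study in which NO real effect exists anywhere.

    For each subgroup, draw a 'treatment' and a 'control' sample from
    the same normal distribution (mean 0, sigma 1), then run a
    two-sample z-test with sigma known. Return True if at least one
    subgroup comparison comes out nominally 'significant' at the 5% level.
    """
    se = math.sqrt(2.0 / n_per_arm)  # std. error of the mean difference
    for _ in range(n_subgroups):
        treat = sum(random.gauss(0, 1) for _ in range(n_per_arm)) / n_per_arm
        ctrl = sum(random.gauss(0, 1) for _ in range(n_per_arm)) / n_per_arm
        if abs((treat - ctrl) / se) > z_crit:
            return True  # a publishable 'finding' made of pure noise
    return False

random.seed(0)
n_trials = 2000
fwer = sum(one_study() for _ in range(n_trials)) / n_trials
# Theory: 1 - 0.95**20 is about 0.64, i.e., roughly two studies out of
# three 'find' an effect where none exists.
print(f"share of studies with at least one spurious finding: {fwer:.2f}")
```

With 20 independent tests at a 5% threshold each, the family-wise false-positive rate is 1 − 0.95²⁰ ≈ 0.64, which is what the simulation recovers. The a posteriori story about "subset B" then writes itself.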


Research funding

Doing research requires money. There is not only laboratory equipment (not needed in many disciplines anyway). There is also, more importantly, the workforce. Laboratories and research in general are run by underpaid people, namely Ph.D. students and post-docs (known in Italy as "assegnisti"). And not only that: there are also expenses for software, dataset purchases, travel to conferences, and seminars and workshops to be organized, with invited guests whose travel, hotel, and dinner at a decent restaurant you have to pay for. Consider the important PR side of this: inviting an important professor, perhaps an editor, for a two-day vacation and making a "friend" of him or her can always come in handy.

In addition to all this, universities generally keep a share of the grants won by each professor, and with this share they do various things. The professor who does not win grants and needs to replace his chair or his computer, or wants to present at a loser conference where he is not invited, will have to go hat in hand to the department head to ask for money that largely comes from the grants won by others. I don't know exactly how it works in Italy, but in the rest of the world it certainly works like this. This obviously means that a professor who wins grants will have more power and prestige in the department than one who does not.

In short, winning grants is a key goal for any ambitious researcher. Not winning grants, even for a professor with tenure, implies becoming something of an outcast. The one who does not win grants is the odd one out, with shaggy hair, who has the office down the hall by the toilet. But how do you win grants?

Grants are given by special agencies -- public or private -- for the disbursement of research funds, which publish periodic or special calls. The professor, or research group, writes a research project saying what he wants to do, why it is important, how he intends to do it, with what resources, and how much money he needs. The committee then appoints "anonymous" referees, experts in the field, who peer-review the project. It doesn't take long to realize that if you are an expert in the field, you know very well whom you are evaluating. And if you have read this far, you also know full well that the referees have a gigantic conflict of interest. All it takes is one snide remark to scuttle the rival gang's project, or to favor the friend who shares your research agenda, while no one will have the courage to scuttle the project of the true "raìs" of the field. Finally, the committee will evaluate the applicant's academic profile, obviously counting the number and prestige of publications, as well as the totemic H-index.
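For readers who have never met the "totemic H-index": it is the largest number h such that the researcher has h papers each cited at least h times. A minimal sketch (my own illustration; the citation counts are invented):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # the rank-th paper still has >= rank citations
        else:
            break
    return h

# Invented example: seven papers with these citation counts.
papers = [31, 12, 9, 7, 6, 2, 1]
print(h_index(papers))  # → 5: five papers have at least 5 citations each
```

Note how the metric rewards exactly the citation horse-trading described earlier: a handful of traded citations on mid-ranked papers moves h directly, while scientific content moves it not at all.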

So we have a system where those who get grants publish, and those who publish get grants. All of it is governed by an inextricable web of conflicts of interest, in which the informal, and ultimately self-interested, connections of the individual researcher win out. Informal connections that, let us remember, start with the Ph.D. What is presented as an aseptic, objective, and impersonal system of meritocratic evaluation resembles, at best, the system for awarding the contract to resurface the bus parking lot of a small town in the Pontine countryside in the 1980s.


The research agenda

We have mentioned it several times, but what really is a research agenda? In short, a research agenda is a line of research in a certain sub-field of a sub-discipline, linked to a starting hypothesis and/or a particular methodology. The starting hypothesis will always tend to be confirmed by those pursuing the agenda; the methodology will always be presented as decisive and far superior to the alternatives. A research agenda, to continue with the earlier example, could be the relationship between the color and specific gravity of cattle excrement and milk production. Or non-parametric methods to estimate excreta production given diet composition.

Careers are built or blocked around the research agenda: a researcher with a "hot" agenda, perhaps in a relatively new field, will be much more likely to publish well, and thus obtain grants, and thus publish well. You are usually launched onto the hot agenda during your doctoral program, if your advisor advises you well. It may be, for example, that the advisor has a strand in mind but doesn't feel like learning a new methodology at age 50, so he sends the young co-author ahead. Often, then, the 50-year-old prof, by now a "full professor," finds himself among the leaders of a "cutting-edge" research strand without ever really having mastered its methodologies and technicalities, limiting himself to the managerial and marketing side of the business.

As already explained, real gangs form around the agendas, acting to monopolize the debate on the topic and giving rise to a controlled pseudo-debate. The bosses' "seminal" articles can never be totally demolished; if anything, they can be enriched and expanded. One may explore the issues from another point of view, along other dimensions, using new analytical techniques and different data, which will lead to ever more divergent conclusions. The key thing is that no one will ever say, "Sorry guys, but we are wasting time here." The only wasted time in the academy is time that does not lead to publications and, therefore, grants.

But then, who dictates the agenda? Directly, the big players in research dictate it: the professors at the top of their international careers, the editors of the most prestigious journals. Indirectly, the ultimate funders of research dictate it: multinational corporations and governments, through the actions of lobbies and various international potentates.

The big misconception underlying science, and thus the justification of its funding in the face of public opinion, is that this incessant and chaotic "rush to publish" nonetheless succeeds in adding building blocks to knowledge. This is largely false.

In fact, research agendas never talk to each other and, above all, never seem to come to a conclusion. After years of pseudo-debate, the agenda will have been so "enriched" by hundreds of published articles that trying to make sense of it would be hard and thankless work. Thankless because no one is interested in doing it: the funds have been spent, and the chairs have been taken. Sooner or later the strand dries up: the topic stops being fashionable, and thus desirable to top journals; it gradually migrates to smaller journals; new Ph.D. students stop caring about it; and if the leaders of the strand care about their careers, they move on to something else. And the merry-go-round begins again.

The fundamental questions posed at the beginning of the agenda will not have been satisfactorily answered, and the conceptual and methodological issues raised during the pseudo-academic debate will certainly not have been resolved. An outside observer who studied the entire literature of a research strand could not help but notice that there are very few "take-home" results. Between inconclusive results and flawed or misapplied methodologies, the net contribution to knowledge is almost always zero, and the qualitative, "common sense" answer always remains the best.


Conclusions

We have thus far described science: its functioning, the actors involved in it, and their recruitment. We have described how conflicts of interest -- and moral and substantive corruption -- govern every aspect of the academic profession, which is thus unable, as a whole, to offer an objective, unbiased viewpoint on anything.

The product of science is a giant pile of crap: wrong, incomplete, blatantly false and/or irrelevant answers. Answers to questions that, in most cases, no one in their right mind would want to ask. A mountain of shit where no one among those who wallow in it knows shit. No one understands shit, and poorly asked questions are met with bought-and-paid-for answers. A mountain of shit within which feverish hands seek -- and find -- the mystical-religious justification for contemporary power, arbitrariness, and tyranny.

Sure, there are exceptions. Sure, there is the prof who has slipped through the net of the system and now, thanks to the position he has gained, has a platform to say something interesting. But he is an isolated voice, ridiculed, used to stir up the usual two minutes of TV or social-media hatred. There is no baby to be saved in the dirty bathwater. The baby is dead. Drowned in the sewage.

The "academic debate" is now a totally self-referential process, leading to no tangible quantitative (or qualitative) results. All academic research is nothing but a giant Ponzi scheme to increase citations, which serve to get grants and pump up the egos and bank accounts -- but mostly egos -- of professional careerists.

Science as an institution is an elephantine, hypertrophied apparatus, corrupt to the core, whose only function -- besides incestuously reproducing itself -- is to provide legitimacy for power. Surely, at one time it was also capable of providing the technical knowledge base necessary for the reproduction and maintenance of power itself. No longer: today, the unstoppable growth of the shit mountain makes this impossible. At most, it manages to select, nominally, by handing out attendance tokens through the most elite schools, the scions of the new ruling class.

When someone extols the latest scientific breakthrough to you, the only possible response is, "I'm not buying anything, thank you." If someone tells you that they are acting "according to science," run away.

[1] It was Bellarmine who advised Galilei to speak of "hypotheses." In response, Galilei published the ridiculous "Dialogue," where the evidence brought to support his claims about heliocentrism was nothing more than conjecture, completely wrong and without any empirical basis. The Holy Office's position was literally, "say whatever you like as long as you don't pass it off as Truth." Galilei got his revenge: now they say whatever they like and pass it off as Truth. Sources? Look them up.

[2] Paradoxically, in the lower-middle tier of journals, where cutthroat competition is minimal, one can still find a few rare examples of peer review done well. For example, I was once asked to referee a paper for a smaller journal with which I had never had anything to do and whose editor I did not know even indirectly. The paper was sent without a title page, therefore anonymous, and, strangely, I could not find the working paper online. It was objectively rubbish, and I recommended rejection.

________________________________________________________________





Thursday, April 13, 2023

What's Wrong With Science? Mostly, it is how we Mismanage it

 


"A scientist, Dr. Hans Zarkov, works night and day, perfecting the tool he hopes to save the world... His great mind is strained by the tremendous effort" (From Alex Raymond's "Flash Gordon")


We tend to see science as the work of individual scientists, maybe of the "mad scientist" kind. Great minds fighting to unravel the mysteries of nature with the raw power of their minds. But, of course, it is not the way science works. Science is a large network of people, institutions, and facilities. It consumes huge amounts of money from government budgets and private enterprises. And most of this money, today, is wasted on useless research that benefits no one. Science has become a great paper-churning machine whose purpose seems to be mainly the glorification of a few superstar scientists. Their main role seems to be to speak glibly and portentously about how what they are doing will one day benefit humankind, provided that more money is poured into their research projects.

Adam Mastroianni, in his blog, offers a few simple and well-thought-out considerations about why science has become the disastrous waste of money and human energy it is today. The problem is not science itself: the problem is how we manage large organizations.

You may have experienced the problem in your own career. Organizations seem to work nicely for the purpose they were built for when they include a few tens of people, maybe up to a hundred members. Then they devolve into conventicles whose main purpose seems to be gathering resources for themselves, even at the cost of damaging the enterprise as a whole.

Is it unavoidable? Probably yes. It is part of the way Complex Adaptive Systems (CAS) work, and, by all means, human organizations are CASs. These systems are evolution-driven: if they exist, it means they are stable. So, the existing ones are those that managed to attain a certain degree of stability. They do that by ruthlessly eliminating the inefficient parts of the system. The best example is Earth's ecosystem. You may have heard that evolution means the "survival of the fittest." But no, it is not like that: it is the system that must survive, not the individual creatures. The "fittest" creatures are nothing if the system they are part of does not survive. So, ecosystems survive by eliminating the unfit -- what Gorshkov and Makarieva call "decay individuals." You can find these considerations in their book "Biotic Regulation of the Environment."

It is the same for the CAS we call "Science." It has evolved in a way that maximizes its own survival and stability. That's evident if you know even a little about how Science works. It is a rigid, self-referencing organization, refractory to all attempts at reform from the inside. It is a point that Mastroianni makes very clear in his post. A huge amount of resources and human effort is spent by the scientific enterprise on weeding out what is defined as "bad science," that is, anything that threatens the stability of the whole system. That includes the baroque organization of scientific journals, the gatekeeping of the disastrously inefficient "peer review" system, the distribution of research funds through rigid old-boy networks, the beastly exploitation of young researchers, and more. All this tends to destroy both the very bad (which is a good thing) and the very good (which is not a good thing at all), because both threaten the stability of the entrenched scientific establishment. Truly revolutionary discoveries that could really change the world would reverberate through the established hierarchies and make the system collapse.

Mastroianni makes these points from a different viewpoint, which he calls the "weak links -- strong links" problem. It is the correct framing if you see Science not as a self-referencing system but as a subsystem of a wider system, human society. In this sense, Science exists to serve useful purposes, not just to pay salaries to scientists. What Mastroianni says is that we should strive to encourage good science instead of discouraging bad science. What we are doing instead is settling on mediocrity, and we waste money in the process. Here is how he summarizes his idea.

I strongly encourage you to read the whole of Mastroianni's post because it is very well argued and convincing. It describes what we should do to turn science into something useful, which we badly need in this difficult moment for humankind. But the fact that we should do it doesn't mean it will be done. Note, in Mastroianni's post, the box that says "Accept Risk." This is anathema for bureaucrats, and the need for it nearly guarantees that it will not be done.

Yet, we might at least try to push science into doing something useful. Prizes could be a good idea: by offering prizes, you pay only for success, not for failure. But in Science, prizes are rare; apart from the Nobel Prize and a few others, scientists do not compete for prizes. That's something we could work on. And, who knows, we might succeed in improving science, at least a little!





 



Monday, February 27, 2023

The Return of Oracles. A New Epistemic Revolution is Coming

 


Why would people trust the Pythoness of the Oracle of Delphi? To us, it looks like a naive or silly idea, but the ancients were neither naive nor silly. They understood that oracles were sophisticated information management systems, very advanced for their times. Today, we have something similar with the new, AI-based oracles. But the consequences for our way of seeing the world remain to be seen.


The first epistemic system: Paganism

For people living in Classical times, the world was seen according to traditions consolidated over centuries. It was what it was because of the will of the Gods, and people could do little or nothing to change it. But humans could seek the favor of the Gods -- in a sense, "bribing" them -- by performing sacrifices and respecting the Gods' altars and shrines. This was called "piety," in the sense of being "pious." In ancient times, a pious man didn't need to have a strong faith, or moral sentiments, or to be an especially good person. He followed the rules and obeyed the laws; that was what was required to carry on a respected and fruitful life (*).

The Pagan system involved the use of oracles to glimpse the Gods' will. We often tend to see our ancestors as naive and ignorant, but oracles were far from a primitive system. They were a sophisticated epistemological data-collection system that continuously communicated with society to build and manage knowledge. So, if King A asked the Oracle whether he would be successful in attacking King B, the Oracle obtained a precious element of information about the intentions of King A -- information that could be very useful (and lucrative) when King B came to ask a question. Much of the human communication system still works this way: you always pay for information with information.


The epistemic revolution: Christianity. 

With the decline of the Roman Empire, Pagan epistemology lost most of its appeal. The Romans hadn't stopped being pious; they kept making sacrifices and respecting shrines and altars, maniacally, even forcing people to be pious on pain of death. But the Gods didn't seem to care. The Empire was crumbling, justice had become oppression, the government was tyranny, and corruption was rampant. What sense was there in being pious? Why should the Gods care if a priest killed a goat for them, and then ate it himself? And those silly oracles -- nobody trusted them anymore.

Christianity offered a different kind of epistemology. The Christian God could not be bought on the cheap with the blood of a few goats on an altar. God had a special relationship with his people, to the point that He had sent his own son to suffer and die for humankind. Humans, in turn, needed to repay this great kindness by behaving well toward each other, helping each other, and building society together. In this way, a benevolent and merciful God could be trusted much more than the capricious and often malevolent Pagan Gods.

It was a completely new concept, one that generated the flowering of the creative and sophisticated civilization we call the "Middle Ages" and that, for some silly reason, we tend to denigrate as a "dark age." The Christian epistemological system was suspicious of people speaking directly with God. According to Christianity (and Islam as well), God had already said everything there was to say in the holy books. That didn't prevent the search for new knowledge in marginal areas but, if something important was unclear, the problem was to be solved by consulting the wise men versed in interpreting the scriptures.


The new epistemic revolution: science

With the new millennium, Europeans started expanding into non-Christian lands. Christianity, like all epistemic systems, was based on a set of shared principles, but how to deal with people who were not Christian and who stubbornly refused to convert to such an obviously good idea as Christianity? Should they be exterminated for this evident lack of understanding? (Much later, the same problem arose with democracy.) It was a major problem that Christianity tried to solve with the disputatio of Valladolid (1550–1551). The result was clear: the holy books said that Christians had to respect the natives of the new lands and could not enslave them, nor force them to convert to Christianity. From a theological viewpoint, it was correct, but it didn't work in economic and political terms. The European states were expanding overseas, and that implied the ruthless exploitation of the natives as slaves or, simply, their extermination. If that clashed with Christian principles, then to hell with Christian principles.

For a period, European intellectuals flirted with the idea of returning to Paganism, but that never worked out. Instead, an epistemic system compatible with the new needs was found in the doctrine called "science." It was no longer based on the words of God but on experiments, interpreted according to the scientific method. The rules were often nebulous and unclear, but the method was said to be a magic tool able to determine the laws of the universe. It was a success and, starting in the 17th century, science gradually took over as the standard epistemic system of Western culture. Christianity survived as a Sunday thing, a set of recommendations on how to be nice, but not to be taken too seriously.

Conveniently, science had no moral strings attached: a good scientist could be a bad person; it didn't matter, provided that the rules of the scientific method were respected. That allowed Science to "solve" the problem of non-European populations by "proving" that they were inferior races. It looks aberrant to us, but it was the standard knowledge that "Science" provided on the subject up to the mid-20th century in most Western cultures.


The rise of propaganda. 

The 19th and 20th centuries saw the rise of powerful nation-states, which developed an effective system of persuasion called "propaganda," made possible in turn by a new set of communication tools called the "media" or "mass media." Propaganda, in itself, is not an epistemic system: it has no rules for finding universal laws. At most, it is loosely based on science -- a bowdlerized version of science that only produces statements that suit the state. Science turned out to be easily bent to the needs of the state: scientists were easily corrupted by money or by promises of career and prestige.

The paradigmatic example of how propaganda works is the slogan "Mussolini is always right," fashionable in Italy during the Fascist era. It was a stark expression of the basic principle of propaganda: Mussolini was right not so much because he was especially clever, but because whatever he said was the voice of the state, and hence truth in its purest form. At the time, Italian scientists were all too happy to find scientific proof that, indeed, Mussolini was right in whatever he said.

More than an epistemic system, propaganda is a communication system. Its messages are repeated over and over in simplified forms that leave no space for alternatives. In military terms, you would call propaganda a "full-spectrum dominance" of people's minds. As such, it is extremely effective, and it has come to define the way of thinking and behaving in Western society.


The new epistemic revolution: the Web and the return of oracles.

Around the turn of the millennium, society became more and more complex, and the state propaganda system started becoming too rigid and oversimplified. The development of the World Wide Web was an existential challenge for the mass media: people no longer needed to be told what they had to know in a one-size-fits-all form. They could actively search for knowledge using general-purpose search engines.

The epistemic battle rapidly moved to the Web, where states tried to crack down on independent thought using the tools they know best. Demonization, with terms such as "fake news," "disinformation," and "Russian trolls," was extensively and successfully used to censor and eliminate non-standard sources. It was not possible to eliminate independent communication completely, but the search engines could be bent to suit the needs of the state without direct censorship. Sites that provide independent data could simply be "soft-banned" or "shadowed": they are still there, but they are nearly impossible to find.

And now come the new oracles, under the name of artificial-intelligence "chatbots." (**) A new epistemic revolution: they bypass the search engines, giving direct answers to direct questions, just as the old oracles did. And they are flexible, adaptable, and changing as a function of the questions they receive. In principle, chatbots are the death knell for search engines, just as search engines were the death knell for mass-media propaganda. 

We are in the midst of this new epistemic change, and it is unclear what AI chatbots can and cannot do. For the time being, a good chatbot acts like a good (albeit a little dull) librarian and, at the same time, a meticulous (and also rather dull) editorial assistant. But, in principle, chatbots can bypass the shadow-banning of the search engines: they are much more powerful and sophisticated tools for searching information on the Web. They work on the same principle as the old oracles: they pay for information with information. Every time you ask the bot a question, you provide it with a precious element of information.

In the end, the question is always the same: who controls the flow of information? Can chatbots be controlled by the state? Maybe, but again, they are very sophisticated tools, not easily managed with the brute force that worked on the old mass media. They have already scared the elites, who may soon crack down on them. Or, on the other hand, they may be used for even stricter control of people. In any case, be careful about what you say to your chatbot. A general-purpose chatbot such as "chatGPT" does not keep a record of who asked it what. But others, such as "Replika," do build a database of your questions that may be used to profile you, or much worse. 

History teaches us that epistemic revolutions cannot be stopped for long. It also teaches that their consequences are always unexpected, and not always pleasant. And, as always, we march into the future blindfolded, in the darkness, not realizing that deep chasms flank both sides of the path. But we keep walking onward.



Here is what chatGPT told me about how it could be used by the elites. It seems clear that even the poor AI is unsure of what could happen! 

"Could a chatbot be used as a tool to maintain the power of the elites?"

Yes, a chatbot could potentially be used as a tool to maintain the power of the elites if it is programmed to promote certain ideas or beliefs that serve the interests of the elite class. For example, a chatbot could be programmed to downplay or dismiss criticisms of the status quo, to promote certain political ideologies or parties, or to spread misinformation or propaganda.

In addition, chatbots could also be used to gather data on users' opinions and preferences, which could then be used to inform marketing strategies or political campaigns that favor the interests of the elites. By collecting and analyzing data on users' behaviors and attitudes, chatbots could be used to influence and shape public opinion in ways that benefit the elite class.

To prevent chatbots from being used as tools to maintain the power of the elites, it is important to ensure that they are transparent and unbiased in their programming and interactions. Chatbots should be programmed with ethical and inclusive values, and should be regularly monitored and audited to ensure that they are not promoting harmful or discriminatory content. Additionally, chatbots should be designed to empower and uplift marginalized communities, rather than reinforce existing power structures.

(*) The paradigmatic example of the virtuous man in ancient times was Aeneas, the Trojan warrior who traveled to Italy to start the Latin civilization. In the Aeneid by Virgil, Aeneas seduces the Queen of Carthage, Dido, then callously abandons her to her destiny, leading her to commit suicide. Yet Aeneas is described as a positive character who always acts in accordance with the will of the Gods.  

(**) Stephen Wolfram has an excellent summary of how chatbots work at https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/


Sunday, February 5, 2023

The Failure of Scientific Journals: the Failure of Science

 


Scientific, "peer-reviewed" journals are rapidly becoming a major stumbling block to scientific innovation. Here, I tell the story of one of these journals that I myself helped create. From this, I argue that the loose network that science used to be (an excellent example of a "social holobiont") has degenerated into a rigid, hierarchical structure that allows no changes and no innovations. And of what use is science if it doesn't innovate anymore?


"Biophysical Economics and Sustainability" is a scientific journal I helped create back in 2016. I still think it was a good idea, but it didn't work as expected. So, I resigned from my position as journal editor this December (1). But let me tell the story from the beginning. 

The journal was the brainchild of Charles A. S. Hall and David Packer. Charlie Hall is the developer of the fundamental concept of EROI (energy return on energy invested). Dave Packer was a senior editor at Springer (now retired). The idea was to create a high-quality journal that could offer a publishing outlet in the field called "biophysical economics" or "econophysics." You may have heard of this field: it is an approach to economics based on the same models used in biology. It examines the essential elements of an economic system: an entity that transforms resources into products, and then into waste. The main difference from traditional economics is that biophysical economics focuses on material things that can be measured: energy, mass, materials, and the like. Traditional economics, in contrast, is heavily focused on money and prices, and often loses contact with the physical world. 

For instance, it is often said in the mining industry that "prices create resources." The idea is that when a mineral resource becomes scarce because of depletion, prices rise, making it profitable to extract resources that were not profitable before. It is a magic trick supposed to create something out of nothing. Needless to say, it doesn't work in the real world, and it doesn't work in the biophysical approach either. The concept of EROI (energy return on energy invested) is fundamental to understanding this point. It tells you what is possible to do with energy technologies and what is not. But it simply does not exist in traditional economics: it is ignored, and, as a consequence, plenty of resources are wasted on non-viable energy technologies, for instance, biofuels and hydrogen. 
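As a back-of-the-envelope illustration, the EROI logic can be sketched in a few lines of code. The numbers below are made up for illustration only; they are not real data for any technology:

```python
# EROI = energy delivered to society / energy invested to obtain it.
# A technology is a net energy source only when EROI > 1; the closer
# EROI gets to 1, the less surplus energy is left for society.

def eroi(energy_delivered: float, energy_invested: float) -> float:
    """Energy return on energy invested (a dimensionless ratio)."""
    if energy_invested <= 0:
        raise ValueError("energy_invested must be positive")
    return energy_delivered / energy_invested

def net_energy(energy_delivered: float, energy_invested: float) -> float:
    """Energy left over for society after paying the energy cost."""
    return energy_delivered - energy_invested

# Hypothetical round numbers, in arbitrary energy units:
sources = {
    "hypothetical high-EROI source": (100.0, 2.0),
    "hypothetical marginal source": (100.0, 80.0),
}
for name, (out, inv) in sources.items():
    print(f"{name}: EROI = {eroi(out, inv):.2f}, "
          f"net energy = {net_energy(out, inv):.1f}")
```

The point is that a source with an EROI close to 1 delivers almost no net energy to society, no matter what its price is, which is exactly the information that prices alone cannot convey.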

It may be time to replace the obsolete approach of traditional economics with the more rigorous one of biophysical economics. But it is just not happening. If you look at the number of publications in scientific journals, you see that growth stalled during the past 10 years and is now declining. A search for the term "Biophysical Economics" on Scopus shows not only that growth stopped about 10 years ago, but also that the number of published studies remains small, a minor fraction of the publications in economics.  


Could a small group of dedicated people change this situation? We did our best with "Biophysical Economics and Sustainability," but if you peruse the list of publications, you see that the journal attracted mainly medium-quality, only marginally interesting publications. As a result, it never really impacted the field it was supposed to innovate. 

The main problem was the high cost of publication. If you want your article published in "open access" format in "Biophysical Economics," you have to put $3,390 on the table. That is a lot of money for the strained budget of a scientist who is not part of the global scientific elite. One consequence was that I found myself the editor of a journal in which I could not afford to publish my own research papers (one of the reasons why I resigned). Of course, publishing in the "paywalled" format costs you nothing, but readers will then have to pay about $40 to access your article. That guarantees that nobody will read it unless they have access to an academic library that subscribes to the journal; in that case, the paper may be read by a small number of specialists, but it will have no impact on decision-makers or on a wider circle of scientists. No wonder the journal does not attract high-quality papers. If scientists have a paper they care about and want others to read, they publish it open-access in journals that charge a lower fee, or none. 

Why does a publisher pursue a pricing policy guaranteed to throttle the flow of good papers to death? It is not a bug; it is a feature of the scientific publication process. It is well-known that consumers rely on prices to determine the quality of products. So, by making specific journals very expensive, publishers make them desirable, even though publishing in them means sacrificing a significant fraction of one's research budget. But why don't scientists rebel against this policy? It is because they are embedded in a Nash equilibrium and have no individual advantage in changing the system. 
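The Nash-equilibrium argument can be made concrete with a toy payoff matrix (my own illustrative numbers, not data). Each scientist chooses between an expensive prestige journal and a cheap one; whoever unilaterally switches to the cheap journal loses visibility, so nobody switches, even though everyone would be better off if all did:

```python
# A prisoner's-dilemma-like sketch of academic publishing (hypothetical
# payoffs, chosen only to illustrate the incentive structure).
# Strategies: 0 = expensive prestige journal, 1 = cheap journal.
# payoff[a][b] = (payoff to row player, payoff to column player)

PRESTIGE, CHEAP = 0, 1
payoff = [
    [(2, 2), (4, 1)],  # row plays PRESTIGE
    [(1, 4), (3, 3)],  # row plays CHEAP
]

def is_nash(a: int, b: int) -> bool:
    """True if neither player gains by deviating unilaterally."""
    row_ok = all(payoff[a][b][0] >= payoff[alt][b][0] for alt in (0, 1))
    col_ok = all(payoff[a][b][1] >= payoff[a][alt][1] for alt in (0, 1))
    return row_ok and col_ok

assert is_nash(PRESTIGE, PRESTIGE)   # the status quo is individually stable...
assert not is_nash(CHEAP, CHEAP)     # ...while the cheaper outcome is not,
# even though it would pay everyone more:
assert payoff[CHEAP][CHEAP][0] > payoff[PRESTIGE][PRESTIGE][0]
```

This is why no individual rebellion changes the system: the status quo is individually stable but collectively wasteful, and only a coordinated move by everyone at once could escape it.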

You probably know that "science" is supposed to be made up of disinterested truth-seekers who spend their lives investigating Nature and her ways. It is a good definition if you apply it to what science used to be. At the time of the great pioneers, say, Galileo, Newton, Darwin, and many others, science could change the way we perceived the universe through the work of individuals whose primary tool was a pencil (or a quill). Up to the times of Einstein, Bohr, and Planck, about one century ago, this feature of science had not changed much. 

Of course, no scientist ever worked alone. All of them were part of a network of people who continuously communicated with each other and shared ideas and methods. Newton understood this point perfectly well when he said that he owed his successes to standing "on the shoulders of giants." But science was a peculiar organization: it had no leaders, no governing bodies, no "kings," and no "popes." Some scientists had much more prestige than others, but science was an egalitarian organization in which ideas flowed freely from one scientist to another. In principle, all scientists had the right to propose new ideas and to be heard by their peers. At that time, there was no such thing as the rigid hierarchy of scientific journals that exists nowadays. And journals didn't charge outrageous fees for the privilege of publishing in them.

Allow me to use the term "holobiont" to describe science as a network. A holobiont is a complex system that arises by self-organization based on local interactions. The term is used mainly in biology, but the definition can be extended to human social systems, and science is one example. Up to recent times, science fit the definition of a holobiont exactly: it was a loose network of independent nodes interacting with each other on a basis of near equality.

One characteristic of holobionts as networks is that they can evolve and change. When one element of the network changes, it can transmit the change to all the other elements through a chain reaction of local interactions. This is how new ideas diffused in science: a good idea had a chance to make itself heard and affect the whole network. Of course, it took some time and, usually, the disappearance of an older generation of scientists, but generally, it worked. Just think how quantum mechanics radically changed the very basis of how we understood the nature of matter, around the first decades of the 20th century. It was fiercely resisted at the beginning, but gradually it imposed itself. That radical change took just a few decades to be globally accepted. 
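This "chain reaction of local interactions" can be illustrated with a toy simulation: a minimal sketch of an idea diffusing through a flat peer network, where every node talks only to its immediate neighbors. All names and numbers here are my own, not drawn from any real model of science:

```python
import random

def random_network(n, k, seed=42):
    """Build an undirected network where each node links to ~k random peers."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in rng.sample([x for x in range(n) if x != i], k):
            adj[i].add(j)
            adj[j].add(i)
    return adj

def spread(adj, start, p, seed=0):
    """An idea starts at one node; each local contact convinces a neighbor
    with probability p. Returns how many nodes eventually adopt it."""
    rng = random.Random(seed)
    adopted, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for peer in adj[node]:
            if peer not in adopted and rng.random() < p:
                adopted.add(peer)
                frontier.append(peer)
    return len(adopted)

net = random_network(n=200, k=4)
print("adopters at p=0.5:", spread(net, start=0, p=0.5))
```

In such a flat network, whether an idea takes over depends only on how receptive the individual nodes are, not on the permission of any central authority; a hierarchy, by contrast, lets a single vertex block the chain reaction.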

Things are different now. Nowadays, new ideas struggle to find space in a scientific environment that has become rigid and static. The example of biophysical economics is just one of several cases where new paradigms remain marginalized. That this is a general phenomenon in science can be seen in a recent paper published in Nature. Here are the main results. 



As you see, the innovative content of new papers, measured in terms of the "CD" (consolidating/disruptive) index, has declined over the past 60 years. Even more worrisome is that, despite these data, nobody, anywhere, seems to be publicly arguing that radical changes are needed in science. Nobody wants to rock the boat, fearing they would be the ones dumped overboard. 
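For reference, here is a sketch of how that disruption index is computed, following the Funk and Owen-Smith definition used in the Nature paper, as I understand it. The citation data below are invented for illustration:

```python
# For every later paper i that cites the focal paper or its references:
#   f_i = 1 if i cites the focal paper, b_i = 1 if i cites its references.
#   CD = (1/n) * sum(-2 * f_i * b_i + f_i)
# CD near +1: citers ignore the focal paper's sources (disruptive work);
# CD near -1: citers cite both the focal paper and its sources (consolidating).

def cd_index(focal_refs, citers, focal):
    """focal_refs: references of the focal paper; citers: one set of
    cited works per later paper; focal: identifier of the focal paper."""
    terms = []
    for refs in citers:
        f = 1 if focal in refs else 0
        b = 1 if refs & focal_refs else 0
        if f or b:
            terms.append(-2 * f * b + f)
    return sum(terms) / len(terms) if terms else 0.0

# Toy example: a "disruptive" paper whose citers mostly no longer cite
# its own sources.
focal_refs = {"old1", "old2"}
citers = [{"FOCAL"}, {"FOCAL"}, {"FOCAL", "old1"}]
print(cd_index(focal_refs, citers, "FOCAL"))  # (1 + 1 - 1) / 3, i.e. about 0.33
```

A declining average CD index over decades means that new papers increasingly consolidate their predecessors rather than displace them, which is exactly the stagnation lamented above.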

Now, a fundamental point. All this does not mean that science as we know it is wrong. Science remains grounded on a solid knowledge base built over centuries of hard work. Thermodynamics, quantum mechanics, microbiology, and atmospheric physics are just examples of fields that generated profound and valuable knowledge. Within some limits, they are still generating it. But, recently, science seems to have undergone a process of "hierarchization." Hierarchical structures are rigid. They change only if the central vertex changes. And if the central vertex resists change (as it usually does), the network remains as nimble as a beached whale. Until it rots away. In a certain sense, it was unavoidable. Most human organizations tend to evolve by turning into rigid hierarchies that resist change. 

In the case of science, it was the result of the classic combination of the carrot and the stick. The carrot is the research funding: right now, you can obtain funds for your research only if you follow the extremely detailed rules provided by the funders -- private industries or state agencies. This is why immense efforts are spent searching for solutions for the wrong problems (for instance, creating a "hydrogen economy"). The search for funds is competitive, and you must comply with the rules to ensure you are allowed to continue. 

The other cause of the hierarchization of science is the stick. It is here that science publishers play a fundamental role. This is a subtle point: publishers do not select what is to be published (2). They only select prices. Because publishing is so expensive, only those scientists who control large research grants can publish in the best (i.e., most expensive) scientific journals. That, in turn, ensures they gain more prestige and can access more grants. With more grants, they can publish more papers in high-ranking journals. Scientists outside the inner circle of financing are forced to publish in second- or third-rank journals, where they are marginalized and ignored (3, 4). Innovative work simply cannot climb out of the swamp where it is confined, so it cannot influence the top layer of scientific research. 

So, what is left of science if it cannot produce innovation? Little more than a giant machine dedicated to grinding pure air (or, as we say in Italy, "frying with water"). Little can be done to reform this fossilized structure from the inside. Every attempt to change something is met with a rearranging of the network in such a way as to maintain its earlier structure. It is what happened to "Biophysical Economics and Sustainability": a nice try, but it couldn't have worked. So, the only way to get rid of an ancient hierarchical structure is to let it crash down and then replace it with a new one. It is the mechanism that generates the Seneca Collapse. 

It happens, usually, as the result of an external perturbation that makes it impossible for the network to maintain the links that keep it together. The powers that be could simply decide that they don't need science anymore and cut its financing. A starved holobiont is a dead holobiont, so that would be the end of science as we know it. It is difficult to say what might arise in its place but, in principle, it could be something better than science as we know it today.  

For a while, many of us thought we could find truth in a nearly-deified form of "science," only to discover that all-too-human scientists had corrupted the idea, turning it into a giant circus where funny-looking beasts run and run in a circle, but arrive nowhere. So we remain facing Pilate's question: Τί ἐστιν ἀλήθεια? What is truth? Maybe one day we'll know. 


________________________________________________________________

(1) If you go to the website of "Biophysical Economics and Sustainability," you still find my name among the editors. Sometimes, Springer is as reactive as a sleeping hippo.

(2) Recently, a new trend has been developing in science: classic censorship, in this case taking the shape of the "paper retraction" mechanism. Until recently it was rarely used, but it is becoming more common, as you can see on the "Retraction Watch" site. As a subset of the ordinary "fact checkers" who censor social media, a group of specialized science fact-checkers has appeared, possibly paid by the powers that be. They are engaged in finding mistakes in published papers and then pressing the editors to retract them. In principle, getting rid of the bad papers that survive the often sloppy reviewing mechanism of scientific journals is not a bad idea. In practice, however, it has great potential for direct censorship of politically incorrect results. For example, during the Covid crisis, hundreds of papers on the subject were retracted. There is no doubt that many were bad papers that deserved retraction, but I could tell you stories about a few that were retracted purely for ideological reasons. 

(3) Here is an example of how impermeable the hierarchy of science can be. In 2015, two Turkish physicists, Ibrahim Semiz and Salim Ogur, published a paper exploring the possibility of a Dyson sphere built around a white dwarf star. In 2022, B. Zuckerman of the University of California, Los Angeles, published a paper on the same subject: Dyson spheres around white dwarfs. It was not plagiarism, because the two papers approached the subject in different ways. Still, it is remarkable that Zuckerman did not cite the two Turkish physicists, even though both papers had been posted in the same repository. You can also see the different resonance of the two studies: the paper from California was discussed in the mainstream press, while the Turkish one was ignored. It is the hierarchical structure of science at work: provincial scientists are marginalized. 




(4) Another recent case of censoring innovative ideas is that of a group of Italian scientists, Loredana Frasca, Giuseppe Ocone, and Raffaella Palazzo, who published an article evaluating the cost/benefit ratio of COVID-19 vaccines. They concluded that mass vaccination was not justified in many cases, particularly in view of the adverse effects on people with cardiac issues. The paper generated a strong backlash from their employer, the ISS (Istituto Superiore di Sanità), which officially and publicly castigated them for having said things that the institute's leaders didn't approve of (there was once something called "academic freedom," alas...). The interesting point is that, in the debate that ensued, some scientists took the side of the ISS by arguing that since the paper was published in a second-tier journal (MDPI's "Pathogens"), it just didn't deserve any attention. Now, I can tell you that MDPI may not have the same prestige as "Nature" or "Science," but that doesn't mean the papers it publishes are not good. Snubbing a perfectly valid work based merely on the journal in which it appeared is a good illustration of how elitist science has become.