The Roman Philosopher Lucius Annaeus Seneca (4 BCE-65 CE) was perhaps the first to note the universal trend that growth is slow but ruin is rapid. I call this tendency the "Seneca Effect."

Friday, May 5, 2023

The Rise of Elly Schlein: How a Young, Woke, and Fashionable Politician is Shaking up Politics in Italy, and Perhaps Worldwide

 


Italy has often been a political laboratory that influenced the rest of the world. Just think of Mussolini and, more recently, of how a government led by an obscure bureaucrat named Giuseppe Conte started the trend of nationwide lockdowns, later adopted everywhere in the world. Italy may be a backwater country, but it is a murky pool brewing memetic microbes. Above, you see Ms. Elly Schlein, recently elected secretary of the Italian "Partito Democratico" (PD), as shown in a recent interview in the Italian edition of Vogue magazine. I think you'll hear a lot about this lady in the future.


When Elly Schlein was elected secretary of the Democratic Party (PD) in Italy, two months ago, I thought it was just a desperate attempt to revive a party that had nothing more to say in politics. But I was wrong. Elly Schlein is not the result of the convulsions of a dying organization. She is a major innovation in public relations, designed to revolutionize the Italian, and perhaps the world's, political landscape. 

Until not long ago, politicians tended to project the image of the strong man, the "father of the country" whose decisions were always wise. That's past and gone, perhaps forever. The levers of political power have moved to the obscure lobbies that control governments, while the job of politicians is now mainly to maintain a semblance of popular participation in the governing process. In short, they are all image and no substance.

Ms. Schlein is part of this evolution. She is the spearhead of an innovative PR campaign launched by the PD and their sponsors, and she is using the same strategy that Silvio Berlusconi, the former Italian PM, used for decades: it doesn't matter how many people hate you; what matters is how many people vote for you.

So, Berlusconi targeted the least cultured sections of the Italian population with the personal image of a rich man who could do whatever he wanted. If you are poor, that is a figure you may dream of imitating. Plenty of people hated Berlusconi for his image, but he consistently won elections over a political career spanning decades.

Elly Schlein is doing something similar. She is not trying to appear to her potential voters as "one of us" but, rather, as "what every one of us would like to be," at least for the target she is aiming at: young, left-oriented people in the West. So, she projects her image as young, independent, bisexual, globalist, feminist, and, above all, a successful woman who can manage herself and her sexual preferences the way she wants. Among other things, she had no qualms about disclosing that she employs a "harmochromist," a sort of personal color consultant at EUR 300/hour, to take care of the color combinations of the dresses she wears. In short, the perfect image of "radical chic," now better known under the name of "woke." And the fact that she does not look like a fashion model shows that her success is the result of her skills, not her looks.

The PR strategy of Elly Schlein has been very successful, at least up to now. Huge numbers of "leftists" rushed to their keyboards to defame her on all social media for betraying the working class with her interview with Vogue, her fashionable dresses, and her harmochromist assistant. Remarkably, none of them realized that they were doing exactly what Schlein's PR managers wanted them to do. The managers wanted her to gain the attention of the media and avoid repeating the mistake they had made with the lackluster former secretary, Enrico Letta. These good leftists didn't realize that they were making the same mistake they made with Berlusconi: the more they attacked him, the more they made him popular. Again, it doesn't matter how many people hate you; what matters is how many people vote for you.

Of course, politics is not just a question of physical image; you have to have opinions, programs, and platforms. In this field, Schlein seems to have understood the critical point of modern politics: you may be criticized for what you said, but not for what you didn't say. So, the skill of a modern politician is to be able to speak a lot while saying nothing. Schlein appears to have mastered this skill, at least judging from her recent interview with Vogue Magazine (excerpts in English). If you have ever heard terms such as "cliché fest," "banality bonanza," or "vapid verbiage," consider this article a good example of those concepts. It is all part of the image: it is the way politics works nowadays.

So, I think we are seeing a trend. Note how Schlein's image is remarkably similar to that of the former New Zealand Prime Minister, Jacinda Ardern.


Since politicians are a product, the industry that produces them (the PR industry) tends to imitate and recycle successful products. In a previous post, I noted how the Ukrainian leader Volodymyr Zelensky adopted a dress code very similar to that of the Italian right-wing leader Matteo Salvini. As for Schlein and Ardern, note how both women have relatively elongated faces, a feature often associated with a "masculine" appearance. These ladies tend to project an image of independence, self-reliance, and assertiveness. At present, there is no exact equivalent in the US political landscape, although Alexandria Ocasio-Cortez has some elements of similarity with them. Perhaps the US politician who most resembles Schlein is Barack Obama, at least in the sense of being another expert in talking a lot without saying much.

My impression is that, starting from Italy, this kind of heavily promoted female political figure may soon spread all over the Western world. Not that anything will change; we'll just have "front persons" rather than "front men" at the top. And we keep marching toward the future, whatever it may be.

__________________________________________________________

As a further note, here is Schlein's adversary in Italy, Giorgia Meloni, leader of the right. 


She is a more traditional kind of politician: a classic "populist." She is aggressive and outspoken, but overall she projects a more "feminine" image than Schlein, and it would be hard to imagine her employing a personal harmochromist. My impression is that one of the purposes of the creation of Elly Schlein's image was to prepare an anti-Meloni memetic weapon. In my opinion, if push comes to shove, Schlein will easily trash Meloni by making her look like a fruit vendor in a provincial market. But that remains to be seen.

Monday, May 1, 2023

When Science Fails: Surrogate endpoints and wrong conclusions

 


Galileo Galilei and Anthony Fauci are linked to each other by a chain of events that started at the beginning of modern science, during the 17th century. But the Science that Fauci claimed to represent is very different from that of Galileo. While Galileo studied simple linear systems, modern science attempts to study complex, multi-parameter systems, where the rigid Galilean method just cannot work. The problem is that, while it is obvious that we can measure only what we can measure, that's not necessarily what we want, or need, to measure. Tests based on "surrogate endpoints" may well be the best we can do in medicine and other fields, but we should understand that the results are not, and cannot be, a source of absolute scientific truth.


The Scientific Method

Galileo Galilei is correctly remembered as the father of modern science because he invented what we call today the "scientific method," sometimes still called the "Galilean method." It is supposed to be the basis of modern science, the feature that entitles it to be called "Science" with a capital "S," as we were told over and over during the Covid pandemic. But what really is this scientific method that's supposed to lead us to the truth?

Galileo's paradigmatic idea was an experiment about the speed of falling objects. It is said that he took two solid metal balls of different weights and dropped them from the top of the Pisa Tower. He then noted that they arrived at the ground at about the same time. That allowed him to lampoon an ancient authority such as Aristotle for having said that heavier objects fall faster than lighter ones (*). There followed an avalanche of insults to Aristotle that continues to this day. Even Bertrand Russell fell into the trap of poking fun at Aristotle, accused of having said that women have fewer teeth than men. Too bad that he never said anything like that.

It may well be that Galileo was not the first to perform this experiment, and it is not even clear that he actually performed it, but that's a detail. The point is that the result was evident, clear-cut, and irrefutable. Later, Newton started from this result to arrive at the assumption that the same force that acted on an apple falling from a tree in his garden was acting on the Moon and the planets. From then on, science was supposed to be largely based on laboratory experiments or, in any case, experiments performed in tightly controlled conditions. It was a major change of paradigm: the basis of the scientific method as we understand it today.

The Pisa Tower experiment succeeded in separating the two parameters that affect a falling body: the force of gravity and the air drag. That was relatively easy, but what about systems that have many parameters affecting each other? Here, let me start with the case of health care, which is supposed to be a scientific field, but where the problem of separating the parameters is nearly impossible to overcome.


The surrogate endpoint in medicine

How can you apply the scientific method in medicine? Dropping a sick person and a healthy one from the top of the Pisa Tower won't help you much. The problem is the large number of parameters that affect the nebulous entity called "health" and the fact that they all strongly interact with each other.

So, imagine you were sick, and then you feel much better. Why exactly? Was it because you took some pills? Or would you have recovered anyway? And can you say that you wouldn't have recovered faster had you not taken the pill? A lot of quackery in medicine arises from these basic uncertainties: how do you determine the specific cause of a certain effect? In other words, is a certain medical treatment really curing people, or is it just their imagination that makes them think so?

Medical researchers have worked hard at developing reliable methods for drug testing, and you probably know that the "gold standard" in medicine is the "Randomized Controlled Trial" (RCT). The idea of RCTs is that you test a drug or a treatment by keeping all the parameters constant except one: taking or not taking the drug. The design is meant to control for the effect called "placebo" (the patient gets better because she believes that the drug works, even though she is not receiving it) and the one called "nocebo" (the patient gets worse because he believes that the drug is harmful, even though he is not receiving it).

An RCT involves a complex procedure that starts with separating the patients into two similar groups, making sure that none of them knows to which group she belongs (the test is "blinded"). Then, the members of one of the two groups are given the drug, say, in the form of a pill. The others are given a sugar pill (the "placebo"). After a certain time, it is possible to examine whether the treatment group did better than the control group. Statistical methods are used to determine whether the observed differences are significant or not. Then, if they are, and if you did everything well, you know whether the treatment is effective, does nothing, or maybe causes bad effects.
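
As an illustration of that last step, here is a minimal sketch, in Python, of the statistical comparison at the end of a hypothetical two-arm trial. The group sizes, recovery rates, and the "recovered" outcome are all invented for the example; the point is only to show what "statistically significant" means in practice.

    # A minimal sketch of the final statistical comparison in an RCT.
    # All numbers here (group sizes, recovery rates) are invented for illustration.
    import numpy as np
    from scipy.stats import fisher_exact

    rng = np.random.default_rng(42)

    n_per_arm = 200                      # hypothetical patients per arm
    p_treatment, p_placebo = 0.60, 0.50  # hypothetical true recovery rates

    # Simulate how many patients recover in each arm
    recovered_treatment = rng.binomial(n_per_arm, p_treatment)
    recovered_placebo = rng.binomial(n_per_arm, p_placebo)

    # 2x2 table: rows = arm, columns = (recovered, not recovered)
    table = [
        [recovered_treatment, n_per_arm - recovered_treatment],
        [recovered_placebo, n_per_arm - recovered_placebo],
    ]

    # Fisher's exact test asks: how likely is a difference at least this large
    # if the drug actually did nothing? A small p-value is what "statistically
    # significant" means in the text above.
    _, p_value = fisher_exact(table)
    print(f"treatment: {recovered_treatment}/{n_per_arm} recovered")
    print(f"placebo:   {recovered_placebo}/{n_per_arm} recovered")
    print(f"p-value:   {p_value:.3f}")

Note that everything upstream of this comparison, including the choice of what counts as "recovered," is exactly where the surrogate-endpoint problem discussed below comes in.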

For limited purposes, the RCT approach works, but it has enormous problems. A correctly performed RCT is expensive and complex, its results are often uncertain and, sometimes, turn out to be plain wrong. Do you remember the case of "Thalidomide"? It was tested, found to work as a tranquilizer, and approved for general use in Europe in the late 1950s. It was later discovered that it had teratogenic effects on fetuses, and some 10,000 babies in Europe were born without arms and legs before the drug was removed from the market. Tests on animals would have shown the problem, but they were not performed, or were not performed correctly.

Of course, the rules were considerably tightened after the Thalidomide disaster and, nowadays, testing on animals is required before a new drug is tested on humans. But let's note, in passing, that in the case of the mRNA Covid vaccines, tests on animals were performed in parallel with (and not before) testing on humans. This procedure exposed volunteers to risks that would normally not be considered acceptable in drug testing. Fortunately, it does not appear that mRNA vaccines have teratogenic effects.

Even assuming that the tests are complete and performed according to the rules, there is another gigantic problem with RCTs: what do you measure during the test? Ideally, drugs are aimed at improving people's health, but how do you quantify "health"? There are definitions of health in terms of indices called QALY (quality-adjusted life years) or QoL (quality of life). But both are difficult to measure and, if you want long-term data, you have to wait for a long time. So, in practice, "surrogate endpoints" are used in drug testing.

A surrogate endpoint aims at defining measurable parameters that approximate the true endpoint -- a patient's health. A typical surrogate endpoint is, for instance, blood pressure as an indicator of cardiovascular health. The problem is that a surrogate endpoint is not necessarily a good proxy for a person's health, and you always face the possibility of negative effects. In the case of drugs used to treat hypertension, negative effects exist and are well known, but it is normally believed that the positive effects of the drug on the patient's health outweigh the negative ones. But that's not always the case. A recent example is how, in 2008, the drug bevacizumab was approved in the US by the FDA for the treatment of breast cancer on the basis of surrogate endpoint testing. It was withdrawn in 2011, when it was found that its toxicity was not balanced by any improvement in patients' survival (you can read the whole story in "Malignant" by Vinayak Prasad).

Consider now another basic problem. Not only are the parameters affecting people's health many, but they also strongly interact with each other, as is typical of complex systems. The problem may take the form called "polydrug use," and it especially affects old people, who accumulate drugs on their nightstands just like old cars accumulate dents on their bodies. An RCT that evaluates one drug is already expensive and lengthy; evaluating all the possible combinations of several drugs is a nightmare. If you have two drugs, A and B, you have to go through at least three tests: A alone, B alone, and the combination A+B. If you have three drugs, you have seven tests to do (A, B, C, AB, BC, AC, and ABC). And the numbers grow rapidly. In practice, nobody knows the effects of these multiple drug uses, and, likely, nobody ever will. But a common observation is that when the elderly reduce the number of medicines they take, their health immediately improves (this effect is not validated by RCTs, but that does not mean it is not true; I noticed it with my mother-in-law, who died at 101).
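
Just to make the "numbers grow rapidly" point concrete, here is a small Python sketch that counts the trials needed if every non-empty combination of n drugs had to be tested separately, as in the A, B, A+B example above. The count is 2^n - 1, and the drug names are, of course, just placeholders.

    # Counting the trials needed to test every non-empty combination of n drugs.
    # The drug names are placeholders; the point is the 2**n - 1 growth.
    from itertools import combinations

    def combinations_to_test(drugs):
        """Return every non-empty subset of the given list of drugs."""
        subsets = []
        for size in range(1, len(drugs) + 1):
            subsets.extend(combinations(drugs, size))
        return subsets

    for n in range(2, 8):
        drugs = [chr(ord("A") + i) for i in range(n)]
        print(f"{n} drugs -> {len(combinations_to_test(drugs))} separate trials")

    # 2 drugs -> 3, 3 -> 7, 4 -> 15, ... and 10 drugs would already need 1023
    # trials, before even considering doses, timing, or patient subgroups.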


The case of Face Masks 

Some medical interventions have specific problems that make RCTs especially difficult. An example is that of face masks to prevent the spread of an airborne pathogen. Evidently, there is no way to perform a blind test with face masks, but the real problem is what to use as a surrogate endpoint. At the beginning of the Covid pandemic, several studies were performed using cameras to detect the liquid droplets emitted by people breathing or sneezing with or without face masks. That was a typical "Galilean," laboratory approach, but what does it demonstrate? Assuming that you can determine whether, and by how much, a mask reduces the emission of droplets, is this relevant in terms of stopping the transmission of an airborne pathogen? As a surrogate endpoint, droplets are poor at best and misleading at worst.

A much better endpoint is the PCR (polymerase chain reaction) test, which can directly detect an infection. But even here, there are many problems. As an example, consider an often-touted study performed in Pakistan that claimed to have demonstrated the effectiveness of face masks. Let's assume that the results of the study are statistically significant (really?) and that nobody tampered with the data (and we can never be sure of that in such a heavily politicized matter). Then, the best you can say is that if you live in a village in Pakistan, if there is a Covid wave ongoing, if the PCR tests are reliable, if the people who wear masks behave exactly like those who don't, and if random noise didn't affect the study too much, then by wearing a mask you can delay being infected for some time, and maybe even avoid infection altogether. Does the same result apply to you if you live in New York? Maybe. Is it valid for different conditions of viral diffusion and epidemic intensity? Almost certainly not. Does it ensure that you don't suffer adverse effects from wearing face masks? Duh! Would it make you healthier in the long run? We have no idea.

The Pakistan study is just one example of a series of studies on face masks that were found to be ill-conceived, poorly performed, inconclusive, or useless in a recent rigorous review published by the Cochrane Collaboration. The final result is that no one has been able to detect a significant effect of face masks on the diffusion of an airborne disease, although we cannot say that the effect is actually zero.

The confusion about face masks reached stellar levels during the COVID-19 pandemic. In 2020, Tony Fauci, director of the NIAID, first advised against wearing masks; then he reversed his position and publicly declared that face masks are effective, and even that two masks are better than one. Additionally, he declared that the effectiveness of masks is "science" and, therefore, cannot be doubted. But, nowadays, Fauci has reversed his position again, at least in terms of mask effectiveness at the population level. He still maintains that they can be useful for an individual "who religiously wears a mask." Now, imagine an RCT dedicated to demonstrating the different outcomes of "religiously" and "non-religiously" wearing a mask. So much for science as a pillar of certainty.


Surrogate endpoints everywhere

Medicine is a field that may be defined as "science" since it is based (or should be based) on data and measurements. But you see how difficult it is to apply the scientific method to it. Other fields of science suffer from similar problems. Climate science, ecosystem science, biological evolution, economics, management, policy, and others are cases in which you cannot reproduce the main features of the system in a laboratory and which, at the same time, involve a large number of parameters interacting with each other in a non-linear manner. You could say, for instance, that the purpose of politics is to improve people's well-being. But how could that be measured? In general, it is believed that the Gross Domestic Product (GDP) is a measure of the well-being of the economy and, hence, of all citizens. Then, it is concluded that economic growth is always good and that it should be stimulated by all possible interventions. But is it true? GDP growth is another kind of surrogate endpoint, used simply because we know how to measure it. But people's well-being is something that we don't know how to measure.

Is a non-Galilean science possible? We have to start considering this possibility without discarding the need for good data and good measurements. But, for complex systems, we have to move away from the rigid Galilean method and use dynamic models. We are moving in that direction, but we still have a lot to learn about how to use models and, incidentally, the Covid-19 pandemic showed us how models can be misused and lead to various kinds of disasters. But we need to move on, and I'll discuss this matter in detail in an upcoming post.


______________________________________________

(*) From Aristotle's "Physics" (Book VIII, Chapter 10), where he discusses the relationship between the weight of an object and its speed of fall:

"Heavier things fall more quickly than lighter ones if they are not hindered, and this is natural, since they have a greater tendency towards the place that is natural to them. For the whole expanse that surrounds the earth is full of air, and all heavy things are borne up by the air because they are surrounded and pressed upon by it. But the air is not able to support a weight equal to itself, and therefore the heavier bodies, as having a greater proportion of weight, press more strongly upon and sink more quickly through the air than do the lighter bodies."





  




Friday, April 28, 2023

America, the Collapsed. Why Mommy Knows Best

 


I am still reading paper books. I think they favor concentration in a way that on-screen books cannot provide. Books are also one of the few places left in the memesphere where you can say what you think without being attacked by a band of zombies or censored by brain-drained idiots. So, during the past few years, I have been reading a series of books, most of which, I think, deserve a comment. And I'll try to publish a few; not really book reviews, but just ideas derived from the books I read.


Why do people do things that harm themselves? It is a good question that nobody seems to know how to answer -- yet, it happens all the time. Scott Erikson tackles the question in his book, "Mommy, Why did America Collapse?" A nice addition to the catastrophist's bookshelf. 

Erikson writes in a light mood; his analyses do not pretend to be quantitative models of the collapse of complex systems, but he tells a fascinating tale in the form of a bedtime dialog between a mother and her daughter. The fascination is not in the story itself: for those of us who are hard-core catastrophists, there is not much that we didn't know before (mommy is one of us!). For those who are not catastrophists, the book will probably fail to be convincing.

Yet, the problem is there, and many past stories show exactly how people can and do harm themselves by doing things that may appear intelligent in the short term but are lethal in the long term. Throwing oneself naked into a thornbush to collect berries is a paradigmatic example but, in a more pedantic kind of analysis, Ilaria Perissi and I noted in our book "The Empty Sea" how the fishing industry has consistently destroyed the very fish stocks it lives on. How could it happen? We cite several examples of how people simply trade long-term survival for short-term gains. It is the way the human mind works. We need just one word to describe it: "greed."

That's exactly the point that Erikson makes over and over in his book. It is not so much an analysis of the reasons why America did (or will) collapse as of how a profound failure in communication caused the collapse. More than a problem of communication, it is a problem of trust. A mother can say things at bedtime that her daughter has no reason to mistrust (unless mommy wants to sell the munchkin to the bad witch for cooking in a cauldron).

But, apart from bedtime stories, we are always dealing with people who are trying to sell us to the canned meat industry -- often figuratively, sometimes perhaps even for real. So, if there is no trust, there is no truth. And if there is no truth, there is nothing at all. Only noise. And who has the time for bedtime stories, anyway?

No wonder America Collapsed.


Monday, April 24, 2023

Another Epochal Failure of Humankind. How people are losing interest in the things that threaten them the most.

 



In 2018, I published on my "Cassandra's Legacy" blog a post titled, "Why, in a Few Years, Nobody Will be Talking About Climate Change Anymore." It turned out to be remarkably prophetic. 


From Gallup News, April 17, 2023

The percentages of Americans expressing a great deal of worry about air pollution and the loss of tropical rain forests have each fallen seven points since 2022, while worry about extinction of plant and animal species has declined five points, and the pollution of natural waterways and global warming or climate change are down four points each. Meanwhile, last year’s 57% high-level worry about polluted drinking water is statistically similar to this year’s 55%. Each of the current readings is at or tied for its lowest point since 2015 or 2016.

People are losing interest in the things that threaten them the most. This Gallup report is not the only evidence. News and comments about climate change, pollution, resource depletion, and the like have disappeared from the major news channels, as I had predicted in the post I published in 2018.

Sometimes, I am scared of my own predictions, but this one requires some corrections to my interpretation. When I published my post, in 2018, I was convinced that the decline in interest in environmental matters was mostly created by governments applying the propaganda technique called "deception by omission." That is, governments actively intervened to keep these issues away from the news. 

Nowadays, I think that active omission is just one of the causes of the trend. Another one, probably more important, is the current economic situation. People's everyday life is becoming more and more difficult in terms of money, health, safety, and sheer survival. Most of us are unable to link our individual troubles with large-scale planetary problems. And, even if we were able to, it would be correct to conclude that there is nothing we can do about these problems. So, why worry about things we can't change? 

That generates a negative loop: if a subject is not interesting to people, then the media tend to ignore it. If the media ignore a subject, then it becomes less and less interesting. So, when Gallup interviews people, they answer that they don't see the big problems as worrisome. There is little need for governments to intervene to keep some problems hidden: they tend to disappear from the news by themselves.

The apathy of the public is just one of the facets of the way the perception of global problems has evolved. An even worse side of the problem is how the collapse of the prestige of science during the Covid epidemic has led to a disastrous loss of trust in anything that has to do with science. It is true that science has become corrupt, biased, elitist, unable to innovate, a tool for scaring people, and more of the same. But it is still a shock to see people whom I considered intelligent and competent in their fields going into a full-spectrum refusal of anything that has to do with "official" science.

Plenty of people I know refuse to accept even simple things that could help them in their everyday life. Insulate my apartment? It is part of a plan to dispossess the middle class of their homes. Install PV panels on my roof? They require more energy than they will ever produce. Replace my gas stove with an induction one? They want us to starve! Reduce traffic pollution? It is a trick to take our cars away from us. Save energy? It is because they want to enslave us!

To say nothing of those who went full bonkers with the idea that the Moon landing never happened, chemtrails are real, climate change is a hoax, the virus does not exist, oil and gas are infinite, and it is all a plot to exterminate humankind. That kind of thing.

Depressing? Of course, it is depressing. The term "depressing," alone, may not be adequate (click here for 307 synonyms of "depressing" according to the Merriam-Webster dictionary). Call it what you like, but the fact is that in three years of Covid panic, we lost 50 years of efforts to convince people to do something about keeping this poor planet (and us, living on its surface) in decent conditions.

Perhaps it was unavoidable, but the question is,



Friday, April 21, 2023

A new dichotomy: nation states vs. the "one planet" movement







A new dichotomy is emerging in the current debate: the contrast between the view of the world as composed of nation-states in relentless competition with one another and the "one planet" movement, which emphasizes human solidarity. These two trends are on a collision course. Up to now, the "one planet" approach seemed to be the only chance to set up an effective strategy against the degradation of the planetary ecosystem. But now, it seems that we'll have to adapt to the new vision that's emerging: for good or for bad, the world is back in the hands of nation-states, with all their limits and their idiosyncrasies, including their large-scale homicidal tendencies. Can we find survival strategies with at least a fighting chance to succeed? Daniele Conversi, researcher at the University of the Basque Country, has been among the first to pose the question in his recent book "Cambiamenti Climatici" (UB).




By Daniele Conversi (University of the Basque Country)




The climate crisis is one of the nine "planetary boundaries" identified in the Earth sciences since 2009. The critical threshold for climate change (350 ppm of CO2) has already been passed. Other boundaries, such as biodiversity loss, have been overrun, and we are approaching other critical thresholds as well. The global environmental crisis signals the likely entrance into the most turbulent period in human history, requiring unprecedented creativity, force, and adaptive skills to act quickly and radically in order to curb the global crisis. But what are the main obstacles in front of us?

Historians of science and investigative journalists, as well as some social and political scientists, have studied in detail the way the fossil fuel lobbies hampered governmental action via disinformation, misinformation, and the "denial industry." However, these studies do not generally consider in detail the institutional scenario in which lobbyists act, namely the nation-state.

My research explores a different set of variables originating in the current division of the world into nation-states powered by their own ideology, nationalism. In an age in which boundaries cannot halt climate change, nationalism fully engages in erecting ever higher boundaries.

Thus, we need to ask: if nationalism is the core ideological framework around which contemporary political relations are articulated, is it possible to involve it in the fight against climate change? I explore this question via a few case studies arising within both stateless nations and nation-states. Riding the wave of nationalism, however, only makes sense if, at the same time, non-national solutions are also considered, as condensed in the concept of "survival cosmopolitanism": effective results can only be achieved by considering the plurality of possible solutions and avoiding fideistic responses such as "techno-fixes" centred on a magic-irrational faith in technological innovation as the ultimate Holy Grail, which can easily be appropriated by nationalists. Salvation may come not so much from technology as from the abandonment of an economic system ruthlessly based on environmental destruction and the expansion of mass consumption.

Monday, April 17, 2023

One step away from the Library of Babel: How Science is Becoming Random Noise


It is said that if you have a monkey pounding at the keys of a typewriter, by mere chance, it will eventually produce all the works of Shakespeare. The Library of Babel, a story by Jorge Luis Borges, is another version of the same idea: a nearly infinite repository of books formed by all the possible combinations of characters. Most of these books are just random combinations of characters that make no sense but, somewhere in the library, there is a book unraveling the mysteries of the universe and the secrets of creation, and providing the true catalog of the library itself. Unfortunately, this book is impossible to find and, even if you could find it, you would not be able to distinguish it from the infinite number of books that claim to be it but are not.

The Library of Babel (or a large number of typing monkeys) may be a fitting description of the sad state of "science" as it is nowadays: an immense machine that mostly produces nonsense and, perhaps, a few gems of knowledge that are, unfortunately, nearly impossible to find.

Below, I am translating a post that appeared in Italian on the "Laterum" ("bricks") blog, signed by "Birbo Luddynski." Before getting to the text, a few comments of mine. Obviously, "science" is a huge enterprise formed by, maybe, 10 million scientists. There exist a large number of different fields, different cultures, and different languages, and these differences surely affect the way science works. So, you have to take the statements by Mr. "Luddynski" with some caution. The way he describes science is approximately valid for the Western world, and Western scientific rules are spreading to the rest of the world, just like McDonald's fast food joints do. Today, if it is not Western Science, it is not science -- but is Western Science really science?

Mr. Luddynski is mostly correct in his description, but he is missing some facets of the story that are even more damning than the ones he covers. For instance, in his critique of science publishing, he does not mention that the scientists working as editors are paid by the publishers. So, they have an (often undeclared) conflict of interest in supporting a huge organization that siphons public money into the pockets of private companies.

On the other hand, Luddynski is too pessimistic about the capability of individual scientists to do something good despite all the odds. In science, there holds the general rule that things done illegally are done most efficiently. So, scientists must obey the rules if they want to be successful but, once they attain a certain degree of success, they can bend the rules a little -- sometimes a lot -- and try to rock the boat by doing something truly innovative. It is mainly in this way that science still manages to progress and produce knowledge. Some fields, like astronomy, artificial intelligence, ecosystem science, biophysical economics, and several others, are alive and well, only marginally affected by corruption.

Of course, the bureaucrats who govern these things are working hard to eliminate all the spaces where creative escapades are possible. Even assuming that they won't be completely successful, there remains the problem that an organization that works only when it ignores its own rules is hugely inefficient. For the time being, the public and the decision-makers haven't yet realized what kind of beast they are feeding, but certain things are percolating outside the ivory tower and are becoming known. Mr. Luddynski's paper is a symptom of the gradual dissemination of this knowledge. Eventually, the public will stop being mesmerized by the word "science" and may want something done to make sure that their tax money is spent on something useful rather than on a prestige competition among rock-star scientists.

Here is the text by Luddynski. I tried to translate it into English as best I could, maintaining its ironic and scathing (and a little scatological) style.


Science is a Mountain of Shit

by Birbo Luddynski - Feb 8, 2023

(translated by Ugo Bardi)

Foreword

Science is not the scientific method. "Science" - capitalized and in quotation marks - is now an institution, stateless and transnational, that has fraudulently appropriated the scientific method, made it its exclusive monopoly, and uses it to extort money from society - on its own or on behalf of third parties - after proclaiming itself the new Church of the Certification of Truth [1].

An electrician trying to locate a fault or a cook aiming to improve a recipe is applying the scientific method without even knowing what it is, and it works! It has worked for thousands of years, since long before anyone codified its algorithm, and it will continue to do so, despite the modern inquisitors.

This writer is not a scholar; he does not cite sources, out of ideological conviction, but he learned at an early age how to spell "epistemology." He then wallowed for years in the sewage of the aforementioned crime syndicate until, without ever managing to get used to the mephitic stench, he found an honorable way out. Popper, Kuhn, and Feyerabend were all thinkers who fully understood the perversion of certain institutional mechanisms, but they could not even have imagined that the rot would run so unchecked and tyrannical through society, to the point where a scientist could achieve the power to prevent you from leaving your house, or from owning a car, or to force you to eat worms. Except for TK: he had it all figured out, but that is another matter.

This paper will not discuss the role that science plays in contemporary society, its gradual transformation into a cult, or the gradual erosion of spaces of freedom and civic participation in its name. It will not discuss its relationship with the media and the power they have to pass off the rants of a mediocre, bumptious professor as established truths. Thus, there will be no talk about the highest offices of the state declaring war on anti-science, or declaring victory at plebiscites that were never called, where people are taken to the polls by blackmail and thrashing.

There will be no mention of how "real science" - that is, the science that respects the scientific method - is being raped by the international virologists and the scientific and technical committees of the world, who make incoherent decisions, literally at the drop of a hat, and demand that everyone comply without question.

Nor will we discuss the tendency to engage in sterile Internet debates about the article that proves us right in the Lancet or the Brianza Medical Journal or the Manure Magazine. Nor the obvious tendency to cherry-pick by the deboonker of the moment. "Eh, but Professor Giannetti of Fortestano is a known waffler, lol." The office cultists of the ipse dixit are now meticulous enforcers of a strict hierarchy of sources and opinions, based on qualitative prestige rankings that are as improbable as they are supposedly rigorous or, worse, on quantitative bibliometric indices that are as improbable as they are arbitrary.

Here you will be told why science is not what it says it is and precisely why it has assumed such a role in society.

Everything you will find written here is widely known and documented in books, long-form articles, popular weeklies, and even peer-reviewed journals (LOL). Do a Google search for terms such as "publish or perish," "reproducibility crisis," "p-hacking," "publication bias," and dozens of other related terms. I don't provide any sources because, as I said, it is contrary to my ideology. The added value of what I am going to write is an immediate, certainly partisan, and uncensored description of the obscene mechanisms that pervade the entire scientific world, from Ph.D. student recruitment to publications.

There is also no "real science" to be saved, as opposed to "lasciviousness." The thesis of this paper is that science is structurally corrupt and that the conflicts of interest that grip it are so deep and pervasive that only a radical reconstruction - after its demolition - of the university institution can save the credibility of the small circle of scholars who are trying, painstakingly, honestly, and humbly, to add pieces to the knowledge of Creation.

The scientist and his career

"Scientist" today means a researcher employed in various capacities at universities, public or private research centers, or think tanks. That of the scientist is an entirely different career from that of any worker and is reminiscent of that of the cursus honorum of the Roman senatorial class. Cursus honorum my ass. A scientist's CV is different from that of any other worker. Incidentally, the hardest thing for a scientist is to compile a CV intelligible to the real world, should he or she want to return to earning an honest living among ordinary mortals. Try doing a Europass after six years of post-doc. LOL.

The scientist is a model student, first throughout his or her school career and later in college. Model student means disciplined, with grades in the top percentiles of standardized tests, and with good interpersonal skills. He manages to get noticed during the last years of university (the so-called "undergraduate" years which, strangely enough, in Italy are called the magistrale or specialistica or whatever the fuck it is called nowadays), until he finds a recommendation (in the bad sense of the term) from a professor to enroll in a Ph.D. program, or doctorate, as it is called outside the Anglosphere. The Ph.D. is an elite university program, three to six years long, depending on the country and discipline, where one is trained to be a "scientist."

During the first few years of the Ph.D., the would-be scientist takes courses at the highest technical level, taught by the best professors in his department. These are complemented by an initial phase of initiation into research, in the form of becoming a "research assistant," or RA. This phase, which marks the transition from the "established knowledge" of textbooks to the "frontier of scientific research," takes place explicitly under the guidance of a professor-mentor, who will most likely follow the doctoral student through the remainder of the doctoral course, and almost always throughout his or her career. The education of crap. Not all doctoral students will, of course, be paired with a "winning" mentor; only those who are more determined, ambitious, and show vibrancy in their courses.

During this RA phase, the student is weaned onto the real practices of scientific research. It is a really crucial phase; it is the time when the student goes from the Romantic-Promethean-Faustian ideals of discovering the Truth to nights spent fiddling with data that make no sense, amid errors in the code and general bewilderment about what it all means. Why am I doing all this? Why do I have to find just THAT result? Doesn't this go against all the rationalist schemata I was sharing on Facebook two years ago when I was pissing off the flat-earthers on duty? What do you mean, if the estimate does not support the hypothesis, then I have to change the specification of the model? It is at this point that the young scientist usually asks the mentor some timid questions with a vague "epistemological" flavor, which are usually evaded with positivist browbeating about science proceeding by trial and error, or with a speech that sounds something like this: "You're going to get it wrong. There are two ways of not understanding: not understanding while trying to understand, or not understanding while pissing off others. Take your pick. In October, your contract runs out."

The purest, at this point, fall into depression, from which they will emerge by finding an honest job, with or without a piece of paper. The careerists, psychopaths, and naive fachidioten, on the other hand, will pick up speed and run along the tracks like bullet trains, unstoppable in their blazing careers devoted to prestige.

After the first phase as an RA, during which the novice contributes to the mentor's publications - with or without being listed among the authors - he or she moves on to the next, more mature phase: building his or her own research pipeline, collaborating as a co-author with the mentor and the mentor's other co-authors, attending conferences, and weaving a relational network with other universities. The culmination of this phase is the attainment of the Ph.D. degree, which, at this point, is nothing more than a formality, since the doctoral student's achievements speak for themselves. The "dissertation," or "defense," in fact, will be nothing more than a seminar where the student presents his or her major work, which is already publishable.

But now the student is already launched on a meteoric path: depending on discipline and luck, he will have already signed a contract as an "assistant professor" in a "tenure track," or as a "post-doc." His future career will thus be determined solely by his ability to choose the right horses, that is, the research projects to bet on. Indeed, he has entered the hellish world of "publish or perish," in which his probability of being confirmed "for life" depends solely on the number of articles he manages to publish in "good" peer-reviewed journals. In many hard sciences, the limbo of post-docs, during which the scientist moves from contract to contract like a wandering monk, unable to experience stable relationships, can last as long as ten years. The "tenure track," or the period after which they can decide whether or not to confirm you, lasts more or less six years. A researcher who manages to become an employee in a permanent position at a university before the age of 35-38 can consider himself lucky. And this is by no means a peculiarity of the Italian system.

After the coveted "permanent position" as a professor, can the researcher perhaps relax and get down to work only on sensible, quality projects that require time and patience and may not even lead anywhere? Can he work on projects that may even challenge the famous consensus? Does he have time to study so that he can regain some insight? Although the institution of "tenure," i.e., the permanent faculty position, was designed to allow just that, the answer is no. Some people will do it anyway, aware that they will end up marginalized and ignored by their peers, both in the department and outside. They are not fired, but they end up in the last room in the back by the toilet, see not a penny of research funding, are not invited to conferences, and are saddled with the lamest Ph.D. students. If such a professor is liked, he is treated as an eccentric uncle and good-naturedly teased; if disliked, he is first hated and then ignored.

But in general, if you've survived ten years of publish or perish -- and not everyone can -- it's because you like doing it. So you will keep doing it, partly because the rewards are coveted. Salary increases, committee chairs, journal editor positions, research funds, consultancies, invitations to conferences, million-dollar grants as principal investigator, interviews in newspapers and on television. Literally, money and prestige. The race for publications does not end; in fact, at this point, it becomes more and more ruthless. And the scientist, now a de facto manager of research, with dozens of Ph.D. students and postdocs available as workforce, will increasingly lose touch with the subject of research to become interested in academic politics, money, and public relations. It will be the doctoral student now who will "get the code wrong" for him until he finds the desired result. He won't even have to ask explicitly, in most cases. Nobody will notice anyway, because nobody gives a damn. But what about peer review?

Peer review

Thousands of pages have been written about this institution, its merits, its flaws, and how to improve it. It has been ridiculed and trolled to death, but the conclusion recited by the standard bearers of science is always the same: it is the best system we have, and we should keep it. Which is partly true, but its supposed aura of infallibility is the main cause of the sorry state the academy is in. Let's see how it works.

Our researcher Anon has his nice pdf written in LaTeX, titled "The Cardinal Problem in Manzoni: is it Betrothed or Behoofed?" (boomer quote), and submits it to the Journal of Liquid Bullshit via its web platform. The JoLB chief editor takes the pdf, gives it a quick look, and decides which associate editor to send it to, depending on the topic. The chosen editor takes another quick look and decides whether to 1) send a short letter telling Anon that his work is definitely nice and interesting but, unfortunately, not a good fit for the Journal of Liquid Bullshit, and recommending similar outlets such as the Journal of Bovine Diarrhea or the Liquid Bovine Stool Review, or 2) send it to the referees. But let's pause for a moment.

What is the Journal of Liquid Bullshit?

The Journal of Liquid Bullshit is a "field journal" within the general discipline of "bullshit" (Medicine, Physics, Statistics, Political Science, Economics, Psychology, etc.), dealing with the more specialized field of Liquid Bullshit (Virology, Astrophysics, Machine Learning, Regime Change, Labor Economics, etc.). It is not a glorious journal like Science or Nature, and it is not one of the prestigious journals of the discipline (the Lancet, JASA, AER, etc.). But it is a good journal in which important names in the field of liquid crap nonetheless publish regularly. In contrast to the more prestigious ones, which are generally linked to specific bodies or university publishers, most of these journals are published (online, that is) by for-profit publishers, who make a living by selling expensive subscription packages to universities.

Who is the chief editor? The editor is a prominent personality in the Liquid Bullshit field: definitely a Full Professor, at least 55 years old, with dozens of prominent publications, highly cited, and a keynote speaker at several annual conferences of the American Association of Liquid Bovine Stools. During conferences, he constantly has a huddle of people around him.


Who are the Associate editors?

They are associate or full professors with numerous publications, albeit relatively young. Well connected, well launched. The associate editor's job is to manage the manuscript at all stages, from choosing the referees to deciding about revisions. The AE is the person who has all the power over the paper: not only, of course, the final decision, but also the choice of referees.


Who are the referees?

Referees are academic researchers formally contacted by the AE to evaluate the paper and write a short report. Separately, they also send a private recommendation to the AE on the fate of the paper: acceptance, revision, or rejection. The work of the referees is unpaid; it is time taken away from research or, very often, from free time. The referee receives only the paper; he or she has no way to evaluate the data or the code, unless (which is very rare) the authors make them available on their personal web pages before publication. In any case, it is unlikely that the referee will waste time fiddling with them. In fact, the evaluation required of the referee is only of a methodological-qualitative nature, not one of merit. The "merit" evaluation would be up to the "scientific community," which, by replicating the authors' work under the same or other experimental conditions, would judge its validity. Even a child would understand that there is a conflict of interest as big as a house: the referee can actively sabotage the paper, and the author can quietly sabotage the referees when it is his turn to judge.

In fact, when papers were typed and shipped by mail, the system used was "double-blind" review. The title page was removed from the manuscript: the referee did not know whose work he was judging, and the author did not know who the referee was. Since the Internet has existed, however, authors have found it more and more convenient to publicize their work in the "working paper" format. There are many reasons for doing so, and I won't go into them here, but the practice is now so widespread that referees need only do a very brief Google search to find the working paper and, with it, the authors' names. By now, more and more journals have given up pretending to believe in double-blind reviewing, and they send the referees the pdf complete with the title page. Thus, reviews are no longer blind, which matters because referees have strong conflicts of interest. For example, the referee may get hold of the paper of a friend or colleague, or of a stranger whose results support - or contradict - his or her own research. A referee may also "reveal" himself to the author years later, e.g., over a drink during the alcoholic events at a conference, obviously in the case of a favorable report. You know, I was a referee of your paper at Journal X. It can come in handy if there is some familiarity. Let's not forget that referees are people who are themselves trying to get their papers published in other journals, or even the same ones. And not all referees are the same. A "good" referee is a rare commodity for an editor. The good referee is the one who responds on time and writes predictable reports. I have known established professors who boasted that they receive dozens of papers a month to referee "because they know I reject them all right away."

So how does this peer review process work?

We should not imagine editors as people who are removed from the real world, just with their hands extremely full. An editor, in fact, has enormous power, and one does not become an editor unless he or she shows that he or she covets this power, as well as having earned it, according to the perverse logic of the cartel. The editor's enormous power lies in influencing the fate of a paper at all stages of revision. He can choose referees favorable or unfavorable to a paper: scientific feuds and their participants are known, and are especially well known to editors. He can also side with a minority referee and ask for a revision (or a rejection).

Every top researcher knows which editors are friendly and which potential referees can be the most dangerous enemies. Articles are often calibrated to suck up to the editor or to potential referees by winking at their research agendas. Indeed, it is in this context that the cattle market of citations develops: if I am afraid of referee Titius, I will cite as many of his works as possible. Another strategy (valid for smaller papers and niche works) is to ignore him completely; otherwise, a neutral editor might choose him as a referee simply by scrolling through the bibliography. Many referees also go out of their way at the review stage to pump up their citations, essentially pushing authors to cite their work even when it is irrelevant. But this happens in smaller journals. [2]

The most relevant thing to understand about the nature of peer review is how it is a process in which individuals who are consciously and structurally conflicted participate. The funny thing is that this same system is also used to allocate public research funds, as we shall see.

The fact that the occasional "inconvenient" article manages to break through the peer review wall should not, of course, deceive us: the process, while well-guided, still remains partly random. A referee may recommend acceptance unexpectedly, and the editor can do little about it. In any case, little harm is done: an awkward article now and then will never contribute to the creation of consensus, the kind that suits the funding apparatus, and it even gives the outside observer the illusion that there is frank debate.


So, can one really rig research results and get away with it?

Generally, yes, partly because you can cheat at various levels, generally leaving behind various screens of plausible deniability, i.e., gimmicks that allow you to say that you were wrong, that you didn't know, that you didn't do it on purpose, that it was the Ph.D. student, that the dog ate your raw data. Of course, in the rare case that you get caught, you don't look good, and the paper will generally end up "retracted." A fair mark of infamy on a researcher's career but, if he can hide the evidence of his bad faith, he will still retain his professorship, and in a few years everyone will have forgotten. After all, these are things that everyone does: he who is without sin, the first stone, and so on.

Every now and then, and more and more frequently, some important paper gets retracted when it turns out that the data of a well-published work were completely made up and that the results were too good to be true. This happens when some young, up-and-coming psychopath exaggerates and draws too much attention to himself. These extreme cases of outright fraud are certainly more numerous than those discovered but, as mentioned, you don't need to invent the data to come up with a publishable result; just pay a monkey to try all possible models on all possible subsets. An a posteriori justification for why you excluded subset B can always be found. You can even omit it if you want to, because if you haven't stepped on anyone's toes, there is no way that anyone is going to start picking your work apart. Even in the experimental sciences, it can happen that no one is able to replicate the experiments of very important papers, and it took fifteen years to discover that maybe they had made it all up. Go google "Alzheimer scandal," LOL! There is no real incentive in the academy to uncover bogus papers, other than literally bragging about it on Twitter.


Research funding

Doing research requires money. There is not only laboratory equipment, which many disciplines do not even need; there is also, more importantly, the workforce. Laboratories, and research in general, are run by underpaid people, namely Ph.D. students and post-docs (known in Italy as "assegnisti"). But not only that: there are also expenses for software, dataset purchases, travel to conferences, and seminars and workshops to be organized, with people to be invited and for whom you have to pay travel, hotel, and dinner at a decent restaurant. Consider the important PR side of this: inviting an important professor, perhaps an editor, for a two-day vacation and making a "friend" of him or her can always come in handy.

In addition to all this, there is the fact that universities generally keep a share of the grants won by each professor, and with this share they do various things. If a professor who does not win grants needs to replace his chair, or his computer, or wants to go present at a loser conference where he is not invited, he will have to go hat in hand to the department head to ask for the money, which will largely come from the grants won by others. I don't know how it works in Italy, but in the rest of the world it certainly works this way. This obviously means that a professor who wins grants will have more power and prestige in the department than one who does not.

In short, winning grants is a key goal for any ambitious researcher. Not winning grants, even for a professor with tenure, implies becoming something of an outcast. The one who does not win grants is the odd one out, with shaggy hair, who has the office down the hall by the toilet. But how do you win grants?

Grants are given by special agencies -- public or private -- that disburse research funds through periodic or one-off calls. The professor, or research group, writes a research project saying what they want to do, why it is important, how they intend to do it, with what resources, and how much money they need. The committee, at this point, appoints "anonymous" referees who are experts in the field and who peer-review the project. It doesn't take long to realize that if you are an expert in the field, you know very well whom you are going to evaluate. If you have read this far, you will also know full well that the referees have a gigantic conflict of interest. In fact, a single snide remark is enough to scuttle the rival gang's project, or to favor the friend who shares your research agenda, while no one will have the courage to scuttle the project of the true "raìs" of the field. Finally, the committee will evaluate the applicant's academic profile, obviously counting the number and prestige of publications, as well as the totemic H-index.

So we have a system where those who get grants publish, and those who publish get grants. All of it is governed by an inextricable web of conflicts of interest, where it is the informal, and ultimately self-interested, connections of the individual researcher that win out. Informal connections that, let us remember, start with the Ph.D. What is presented as an aseptic, objective, and impersonal system of meritocratic evaluation resembles, at best, the system for awarding the contract to resurface the bus parking lot of a small town in the Pontine countryside in the 1980s.


The research agenda

We have mentioned it several times, but what exactly is a research agenda? In short, a research agenda is a line of research in a certain sub-field of a sub-discipline, linked to a starting hypothesis and/or to a particular methodology. The starting hypothesis will always tend to be confirmed by those pursuing the agenda. The methodology, on the other hand, will always be presented as decisive and far superior to the alternatives. A research agenda, to continue with the earlier example, could be the relationship between the color and specific gravity of cattle excrement and milk production. Or non-parametric methods to estimate excreta production given the composition of the diet.

Careers are built or blocked around the research agenda: a researcher with a "hot" agenda, perhaps in a relatively new field, will be much more likely to publish well, and thus obtain grants, and thus publish well. You are usually launched on the hot agenda during your doctoral program, if your advisor advises you well. For example, it may be that the advisor has a research strand in mind but doesn't feel like learning a new methodology at age 50, so he sends the young co-author ahead. Often, then, the 50-year-old prof, by now a "full professor," finds himself among the leaders of a "cutting-edge" research strand without ever really having mastered its methodologies and technicalities, limiting himself to the managerial and marketing side of the business.

As already explained, real gangs form around the agendas, acting to monopolize the debate on the topic and giving rise to a controlled pseudo-debate. The bosses' "seminal" articles can never be totally demolished; if anything, they can be enriched and expanded. One can explore the issues from another point of view, along other dimensions, using new analytical techniques and different data, which will lead to increasingly divergent conclusions. The key thing is that no one will ever say, "sorry guys, but we are wasting our time here." The only wasted time in the academy is time that does not lead to publications and, therefore, grants.

But then, who dictates the agenda? Directly, it is dictated by the big players in research: the professors at the top of the international career ladder and the editors of the most prestigious journals. Indirectly, it is dictated by the ultimate funders of research: multinational corporations and governments, acting both directly and indirectly through lobbies and various international potentates.

The big misconception underlying science, and thus the justification of its funding in the eyes of public opinion, is that this incessant and chaotic "rush to publish" nonetheless succeeds in adding building blocks to knowledge. This is largely false.

Research agendas never talk to each other and, above all, they never seem to come to a conclusion. After years of pseudo-debate, the agenda will have been so "enriched" by hundreds of published articles that trying to make sense of it would be hard and thankless work. Thankless because no one is interested in doing it: the funds have been spent, and the chairs have been taken. Sooner or later the strand dries up: the topic stops being fashionable, and thus desirable to top journals; it gradually migrates to smaller journals; new Ph.D. students stop caring about it; and if the leaders of the strand care about their careers, they move on to something else. And the merry-go-round begins again.

The fundamental questions posed at the beginning of the agenda will not have been satisfactorily answered, and the conceptual and methodological issues raised during the pseudo-academic debate will certainly not have been resolved. An outside observer who studied the entire literature of a research strand could not help but notice that there are very few "take-home" results. Between inconclusive results and flawed or misapplied methodologies, the net contribution to knowledge is almost always zero, and the qualitative, "common sense" answer always remains the best.


Conclusions

We have thus far described science: how it functions, the actors involved in it, and how they are recruited. We have described how conflicts of interest - and moral and substantive corruption - govern every aspect of the academic profession, which is thus unable, as a whole, to offer any objective, unbiased viewpoint on anything.

The product of science is a giant pile of crap: wrong, incomplete, blatantly false and/or irrelevant answers. Answers to questions that, in most cases, no one in their right mind would want to ask. A mountain of shit where no one among those who wallow in it knows a damn thing. No one understands anything, and poorly posed questions are met with answers steered by whoever pays. A mountain of shit within which feverish hands seek - and find - the mystical-religious justification for contemporary power, arbitrariness, and tyranny.

Sure, there are exceptions. Sure, there is the prof who has slipped through the net of the system and now, thanks to the position he has gained, has a platform from which to say something interesting. But he is an isolated voice, ridiculed, used to stir up the usual two minutes of hatred on TV or social media. There is no baby to be saved from the dirty bathwater. The baby is dead. Drowned in the sewage.

The "academic debate" is now a totally self-referential process, leading to no tangible quantitative (or qualitative) results. All academic research is nothing but a giant Ponzi scheme to increase citations, which serve to get grants and pump up the egos and bank accounts -- but mostly egos -- of professional careerists.

Science as an institution is an elephantine, hypertrophied apparatus, corrupt to the core, whose only function - besides incestuously reproducing itself - is to provide legitimacy for power. Surely, at one time, it was also capable of providing the technical knowledge base necessary for the reproduction and maintenance of power itself. Not anymore: the unstoppable growth of the shit mountain makes that impossible. At most, it manages to select, nominally and by simply handing out attendance badges at the most elite schools, the scions of the new ruling class.

When someone extols the latest scientific breakthrough to you, the only possible response is, "I'm not buying anything, thank you." If someone tells you that they are acting "according to science," run away.

[1] It was Bellarmine who advised Galilei to speak of "hypotheses." In response, Galilei published the ridiculous "Dialogue," where the evidence brought to support his claims about heliocentrism was nothing more than conjecture, completely wrong, and without any empirical basis. The Holy Office's position was literally, "say whatever you like as long as you don't pass it off as Truth." Galilei got his revenge: now they say whatever they like and pass it off as Truth. Sources? Look them up.

[2] Paradoxically, in the lower-middle tier of journals, where cutthroat competition is minimal, one can still find a few rare examples of peer review done well. For example, I was once asked to referee a paper for a smaller journal with which I had never had anything to do and whose editor I did not know even indirectly. The paper was sent without a title page, and therefore anonymous, and, strangely, I could not find the working paper online. It was objectively rubbish, and I recommended rejection.

________________________________________________________________





Thursday, April 13, 2023

What's Wrong With Science? Mostly, it is how we Mismanage it

 


"A scientist, Dr. Hans Zarkov, works night and day, perfecting the tool he hopes to save the world... His great mind is strained by the tremendous effort" (From Alex Raymond's "Flash Gordon")


We tend to see science as the work of individual scientists, maybe of the "mad scientist" kind: great minds fighting to unravel the mysteries of nature with the raw power of their intellect. But, of course, that is not the way science works. Science is a large network of people, institutions, and facilities. It consumes huge amounts of money from government budgets and private enterprises. And most of this money, today, is wasted on useless research that benefits no one. Science has become a great paper-churning machine whose purpose seems to be mainly the glorification of a few superstar scientists, whose main role is to speak glibly and portentously about how what they are doing will one day benefit humankind, provided that more money is poured into their research projects.

Adam Mastroianni makes a few simple and well-thought-out points in his blog about why science has become the disastrous waste of money and human energy it is today. The problem is not with science itself: the problem is how we manage large organizations.

You may have experienced the problem in your own career. Organizations seem to work nicely for the purpose they were built for as long as they include a few tens of people, maybe up to a hundred members. Beyond that, they devolve into conventicles whose main purpose seems to be to gather resources for themselves, even at the cost of damaging the enterprise as a whole.

Is it unavoidable? Probably yes. It is part of the way Complex Adaptive Systems (CAS) work, and, by all means, human organizations are CASs. These systems are evolution-driven: if they exist, it means they are stable. So, the existing ones are those that managed to attain a certain degree of stability. They do that by ruthlessly eliminating the inefficient parts of the system. The best example is Earth's ecosystem: you may have heard that evolution means the "survival of the fittest." But no, it is not like that. It is the system that must survive, not the individual creatures. The "fittest" creatures are nothing if the system they are part of does not survive. So, ecosystems survive by eliminating the unfit. Gorshkov and Makarieva call them "decay individuals." You can find these considerations in their book "Biotic Regulation of the Environment."

It is the same for the CAS we call "Science." It has evolved in a way that maximizes its own survival and stability. That's evident if you know even a little about how Science works. It is a rigid, inflexible, self-referencing organization, refractory to all attempts to reform it from the inside. It is a point that Mastroianni makes very clear in his post. A huge amount of resources and human effort is spent by the scientific enterprise to weed out what is defined as "bad science," seen as anything that threatens the stability of the whole system. That includes the baroque organization of scientific journals, the gatekeeping of the disastrously inefficient "peer review" system, the distribution of research funds by rigid old-boy networks, the beastly exploitation of young researchers, and more. All this tends to destroy both the very bad (which is a good thing) and the very good (which is not a good thing at all). But both the very good and the very bad threaten the stability of the entrenched scientific establishment. Truly revolutionary discoveries that really could change the world would reverberate through the established hierarchies and make the system collapse.

Mastroianni makes these points from a different viewpoint, which he calls the "weak links -- strong links" problem. It is a correct way of framing the issue if you see Science not as a self-referencing system but as a subsystem of a wider system, which is human society. In this sense, Science exists to serve useful purposes and not just to pay salaries to scientists. What Mastroianni says is that we should strive to encourage good science instead of discouraging bad science. What we are doing instead is settling for mediocrity, and we just waste money in the process. Here is how he summarizes his idea.

I strongly encourage you to read Mastroianni's whole post because it is very well argued and convincing. It describes what we should do to turn science into the useful tool we badly need in this difficult moment for humankind. But the fact that we should do it doesn't mean it will be done. Note, in Mastroianni's post, the box that says "Accept Risk." This is anathema for bureaucrats, and the need for it nearly guarantees that it will not be done.

Yet, we might at least try to push science into doing something useful. Prizes could be a good idea: by offering prizes, you pay only for success, not for failure. But in Science, prizes are rare; apart from the Nobel Prize and a few others, scientists do not compete for prizes. That's something we could work on. And, who knows, we might succeed in improving science, at least a little!





 



Friday, April 7, 2023

Why we Can't Change Anything Before it is Too Late.

 


Yours truly, Ugo Bardi, in a recent interview on a local TV station. Note the "Limits to Growth" t-shirt and, as a lapel pin, the ASPO-Italy logo.

A few days ago, I was invited to an interview on a local TV station about the energy transition. I prepared myself by collecting data. I was planning to bring to the attention of viewers a few recent studies showing how urgent and necessary it is to move away from conventional engines, including a recent paper by Roberto Cazzolla-Gatti (*) showing that the combustion of fossil fuels is one of the main causes of cancer in Italy.

And then I had a minor epiphany in my mind. 

I saw myself from the other side of the camera, appearing on the screen in someone's living room. I saw myself as one more of those white-haired professors who tell viewers, "look, there is a grave danger ahead. You must do as I say, or disaster will ensue."

No way. 

I could see myself appearing to people as more or less the same as one of the many TV virologists who had terrorized people with the Covid story during the past three years. "There is a grave danger caused by a mysterious virus. If you don't do as I say, disaster will ensue." 

It scared people a lot, but only for a while. And now the poor performance of the TV virologists, Tony Fauci and the others, casts a shadow over the general validity of science. As a result, we see a wave of anti-science sweeping the discussion, carrying along the flotsam of decades of legends: fake lunar landings, earthquakes as weapons, how Greenland was green at the time of Erik the Red, and don't you know that the climate has always been changing? Besides, Greta Thunberg is a bitch.

But it is not so much a fault of the TV virologists, although they have done their part in creating the damage. It is the human decisional system that works in a perverse way. More or less, it works like this:

  1. Scientists identify a grave problem and try to warn people about it. 
  2. The scientists are first demonized, then ignored.
  3. Nothing is done about the problem.
  4. When it is discovered that the warning was correct, it is too late. 

Do you remember the story of the boy who cried "wolf"? Yes, it works exactly like that in the real world. One of the first modern cases was that of "The Limits to Growth" in 1972.

  1. A group of scientists sponsored by the Club of Rome discovered that unrestrained growth of the global economic system would lead to its collapse.
  2. The scientists and the Club of Rome were demonized, then ignored.
  3. Nothing was done about the problem.
  4. Now that we are discovering that the scientists were right, collapse is already starting.
More recently, we saw how:
  1. Scientists tried to alert people about the dangers of climate change.
  2. Scientists were demonized and then ignored.
  3. Nothing was done about climate change.
  4. When it was discovered that the warning was correct, it was too late (and it is).
There are many more examples, but it almost always works like this. Conversely, when, for some reason, people do take heed of the warning, the results may be even worse, as we saw with the Covid epidemic. In that case, you can add a line 1b to the list that says, "people become scared and do things that worsen the problem." After a while, line 2 (scientists are demonized) takes over, and the cycle goes on.

So, what are the conclusions? The main one, I'd say, is: 

Avoid being a white-haired scientist issuing warnings about grave dangers from a TV screen.

Then, what should you say when you appear on TV (and you happen to be a white-haired scientist)? Good question. My idea for that TV interview was to present change as an opportunity rather than an obligation. I was prepared to explain how there are many possible ways to improve the quality of our life by moving away from fossil fuels. 

How did it go? It was one of the best examples that I experienced in my life of the general validity of the principle that says, "No battle plan survives contact with the enemy." The interview turned out to be a typical TV ambush in which the host accused me of wanting to beggar people by taking away their cars and their gas stoves, of trying to poison the planet with lithium batteries, and of promoting the exploitation of the 3rd world poor with coltan mines. I didn't take that meekly, as you may imagine. 

The interview became confrontational, and it quickly degenerated into a verbal brawl. I am not linking to the interview; it is not so interesting. Besides, it was all in Italian. But you can get some idea of how these things go from a similar ambush against Matt Taibbi on MSNBC. What did the viewers think? Hopefully, they switched channels. 

In the end, I am only sure that if something has to happen, it will.


(*) The paper by Roberto Cazzolla-Gatti on the carcinogenic effects of combustion is truly impressive. Do read it, even if you are not a catastrophist. You'll learn a lot. 

(**) CJ Hopkins offers some suggestions on how to behave when you are subjected to this kind of attack. He says that you should refuse to answer some questions, answer with more questions, avoid taking the interviewer seriously, and things like that. It is surely better than just trying to defend yourself, but it is extremely difficult. It was not the first time that I faced this kind of ambush, and when you are in the crossfire, you have little or no chance of avoiding a memetic defeat.

How to Make Your Google Masters Happy: Fixing the Privacy Policy of Your Blog

  


As I told you in a previous post, for months Google has been pestering me with notices that there was something wrong with the privacy policy of my blog and that, if I didn't fix it, they would start doing dark and dire things, such as making my blog invisible to search engines. Now, after many attempts and much struggle, I can tell you that the saga is over. So, I am posting these notes, which may be useful to you in case you find yourself in the same situation.

The problem had to do with the privacy regulations of the EU and the EEA, aka the General Data Protection Regulation (GDPR): I had to obtain consent from the user for something that was not explicitly described in the ominous messages I was receiving. Fixing the problem turned out to be a small Odyssey.

1. Using search engines. The first thing you normally do in these cases is look over the Web to see if someone has already solved the problem that plagues you. On this specific question, I immediately found myself facing a wall of sites claiming that they can solve the problem for you if you just pay some money. Mostly, they looked like traps, but I was dumb enough to pay $29 for a "personalized policy declaration" that came with a request for a further payment for hosting it on their site. I created a subpage of the blog myself to host it at no cost.

First lesson learned: skip the sites that ask you for money to fix this problem, unless you run a commercial site and you need to do it quickly.


2. The text I downloaded may have been a good policy declaration, but Google still wasn't satisfied, and I later learned that they didn't give a hockey stick about it.

Second lesson learned: you can spend a lot of time (and also money) fixing the wrong problem.


3. I contacted Google's customer service at ddp-gdpr-escalations@google.com -- yes, they have a customer service to help people fix exactly this GDPR problem. Amazingly, I got in contact with someone who seemed to be a real person -- the messages were signed "Gargi," an Indian name. After a few interactions, he finally told me what Google wanted. It was simple: I just had to add the sentence "cookies are used for ads personalization" to the "consent banner." And that was it. Gargi even sent me a screenshot of what the banner should look like. It was a step forward.

Third lesson learned: human beings can still be useful for something.



4. But who controls the cookie banner? I had never placed a cookie banner on my blog, and I saw no such thing appearing when I loaded it. Other people told me that they didn't see any banner on the first page of my blog either. I had always interpreted the lack of a banner as a consequence of my blog not being a commercial one. But no, the explanation was a different one. After much tinkering and head-scratching, I discovered that my browser (Chrome) remembers the previous decision, so the banner is not shown again to anyone who has already accepted the cookies. I could see the banner only if I erased the cookies from my main browser, or used a "virgin" browser. The beauty of this trick is that not even the people from Google's customer service seemed to know it; so, at some point, they started telling me that my blog had no cookie banner, and I had to explain to them that they just weren't seeing it, but it was there. Once they understood this, it was no longer a problem. But it took time.
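If you want to check this on your own blog, here is a rough sketch you can paste into the browser's developer console while viewing the blog. It is an assumption on my part that the consent is remembered in a first-party cookie, and I don't know the exact name of the cookie, so the sketch simply expires every cookie visible to the page; after a reload, the banner should reappear:

// Expire every first-party cookie visible to this page, so that any consent
// previously stored in a cookie is forgotten and the banner shows up again
// on the next reload.
document.cookie.split(';').forEach(function (entry) {
  var name = entry.split('=')[0].trim();
  if (name) {
    document.cookie = name + '=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/';
  }
});
console.log('Cookies left for this page:', document.cookie || '(none)');

Of course, a private or incognito window achieves the same result with less typing.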

Fourth lesson learned: Truth may be hidden, and often is. 


5. How do you change the text of the cookie consent banner? One of those things that looks easy but is not easy at all. First, you have to access the HTML code of your blog, which is not an easy task by itself. It is like open-heart surgery: you make a mistake, and the patient dies. Then, even if you know how to handle HTML, you soon discover a little problem: there is NO CODE for the cookie consent banner in the HTML of Google's blogs. The banner is dynamically generated from somewhere, Google knows where, and it is not accessible with the tools provided by Google's Blogger.

Fifth lesson learned: Google plays with you like a cat plays with a mouse.

6. It means that there has to be a widget for the cookie banner, right? Yes, there is such a widget, and you can set it to show a cookie banner worded as you like. The problems are that 1) it cannot show the banner at the top of the page, where these banners normally sit, and 2) it doesn't replace the Google-generated banner. So, the result is that you have two different banners in different areas of the screen at the same time. Apart from the awful effect on the way your blog looks, it is not surprising that Google was still not happy with this solution.

Sixth lesson learned: Some solutions are not. 


7. How about trying ChatGPT? Eventually, ChatGPT gave me the right hint: it said that it was possible to insert a cookie banner script in the main HTML page of the blog. I tried the scripts provided by ChatGPT and none worked, but those provided by helpful human bloggers did. I found that scripts (unlike widgets) can supersede the Google-created banner. After some tweaking, Google was finally happy.

Seventh lesson learned: ChatGPT is your friend, but it is a bad programmer.

________________________________________________

Conclusion. 

The good thing about this story is that I learned something, but it was also a sobering experience. The way Google managed it was so bad that I can only understand it as a deliberate attempt to discourage small bloggers who are not making money from their blogs and who can't afford a professional maintenance service. Why harass poor bloggers into doing something that Google could easily do itself, on a banner that is wholly managed by Google? I mean, do you realize how much time was lost to do something as simple as adding a single sentence to a banner?

It seems clear to me that at Google they don't like blogs in general. Even though they offer a blogging platform, it is a poor service for several reasons. Yet, Blogger also has several good points, the main one being that it is free. Then, it offers customization possibilities that other "bare-bones" platforms (e.g., Substack) do not provide. For someone who just wants to express his or her ideas in public, it can still be a good choice. But, after this experience, I am wary. Google knows what they have in mind next in Mountain View. So, I may switch platforms in the near future. For now, "The Seneca Effect" blog is still there, alive and reasonably well, even though shadow-banned by the Powers That Be. And maybe these notes could be useful to you.

Final lesson learned: I, for one, welcome our new Google masters.

_________________________________________________________

Here is the script that controls the text of the cookie consent banner, to be cut and pasted into the HTML code of a Blogger blog after the </head> tag. It is simple, but it wasn't so simple to understand what was needed.

<script type='text/javascript'>
  // Custom text for the cookie consent banner. The &quot; entities stand for
  // ordinary quotation marks, since the script sits inside the Blogger XML template.
  cookieOptions = {
    msg: &quot;This site uses cookies for ad personalization, to analyse traffic and to deliver some Google services. By using this site, you agree to its use of cookies.&quot;,
    link: &quot;https://www.senecaeffect.com/p/privacy-policy-for-seneca-effect-blog.html&quot;,
    close: &quot;Okay!&quot;,
    learn: &quot;Learn More&quot;
  };
</script>