The Roman philosopher Lucius Annaeus Seneca (4 BCE-65 CE) was perhaps the first to note the universal trend that growth is slow but ruin is rapid. I call this tendency the "Seneca Effect."

Monday, April 17, 2023

One step away from the Library of Babel: How Science is Becoming Random Noise


It is said that if you have a monkey pounding at the keys of a typewriter, by mere chance, it will eventually produce all the works of Shakespeare. The Library of Babel, a story by Jorge Luis Borges, is another version of the same idea: a nearly infinite repository of books formed by all the possible combinations of characters. Most of these books are just random strings of characters that make no sense but, somewhere in the library, there is a book that unravels the mysteries of the universe and the secrets of creation, and provides the true catalog of the library itself. Unfortunately, this book is impossible to find and, even if you could find it, you would not be able to tell it apart from the infinite number of books that claim to be it but are not. 

The Library of Babel (or a large number of typing monkeys) may be a fitting description of the sad state of "science" as it is nowadays: an immense machine that mostly produces nonsense and, perhaps, a few gems of knowledge that are, unfortunately, nearly impossible to find. 

Below, I am translating a post that appeared in Italian on the "Laterum" ("bricks") blog, signed "Birbo Luddynski." Before getting to the text, a few comments of mine. Obviously, "science" is a huge enterprise formed by, maybe, 10 million scientists. There are many different fields, cultures, and languages, and these differences surely affect the way science works. So, you have to take the statements of Mr. "Luddynski" with a certain caution. The way he describes science is approximately valid for the Western world, and Western scientific rules are spilling over to the rest of the world, just like McDonald's fast food joints do. Today, if it is not Western Science, it is not science -- but is Western Science really science?

Mr. Luddynski is mostly correct in his description, but he misses some facets of the story that are even more damning than the ones he covers. For instance, in his critique of science publishing, he does not mention that the scientists working as editors are paid by the publishers. So, they have an (often undeclared) conflict of interest in supporting a huge organization that siphons public money into the pockets of private companies. 

On the other hand, Luddynski is too pessimistic about the capability of individual scientists to do something good despite all the odds. In science, a general rule holds: things done illegally are done most efficiently. So, scientists must obey the rules if they want to be successful but, once they attain a certain degree of success, they can bend the rules a little -- sometimes a lot -- and try to rock the boat by doing something truly innovative. It is mainly in this way that science still manages to progress and produce knowledge. Some fields, such as astronomy, artificial intelligence, ecosystem science, biophysical economics, and several others, are alive and well, only marginally affected by corruption. 

Of course, the bureaucrats who govern these things are working hard to eliminate all the spaces where creative escapades are still possible. Even assuming they won't be completely successful, there remains the problem that an organization that works only when it ignores its own rules is hugely inefficient. For the time being, the public and the decision-makers haven't yet realized what kind of beast they are feeding, but certain things are percolating out of the ivory tower and becoming known. Mr. Luddynski's paper is a symptom of the gradual dissemination of this knowledge. Eventually, the public will stop being mesmerized by the word "science" and may want something done to make sure that their tax money is spent on something useful rather than on a prestige competition among rock-star scientists.  

Here is the text by Luddynski. I translated it into English as best I could, maintaining its ironic and scathing (and a little scatological) style.


Science is a Mountain of Shit

by Birbo Luddynski - Feb 8, 2023

(translated by Ugo Bardi)

Foreword

Science is not the scientific method. "Science" -- capitalized, in quotation marks -- is now an institution, stateless and transnational, that has fraudulently appropriated the scientific method, made it its exclusive monopoly, and uses it to extort money from society -- on its own or on behalf of third parties -- after having proclaimed itself the new Church of the Certification of Truth [1].

An electrician trying to locate a fault or a cook aiming to improve a recipe is applying the scientific method without even knowing what it is, and it works! It worked for thousands of years before anyone codified its algorithm, and it will continue to work, despite the modern inquisitors.

This writer is not a scholar; he does not cite sources, out of ideological conviction, but he learned at an early age how to spell "epistemology." He then wallowed for years in the sewage of the aforementioned crime syndicate until, without ever having become habituated to the mephitic stench, he found an honorable way out. Popper, Kuhn, and Feyerabend were all thinkers who fully understood the perversion of certain institutional mechanisms, but who could not even have imagined that the rot would run so unchecked and tyrannical through society, to the point where a scientist could achieve the power to prevent you from leaving your house or owning a car, or to force you to eat worms. Except for TK: he had it all figured out, but that is another matter.

This paper will not discuss the role that science plays in contemporary society, its gradual transformation into a cult, the gradual erosion of spaces of freedom and civic participation in its name. It will not discuss its relationship with the media and the power they have to pass off the rants of a mediocre, bumptious professor as established truths. Nor will there be talk of the highest offices of the state declaring war on anti-science, proclaiming victories in plebiscites that were never called, where people are driven to the polls by blackmail and beatings.

There will be no mention of how "real science" -- that is, science that respects the scientific method -- is being raped by the international virologists and the scientific and technical committees of the world, who make incoherent decisions, literally at the drop of a hat, and demand that everyone comply without question.

Nor will we discuss the tendency to engage in sterile Internet debates about the article that proves us right in the Lancet, or the Brianza Medical Journal, or the Manure Magazine. Nor the obvious tendency to cherry-pick by the deboonker of the moment. "Eh, but Professor Giannetti of Fortestano is a known waffler, lol." The office cultists of the ipse dixit are now meticulous enforcers of a strict hierarchy of sources and opinions, based on qualitative prestige rankings as improbable as they are rigorous or, worse, on quantitative bibliometric indices as improbable as they are arbitrary.

Here you will be told why science is not what it says it is and precisely why it has assumed such a role in society.

Everything you will find written here is widely known and documented in books, longforms, articles in popular weeklies, and even articles in peer-reviewed journals (LOL). Do a Google search for terms such as "publish or perish," "reproducibility crisis," "p-hacking," "publication bias," and dozens of other related terms. I don't provide any sources because, as I said, it is contrary to my ideology. The added value of what I am going to write is an immediate, certainly partisan, and uncensored description of the obscene mechanisms that pervade the entire scientific world, from Ph.D. student recruitment to publications.

There is also no "real science" to save, as opposed to "lasciviousness." The thesis of this paper is that science is structurally corrupt and that the conflicts of interest that grip it are so deep and pervasive, that only a radical reconstruction -after its demolition- of the university institution can save the credibility of a small circle of scholars, those who are trying painstakingly, honestly and humbly to add pieces to the knowledge of Creation.

The scientist and his career

"Scientist" today means a researcher employed in various capacities at universities, public or private research centers, or think tanks. That of the scientist is an entirely different career from that of any worker and is reminiscent of that of the cursus honorum of the Roman senatorial class. Cursus honorum my ass. A scientist's CV is different from that of any other worker. Incidentally, the hardest thing for a scientist is to compile a CV intelligible to the real world, should he or she want to return to earning an honest living among ordinary mortals. Try doing a Europass after six years of post-doc. LOL.

The scientist is a model student, first throughout his or her school career and later in college. Model student means disciplined, with grades in the top percentiles of standardized tests, and with good interpersonal skills. He manages to get noticed during the last years of university (the so-called "undergraduate" years which, strangely, in Italy are called the magistrale or specialistica or whatever the fuck it is called nowadays), until he finds a recommendation (in the bad sense of the term) from a professor to enroll in a Ph.D. program, or doctorate, as it is called outside the Anglosphere. The Ph.D. is an elite university program, three to six years long, depending on the country and discipline, where one is trained to be a "scientist."

During the first few years of the Ph.D., the would-be scientist takes courses at the highest technical level, taught by the best professors in his department. These are complemented by an initial phase of initiation into research as a "research assistant," or RA. This phase, which marks the transition from the "established knowledge" of textbooks to the "frontier of scientific research," takes place under the explicit guidance of a professor-mentor, who will most likely follow the doctoral student through the remainder of the doctoral course, and almost always throughout his or her career. The education of crap. Not all doctoral students will, of course, be paired with a "winning" mentor; only those who are more determined and ambitious, and who show vibrancy in their courses.

During this RA phase, the student is initiated into the real practices of scientific research. It is a crucial phase: it is the time when the student goes from the Romantic-Promethean-Faustian ideals of discovering the Truth to nights spent fiddling with data that make no sense, amid errors in the code and a general bewilderment about the meaning of it all. Why am I doing all this? Why do I have to find just THAT result? Doesn't this go against all the rationalist schemata I was sharing on Facebook two years ago, when I was pissing off the flat-earthers of the day? What do you mean, if the estimate does not support the hypothesis, I have to change the specification of the model? It is at this point that the young scientist usually asks the mentor some timid questions with a vague "epistemological" flavor, which are usually evaded with positivist browbeating about science proceeding by trial and error, or with a speech that sounds something like this: "You're getting it wrong. There are two ways of not understanding: not understanding while trying to understand, or not understanding while pissing off others. Take your pick. In October, your contract runs out."

The purest, at this point, fall into depression, from which they will emerge by finding an honest job, with or without the piece of paper. The careerists, the psychopaths, and the naive Fachidioten, on the other hand, will pick up speed and run along the tracks like bullet trains, unstoppable in their blazing careers devoted to prestige.

After the first phase as an RA, during which the novice contributes to the mentor's publications -- with or without being listed among the authors -- he or she moves on to the next, more mature phase: building his or her own research pipeline, collaborating as a co-author with the mentor and the mentor's other co-authors, attending conferences, weaving a network of relations with other universities. The culmination of this phase is the attainment of the Ph.D. degree which, at this point, is nothing more than a formality, since the doctoral student's achievements speak for themselves. The "dissertation," or "defense," will in fact be nothing more than a seminar where the student presents his or her major work, which is already publishable.

But now the student is already launched on a meteoric path: depending on discipline and luck, he will have already signed a contract as an "assistant professor" in a "tenure track," or as a "post-doc." His future career will thus be determined solely by his ability to choose the right horses, that is, the research projects to bet on. Indeed, he has entered the hellish world of "publish or perish," in which his probability of being confirmed "for life" depends solely on the number of articles he manages to publish in "good" peer-reviewed journals. In many hard sciences, the limbo of post-docs, during which the scientist moves from contract to contract like a wandering monk, unable to experience stable relationships, can last as long as ten years. The "tenure track," or the period after which they can decide whether or not to confirm you, lasts more or less six years. A researcher who manages to become an employee in a permanent position at a university before the age of 35-38 can consider himself lucky. And this is by no means a peculiarity of the Italian system.

After the coveted "permanent position" as a professor, can the researcher perhaps relax and get down to work only on sensible, quality projects that require time and patience and may not even lead anywhere? Can he work on projects that may even challenge the famous consensus? Does he have time to study so that he can regain some insight? Although the institution of "tenure," i.e., the permanent faculty position, was designed to do just that, the answer is no. Some people will even do that, aware that they end up marginalized and ignored by their peers, in the department and outside. Not fired, but he/she ends up in the last room in the back by the toilet, sees not a penny of research funding, is not invited to conferences, and is saddled with the lamest PhD students to do research. If he is liked, he is treated as an eccentric uncle and good-naturedly teased, if disliked he is first hated and then ignored.

But in general, if you've survived ten years of publish or perish -- and not everyone can -- it's because you like doing it. So you will keep doing it, partly because the rewards are coveted: salary increases, committee chairs, journal editorships, research funds, consultancies, invitations to conferences, million-dollar grants as principal investigator, interviews in newspapers and on television. Literally, money and prestige. The race for publications does not end; in fact, at this point, it becomes more and more ruthless. And the scientist, now a de facto research manager with dozens of Ph.D. students and postdocs available as a workforce, will increasingly lose touch with the subject of his research and become interested in academic politics, money, and public relations. It will now be the doctoral student who "gets the code wrong" for him until the desired result is found. He won't even have to ask explicitly, in most cases. Nobody will notice anyway, because nobody gives a damn. But what about peer review?

Peer review

Thousands of pages have been written about this institution, its merits, its flaws, and how to improve it. It has been ridiculed and trolled to death, but the conclusion recited by the standard bearers of science is always the same: it is the best system we have, and we should keep it. Which is partly true, but its supposed aura of infallibility is the main cause of the sorry state the academy is in. Let's see how it works.

Our researcher Anon has his nice PDF, written in LaTeX, titled "The Cardinal Problem in Manzoni: is it Betrothed or Behoofed?" (boomer quote), and submits it to the Journal of Liquid Bullshit via its web platform. The JoLB chief editor takes the PDF, gives it a quick look, and decides which associate editor to send it to, depending on the topic. The chosen editor takes another quick look and decides whether to 1) send a short letter telling Anon that his work is definitely nice and interesting but, unfortunately, not a good fit for the Journal of Liquid Bullshit, recommending similar outlets such as the Journal of Bovine Diarrhea or the Liquid Bovine Stool Review, or 2) send it to the referees. But let's pause for a moment.

What is the Journal of Liquid Bullshit?

The Journal of Liquid Bullshit is a "field journal" within the general discipline of "bullshit" (Medicine, Physics, Statistics, Political Science, Economics, Psychology, etc.), dealing with the more specialized field of Liquid Bullshit (Virology, Astrophysics, Machine Learning, Regime Change, Labor Economics, etc.). It is not a glorious journal like Science or Nature, and it is not one of the prestigious journals of the discipline (Lancet, JASA, AER, etc.). But it is a good journal, in which important names in the field of liquid crap nonetheless publish regularly. In contrast to the more prestigious ones, which are generally linked to specific bodies or university publishers, most of these journals are published (online, that is) by for-profit publishers, who make a living by selling expensive subscription packages to universities.

Who is the chief editor?

The chief editor is a prominent personality in the field of Liquid Bullshit: invariably a full professor, at least 55 years old, with dozens of prominent publications, highly cited, and a keynote speaker at several annual conferences of the American Association of Liquid Bovine Stools. During conferences, he constantly has a huddle of people around him.


Who are the associate editors?

They are associate or full professors with numerous publications, albeit relatively young. Well connected, well launched. The associate editor's job is to manage the manuscript at all stages, from the choice of referees to decisions about revisions. The AE is the person who has all the power over the paper: the choice of referees and, of course, the final decision.


Who are the referees?

Referees are academic researchers formally contacted by the AE to evaluate the paper and write a short report. Separately, they also send the AE a private recommendation on the fate of the paper: acceptance, revision, or rejection. The work of the referees is unpaid; it is time taken away from research or, very often, from free time. The referee receives only the paper; he or she has no way to evaluate the data or the code, unless (very rarely) the authors have made them available on their personal web pages before publication. It is unlikely, in any case, that the referee would waste time fiddling with them. The evaluation required of the referee is only of a methodological-qualitative nature, not of merit. The evaluation of "merit" would, in theory, be up to the "scientific community," which, by replicating the authors' work under the same or other experimental conditions, would judge its validity. Even a child would understand that there is a conflict of interest as big as a house: the referee can actively sabotage the paper, and the author can quietly sabotage the referees when it is his turn to judge them. 

Indeed, when papers were typed and shipped by mail, the system used was "double-blind" review. The title page was removed from the manuscript: the referee did not know whom he was judging, and the author did not know who the referee was. Since the Internet has existed, however, authors have found it more and more convenient to publicize their work in the "working paper" format. There are many reasons for doing so, and I won't go into them here, but the practice is now so widespread that a referee needs only a very brief Google search to find the working paper and, with it, the authors' names. By now, more and more journals have given up pretending to believe in double-blind review, and they send the referees the PDF complete with the title page. Thus, authors are no longer anonymous to referees -- which matters, because referees have strong conflicts of interest. For example, a referee may get hold of the paper of a friend or colleague, or of a stranger whose results prove his own research right -- or wrong. A referee may also "reveal" himself to the author years later, e.g., over a drink at some alcoholic conference event, obviously in the case of a favorable report. You know, I was a referee of your paper at Journal X. It can come in handy, if there is some familiarity. Let's not forget that referees are people who are themselves trying to get their own papers published in other journals, or even the same ones. And not all referees are equal. A "good" referee is a rare commodity for an editor. The good referee is the one who responds on time and writes predictable reports. I have known established professors who boasted of receiving dozens of papers a month to referee "because they know I reject them all right away."

So how does this peer review process work?

We should not imagine editors as people removed from the real world, merely overworked. An editor, in fact, has enormous power, and one does not become an editor without showing that one covets this power, as well as having earned it, according to the perverse logic of the cartel. The editor's enormous power lies in influencing the fate of a paper at all stages of revision. He can choose referees favorable or unfavorable to a paper: scientific feuds and their participants are well known, and especially well known to editors. He can also side with a minority referee and ask for a revision (or a rejection).

Every top researcher knows which editors are friendly, and which potential referees can be the most dangerous enemies. Articles are often calibrated to suck up to the editor or to potential referees by winking at their research agendas. Indeed, it is in this context that the citation cattle market develops: if I am afraid of referee Titius, I will cite as many of his works as possible. Another strategy (valid for smaller papers and niche works) is to ignore him completely; otherwise, a neutral editor might choose him as a referee simply by scrolling through the bibliography. Many referees also go out of their way at the review stage to pump up their citation counts, essentially pushing authors to cite their work even when it is irrelevant. But this happens in smaller journals. [2]

The most relevant thing to understand about the nature of peer review is how it is a process in which individuals who are consciously and structurally conflicted participate. The funny thing is that this same system is also used to allocate public research funds, as we shall see.

The fact that the occasional "inconvenient" article manages to break through the peer-review wall should not, of course, deceive us: the process, while well guided, still remains partly random. A referee may recommend acceptance unexpectedly, and the editor can do little about it. In any case, little harm is done: an awkward article now and then will never contribute to the creation of consensus -- the consensus that suits the ultimate funding apparatus -- and indeed it gives the outside observer the illusion that there is frank debate.


So, can one really rig research results and get away with it?

Generally, yes, partly because you can cheat at various levels, generally leaving behind various screens of plausible deniability, i.e., gimmicks that allow you to say that you were wrong, that you didn't know, that you didn't do it on purpose, that it was the Ph.D. student, that the dog ate your raw data. Of course, in the rare case that you get caught, you don't look good, and the paper will generally end up "retracted" -- a fair mark of infamy on a researcher's career. But if he can hide the evidence of his bad faith, he will still retain his professorship, and in a few years everyone will have forgotten. After all, these are things that everyone does: let he who is without sin cast the first stone, etc.

Every now and then, and more and more frequently, some important paper gets retracted when it turns out that the data of a well-published work were completely made up, and that the results were too good to be true. This happens when some young, up-and-coming psychopath exaggerates and draws too much attention to himself. Such extreme cases of outright fraud are certainly more numerous than those discovered but, as mentioned, you don't need to invent the data to come up with a publishable result; just pay a monkey to try all possible models on all possible subsets. An a posteriori justification for excluding subset B can always be found. You can even omit it if you want to because, if you haven't stepped on anyone's toes, there is no way anyone is going to start picking apart your work. Even in the experimental sciences, it can happen that no one is able to replicate the experiments of very important papers, and it takes fifteen years to discover that maybe they made it all up. Go Google "Alzheimer's scandal," LOL! There is no real incentive in the academy to uncover bogus papers, other than literally bragging about it on Twitter.


Research funding

Doing research requires money. There is not only laboratory equipment, which is not needed in many disciplines anyway. There is also, and more importantly, the workforce. Laboratories and research in general are run by underpaid people, namely Ph.D. students and post-docs (known in Italy as "assegnisti"). But not only that: there are also expenses for software, datasets, travel to conferences, and seminars and workshops to organize, with people to invite, whose travel, hotel, and dinner at a decent restaurant you have to pay for. Consider the important PR side of this: inviting an important professor, perhaps an editor, for a two-day vacation and making a "friend" of him or her can always come in handy.

In addition to all this, there is the fact that universities generally keep a share of the grants won by each professor, and with this share they do various things. If a professor who does not win grants needs to change his chair or his computer, or wants to go present at a loser conference where he is not invited, he will have to go hat in hand to the department head to ask for the money, which will largely come from the grants won by others. I don't know how it works in Italy, but in the rest of the world, it certainly works this way. This obviously means that a professor who wins grants will have more power and prestige in the department than one who does not.

In short, winning grants is a key goal for any ambitious researcher. Not winning grants, even for a professor with tenure, implies becoming something of an outcast. The one who does not win grants is the odd one out, with shaggy hair, who has the office down the hall by the toilet. But how do you win grants?

Grants are given by special agencies -- public or private -- that disburse research funds through periodic or special calls. The professor, or research group, writes a research project saying what he or she wants to do, why it is important, how he or she intends to do it, with what resources, and how much money is needed. The committee, at this point, appoints "anonymous" referees who are experts in the field and who peer review the project. It doesn't take long to realize that, if you are an expert in the field, you know very well whom you are evaluating. If you have read this far, you will also know full well that the referees have a gigantic conflict of interest. All it takes is one supercilious remark to scuttle the rival band's project, or to favor the friend with whom you share a research agenda, while no one will have the courage to scuttle the project of the true "raìs" of the field. Finally, the committee will evaluate the applicant's academic profile, obviously counting the number and prestige of publications, as well as the totemic H-index.

So we have a system where those who get grants publish, and those who publish get grants, all governed by an inextricable web of conflicts of interest, where it is the informal, and ultimately self-interested, connections of the individual researcher that win out. Informal connections that, let us remember, start with the Ph.D. What is presented as an aseptic, objective, and impartial system of meritocratic evaluation resembles, at best, the system for awarding the contract to resurface the bus parking lot of a small town in the Pontine countryside in the 1980s.


The research agenda

We have mentioned it several times, but what is a research agenda, really? In short, a research agenda is a line of research in a certain sub-field of a sub-discipline, linked to a starting hypothesis and/or a particular methodology. The starting hypothesis will always tend to be confirmed by those pursuing the agenda. The methodology, on the other hand, will always be presented as decisive and far superior to the alternatives. A research agenda, to continue with the example from before, could be the relationship between the color and specific gravity of cattle excrement and milk production. Or non-parametric methods to estimate excreta production given the composition of the diet.

Careers are built or blocked around the research agenda: a researcher with a "hot" agenda, perhaps in a relatively new field, will be much more likely to publish well, and thus obtain grants, and thus publish well. You are usually launched onto the hot agenda during your doctoral program, if your advisor advises you well. For example, it may be that the advisor has a strand in mind but doesn't feel like setting out to learn a new methodology at age 50, so he sends the young co-author ahead. Often, then, the 50-year-old prof, now a "full professor," finds himself among the leaders of a "cutting-edge" research strand without ever having really mastered its methodologies and technicalities, limiting himself to the managerial and marketing side of the business.

As already explained, veritable gangs form around the agendas, acting to monopolize the debate on the topic and giving rise to a controlled pseudo-debate. The bosses' "seminal" articles can never be totally demolished; if anything, they can be enriched and expanded. One can explore the issues from another point of view, along other dimensions, using new analytical techniques and different data, which will lead to ever-different conclusions. The key thing is that no one will ever say, "sorry guys, but we are wasting time here." The only wasted time in the academy is time that does not lead to publications and, therefore, grants.

But then, who dictates the agenda? Directly, the big players of research dictate it: the professors at the top of their international careers, the editors of the most prestigious journals. Indirectly, the ultimate funders of research dictate it: multinational corporations and governments, acting both directly and through lobbies and various international potentates.  

The big misconception underlying science, and thus the justification of its funding in the face of public opinion, is that this incessant and chaotic "rush to publish" nonetheless succeeds in adding building blocks to knowledge. This is largely false.

In fact, research agendas never talk to each other and, above all, they never seem to come to a conclusion. After years of pseudo-debate, the agenda will have been so "enriched" by hundreds of published articles that trying to make sense of it would be hard and thankless work. Thankless, because no one is interested in doing it. The funds have been spent, and the chairs have been taken. Sooner or later, the strand dries up: the topic stops being fashionable, and thus desirable to top journals; it gradually moves on to smaller journals; new Ph.D. students stop caring about it; and, if the leaders of the strand care about their careers, they move on to something else. And the merry-go-round begins again.

The fundamental questions posed at the beginning of the agenda will not have been satisfactorily answered, and the conceptual and methodological issues raised during the pseudo-academic debate will certainly not have been resolved. An outside observer who studied the entire literature of a research strand could not help noticing that there are very few "take-home" results. Between inconclusive results and flawed or misapplied methodologies, the net contribution to knowledge is almost always zero, and the qualitative, "common sense" answer always remains the best.


Conclusions

We have thus far described science: its functioning, the actors involved in it, and their recruitment. We have described how conflicts of interest -- and moral and substantive corruption -- govern every aspect of the academic profession, which is thus unable, as a whole, to offer any objective, unbiased viewpoint on anything.

The product of science is a giant pile of crap: wrong, incomplete, blatantly false, and/or irrelevant answers. Answers to questions that, in most cases, no one in their right mind would want to ask. A mountain of shit where no one among those who wallow in it knows shit. No one understands shit, and poorly posed questions receive bought-and-paid-for answers. A mountain of shit within which feverish hands seek -- and find -- the mystical-religious justification for contemporary power, arbitrariness, and tyranny.

Sure, there are exceptions. Sure, there is the professor who has slipped through the net of the system and now, thanks to the position he has gained, has a platform from which to say something interesting. But he is an isolated voice, ridiculed, used to stir up the usual two minutes of TV or social-media hatred. There is no baby to be saved in the dirty bathwater. The baby is dead. Drowned in the sewage.

The "academic debate" is now a totally self-referential process, leading to no tangible quantitative (or qualitative) results. All academic research is nothing but a giant Ponzi scheme to increase citations, which serve to get grants and pump up the egos and bank accounts -- but mostly egos -- of professional careerists.

Science as an institution is an elephantine, hypertrophied apparatus, corrupt to the core, whose only function -- besides incestuously reproducing itself -- is to provide legitimacy for power. Surely, at one time, it was also capable of providing the technical knowledge base necessary for the reproduction and maintenance of power itself. Not anymore: the unstoppable production of the shit mountain makes that impossible today. At most, it manages to select the scions of the new ruling class, nominally, by simply handing out attendance credentials through the most elite schools.

When someone extols the latest scientific breakthrough to you, the only possible response is, "I'm not buying anything, thank you." If someone tells you that they are acting "according to science," run away.

[1] It was Bellarmine who advised Galilei to speak of "hypotheses." In response, Galilei published the ridiculous "Dialogue," where the evidence brought to support his claims about heliocentrism was nothing more than conjecture, completely wrong, and without any empirical basis. The Holy Office's position was literally, "say whatever you like, as long as you don't pass it off as Truth." Galilei got his revenge: now they say whatever they like and pass it off as Truth. Sources? Look them up. 

[2] Paradoxically, in the lower-middle tier of journals, where cutthroat competition is minimal, one can still find a few rare examples of peer review done well. For example, I was once asked to referee a paper for a smaller journal with which I had never had anything to do, and whose editor I did not know even indirectly. The paper was sent without a title page, therefore anonymous, and, strangely, I could not find the working paper online. It was objectively rubbish, and I recommended rejection.

________________________________________________________________





Thursday, April 13, 2023

What's Wrong With Science? Mostly, it is how we Mismanage it

 


"A scientist, Dr. Hans Zarkov, works night and day, perfecting the tool he hopes to save the world... His great mind is strained by the tremendous effort" (From Alex Raymond's "Flash Gordon")


We tend to see science as the work of individual scientists, maybe of the "mad scientist" kind: great minds fighting to unravel the mysteries of nature with raw brainpower. But, of course, that is not the way science works. Science is a large network of people, institutions, and facilities. It consumes huge amounts of money from government budgets and private enterprises. And most of this money, today, is wasted on useless research that benefits no one. Science has become a great paper-churning machine whose purpose seems to be mainly the glorification of a few superstar scientists. Their main role seems to be to speak glibly and portentously about how what they are doing will one day benefit humankind, provided that more money is poured into their research projects.

Adam Mastroianni makes a few simple and well-thought-out points in his blog about why science has become the disastrous waste of money and human energy that it is today. The problem is not with science itself: the problem is how we manage large organizations. 

You may have experienced the problem in your own career. Organizations seem to work nicely for the purpose they were built for when they include a few tens of people, maybe up to a hundred members. Beyond that, they devolve into conventicles whose main purpose is to gather resources for themselves, even at the cost of damaging the enterprise as a whole. 

Is it unavoidable? Probably yes. It is part of the way Complex Adaptive Systems (CAS) work, and, by all means, human organizations are CASs. These systems are evolution-driven: if they exist, it means they are stable. So, the existing ones are those that managed to attain a certain degree of stability. They do that by ruthlessly eliminating the inefficient parts of the system. The best example is Earth's ecosystem. You may have heard that evolution means the "survival of the fittest." But no, it is not like that. It is the system that must survive, not the individual creatures. The "fittest" creatures are nothing if the system they are part of does not survive. So, ecosystems survive by eliminating the unfit. Gorshkov and Makarieva call them "decay individuals." You can find these considerations in their book "Biotic Regulation of the Environment."

It is the same for the CAS we call "Science." It has evolved in a way that maximizes its own survival and stability. That's evident if you know just a little about how Science works: it is a rigid, inflexible, self-referencing organization, refractory to all attempts at reform from the inside. It is a point that Mastroianni makes very clear in his post. A huge amount of resources and human effort is spent by the scientific enterprise to weed out what is defined as "bad science," meaning anything that threatens the stability of the whole system. That includes the baroque organization of scientific journals, the gatekeeping of the disastrously inefficient "peer review" system, the distribution of research funds by rigid old-boy networks, the beastly exploitation of young researchers, and more. All this tends to destroy both the very bad (which is a good thing) and the very good (which is not a good thing at all). But both the very good and the very bad threaten the stability of the entrenched scientific establishment. Truly revolutionary discoveries that really could change the world would reverberate through the established hierarchies and make the system collapse. 

Mastroianni makes these points from a different viewpoint, which he calls the "weak links -- strong links" problem. It is a correct way of seeing it if you frame Science not as a self-referencing system but as a subsystem of a wider system: human society. In this sense, Science exists to serve useful purposes, not just to pay salaries to scientists. What Mastroianni says is that we should strive to encourage good science instead of discouraging bad science. What we are doing instead is settling on mediocrity, and we just waste money in the process. Here is how he summarizes his idea. 

I strongly encourage you to read Mastroianni's whole post because it is very well argued and convincing. It describes what we should do to turn science into the useful thing we badly need in this difficult moment for humankind. But the fact that we should do it doesn't mean it will be done. Note, in Mastroianni's post, the box that says "Accept Risk." This is anathema to bureaucrats, and the need for it nearly guarantees that it will not be done. 

Yet, we might at least try to push science into doing something useful. Prizes could be a good idea: by offering prizes, you pay only for success, not for failure. But in Science, prizes are rare; apart from the Nobel Prize and a few others, scientists do not compete for prizes. That's something we could work on. And, who knows, we might succeed in improving science, at least a little! 





 



Friday, April 7, 2023

Why we Can't Change Anything Before it is Too Late.

 


Yours truly, Ugo Bardi, in a recent interview on a local TV station. Note the "Limits to Growth" t-shirt and, as a lapel pin, the ASPO-Italy logo. 

A few days ago, I was invited to an interview about the energy transition on a local TV station. I prepared myself by collecting data. I was planning to bring to the attention of viewers a few recent studies showing how urgent and necessary it is to move away from conventional engines, including a recent paper by Roberto Cazzolla-Gatti (*) showing that the combustion of fossil fuels is one of the main causes of tumors in Italy. 

And then I had a minor epiphany. 

I saw myself from the other side of the camera, appearing on the screen in someone's living room. I saw myself as one more of those white-haired professors who tell viewers, "look, there is a grave danger ahead. You must do as I say, or disaster will ensue."

No way. 

I could see myself appearing to viewers as more or less the same as one of the many TV virologists who had terrorized people with the Covid story over the past three years. "There is a grave danger caused by a mysterious virus. If you don't do as I say, disaster will ensue." 

It scared people a lot, but only for a while. And now the poor performance of the TV virologists, Tony Fauci and the others, casts a shadow over the general validity of science. As a result, we see a wave of anti-science sweeping the discussion, carrying along the flotsam of decades of legends: fake lunar landings, earthquakes as weapons, how Greenland was green at the time of Erik the Red, and don't you know that the climate has always been changing? Besides, Greta Thunberg is a bitch.

But it is not so much the fault of the TV virologists, although they have done their part in creating the damage. It is the human decision-making system that works in a perverse way. More or less, it works like this:

  1. Scientists identify a grave problem and try to warn people about it. 
  2. The scientists are first demonized, then ignored.
  3. Nothing is done about the problem.
  4. When it is discovered that the warning was correct, it is too late. 

Do you remember the story of the boy who cried "wolf"? Yes, it works exactly like that in the real world. One of the first modern cases was that of "The Limits to Growth" in 1972. 

  1. A group of scientists sponsored by the Club of Rome discovered that unrestrained growth of the global economic system would lead to its collapse.
  2. The scientists and the Club of Rome were demonized, then ignored.
  3. Nothing was done about the problem.
  4. Now that we are discovering that the scientists were right, collapse is already starting.
More recently, we saw how:
  1. Scientists tried to alert people about the dangers of climate change.
  2. Scientists were demonized and then ignored.
  3. Nothing was done about climate change.
  4. When it was discovered that the warning was correct, it was too late (and it is).
There are many more examples, but it almost always works like this. Conversely, when, for some reason, people take heed of the warning, the results may be even worse, as we saw with the Covid epidemic. In that case, you can add a 1b line to the list that says, "people become scared and do things that worsen the problem." After a while, line 2 (scientists are demonized) takes over, and the cycle goes on.  

So, what are the conclusions? The main one, I'd say, is: 

Avoid being a white-haired scientist issuing warnings about grave dangers from a TV screen

Then, what should you say when you appear on TV (and you happen to be a white-haired scientist)? Good question. My idea for that TV interview was to present change as an opportunity rather than an obligation. I was prepared to explain how there are many possible ways to improve the quality of our life by moving away from fossil fuels. 

How did it go? It was one of the best examples I have experienced in my life of the general validity of the principle that says, "No battle plan survives contact with the enemy." The interview turned out to be a typical TV ambush in which the host accused me of wanting to beggar people by taking away their cars and their gas stoves, of trying to poison the planet with lithium batteries, and of promoting the exploitation of the Third World's poor with coltan mines. I didn't take that meekly, as you may imagine. 

The interview became confrontational, and it quickly degenerated into a verbal brawl. I am not linking to the interview; it is not so interesting. Besides, it was all in Italian. But you can get some idea of how these things go from a similar ambush against Matt Taibbi on MSNBC. What did the viewers think? Hopefully, they switched channels. 

In the end, I am only sure that if something has to happen, it will. 


(*) The paper by Roberto Cazzolla-Gatti on the carcinogenic effects of combustion is truly impressive. Do read it, even if you are not a catastrophist. You'll learn a lot. 

(**) CJ Hopkins offers some suggestions on how to behave when you are subjected to this kind of attack. He says that you should refuse to answer some questions, answer with more questions, avoid taking the interviewer seriously, and things like that. It is surely better than just trying to defend yourself, but it is extremely difficult. It was not the first time that I had faced this kind of ambush, and when you are in the crossfire, you have little or no chance of avoiding a memetic defeat. 

How to Make Your Google Masters Happy: Fixing the Privacy Policy of Your Blog

  


As I told you in a previous post, for months Google had been pestering me with notices that there was something wrong with the privacy policy of my blog, and that if I didn't fix it, they would start doing dark and dire things, such as making my blog invisible to search engines. Now, after many attempts and much struggle, I can tell you that the saga is over. So, I am posting these notes, which may be useful for you in case you find yourself in the same situation.

The problem had to do with the privacy regulations of the EU and the EEA, aka the General Data Protection Regulation (GDPR): I had to obtain consent from the user for something not explicitly described in the ominous messages I was receiving. Fixing the problem turned out to be a small Odyssey. 

1. Using search engines. The first thing you normally do in these cases is look around the Web to see if someone has already solved the problem that plagues you. On this specific question, I immediately found myself facing a wall of sites claiming that they could solve the problem for me if I just paid some money. Mostly, they looked like traps, but I was dumb enough to pay $29 for a "personalized policy declaration" that came with a request for a further payment for hosting it on their site. I took care of creating a subpage of the blog to host it at no cost myself. 

First lesson learned: skip the sites that ask you for money to fix this problem, unless you run a commercial site and need to do it quickly. 


2. The text I downloaded may have been a good policy declaration, but Google still wasn't satisfied, and I later learned that they didn't give a hockey stick about it. 

Second lesson learned: you can spend a lot of time (and also money) fixing the wrong problem.


3. I contacted Google's customer service at ddp-gdpr-escalations@google.com -- yes, they have a customer service to help people fix exactly this GDPR problem. Amazingly, I got in contact with someone who seemed to be a real person -- the messages were signed "Gargi," which is an Indian male name. After a few interactions, he finally told me what Google wanted. It was simple: I just had to add the sentence "cookies are used for ads personalization" to the "consent banner." And that was it. Gargi even sent me a screenshot of what the banner should look like. It was a step forward. 

Third lesson learned. Human beings can still be useful for something. 



4. But who controls the cookie banner? I had never placed a cookie banner on my blog, and I saw no such thing appearing when I loaded the blog. Other people told me that they didn't see any banner on the first page of my blog, either. I had always interpreted the lack of a banner as a consequence of my blog not being a commercial one. But no, the trick was a different one. After much tinkering and head-scratching, I discovered that the banner stores the visitor's decision in the browser and doesn't show up again for those who have already accepted the cookies. I could see the banner if I erased the cookies from my main browser, or if I used a "virgin" browser. The beauty of this trick is that not even the people at Google's customer service seemed to know it; at some point, they started telling me that my blog had no cookie banner, and I had to explain to them that they just weren't seeing it, but it was there. Once they understood this, it was no longer a problem. But it took time. 
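
If you want to check this on your own machine, here is a minimal snippet that you can paste into the browser console (F12) while viewing the blog. Where exactly the banner stores the consent flag is an assumption on my part -- it may be a cookie or a localStorage entry -- so the snippet simply lists both:

// List everything the site has stored locally; if the banner was already
// accepted, the consent flag should appear somewhere in this output.
document.cookie.split('; ').forEach(function (c) { console.log('cookie:', c); });
Object.keys(localStorage).forEach(function (k) {
  console.log('localStorage:', k, '=', localStorage.getItem(k));
});

Deleting the flag (or using the browser's "clear site data" function) should make the banner reappear, just like using a "virgin" browser.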

Fourth lesson learned: Truth may be hidden, and often is. 


5. How do you change the text of the cookie consent banner? One of those things that look easy but are not easy at all. First, you have to access the HTML code of your blog, which is not an easy task by itself. It is like open-heart surgery: you make a mistake, and the patient dies. Then, even if you know how to handle HTML, you soon discover a little problem: there is NO CODE for the cookie consent banner on the HTML page of Google's blogs. The banner is dynamically generated from somewhere, Google knows where, and it is not accessible with the tools provided by Google's Blogger. 
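
You can verify this yourself: the banner exists only in the rendered page, never in the editable template. In the console of a browser that still shows the banner, a one-line check like the following should return the banner element. Note that the id 'cookieChoiceInfo' is a guess on my part, taken from Google's old cookieChoices script; the id actually generated may differ:

// Returns the banner element if it is present in the rendered DOM, null otherwise.
console.log(document.getElementById('cookieChoiceInfo'));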

Fifth lesson learned: Google plays with you like a cat plays with a mouse.

6. So there has to be a widget for the cookie banner, right? Yes, there is a widget that you can set up to show a cookie banner the way you like it. The problems are that 1) it cannot show the banner at the top of the page, where these banners normally go, and 2) it doesn't replace the Google-generated banner. The result is that you have two different banners in different areas of the screen at the same time. Apart from the awful effect on the way your blog looks, it is not surprising that Google was still not happy with this solution.

Sixth lesson learned: Some solutions are not. 


7. How about trying ChatGPT? Eventually, ChatGPT gave me the right hint: it said that it was possible to insert a cookie banner script in the main HTML page of the blog. I tried the scripts provided by ChatGPT and none worked, but those provided by helpful human bloggers did. I found that scripts (unlike widgets) can supersede the Google-created banner. After some tweaking, Google was finally happy. 

Seventh lesson learned: ChatGPT is your friend, but it is a bad programmer.

________________________________________________

Conclusion. 

The good thing about this story is that I learned something, but it was also a sobering experience. The way Google managed it was so bad that I can only understand it as an explicit attempt to discourage small bloggers who are not making money from their blogs and who can't afford a professional maintenance service. Why harass poor bloggers into doing something that Google could do easily on a banner that is wholly managed by Google? I mean, do you realize how much time was lost doing something as simple as adding a single sentence to a banner? 

It seems clear to me that at Google they don't like blogs in general. Even though they offer a blogging platform, it is a poor service for several reasons. Yet, Blogger also has several good points, the main one being that it is free. It also offers possibilities of customization that other, "bare-bones" platforms (e.g., Substack) do not provide. For someone who just wants to express his or her ideas in public, it can still be a good choice. But, after this experience, I am wary. Google knows what they have in mind next in Mountain View. So, I may switch platforms in the near future. For now, "The Seneca Effect" blog is still there, alive and reasonably well, even though shadow-banned by the Powers That Be. And maybe these notes will be useful for you.

Final lesson I learned: I, for one, welcome our new Google masters 

_________________________________________________________

Here is the script that controls the text of the cookie consent banner, to be cut and pasted into the HTML code of a Blogger blog right after the </head> tag. It is simple, but it wasn't so simple to understand what was needed. 

<script type='text/javascript'>
  // Overrides the text of Blogger's auto-generated cookie notice.
  // Quotation marks are XML-escaped (&quot;) because the template is parsed as XML.
  cookieOptions = {
    msg: &quot;This site uses cookies for ad personalization, to analyse traffic and to deliver some Google services. By using this site, you agree to its use of cookies.&quot;,
    link: &quot;https://www.senecaeffect.com/p/privacy-policy-for-seneca-effect-blog.html&quot;,
    close: &quot;Okay!&quot;,
    learn: &quot;Learn More&quot;
  };
</script>
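
As far as I can tell, this works because the script that Google injects to build the banner checks whether a global cookieOptions object has already been defined and, if it finds one, uses its texts instead of the default ones. That is why the snippet must sit right after the </head> tag: it has to run before Google's own banner code does.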




Sunday, April 2, 2023

"Flattening the Curve." The Origins of a Bad Idea


In 2003, the "Anthrax scare" led many people to use duct tape to seal the windows of their homes to protect themselves from the deadly germs. It would have been a good idea if the purpose was to suffer even more than usual from indoor pollution while at the same time doing little or nothing against a hypothetical biological attack. Yet, this folly was recommended and encouraged by the national government and local ones. It was the first taste of more to come. I already mentioned this story in a previous post, but here I'll go more in-depth into the matter. 


We are still reeling from three years of madness, but it seems that many of us are starting to make a serious effort to try to understand what happened to us and why. How could it be that the reaction to the Covid epidemic involved a set of "measures" of dubious effectiveness, from national lockdowns to universal masking, that had never been tried before in humankind's history? 

For everything that happens, there is a reason for it to happen, and the "non-pharmaceutical interventions" (NPIs) that were adopted in 2020 have their reasons, too. Some people speak of global conspiracies, and some even of evil deities, but the origin of the whole story may be more prosaic. It can be found in the development of modern genetic manipulation technologies, a new branch of science that started being noticed in the 1990s. As normally happens, scientific advances have military consequences; genetic manipulation was no exception. 

Not that "bioweapons" are anything new. The chronicles of ancient warfare report how infected carcasses of animals were thrown inside the walls of besieged cities and other similar niceties. More recently, during the 19th century, it is reported that blankets infected with smallpox were distributed to Native Americans by British government officials. Overall, though, biological warfare never was very effective, and not even smallpox-infected blankets seem to have been able to kill a significant number of Natives. Besides, biological weapons suffer from a basic shortcoming: how can you damage your enemies only while sparing your population? Because of this problem, the history of biological warfare does not include cases where bioweapons were used on a large scale, at least so far. The low effectiveness of bioweapons is the probable reason why it was not difficult to craft an international agreement on banning their use ("BWC", biological weapon convention), ratified in 1975. 

Up to recent times, bioweapons were not seen as much more dangerous than normal germs, and the generally accepted view on how to face epidemics favored a soft approach: letting the virus run through the population with the objective of reaching natural "herd immunity." For instance, in a 2007 paper, four respected experts in epidemiology still rejected such ideas as confinement, travel bans, distancing, and others. On quarantines, they stated that "There are no historical observations or scientific studies that support the confinement by quarantine of groups of possibly infected people for extended periods in order to slow the spread of influenza." But military planners were working on the idea that bioweapons would be orders of magnitude deadlier than the seasonal flu, and that changed everything.
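(To recall what "herd immunity" means in quantitative terms: in the textbook description of epidemics, an outbreak stops growing on its own once the immune fraction of the population exceeds 1 - 1/R0, where R0 is the "basic reproduction number" of the pathogen. Here is a minimal sketch of that calculation in JavaScript; my own illustration with made-up values of R0, not something taken from the paper cited above.)

// Textbook herd immunity threshold: an epidemic recedes once the
// immune fraction of the population exceeds 1 - 1/R0.
// The R0 values below are arbitrary examples, not real-world estimates.
function herdImmunityThreshold(r0) {
  return 1 - 1 / r0;
}

for (const r0 of [1.5, 2.5, 4.0]) {
  console.log("R0 = " + r0 + ": threshold = " +
    (100 * herdImmunityThreshold(r0)).toFixed(0) + "%");
}

For R0 = 2.5, for instance, the epidemic starts receding when about 60% of the population is immune. No intervention is needed for this to happen, only time and infections (or vaccinations).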

Genetic engineering technologies were said to be able to create new, enhanced germs, an approach known as "gain of function." Really nasty ideas could also become possible, such as "tailoring" a virus to attack only a specific ethnic group. Even without this feature, a country or a terrorist group could develop a vaccine against the germs it created and, in this way, inflict enormous damage on an enemy while protecting its own population (or the select few of an elite aiming at global depopulation).

Fortunately, none of these ideas turned out to be feasible. Or, at least, there is no evidence that they could be put into practice. That doesn't mean they were not explored, even though research on biological weapons is prohibited by the BWC. Researching new germs is orders of magnitude less expensive than making nuclear weapons, and so it is feasible even for relatively poor governments (biological warfare is said to be the weapon of the poor). Whether truly effective bioweapons can actually exist is at least doubtful but, supposing that they could, it makes sense to prepare for a possible attack.

The "anthrax scare" of 2001 was the first example of what a modern terror attack using bioweapons could look like, even though it was nothing like a weapon of mass destruction, and nobody to this date can say who spread the germs using mail envelopes. Nevertheless, it was taken seriously by the authorities, and it led to the "bioterrorism act" in 2002. Shortly afterward, Iraq was accused of having been developing biological weapons of mass destruction, and you may remember how, in 2003, Colin Powell, then US secretary of state, showed on TV a vial of baby powder, saying that it could have been a biological weapon. Over the years, several government agencies became involved in planning against bioweapon attacks, and the military approach to biological warfare gradually superseded the traditional plans on how to deal with ordinary epidemics. 

The idea was that a truly deadly virus attack would cripple the infrastructure of a country and cause immense damage before the germs could be stopped by a specifically developed vaccine. So, it was imperative to react swiftly and decisively with non-pharmaceutical interventions ("NPIs"), or "measures," to stop the epidemic or at least slow it down. This idea was rarely expressed explicitly in public documents, but it was clearly the inspiration for several studies that examined the effects of an abnormally deadly virus. One was prepared by the Department of Homeland Security in 2006. Another comes from the Rockefeller Foundation in 2010, where you can read of a scenario called "Lock Step" that described something very similar to what came to pass in 2020 in terms of restrictions.

These military-oriented studies were mostly qualitative. They were based on typical military ideas such as color-coded emergencies, say, "red alert," "orange alert," and the like. For each color, there was a series of recommended measures, but little was said about why exactly certain colors were coupled with certain actions. But there were also attempts to quantify the effect of NPIs. A paper on this subject was published in 2006 by the group of Neil Ferguson at Imperial College London, where the authors undertook the task of disentangling the effects of different measures on the spreading of a pathogen. Factors such as home quarantines, school closures, border closings, reduced mobility, and others were examined (interestingly, the widespread use of face masks was not considered). The study didn't expend much effort on comparing its guesses with real-world data, but it was not so bad in comparison with the harmless mumbo-jumbo that scientists normally publish. Let's say that it might have been an interesting exercise in epidemiological modeling, but nothing more. The problem was that it came at a moment in which "biological warfare" was all the rage, and it may have influenced later military planning.

A study that had an enormous influence on the (mis)management of the 2020 pandemic was directly inspired by Ferguson's paper. It was proposed in 2007, in a CDC report by Rajeev Venkayya, who presented his model in the form of the "double curve" that later became famous. Here it is; it is the origin of the "flatten the curve" meme that became popular 13 years later.

[Figure: the "double curve" diagram from the 2007 CDC report]

Remarkably, Venkayya's model was completely qualitative. The curves were just a rendition of those proposed by Ferguson et al., without any attempt at quantification. Venkayya's paper didn't attain great popularity, and it remained dormant for more than a decade until the Covid-19 pandemic arrived. Then, suddenly, the double curve became highly fashionable. It went viral and literally "exploded" on the Web and in the media.
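To see what the double curve is supposed to show, here is a minimal sketch of the standard textbook "SIR" epidemic model in JavaScript. This is my own illustration, not Ferguson's or Venkayya's actual code, and all the parameters are arbitrary assumptions. Running the same epidemic twice, once at full transmission and once with the transmission rate reduced by an assumed fraction to mimic the "measures," produces the famous pair of curves: a lower, delayed peak.

// Minimal textbook SIR model, integrated with daily Euler steps.
// All parameter values are assumptions, for illustration only.
function runSIR(beta, gamma, days) {
  let s = 0.999, i = 0.001, r = 0; // fractions of the population
  let peak = 0, peakDay = 0;
  for (let day = 0; day < days; day++) {
    const newInfections = beta * s * i; // transmission
    const recoveries = gamma * i;       // recovery
    s -= newInfections;
    i += newInfections - recoveries;
    r += recoveries;
    if (i > peak) { peak = i; peakDay = day; }
  }
  return { peak, peakDay };
}

const gamma = 0.1;                               // ~10-day infectious period
const baseline = runSIR(0.3, gamma, 365);        // R0 = 3, no measures
const mitigated = runSIR(0.3 * 0.6, gamma, 365); // transmission cut by 40%

console.log("No measures:   peak " + (100 * baseline.peak).toFixed(1) +
  "% of the population on day " + baseline.peakDay);
console.log("With measures: peak " + (100 * mitigated.peak).toFixed(1) +
  "% of the population on day " + mitigated.peakDay);

Note that the flattening is built into the assumptions: the model takes for granted that the "measures" reduce transmission by the stated fraction. It says nothing about whether real-world interventions actually achieve that reduction, which is exactly the weak point discussed here.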

During the early days of the Covid-19 epidemic, one of the main supporters of the double curve model was Tomas Pueyo, who was perhaps the first to use the term "flattening the curve." His post of March 10, 2020, on the matter had more than 40 million views. Notably, Pueyo's previous expertise was in engineering and communication, and he was known for a discussion of the characters of the "Star Wars" series as role models in business. So, you might be reasonably perplexed about the authority he claimed to have in epidemiology. Nor is it clear who pushed his blog to the top of the search engine rankings. In any case, Pueyo showed no signs of undue modesty in the way he considered himself an expert. For instance, he wrote:

As a politician, community leader or business leader, you have the power and the responsibility to prevent this.

You might have fears today: What if I overreact? Will people laugh at me? Will they be angry at me? Will I look stupid? Won’t it be better to wait for others to take steps first? Will I hurt the economy too much?

But in 2–4 weeks, when the entire world is in lockdown, when the few precious days of social distancing you will have enabled will have saved lives, people won’t criticize you anymore: They will thank you for making the right decision.

Pueyo was even invited on TV, where he had a chance to vocally disparage the concept of "herd immunity," which has been a staple of epidemiology for at least a century. If you listen to him in that TV show, you'll probably notice that he provided no evidence that he understood how epidemics spread in a population, but that's not surprising for someone whose expertise is mainly in marketing and communication. Look at minute 12:30 of the TV interview to see Pueyo's face and gestures when the scientist being interviewed mentions herd immunity. Clearly, Pueyo had no idea what herd immunity is. It is amazing that this bizarre character was allowed to have so much influence on global policies. But so it went, and the double curve was accepted as scientific wisdom and the indisputable target for governmental interventions everywhere.

I discussed the shortcomings of the double curve model in a previous post. Basically, the model was flawed because it didn't include methods to verify whether the measures were doing anything or not. So, the results of the attempt to "flatten the curve" were modest, if they existed at all. But the story didn't end there. Nothing is so bad that someone can't make it worse.

Little more than a week after launching his "flatten the curve" post, on March 19, 2020, Tomas Pueyo was on the march again, and he moved rapidly in the direction of the worse. He got rid of the only element of Venkayya's model that had some contact with reality: the fact that epidemic curves are bell-shaped. Now Pueyo said that the natural shape of epidemic curves is exponential growth and that only specific measures could force the curve to bend down into a bell shape. That was probably a necessary consequence of the fact that Pueyo never really understood the basics of epidemiology. So, having started with the wrong assumption (exponential growth), he moved to the wrong conclusions. Starting from an unrealistically large value for the fatality rate (3.4%), he proposed the graph below, which he called "The Hammer and the Dance." The idea was that the epidemic curve would grow to infinity unless it was brought down by means of harsh and immediate containment measures (the hammer) and then kept low with lighter measures (the dance). The data showing that this could be done... Data? What data?
[Figure: "The Hammer and the Dance" graph from Pueyo's post]

This diagram, evidently, doesn't come from an epidemiological model, not even a highly simplified one. It is just drawn according to what Pueyo thought the behavior of the system should be.
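Incidentally, no "hammer" is needed to bend the curve. In the same textbook SIR model sketched above, growth is approximately exponential only at the very beginning, when nearly everyone is still susceptible; the curve then bends and falls by itself as susceptibles are depleted. Here is a minimal check, again my own illustration with assumed parameters:

// With no measures at all (beta = 0.3, gamma = 0.1, so R0 = 3), the
// infected fraction still rises, peaks, and falls on its own: the
// exponential phase lasts only while almost everyone is susceptible.
let s = 0.999, i = 0.001, r = 0;
const beta = 0.3, gamma = 0.1;
for (let day = 0; day <= 120; day++) {
  if (day % 20 === 0) {
    console.log("day " + day + ": infected " + (100 * i).toFixed(2) +
      "%, susceptible " + (100 * s).toFixed(1) + "%");
  }
  const newInfections = beta * s * i;
  const recoveries = gamma * i;
  s -= newInfections;
  i += newInfections - recoveries;
  r += recoveries;
}

The output shows a bell-shaped wave that peaks and declines with no intervention whatsoever, once the immune fraction passes the herd immunity threshold (1 - 1/3, about 67%, for these assumed parameters).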

The story of the hammer and the dance was the start of a disastrous debate (let's call it that) in which plenty of people popped up claiming that herd immunity was a flawed concept and that the only way to avoid everyone being infected, and many killed, was to enact containment measures. It also led to the concept that the curve could not just be flattened but "crushed." Again, we can see the influence of the military approach: terms such as "flattened" or "crushed" are typical of warfare, but viruses cannot be killed using military weapons. In any case, the concept of "crushing the curve" gave rise to the idea of "Covid Zero." It was another disaster that befell us, sending entire countries into human and economic catastrophe in the desperate pursuit of an unattainable goal.

Mostly, the whole disaster was due to our inability to understand how models work. Formal mathematical models are a recent feature of the human way of dealing with reality. They are supposed to help you understand how the world works, and even predict how it will evolve. But you have to be careful: the model is not the reality, just as a map is not the territory. A wrong model is not necessarily dangerous, but it can be: think of a military model that tells you that attacking Russia in winter is a good idea. The idea of "crushing the curve" is another example.

Will we ever learn how to use models? Maybe. But, for the time being, models play the role of guns handled by children. 

Saturday, April 1, 2023

A Revolution is Coming from Detroit! Car Tailfins are Back!

 


Today, a consortium of Detroit automakers announced plans to reintroduce tailfins in a new generation of dream cars. "We know what customers want," explained an industry representative, "people don't want those silly electric cars. They want real cars that look like cars, smell like cars, and make the noise of cars. Tailfins are the essence of the American car, which is the essence of the American dream. And we are pleased to announce that they are coming back." 

Representatives of the US oil industry expressed their satisfaction at the announcement from Detroit, noting how the "shale revolution" has brought the US back to the position of the world's top oil producer, a position that, with new and substantial investments, can be maintained while shale oil production continues to grow, demonstrating how unfounded were the silly worries about "peak oil."

President Biden commented on the announcement with a statement recorded as: "car funs are finny, and we welcome them back. They are part of the American democratic project, and we are sure that this technological solution will also be adopted in other countries."

Former President Donald Trump declared that tailfins made America great and that they are part of the MAGA concept. Dr. Anthony Fauci, former Chief Medical Advisor to the President of the United States, noted that science says that tailfins are a good thing and that the turbulence they generate can remove viruses from the air. "Two superimposed sets of fins," Fauci said, "are better than a single one."

NATO Secretary-General Jens Stoltenberg declared that the new trend has interesting and useful military applications and that the new generation of Leopard tanks in Europe will be equipped with tailfins. "These tailfins," Stoltenberg explained, "are a symbol of the American commitment to keep Europe free and will be mounted on all NATO tanks. These military fins will be equipped with the capability of shooting depleted uranium projectiles." 



(image from Dezgo.com)


Incredibly, some people took this post seriously! It shows, at least, that most people read only the title, or at most the first few lines, of a post. 


Thursday, March 30, 2023

Help Needed: can someone check the Privacy Policy of the "Seneca Effect" blog?


Folks, Google keeps pestering me with requests for an appropriate "EU User Consent Policy" notice, but it never tells me what exactly is wrong with mine. The only message I receive from them is:

Customer ID: ca-pub-7714140901520074

Our policy review indicates that while the site(s)/app(s) below have a consent notice in place, its wording fails to meet the requirements of our policy:

senecaeffect.com

So, maybe someone more knowledgeable than me could take a look at the page where I keep the policy specifications. It is here: https://www.senecaeffect.com/p/privacy-policy-for-seneca-effect-blog.html I tried several versions of it, but Google never seems to be happy with it.

Thanks a lot for any help you can provide!


UB

This is the letter I keep receiving from them. 

Google
Client ID: ca-pub-7714140901520074
 
Dear Publisher

On May 25, 2018, we updated Google’s EU User Consent Policy to coincide with the General Data Protection Regulation (GDPR) coming into force. This Policy outlines your responsibility for making disclosures to, and obtaining consent from, end users of your sites and apps in the European Economic Area (EEA) along with the UK.

It has come to our attention that the attached site(s)/app(s) do not comply with our Policy, because they:

  • do not seek to obtain consent from users, and/or
  • do not correctly disclose which third parties (including Google) will also have access to the user data that you collect on your site/app. You can view these controls and the list of ad technology providers in your Ad Manager, AdSense or Admob account.
  • do not otherwise meet the requirements of Google's EU User Consent Policy.

 

If you have guidance or confirmation from a Data Protection Authority that the domains listed do not require a consent notice or that they otherwise already comply with applicable privacy laws, including GDPR, please contact us. We will review any guidance you have received from a regulatory body and take action accordingly.
Action Needed

Please check the site(s) or app(s) listed in the attached file and take action to ensure they comply with our Policy. We will re-review your site(s) or app(s) regularly and monitor your account. We may take action, including suspension, if the Policy violations have not been resolved by Apr 5, 2023.

Policy Requirements

The EU User Consent Policy outlines your responsibility as a user of our ad technology to: 

  • Obtain EEA along with UK end users’ consent to:
      • the use of cookies or other local storage where legally required; and
      • the collection, sharing, and use of personal data for personalization of ads.
  • Identify each party that may collect, receive or use end users’ personal data as a consequence of your use of a Google product.
  • Provide end users with prominent and easily accessible information about those parties’ use of personal data.
Find Out More
You can refer to our Policy help page for more information on complying with Google’s Policy including a checklist for partners to avoid common mistakes when implementing a consent mechanism. We also recommend that you consult with your legal department regarding your compliance with the GDPR and Google’s policies.
Need some help?
Contact us, we are here to help!
Sincerely, 
The EU User Consent Policy Team

Update "Ticket" received from Google

Hi Ugo Bardi,

Thank you for reaching us out.



Your website  senecaeffect.com <Customer ID: ca-pub-7714140901520074> is flagged for non-compliance with the EU User Consent Policy.

Please check the website flagged and take action to ensure you comply with our Policy by referring to the resources shared. We will re-review your websites regularly, monitor your account and you will be contacted if Google has confirmed the issue has been fully resolved.

Please note when we review a website, we will accept different approaches to compliance as long as they meet our policy requirements.

  • We expect users to see a clear consent notice on your website so that they can take affirmative action to indicate their decision, e.g. clicking an “OK” button or an “I agree” button.
  • We expect users to be told how the site will use data. We suggest you make clear that their personal data will be used for personalization of ads and that cookies may be used for personalized and non-personalized advertising on the first layer of Consent notice.
  • We expect users to be informed about how Google will use their personal data when they give consent on your site. We encourage you to link the Google’s Privacy & Terms site to your Privacy Policy page provided it is accessible through your consent notice.

This is not an exhaustive list, please refer to the external checklist to avoid common mistakes when implementing a consent mechanism, and the help page cookiechoices.org has examples of consent notices. 

You can also update the consent notice and let us know so that we can request the Google’s Policy team to re-audit.



Sincerely,

Gargi
 
NB: If you need to reference this support ticket in the future, the ID number is 3-4137000033827

Your feedback is crucial; it helps us understand how we can better meet or exceed your needs.