The Roman Philosopher Lucius Anneaus Seneca (4 BCE-65 CE) was perhaps the first to note the universal trend that growth is slow but ruin is rapid. I call this tendency the "Seneca Effect."

Monday, October 17, 2022

The Dark Side of Nuclear Fusion: A New Generation of Weapons of Mass Destruction?



In December 1938, the atomic era was born in Otto Hahn's beakers at the Kaiser-Wilhelm-Gesellschaft zur Förderung der Wissenschaften in Berlin (now the Max Planck Society), where the fission of uranium and thorium was discovered. In the image, the discovery as shown in Walt Disney's 1957 movie "Our Friend, the Atom."


This is a guest post by Giuseppe ("Pepi") Cima, a retired nuclear researcher. It summarizes a number of facts that are known, in principle, but largely hidden from the public. Basically, research on nuclear fusion, sometimes touted as a benign technology able to produce energy "too cheap to meter," is often financed because of its military applications. The search is for an "inertial confinement device" that would detonate without the need for a trigger in the form of a conventional fission bomb. These devices could cover a range of destructive power going from tactical warheads to planet-busting weapons. Fortunately, we are not there yet, but Cima correctly notes how the current situation is similar to the way things were in the 1930s, when a group of bright scientists started working on nuclear chain reactions with the objective of unleashing the awesome power of nuclear fission. At the time, it was an enormously difficult challenge, but the task could be accomplished by means of the lavish financial support provided by the psychopathic criminals who were in power at the time, motivated by the prospect of an enormously powerful weapon. Today, we do not lack money for military research, nor do we lack criminals at the top, so we can only hope that the task of turning nuclear fusion into even more powerful weapons of mass destruction will turn out to be unfeasible. Unfortunately, we can't be sure about that, and, if they are making good progress at it, they surely won't tell us. (U.B.)


By Giuseppe Cima

In February 1939, Leo Szilard, who had already conceived of nuclear chain reactions for energy production in 1934, realized the possibility of a bomb of extraordinary power. In September 1939, Szilard, together with Eugene Wigner and Edward Teller (the soul of the H-bomb and the only one with a driver's license), all Hungarians, went to see Albert Einstein, who was on vacation on the New Jersey shore: it was already clear what there was to be afraid of. Szilard knew Einstein well from the Berlin years; they had jointly patented a new type of refrigerator. This time the idea was a device that could destroy an entire city in one blast. Together, they wrote a letter to President Roosevelt, and almost nothing happened for about two years.

I can visualize the three Hungarians, with their strong European accents, in Washington, trying to convince the Uranium Committee: a general, an admiral, and some mature scientists. "We canna make a little bomba and it will blow up a whola city." How could they believe it? But in August 1945, six years later, nuclear power had changed the world, quickly ended World War Two, and started an industry the size of the automotive one.

After a few years, Otto Hahn became a fervent opponent of the use of atomic energy for military purposes. Even before Hiroshima, Szilard, one of the most brilliant minds of the time, was ousted from anything to do with nuclear power. He devoted himself full-time to biology and, in 1962, started the Council for a Livable World, an organization dedicated to the elimination of nuclear arsenals. In a 1947 issue of The Atlantic, Einstein claimed that only the United Nations should have atomic weapons at its disposal, as a deterrent to new wars.

Why should we recall these episodes now? Because something similar is occurring today with nuclear fusion.


The essential fusion

Today, most people probably have some idea of what nuclear fusion is; even the Italian prime minister, Mario Draghi, spoke about it at a recent parliament session. Although energy can be produced by splitting uranium nuclei in two, it can also be produced by fusing light atomic nuclei. We have all been taught that this is the way the sun works, and it has been repeated ad nauseam by people with a superficial knowledge of these processes, such as the Italian minister for the ecological transition, Roberto Cingolani. But not everyone knows that if helium could be readily generated from hydrogen, our star, made of hydrogen, would have exploded billions of years ago in a giant cosmic bang. Fortunately, the fusion of hydrogen involves a "weak" reaction and is so slow and so unlikely that, even under the extraordinary conditions of the sun's core, the power density produced by the reaction is about the same as that of a stack of decomposing manure, the kind we see smoking in the fields in winter. To radiate the low-level energy produced in its giant core, the sun, almost a million kilometers in diameter, must shine at twice the temperature of a lightbulb filament when it is on.
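
A back-of-the-envelope check of this claim (a sketch with round numbers): the Sun's average power density is simply its luminosity divided by its mass.

```python
# Sanity check: the Sun's average power density, luminosity over mass.
L_SUN = 3.8e26    # W, solar luminosity
M_SUN = 2.0e30    # kg, solar mass
print(f"{L_SUN / M_SUN:.1e} W/kg")  # ~1.9e-4 W/kg, comparable to a compost heap
```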

To do something useful on Earth by means of nuclear fusion, one can't use ordinary hydrogen but needs two of its rare isotopes, deuterium and tritium, not by chance the ingredients of H-bombs. The promoters of fusion for peaceful purposes don't mention bombs, but this is precisely what I want to talk about: the analogies between fusion now and what happened in the 1930s and 1940s.


Peaceful use?

Reading what was written by the scientists who worked on nuclear fusion in the early years of the "atomic age" shows that the development of an energy source for peaceful use, energy "too cheap to meter," is what motivated them more than anything else. The same arguments were brought forward by Claudio Descalzi, CEO of ENI, a major investor in fusion, addressing the Italian Parliamentary Committee for the Security of the Republic (COPASIR) in a hearing on December 9, 2021: fusion will offer humanity large quantities of safe, clean, and virtually inexhaustible energy.

Wishful thinking. With regard to "inexhaustible": we cannot do anything in fusion without tritium (an isotope of hydrogen), which is practically nonexistent on this planet, and most theoretical predictions (there are no experiments to date) say that magnetic confinement, the main hope of fusion, will not breed its own tritium. Speaking of "clean" energy, Paola Batistoni, head of ENEA's Fusion Energy Development Division, envisages, at reactor shutdown, the production of hundreds of thousands of tons of materials that humans will not be able to approach for hundreds of years.

However, the problem I am worried about here is a military one, mostly ignored, even by COPASIR, the Parliamentary Committee for the Security of the Republic. There are many reasons to worry about nuclear fusion: the huge amount of magnetic energy in the reactor can cause explosions equivalent to hundreds of kilograms of TNT, resulting in the release of tritium, a very radioactive gas that is difficult to contain. On top of that, the neutrons of nuclear fusion make it possible to breed fissile materials. But the risks that seem to me most worrisome in the long run will come from new weapons, never seen before.


New Weapons

To better understand this issue, let's review how classical thermonuclear weapons work, the 70-year-old ones. Their exact characteristics are not in the public domain, but Wikipedia describes them in sufficient detail. For a more complete introduction, I recommend the highly readable books by Richard Rhodes. There exist today "simple" fission bombs, which use only fission reactions to generate energy, and "thermonuclear" bombs, which use both fission and fusion for that purpose. Thermonuclear bombs are an example of inertial confinement fusion (ICF), where everything happens so quickly that all the energy is released before the reacting matter has time to disperse.

The New York Times recently announced advances in the field of inertial fusion at the Lawrence Livermore National Lab in California, with an article reporting important findings from NIF, the National Ignition Facility. What really happened was that the 192 most powerful lasers in the world, simultaneously shining on the inner walls of a gold capsule a few centimeters in size, vaporized it to millions of degrees. The X-rays emitted by this gold plasma in turn heat the surface of a 3 mm sphere of fusion fuel which, imploding, reaches ignition. Ignition means that the fusion reactions are self-sustaining until the fuel is used up. As described in the article, a thermonuclear explosion equivalent to a few kilograms of TNT occurs without an atom bomb trigger, as in the conceptually analogous, but vastly more powerful, H-bomb of Teller and Ulam from the fifties.




Fig. 1 Diagram of the Teller-Ulam thermonuclear device. The explosion is contained within a cavity, technically a "hohlraum", in analogy to the gold capsule of the NIF experiment but hundreds of times bigger.


We don't have to worry about these recent results too much, for now: NIF still needs three football fields of equipment to work, nothing that one could place at the tip of a rocket or drop from the belly of an airplane. But its miniaturization is the next step.

In fusion, military and civilian, particles must collide with an energy of the order of 10 keV, ten thousand electron-volts, the "100 million degrees" mentioned whenever fusion is discussed. Regarding the necessary fuel ingredients, deuterium is abundant, stable, and easily available. Tritium, on the other hand, with a half-life of about 12 years, can't be found in nature, and only a few fission reactors can produce it in small quantities. The world reserves are around 50 kg, barely enough for scientific experiments, and it is thousands of times more expensive than gold. Fusion bombs solved the tritium procurement issue by transmuting lithium-6, the fusion fuel of Fig. 1, instantaneously, by means of fission neutrons. In civilian fusion, instead, the possibility of extracting enough tritium from lithium is far from obvious. It is one of the important issues expected to be settled by ITER, a gigantic TOKAMAK, the most promising incarnation of magnetic fusion, under construction in the south of France with money from all over the world, but mainly from the European Union. The Russians, who invented it, and the Americans, the ones with most of the experience in the field, are skeptical partners contributing less money than Italy. The NIF inertial fusion experiment, instead, is financed with billions of dollars by the US nuclear weapons program: the most expensive fusion investment to reach ignition.
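
The equivalence between the two figures is just Boltzmann's constant at work, T = E/k_B; a one-line check:

```python
# Converting collision energy to temperature with T = E / k_B.
K_B = 8.617e-5             # Boltzmann constant, eV per kelvin
E = 10_000                 # 10 keV expressed in eV
print(f"{E / K_B:.2e} K")  # ~1.2e8 K: the "100 million degrees" of fusion
```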

Along the lines of NIF, there is also a French program, in another country armed with nuclear weapons. CEA, the Commissariat à l'Energie Atomique, Direction des Applications Militaires, finances the Laser Méga-Joule (LMJ) near Bordeaux: three billion euros, operational since October 2014. Investments like these show the level of military interest in fission-free fusion, and so far these programs are the only ones to have achieved self-sustaining reactions.


Private enterprises

In the private field, First Light Fusion, a British company, has already invested tens of millions to carry out inertial fusion by striking a solid fuel target with a projectile the size of a tennis ball. The experimental results consist, for now, of just a handful of neutrons. The amount of heat generated is, so far, undetectable, but the energy of the neutrons, 2.45 MeV, corresponds to the fusion of deuterium, the material of the target. I cite First Light Fusion to indicate that there is interest in inertial fusion even in private companies outside the national nuclear weapons laboratories. Marvel Fusion, based in Bavaria, is another private enterprise claiming a new way to inertial confinement ignition.

Some may wonder whether the twelve orders of magnitude separating the fuel density needed by ICF (hundreds of times that of solid matter) from that of a TOKAMAK (comparable to a good lab vacuum) hide alternative methods to carry out nuclear fusion for peaceful and military purposes. The answer is certainly positive. Until now, in academia, before the advent of entrepreneurs' fusion, no such proposal seemed attractive enough to be seriously pursued experimentally. The panorama could change in the years to come: the proposal of General Fusion, the company financed by Jeff Bezos, to be clear, is of this type, short pulses at intermediate density. One wonders if the founder of Amazon is aware of sponsoring research with possible military applications.


Experiments

The idea of triggering fusion in a deuterium-tritium target by concentrating laser radiation, or conventional explosives, has long fascinated those who see it as a potentially unlimited source of energy, and also those who consider it an effective and devastating weapon. At the Frascati laboratories of CNEN, the Comitato Nazionale per l'Energia Nucleare, now ENEA (Energia Nucleare e Energie Alternative), we find examples of experimentation with both methods in the 1970s; see "50 years of research on fusion in Italy" by Paola Batistoni.

According to some sources, the idea of triggering fusion with conventional explosives, as in the Frascati MAFIN and MIRAPI experiments of the CNEN review report mentioned above, was seriously considered by Russian weapon scientists in the early 1950s and vigorously pursued at the Lawrence Livermore Laboratory during 1958-61, the years of a moratorium on nuclear testing, as part of a program ironically titled "DOVE".

According to Sam Cohen, who worked on the Manhattan Project, DOVE failed in its goal of developing a neutron bomb "for technical reasons, which I am not free to discuss." But Ray Kidder, formerly at Lawrence Livermore, says the US lost interest in the DOVE program when testing resumed because "the fission trigger was a lot easier." It didn't all end there, though. It is instructive to read now an article that appeared in the NYT in 1988, which describes a nuclear experiment carried out in order to verify the feasibility of an inertial fusion explosion not triggered by fission, such as Livermore's NIF. In addition to showing the unequivocal military interest in these initiatives, the article gives an idea of the complexity, and slow pace, of their development. Nevertheless, the initiatives of the 80s seem to be bearing fruit now.

Modern nuclear devices are "boosted": they use fusion to enhance their yield and reduce their cost, but the bulk of the explosive power still originates from the surrounding fissile material, not from fusion. However, there are devices where the energy originates almost exclusively from fusion reactions, such as the mother of all bombs, the Russian Tsar Bomba. A multi-stage H-bomb of 50 megatons, it could have had its yield greatly enhanced by the addition of a tamper of fissile material, but it was preferred to keep it "clean".

It is important to underline that the H component of a thermonuclear device, unlike fissile explosives, contributes little to long-term environmental radioactivity. Uncovering the secrets of ICF could indicate how to annihilate the enemy while limiting permanent environmental damage. It is the same reason why civilian fusion is claimed to be more attractive than fission: the final products, mostly helium, are much less radioactive than the heavy elements characteristic of fission ashes. As mentioned earlier, radioactivity nonetheless jeopardizes the usefulness of civilian fusion in other ways: a heavy neutron flux reduces the already precarious reliability of the reactor, and radioactivity protection greatly increases its cost.

Despite the rhetoric of some press coverage, the relevance of ICF for energy production is minimal, for many reasons. First of all, as in the case of NIF, the primary energy, the supply power of all the devices involved, is hundreds of times higher than the thermal energy produced by the reactions; the quasi-breakeven reported refers to the energy of the laser light alone. Even more importantly, the micro-explosion repetition rate and the reliability necessary in a power plant constitute insurmountable obstacles.
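
To see why, it helps to put rough numbers on it. The figures below are approximate values reported publicly for the August 2021 NIF shot; the wall-plug figure is an order-of-magnitude estimate, not an official number.

```python
# Energy bookkeeping for a NIF-class laser shot (approximate public figures,
# used only to show why "quasi-breakeven" refers to the laser light alone).
laser_on_target = 1.9   # MJ of laser light delivered to the hohlraum
fusion_yield = 1.35     # MJ of fusion energy released
wall_plug = 400.0       # MJ drawn from the grid to fire the lasers (rough estimate)

print(f"target gain: {fusion_yield / laser_on_target:.2f}")  # ~0.7, near breakeven
print(f"plant gain:  {fusion_yield / wall_plug:.4f}")        # ~0.003, hundreds of times short
```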


Where do we stand?

Back to ICF: the Lawrence Livermore National Lab's NIF experiment is funded by the US nuclear weapons program, aiming at new weapons while complying with the yield limits imposed by the Comprehensive Nuclear-Test-Ban Treaty (CTBT). "The Question of Pure Fusion Explosions Under the CTBT" (Science & Global Security, 1998, Volume 7, pp. 129-150) explains why we should be concerned about the pure fusion weapons presently under investigation.

With nuclear fusion, we are witnessing a situation similar to what appeared clear to many of the scientists who participated in the development of weapons at the time of Hiroshima and Nagasaki: nuclear energy is frighteningly dangerous while potentially useful for producing energy and as a war deterrent.

With fusion, the balance between weapons and peaceful uses seems to be even more questionable, making further developments harder to justify. Fusion weapons, which will arrive earlier than reactors, are potentially more devastating than fission weapons, with a yield range extending both higher and lower. Low-power devices, while remaining very destructive, would not carry a strong deterrent power, and the super-high-power ones, hundreds and thousands of megatons, would have catastrophic consequences on a planetary level. On the other hand, electricity production by fusion now seems less and less likely to work out, economically less attractive than the already uninviting fission.

The wind and photovoltaic revolution, which is rendering the already proven nuclear fission obsolete despite the urgency of decarbonization, is making fusion unappealing even before it's proven to work. At the same time, possible military applications should discourage even the investigation of fusion tritium technologies. At the very least, new research regulations are needed.


It's a collective choice

Is "science" unstoppable in this instance?

First of all, I would characterize these developments as technological rather than scientific. We are talking of applications without general interest, not a frontier of science. Fusion is a kind of "nuclear chemistry" with potentially aberrant applications, in analogy to other fields that are investigated in strict isolation. Fortunately, fusion is an economically very demanding technology, impossible to develop in a home garage. Working on fusion can be, at least for now, only a collective choice, one that recalls the story of the atomic bomb at the end of the 1930s, but at a more advanced stage of development than when Szilard involved Einstein to reach Roosevelt. Is the genie about to come out of the lamp?



Author's CV - I researched nuclear fusion in labs and universities in Europe and the US, publishing around 100 peer-reviewed papers in this field. After losing faith that a deconstructionist approach to fusion could yield better reactor performances than already indicated by present-day experiments, I started an industrial automation company in Texas. I have now retired in Venezia, Italy, where I pursue my lifetime interests: environmental protection, energy conservation, teaching technology and science, and, more recently, mechanical watches. Giuseppe Cima

Previously published in Italian on Scenari per il Domani, September 14, 2022

Monday, July 11, 2022

The Mystery of the Mousetrap: Of Chain Reactions and Complex Systems




The "mousetrap chain reaction" from Disney's 1957 movie "Our Friend, the Atom." A fascinating experiment that brings up a curious question: why is the mousetrap the only thing you can buy at a hardware store that can create a chain reaction? Another mousetrap-related mystery is why, with so many experiments done, nobody had so far tried to make measurements to quantify the results. Eventually, two Italian researchers, Ilaria Perissi and Ugo Bardi, re-examined this old experiment, showing that it can be seen as much more than a representation of a nuclear reaction: it is a paradigm of the behavior of complex systems.


Walt Disney's 1957 movie, "Our Friend the Atom," was an absolute masterpiece in terms of the dissemination of scientific knowledge. It was, of course, sponsored by the US government, and it was supposed to promote the government's energy policy which, at the time, was based on the concept of "atoms for peace." So, the movie was propaganda but, at the same time, it is stunning to think that in the 1950s, the US government was making an effort to obtain informed consent from its citizens, instead of just scaring them into submission! Things change, indeed. But we can still learn a lot from this old movie.

So, "Our Friend, the Atom" is a romp through what was known about atomic physics at the time. The images are stunning, the explanations clear, and the story is fascinating with a mix of hard science and fantasy, such as the story of the genie and the fisherman. I went through my studies in chemistry having in mind the images from the movie. Still today, I tend to see in my mind protons as red, neutrons as white, and electrons as green, as they were shown in the book. 

One of the fascinating elements of the story was the chain reaction made with mousetraps. I was so impressed by that experiment that I always had in mind to redo it and, finally, last year, my colleague Ilaria Perissi agreed to give me a hand. Together, we built our wonderful, new, improved mousetrap machine! We braved the risks of flying balls, and we managed to make our experiments with only minor damage to our knuckles. And we were the first, it seems, to make quantitative measurements of this old experiment.

I will tell you about our results below but, first, a bit of history. The idea of the mousetrap chain reaction was proposed for the first time in 1947 by Richard Sutton (1900-1966). He was a physicist working at Haverford College, in Pennsylvania: a maverick physics teacher who loved to create demonstrations of scientific phenomena. And, no doubt, the idea to use mousetraps to simulate a nuclear chain reaction was nothing less than a stroke of genius. Too bad that Sutton is not mentioned at all in Disney's movie. 
 
Here is how Sutton proposed the experiment: 



Sutton seems to have actually performed his demonstration in front of his students, although we have no pictures or records of it. We tried to use the same setup, but we found that the corks are too light to trigger the traps, and the reaction dies out immediately. It works only if the traps are not fixed to the table and are left free to fly around; indeed, Sutton doesn't mention that he fixed the traps to the table. The "flying trap problem" plagues most of the experimental setups of this experiment. But if the chain reaction is generated by flying traps, it is no longer a simulation of an atomic chain reaction.

After Sutton published his idea, performing the mousetrap experiment in public seems to have become fashionable. You can find another illustration of the setup in the 1955 book by Margaret Hyde, "Atoms Today and Tomorrow."


Note how the experiment has changed, probably because of the problems of making it work with corks. Now there are no corks; instead, a marble is used to trigger one trap, which is linked to the other mousetraps by a "heavy thread." Maybe it works, but it is not what Sutton had proposed, and it is hard to present it as a simulation of anything.

So, in 1956, the filmmakers at Disney were probably scratching their heads and thinking of how they could make the mousetrap experiment work. Eventually, they decided to use ping-pong balls and a large number of mousetraps. You can see the results in the movie: traps flying all over. Same problem: this is not what the experiment was supposed to do. And there is a reason: in this case, too, we tried to use the same setup, and we found that ping-pong balls are too light to cause the traps to snap. If the traps are fixed to the table, the experiment just fizzles out after triggering one or two traps at most.

Strangely, few people noted the problem; an exception was the nuclear physicist Ivan Oelrich, but that was only in 2010! Most of the mousetrap experiments you can find on the Web (and there are many) are of the "flying-traps" type. It is a common problem with science for the public: often flashy and spectacular, and signifying nothing.

We found only two experiments on the Web where the traps were fixed to the supporting plate, as they should have been. But, even in these two cases, no quantitative measurements were performed. Strange, but there is a curse on popular science: it is often despised and can, sometimes, carry a negative mark on a physicist's career.

But never mind that. Your dream team, Ilaria and Ugo, set out to perform the experiment the correct way, with fixed traps, while measuring the parameters of the experiment. Our trick was to use relatively heavy wooden balls that could reliably trigger the traps. We also enlarged the area of the metal triggers using cardboard disks. Then, we used commercial cell phone cameras to record the results.


It took a lot of patience: it is not easy to load 50 traps with 100 wooden balls while keeping them from going off when you don't want them to. To say nothing of the spring bar snapping directly onto the experimenter's fingers: painful, but not a cause of permanent damage. We did that in the name of science, and it worked! Of course, some reviewers were horrified by a paper that used no expensive equipment and no complicated, mysterious calculations. But, with patience, we succeeded in seeing it published in a serious scientific journal.

Excuse me for being proud of our brainchild, but I truly found it elegant how well we could fit our data with a simple mathematical model, and how the trap setup mirrors not only the chain reaction in a nuclear explosion but also several other phenomena that flare up and then subside. For instance, the trap array may be seen as a mechanical simulator of the Hubbert curve, with the traps as oil wells and the balls as extracted oil. It can also simulate whaling, various cases of overexploitation of natural resources, the diffusion of memes in cyberspace, and more. Not bad for an object, the mousetrap, that had been developed with just one purpose: killing mice.


We conclude our paper in "Systems" with the following paragraph:
Mousetraps seem to be the only simple mechanical device that can be bought at a hardware store that can be used to create a chain reaction. We do not know why this phenomenon is so rare in hardware stores, but chain reactions are surely common in complex adaptive systems. We believe that the results we reported in this paper can be helpful to understanding such systems and, if nothing else, to illustrate how chain reactions can easily go out of control, not only in a critical mass of fissile uranium but also in similar dynamics occurring in the ecosystem that go under the name of “overshoot” and “overexploitation”.
Yes, really, why are mousetraps so exceptional? Who would have thought?
 
Here is the post that I published a few months ago on this subject. 

The Mousetrap Experiment: Modeling the Memesphere

 Reposted from "The Seneca Effect" Nov 22, 2021

 Ilaria Perissi with our mousetrap-based mechanical model of a fully connected network. You can find a detailed description of our experiment on ArXiv


You may have seen the "mousetrap experiment" performed as a way to demonstrate the mechanism of the chain reaction that takes place in nuclear explosions. One of its earliest versions appeared in the Walt Disney movie "Our Friend, the Atom" in 1957. 


We (Ilaria Perissi and I) recently redid the experiment with 50 mousetraps and 100 wooden balls. And here it is.

But why bother redoing this old experiment (proposed for the first time in 1947)? One reason was that nobody had ever tried a quantitative measurement, that is, measuring the number of triggered traps and flying balls as a function of time. So, we did exactly that. We used cell phone slow-motion cameras to measure the parameters of the experiment, and we used a system dynamics model to fit the data. It worked beautifully. You can find a pre-print of the article that we are going to publish on ArXiv. As you can see in the figure below, the experimental data and the model agree reasonably well. It is not a sophisticated experiment, but this is the first time it was attempted.



But the main reason why we engaged in this experiment is that it is not just about nuclear reactions. It is much more general: it describes the kind of network that's called "fully connected," that is, one where all nodes are connected to all other nodes. In the setup, the traps are the nodes of the network, and the balls are the elements that trigger the connections between nodes. It is a kind of communication based on "enhanced" or "positive" feedback.

This experiment can describe a variety of systems. Imagine that the traps are oil wells. Then, the balls are the energy created by extracting the oil, and you can use that energy to dig and exploit more wells. The result is the "bell-shaped" Hubbert curve, nothing less! You can see it in the figure above: it is the number of flying balls "produced" by the traps.
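
For readers who want to tinker, here is a minimal sketch (in Python) of the kind of two-stock model involved; the scheme and parameter values are illustrative, not the exact model of our paper.

```python
# A two-stock system dynamics sketch of the mousetrap chain reaction.
# T = loaded traps (the resource), B = flying balls (the "capital" that
# exploits it). Each ball-trap encounter fires a trap and releases two
# balls; flying balls land and leave the system after a mean time tau.

def simulate(T0=50, B0=1.0, k=0.01, tau=2.0, dt=0.01, t_end=20.0):
    T, B, t = float(T0), B0, 0.0
    history = []
    while t < t_end:
        fired = k * T * B                   # traps triggered per unit time
        T -= fired * dt
        B += (2.0 * fired - B / tau) * dt   # two balls out per trap, minus landings
        t += dt
        history.append((t, T, B))
    return history

# The number of flying balls rises and falls in a bell-shaped curve, while
# the cumulative count of fired traps follows the corresponding logistic.
for t, T, B in simulate()[::200]:
    print(f"t={t:5.2f}  loaded traps={T:6.2f}  flying balls={B:6.2f}")
```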

We found this kind of curve for a variety of socioeconomic systems, from mineral extraction to fisheries (for the latter, you can see our book, Ilaria's and mine, "The Empty Sea"). So, the mousetraps can also describe the behavior of fisheries, and they have something to do with the story of Moby Dick as told by Melville.

You could also say the mousetrap network is a holobiont, because holobionts are non-hierarchical networks of entities that communicate with each other. It is a kind of holobiont that exists in nature, but it is not common. Think of a flock of birds foraging in a field: one bird sees something suspicious, it flies up, and in a moment all the birds are flying away. We didn't have birds to try this experiment, but we found a clip on the Web that shows exactly this phenomenon.

It is a chain reaction. The flock is endowed with a certain degree of intelligence: it can process a signal and act on it. You can see in the figure our measurement of the number of flying birds. It is a logistic function, the integral of the bell-shaped curve that describes the flying balls in the mousetrap experiment.



In nature, holobionts are not normally fully connected. Their connections are short-range, and signals travel more slowly through the network. The result is often called "swarm intelligence," and it can be used to optimize systems. Swarm intelligence does transmit a signal, but it doesn't amplify it out of control, as a fully connected network does, at least normally. It is a good control system: bacterial and ant colonies use it. Our brains are much more complicated: they have short-range connections but also long-range ones, and probably collective electromagnetic connections as well.

One system that is nearly fully connected is the World Wide Web. Imagine that the traps are people while the balls are memes. Then what you are seeing with the mousetrap experiment is a model of a meme going viral on the Web. Ideas (also called memes) flare up on the Web when they are stimulated: it is the power of propaganda, and it affects everybody.

It is an intelligent system because it can amplify a signal: that is the way it reacts to an external perturbation. You could see the mousetraps as an elaborate detection system for stray balls. But it can only flare up and then decline. It can't be controlled.

That's the problem with our modern propaganda system: it is dominated by memes flaring up out of control. The main actors in this flaring are the "supernodes" (the media) that have a huge number of long-range connections. They can do a lot of damage if the meme that goes out of control is an evil one implying, say, going to war against someone, or exterminating someone. It has happened, and it will keep happening as long as the memesphere is organized the way it is, as a fully connected network. Memes just go out of control.

All that means we are stuck with a memesphere that's completely unable to manage complex systems. And yet, that's the way the system works: it depends on these waves of out-of-control signals that sweep the Web and then become accepted truths. Those who manage the propaganda system are very good at pushing the system to develop this kind of memetic wave, usually for the benefit of their employers.

Can the memesphere be rearranged more effectively, turning it into a good holobiont? Probably yes. Holobionts are evolutionary entities that nobody ever designed: they have been shaped by trial and error, as a result of the disappearance of the unfit. Holobionts do not strive for the best; they strive for the less bad. It may happen that the same evolutionary pressure will act on the human memesphere.

The trick should consist in isolating the supernodes (the media) in such a way as to reduce their evil influence on the Web. And, lo and behold, it may be happening: the great memesphere may be rearranging itself in the form of a more efficient, locally connected holobiont. Haven't you heard how many people say they don't watch TV anymore, nor open the links to the media on the Web? That's exactly the idea. Do that, and maybe you will start a chain reaction in which everyone gets rid of their TV. And the world will be much better.




Thursday, March 24, 2022

The Tortuous Way to Nuclear Fusion


Giuseppe Cima discusses the real perspectives of fusion energy



By Giuseppe Cima


Newspapers make you think that nuclear fusion for electricity production is within reach and that, unlike fission, it is cheaper, cleaner, and safer. Okay, it's not online yet but, it is argued, it's only up to us how fast it will supply useful energy. Things appear more complicated than that as soon as we take a look at the extraordinary variety of methods adopted by all the initiatives proposed.





Some use fuel from nuclear warheads, such as Commonwealth Fusion Systems, of which ENI is a shareholder. Others use the boron-proton reaction, such as TAE, financed by ENEL; Tokamak Energy promotes magnetic confinement; and First Light Fusion of Oxford has been inspired by the "claw" of Alpheus heterochaelis, the snapping shrimp. There are about thirty companies in this market and just as many, very different, methods. For the most part, these methods have already been studied, and discarded, by the academic community, and yet the industry has not decided which path to follow. This confusion indicates how distant the goal is. Let's revisit the story of fusion and the origin of this explosion of promises.

The inspiration for nuclear fusion came from looking at the sky: it originated with the mystery of the energy that powers the stars. At first, gravity was the suspect, a non-trivial idea for a body as large as the Sun. Jupiter, for example, a gaseous planet, has a surface temperature twice that of the Earth, powered by gravity while the planet shrinks by a few millimeters per year. Gravity was also Lord Kelvin's hypothesis for the Sun's power during a famous meeting of the Royal Society on January 21, 1887. The surface temperature of the Sun made him deduce that our star had been shining for about twelve million years, much longer than the Bible states.


William Thomson, 1st Baron Kelvin


Paleontologists were upset: gravitational energy doesn't fit the data, and fossils on Earth are evidence of a Sun that has been shining for at least several hundred million years. The discussion remained on hold for a while until Arthur Eddington, in the 1920s, hypothesized that the heat originates from the fusion of hydrogen into helium. It was already known that a helium atom weighs significantly less than four hydrogen atoms, and we now know that this difference accounts for the solar radiation. But something was still wrong: the internal temperature seemed too low to support the relevant nuclear reactions. A few years later, new hypotheses emerged on the transmutation of hydrogen into helium, and in Cambridge, in the 1930s, the first accelerators experimentally proved the existence of fusion reactions. In the 1940s, some illustrious veterans of Los Alamos, the physicists who worked on the atomic bomb, studied the problem, correctly identifying the main reactions, and stellar nucleosynthesis was satisfactorily understood. Recent observations of solar neutrinos confirm the hypotheses of the 1940s: the fusion of ordinary hydrogen, the same hydrogen as in H2O, provides all the energy radiated by the Sun.

What does this story have to do with the nuclear fusion that is now talked about so much in the newspapers? Almost nothing. It's surprising, but the stars, thanks to their enormous size and mind-blowing pressure, shine at 5,000 degrees while burning exasperatingly slowly, with the power intensity (ratio of power to mass) of a compost pile. If they didn't, they would rapidly explode and disappear. We, on Earth, with our energy consumption of ten kilowatts per capita, would not know what to do with stellar fusion, which has the power intensity of our basal metabolic rate. What could be used on Earth comes from another branch of science: weapons.

After the development of fission devices in World War 2, the fusion of light atoms was developed as well: in the 1950s, in order to overcome the power limits of fission, the first fusion bombs were built in secrecy. This time hydrogen, the main component of the Sun and of the water molecule, is not the fuel, but two of its rare isotopes are. From these isotopes, deuterium and tritium, come the most powerful weapons, such as the mother of all bombs, the Soviet 50-megaton Tsar Bomba (hopefully none is left in stock), which used a conventional fission bomb to trigger the hydrogen fusion. Cheaper fission weapons, the so-called boosted bombs, use the same principle. Now we know what the ideal fuel for fusion applications is, the one that burns most easily: a mixture of deuterium and tritium (DT). Almost immediately, people started to think of using fusion for electricity production, and unclassified research programs started from the 1960s.

Magnetic Confinement Fusion, MCF, turned out to be the method that attracted the favor of those looking for continuous combustion of deuterium-tritium and, until now, it has remained the preferred way. The most popular MCF device is called TOKAMAK, a Russian acronym attributed to its inventors, Igor Tamm and Andrei Sakharov (the latter is the same Sakharov who developed the Soviet H-bomb and later became famous for his civil rights activism: the world of physicists 50 years ago was smaller).

Fusion is closely linked to thermonuclear weapons: they have in common the raw materials, sophisticated and rare components, deuterium and tritium, certainly not just seawater, as we often hear. Deuterium, about 0.02% of hydrogen, is readily available and costs only $4 per gram. Tritium, on the other hand, is not present in nature, has a half-life of about 12 years, and is produced only by CANDU nuclear reactors. The stocks of tritium in the world consist of about 50 kg accumulated over the years, barely enough for future experiments, and it is thousands of times more expensive than gold.
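
Note that the stockpile is perishable: tritium decays with a half-life of about 12.3 years, so the reserve shrinks even if nothing is used. A quick sketch:

```python
# Radioactive decay of the world tritium stockpile if left unused.
half_life = 12.3  # years, tritium half-life
stock = 50.0      # kg, the quoted world reserve
for years in (10, 20, 50):
    print(f"after {years} y: {stock * 0.5 ** (years / half_life):.1f} kg")
```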

The weapons solved the tritium scarcity by creating it from an isotope of lithium bombarded by their own neutrons. It is the same lithium that is used for modern cell phone batteries and electric cars, but fusion would burn it in such modest quantities as not to increase its scarcity. Nevertheless, for civilian applications of fusion, the possibility of extracting enough tritium from lithium is far from obvious: the theoretical predictions are not good, the experiments have never been made, and it is one of the most relevant results we expect from ITER, a gigantic magnetic confinement experiment whose construction in southern France will produce results, hopefully, in a couple of decades.

It must also be borne in mind that, if fusion ever proved practical, these 1 GW reactors, each breeding its own tritium, would have to line up by the thousands to contribute significantly to the 16 TW global energy hunger.
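
Two back-of-the-envelope numbers frame the scale problem; the tritium burn rate follows from the 17.6 MeV released per D-T reaction (a sketch assuming 1 GW of fusion power per reactor):

```python
# How many 1 GW reactors a 16 TW world would need, and how much tritium
# each would burn per year (from 17.6 MeV per D-T fusion reaction).
MEV = 1.602e-13           # joules per MeV
E_DT = 17.6 * MEV         # energy released per D-T reaction
M_T = 3 * 1.661e-27       # kg, mass of one tritium nucleus (~3 amu)

reactors = 16e12 / 1e9                  # 16 TW demand / 1 GW per reactor
joules_per_year = 1e9 * 3.156e7         # one reactor running for a year
tritium_per_reactor = joules_per_year / E_DT * M_T

print(f"reactors needed: {reactors:.0f}")                                 # 16,000
print(f"tritium burned per reactor: {tritium_per_reactor:.0f} kg/year")   # ~56 kg
```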

It is already clear that three common statements about fusion, namely that it is near, cheap, and safe, are premature at best. The fuel is not seawater: it is hard to find, and it is the same stuff that most nuclear devices use, those same weapons that would be so dangerous in the wrong hands.

For now, there are still no pure fusion weapons; currently, they need a fission trigger. But explosions of pure deuterium-tritium would be very attractive because, at equal destructive power, they are "cleaner," with less radioactive leftover than the plutonium ones. It's no coincidence that the second-largest fusion project in the world, nearly $10 billion in funding, the Lawrence Livermore National Laboratory's NIF in California, has been arranged by the US nuclear weapons program and not by the civilian energy side of the Department of Energy. NIF has shown it can detonate millimeter-diameter deuterium-tritium capsules triggered by the world's largest laser, the size of three football fields. For now, they are not explosions of a kiloton, a thousand tons of TNT, but of a "milliton," a thousandth of a ton, a stick of TNT. However, we know that, as in all detonations, the difficult operation is to trigger them. NIF is also promoted as a method suitable for a continuous energy production reactor. I leave it to you to imagine how realistic it would be to string together hundreds of explosions per minute with sub-millimeter precision for decades, explosions that individually already stress to the limit a steel vacuum vessel the size of a gymnasium.

Inertial Confinement Fusion, ICF, of which NIF is the best-known example, is not the only alternative to ITER's magnetic confinement, MCF. Between the very high fuel densities of ICF, hundreds of times that of a solid, and the very low densities of MCF, tens of thousands of times less dense than our atmosphere, intermediate densities could be employed for fusion by exploiting the magnetic field of MCF and the fast compression of ICF. There have been a few attempts in this direction in the history of fusion; at Colleferro, Italy, in the CNEN laboratories, experiments were carried out in the 1960s with promising results. High-intensity sources of fusion neutrons were produced by imploding magnetized plasmas with the aid of conventional chemical explosives. The reason for the limited popularity of this method as a potential energy source is found, again, in how difficult it would be to produce high-frequency explosions, for years and years, as would be required in a reactor. Low-powered, low-fallout tactical bombs immediately come to mind as a possibility for the intermediate-density method. From the 1960s onwards, there was little mention of medium-density fusion, until a few years ago, when a small private initiative was born in Canada: General Fusion (GF).

GF offers something similar to those earlier Colleferro experiments, but with mechanical pistons instead of explosives. According to General Fusion, the new technique would allow the combustion of deuterium-tritium to be repeated cyclically at a potentially attractive rate and cost. In Colleferro, using explosives, they had not even reached ignition, and no one would have noticed General Fusion, one of the dozens of private fusion initiatives, were it not that Jeff Bezos, of Amazon fame, decided to invest up to two billion dollars in this venture. Regarding the General Fusion proposal, it is a real shame we can't seek the opinion of the CNEN, now ENEA, researchers who carried out the Colleferro experiments: unfortunately, the signatories of the publications of those times are no longer with us. This observation reminds us of the very long development times of fusion experiments, a crucial problem in this field. It would be almost equally interesting to ask Jeff Bezos if he ever noticed that the devices whose development he finances could also help to invent new nuclear armaments.

Military applications of civil fusion should certainly be a major drawback, but the inevitable radioactive activation of fusion reactor structures is even more so. Unlike the solid core of a fission reactor, the tenuous fuel of a magnetic confinement fusion reactor is transparent to the neutrons it produces; they are stopped only by the first solid wall they encounter. Even ITER, just an experiment, will generate tens of thousands of tons of radioactive material whose disposal would certainly contribute to the cost of the kilowatt-hour of a conceptually similar reactor.

The price of energy will ultimately decide the development of nuclear fusion. The comparison, for now, is with natural gas, which produces electricity in the US at the unbeatable cost of 2 ¢ per kilowatt-hour, and with the new renewables, wind and solar, now only three or four times more expensive than gas and, in some cases, even less expensive. Nuclear plants are too large to enjoy the economies of scale of mass-produced gas turbines, solar panels, and wind turbines. This feature penalizes fusion enormously, and there are solid physical and safety reasons to make us think that this situation will not change. Fusion is now too far behind in its industrial development to be able to participate in decarbonization in this century, which is why it must be pursued with a very long-term perspective and kept away from financial speculation.




Giuseppe Cima - I researched nuclear fusion in labs and universities in Europe and the US (Culham labs in the UK, ENEA in Frascati, CNR in Milan, the University of Texas at Austin) and published more than 70 peer-reviewed papers in this field. After losing faith that a deconstructionist approach to fusion could yield better reactor performances than already indicated by present-day experiments, I started an industrial automation company in Texas. I have now retired in Venezia, Italy, where I pursue my lifetime interests: environmental protection, energy conservation, teaching technology and science, and the physics of mechanical horology.



Monday, June 14, 2021

Stereocene: The Future of Earth's Ecosystem

Here is a post that appeared four years ago on "Cassandra's Legacy" and that I think is worth republishing here, on "The Seneca Effect," after some minor revisions. It is part of a series of rather speculative posts on the future of humankind and of the universe. Here are the links to the series

The Next Ten Billion Years

Star Parasites

The Long-Term Perspectives of Nuclear Energy

The Great Turning Point For Humankind: What if Nuclear Energy had not been Abandoned? 

 

______________________________________________________________________________

Stereocene: The Future of  Earth's Ecosystem



During the "golden age" of science fiction, a popular theme was that of silicon-based life. Above, you can see a depiction of a silicon creature described by Stanley Weinbaum in his "A Martian Odyssey" of 1934. The creature was endowed with a metabolism that would make it "breathe" metallic silicon, oxidizing it to silicon dioxide; hence, it would excrete silica bricks: truly a solid-state creature. It is hard to think of an environment where such a creature could evolve, surely not on Mars as we know it today. But, here on Earth, some kind of silicon-based metabolism seems to have evolved during the past decades. We call it "photovoltaics." Some reflections of mine on how this metabolism could evolve in the future are reported below, where I argue that this new metabolic system could usher in a new geological era that we might call the "Stereocene," the era of solid-state devices.
 


An abridged version of a paper published in 2016 in 
"Biophysical Economics and Resource Quality"

Ugo Bardi
Dipartimento di Chimica - Università di Firenze

The history of the earth system is normally described in terms of a series of time subdivisions defined by discrete (or "punctuated") stratigraphic changes in the geological record, mainly in terms of biotic composition (Aunger 2007a, b). The most recent of these subdivisions is the proposed "Anthropocene," a term related to the strong perturbation of the ecosystem created by human activity. The starting date of the Anthropocene is not yet officially established, but it is normally identified with the start of the large-scale combustion of fossil carbon compounds stored in the earth's crust ("fossil fuels") on the part of the human industrial system. In this case, it could be located at some moment during the eighteenth century CE (Crutzen 2002; Lewis and Maslin 2015). So, we may ask what the evolution of the Anthropocene could be as a function of the decreasing availability of fossil carbon compounds. Will the Anthropocene decline, and will the earth system return to conditions similar to those of the previous geologic subdivision, the Holocene?

The Earth system is a nonequilibrium system whose behavior is determined by the flows of energy it receives. Systems of this kind tend to act as energy transducers and to dissipate the available energy potentials at the fastest possible rate (Sharma and Annila 2007). Nonequilibrium systems tend to attain the property called "homeostasis" if the potentials they dissipate remain approximately constant (Kleidon 2004). In the case of the Earth system, by far the largest flow of energy comes from the sun. It is approximately constant (Iqbal 1983), except on very long timescales, since it gradually increases by about 10% per billion years (Schroeder and Connon Smith 2008). Therefore, the earth's ecosystem would be expected to reach and maintain homeostatic conditions over timescales of the order of hundreds of millions of years. However, this does not happen, because of geological perturbations that generate the punctuated transitions observed in the stratigraphic record.

The transition that generated the Anthropocene is related to a discontinuity in the energy dissipation rate of the ecosystem. This discontinuity appeared when the ecosystem (more exactly, the Homo sapiens species) learned how to dissipate the energy potential of the carbon compounds stored in the earth's crust, mainly in the form of crude oil, natural gas, and coal. These compounds had slowly accumulated as the result of the sedimentation of organic matter, mainly over the Phanerozoic era, over a timescale of the order of hundreds of millions of years (Raupach and Canadell 2010). The rate of energy dissipation of this fossil potential, at present, can be estimated in terms of the "primary energy" use per unit time at the input of the human economic system. In 2013, this amount corresponded to ca. 18 TW (IEA 2015). Of this power, about 86% (or ca. 15 TW) was generated by the combustion of fossil carbon compounds.

The thermal energy directly produced by combustion is just a trigger for other, more important effects that have created the Anthropocene. Among these, we may list the dispersion of large amounts of heavy metals and radioactive isotopes in the ecosphere, the extended paving of large surface areas with inorganic compounds (Schneider et al. 2009), the destruction of a large fraction of the continental shelf surface by the practice known as "bottom trawling" (Zalasiewicz et al. 2011), and more. The most important indirect effect on the ecosystem of the combustion of fossil carbon is the emission of greenhouse gases as combustion products, mainly carbon dioxide, CO2 (Stocker et al. 2013). The thermal forcing generated by CO2 alone can be calculated as approximately 900 TW, or about 1% of the solar radiative effect (Zhang and Caldeira 2015), hence a nonnegligible effect that generates an already detectable greenhouse warming of the atmosphere. This warming, together with other effects such as oceanic acidification, has the potential of deeply changing the ecosystem in the same way as, in ancient times, large igneous provinces (LIPs) generated mass extinctions (Wignall 2005; Bond and Wignall 2014).

Burning fossil fuels generates the exergy needed to create industrial structures which, in turn, are used to extract more fossil fuels and burn them. In this sense, the human industrial system can be seen as a metabolic system akin to biological ones (Malhi 2014). The structures of this nonbiological metabolic system can be examined in light of concepts such as "net energy" (Odum 1973), defined as the exergy generated by the transduction of an energy stock into another form of energy stock. A similar concept is the "energy return for energy invested" (EROI or EROEI), first defined in 1986 (Hall et al. 1986) [see also (Hall et al. 2014)]. EROEI is defined as the ratio of the exergy obtained by means of a certain dissipation structure to the amount of exergy necessary to create and maintain the structure. If the EROEI associated with a dissipation process is larger than one, the excess can be used to replicate the process in new structures. On a large scale, this process can create the complex system that we call "industrial society." The growth of human civilization as we know it today, and the whole Anthropocene, can be seen as the effect of the relatively large EROEI, of the order of 20-30 and perhaps more, associated with the combustion of fossil carbon compounds (Lambert et al. 2014).
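
As a toy illustration of this bookkeeping, using EROEI values quoted in this section (illustrative numbers, not new data):

```python
# EROEI as defined above: exergy returned over exergy invested, with the
# surplus available to replicate the dissipation structure.
def net_energy(energy_out, energy_in):
    return energy_out / energy_in, energy_out - energy_in

cases = {
    "US oil, 1960s peak": (30.0, 1.0),
    "US oil, present": (18.0, 1.0),
    "PV, mid-range estimate": (7.0, 1.0),
}
for label, (out_e, in_e) in cases.items():
    eroei, surplus = net_energy(out_e, in_e)
    print(f"{label}: EROEI = {eroei:.0f}, net surplus = {surplus:.0f} per unit invested")
```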

A peculiarity of the dissipation of the potentials associated with fossil hydrocarbons is that the system cannot attain homeostasis. The progressive depletion of the high-EROEI fossil resources leads to the progressive decline of the EROEI associated with fossil potentials. For instance, Hall et al. (2014) show that the EROEI of oil extraction in the USA peaked at around 30 in the 1960s, declining to values lower than 20 at present. A further factor to be taken into account is "pollution," which accelerates the degradation of the accumulated capital stock and hence reduces the EROEI of the system, as it requires more exergy for maintenance (Meadows et al. 1972).

Only a small fraction of the crustal fossil carbon compounds can provide an EROEI > 1; the consequence is that the active phase of the Anthropocene is destined to last only a relatively short time for a geological time subdivision: a few centuries, no more. Assuming that humans still exist during the post-Anthropocene tail, they would not have access to fossil fuels. As a consequence, their impact on the ecosystem would be mainly related to agricultural activities and, therefore, small in comparison with the present one, although likely not negligible, as it has been in the past (Ruddiman 2013; Mysak 2008).

However, we should also take into account that fossil carbon is not the only energy potential available to the human industrial system. Fissile and fissionable nuclei (such as uranium and thorium) can also generate potentials that can be dissipated. However, this potential is limited in extent and cannot be re-formed by Earth-based processes. Barring radical new developments, the depletion of mineral uranium and thorium is expected to prevent this process from playing an important role in the future (Zittel et al. 2013). Nuclear fusion of light nuclei may also be considered but, so far, there is no evidence that the potential associated with the fusion of deuterium nuclei can generate an EROEI sufficient to maintain an industrial civilization, or even to maintain itself. Other potentials exist at the earth's surface in the form of geothermal energy (Davies and Davies 2010) and tidal energy (Munk and Wunsch 1998); both are, however, limited in extent and unlikely to be able to provide the same flow of exergy generated today by fossil carbon compounds.

There remains the possibility of processing the flow of solar energy at the earth's surface which, as mentioned earlier, is large [89,000 TW (Tsao et al. 2006) or 87,000 TW (Szargut 2003)]. Note also that the atmospheric circulation generated by the sun's irradiation produces some 1,000 TW of kinetic energy (Tsao et al. 2006). These flows are orders of magnitude larger than the flow of primary energy associated with the Anthropocene (ca. 17 TW). Of course, as discussed earlier, the capability of a transduction system to create complex structures depends on the EROEI of the process. This EROEI is difficult to evaluate with certainty because of the continuous evolution of the technologies. We can say that all the recent studies on photovoltaic systems report EROEIs larger than one for the production of electric power by means of photovoltaic devices (Rydh and Sandén 2005; Richards and Watt 2007; Weißbach et al. 2013; Bekkelund 2013; Carbajales-Dale et al. 2015; Bhandari et al. 2015), even though some studies report values smaller than the average reported ones (Prieto and Hall 2011). In most cases, the EROEI of PV systems seems to be smaller than that of fossil-burning systems but, in some cases, it is reported to be larger (Raugei et al. 2012), with even larger values being reported for CSP (Montgomery 2009; Chu 2011). Overall, values of the EROEI of the order of 5-10 for the direct transduction of solar energy can be considered reasonable estimates (Green and Emery 2010). Even larger values of the EROEI are reported for wind energy plants (Kubiszewski et al. 2010). These values may increase as the result of technological developments, but they may also decline because of the progressive occupation of the best sites for the plants and the increasing energy costs related to the depletion of the minerals needed to build the plants.

Current photovoltaic technology may use, but does not necessarily need, rare elements that could face near-term exhaustion problems (García-Olivares et al. 2012). Photovoltaic cells are manufactured mainly from silicon and aluminum, both common elements in the Earth's crust. So there appear to be no fundamental barriers to "closing the cycle": the exergy generated by human-made solar-powered devices (in particular PV systems) could be used to recycle those same systems for a very long time.

Various estimates exist of the ultimate limits of energy generation from photovoltaic systems. The "technical potential" of solar energy production in the USA alone is estimated at more than 150 TW (Lopez et al. 2012). According to the data reported in (Liu et al. 2009), about 1/5 of the area of the Sahara desert (2 million square km) could generate around 50 TW at an overall PV panel area conversion efficiency of 10%. Summing up similar fractions of the areas of the major deserts, PV plants (or CSP ones) could generate around 500–1000 TW, possibly more, without significantly impacting agricultural land. The contribution of wind energy has been estimated at no more than 1 TW (de Castro et al. 2011), under assumptions that have been criticized in (Garcia-Olivares 2016). Other calculations indicate that wind could generate as much as about 80 TW (Jacobson and Archer 2012), or somewhat smaller values (Miller et al. 2011). Overall, these values are much larger than those associated with the combustion of fossil fuels, with the added advantage that renewables such as PV and wind produce higher-quality energy in the form of electric power.
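The desert figure can be checked with back-of-the-envelope arithmetic. In the sketch below, the average insolation value (~250 W/m²) is my assumption for a rough Saharan mean over day, night, and seasons, not a number taken from the cited studies:

```python
# Rough check of the "1/5 of the Sahara -> ~50 TW" estimate.
avg_insolation_w_m2 = 250  # assumed round-the-clock average solar flux in the Sahara
area_km2 = 2.0e6           # ~1/5 of the Sahara, as in the text
efficiency = 0.10          # overall panel-area conversion efficiency, as in the text

power_tw = avg_insolation_w_m2 * (area_km2 * 1e6) * efficiency / 1e12
print(f"{power_tw:.0f} TW")  # -> 50 TW, consistent with the figure quoted above
```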

From these data, we can conclude that the transduction of the solar energy flow by means of inorganic devices could represent a new metabolic "revolution" of the kind described by Szathmáry and Smith (1995) and by Lenton and Watson (2011), one that could bootstrap the ecosphere to a new and higher level of transduction. It is too early to say whether such a transition is possible but, if it were to take place at its maximum potential, its effects could lead to transformations larger than those associated with the Anthropocene as it is currently understood. These effects are hard to predict at present, but they may involve changes in the planetary albedo, in the weather patterns, and in the general management of the land surface. Overall, the effect might be considered a new geological transition.

As these effects would be mainly associated with solid-state devices (PV cells), perhaps we need a different term than “Anthropocene” to describe this new phase of the earth’s history. The term “Stereocene” (the age of solid-state devices) could be suitable to describe a new stage of the earth system in which humans could have access to truly gigantic amounts of useful energy, without necessarily perturbing the ecosystem in the highly destructive ways that have been the consequence of the use of fossil fuels during the past few centuries.

References (see original article)

 

Monday, May 31, 2021

The Long Term Perspectives of Nuclear Energy: Revisiting the Fermi Paradox


This is a revised version of a post that I published in 2011 under the title "The Hubbert Hurdle: Revisiting the Fermi Paradox." Here, I expand the calculations of the previous post and emphasize the relevance of the paradox to the availability of energy for planetary civilizations, and in particular to the possibility of developing controlled nuclear fusion. Of course, we can't prove that nuclear fusion is impossible simply because we have not been invaded by aliens, so far. But these considerations give us a feeling for the orders of magnitude involved in the complex relationship between energy use and civilization. Despite the hype, nuclear energy of any kind may remain forever a marginal source of energy. (Above, an "Orion" spaceship, pushed onward by the detonation of nuclear bombs at its back).
 
 

The discovery of thousands of extrasolar planets is revolutionizing our views of the universe. It seems clear that planets are common around stars and, with about 100 billion stars in our galaxy, organic life cannot be that rare. Of course, "organic life" doesn't mean "intelligent life," and the latter doesn't mean "technologically advanced civilization." But, with so many planets, the galaxy may well be teeming with alien civilizations, some of them technologically as advanced as us, possibly much more.

The next step in this line of reasoning is called the "Fermi Paradox," said to have been proposed for the first time by the physicist Enrico Fermi in the 1950s: "if aliens exist, why aren't they here?" Even at speeds slower than light, nothing physical prevents a spaceship from crossing the galaxy from end to end in a million years or even less. Since our galaxy is more than 10 billion years old, intelligent aliens would have had plenty of time to explore and colonize every star in the galaxy. But we don't see aliens around, and that's the paradox. 
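The timescale is simple arithmetic. A sketch, assuming a galactic disk about 100,000 light-years across and cruise speeds that are modest fractions of the speed of light:

```python
# Time to cross the galactic disk: a distance in light-years
# divided by a speed expressed as a fraction of c gives years.
galaxy_diameter_ly = 1.0e5  # assumed ~100,000 light-years
for fraction_of_c in (0.1, 0.01):
    years = galaxy_diameter_ly / fraction_of_c
    print(f"at {fraction_of_c:.0%} of c: {years:,.0f} years")
# -> 1,000,000 years at 10% of c; even at 1% of c, ten million years
#    is a blink compared with the ~10-billion-year age of the galaxy.
```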

One possible interpretation of the paradox is that we are alone as sentient beings in the galaxy, perhaps in the whole universe. There may be a bottleneck, also known as the "Great Filter," that stops organic life from developing into the kind of civilization that engages in space-faring. 

Paradoxes are often extremely useful scientific tools. They state that two contrasting beliefs cannot both be true, and that's usually powerful evidence that some of our assumptions are not correct. The Fermi paradox is not so much about whether alien civilizations are common, but about the idea that interstellar travel is possible. It may simply be telling us that traveling from one star to another is very difficult, perhaps impossible. It is not enough to say that a future civilization will know things we can't even imagine: any technology must obey the laws of physics, and that puts limits on what it can achieve.

The problem of interstellar travel is not so much how to build an interstellar spaceship. Already in the 1950s, designs had been proposed that could do the job. An "Orion" starship would move by riding nuclear explosions at its back, and it was calculated that it could reach the nearest stars in a century or so. Of course, building one would be a daunting task, but there is no reason to think it would be impossible. More advanced versions might use more exotic energy sources: antimatter, or even black holes.

The real problem is not technology; it is cost. Building a fleet of interstellar spaceships requires a huge expenditure of resources that would have to be maintained for a time long enough to carry out an interstellar exploration program: thousands of years at least. An estimate of the minimum power that a civilization needs to engage in sustained interstellar travel is of the order of 1000 terawatts (TW). It is just a guess, but it has some logic. The power installed today on our planet is approximately 18 TW, and the most we could do with it was to explore the planets of our own system, and even that rather sporadically. Clearly, to explore the stars, we need much more.

Of course, we are not getting close to that, and we may well soon start moving in the opposite direction. John Greer and Tim O'Reilly may have been the first to note that the "great filter" that generates the Fermi paradox could be explained in terms of the limitations of fossil fuels on Earth-like planets. Because of the "bell-shaped" production curve of a limited resource, a civilization flares up and then collapses. I dubbed this phenomenon the "Hubbert Hurdle" in 2011. The hurdle may be especially difficult to overcome if the Seneca effect kicks in, making the decline even faster: a true collapse.

But let's imagine that an alien civilization, or our own in the future, avoids an irreversible collapse and that it moves to nuclear energy. Let's assume it can avoid the risk of nuclear annihilation. Can nuclear energy provide enough energy for interstellar travel? There are many technical problems with nuclear energy, but a fundamental one is the availability of nuclear fuel. Without fuel, not even the most advanced spaceship can go anywhere.
 
Let's start with the technology we know: nuclear fission. Fissile elements (more exactly, "nuclides") are those that can sustain the kind of chain reaction that can be harnessed as an energy source. Only one of these nuclides occurs naturally in substantial amounts in the universe: the 235 isotope of uranium. It is a curious quirk of the laws of physics that this nuclide exists, alone. It is created in the explosions of supernova stars and in the merging of neutron stars. It has accumulated on the Earth's surface in amounts sufficient for humans to build tens of thousands of nuclear warheads and, currently, to produce about 0.3 TW of power. Fission could power a simple version of the Orion spaceship, but could it power a civilization able to explore the galaxy? Probably not.
 
The uranium reserves on Earth are estimated at about 6 million tons. Currently, we burn some 60,000 tons of uranium per year to produce 0.3 TW of power. It means we would need 200 million tons per year (about 550,000 tons per day) to stay at the 1000 TW level estimated as needed for interstellar travel. At this rate, and with the current technology, the reserves would last for about 10 days (!)
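Spelling out the arithmetic, a sketch using the round numbers quoted above:

```python
# How long would uranium reserves last at the power level assumed for interstellar travel?
reserves_tons = 6.0e6         # estimated uranium reserves on Earth
current_use_tons_yr = 60_000  # tons of uranium burned per year today...
current_power_tw = 0.3        # ...to produce about 0.3 TW

target_power_tw = 1000.0      # assumed requirement for sustained interstellar travel
tons_per_year = current_use_tons_yr * target_power_tw / current_power_tw
print(f"{tons_per_year:,.0f} tons/year")                  # -> 200,000,000 tons/year
print(f"{tons_per_year / 365:,.0f} tons/day")             # -> ~548,000 tons/day
print(f"{reserves_tons / tons_per_year * 365:.0f} days")  # -> ~11 days: the "about
                                                          #    10 days" in the text
```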
 
This is no surprise: it was already known in the 1950s that the uranium reserves are not sufficient even to keep our current civilization going on the fission of U(235) nuclides. Imagine engaging in the colonization of the galaxy! But, of course, we are not limited to U(235) for fission energy. There also exist "fissionable" nuclides that cannot sustain a chain reaction but that can be turned ("bred") into fissile nuclides when bombarded with neutrons (usually generated by fissile isotopes). We have never deployed this technology on a large scale, but we know that it works at the level of prototypes. So, in principle, it could be expanded to become the main source of energy for a civilization.
 
The naturally occurring fissionable nuclides are isotopes of uranium and thorium: U(238) and Th(232), both much more abundant than U(235). Let's say that, using these nuclides, the resources available for energy production could be increased by a factor of 100 or 1,000 in comparison to what we can do now. But, even with the most optimistic estimate, at an output of 1000 TW we would simply pass from 10 days of supply to a few decades. No way!
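Even granting the most generous multiplier, the same arithmetic yields only decades (a sketch continuing the calculation above):

```python
# Extending the fuel base by breeding U(238) and Th(232) into fissile nuclides.
days_of_supply_u235 = 11        # from the calculation above, at 1000 TW
for multiplier in (100, 1000):  # the optimistic expansion factors from the text
    years = days_of_supply_u235 * multiplier / 365
    print(f"x{multiplier}: about {years:.0f} years of supply at 1000 TW")
# -> about 3 years and about 30 years: "a few decades" at best.
```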

We can think of ways to find more uranium and thorium, but it is hard to see how other bodies in the solar system could be a source. You need active plate tectonics for geological forces to concentrate ores and, on bodies such as the Moon and the asteroids, there are no uranium ores, only extremely tiny concentrations, of the order of parts per billion. That makes extraction an impossible task. We also know that there are some 4 billion tons of uranium dissolved in seawater, an amount that would change the game, at least in principle. But the hurdles are enormous: uranium is so diluted that you would have to filter quintillions (10^18) of tons of water to get at those dispersed amounts. Would a planetary civilization destroy its oceans in order to build interstellar spaceships?
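A rough sense of the scale, assuming a total ocean mass of about 1.4 × 10^18 tons and a uranium concentration of about 3.3 parts per billion (standard round figures that I am assuming here, not numbers from the text):

```python
# How much seawater would have to be processed to feed 1000 TW of fission power?
ocean_mass_tons = 1.4e18  # assumed total mass of the oceans
uranium_ppb = 3.3         # assumed uranium concentration, parts per billion by mass

uranium_total_tons = ocean_mass_tons * uranium_ppb * 1e-9
print(f"{uranium_total_tons:.1e} tons of uranium in the oceans")  # -> ~4.6e9, ~4 billion

tons_uranium_per_year = 2.0e8  # 200 million tons/year at 1000 TW, from the earlier sketch
water_per_year = tons_uranium_per_year / (uranium_ppb * 1e-9)  # assumes perfect extraction
print(f"{water_per_year:.1e} tons of seawater per year")       # -> ~6e16 tons/year
# At that rate, the entire ocean (~1.4e18 tons) would be processed in about 20 years.
```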

Maybe we can stretch things in more optimistic ways but, within reasonable hypotheses, we remain at least a couple of orders of magnitude short of what is needed. Fission is not something that can sustain an interstellar civilization. At most, it can sustain a few interstellar probes, just as fossil fuels have been able to create a limited number of interplanetary probes. (BTW, the 'Oumuamua object might be one such probe, sent by an alien civilization.) But, sorry, no fission-based galactic empire.

There is one more possibility: nuclear fusion, the poster child of the Atomic Age. The idea, common in the 1950s, was that nuclear fusion was the obvious next step after fission. We would have had energy "too cheap to meter." And not only that: fusion can use hydrogen isotopes, and hydrogen is the most abundant element in the universe. A hydrogen-powered starship could refuel almost anywhere in the galaxy. Hopping from one star to another, a fusion-based galactic empire would be perfectly possible.
 
But controlled nuclear fusion turned out to be much more difficult than expected. In more than half a century of attempts, we have never been able to get more energy out of a fusion process than we pumped into it. And, as time goes by, the task looks steeper and steeper.
 
Maybe there is some trick that we can't see to get nuclear fusion working; maybe we are just dumber than the average galactic civilization. But we may have arrived at a fundamental point: the Fermi paradox may be telling us that controlled nuclear fusion is NOT possible.

All this is very speculative, but we have arrived at a concept entirely different from the one at the basis of the Fermi paradox: the idea, typical of the 1950s, that a civilization keeps expanding forever and rapidly comes to master energy flows several orders of magnitude larger than what we can manage now (sometimes called the "Kardashev scale").

Maybe we'll learn to exploit solar energy so well that we'll be able to use it to build interstellar spaceships, but we are talking about a future so remote that we can't say much about it. For the time being, we don't have to think that the Fermi Paradox is telling us that we are alone in the universe. It just tells us that we shouldn't expect miracles from nuclear technology.